The pros and cons of using display capture technology for data collection with young children

Authored by: Garry Falloon

The Routledge International Handbook of Learning with Technology in Early Childhood

Print publication date:  February  2019
Online publication date:  February  2019

Print ISBN: 9781138308169
eBook ISBN: 9781315143040

DOI: 10.4324/9781315143040-3

Abstract

One of the challenges of researching mobile devices in educational settings is gathering authentic data that provide a holistic and accurate picture of how children interact with apps, and each other, as they complete learning tasks. While device portability can be a significant advantage for children wishing to work in different spaces or with others, for researchers this mobility poses problems, especially when it comes to gathering data from children using multiple devices in whole-class or large group activities. This chapter discusses the use of an innovative iPad display capture tool in a range of studies involving young children in a New Zealand school. It details the tool’s functionality and operation, and how the data it captured were analysed and interpreted. While the tool provided unique insights into the children’s device-related activities away from the eyes of the teachers, it also raised a number of ethical challenges and dilemmas arising from its invisibility and ‘surveilling’ nature, and the potential for it to capture ‘grey’ data that may be of a personal or confidential nature. The chapter discusses these issues, and makes recommendations for researchers considering using data capture systems of this nature.


Introduction

For some years screen capture and display recording software have been used in education, generally for tasks such as creating teaching resources or tutorials (Séror, 2012), recording information for students to access for revision purposes or if they are unable to attend classes (Drumheller & Lawler, 2011; Silva, 2012), monitoring user interaction with websites or other applications (Beach & Willows, 2014; Chaney et al., 2013; Zhang, 2013) and generating understanding about students’ learning processes in areas such as second language learning and mathematics (Barmby et al., 2009; Xu & Ding, 2014). All of these studies were conducted on desktop computers, using commercial screen recording applications such as Camtasia (Techsmith, 2017a), SnagIt (Techsmith, 2017b) or Captivate (Adobe, 2017). Cited benefits of using screen capture for these purposes include its ability to ‘concurrently collect both screen movement and audio, (enabling) assessment of an individual’s cognitive state that is more accurate than retrospective reflections’ (Chaney et al., 2013, p. 2533), to support the collection of data that reveal problems users experience when using applications (Zhang, 2013) and to allow deeper insights into how applications assist (or not) conceptual learning (Barmby et al., 2009). Yet although screen recording is not a new technology, very few studies appear to have used it as a research tool to gather empirical data about how young children interact with applications and devices – and each other – as they engage with them individually or in groups in formal and informal learning settings.

This chapter examines the use of display recording technology to capture data in a range of studies involving 5-year-old children using apps on iPads to help develop literacy, numeracy, science and computational thinking capabilities. The studies were completed between 2011 and 2017, commencing soon after the original iPad was launched in 2010 amidst much hype and fanfare as an educational game changer (Coghlan, 2011). The chapter discusses the benefits of using display capture technology in these studies in terms of data quality and authenticity, but also highlights the significant challenges faced in doing so. These relate to technical, learning environment and logistical difficulties, but perhaps more importantly, to ethical dilemmas when using this data method with very young children.

Changes to technology and learning environments

In recent years many governments, through their education ministries, have moved to renovate traditional classrooms or build new ones as flexible, innovative or modern learning environments. This broadly follows OECD (Organisation for Economic Co-operation and Development) commentary that promotes future learning spaces as technology-supported, learner-centred and competency-focused, flexible and collaborative in nature (Kuuskorpi & Cabellos González, 2011; OECD, 2017). These changes align with the perceived desirability of learning spaces that support so-called ‘21st Century’ or ‘future-focused’ pedagogy, able to foster the development of valued competencies such as teamwork, problem solving, learning independence, self-determination and a positive disposition towards learning as a life-long activity. In New Zealand, where these studies were undertaken, redesigns have resulted in large, open learning spaces where two and sometimes three teachers collaboratively plan for and teach between 40 and 70 students (depending on age level), frequently with the assistance of a range of technology systems and devices (Figure 3.1).

Aligned with these physical learning environment changes and technology improvements has been a parallel move from fixed desktop computers to mobile devices such as laptops, Chromebooks and tablets. The improved affordability of these devices, combined with programs such as ‘Bring Your Own Device’ (BYOD), has helped overcome historical technology-access issues in many schools that until recently had militated against widespread and meaningful integration. The design and portability of new devices and well-developed technical infrastructure (e.g., fibre broadband, long-range Wi-Fi, cloud-based services) have also supported the type of curriculum and pedagogy being developed in these environments, allowing children greater flexibility to work in different spaces, including at home and in other out-of-school environments (Figure 3.2).

Figure 3.1   An example of an Innovative Learning Environment (Flexible Learning Space) from this study

However, the move to flexible learning spaces and mobile devices has proved challenging for educational researchers keen to learn how young children use these devices and interact with their apps in naturalistic classroom settings. While conventional methods such as video and audio recording may yield satisfactory interactional data from one or possibly two students at a time – or provide a general overview of activities in a classroom from one or two points of capture – the reality is that these methods are generally inadequate for revealing fine-grained information about how highly mobile classes of children use devices and apps, if, what and how they are learning while doing so, and for capturing the subtleties of their interpersonal exchanges. Conventional video recording methods trialled in the early studies in this series also revealed a strong observer or Hawthorne effect that compromised the authenticity of data: the data suggested the young children modified their behaviours or ceased their interactions due to the presence of a camera. These challenges required the use of a different data method – one that could be used with classes of highly mobile students and, at the same time, captured detailed data in an unobtrusive manner.

Developing a device-embedded recorder

Apple’s iPad presents educators with a robust, reliable and quality-controlled (app-wise) platform to use with children, but confinement to operating within Apple’s ‘Walled Garden’ (Melhuish & Falloon, 2010) means that only App Store-approved apps can be used. Unfortunately, until very recently, this did not extend to an app that supported recording of the device’s display. Despite the inclusion of display capture functionality in more recent releases of Apple’s iOS, operating constraints on the native system, including the recorder shutting down if any protective cover is closed or the device ‘goes to sleep’ (common scenarios with young children), compromise its usefulness for classroom data collection. Therefore, recording the display of the iPads used in these studies required the development of a bespoke recording system, and the unlocking of devices to allow installation of a recording app. Over the course of this research, the bespoke display capture app has evolved from simply recording the device’s display, microphone audio and finger placement (touches), to also optionally allowing for Facecam activation, enabling the children’s facial expressions to be recorded (Figure 3.3).

Figure 3.2   Flexible learning spaces provide children with different workspace options

The latest version of the recorder can be activated and deactivated in a number of ways, including a pre-set sequence of finger taps on the display, using a ‘record’ button in the app itself, or via Wi-Fi from a mobile phone. Apart from a thumbnail image if the Facecam function is activated (see Figure 3.3), no visible indication that the device is recording is available to the children. Using the recorder over the past six years has enabled rich and highly authentic data to be gathered that accurately reflect the reality of children’s mobile device use in busy classrooms, in a range of different learning contexts and scenarios.
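
For readers interested in how a hidden tap-sequence activation of this kind might be implemented, the sketch below shows one possible approach in Swift using a standard UIKit tap gesture recogniser. It is a hypothetical reconstruction, not the actual recorder’s code: the class and method names are illustrative, and the capture pipeline itself is elided.

```swift
import UIKit

// Hypothetical sketch of a hidden triple-tap recorder toggle.
// All names are illustrative; the real capture pipeline is elided.
final class RecorderToggleViewController: UIViewController {

    private var isRecording = false

    override func viewDidLoad() {
        super.viewDidLoad()
        // Three taps in quick succession toggle recording;
        // UIKit handles the tap-count timing window internally.
        let tripleTap = UITapGestureRecognizer(target: self,
                                               action: #selector(toggleRecorder))
        tripleTap.numberOfTapsRequired = 3
        view.addGestureRecognizer(tripleTap)
    }

    @objc private func toggleRecorder() {
        isRecording.toggle()
        // Start or stop the capture pipeline here. Note there is no visible
        // feedback – the property that made the recorder unobtrusive, and the
        // source of the ethical concerns discussed later in the chapter.
        print(isRecording ? "recording started" : "recording stopped")
    }
}
```

Restricting activation to a pre-specified area of the display, as the early recorder did, would simply mean attaching the recogniser to an invisible subview covering that area rather than to the whole view.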

The research context and studies

The recorder was used to collect data from 2011 to 2017, in a range of studies involving children aged from 5 to 11 years. All studies were completed in the same school, but involved different children and their teachers. The school was a middle-decile¹ contributing primary school (years 1–6) comprising 520 students and approximately 30 teaching staff, including the principal, one non-teaching deputy principal and a part-teaching assistant principal. The school had developed a reputation in the community as a caring, forward-thinking and progressive institution, and had quickly become a ‘school of choice’, attracting students from a wide catchment area. While several studies were completed with older children (10 and 11 year olds), for the purposes of this chapter I concentrate specifically on studies involving young children (5-year-olds who had been at school for less than 6 months), as this group presented particular challenges when using the display recording system.

The studies

The first study (2011–12) investigated the extent to which a range of teacher-selected apps could support the children’s phonics development. The apps ranged from simple ‘drag and drop’ designs where the children manipulated phonemes to build words, through to the ‘Mr. Phonics’ series (Christopher Thorne Productions, 2012), where syllable-based word building activities were introduced by a recorded video involving a comical teacher. The study specifically focused on how the young children navigated the apps, and the strategies they applied to solve the learning problems embedded in them (Falloon, 2013).

Study two (2013–14) researched the nature of children’s interaction and talk while they worked collaboratively in pairs to develop literacy-based digital artefacts. It applied Mercer’s (1994) Talking Types framework to analyse the nature of the children’s verbal interactions while they used three different apps, PuppetPals (Polished Play LLC, 2017), Pic Collage (Cardinal Blue Developer, 2018) and Popplet (Notion Developer, 2017), to author content from their work in different learning topics. The study explored how the children negotiated decisions about the content and design of their artefacts, and the extent to which the tools and features of the apps and devices assisted in this process (Falloon & Khoo, 2014).

Figures 3.3a, b   Top, the early display recorder showing finger placement (white spots) and the latest version (bottom) with Facecam recording

Study three (2015–16) examined the development of the children’s general thinking skills while they were engaged in basic code-building using the app Scratch Jnr. Coding tasks were integrated with the geometry component of the children’s mathematics program, where challenges were set to build and run code sequences to draw common shapes of different dimensions (squares, rectangles etc.). Recordings were made as the pairs built and ran their procedures, which were then analysed using a framework developed from Krathwohl’s (2002) revision of Bloom’s Taxonomy and Brennan and Resnick’s (2012) computational thinking evaluation framework. Data were evaluated to determine how the computational tasks, and the strategies children employed to solve them, facilitated the exercise of different types and levels of thinking, and what contribution each of these made to solving the learning problems (Falloon, 2016).

The final study was undertaken in late 2016 to early 2017, and shifted the focus somewhat to exploring low-fidelity virtual simulations, and whether they could be of value in helping young children learn basic circuit-building procedures and electricity concepts. Using a minimally scaffolded, guided discovery pedagogy, four apps were selected that progressively stepped the children from building simple series and parallel circuits using ‘drag and drop’ templates, to designing and building their own circuits on blank, breadboard-like testing platforms. The study specifically investigated whether the children could transfer conceptual and procedural knowledge between apps, and later, whether or not they could transfer this to building the same circuits using real equipment.

The display recorder was installed on 20 iPad Air and Air 2 devices supplied by my university for these studies. Although the school operated a BYOD programme, this did not extend to the junior classrooms, where children had restricted access to a shared set of 10 devices. These were generally used to supplement reading and maths group activities, or for topic research or content development. The school-owned devices were managed by a centralised system, thereby preventing unlocking and installation of the display recorder app. The research devices were left in the classrooms for the teachers to use outside of data collection times, allowing the children to become familiar with them and improving the device-to-student ratio for general curriculum work.

Analysing the display capture data

Screen capture data from all studies were analysed using Studiocode, a video analysis application originally developed for coding sporting, marketing, sales and audience presentation performances. Studiocode integrates the screen-captured video and audio with an interactive timeline that allows coders to log significant events in data against existing code frameworks, or ones generated from a grounded analysis of a data sample (Figure 3.4). These events are time-stamped on individual timelines, and quantitative data including event counts, average time per event, total time per event and percentage of total runtime per event can be exported to other applications such as SPSS or Excel for statistical analysis. Additionally, samples of event-coded data can be saved as separate files and shared for rater-agreement purposes.

The sample framework in Figure 3.4 was generated from data in the most recent study investigating circuit building and electricity concept learning transfer between apps. As no existing frameworks were available, its design referenced early studies where young children used physical equipment for similar tasks (Osborne, 1983; Shipstone, 1984) to identify general levels of capability, comparing this with evidence of similar capabilities present in the display recorder data. From this comparison, four general science understandings relating to circuits were identified (left button column: e.g., resistance, control, uninterrupted current etc.). These were further associated with circuit types (parallel, series), concepts (e.g., operating circuits must be continuous, voltage ‘shared’ in series circuits etc.) and evidence from the display data (right button column: e.g., variability in bulb brightness, ordering of appliances, removal of bulb and effect on circuit etc.). Toggling active buttons (right column) logged events aligned with the code and/or associated code on the timeline (see arrow links), registering these as appropriately coloured ‘blocks’. Data associated with each of these blocks in the individual timeline rows can then be combined and exported as standalone files for presentation or coder-agreement purposes. However, it was necessary to review each sample up to four times (depending on the number of codes), as it was impossible to complete accurate analysis across multiple codes at the same time.
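
To make the exported quantities concrete, the following is a minimal sketch (in Swift, with invented sample data) of the per-code summary statistics described above: event count, average time per event, total time per event and percentage of total runtime. Studiocode computes these internally; this simply illustrates the arithmetic applied to time-stamped coded events.

```swift
import Foundation

// A coded event logged on a timeline: a code label plus start/end times (seconds).
struct CodedEvent {
    let code: String
    let start: TimeInterval
    let end: TimeInterval
    var duration: TimeInterval { end - start }
}

// Per-code summary statistics of the kind exported to Excel or SPSS.
func summarise(_ events: [CodedEvent], runtime: TimeInterval) {
    let grouped = Dictionary(grouping: events, by: { $0.code })
    for (code, group) in grouped.sorted(by: { $0.key < $1.key }) {
        let total = group.reduce(0.0) { $0 + $1.duration }
        let mean = total / Double(group.count)
        let percent = 100.0 * total / runtime
        print("\(code): count=\(group.count), mean=\(mean)s, total=\(total)s, "
              + String(format: "%.1f", percent) + "% of runtime")
    }
}

// Invented example: two codes logged against a 40-minute (2,400 s) recording.
let events = [
    CodedEvent(code: "removal of bulb", start: 120, end: 150),
    CodedEvent(code: "removal of bulb", start: 900, end: 960),
    CodedEvent(code: "ordering of appliances", start: 300, end: 420),
]
summarise(events, runtime: 2400)
```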

Figure 3.4   A typical Studiocode window showing timeline (bottom), coding framework (right) and sample video clip

While exceedingly time-consuming, using Studiocode enabled deep interrogation of the display data that revealed unique insights into how the young children navigated and used the various app functions and content, and worked collaboratively (or not) to build their learning responses and artefacts. It also supported analysis of the rich visual data captured using the device’s Facecam. These data were particularly interesting and valuable, as they frequently provided information about what was happening in the broader environment during periods that otherwise may have been interpreted as the children being off-task or inactive.

Planning for and introducing the display recorder system

One of the main purposes of using the display recorder was to gather data that accurately reflected the children’s use of the devices in natural classroom settings. While previous studies using mainly desktop technologies have been undertaken in classrooms, virtually all of these were designed to evaluate specific technology-supported interventions (e.g. Barns et al., 2005; Tang et al., 2006), or to explore the efficacy of new applications in particular curriculum disciplines or activities (e.g. Imler & Eichelberger, 2011; Raído, 2013). It was therefore essential that the recorder was used to gather data during learning tasks that were part of each class’s normal curriculum, rather than ‘staged’ events set up especially for data gathering purposes. Prior to each study, meetings were held with the teachers to identify suitable curriculum topics and negotiate research focuses. In all cases, focuses were chosen that helped the teachers with a particular learning inquiry or question they had, with the expectation that results would be shared with other staff to help improve technology use, learning design and pedagogy across the school.

The teachers were also keen to investigate how display data might be used for assessment or reporting purposes – that is, to provide visual evidence of the children’s thinking and learning processes while they were working on the devices. They identified a particular issue with using the iPads in that, apart from the ‘end product’, they had very limited data upon which to evaluate the children’s learning processes, or gain information about any learning difficulties they may be having. They saw the display recorder as a means of gathering useful formative and summative data for improving their practice, and for sharing with parents and other evaluation agencies.

Before each study, in addition to normal informed consent procedures, it was a requirement of ethical clearance that parents were given the opportunity to discuss the research, see the display recorder in action and have any questions they had relating to any aspect of the work or its procedures adequately answered. To facilitate this, several information sessions were organised at different times to enable as many interested parents as possible to attend. At these sessions the research goals, how data were to be gathered and analysed (including a demonstration of the recorder and some data gathered previously) and how results would feed back into school programmes were introduced and explained. An evening seminar was also held in 2014. This two-hour meeting, during which research outcomes to date were shared and discussed, was attended by school staff and over 300 parents, grandparents and teachers from neighbouring schools.

The display recorder in the classroom

Managing the recording system with the children evolved considerably from the early to the most recent studies. Technology improvements led to more options being available to activate and deactivate the system, and simplification of this process meant most children could do this themselves, with a little initial guidance. While the latest iteration of the recorder includes the ability to activate and deactivate it from a smartphone, this function has never been used, for reasons that will be discussed later. Typically, at the beginning of each session pairs or small groups of children would be allocated a numbered device, and their names registered on a log sheet. They would use the same iPad for the duration of each unit, as their work was saved to the device rather than in the cloud. In the earlier studies where the system was started by a tap sequence, I activated and deactivated the recorder at the beginning and end of each session. However, in later work the children did this for themselves, using the start/stop button (Figure 3.5). Where practical, data were gathered during all teaching sessions from as many pairs or groups as possible. While this resulted in a significant volume of data, doing so allowed for the inevitable technical or logistical problems that arise when using technology with very young children. At the close of each session, data were exported from individual devices to my laptop for later analysis, using a transfer app called iExplorer (Macroplant, 2016). Not all data were analysed; samples were selected using specific criteria aligned with each study’s focus, taking into account any profile considerations relating to the target group being researched.

Ethical considerations

Like any study involving young children, this research required adherence to strict ethical standards, and for its methods, processes and means of data use and dissemination to be scrutinised and approved by my university’s Human Research Ethics Committee. In evaluating the application, the committee raised a number of concerns regarding the recording system, particularly emphasising its potential to record information of a personal nature unrelated to the research, the invisibility to the children of any signs of its operation, and the fine line its use draws between ‘covert surveillance’ and data gathering. Specifically, concerns were expressed about the method being deceptive, and the possibility that, despite parental informed consent and ongoing child assent measures, conceptually very young children may have difficulty understanding when they were being recorded, what was being recorded, and how or why. The committee pointed out that while screen recording was not a new technology, its use for research data gathering was not commonplace, and that the few studies that had employed it had generally done so with older participants in managed environments, using desktop computers. They also noted that the commercially available recorders used in previous studies had a visible indicator, such as a pulsing coloured bar across the top of the screen or a flashing recording indicator in the task bar. A visible recording indicator was not included in any version of the iPad recorder used in these studies.

Figure 3.5   Later versions of the screen recorder included a visible start/stop button

Two questions arose from the committee’s review that needed consideration. First, could the recorder be viewed as deceptive, and if so, was its use justifiable in terms of participant risk vs research benefit? Second, if it was not considered deceptive, what measures would be taken to ensure it was not being used deceptively, and that any data of a personal or confidential nature collected through its use were appropriately handled? Justifiable sensitivity has existed around using deception in social and scientific research since the early 1960s, when Stanley Milgram’s infamous ‘shock machine’ experiments allegedly caused significant trauma to unwitting participants.² While not of the same nature or significance as Milgram’s work, the answer to the question of whether the recorder was deceptive was found not in the technology itself, but in how it was to be used. The committee accepted that the practice of screen recording was widespread amongst gamers for strategy-sharing and businesses for training purposes, and that some studies were emerging of its use for educational research (e.g. Zhang & Quintana, 2012). Their concerns were of a more ethically fundamental nature – specifically, whether or not the young children were aware of its use and what it recorded, and if so, whether they held sufficient agency to ask for the recorder not to be activated, to have it deactivated if it was active, or to withdraw their data after it had been recorded. Consideration also needed to be given to how any recorded data deemed to be of a private, confidential or concerning nature would be handled. This was particularly important given young children’s tendency to discuss with their friends, in private, issues they may not feel confident discussing with adults.

Addressing these concerns and implementing them in practice is complex, requiring careful attention to procedures and the adoption of a highly reflexive researcher stance (Flewitt, 2005). While procedures related to the handling of what might be interpreted as confidential, private or concerning information can be easily drafted for the purposes of ethical approval, implementing them is not so straightforward, relying on the judgement and integrity of the researcher to decide what to do with data of that nature. Procedurally, in these studies this issue was mitigated by a decision to refer to the class teacher any recorded information that suggested a child was being subjected to emotional, psychological or physical harm, to the extent that it was, or potentially could be, negatively affecting their safety or wellbeing. From there it was to be the decision of the teacher, in conjunction with the principal and, if needed, other relevant authorities, whether or not to act upon the information. Reaching this procedural decision was a difficult process, as, if enacted, the primary ethical tenet of participant confidentiality could potentially be compromised. However, when the age of the children and the ethical and moral obligation of the researcher to ensure no harm comes to participants were taken into account, the referral pathway was deemed to be an essential measure. Interestingly, over the six years these studies were completed, while many amusing interactions were captured, nothing of a concerning nature that required referral was recorded.

The question of agency when researching with young children is a challenging one, given the obvious perceptions of authority and power in the researcher-child relationship (Einarsdóttir, 2007). Fundamentally, the ethical conduct and procedures of any research should allow participants to make informed decisions regarding participation and use of any data they contribute, including the capacity to amend or withdraw their contribution, unless this right is waived in the consent process (Burns, 2000). Exercising this with young children can be problematic, and as such, most often consent is granted via parent proxy. However, some interesting findings from these studies point to possible issues for some children when using display recording that researchers using this method should be aware of. While it may be possible for young children to amend or withdraw data provided during interviews – by, for example, indicating desired changes while their transcript is being read back to them – or to decline permission to use their work as data altogether, the scenario that evolved with display recording during these studies suggests these data are somewhat different. In the early trials (2012–14, pre-Facecam functionality), recorder data that were going to be used in publications and/or as ‘live’ clips at conferences and presentations were shared with the children involved in their generation, with the expectation that they would approve or decline their use. Interestingly, when their clips were played back to them, it was apparent some children did not understand, and were surprised to learn, that the voices they heard and the actions they were viewing were actually their own. Acknowledging that some time had elapsed between when data were recorded and when samples were shared with the children, it did raise the question as to whether or not these young children understood what the display recorder did, and the sort of information it captured about them and their activities.

Unlike interviews that are recorded by a generally known physical device (a tape or audio recorder), or audio recording apps that must be opened and remain so while in use (usually with a visible indicator such as a monitor or flashing bar or light), the early display recorder did not require ‘opening’ – simply tapping three times in a pre-specified area of the display activated and deactivated it. It may have been that the absence of a physical or known recording device, or virtual representation of a device, made it difficult for some children to understand that their comments and actions were being recorded. While parental consent had been granted for all data used in publications and presentations, and assent was gained from the children prior to each recording session, it was somewhat disturbing to learn that a significant number of children did not appear to understand what was going on.

Notwithstanding other issues such as perceptions of authority, these conceptual deficits made it extremely difficult for the children to make any informed decisions relating to use of their data. While later versions of the recorder added an interface with start and stop buttons and optional Facecam recording – both of which should have provided the children with cues that the recorder was active – it is unlikely they would have helped address these issues. This conclusion is supported by the nature of some of the recorded audio, which suggested many children either did not know or quickly forgot that they were being recorded, were unaware of what was being recorded, or simply didn’t care! While nothing of a referable nature was recorded, many clips captured personal squabbles, references to relationship issues with other children, siblings and family members, after-school parent and family happenings and events, and other information of a personal and at times private nature. While not supported by any empirical data, it may well be that the absence of a known recording device, combined with the children’s location in different, seemingly private work spaces around the large classroom, contributed to a mistaken sense of privacy and anonymity.

The most recent version of the recorder contains two features that have been used sparingly, or not at all. The Facecam recorder was added in late 2015, and while its use was approved by my university through an ethics amendment, it met with significant resistance from parents. Interestingly, over the four-and-a-half years I used the basic recorder (no Facecam), only 17 parents declined their child’s participation in the studies. Given the 90%-plus consent form return rate and the total number of children involved (in excess of 300), this number is very small. However, in 2016 and 2017, when an additional checkbox option was added to the consent form asking for permission to record using the Facecam, fewer than 40% of parents gave permission for this. While parents remained happy to allow audio and activity data to be recorded, they appeared less so with data that contained visually identifying information. The second added option was the capacity to activate and deactivate individual recorders via Wi-Fi using a smartphone app, potentially without the children knowing. While technically operational, this feature was never used in these studies nor evaluated by the ethics committee, as it was considered susceptible to misuse and arguably ‘crossed the line’ between ethical data collection and deception.

The pros and cons of using the recorder

The pros: data quality and authenticity

As stated earlier, the principal reason for using the display recorder was to enhance data authenticity. Referencing computer science research, Lynch frames data authenticity as ‘verifying claims that are associated with an object – in effect, verifying that an object is indeed what it claims to be, or what it is claimed to be’ (2000, p. 37). Lynch’s definition highlights the importance of trustworthiness – that is, that data represent, as accurately as possible, events in any given research scenario. Within the context of these studies, such an appraisal must consider the age of the children, their dynamic and mobile learning environment, the collaborative, learner-focused curriculum design, and the teachers’ goals and purposes for technology use. Collecting data in large, mobile device-supported flexible learning spaces is fundamentally different to doing so in conventional classrooms, where physical objects such as fixed walls, desks, shelves, whiteboards and so on more tightly define the research location. While the design of flexible learning spaces and tablet technology supports greater choice for children to work where they like, it presents major challenges for researchers wishing to capture comprehensive and authentic data reflective of activities across an entire space. Given these constraints, the display recorder was the only viable option available. However, appraising data authenticity from any method requires more than simply determining it was the only workable option. While the display recorder suited this environment and research purpose, and yielded intimate and detailed information about the children’s use of the apps and their interactions, problem solving and decision-making strategies, it could be argued that it does not capture learning influences from the wider environment. Although this may be true in terms of recording visual data about the macro-environment, the portability of the system does capture children’s verbal interactions with others as they move around the space, providing interesting insights into behaviours such as peer tutoring, solution sharing and generating shared understanding of learning tasks.

The combination of recorded finger placement and audio can also provide valuable insights into how children share access to the device’s interface while collaborating to solve a problem. These are very useful data that contribute significantly to understanding how these devices can support children’s intra- and inter-group collaborative practices. Additionally, the finger placement indication provides detailed information about the children’s use of, and choices about, different scaffolds and cognitive tools embedded in the apps. It also provides interesting insights into which menus and options the children accessed and how often, and their ‘app smashing’ behaviours, and adds to understanding of how they translated verbally stated intentions into physical actions.

While Facecam recording was possible, it was seldom used. Although parents’ decisions relating to this were respected, its restricted use at times caused frustration. In some data where it was activated, additional, highly valuable information relating to the children’s efforts and activities was revealed. For example, several recordings contained prolonged silences that, relying on audio and screen activity alone, could have been interpreted as the children being inactive or off-task. However, Facecam data indicated this wasn’t necessarily the case, as often children were recorded gazing intently and for lengthy periods at the display, showing obvious signs of cognitive load as they tried to work out solutions (e.g. Figure 3.6).

The Facecam recorder also captured clusters of children working collaboratively together, augmenting finger placement, display and audio data to provide a more complete picture of intra- and inter-group and app/device interaction. From a researcher’s perspective, data of this detail and quality are critical to ensuring research validity, and robustness in the presentation and communication of data and conclusions. However, from a parent’s perspective, such considerations can be outweighed by a perception – rightly or wrongly – that by allowing their child’s face to be displayed in academic contexts such as journals or at conferences, they are potentially endangering their wellbeing or safety. There is little doubt that sensitivity to this has been whipped up in recent years by sensationalised media reporting, but to my knowledge no alleged adverse events have been traced back to the use of children’s images as data in academic publications or at conferences. While, of course, every effort should be taken to ensure the anonymity of children in any research, such as not including names or other specific identifying information, it does seem incongruous that in an era where parents seem happy to display their children publicly on social media, there is continued resistance to the use of such valuable visual data for academic research purposes.

Figure 3.6   The Facecam captured children’s lengthy (and often silent) deliberations as they tried to solve problems

In summary, the invisibility of the recorder to the children undoubtedly supported the gathering of data that represented a ‘warts and all’ account of happenings, and yielded highly authentic and at times unique data that, given the research context, arguably could not have been collected in any other way. However, there were also challenges and issues associated with its use that, at worst, offset the benefit of having access to data of this nature.

The cons: challenges and issues

Apart from the technical design and installation difficulties described earlier, a number of challenges and issues arose during the studies relating to the recorder’s use in the classroom. First, on-device display recordings consume a lot of storage capacity. The early trials of this recorder used 16GB iPads which, once the learning apps were installed, left very limited space for storage-hungry recordings. In practice, this limited most data sessions to single events of around 30–40 minutes, after which the recorded files needed to be downloaded to a laptop before being erased from the device. Improvements to compression software and larger capacity iPads in more recent years have gone a long way towards mitigating this issue, although video resolution and quality are often compromised in the quest for smaller file sizes. Regardless, care still needs to be taken to ensure sufficient storage space exists, especially if using smaller capacity devices, as all data are erased once capacity is reached.
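
A rough back-of-envelope calculation illustrates why storage filled so quickly. The bitrate used below is an assumed figure for illustration only; actual recording sizes depend on resolution, frame rate and the compression codec used.

```swift
import Foundation

// Assumed average bitrate for an iPad display recording – illustrative only.
let bitrateMbps = 5.0
let sessionMinutes = 40.0
let freeGB = 4.0  // e.g. space left on a 16GB iPad once learning apps are installed

// Megabits -> gigabytes: minutes * 60 s * Mbit/s / 8 (bits per byte) / 1000 (MB per GB)
let sessionGB = sessionMinutes * 60 * bitrateMbps / 8 / 1000
print("One \(Int(sessionMinutes))-minute session ≈ \(sessionGB) GB")      // ≈ 1.5 GB
print("Sessions before the device fills ≈ \(Int(freeGB / sessionGB))")   // ≈ 2
```

Under these assumptions only two or three sessions fit before files must be downloaded and erased, which is consistent with the single 30–40 minute sessions described above.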

Second, Apple’s continual updating of its iOS meant the recorder, and the background iOS adjustments needed to run it, required ongoing modification, to the extent that it was no longer viable to continue development post iOS 8, given the imminent arrival of a native recorder in iOS 11. However, while a native recorder is now available, early trials suggest its usefulness for classroom research is limited due to its exceptionally heavy demands on hardware and storage, its inability to operate on older devices, and operational characteristics including shutting down when any protective cover is closed or when the device goes into ‘sleep’ mode. The latter two scenarios are very common with young children, who regularly close their device’s cover as they move around the classroom. While it is possible to prevent the device from entering sleep mode by adjusting the energy settings, this places considerable drain on the battery.
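
For comparison, the snippet below sketches what in-app capture looks like with Apple’s ReplayKit framework, including disabling the idle timer so the device cannot sleep mid-session. It is a minimal sketch, not the bespoke recorder described in this chapter; error handling and file export are elided.

```swift
import ReplayKit
import UIKit

// Minimal sketch of in-app display capture using Apple's ReplayKit framework.
// Shown for comparison only; this is not the chapter's bespoke recorder.
final class NativeRecorder {
    private let recorder = RPScreenRecorder.shared()

    func start() {
        // Keep the device awake during a session – avoids the 'sleep'
        // shutdown described above, at the cost of the battery drain mentioned.
        UIApplication.shared.isIdleTimerDisabled = true
        recorder.isMicrophoneEnabled = true   // capture children's talk as well
        recorder.startRecording { error in
            if let error = error { print("Could not start recording: \(error)") }
        }
    }

    func stop() {
        UIApplication.shared.isIdleTimerDisabled = false
        recorder.stopRecording { previewController, error in
            // ReplayKit hands back a preview controller for reviewing and saving
            // the capture; a research tool would need to export the resulting
            // file for analysis instead.
            if let error = error { print("Could not stop recording: \(error)") }
            _ = previewController
        }
    }
}
```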

Third, while display recorders have the advantage of being able to capture data from many students engaged in the same task at the same time, no matter where they are, they also produce huge volumes of data, very quickly. While in itself this isn’t necessarily an issue, these data need to be analysed, and depending on the approach taken and the support available, this can be an exceedingly time-consuming process. While the fine-grained analysis methods used in these studies may not be suitable for all research, considerable thought needs to be given at the research design stage to how data will be analysed, and how outcomes from the analysis will be reported and communicated. Experience using the recorder in these studies demonstrates that it is almost impossible to analyse all the data it captures. Therefore, careful consideration must be given to how data are sampled for analysis purposes, so that research questions are responded to in a way that is robust, but achievable. In practice, this means selection criteria need to be generated that will provide a sufficient and appropriate dataset aligned with the focus, needs and methods of the study, but that are, at the same time, manageable. The analysis approach for this research used Studiocode, which supported the extraction of rich and detailed information from data. However, the need to analyse multiple events or log data against several codes within the same clip meant each sample needed to be viewed three and sometimes four times. On average, analysing 45–50 minutes of display data in these studies could take between three and four hours.

Finally, display data suffer from the same issues as video data generally when it comes to publishing research in more traditional academic journals. While a few journals are emerging that enable authors to hyperlink to ‘live’ video data, and a small number of others allow the uploading and sharing of short clips as supplementary materials, this capacity is usually limited to one or two examples. One of the main advantages of video is the ability to evidence multiple data ‘streams’ simultaneously – as illustrated previously in the discussion of the convergence of finger placement indication, display, audio and Facecam data – to provide a more complete and holistic account of events. While, with appropriate permissions, these data can be shared at conferences and presentations, valuable nuances are generally lost when authors need to distil video into a series of still screenshots with audio transcripts to accommodate somewhat dated academic journal publication requirements. Notwithstanding the considerable time involved in doing this, an improved capacity to hyperlink to relevant video data straight from the text in online versions of journals would enhance researchers’ capacity to fully communicate and validate interpretations, adding substantially to readers’ meaning-making and to the robustness of research findings and conclusions.

Conclusion

Although using display recording in educational research is not new, the advent of mobile devices and changes to school learning environment design have presented new challenges for researchers seeking to capture authentic and detailed data of children’s interactions with apps (and each other) as they complete learning tasks. While the mobile solution described in this chapter proved effective for these studies, this may not be the case for all. Using a recorder with young children carries with it certain sensitivities and considerations that need to be taken into account on a case-by-case basis. While I was fortunate to gain the support of my ethics committee for using the recorder on a ‘benefit vs risk’ assessment, they insisted on well-defined procedures being in place relating to the handling of problematic data, rigid consent and assent procedures, and disablement of the remote activation function. Despite these procedures being followed to the letter, it was mildly disturbing to note that some children did not appear to understand what the recorder did, or the type of data it collected about them and their actions. This realisation led to more intense initial assent procedures in later studies, where the children were shown carefully selected samples of previous recordings, accompanied by an explanation of how the recorder worked. Despite this, it was still apparent in data that many either forgot about the recorder, didn’t fully understand its operation or didn’t care. Researchers using this system with young children should be mindful of these considerations, and not assume that, because children have assented, they fully understand what is going on.

Finally, while the ethics committee approved the Facecam capture function, interestingly the children’s parents held the final word on its use, with the majority declining permission to have their child’s image recorded. Although frustrating from a researcher’s perspective, given the heightened sensitivity (justified or not) around the recording and display of children’s faces, parents’ views must be respected. It may well be that with the passage of time and the proliferation of such behaviours on social media, both parents and universities will soften their stance somewhat on this issue, allowing the full richness of the interaction between the multiple data streams presented in video to be fully utilised for research purposes.

Notes

1. New Zealand uses a decile system, based on the socio-economic status (SES) of the surrounding community, as a means of allocating base funding and resources. A full explanation can be found at https://www.education.govt.nz/school/running-a-school/resourcing/operational-funding/school-decile-ratings/.

2. See McArthur (2009) for an interesting discussion of the ethics of Milgram’s studies (https://link.springer.com/content/pdf/10.1007%2Fs11948-008-9083-4.pdf).

References

Adobe (2017) Captivate, computer program. www.adobe.com/au/products/captivate.html.
Barmby, P., Harries, T., Higgins, S. and Suggate, J. (2009) The array representation and primary children’s understanding and reasoning in multiplication. Educational Studies in Mathematics 70(3), 217–241.
Barns, L., Scutter, S. and Young, J. (2005) Using screen recording and compression software to support online learning. Innovate: Journal of Online Education 1(5), 1–5.
Beach, P. and Willows, D. (2014) Investigating teachers’ exploration of a professional development website: An innovative approach to understanding the factors that motivate teachers to use Internet-based resources. Canadian Journal of Learning and Technology 40(3), 1–16.
Brennan, K. and Resnick, M. (2012) New frameworks for studying and assessing the development of computational thinking. Paper presented at AERA, Vancouver, BC. Retrieved 16 February 2018. http://web.media.mit.edu/~kbrennan/files/Brennan_Resnick_AERA2012_CT.pdf.
Burns, R. (2000) Introduction to Research Methods (4th edition). Longman, Frenchs Forest, NSW.
Cardinal Blue Developer (2018) PicCollage, computer program. https://pic-collage.com/.
Chaney, B., Barry, A., Chaney, J., Stellefson, M. and Webb, M. (2013) Using screen video capture software to aide and inform cognitive interviewing. Quality & Quantity 47(5), 2529–2537.
Christopher Thorne Productions (2012) Mr Phonics, computer program. https://itunes.apple.com/us/app/mr-thorne-does-phonics-letters-sounds-for-ipad/id431679830?mt=8.
Coghlan, M. (2011) iPads – A game changer for education? New Learning TAFE, SA. Retrieved 12 February 2018. www.slideshare.net/michaelc/ipads-a-game-changer-for-education.
Drumheller, K. and Lawler, G. (2011) Capture their attention: Capturing lessons using screen capture software. College Teaching 59(2), 93.
Einarsdóttir, J. (2007) Research with children: Methodological and ethical challenges. European Early Childhood Education Journal 15(2), 197–211.
Falloon, G.W. (2013) Young students using iPads: App design and content influences on their learning pathways. Computers & Education 68, 505–521.
Falloon, G.W. (2016) An analysis of young students’ thinking when completing basic coding tasks using Scratch Jnr. on the iPad. Journal of Computer-Assisted Learning 32(6), 576–593.
Falloon, G.W. and Khoo, E. (2014) Exploring young students’ talk in iPad-supported collaborative learning environments. Computers & Education 77, 13–28.
Flewitt, R. (2005) Conducting research with young children: Some ethical considerations. Economic & Social Research Council Special Education, 1–14. Retrieved 21 February 2018. http://oro.open.ac.uk/2720/2/Flewitt(1).pdf.
Imler, B. and Eichelberger, M. (2011) Using screen capture to study user research behaviour. Library Hi Tech 29(3), 446–454.
Krathwohl, D.R. (2002) A revision of Bloom’s taxonomy: An overview. Theory Into Practice 41(4), 212–225.
Kuuskorpi, M. and Cabellos González, N. (2011) The future of the physical learning environment: School facilities that support the user. CELE Exchange 2011/11. Retrieved 26 February 2018. www.oecd.org/education/innovation-education/centreforeffectivelearningenvironmentscele/49167890.pdf.
Lynch, C. (2000) Authenticity and integrity in the digital environment: An exploratory analysis of the central role of trust. Authenticity in a Digital Environment, Council on Library and Information Resources, Washington, DC.
Macroplant (2016) iExplorer, file transfer application. Retrieved 21 February 2018. https://macroplant.com/iexplorer.
McArthur, D. (2009) Good ethics can sometimes mean better science: Research ethics and the Milgram experiments. Science and Engineering Ethics 15(1), 69–79.
Melhuish, K. and Falloon, G.W. (2010) Looking to the future: M-learning with the iPad. Computers in New Zealand Schools 22(3), 1–16.
Mercer, N. (1994) The quality of talk in children’s joint activity at the computer. Journal of Computer-Assisted Learning 10(1), 24–32.
Notion Developer (2017) Popplet, computer program. http://popplet.com/.
OECD (2017) The OECD handbook for innovative learning environments. OECD Publications, Paris. Retrieved 1 March 2018. http://dx.doi.org/10.1787/9789264277274-en.
Osborne, R. (1983) Towards modifying children’s ideas about electric current. Research in Science and Technological Education 1(1), 73–82.
Polished Play LLC (2017) PuppetPals, computer program. www.polishedplay.com/.
Raído, V. (2013) Using screen recording as a diagnostic tool in early process-oriented translator training. In: Kiraly, D., Hansen-Schirra, S. and Maksymski, K. (eds.) New Prospects and Perspectives for Educating Language Mediators. Narr Francke Attempto Verlag, Tübingen, Germany, 121–138.
Séror, J. (2012) Show me! Enhanced feedback through screencasting technology. TESL Canada Journal 30(1), 104–116.
Shipstone, D.M. (1984) A study of children’s understanding of electricity in simple DC circuits. European Journal of Science Education 6(2), 185–198.
Silva, M. (2012) Camtasia in the classroom: Student attitudes and preferences for video commentary or Microsoft Word comments during the revision process. Computers and Composition 29, 1–22.
Tang, J., Liu, S., Muller, M., Lin, J. and Drews, C. (2006) Unobtrusive but invasive: Using screen recording to collect field data on computer-mediated interaction. Proceedings of CSCW ’06. Retrieved 1 March 2018. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=E54DE8D5A78B60EED2BDF51C9E4244ED?doi=10.1.1.109.7678&rep=rep1&type=pdf.
Techsmith (2017a) Camtasia, computer program. www.techsmith.com.
Techsmith (2017b) Snagit, computer program. www.techsmith.com/screen-capture.html.
Xu, C. and Ding, Y. (2014) An exploration study of pauses in computer-assisted EFL writing. Language, Learning and Technology 18(3), 80–96.
Zhang, M. (2013) Prompts-based scaffolding for online inquiry: Design intentions and classroom realities. Educational Technology and Society 16(3), 140–151.
Zhang, M. and Quintana, C. (2012) Scaffolding strategies for supporting middle school students’ online inquiry processes. Computers & Education 58, 181–196.