Virtual pedagogical agents and intelligent tutoring systems (ITSs) have been used for many years to deliver education, with comprehensive reviews available for each field (1, 2). The use of social robots has recently been explored in the educational domain, with the expectation of similarly positive benefits for learners (3–5). A recent survey of long-term human-robot interaction (HRI) highlighted the increasing popularity of using social robots in educational environments (6), and restricted surveys have previously been conducted in this domain (7, 8).
In this paper, we present a review of social robots used in education. The scope was limited to robots that were intended to deliver the learning experience through social interaction with learners, as opposed to robots that were used as pedagogical tools for science, technology, engineering, and math (STEM) education. We identified three key research questions: How effective are robot tutors at achieving learning outcomes? What is the contribution made by the robot’s appearance and behavior? And what are the potential roles of a robot in an educational setting? We support our review with data gleaned from a statistical meta-analysis of published literature. We aim to provide a platform for researchers to build on by highlighting the expected outcomes of using robots to deliver education and by suggesting directions for future research.
Benefits of social robots as tutoring agents
The need for technological support in education is driven by demographic and economic factors. Shrinking school budgets, growing numbers of students per classroom, and the demand for greater personalization of curricula for children with diverse needs are fueling research into technology-based support that augments the efforts of parents and teachers. Most commonly, this support takes the form of a software system that provides one-on-one tutoring. Social interaction enhances learning between humans, in terms of both cognitive and affective outcomes (9, 10). Research has suggested that some of these behavioral influences also translate to interactions between robots and humans (3, 11). Although robots that do not exhibit social behavior can be used as educational tools to teach students about technology [such as in (12)], we limited our review to robots designed specifically to support education through social interaction.
Because virtual agents (presented on laptops, tablets, or phones) can offer some of the same capabilities but without the expense of additional hardware, the need for maintenance, and the challenges of distribution and installation, the use of a robot in an educational setting must be explicitly justified. Compared with virtual agents, physically embodied robots offer three advantages: (i) they can be used for curricula or populations that require engagement with the physical world, (ii) users show more social behaviors that are beneficial for learning when engaging with a physically embodied system, and (iii) users show increased learning gains when interacting with physically embodied systems over virtual agents.
Robots are a natural choice when the material to be taught requires direct physical manipulation of the world. For example, tutoring physical skills, such as handwriting (13) or basketball free throws (14), may be more challenging with a virtual agent, and this approach is also taken in many rehabilitation- or therapy-focused applications (15). In addition, certain populations may require a physically embodied system. Robots have already been proposed to aid individuals with visual impairments (16) and for typically developing children under the age of two (17) who show only minimal learning gains when provided with educational content via screens (18).
In addition, there is often an expectation that robot tutors can move through dynamic, populated spaces and manipulate the physical environment. Although not always needed in the context of education, there are scenarios in which the learning experience benefits from the robot being able to manipulate objects and move autonomously, such as when supporting physical experimentation (19) or when the robot moves to the learner rather than the learner moving to the robot. These challenges are not exclusive to social robotics and robot tutors, but having the robot operate near and with (young) learners adds complexities that are often disregarded in work on navigation and manipulation.
Physical robots are also more likely than virtual agents to elicit from users social behaviors that are beneficial to learning (20). Robots can be more engaging and enjoyable than a virtual agent in cooperative tasks (21–23) and are often perceived more positively (22, 24, 25). Importantly for tutoring systems, physically present robots elicit significantly more compliance with their requests, even when those requests are challenging, than a video representation of the same robot (26).
Last, physical robots have enhanced learning and affected later behavioral choice more substantially than virtual agents. Compared with instructions from virtual characters, videos of robots, or audio-only lessons, robots have produced more rapid learning in cognitive puzzles (27). Similar results have been demonstrated when coaching users to select healthier snacks (24) and when helping users continue a 6-week weight-loss program (28). A comprehensive review (25) concluded that the physical presence of a robot led to positive perceptions and increased task performance when compared with virtual agents or robots displayed on screens.
Technical challenges of building robot tutors
There are a number of challenges in using technology to support education. Using a social robot adds to this set of challenges because of the robot’s presence in the social and physical environment and because of the expectations the robot creates in the user. The social element of the interaction is especially difficult to automate: Although robot tutors can operate autonomously in restricted contexts, fully autonomous social tutoring behavior in unconstrained environments remains elusive.
Perceiving the social world is a first step toward acting appropriately in it. Robot tutors should be able not only to correctly interpret the user’s responses to the educational content offered but also to interpret the rapid and nuanced social cues that indicate task engagement, confusion, and attention. Although automatic speech recognition and social signal processing have improved in recent years, sufficient progress has not been made for all populations. Speech recognition for younger users, for example, is still insufficiently robust for most interactions (29). Instead, alternative input technologies, such as touch-screen tablets or wearable sensors, read responses from the learner and can serve as a proxy for detecting engagement and tracking the performance of the student (30–32). Robots can also use explicit models of disengagement in a given context (33) and strategies, such as activity switching, to sustain engagement over the interaction (34). Computational vision has made great strides in recent years but is still limited when dealing with the range of environments and social expressions typically found in educational and domestic settings. Although advanced sensing technologies for reading gesture, posture, and gaze (35) have found their way into tutoring robots, most social robot tutors remain limited by the degree to which they can accurately interpret the learner’s social behavior.
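As a concrete illustration of the kind of proxy just described, the sketch below derives a crude engagement estimate from logged tablet responses. Everything in it, from the thresholds to the weighting, is a hypothetical placeholder rather than the approach of any cited system.

```python
# A deliberately simple sketch of the kind of proxy signal a tutoring system
# might derive from touch-screen interaction logs. The thresholds and the
# weighting are hypothetical, not taken from any cited system.
from dataclasses import dataclass

@dataclass
class Response:
    latency_s: float   # time from prompt to the learner's answer, in seconds
    correct: bool

def engagement_proxy(responses, slow_threshold_s=20.0):
    """Crude engagement estimate in [0, 1] from recent tablet responses."""
    if not responses:
        return 1.0  # no evidence yet; assume the learner is engaged
    recent = responses[-5:]  # consider only the last few answers
    timely = sum(r.latency_s <= slow_threshold_s for r in recent) / len(recent)
    accurate = sum(r.correct for r in recent) / len(recent)
    # Weight responsiveness more heavily than correctness: slow or missing
    # answers are treated as a stronger disengagement cue than wrong ones.
    return 0.7 * timely + 0.3 * accurate

log = [Response(4.2, True), Response(6.0, False), Response(31.5, False)]
print(round(engagement_proxy(log), 2))  # ~0.57
```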
Armed with whatever social signals can be read from the student, the robot must choose an action that advances the long-term goals of the educational program. This can be a difficult choice, even for experienced human instructors. Should the instructor press on and attempt another problem, advance to a more challenging problem, review how to solve the current problem, offer a hint, or even offer a brief break from instruction? Educational theories of human-based instruction often conflict, and whether these theories hold for robot instructors is an open question. These choices are also present in ITSs, but the explicitly agentic nature of robots often introduces additional options and, at times, complications. Choosing an appropriate emotional support strategy based on the affective state of the child (36), assisting with a meta-cognitive learning strategy (37), deciding when to take a break (31), and encouraging appropriate help-seeking behavior (4) have all been shown to increase student learning gains. Combining these actions with appropriate gestures (38), appropriate and congruent gaze behavior (39), expressive and attention-guiding behaviors (11), and timely nonverbal behaviors (3) also positively affects student recall and learning. However, merely increasing the amount of social behavior does not lead to increased learning gains: some studies have found that social behavior can be distracting (40, 41). Instead, the social behavior of the robot must be carefully designed in conjunction with the interaction context and task at hand to enhance the educational interaction.
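To make the action-selection problem concrete, the following sketch encodes one possible rule-based policy over the tutor’s current estimates of mastery and engagement. The action names and thresholds are purely illustrative; they are not drawn from any of the cited studies, and real systems weigh many more factors.

```python
# A purely illustrative, rule-based version of the tutoring action choice
# described above, operating on estimates of mastery and engagement.
# Action names and thresholds are hypothetical placeholders.

def choose_action(p_mastery, engagement, consecutive_errors):
    """Pick the next tutoring move from a small, fixed repertoire."""
    if engagement < 0.3:
        return "offer_break"            # disengaged: pause before pushing on
    if consecutive_errors >= 3:
        return "review_worked_example"  # repeated failure: step back and review
    if p_mastery < 0.4:
        return "give_hint"
    if p_mastery < 0.8:
        return "same_difficulty_problem"
    return "advance_to_harder_problem"

print(choose_action(p_mastery=0.35, engagement=0.8, consecutive_errors=1))  # give_hint
```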
Last, substantial research has focused on personalizing interactions to the specific user. Within the ITS community, computational techniques such as dynamic Bayesian networks, fuzzy decision trees, and hidden Markov models are used to model student knowledge and learning. Like on-screen tutoring systems, robot tutors use these same techniques to tailor the complexity of problems to the capabilities of the student, providing more complex problems only when easier problems have been mastered (42–44). In addition to selecting personalized content, robotic tutoring systems often provide further personalization to support individual learning styles and interaction preferences. Even straightforward forms of personalization, such as using a child’s name or referencing personal details within an educational setting, can enhance user perception of the interaction and are important for maintaining engagement in learning interactions (45, 46). Other affective personalization strategies have been explored to maintain engagement during a learning interaction, for example, by using reinforcement learning to select the robot’s affective responses to the behavior of children (47). A field study showed that students who interacted with a robot that simultaneously demonstrated three types of personalization (nonverbal behavior, verbal behavior, and adaptive content progression) achieved greater learning gains and sustained engagement than students interacting with a nonpersonalized robot (48). Although progress has been made in the constituent technologies of robot tutors, from perception to action selection and the production of behaviors that promote learning, integrating these technologies and balancing their use to elicit prosocial behavior and consistent learning remain open challenges.
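As an illustration of the probabilistic student models mentioned above, the sketch below implements Bayesian knowledge tracing, a simple special case of a dynamic Bayesian network widely used in the ITS literature. The parameter values are placeholders, and this is not the model of any particular cited system.

```python
# Minimal Bayesian knowledge tracing (BKT) update -- an illustrative sketch of
# the kind of probabilistic student model described above. Parameter values
# are placeholders chosen for the example.

def bkt_update(p_known, correct,
               p_slip=0.1,    # P(wrong answer | skill known)
               p_guess=0.2,   # P(correct answer | skill not known)
               p_learn=0.15): # P(skill acquired on this practice opportunity)
    """Return the updated probability that the student has mastered the skill."""
    if correct:
        # Posterior P(known | correct answer)
        posterior = (p_known * (1 - p_slip)) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # Posterior P(known | wrong answer)
        posterior = (p_known * p_slip) / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # Account for learning between practice opportunities
    return posterior + (1 - posterior) * p_learn

# Example: belief after a correct answer followed by a wrong one
p = 0.3
for answer in (True, False):
    p = bkt_update(p, answer)
    print(f"P(mastered) = {p:.2f}")
```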
REVIEW
To support our review, we used a meta-analysis of the literature on robots for education. Three key questions framed the meta-analysis and dictated which information was extracted:
1. Efficacy. What are the cognitive and affective outcomes when robots are used in education?
2. Embodiment. What is the impact of using a physically embodied robot when compared with alternative technologies?
3. Interaction role. What are the different roles the robot can take in an educational context?
For the meta-analysis, we used published studies retrieved from the Google Scholar, Microsoft Academic Search, and CiteSeerX databases using the following search terms: robot tutor, robot tutors, socially assistive robotics (manually filtered to retain results relevant to education), robot teacher, robot assisted language learning, and robot assisted learning. The earliest published work appeared in 1992, and the survey cutoff date was May 2017. In addition, prominent social HRI journals and conference proceedings were manually searched for relevant material: International Conference on Human-Robot Interaction, International Journal of Social Robotics, Journal of Human-Robot Interaction, International Conference on Social Robotics, and the International Symposium on Robot and Human Interactive Communication (RO-MAN).
The selection of papers was based on four additional criteria:
1) Novel experimental evaluations or analyses should be presented.
2) The robot should be used as the teacher (i.e., the robot is an agent in the interaction) rather than the robot being used as an educational prop or a learner with no intention to educate [e.g., (49)].
3) The work must have included a physical robot, with an educative intent. For example, studies considering “coaches” that sought to improve motivation and compliance, but did not engage in education [e.g., (50)], were not included, whereas those that provided tutoring and feedback were included [e.g., (15)].
4) Only full papers were included. Extended abstracts were omitted because these often contained preliminary findings, rather than complete results and full analyses.
We retained 101 papers for analysis and excluded 12 papers for various reasons (e.g., a paper repeating results from an earlier publication). The analyzed papers together contain 309 study results (51).
To compare outcomes of the different studies, we first divided the outcomes of an intervention into either affective or cognitive. Cognitive outcomes focus on one or more of the following competencies: knowledge, comprehension, application, analysis, synthesis, and evaluation (52–54). Affective outcomes refer to qualities that are not learning outcomes per se, for example, the learner being attentive, receptive, responsive, reflective, or inquisitive (53). The meta-analysis contained 99 (33.6%) data points on cognitive learning outcomes and 196 (66.4%) data points on affective learning outcomes; 14 study results did not contain a comparative experiment on learning outcomes.
Cognitive outcomes are typically measured through pre- and posttests of student knowledge, whereas affective outcomes are more varied and can include self-reported measures and observations by the experimenters. Table 1 contains the most common methods for measuring cognitive and affective outcomes reported in the literature.
Most studies focused on children (179 data points; 58% of the sample; mean age, 8.2 years; SD, 3.56), whereas adults (≥18 years old) were a lesser focus of research in robot tutoring (98 data points; 32% of the sample; mean age, 30.5; SD, 17.5). For 29 studies (9%), both children and adults were used, or the age of the participants was not specified.
If a study reported an effect size expressed as Cohen’s d, this value was used unaltered. Where the effect size was not reported, or was expressed in a measure other than Cohen’s d, an online calculator (55) [see also (56)] was used, provided the paper contained enough statistical information (typically, participant numbers, means, and SDs are sufficient).
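For readers wishing to replicate this step, the sketch below shows the standard conversion from group means, SDs, and sample sizes to Cohen’s d using a pooled standard deviation, which is the computation such calculators typically perform; the example numbers are invented.

```python
# A sketch of the standard conversion from group means, SDs, and sample sizes
# to Cohen's d with a pooled standard deviation -- the kind of computation the
# online calculator referenced in the text typically performs. Data are made up.
from math import sqrt

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical post-test scores: robot condition vs. screen-based condition
print(round(cohens_d(72.0, 11.0, 20, 64.0, 12.0, 20), 2))  # ~0.69
```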
We captured the following data from the publications: the study design, the number of conditions, the number of participants per condition, whether participants were children or adults, participant ages (mean and SD), the robot used, the country in which the study was run, whether the study used a within- or between-subjects design, the reported outcomes (affective or cognitive, with details on exactly what was measured), the descriptive statistics (mean, SD, t, and F values, where available), the effect size as Cohen’s d, whether the study involved one robot teaching one person or one robot teaching many, the role of the robot (presenter, teaching assistant, teacher, peer, or tutor), and the topic under study (embodiment of the robot, social character of the robot, the role of the robot, or other).
The studies in our sample reported more affective outcomes than cognitive outcomes (Fig. 1A). This reflects the relative ease with which a range of affective outcomes can be assessed using questionnaires and observational studies, whereas cognitive outcomes require administering a controlled knowledge assessment before and after the interaction with the robot, and typically only one such assessment is reported per study.
Figure 2B shows the countries in which studies were run. Research on robots for learning is, perhaps unsurprisingly, conducted predominantly in East Asia (Japan, South Korea, and Taiwan), Europe, and the United States. An exception is the research in Iran on the use of robots to teach English in classroom settings.
Extracting meaningful statistical data from the published studies is not straightforward. Of the 309 results reported in 101 published studies, only 81 results contained enough data to calculate an effect size, highlighting the need for more rigorous reporting of data in HRI.
Efficacy of robots in education
The efficacy of robots in education is of primary interest, and here we discuss the outcomes that might be expected when a robot is used. The aim is to provide a high-level overview of the effect size that might be expected when comparing robots with a variety of control conditions, grouping a range of educational scenarios with many varying factors between studies (see Fig. 3). More specific analyses split by individual factors will be explored in subsequent sections.
Learning effects are divided into cognitive and affective outcomes. Across all studies included in the meta-review, we have 37 results that compared the robot with an alternative, such as an ITS, an on-screen avatar, or human tutoring. Of these, the aggregated mean cognitive outcome effect size (Cohen’s d weighted by N) of robot tutoring is 0.70 [95% confidence interval (CI), 0.66 to 0.75] from 18 data points, with a mean of N = 16.9 participants per data point. The aggregated mean affective outcome effect size (Cohen’s d weighted by N) is 0.59 (95% CI, 0.51 to 0.66) from 19 data points, with a mean of N = 24.4 students per data point. Many studies using robots do not consider learning in comparison with an alternative, such as computer-based or human tutoring, but instead against other versions of the same robot with different behaviors. The limited number of studies that did compare a robot against an alternative offers a positive picture of the contribution to learning made by social robots, with a medium effect size for affective and cognitive outcomes. Furthermore, positive affective outcomes did not imply positive cognitive outcomes, or vice versa. In some studies, introducing a robot improved affective outcomes while not necessarily leading to significant cognitive gains [e.g. (57)].
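The following sketch illustrates how such an N-weighted aggregate effect size can be computed from per-study values. Because the text does not specify how the confidence intervals were derived, the bootstrap shown here is only one plausible choice, and all data are hypothetical.

```python
# An illustrative aggregation of per-study effect sizes weighted by sample
# size N, as described in the text. The article does not state how its 95% CIs
# were derived; the percentile bootstrap below is just one reasonable option.
# All data are hypothetical.
import random

def weighted_mean_d(ds, ns):
    """N-weighted mean of per-study Cohen's d values."""
    return sum(d * n for d, n in zip(ds, ns)) / sum(ns)

def bootstrap_ci(ds, ns, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the N-weighted mean effect size."""
    rng = random.Random(seed)
    studies = list(zip(ds, ns))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(studies) for _ in studies]  # resample studies
        stats.append(weighted_mean_d(*zip(*sample)))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

ds = [0.45, 0.80, 0.62, 0.95]   # hypothetical per-study Cohen's d values
ns = [12, 30, 18, 25]           # participants per study
print(round(weighted_mean_d(ds, ns), 2), bootstrap_ci(ds, ns))
```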
Human tutors provide a gold standard benchmark for tutoring interactions. Trained tutors are able to adapt to learner needs and modify strategies to maximize learning (58). Previous work (59) has suggested that human tutors produce a mean cognitive outcome effect size (Cohen’s d) of 0.79, so the results observed when using a robot are in a similar region. However, social robots are typically deployed in restricted scenarios: short, well-defined lessons delivered with limited adaptation to individual learners or flexibility in curriculum. There is no suggestion yet that robots have the capability to tutor in a general sense as well as a human can. Comparisons between robots and humans are rare in the literature, so no meta-analysis data were available to compare the cognitive learning effect size.
Robot appearance
Because the positive learning outcomes are driven by the physical presence of the robot, the question remains of what exactly it is about the robot’s appearance that promotes learning. A wide range of robots have been used in the surveyed studies, from small toy-like robots to full-sized android robots. Figure 2A shows the most used robots in the published studies.
The most popular robot in the studies we analyzed is the Nao robot, a 54-cm-tall humanoid by SoftBank Robotics Europe, available with 14, 21, or 25 degrees of freedom (see Fig. 4B). The latter two versions of Nao have arms, legs, a torso, and a head; they can walk, gesture, and pan and tilt their head. Nao has a rich sensor suite and an on-board computational core, allowing the robot to be fully autonomous. The dominance of Nao in HRI can be attributed to its wide availability, appealing appearance, accessible price point, technical robustness, and ease of programming. Hence, Nao has become an almost de facto platform for many studies of robots for learning. Another robot popular as a tutor is Keepon, a consumer-grade version of the Keepon Pro research robot. Keepon is a 25-cm-tall, snowman-shaped robot with a yellow foam exterior and no arms or legs (see Fig. 4C). It has four degrees of freedom that allow it to pan, roll, tilt, and bop. Originally sold as a novelty for children, it can be used as a research platform after some modification. Nao and Keepon occupy two extremes of the design space of social robots, making a comparison of learning outcomes for both particularly interesting.
Comparing Keepon with Nao, the respective cognitive learning gain is d = 0.56 (N = 10; 95% CI, 0.532 to 0.58) and d = 0.76 (N = 8; 95% CI, 0.52 to 1.01); therefore, both show a medium-sized effect. However, we note that direct comparisons between different robots are difficult with the available data, because no studies used the same experimental design, the same curriculum, and the same student population with multiple robots. Furthermore, different robots have tended to be used at different times, becoming popular in studies when that particular hardware model was first made available and decreasing in usage over time. Because the complexity of the experimental protocols has tended to increase, direct comparison is not possible at this point in time.
What is clear from surveying the different robot types is that all robots have a distinctly social character [except for the Heathkit HERO robot used in (60)]. All robots have humanoid features—such as a head, eyes, a mouth, arms, or legs—setting the expectation that the robot has the ability to engage on a social level. Although there are no data on whether the social appearance of the robot is a requirement for effective tutoring, there is evidence that the social and agentic nature of the robots promotes secondary responses conducive to learning (61, 62). The choice of robot very often depends on practical considerations and whether the learners feel comfortable around the robot. The weighted average height of the robots is 62 cm; the shortest robot in use is the Keepon at 25 cm, and the tallest is the RoboThespian humanoid at 175 cm. Shorter robots are often preferred when teaching young children.
Robot behavior
To be effective educational agents, the behavior of social robots must be tailored to support various aspects of learning across different learners and diverse educational contexts. Several studies focused on understanding critical aspects of educational interactions to which robots should respond, as well as determining both what behaviors social robots can use and when to deliver these behaviors to affect learning outcomes.
Our meta-review shows that almost any strategy or social behavior of the robot aimed at increasing learning outcomes has a positive effect. We identified the influence of robot behaviors on cognitive outcomes (d = 0.69; N = 12; 95% CI, 0.56 to 0.83) and affective outcomes (d = 0.70; N = 32; 95% CI, 0.62 to 0.77).
Similar to findings in the ITS community, robots that personalize what content to provide based on user performance during an interaction can increase cognitive learning gains (43, 44). In addition to the adaptive delivery of learning material, social robots can offer socially supportive behaviors and personalized support for learners within an educational context. Personalized social support, such as using a child’s name or referring to previous interactions (45, 46), is the low-hanging fruit of social interaction. More complex prosocial behavior, such as attention-guiding (11), displaying congruent gaze behavior (39), nonverbal immediacy (3), or showing empathy with the learner (36), not only has a positive impact on affective outcomes but also results in increased learning.
However, just as human tutors must at times sit quietly and allow students the opportunity to concentrate on problem solving, robot tutors must also limit their social behavior at appropriate times based on the cognitive load and engagement of the student (40). The social behavior of the robot must be carefully designed in conjunction with the interaction context and task at hand to enhance the educational interaction and avoid student distraction.
It is possible that the positive cognitive and affective learning outcomes of robot tutors are not directly caused by the robot’s physical presence, but rather that the physical presence of the robot promotes social behaviors in the learner that, in turn, foster learning and create a positive learning experience. Robots have been shown to have a positive impact on compliance (26), engagement (21–23), and conformity (20), which, in turn, are conducive to achieving learning gains. Hence, a potentially valuable research direction is to explore what it is about social robots that affects the first-order outcomes of engagement, persuasion, and compliance.
Robot role
Social robots used in education take on a variety of roles. Beyond the typical role of teacher or tutor, robots can also support learning through peer-to-peer relationships and can support skill consolidation and mastery by acting as a novice. In this section, we provide an overview of the different roles a robot can adopt and their educational benefits.
Robot as tutor or teacher
As a tutor or teacher, robots provide direct curriculum support through hints, tutorials, and supervision. These types of educational robots, including teaching assistant robots (63), have the longest history of research and development, often targeting curricular domains for young children. Early field studies placed robots into classrooms to observe whether they would have any qualitative impact on the learners’ attitude and progress, but current research tends toward controlled experimental trials in both laboratory settings and classrooms (64).
A commercial tutor robot called IROBI (Yujin Robotics) was released in the early 2000s. Designed to teach English, IROBI was shown to enhance both concentration on learning activities and academic performance compared with other teaching technology, such as audio material and a web-based application (65).
The focus on younger children links robot education research with other scientific areas, such as language development and developmental psychology (66). On the basis of the earlier work that studied socialization between toddlers and robots in a nursery school (67), a fully autonomous robot was deployed in classrooms. It was shown that the vocabulary skills of 18- to 24-month-old toddlers improved significantly (68). Much of the work in which the robot is used as a tutor focuses on one-to-one interactions, because these offer the greatest potential for personalized education.
In some cases, the robot is used as a novel channel through which a lecture is delivered. In these cases, the robot is not so much interacting with the learners but acts as a teacher or an assistant for the teacher (69). The value of the robot in this case lies in improving attention and motivation in the learners, while the delivery and assessment is done by the human teacher. Here, the delivery is often one to many, with the robot addressing an entire group of learners (33, 63, 69).
Robot as peer
Robots can also be peers or learning companions for humans. Not only is a peer potentially less intimidating than a tutor or teacher, but peer-to-peer interactions can also have significant advantages over tutor-to-student interactions. Robovie was the first fully autonomous robot to be introduced into an elementary school (70). It was an English-speaking robot targeting two grades (first and sixth) of Japanese children. In field trials conducted over 2 weeks, improvements in English language skills were observed in some children. In one case, a peer robot elicited longer periods of attention on learning tasks and faster, more accurate responses than an identical-looking tutor robot (19). A long-term primary school study showed that a peer-like humanoid robot able to personalize the interaction could increase children’s learning of novel subjects (48). Often, the robot is presented as a more knowledgeable peer, guiding the student along a learning trajectory that is neither too easy nor too challenging. However, the role of such robots sometimes becomes ambiguous (tutor versus peer), and it is difficult to place one above the other in general. Learning companions (71), which offer motivational support but do not otherwise tutor, are also successful instances of peer-like robots.
Robot as novice
Considerable educational benefits can also be obtained from a robot that takes the role of a novice, allowing the student to take on the role of instructor, which typically improves confidence while also producing learning gains. This is an instance of learning by teaching, well known in human education and also referred to as the protégé effect (72). The learner makes an effort to teach the robot, and this effort has a direct impact on the learner’s own outcomes.
The care-receiving robot (CRR) was the first robot designed with the concept of a teachable robot for education (73). A small humanoid robot introduced into English classes improved the vocabulary learning of 3- to 6-year-old Japanese children (5). The robot was designed to make deliberate errors in English vocabulary but could be corrected through instruction by the children. In addition, CRR was shown to engage children more than alternative technology, which eventually led to the release of a commercial product based on the principle of a robot as a novice (74).
This novice role can also be used to teach motor skills. The CoWriter project explored the use of a teachable robot to help children improve their handwriting (13). A small humanoid robot, used in conjunction with a touch tablet, helped children who struggled with handwriting to improve their fine motor skills. The children taught the robot, which initially had very poor handwriting, and in doing so reflected on their own writing and showed improved motor skills (13). This suggests that presenting robots as novices has the potential to develop meta-cognitive skills in learners: because learners commit to teaching the material, they need a deeper understanding of the material as well as an understanding of the internal representations of their robot partner.
In our meta-analysis, the robot was predominantly used as a tutor (48%), followed by a role as teacher (38%). In only 9% of studies was the robot presented as a peer or novice (Fig. 1B). The robot was often used to offer one-to-one interactions (65%), with the robot used in a one-to-many teaching scenario in only 30% of the studies (Fig. 1C). In 5%, the robot had mixed interactions, whereby, for example, it first taught more than one student and then had one-on-one interactions during a quiz.
DISCUSSION
Although an increasing number of studies confirm the promise of social robots for education and tutoring, this Review also lays bare a number of challenges for the field. Robots for learning, and social robotics in general, demand a tightly integrated research effort: introducing these technologies into the classroom involves both solving technical challenges and changing educational practice.
With regard to the technical challenges, building a fluent and contingent interaction between social robots and learners requires the seamless integration of a range of processes in artificial intelligence and robotics. Starting with the input to the system, the robot needs a sufficiently correct interpretation of the social environment for it to respond appropriately. This requires significant progress in constituent technical fields, such as speech recognition and visual social signal processing, before the robot can access the social environment. Speech recognition, for example, is still insufficiently robust to allow the robot to understand spoken utterances from young children. Although these shortcomings can be resolved by using alternative input media, such as touch screens, this does place a considerable constraint on the natural flow of the interaction. For robots to be autonomous, they must make decisions about which actions to take to scaffold learning. Action selection is a challenging domain at best and becomes more difficult when dealing with a pedagogical environment, because the robot must have an understanding of the learner’s ability and progress to allow it to choose appropriate actions. Finally, the generation of verbal and nonverbal output remains a challenge, with the orchestrated timing of verbal and nonverbal actions a prime example. In summary, social interaction requires the seamless functioning of a wide range of cognitive mechanisms. Building artificial social interaction requires the artificial equivalent of these cognitive mechanisms and their interfaces, which is why artificial social interaction is perhaps one of the most formidable challenges in artificial intelligence and robotics.
Introducing social robots in the school curriculum also poses a logistical challenge. The generation of content for social robots for learning is nontrivial, requiring tailor-made material that is likely to be resource-intensive to produce. Currently, the value of the robot lies in tutoring very specific skills, such as mathematics or handwriting, and it is unlikely that robots can take up the wide range of roles a teacher has, such as pedagogical and carer roles. For the time being, robots are mainly deployed in elementary school settings. Although some studies have shown the efficacy of tutoring adolescents and adults, it is unclear whether the approaches that work well for younger children transfer to tutoring older learners.
Introducing robots might also carry risks. For example, studies of ITSs have shown that children often do not make the best use of on-demand support and either rely too heavily on the help function or avoid using it altogether, both of which result in suboptimal learning. Although strategies have been explored to mitigate this particular problem in robots (4), there might be other problems specific to social robots that have yet to be identified and for which solutions will be needed.
Social robots have, in the broadest sense, the potential to become part of the educational infrastructure, just as paper, whiteboards, and computer tablets have. Beyond this functional dimension, robots also offer unique personal and social dimensions. A social robot has the potential to deliver a learning experience tailored to the learner, supporting and challenging students in ways unavailable in current resource-limited educational environments. Robots can free up precious time for human teachers, allowing them to focus on what people still do best: providing a comprehensive, empathic, and rewarding educational experience.
Beyond the practical considerations of introducing robots in education, there are also ethical issues. How far do we want the education of our children to be delegated to machines, and to social robots in particular? Overall, learners are positive about their experience with robots for learning, but parents and teaching staff adopt a more cautious attitude (75). There is much to gain from using robots, but what do we stand to lose? Might robots lead to an impoverished learning experience in which what is technologically possible is prioritized over what the learner actually needs?
Nevertheless, robots show great promise when teaching restricted topics, with effect sizes on cognitive outcomes almost matching those of human tutoring. This is remarkable, because our meta-analysis gathered results from a wide range of countries using different robot types, teaching approaches, and deployment contexts. Although the use of robots in educational settings is limited by technical and logistical challenges for now, the benefits of physical embodiment may lift robots above competing learning technologies, and classrooms of the future will likely feature robots that assist a human teacher.
This is an article distributed under the terms of the Science Journals Default License.
REFERENCES AND NOTES