Research Article Creative Commons, CC-BY
‘Rooting Out’ the Issues for Biomedical Knowledge Flow: A Formal Theory for Mobile Learning Pedagogy
*Corresponding author: Davina Calbraith, Research Nurse, Public Health England, UK
Received: May 31, 2021; Published: July 30, 2021
DOI: 10.34297/AJBSR.2021.13.001914
Abstract
This is a scientific research paper on the importance of getting the technology right and using it to its best advantage in biomedical and human sciences. Ultimately, the result is better healthcare. Previous biomedical mobile learning reviews and research have almost always had areas that researchers have not quite been able to explain. These have been described as ‘pedagogical barriers’, and attempts to overcome them have met with varying degrees of success, hampering overall development and progress in the field.
Methods: The basis of this paper is an original mixed methods research study (consisting of usability and observation studies with semi-structured interviews) in which original and practical hypotheses formed a substantive theory. It uses the Constant Comparative Grounded Theory Method to see whether these have the potential to become a formal theory (capable of governing the underlying planning, delivery and evaluation of optimally effective mobile learning). The developing theory is compared against popular current approaches to discover whether proving or disproving instances of the theory and hypotheses can be found.
Findings: Comparison between the theory and current mobile learning approaches provided both proving and disproving instances, and identified specific elements integral to effective learning packages. Crucially, the mix of elements and non-threatening approaches enabled learners to follow their own learning pathways and gave them the freedom to explore (meaning learning is more successful). Furthermore, the hypotheses explained phenomena more fully than any other study, including the reasons why researchers’ attempts in the past had not been totally successful. Not only can the substantive theory reach formal theory status, but the flexibility seen in the hypotheses has far-reaching implications. Excitingly, this theory and the crucial elements identified have the potential to be used in many contexts and disciplines without suffering a drop in quality, rigor, or learner motivation.
Keywords: Mobile Learning; Theory Development; Constant Comparative Method; Grounded Theory; Personalised Learning
Abbreviations: ALP: Adaptive Learning Platforms; AI: Artificial Intelligence; LA: Learning Analytics; LP: Learning Personalisation; MOOCs: Massive Open Online Courses; RL: Rhizomatic Learning
Introduction
This scientific research paper stresses the importance of getting the technology right and knowing how to use it to its best advantage. In the field of biomedical sciences this ultimately results in better healthcare. A recent review discovered a trend in the most highly cited papers, i.e., ‘comparing different mobile learning modes and finding more effective mobile learning approaches’ Lai, et al. [1]. However, some take a different approach: they compare the impact of new mobile learning approaches with traditional instruction. In many senses this paper combines both. It compares learning modes in pursuit of effective approaches and assesses the impact of learning within these. It does this by using evidence-based research as a basis, and grounded theory to explain why the most effective approaches should be based on real-life impact (regardless of whether they are based on traditional or non-traditional methods). This paper is centred around ever-progressing ‘mobile’ learning, which unsurprisingly has a large range of definitions. Because devices continually evolve, definitions evolve too. A deliberately wide definition was chosen to avoid limiting potential findings, i.e., ‘mobile computing and e-learning’ Chee et al. [2].
Using computers to enhance online learning (for medics and healthcare workers) has been around for a while, but when it comes to the pedagogy behind it (i.e. the adult learning theory), many conceptual issues prevent effective planning and delivery of the learning. For example, many educators do not understand or know how to use mobile learning models, and this is especially true if that learning is on the move, i.e. ‘mobile learning’ Gikas, et al. [3]. These conceptual issues therefore need to be identified, ‘rooted out’, and explained if we are to progress beyond them and the limitations they impose on our understanding. This way we can understand in more depth why certain barriers to applying good quality research to mobile learning exist. However, this is not always an easy task. Historically, there was a flurry of pedagogical activity between 2004 and 2011, with many pedagogical approaches being discussed for medical mobile learning. Calbraith [4] went some way to defining important points to consider (i.e., original generic principles and the part they could play in mobile learning formats, the importance of learner input and perceptions about their learning, and how branch and loop learning systems shape learner customisation). The participants were medics, nurses, biomedical scientists, and science teachers based at two sites (Midlands and Eastern UK), with full ethical approval. Here, members of the general public (i.e. ‘laymen’) were effectively used as a control group. Mixed methods methodology was used to collect data on top-performing pedagogical approaches based on real-life impact (drawn from a systematic review which was then assessed via a modified version of ‘Kirkpatrick’s model’ Kirkpatrick, et al. [5,6,7]). Please see (Table 1). This construct was used as it allowed unhindered observation and examination of the underlying pedagogical aspects of mobile learning.
It allowed promotion of cognitive activity and provided the freedom to ‘test and engage’ through testing assumptions (making hypotheses), adjusting variables (experimenting) and introducing content (modifying). Its basic premise allowed the pedagogy, and its successful elements, to be seen without other aspects getting in the way. Data was collected on how learners used different online learning approaches during observed usability studies.
The data was verified via semi-structured interviews. This work highlighted ‘learning routes’ or ‘pathways’ that learners chose to take through their learning packages, and importantly allowed the plotting of learners’ ‘knowledge flow’. This resulted in faster learning and development of clinical reasoning. Implicit meanings that learners attached to knowledge were seen to become explicit, and therefore enabled learning pathways to be plotted. This process was essentially the learners’ online ‘breadcrumb trail’. When individual routes from each learner were compared with many other learner routes, they were seen to become part of a cohesive whole, i.e. a well-trodden ‘learning pathway’ that all learners took. This was an original finding. Calbraith and Dennick [8] developed these concepts into a mobile learning model. This emphasised the importance of models being flexible enough to allow relevant and immediately useful information to be delivered to the learner through guided and informed reasoning. The combination of methods was successful in uncovering some previously undiscovered underlying pedagogies for mobile learning in medical sciences. This was because:
a) The learners were an integral part,
b) The impact was measurable, and
c) The research was ‘rooted’ in practice.
The generated hypotheses were developed into an original substantive theory (according to Glaser and Strauss’s constant comparative grounded theory method [9]). However, the field had not developed enough in 2011 for Calbraith to test the theory further. Today, biomedical and technological fields have moved on, but what is known about pedagogical barriers remains limited. New innovations and disruptive technologies cause further problems: technology often charges ahead, continually changing our mobile device use and inevitably leaving pedagogy lagging behind [10]. Much has been written in the literature about observations made when using new pedagogy. However, most papers stop short when it comes to adequately explaining what they observed, let alone the underlying mechanisms. Without such empirical evidence and explanations, pedagogical barriers arise with no readily available solutions. Aagaard [11] believes pedagogical underpinnings for mobile learning are “much needed and long overdue” given how little is known about ‘technology use and learning interactions’.
Aagaard is not alone. Many authors feel that learning more about learners’ technology use is a key issue but cannot fully explain why Zhang, et al. [12,13]. Removing these barriers would mean more effective learning would be possible. Consequently, the next step is to examine Calbraith’s hypotheses and substantive theory, then compare these to current approaches to see whether the field has developed enough for the theory to achieve formal theory status. This would be a significant step, as it would also remove many barriers to development. These hypotheses will therefore form the basis for the novel thinking in this paper to develop conceptual understanding (and therefore the shape of online learning) in this field. This paper goes further than others because it explains why some current pedagogical approaches ‘work’ and some do not; it provides the crucial factors involved, identifies what elements are involved, and explains why barriers exist in the first place. These aspects are important as they further our understanding of why some approaches to mobile learning are more effective than others. If we look at the background of pedagogical and knowledge flow issues for mobile learning, two issues are immediately obvious:
a) There is a lack of applied research, and
b) When researchers try to apply research, unexpected barriers arise.
This paper will therefore look at these issues before examining how well the theory explains the successes and shortfalls of popular, current approaches (discussed in more detail in the findings section).
Materials and Methods
Glaser, et al.’s [9] constant comparative grounded theory method will be used to:
a) Examine any conceptual issues for learning, pedagogical or knowledge flow, and
b) Develop the substantive theory.
Glaser and Strauss recommend three ways to do this in their original instructions. As Calbraith [4] had already performed the first stage of theory development (i.e., applying the formed hypotheses to diverse groups and situations), this paper starts from the second stage. The aim is to push variables to their limit: ‘to undertake direct data comparisons from other substantive areas in the researcher’s experience, or in the literature’. This was to find the extent to which the resulting theory can explain, prove, or disprove instances in current popular approaches at formal theory level, to test emergent theories, and to generate theory. As the grounded theory process for the original substantive theory is already published and is not the focus of this paper, it will not be repeated here in depth. Each unique observation and verbatim comment from the original usability studies was coded incident-by-incident and grouped according to subject and whether it was a positive, negative, or neutral statement. They were tested until saturation point, which was 6 regardless of the person using it and the pedagogy being tested. According to Kuzel [12], a saturation point of 6 would indicate that the 420 participants were homogenous. The demographics, however, indicate otherwise. Instead, it was the freely formed comments that were homogenous, making 20 units unnecessary.
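The coding procedure described above can be sketched in code. This is a minimal illustration only: in the study itself, coding was done by the researcher, and the category names, keyword rules, and example comments below are hypothetical stand-ins, not the study’s actual data.

```python
# Illustrative sketch of constant-comparative incident coding.
# Category keywords, valence cues, and comments are hypothetical.

def code_incident(comment):
    """Assign a (subject, valence) grouping to one verbatim comment.
    A real study codes by hand; keyword matching stands in here."""
    rules = {
        "navigation": ("menu", "pathway", "route"),
        "interaction": ("feedback", "quiz", "game"),
        "content": ("text", "video", "diagram"),
    }
    text = comment.lower()
    subject = next((cat for cat, kws in rules.items()
                    if any(kw in text for kw in kws)), "other")
    if "easy" in text or "enjoy" in text:
        valence = "positive"
    elif "hard" in text or "confus" in text:
        valence = "negative"
    else:
        valence = "neutral"
    return subject, valence

def code_until_saturated(comments, window=6):
    """Code incidents in order; report the saturation point, i.e. the
    incident after which `window` consecutive incidents yielded no
    new (subject, valence) grouping."""
    seen, since_new = set(), 0
    for i, comment in enumerate(comments, start=1):
        group = code_incident(comment)
        if group in seen:
            since_new += 1
            if since_new >= window:
                return i, seen  # saturation reached
        else:
            seen.add(group)
            since_new = 0
    return len(comments), seen  # never saturated within the data
```

The early stopping rule mirrors the idea that once six further units add no new grouping, the data is saturated for that pedagogy.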
Results and Discussion
In the research study that forms the basis of this paper (Calbraith [4]), the theoretical saturation point was achieved after 6 units, with 12 core categories and 11 pedagogical codes found. Meanings and associations were seen because:
i) the construct and learning packages did not impose extraneous variables, which allowed the whole process to be mapped, and ii) attributed meanings were first-hand from learners. This is important. There is a tendency in the literature to focus on pedagogical ‘parts’ or ‘stages’, which effectively renders some interactions ‘invisible’. This study showed that more associations can be seen when looking at the whole process simultaneously, which may explain why some authors and studies have found it difficult to explain why ‘pedagogical deconstruction’ techniques often fail. This paper is based on the 4 hypotheses found (see Table 2), but before analysis of the wider mobile context can be made, there is a need to look at the issues for biomedical mobile pedagogy in more depth to see if the theory can explain them. The first major pedagogical issue is the sheer lack of applied research for mobile learning and tool use Gikas et al. [3]. A typical example is Baez et al.’s [13] pedagogical model, which they themselves describe as ‘too utopian’ and ‘unsatisfactory for guiding lecturers in their pedagogical tasks’. This is due to ‘conceptual’ shortcomings and ‘a lack of awareness of models and strategies, and tools to implement those models’ (p436). In other words, the learners’ knowledge flow or ‘learning pathway’ could not be plotted due to conceptual and lecturer-based ‘stumbling blocks’.
So why is this a problem? If educators cannot understand best practice, they cannot apply it. If we do not know how learners use the learning, how can we know the real learner impact? In biomedical and healthcare learning this matters, as the end point of learning shapes service delivery and, ultimately, patients’ lives. When the learning takes place asynchronously (via a multitude of approaches and delivery platforms), this adds to the already complicated picture. Similarly, De Oliveira, et al.’s [14] original work led them to believe the digital landscape was a “complex environment where pedagogical solutions and answers cannot be prescribed, or even discovered”. However, they later decided the pedagogical tools’ context was a key issue. As tools can be used in so many varied contexts, it is easy to see why applied research is difficult. Like Baez, et al. [13], De Oliveira et al. subsequently recommended ‘bottom-up pedagogies’, i.e. coming from the learners themselves to allow for ‘risk-taking’ and ‘positive views of error’. This ‘bottom-up’ recommendation may be a confirming instance of the hypotheses, as capturing learners’ views was an intrinsic part of these.
Due to the concept/construct used, learners were happy trying out ideas, so knowledge flowed easily due to the freedom to guess answers. The hypotheses imply that when mixed elements are used, this also results in ‘increased knowledge, interest, motivation, and interaction’. Could ‘mixed elements’ be a key component of the theory, knowledge flow, or learning pathways? It is possible, as risk-taking with no consequence may explain why learners found the learning packages so enjoyable, why the learning motivated them to learn more, and why they perceived the learning as ‘having a good content and interaction balance’. As participants were largely drawn from medical fields where real-life risk-taking is positively discouraged (as it may result in harm to a patient if the correct clinical decision is not taken), this may have added to the appeal. Sung, et al. [15] called for “in-depth experimental research” to remove these aforementioned barriers and provide pedagogical solutions. Pedro, et al. [16] responded to this call by providing some best practice: ‘collaborative-driven versus data-driven practices’; ‘informing students about the learning process’ and ‘how to be focused’; and training educators to implement these. A crucial point is whether the substantive theory and hypotheses can explain, prove, or disprove these best practices (see Table 3). As the theory matches well to Pedro et al.’s best practice, this could be seen as a ‘proving instance’ of the theory, which is promising. A second major issue for mobile learning pedagogy is the unexpected limitations that crop up when researching this area. One example is Jones, et al. [17], who found unexpected limitations when attempting to explain pedagogical issues using a conceptual model for data analytics. As Jones’s model is purely theoretical, Jones is likely to be unaware of its full potential or the real-life impact it has on learner response.
Consequently, Jones called for future research to ‘test mock interfaces that simulate (learner) information controls’. This implies that a bottom-up, student-led focus is needed for biomedical mobile learning. In instances where successful research has been achieved, unexpected limitations have then hampered its full application.
Therefore, it is possible that this second issue has caused the first. Some believe this lack of applied research originates from educators lacking guidance Baran, et al. [18,15], or from failing to keep students interested Kuznekoff, et al. [19]. Others maintain it is ‘the nature of the beast’, i.e., pedagogy cannot keep pace with technology Kurzweil, et al. [20]. Clearly a combination of these issues is to blame. Addressing pedagogical problems and discovering key elements are therefore challenging. It is unsurprising that researching this area (and attempting to apply the research to practice) often results in some educators focusing on individual problems (e.g., lack of student engagement) and looking for approaches to solve those problems. As a result, many focus on learner motivation as their investigative starting point, and look at this through the lens of ‘gaming’ Rankin et al. [21]. Gaming, or ‘G-Learning’ as it has become, is simply the incorporation of a game into learning. G-Learning often has components that activate prior learning, provide instantaneous feedback, and permit learners to problem-solve, test different hypotheses, and transfer knowledge Pesare et al. [22]. This type of learning is popular with learners Hamm, et al. [23], which implies that they find ‘no-consequence risk-taking’ engaging. However, gaming per se has inadequate evidence to suggest it is better than traditional methods Dichev, et al. [24].
Lieberoth, et al. [25] support this view, believing that it is the ‘packaging’ (i.e. the elements of the game) rather than gaming itself that stimulates individuals. The developing theory’s hypotheses would also support this view, as learners felt safe to try out ideas. It is therefore suggested that it is not just certain elements per se that motivate and engage learners, but the specific ‘mix of elements’ within the game/learning. No study has previously looked at the ‘mix of elements’, so taking this stance demands a paradigm shift. This is because it begins to suggest that elements like these, if put into a learning package, could create universally effective mobile learning. This is not to suggest that ‘one size fits all’ (i.e. educators using specific mixed elements without thinking), but it does suggest that when certain mixed elements are present, effective learning takes place. If we follow this train of thought to its logical end point, it would imply that it may not be a tutor/student’s lack, nor pedagogy’s failure, nor gaming’s success that provide the best in-depth answers to pedagogical knowledge flow issues; instead, looking at the mix of elements in the learning may provide these answers. Furthermore, this would imply that lack of student engagement or learner motivation are not the best starting points for untangling any underlying pedagogic mechanisms, limitations or failures. This would also explain why pedagogical studies are often only partially successful in getting to the root of underlying pedagogical mechanisms and knowledge flow problems: they start by investigating the problem rather than observing how the learners use the learning (and the impact various elements have on them). This is why researchers find it hard to explain why certain parts of their examination of the subject are inadequate: they have not looked at their learners’ use of the learning packages, so they are unaware of their learners’ ‘most used’ learning pathway.
This is important, as this knowledge identifies which elements are going to be most effective to include in the learning. By contrast, the developing theory was able to explain why these pedagogical issues exist.
In short, the main issues are that mobile learning pedagogy encounters unexpected limitations or barriers when attempting to research it or apply that research to real-life contexts. These barriers house conceptual shortcomings, are difficult to explain, are often created by inadequate lecturer experience, or are produced by pedagogy’s struggle to keep up with technology. Conversely, successful pedagogies historically contain risk-taking elements, positive views of error, and practices that drive interaction. There are clear indications that bottom-up approaches are key to effective practice and knowledge flow. Purely theoretical models appear inadequate for driving theory development when used alone, but are much stronger when real-life impact is measured concurrently. Development should therefore be rooted in practices that have real-life impact. The next step was to see whether the developing theory could provide more detail to explain why effective pedagogies work, what the crucial elements to include in learning packages are, and how the developing theory ‘stacks up’ when required to explain both successes and problems in other research and approaches. Topical examples in the wider mobile learning context will therefore be discussed to provide further insight, establish important elements, and provide theoretical ‘proving’ and ‘disproving’ instances.
The Wider Mobile Context
Both successful and unsuccessful elements within current concepts and approaches were ‘unpicked’ and analysed to see i) the extent to which they can explain their own underlying pedagogy and knowledge flow, and ii) the extent to which any learning pathways can be identified. This is presented here as ‘What works’ and ‘What doesn’t work’ for each context, followed by a comparison with the developing theory to see whether it ‘stands up’. This is an important and necessary step (according to Glaser and Strauss’s constant comparative method) if the theory is to gain formal theory status. The current approaches/contexts analysed are: Learning Analytics and MOOCs, Artificial Intelligence and Rhizomatic Learning, Adaptive Learning Platforms (ALP) and Learning Personalisation, and Flipped Classrooms.
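The comparison step can be illustrated with a minimal sketch: each approach’s reported elements are intersected with elements named in the hypotheses, and overlap is read as support for the theory. The element names and the two-element threshold below are illustrative assumptions for demonstration, not part of the study.

```python
# Minimal sketch of the proving/disproving-instance comparison.
# THEORY_ELEMENTS paraphrases elements named in the hypotheses;
# the threshold of two shared elements is an illustrative assumption.

THEORY_ELEMENTS = {
    "interaction", "motivation", "timing",
    "mixed elements", "learner pathways", "knowledge transfer",
}

def classify_instance(approach_elements):
    """Compare one approach's reported elements against the theory
    and label the result as a proving, partial, or disproving
    instance based on how many elements overlap."""
    overlap = THEORY_ELEMENTS & set(approach_elements)
    if len(overlap) >= 2:
        return "proving"
    if overlap:
        return "partial"
    return "disproving"
```

In the comparisons that follow, this intersection is performed by argument rather than by code: each context’s quality indicators are checked against the hypotheses one by one.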
Learning Analytics (LA) and MOOCs
What works – MOOCs are ‘Massive Open Online Courses’ aimed at “unlimited participation and open access via the web” Kaplan, et al. [26]. According to Fidalgo-Blanco, et al. [27], MOOCs have not generally fulfilled expectations due to inadequate quality procedures. But when testing their pedagogical model, they found that ‘pedagogical approach’, ‘evaluation’, ‘user experience’, ‘motivation’, ‘learning design’, and ‘interaction’ were all quality indicators. Interestingly, the same elements are found within the hypotheses and developing theory. The second hypothesis states: “A good mix of elements results in increased interaction leading to increased active learning, knowledge and interest which achieves a good element-interaction balance”. Here, ‘motivation’ and ‘interaction’ are explained by allowing a) a greater application of learning, b) greater linkage of knowledge which develops reasoning, and c) decreased feelings of information overload.
Hausman, et al.’s [28] model also emphasised the importance of learner characteristics, feedback attributes, action to perpetuate motivation, self-regulation, and ‘flexible interfaces based on dynamic variables’. This again reflects elements found in the hypotheses under question and, like Lieberoth, et al. [25], supports the view that it is the ‘mix of elements’ within the learning/packaging that matters.
What doesn’t work – No instances in the current literature had enough detail to analyse elements that ‘don’t work’.
In summary, analysis of the Learning Analytics literature suggests that it is not actually certain approaches that provide the highest quality indicators for effective mobile learning, but rather certain elements within those approaches. A critique of Calbraith’s [4] research also supports this, as it found that it was specifically the mixture of certain elements that determined how successful the learning was. It is possible that Hausman, et al. [28] have discovered some crucial elements for effective mobile learning, as their findings accord with the first three hypotheses tested in the developing theory (see Table 2). This is therefore a further proving instance of the theory. Whilst Learning Analytics may be successful in identifying quality indicators, it cannot presently explain why particular elements are successful, describe the underlying pedagogy, or suggest guiding principles to the extent that the developing theory can. Could a mixture of specific elements be the key for all mobile learning packages? Are the elements that Hausman, et al. [28] identified the only dynamic variables or elements?
Artificial Intelligence (AI) and Rhizomatic Learning
What works – The concepts of ‘Learning Analytics’ and ‘AI Learning’ are sometimes confused. Both are often ‘theoretically explored’ but seldom applied in a significant way Ferguson et al. [29-33]. The term ‘Rhizomatic learning’ has been coined from ‘rhizomes’, which grow with root networks rather than one established root Bogue, et al. [34]. These pedagogies are based on the work of Deleuze and Guattari [35], where learning is autopoietic, i.e. the learning system/curriculum maintains itself by learners creating their own content with no predetermined restrictions. Few Rhizomatic learning studies explain implementation barriers or underlying pedagogy, provide equivalent learner engagement, or give empirical validation Tsai, et al. [36]. One exception is Diaz et al. [37], who claim their model is better than what has gone before because it “articulates elements that…enable interactive…mobile learning scenarios to be configured” via AI ‘processing’ of learning paths, and ‘interaction to transform learning’. This sounds promising. Another exception is Kinchin and Gravett [38]. They do not look at pedagogy per se but believe that ‘transformation’ and the timing of learning are essential elements for future study. So, does the developing theory explain these findings? ‘Timing’ is part of the fourth hypothesis: “Correct timing of elements for the learner increases understanding and reasoning speed as active processing takes place”. When this is achieved, increased interaction is seen (see second hypothesis).
The theory therefore explains how these crucial elements interact and why Diaz, et al.’s [37] successful elements (of ‘time’ and ‘interaction to transform learning’) are important. They simply allow greater absorption of the knowledge. The fourth hypothesis offers further explanation: good timing of information increases active learning as it allows the learner to digest the learning without distraction. Multimedia cognitive theory supports these hypotheses via Mayer, et al.’s [39] three cognitive assumptions. Text and images use the same visual learning ‘channel’, so when both are used together repeatedly on one screen, Mayer’s ‘limited capacity assumption’ applies. This is because the cognitive load in the working memory is full; so, when optimising learning package elements, over-use of the same channel should be avoided. Gunning, et al.’s [40] Psychological Model of Explanation summarises all of this in AI learning pathway terms as: ‘User receives explanation, revises mental model, performs better’. At this point it should be noted that the key elements found in learning analytics are different to those found by Diaz, et al. [37,38] (AI/rhizomatic learning). So, which are right?
Parts of Aguyo, et al.’s [41] systematic review support Hausman et al.’s [28] findings (i.e. learning analytics). Their concept showed ‘intrinsic qualities’ and ‘user input’ as crucial elements. However, Aguyo, et al. [41] also gave a real-life case example which conversely supports Diaz et al.’s [37] findings (i.e. AI/rhizomatic learning). This included exploration of theoretical principles, ‘intrinsic qualities’ and ‘the unique authenticity of learning contexts as defined by the users’. They published this thereafter as a model following the exact process laid out by Calbraith and Dennick [8], apart from the basic conceptual approach. So, which is correct? It is suggested that both are. Both contexts include ‘intrinsic qualities’, centre around the learner, and lean towards learning customisation. Renz, et al.’s [33] study takes this further. ‘Knewton’ software collected learner data to form ‘learning types’ and ‘success prognoses’. Algorithms produced ‘individual learning packages’ where content was continuously adapted. They discovered ‘individualisation’, ‘knowledge transfer’ and ‘human-digital interaction’ were key elements. They do not directly explain why these are crucial pedagogical knowledge flow elements, but imply it is the type of human-digital interaction that motivates learners. However, this is sufficient for a pattern to emerge. Fidalgo-Blanco et al.’s [27] motivation and interaction quality indicators (for MOOCs/learning analytics) resonate here. The common theme in both Hausman, et al.’s [28] and Diaz, et al.’s [37] findings is the intrinsic importance of the learners’ pathways, and the effect this has on learner motivation. Motivation, knowledge transfer and interaction are also features of the developing theory’s second hypothesis, so this is therefore a further proving instance of the theory.
What doesn’t work – Whilst parts of Aguyo, et al.’s [41] approach showed similarities to other contexts (‘intrinsic qualities’ and ‘user input’), other parts of their approach did not work. For example, they admit to ‘challenges of digital intervention decay’ (i.e. AI can ‘optimise’ learning pathways until no original learning objectives remain, causing ‘learner vulnerability’, Mackness et al. [42]). Furthermore, Aguyo, et al.’s [41] model requires the ‘Chemical Organisation Theory’ (COT) structure to make it work. COT is ‘a closed, self-maintaining set of components… to allow mapping and a fresh perspective of a system’s structure’ Dittrich, et al. [43]. COT is used to overcome barriers present in their approach, but it limits their model in several ways:
a) COT/autopoiesis identifies which elements are ‘theoretically possible’ and may even connect some to gain ‘combined meaning’. However, this does not outline the underlying learning pathway as the developing theory does. Aguyo, et al. [41] assume that if a combined meaning can be produced, it is automatically ‘transformative’ by its mere existence. This misses the underlying steps which explain how it becomes transformative (i.e., deep change in thinking/feelings/actions, O’Sullivan, et al. [44]). Their ‘combined meanings’ cannot be viewed as transformative while they remain untested and theoretical, and while the conditions under which they become transformative remain unspecified.
b) If taking the full autopoietic approach using AI, regular ‘sense checks’ are required due to the data’s limited ‘shelf-life’. This is to ensure learning pathways still reflect the learning package’s original intentions, and that learners still find content relevant. The developing theory is built upon Calbraith’s [4] approach, where adaptation is ‘in-built’, as learning preferences and pathways have already been optimised before learning starts (via top-performing pedagogies for each respective discipline). Their real-life impact on learning has already been evaluated, and pathway ‘saturation’ points already achieved through usability studies before any learning/course ever runs. Because of this, all students benefit immediately from the start of their learning. The period before learning suffers digital decay is significantly longer than in autopoietic/AI designs, because amendments to learning are only needed when a major new learning theory/pedagogy occurs with an acceptable level of real-life impact. So, amendments are seldom and negligible. This compares favourably to the greater changes needed when using AI/data analytics, where learner data is gathered during use. This inevitably means that learners only benefit after this data has been gathered.
c) Aguayo and Veloz use heutagogy to add to and extend beyond traditional approaches. Calbraith’s [4] approach and Calbraith’s [7] model, on which this theory is built, do not add to traditional approaches but actually merge new and old. This makes the learning fundamentally more powerful, because the only traditional approaches used are those found within the top-performing approaches with empirical impact. However, they are not amalgamated to the point that learning becomes inextricably bound to ‘user footprints’ or their ‘breadcrumb trails’. This is a crucial factor, as it means that learning based on this theory can be flexible, tailored to each discipline, and as wide or as specific as needed.
d) Aguayo et al.’s [41] purely theoretical principles may possibly be applied to mobile learning (to develop prospective ‘autopoietical coherence’), but the developing theory answers the challenges of mobile learning through a rigorous approach borne out of evidence-based practice. More importantly, it is derived from the learners themselves and was tried and tested to saturation point in the field before the theory was even applied.
In summary, the successful elements for AI and rhizomatic learning are user input, timing, interaction, learning transformation, real-life scenarios, individualisation, knowledge transfer, and human-digital interaction. These accord with the hypotheses and theory. In Calbraith’s [4] research, students felt the environment was adapted to their needs, and the developing theory is able to explain why. This is important. The hypotheses showed that ‘feelings of perpetual adaptation to learner needs’ feed the ‘perpetuated motivation to learn’ that was seen. This crucially means that not all learning preferences have to be included in the mobile learning package, provided it engenders these feelings and knowledge flows easily. Not all learning preferences were built into the learning packages, yet 100% of learners felt catered for. These feelings can also be explained by the causal relationships found in Calbraith’s [4] mapped learning pathways. This may also explain why personalised learning concepts are currently so popular: they engender the same feeling. Diaz et al.’s [37] model is comprehensive from a technological perspective and shines an important spotlight on ‘elements’ and ‘learning pathways’. It does not, however, specifically explain knowledge flow to the extent that the theory does.
Adaptive Learning Platforms (ALP) and Learning Personalisation (LP)
What works - ALPs are just what they sound like: learning platforms which allow adaptive learning. One purported advantage from the blended ALP literature is that they allow different learner-centred choices. Al-Zahrani et al. [45] found ‘motivation to learn’ was a global element. Zhang et al. [46] do not identify successful elements but imply that ‘timely feedback’ is a core feature, and therefore recommend examining ‘learners’ learning emotions’ in future research. These findings do little to elucidate specific pedagogy, knowledge flow or learning pathways, but they do add further weight to the crucial pedagogical elements for mobile learning already discussed. It should be noted that successful elements in Personalised Learning (i.e. learning customised to the learner’s preference) are hard to determine due to the concepts involved: differences between personalisation, adaptive educational processes, and responsive learning environments are sometimes imperceptible (Bulger et al. [47]).
Hence, specific benefits within research studies are often unapparent, and a lack of theoretical underpinning and ‘persistent pedagogical gaps’ result. Bartolome et al. [48] therefore recommend evaluating the pedagogical perspectives with the most impact in order for the field to evolve. This supports the use of Kirkpatrick’s impact evaluation, which was used in the research method this theory is built on. Regarding adaptive learning platforms, Bartolome noted two prevalent approaches in the literature: i) ‘system-gathered recommendations’ (Learning Analytics, AI); and ii) learner-centred learning. They did not elicit specific pedagogical elements, but suggest including learning behaviours so that learners ‘feel’ learning is meaningful. This again adds further weight and confirming instances to the developing theory.
What doesn’t work - Dziuban et al. [49] believe the use of ‘surrogate’ measures within personalised blended learning (i.e. student learning experience/testimony) is ‘the best we can hope for’ when assessing impact. By contrast, learner comments in Calbraith’s [4] research were not surrogate measures but were integral to learning pathway development. They were also integral in providing answers as to why the pedagogies with the highest impact ‘worked’. Bartolome et al. [48] attempted to explain theoretical and pedagogical underpinning via four key elements: epistemological nature, psychological approach, didactic materialisation, and technological approach. However, they fail to adequately explain how and why these elements are important in personalised learning pedagogy, suggesting instead that ‘the knowledge seen just reflects the underlying design’. Logically, when argued from this epistemological standpoint, if only the design is considered, the full amount of knowledge present may be missed. It has already been shown that there is a need to look at the whole ‘learner use process’ (and particularly how learners choose their route through online learning) to see all the important parts of the learner pathway. It is therefore asserted that when specific elements are grouped together the total becomes more than the sum of its individual parts, i.e. the mixture of specific elements takes on a life of its own and elevates learning to a whole new level because of how the elements make learners feel. Calbraith and Dennick [8] referred to this phenomenon as ‘value-added’ observations.
From their psychological approach, Bartolome et al. [48] conclude that the pathways learners take depend on the discipline. However, this is not necessarily true. Invariably, both discipline-dependent and generic principles can be derived from the pathways learners take, but the actual pathways seen in Calbraith’s [4] research remained the same irrespective of discipline. This is a disproving instance for the theory. There is a small but very important distinction here between ‘the steps learners take’ and ‘the learning pathway’. Minor variations were seen between learners and disciplines on the learning pathway, and these were initially interpreted as different learning pathways. However, as work progressed (and learning paths were overlaid) it became clear these were not different pathways or deviations, but parts of the same pathway with the same elements. Not all learners needed to stop at all points on the learning pathway, in the same way that not every learner needs to spend as much time as others on certain aspects. This does not amount to taking a different learning pathway; it is, instead, part of learning personalisation and self-personalisation. If we think of the learning pathway as physical stepping-stones, some learners may jump a stone but still carry on along the same path in the same direction (this is particularly true of established learners). All complete the learning. Kukulska-Hulme [50] sums up this phenomenon beautifully: “Personalized learning takes account of learners’ interests, preferences, prior knowledge, competencies, movements and behaviours, but this does not imply that all mobile application designs need to take all these aspects into consideration”.
In summary, the theory and hypotheses presented here afford the learner the feeling of choice when put into practice. Bartolome et al. [48] provide both a proving and a disproving instance of the theory. Although successful elements are hard to find in the current adaptive learning platform and personalised learning literature, both mention the importance of the learner, their motivation, and timely feedback. As the theory accords with these too, they are likely to be important pedagogical elements. Whilst specific elements cannot easily be seen in this area, the literature nevertheless highlights an important distinction between ‘the steps learners take’ and ‘the learning pathway’. This discussion demonstrates that ‘fully personalised’ mobile learning does not have to include every aspect of how the learner learns, especially if the model used contains a mixture of elements that allows learners to feel intuitively catered for. It is therefore suggested that the glue that holds the individual elements together is learner use, which then forms the learning pathway.
Flipped Classrooms
What works - Le Roux et al. [51] advocate ‘flipped classrooms’ because of their active learning components. They stress that teaching preparation, informative feedback, and knowledge application to real-life problems are crucial for success, but do not identify these as quality indicators. Huh et al. [52] found that ‘learning at your own pace’ was an element learners appreciated. Scupelli et al. [53] identified ‘timing’ and ‘pacing’ as successful pedagogical elements. Shi et al. [54] found ‘high order thinking skills’ featured in successful flipped classrooms. These are further confirming instances, as all these elements are explained by the theory’s fourth hypothesis (as previously discussed).
What doesn’t work - Flipped classroom approaches have sparked interest due to their link with personalised learning and their alleged propensity to create ‘active student engagement’ (Hu et al. [55]). However, Lundin et al. [56] note: “Rigorous and empirically well-grounded studies are currently infrequent. Existing evidence is local, siloed, and fragmented due to the nature of the flipped approach”. The flipped approach therefore produces learning that is unique to learner groups, making it difficult to produce generalisations and justify specific pedagogical choices. Even effective studies do not adequately explain why elements work; e.g. Le Roux et al. [51] recommend building ‘a stronger evidence base by means of problem-solving’ despite no visible elements. Shi et al.’s [54] meta-analysis also found that “pedagogical approaches significantly moderate the size of the flipped classroom instruction effect (I2 = 72.2%, p < 0.05)”, but the authors admit to a lack of diverse demographics, of consideration of motivation, and of attention to how students think about their learning.
In summary, the attraction of flipped classrooms is clear, as they fulfil the need to ‘learn at your own pace’. However, some effective flipped studies also contain ‘unformed aspects’, which implies caution before accepting ‘timing’ and ‘pacing’ as key elements. As these feature highly in other contexts, however, this appears to be another possible proving instance. The problem-solving, higher-order thinking, active learning, informative feedback, and knowledge application to real-life problems all accord with the theory’s hypotheses. Given this, and given that flipped learning tends to create unique learning that cannot be applied to other groups, it is suggested that it is the elements or mix of elements within flipped learning that causes it to be successful pedagogically, and not necessarily the flipped approach per se.
Limitations of the Theory
Several disciplines were included in the original research on which the substantive theory underpinning this paper was built (IT, Educational Technology, Sciences, Education, Nursing, Medicine, Physics, Biomedical Science, and ‘laymen’). However, not all disciplines were represented. Many confirming and proving instances were seen to validate formal theory status, together with some disproving instances of others’ work, which is highly promising. However, there may be ‘pockets’ within other disciplines where the theory has not yet been applied. Followers of Castaneda et al. [57] may criticise this theory for a perceived ‘overdependence on pre-digital learning theories’, given that some top-performing pedagogies found in this research contain these. Some may believe using old theories with new technology is incongruous. However, it is suggested that if those theories are current top-performing pedagogies (impact-wise), they cannot yet be outdated. When pushing the substantive theory towards formal theory status, studies with enough detail about pedagogical processes and successful elements were available for elucidating ‘elements’ and the ‘mix of elements’. However, they were sparse in explaining knowledge flow, underlying pedagogy, and how learning pathways work. Whilst formal theory promise is clearly shown, full development of the theory is still slightly limited because the field itself still needs to develop. The theory successfully explains the existing evidence and some disproving instances, and so has been pushed as far as possible within the present maturity of the field.
Implications
The grounded theory approach presented here offers an evidence-based theory to explain why a mixture of specific elements is required for effective mobile learning. This concept requires a paradigm shift when looking for pedagogical answers: instead of taking the problem as a starting point, it takes learner use of the learning to elucidate the underlying mechanisms. It is true that some effort must go into successful learning, and the research method this formal theory is built on is no exception. It is ‘front-heavy’, but the pay-off comes in the lack of maintenance needed once it is up and running. Using the mix of important elements outlined here when planning mobile learning provides learning that is ‘effective from the outset’. The proving and disproving instances found and discussed have major implications not just for the theory’s potential as a formal theory of mobile learning, but also for the range of learning situations, packages and delivery formats it can be applied to. The application appears limitless.
Conclusion
This paper has shown that the contexts or underlying concepts researchers choose to frame pedagogical evaluation, knowledge flow or learning pathways can sometimes constrain observation of how learners use the learning. This results in under-developed answers as to why approaches work or do not work pedagogically. It also results in failure to map the learning pathway, explain the existence of barriers, or explain why knowledge flow is not apparent. By contrast, the developing theory put forward here was able to explain not just why certain elements were important, but also what role they had within the learning pathways and their impact on knowledge flow. The research this theory was built upon used Sims’ [7] construct. This was a good basis from which to evaluate pedagogical impact, because it allowed learner engagement to increase through the freedom to test assumptions (and adjust or introduce new content if necessary) without fear of failure. This process was shown to be important to learners. The inductive stance taken meant that issues surrounding pedagogical mobile learning practice were explored unhindered and unconstrained. Crucial elements for effective mobile learning were discovered, and the underlying pedagogical mechanisms explained. The substantive theory and hypotheses were able to explain disproving instances and why each effective element worked. Knowledge flow was good, and the original research this theory is built on showed increased learning speed and development of clinical reasoning. Through comparison with the wider mobile context, similarities were found between the theory’s hypotheses and the successful elements found in others’ work. The theory’s hypotheses were able to explain both successes and problems in others’ research, which is an important step towards formal theory status.
Specifically, the successful elements are pedagogical approach, evaluation, user experience, motivation, learning design, and interaction, which accord with the rigorous methodological and theory testing undertaken here. Learning Analytics approaches support this, suggesting these as quality indicators. Pedagogical issues highlighted ‘risk-taking’ and ‘positive views of error’ as important if learners are to feel comfortable trying new ideas. ‘Feedback’, ‘the learner’, and ‘perpetual motivation’ were significant features. This too was supported by AI approaches, which highlighted ‘learning paths’, ‘knowledge transfer’, ‘learner transformation’, ‘individualisation’, and ‘human-digital interaction’ as important elements. Adaptive Learning Platforms/Learning Personalisation showed ‘timely feedback’ and ‘learning choice’ were important. Finally, ‘feedback’, ‘knowledge application to real-life problems’, ‘learning at your own pace’ and ‘high order thinking skills’ were seen as vital elements in successful Flipped Classroom approaches.
In short, the pedagogies used in this research improved learner confidence because the mixed elements impacted learner feelings positively and made learners feel personally catered for. ‘Learner use’ of the intended learning is therefore crucial: it is the glue that holds the mixed elements together to form successful learning pathways, and therefore knowledge flow. It should be the starting point for any pedagogical investigation, otherwise important value-added aspects that form the underlying pedagogical mechanisms will be missed. If these go unnoticed, they eventually form barriers. Conceptual and pedagogical issues that hinder knowledge flow were explained, which added further weight to the theory’s formal status. As the theory and hypotheses generated were able to explain successes and pitfalls in contemporary approaches, this potentially demonstrates that use of these mixed elements could have universal application across topics, disciplines, and approaches, transforming mobile learning in any context.
Acknowledgements
None.
Conflicts of Interest
There are no conflicts of interest.
References
- Lai, Chiu-Lin (2019) Trends of mobile learning: A review of the top 100 highly cited papers, British Educational Research Association 51(3): 721-742.
- Chee KN, Yahaya N, Ibrahim NH, Noor Hassan M (2017) Review of Mobile Learning Trends 2010-2015: A Meta-analysis, Educational Technology and Society 20 (2): 113-126.
- Gikas J, Grant M (2013) Mobile Computing Devices in Higher Education: Student Perspectives on Learning With Cell phones, Smartphones and Social Media, Internet and Higher Education 19: 18–26.
- Calbraith, Davina (2011) Discovering effective pedagogical and evaluation approaches for learning objects in medical education, PhD Thesis, University of Nottingham
- Kirkpatrick DL (1976) Evaluation of Training. In: Craig, R.L., Ed., Training and Development Handbook: A Guide to Human Resource Development, McGraw Hill, New York.
- Kirkpatrick DL (1996) Great Ideas Revisited: Revisiting Kirkpatrick’s Four-Level Model. Training & Development 50: 54-57.
- Sims R (2006) Beyond Instructional Design: Making Learning Design a Reality. Journal of Learning Design, 1(2): 1-7
- Calbraith, Davina, Dennick (2011) Chapter 2: ‘Producing Generic Principles and Pedagogies for Mobile Learning: A Rigorous Five Part model’. In Kitchenham A (2011) Models for Interdisciplinary Mobile Learning: Delivering Information to Students, IGI Global: 26-48
- Glaser, Barney G, Strauss, Anselm L, (1967) The Discovery of Grounded Theory: Strategies for Qualitative Research, Chicago, Aldine Publishing Company.
- JISC (2015) Mobile Learning Guides.
- Aagaard J (2018) Magnetic and Multistable: Reinterpreting the Affordances of Educational Technology. International Journal of Educational Technology in Higher Education 15(4).
- Kuzel AJ (1992) Sampling in Qualitative Enquiry, in Crabtree BF and Miller WL (Eds.) Doing Qualitative Research, Sage, Newbury Park, California 31-44
- Baez, Clelia Pineda; Henning, Cristina; Segovia, Yasbley (2013) Pedagogical Models, Collaborative Work, and Interaction on Online Undergraduate Programmes in Colombia: Still Some Way to Go. International Educational Technology Journal of Higher Education (From the Dossier: Education and Technology in Mexico and Latin America: Outlook and Challenges) 431-445.
- De Oliveira, Janaina Minelli; Henriksen, Danah; Castañeda, Linda; Marimon, Marta; Barberà, Elena; Monereo, Carles; Coll, César; Mahiri, Jabari; Mishra, Punya (2015) The educational landscape of the digital age: Communication practices pushing (us) forward, International Journal of Educational Technology in Higher Education, 12:12020014 (Special Issue: New Learning Scenarios from a Transformative Perspective. From Global Approaches to Local Proposals).
- Sung Y, Chang K, and Lui T (2016) The Effects of Integrating mobile Devices with Teaching and Learning on Students’ Learning Performance: A Meta-analysis and Research Synthesis. Computers and Education 94: 252-275.
- Pedro, Luis Francisco Mendes Gabriel; De Oliveria Barbosa, Claudia Marine Monica; and Santos, Carlos Manuel das Neves (2018) A Critical Review of Mobile Learning Integration in Formal Contexts, International Journal of Educational Technology in Higher Education, 15:10
- Jones Kyle ML (2019) Learning analytics and higher education: a proposed model for establishing informed consent mechanisms to promote student privacy and autonomy. International Journal of Educational Technology in Higher Education 16:24
- Baran E (2014) A Review of Research on Mobile Learning in Teacher Education. Educational Technology & Society 17(4): 17-32
- Kuznekoff J, Munz S, Titsworth S (2015) Mobile Phones in the Classroom: Examining the effects of texting, twitter, and message content on Student Learning. Communication Education, 64(3): 344-365.
- Kurzweil R (2005) The singularity is Near: When Human Transcend Biology, New York, Penguin Group
- Rankin YA, Gold R, Gooch B (2006) Playing for keeps: gaming as a language learning tool.
- Pesare Enrica, Roselli Teresa, Corriero Nicola, Rossano Veronica (2016) Game-based learning and Gamification to promote engagement and motivation in medical learning contexts. Smart Learning Environments 3:5.
- Hamm, Breanna H (2011) Today's Learners: Applying Gaming Elements to Enhance Student Engagement in a University Visual Communication Course.
- Dichev, Christo and Dicheva Darina (2017) Gamifying education: what is known, what is believed and what remains uncertain: a critical review. International Journal of Educational Technology in Higher Education 14:9.
- Lieberoth A (2015) Shallow gamification: psychological effects of framing an activity as a game. Games and Culture 10(3): 249-268.
- Kaplan Andreas M, Haenlein Michael (2016) Higher education and the digital revolution: About MOOCs, SPOCs, social media, and the Cookie Monster. Business Horizons 59(4): 441-450.
- Fidalgo Blanco Angel, Sein-Echaluce, María Luisa, García-Peñalvo, Francisco José (2016) Massive Access to Cooperation: Lessons Learned and Proven Results of a Hybrid xmooc/cmooc Pedagogical Approach To MOOCs, International Journal of Educational Technology in Higher Education 13:24.
- Hausman Matthieu, Verpoorten Dominique, Duchateau Dominique, Sylciane Hubert, Detroz Pascal (2018) Learning Analytics: Pedagogy Has to Rule the Way, 9th Biennial Conference of EARLI SIG 1: Assessment and Learning Analytics, EARLI - University of Helsinki, Finland.
- Ferguson R, Clow D (2017) Where is the Evidence? A Call to Action for Learning Analytics, in Proceedings of the Seventh International Learning Analytics and Knowledge Conference 56-65.
- Alexander Bryan, Ashford-Rowe Kevin, Barajas-Murphy Noreen, Dobbin Gregory, Knott Jessica, et al. (2019) EDUCAUSE horizon report: 2019 (Higher education edition), EDUCAUSE, US.
- Zawacki-Richter Olaf, Marin Victoria I, Bond Melissa, Gouverneur Franziska (2019) Systematic Review of Research on Artificial Intelligence Applications in Higher Education - Where are the Educators? International Journal of Educational Technology in Higher Education 16:3.
- Ifenthaler D, Yau JYK (2019) Higher Education Stakeholder’s Views on Learning Analytics Policy Recommendations for Supporting Study Success. International Journal of Analytics and Artificial Intelligence for Education 1(1): 28-42.
- Renz A, Hilbig R (2020) Prerequisites for Artificial Intelligence in Further Education: Identification of Drivers, Barriers and Business Models of Educational Technology Companies, International Journal of Educational Technology in Higher Education 17:14.
- Bogue, Ronald (2008) Deleuze and Guattari, Routledge: 67.
- Gillies Donald (2021) Rhizomatic Learning, A Brief Critical Dictionary of Education.
- Tsai YS, Gasevic D (2017) Learning Analytics in Higher Education – Challenges and Policies: A review of Eight Learning Analytics Policies. The Seventh International Learning Analytics and Knowledge Conference, Canada, Vancouver.
- Diaz, Juan Carlos Torres, Moro, Alfonso Infante, Carrión, Pablo Vicente Torres (2015) Mobile learning: perspectives. International Journal of Educational Technology in Higher Education 2:12
- Kinchin IM, Gravett K (2020) Concept mapping in the age of Deleuze: Fresh perspectives and new challenges. Education Sciences 10(3): 82.
- Mayer RE, Heiser J, Lonn S (2001) Cognitive constraints on multimedia learning: when presenting more material results in less understanding, Journal of educational psychology. Cognition and Instruction 6: 41-57.
- Gunning, David (2017) Explainable Artificial Intelligence (XAI), DARPA/120Program Update.
- Aguayo, Claudio, Veloz Tomás (2019) Modelling Digital Learning Design Processes in the Search of Autopoietical Coherence. From the ALIFE19 Workshop entitled ‘Process Modelling and Self-organization: Methods and Applications’.
- Mackness, Jenny, Bell Frances, Funes Mariana (2016) The rhizome: A problematic metaphor for teaching and learning in a MOOC. Australasian Journal of Educational Technology 32(1): 78
- Dittrich, Peter and Speroni di Fenizio, Pietro (2018) Chemical Organisation Theory: Towards a Theory of Constructive Dynamic Systems, Bio Systems Analysis Group, Jena Centre for Bioinformatics & Department of Mathematics and Computer Science, Friedrich-Schiller-University Jena, D-07737 Jena.
- O'Sullivan E (1999) Transformative Learning: Educational Vision for the 21st Century. Toronto, Canada: University of Toronto Press Inc.
- Al-Zahrani, Hasan, Laxman Kumar (2016) A critical Meta-Analysis of Mobile Learning Research in Higher Education 42(1).
- Zhang, Miaomiao, Zhang Rui (2020) The Hotspots and Trends of Adaptive Learning: A Visualized Analysis Based on CiteSpace. International Journal of Information and Education Technology 10(5).
- Bulger M (2016) Personalised Learning: The Conversation We’re Not Having. Data and Society Working Paper.
- Bartolome Antonio, Castañeda, Linda, Adell Jordi (2018) Personalisation in Educational Technology: The Absence of Underlying Pedagogies. International Journal of Educational Technology in Higher Education.
- Dziuban Charles, Graham Charles R, Moskal Patsy D, Norberg Anders, Sicilia Nicole (2018) Blended Learning: The New Normal and Emerging Technologies. International Journal of Educational Technology in Higher Education 15:3
- Kukulska-Hulme (2016) Personalisation of Language Learning Through Mobile Technologies. Cambridge University Press, Cambridge, UK.
- Le Roux Ingrid, Nagel Lynette (2018) Seeking the Best Blend for Deep Learning in a Flipped Classroom – Viewing Student Perceptions Through the Community of Inquiry Lens. Journal of Educational Technology of Higher Education 15:16
- Huh, Man Kyu (2019) Flipped Classroom Pedagogy Enhances Student Satisfaction and Validated Strategies in Molecular Biology. European Journal of Research and Reflection in Educational Sciences. 7(10).
- Scupelli Peter, Candy Stuart (2019) Teaching Futures: Trade-offs Between Flipped Classroom and Design Studio Course Pedagogies, Conference Paper September.
- Shi Yinghui, Ma Yangiong, MacLeod Jason, Yang Harrison Hao (2020) College Student’s Cognitive Learning Outcomes in Flipped Classroom Instruction: A Meta-Analysis of The Empirical Literature. Journal of Computers in Education 7: 79-103.
- Hu Y, Zhang J, Huang R (2016) Developing, Sharing and using Micro-lectures in Region: Implications Derived from a Government-orientated Micro-lecture Project in Shanghai, in Zhang J and Yang J Chang M, and Chang T (Eds), ICT in Education in Global Context, Lecture Notes in Educational Technology, Singapore: Springer: 293-302.
- Lundin Mona, Rensfeldt Annika Bergviken, Hillman Thomas, Lantz-Andersson Annika, Peterson Louise (2018) Higher Education Dominance and Siloed Knowledge: A Systematic Review of Flipped Classroom Research. International Journal of Educational Technology in Higher Education 15:20.
- Castaneda Linda, Selwyn Neil (2018) More than tools? Making Sense of the Ongoing Digitisations of Higher Education. International Journal of Educational Technology in Higher Education 15:22.