#DistanceEducation

2025-11-27

What the social media ban means for rural boarding school students
By Jodie Hamilton

Regional and remote students are being encouraged to learn old-school ways to stay in contact as social media bans for those under 16 come into effect over the summer holidays.

abc.net.au/news/2025-11-27/soc

#SocialMedia #MobileandInternetAccess #RegionalCommunities #GovernmentPolicy #Education #BoardingSchools #TelecommunicationsServicesIndustry #RuralandRemoteCommunities #Teenagers #OnlineSafety #Children #Parenting #ParentingTeenagers #ParentingChildren #SecondaryEducation #DistanceEducation #FederalGovernment #JodieHamilton

Myfirst Collage (@myfirstcollege)
2025-11-22

✨ Dream Big. Learn Anywhere.
Mata Tripura Sundari Open University offers flexible online programs to help you upgrade your skills and career.
🎓 Admissions Open Now!
Apply today and take your next big step.
📞 +91-9289-866-814
🌐 bit.ly/48qELkh

Dharam Kapoor (@SeoDharam)
2025-11-13

Looking to boost your career with flexible, industry-aligned education?

UPES Online Courses offer globally recognized programs designed for modern learners. Whether you’re a working professional or a student aiming to upskill, UPES ensures you gain real-world knowledge from expert faculty and industry collaborations.

Transform your potential into success with UPES Online.

Check it out here: distanceeducationschool.com/up

UPES University Online
2025-10-04

Virtual learning robot provides 'immeasurable' outcomes in school trial
By Susan Oong

A national program that places robots in classrooms so students can learn from home will expand its work across Tasmania from Monday.

abc.net.au/news/2025-10-05/tas

#DistanceEducation #SecondaryEducation #PrimaryEducation #SusanOong

Mathrubhumi English (@Mathrubhumi_English)
2025-09-25
DBU ODL (@DBUodl)
2025-06-16

If you want to study the arts but cannot go to college daily, a distance-learning BA can be a good choice. You can study subjects like history, economics, political science, and more from home. This program is flexible and suitable for people with jobs or family duties. The BA degree through distance learning helps you continue your studies at your own pace and still gain knowledge and a degree that matters in the real world.

dbuodl.in/ba-degree-distance-l

2025-04-10

New blog post: Designing AI-Resilient Assessments in Online and Distance Education

Beyond detection tools and gotcha tactics — what if we reimagined assessment itself?

In this post, I explore how assessment design can evolve in response to generative AI, drawing on critical pedagogy and practical strategies for online and distance education.

Read it here: e-learning-rules.com/blog/0020

Abstract digital collage showing a head split in two, with code and geometric shapes symbolising AI, data, and assessment in education.
Jako Olivier (@jakoolivier)
2025-01-11

Be sure to join the Commonwealth of Learning for the Eleventh Pan-Commonwealth Forum on Open Learning (PCF11) in Gaborone, Botswana, from 10 to 12 September 2025 at the Gaborone International Convention Centre.

Last date for submission of Abstracts and Proposals: 30 January 2025

pcf11.org/call-for-proposals/

Neil Mosley (@neilrmosley)
2024-07-16

"we have seen a significant increase in the number of online degrees this year. Since the start of the year, more than 65 new online degrees have been launched. While these are primarily postgraduate master's degrees, there is a not insignificant amount of undergraduate degrees too." buff.ly/3Xz5Dd0

Neil Mosley (@neilrmosley)
2024-07-09

"Online education has an excellent pedigree as a format that supports learning while you earn, and..there’s a role for online education in disrupting the dichotomy that has sometimes been presented as either going to university or doing an apprenticeship." buff.ly/3LdzFM6

Neil Mosley (@neilrmosley)
2024-07-07

"The last six months have seen a continued increase in the number of UK HEIs seeking to seriously enter the online distance education market. While there are various paths to take, it is clear that many UK HEIs are continuing to form partnerships with OPMs and other online education companies." buff.ly/3zjO8TJ

2024-07-04

I had the great pleasure of being invited to the Open University of the Netherlands and, later in the day, to EdLab, Maastricht University a few weeks ago, giving a slightly different talk in each place based on some of the main themes in my most recent book, How Education Works. Although I adapted my slides a little for each audience, with different titles and a few different slides adjusted to the contexts, I could probably have used either presentation interchangeably. In fact, I could as easily have used the slides from my SITE keynote on which both were quite closely based (which is why I am not sharing them here). As well as most of the same slides, I used some of the same words, many of the same examples, and several of the same anecdotes. For the most part, this was essentially the same presentation given twice. Except, of course, it really, really wasn’t. In fact, the two events could barely have been more different, and what everyone (including me) learned was significantly different in each session.

This is highly self-referential. One of the big points of the book is that it only ever makes sense to consider the entire orchestration, including the roles that learners play in making sense of it all, and the many components of the assembly, designed for the purpose and otherwise. The slides, structure, and content did provide the theme and a certain amount of hardness, but what we (collectively) did with them led to two very different learning experiences. They shared some components and purposes, just as a car, a truck, and a bicycle share some of the same components and purposes, but the assemblies and orchestrations were quite different, leading to very different outcomes. Some of the variation was planned in advance, including an hour of conversation at the end of each presentation and a structure that encouraged dialogue at various points along the way: these were as much workshops as presentations. However, much of the variance occurred not due to any planning but because of the locations themselves. One of the rooms was a well-appointed conventional lecture theatre, the other an airy space with grouped tables, and with huge windows looking out on a busy and attractive campus. In the lecture theatre I essentially gave a lecture: the interactive parts were very much staged, and I had to devise ways to make them work. In the airy room, I had a conversation and had to devise ways to maintain some structure to a process that was delightfully disrupted by the occasional passing road train and the very tangible lives of others going on outside, and shaped by an innately more intimate and conversational atmosphere enabled (not entailed) by the layout. Other parts of the context mattered too: the time of day, the temperature, the different needs and interests of the audience, the fact that one occurred in the midst of planning for a major annual event, and so on. All of this had a big effect on how I and others behaved, and on what and how people learned. From one perspective, in both talks, I was sculpting the available affordances and constraints to achieve my intended ends but, from another equally valid point of view, I was being sculpted by them. The creators and maintainers of the rooms and I were teaching partners, co-participants in the learning process. Pedagogically, and despite the various things I did to assemble the missing parts in each, they were significantly different learning technologies.

The complexity of distance teaching

Train journeys are great contexts for uninterrupted reflection (trains teach too) so, sitting on the train on my journey back the next day, I began to reflect on what all of this means for my usual teaching practice, and made some notes on which this post is based (notebooks teach, too). I am a distance educator by trade and, as a rule, with exceptions for work-based learning, practicums, co-ops, placements, and a few other limited contexts, we distance educators rarely even acknowledge that students occupy a physical space, let alone adapt to it. We might sometimes encourage students to use things in their environments as part of a learning activity, but we rarely change our teaching on the fly as a result of the differences between those environments. As I have previously observed, the problem is exacerbated by the illusion that online systems are environments (in the sense of being providers of the context in which we learn) and by the belief that we can observe what happens in them. They are not, and we cannot. They are parts of the learners’ own environments, and all we can (ethically) observe are interactions with our designed systems, not the behaviour of the learners within the spaces that they occupy. It is as hard for students to understand our context as it is for us to understand theirs, and that matters too. It makes it trickier to model ways of thinking and approaches to problem solving, for example, if the teacher occupies a different context.

This matters little for some of the harder elements of the teaching process. Information provision, resource design, planning, and at least some forms of assessment and feedback are at least as easy to do at a distance as not. We can certainly do those and make a point of doing them well, thereby providing a little counterbalance. However, facilitation, role modelling, guidance, supporting motivation, fostering networks, monitoring of learning, responsive adaptation, and many other significant teaching roles are more complex to perform because of how little is known about learning activities within an environment. As Peter Goodyear has put it, matter matters. The more that the designated teacher can understand that, the more effective they can be in helping learners to succeed.

Because we are not so able to adapt our teaching to the context, distance learning (more accurately, distance teaching) mostly works because students are the most important teachers, and the pedagogies they add to the raw materials we provide do most of the heavy lifting. Given some shared resources and guided interactions, they are the ones who perform most of the kinds of orchestration and assembly that I added to my two talks in the Netherlands; they are the ones who both adapt and adapt to their spaces for learning. Those better able to do this in the first place tend to do better in the long run, regardless of subject interest or innate ability. This is reflected in the results. In my faculty and on average, more than 95% of our graduate students – who have already proven themselves to be successful learners and so are better able to teach themselves – succeed on any given course, in the sense of reaching the end and achieving a passing grade.  70% of our undergraduates, on the other hand, are the first in their family to take a degree. Many have taken years or even decades out of formal education, and many had poor experiences in school. On average, therefore, they typically have fewer skills in teaching themselves in an academic context (which is a big thing to learn about in and of itself) and we are not able to adapt our teaching to what we cannot perceive, so we are of little assistance either. Without the shared physical context, we can only guess and anticipate when and where they might be learning, and we seldom have the faintest idea how it occurs, save through sparse digital signals that they leave in discussion forums or submitted assignments, or coarse statistics based on web page views. In a few undergraduate core courses within my faculty it is therefore no surprise that the success rates are less than 30%, and (on average) only about half of all our students are successful, with rates that improve dramatically in more senior level courses. The vast majority of those who get to the end pass. Most who don’t succeed drop out. It doesn’t take many core courses with success rates of 30% to eliminate nearly 95% of students by the end of a program.
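
To see roughly why, here is a back-of-the-envelope sketch (using the approximate per-course figures quoted above and assuming, unrealistically, that the rates are independent) of how quickly a handful of low-success core courses compounds:

```python
# Rough sketch: cumulative survival through a sequence of core courses,
# assuming an approximate 30% per-course success rate (the figure quoted above)
# and, unrealistically, independence between courses.
pass_rate = 0.3
for n_courses in range(1, 4):
    surviving = pass_rate ** n_courses
    print(f"{n_courses} core course(s): {surviving:.1%} remain, {1 - surviving:.1%} eliminated")
# 1 course: 30.0% remain; 2 courses: 9.0% remain; 3 courses: 2.7% remain.
```

Two or three such courses are enough, on their own, to produce the kind of attrition described here.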

Teaching with a context

We can better deal with this if we let go of the illusion that we can be in control and, at the same time, find better ways to stay close: to make the learning process, including the environment in which it occurs, as visible as possible. It is emphatically not about capturing digital traces and using analytics to reveal patterns. Though such techniques can have a place in helping to build a picture of how learners are responding to our deliberate acts of teaching, they are not even close to a solution for understanding learners in context. Most learning analytics and adaptive systems are McNamara Machines, blind to most of what matters. There’s a huge risk that we start by measuring the easily measurable, then wind up not just ignoring but implicitly denying that the things we cannot measure are important. Yes, it might help us to help students who are going to get to the end anyway to get better grades, but it tells us very little about (for instance) how they are learning, what obstacles they face, or how we could help them orchestrate their learning in the contexts in which they live. Could generative AI help with that? I think it might. In conversation, an AI agent could ask leading questions, could recommend things to do with the space, could aggregate and report back on how and where students seem to be learning. Unlike traditional adaptive systems, generative AI can play an active discovery role and make broader connections that have not been scripted. However, this is not and should not be a substitute for an actual teacher: rather, it should mediate between humans, amplifying and feeding back, not guiding or informing.
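
As a concrete (and entirely hypothetical) illustration of that mediating role, the sketch below imagines an agent that asks distance students about where and how they are learning and then hands an anonymized, aggregate summary back to the human teacher rather than teaching anyone itself. The `ask_llm` function and the question list are invented placeholders, not part of any real system.

```python
# Hypothetical sketch of a context-eliciting, mediating agent: it gathers students'
# own accounts of their learning environments and summarizes them for the teacher.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to whichever generative AI service one might use."""
    raise NotImplementedError("Plug in a real LLM API here.")

CONTEXT_QUESTIONS = [
    "Where do you usually study, and what is that space like?",
    "What tends to interrupt you there, and what helps you focus?",
    "When in the day or week does most of your studying happen?",
]

def summarize_contexts(student_replies: dict[str, list[str]]) -> str:
    """Aggregate and anonymize what students say about their contexts, for the teacher."""
    transcript = "\n".join(
        f"- {reply}"
        for replies in student_replies.values()
        for reply in replies
    )
    prompt = (
        "Without identifying individuals, summarize where and how these distance "
        "students say they are learning, and suggest questions the teacher might "
        "usefully ask next:\n" + transcript
    )
    return ask_llm(prompt)  # the summary goes back to the teacher, who decides what to do
```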

For the most part, though, I think the trick is to use pedagogical designs that are made to support flexibility, that encourage learners to connect with the spaces they live in and the people they share them with, that support them in understanding the impact of the environments they are in, and, as much as possible, to incorporate conduits that make it likely that participants will share information about their contexts and what they are doing in them, such as through reflective learning diaries, shared videos or audio, or introductory discussions intended to elicit that information. A good trick that I’ve used in the past, for example, is to ask students to send virtual postcards showing where they are and what they have been doing (nowadays a microblog post might serve a similar role). Similarly, it can be useful to start discussions that seek ideas about how to configure time and space for learning, sharing problems and solutions from the students themselves. Modelling behaviours can help: in my own communications, I try to reveal things about where I am and what I have been doing that provide some context and background story, especially when it relates to how I am changing as a result of our shared endeavours. Building social interaction opportunities into every inhabited virtual space would help a lot, making it more likely that students will share more of what they are doing and increasing awareness of both the presence and the non-presence (the difference in context) of others. Learning management systems are almost universally utter rubbish for that, typically relegating interactions to controlled areas of course sites and encouraging instrumental and ephemeral discussions that largely ignore context. We need more, more pervasively, and we need better.

None of this will replicate the rich, shared environments of in-person learning, and that is not the point. This is about acknowledging the differences in online and distance learning and building different orchestrations around them. On the whole, the independence of distance students is an extremely good thing, with great motivational benefits, not to mention convenience, much lower environmental harm, exploitable diversity, and many other valuable features that are hard to reproduce in person. When it works, it works very well. We just need to make it work better for those for whom that is not enough. To do that, we need to understand the whole assembly, not just the pieces we provide.

https://jondron.ca/on-the-importance-of-place/

#analytics #architecture #context #distanceEducation #distanceLearning #environment #learningToLearn #lecture #lms #motivation #onlineEducation #onlineLearning #orchestration #place #technology #visibleLearning

Distance learners and teachers in different kinds of space
Neil Mosley (@neilrmosley)
2024-07-03

"through the vast ecosystem of platforms, there exists a substantial number of online short courses and participants. This does not imply that HEIs can easily tap into this market, but it underscores a persistent demand for short online courses." buff.ly/3Wytrgn

2024-04-19

Fun Fact Friday! Did you know that 65% of community college students took at least one distance learning course during the fall 2021 semester? Online classes can be a great way to accumulate credits. Just make sure that your colleges of interest accept online classes for transferable credit.

#didyouknow #funfact #funfactfriday #collegeapps #college #collegeadmissions #collegeapplications #transferstudent #collegetransfer #onlineeducation #onlinelearning #distanceeducation #distancelearning

2024-01-04

Well, this was definitely going to happen.

The system discussed in this Wired article is a bot (not available to the general public) that takes characters from the absurdly popular Bluey cartoon series and creates personalized bedtime stories involving them for its creator’s children using ChatGPT+. This is something anyone could do – it doesn’t take a prompt-wizard or specialized bot to do this. You could easily make any reasonably proficient LLM incorporate your child’s interests, friends, family, and characteristics and churn out a decent enough story from it. With copyright-free material you could make the writing style and scenes very similar to the original. A little editorial control may be needed here and there but I think that, with a smart enough prompt, it would do a fairly good, average sort of a job, at least as readable as what an average human might produce, in a fraction of the time. I find this to be hugely problematic, though, and not for the reasons given in the article, though there are certainly some legal and ethical concerns, especially around copyright and privacy as well as the potential for generating dubious, disturbing, or otherwise poor content.

Why stories matter

The thing that bothers me most about this is not the quality of the stories but the quality of the relationship between the author and the reader (or listener).  Stories are the most human of artifacts, the ways that we create and express meaning, no matter how banal. They act as hooks that bind us together, whether invented by a parent or shared across whole cultures. They are a big part of how we learn and establish our relationships with the world and with one another. They are glimpses into how another person thinks and feels: they teach us what it means to be human, in all its rich diversity. They reflect the best and the worst of us, and they teach us about what matters.

My children were in part formed by the stories I made up or read to them 30 or more years ago, and it matters that none were made by machines. The language that I used, the ways that I wove in people and things that were meaningful to them, the attitudes I expressed, the love that went into them, all mattered.  I wish I’d recorded one or two, or jotted down the plots of at least some of the very many Lemmie the Suicidal Lemming stories that were a particular favourite. These were not as dark as they sound – Lemmie was a cheerful creature who just happened to be prone to putting himself in life-threatening situations, usually as a result of following others. Now that they have children of their own, both my kids have deliciously dark but fundamentally compassionate senses of humour and a fierce independence that I’d like to think may, in small part, be a result of such tales.

The books I (or, as they grew, we, and then they) chose probably mattered more. Some had been read to me by my own parents and at least a couple were read to them by their own parents. Like my children, I learned to read very young, largely because my imagination was fired by those stories, and fired by how much they mattered to my parents and siblings. As much as the people around me, the people who wrote and inhabited the books I listened to and later read made me who I am, and taught me much of what I still know today – not just facts to recall in a pub quiz but ways of thinking and understanding the world, and not just because of the values they shared but because of my responses to them, which increasingly challenged those values. Unlike AI-generated tales, these were shared cultural artifacts, read by vast numbers of people, creating a shared cultural context, values, and meanings that helped to sustain and unite the society I lived in. You may not have read many of the same books I read as a middle-class boy growing up in 1960s Britain but, even if you are not of my generation or cultural background, you might have read (or seen video adaptations of) one or more children’s works by A.A. Milne, Enid Blyton, C.S. Lewis, J.R.R. Tolkien, Hans Christian Andersen, Charles Dickens, Lewis Carroll, Kenneth Grahame, Rev. W. Awdry, T.S. Eliot, the Brothers Grimm, Norton Juster, Edward Lear, Hugh Lofting, Dr. Seuss, and so on. That matters, and it matters that I can still name them. These were real authors with attitudes, beliefs, ideas, and styles unlike any other. They were products and producers of the times and places they lived in. Many of their attitudes and values are, looking back, troublesome, and that was true even then. So many racist and sexist stereotypes and assumptions, so many false beliefs, so many values and attitudes that had no place in the 1960s, let alone now. And that was good, because it introduced me to a diversity of ways of being and thinking, and allowed me to compare them with my own values and those of other authors, and it prepared me for changes to come because I had noticed the differences between their context and mine, and questioned the reasons.

With careful prompting, generative AIs are already capable of producing work of similar quality and originality to fan fiction or corporate franchise output around the characters and themes of these and many other creative works, and maybe there is a place for that. It couldn’t be much worse than (say) the welter of appallingly sickly, anodyne, Americanized, cookie-cutter, committee-written Thomas the Tank Engine stories that my grandchildren get to watch and read, that bear as little resemblance to Rev. W. Awdry’s sublimely stuffy Railway Stories as Star Wars. It would soften the sting when kids reach the end of a much loved series, perhaps. And, while it is a novelty, a personalized story might be very appealing, albeit that there is something rather distasteful about making a child feel special with the unconscious output of a machine to which nothing matters. But this is not just about value to individuals, living with the histories and habits we have acquired in pre-AI times. This is something that is happening at a ubiquitous and massive scale, everywhere. When this is no longer a novelty but the norm it will change us, and change our societies, in ways that make me shiver. I fear that mass-individualization will in fact be mass-blandification, a myriad of pale shadows that neither challenge nor offend, that shut down rather than open up debate, that reinforce norms that never change and are never challenged (because who else will have read them?), that look back rather than forward, that teach us average ways of thinking, that learn what we like and enclose us in our own private filter bubble, keeping us from evolving, that only surprise us when they go wrong. This is in the nature of generative AIs because all they have to learn from is our own deliberate outputs and, increasingly, the outputs of prior generative AIs, not from any kind of lived experience. They are averaging mirrors whose warped distortions can convince us they are true reflections. Introducing AI-generated stories to very young children, at scale, seems to me to be an awful gamble with very high stakes for their futures. We are performing uncontrolled experiments with stuff that forms minds, values, attitudes, expectations, and meanings that these kids will carry with them for the rest of their lives, and there is at least some reason to suspect that the harm may be greater than the good, both on an individual and a societal level. At the very least, there is a need for a large amount of editorial control, but how many parents of young children have the time or the energy for that?

That said…

Generating, not consuming output

I do see great value in working with and supporting the kids in creating the prompts for those stories themselves. While the technology is moving too fast for these evanescent skills to be describable as generative AI literacies, the techniques they learn and discoveries they make while doing so may help them to understand the strengths and limitations of the tools as they continue to develop, and the outputs will matter more because they contributed to creating them. Plus, it is a great fun way to learn. My nearly 7-year-old grandchild, with the help of their father, has enjoyed and learned a lot from creating images with DALL-E, for instance, and has been doing so long enough to see massive improvements in its capabilities, so has learned some great meta-lessons about the nature of technological evolution too. This has not stopped them from developing their own artistic skills, including with the help of iPads and AI-assisted drawing tools, which offer excellent points of comparison and affordances to reflect on the differences. It has given them critical insight into the nature of the output and the processes that led to it, and it has challenged them to bend the machine to do what they want it to do. This kind of mindful use of the tools as complementary partners, rather than consumption of their products, makes sense to me.

I think the lessons carry forward to adult learning, too. I have huge misgivings about giving generative AIs a didactic role, for the same reasons that having them tell stories to children worries me. However, they can be great teachers for those who make use of them to create output, rather than being targets of the output they have created. For instance, I have been really enjoying using ChatGPT+ to help me write an Elgg plugin over the past few weeks, intended to deal with a couple of show-stopping bugs in an upgrade to the Landing that I had been struggling with for about 3 years, on and (mostly) off. I had come to see the problems as intractable, especially as a fair number of far smarter Elgg developers than I had looked at them and failed to see where the problems lay. ChatGPT+ let me try out a lot more ideas than even a large team of developers would have been able to come up with alone, and it took care of some of the mundane repetitive work that made the process slow. Though none of it was bad, little of its code was particularly good: it made up stuff, omitted stuff, and did things inefficiently. It was really good, though, at putting in explanatory comments and documenting what it was doing. This was great, because the things I had to do to fix the flaws taught me a lot more than I would have learned had they been perfect solutions. Nearly always, it was good enough and well-documented enough to set me on the right path, but the ways it failed drove me to look at source documentation, query the underlying database (now knowing what to look for), follow conversations on GitHub, and examine human-created plugins, from which I learned a lot more and got further inspiration about what to ask the LLM to do next. Because it made different mistakes each time, it helped me to slowly develop a clearer model of how it should really have happened, so I got better and better at solving the problems myself, meanwhile learning a whole raft of useful tricks from the code that worked and at least as much from figuring out why it didn’t. It was very iterative: each attempt sparked ideas for the next attempt. It gave me just enough scaffolding to help me do what I could not do alone. About halfway through I discovered the cause of the problem – a single changed word in the 150,000+ lines of code in the core engine, which was intended to better suit the new notification system but which resulted in the existing 20m+ notification messages in the system failing to display correctly. This gave me ideas for some better prompts, the results of which taught me more. As a result, I am now a better Elgg coder than I was when I began, and I have a solution to a problem that has held up vital improvements to an ailing site used by more than 16,000 people for many years (though there are still a few hurdles to overcome before it reaches the production site).
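
For what it is worth, that to-and-fro can be caricatured as a loop: ask for a candidate fix, try it, learn from the way it fails, and fold that learning into the next prompt. The sketch below only illustrates that shape and is not the actual code or prompts used for the Elgg plugin; `ask_llm` and `try_out` are invented placeholders.

```python
# Caricature of the iterative, LLM-assisted debugging workflow described above.

def ask_llm(prompt: str) -> str:
    """Placeholder for a generative AI coding assistant."""
    raise NotImplementedError

def try_out(candidate_code: str) -> tuple[bool, str]:
    """Placeholder: apply the candidate on a test site and report whether/how it failed."""
    raise NotImplementedError

def iterate(problem: str, max_attempts: int = 10):
    notes = ""  # what the failures teach: docs read, database queries made, plugins studied
    for attempt in range(1, max_attempts + 1):
        candidate = ask_llm(
            f"{problem}\nWhat has been learned so far:{notes}\n"
            "Propose a well-commented candidate fix."
        )
        works, failure_report = try_out(candidate)
        if works:
            return candidate
        # Each failure points somewhere worth studying; that understanding feeds the next prompt.
        notes += f"\nAttempt {attempt} failed: {failure_report}"
    return None  # in this case, the final fix was written by hand, informed by the attempts
```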

Filling the right gaps

The final solution actually uses no code from ChatGPT+ at all, but it would not have been possible to get to that point without it. The skills it provided were different to and complementary to my own, and I think that is the critical point. To play an effective teaching role, a teacher has to leave the right kind of gaps for the learner to fill. If they are too large or too small, the learner learns little or nothing. The to and fro between me and the machine, and the ease with which I could try out different ideas, eventually led to those gaps being just the right size so that, instead of being an overwhelming problem, it became an achievable challenge. And that is the story that matters here.

The same is true of the stories that inspire: they leave the right sized gaps for the reader or listener to fill with their own imaginations while providing sufficient scaffolding to guide them, surprise them, or support them on the journey. We are participants in the stories, not passive recipients of them, much as I was a participant in the development of the Elgg plugin and, similarly, we learn through that participation. But there is a crucial difference. While I was learning the mechanical skills of coding from this process (as well as independently developing the soft skills to use them well), the listener to or reader of a story is learning the social, cultural, and emotional skills of being human (as well as, potentially, absorbing a few hard facts and the skills of telling their own stories). A story can be seen as a kind of machine in its own right: one that is designed to make us think and feel in ways that matter to the author. And that, in a nutshell, is why a story produced by a generative AI is such a problematic idea for the reader, but the use of a generative AI to help produce that story can be such a good idea for the writer.

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21680600/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais

https://jondron.ca/stories-that-matter-and-stories-that-dont-some-thoughts-on-appropriate-teaching-roles-for-generative-ais/

#book #digitalEducation #distanceEducation #eLearning #handbook #onlineLearning #openEducation #reference

robot reading a bedtime story to a child
2023-11-26

A month or two ago I shared a “warts-and-all” preprint of this paper on the risks of educational uses of generative AIs. The revised, open-access published version, The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education, is now available in the journal Digital.

The process has been a little fraught. Two reviewers really liked the paper and suggested minimal but worthwhile changes. One quite liked it but had a few reasonable suggestions for improvements that mostly helped to make the paper better. The fourth, though, was bothersome in many ways, and clearly wanted me to write a completely different paper altogether. Despite this, I did most of what they asked, even though some of the changes, in my opinion, made the paper a bit worse. However, I drew the line at the point that they demanded (without giving any reason) that I should refer to 8 very mediocre, forgettable, cookie cutter computer science papers which, on closer inspection, had all clearly been written by the reviewer or their team. The big problem I had with this was not so much the poor quality of the papers, nor even the blatant nepotism/self-promotion of the demand, but the fact that none were in any conceivable way relevant to mine, apart from being about AI: they were about algorithm-tweaking, mostly in the context of traffic movements in cities.  It was as ridiculous as a reviewer of a work on Elizabethan literature requiring the author to refer to papers on slightly more efficient manufacturing processes for staples. Though it is normal and acceptable for reviewers to suggest reference to their own papers when it would clearly lead to improvements, this was an utterly shameless abuse of power of a scale and kind that I have never seen before. I politely refused, making it clear that I was on to their game but not directly calling them out on it.

In retrospect, I slightly regret not calling them out. For a grizzly old researcher like me who could probably find another publisher without too much hassle, it doesn’t matter much if I upset a reviewer enough to make them reject my paper. However, for early-career researchers stuck in the publish-or-perish cycle, it would be very much harder to say no. This kind of behaviour is harmful for the author, the publisher, the reader, and the collective intelligence of the human race. The fact that the reviewer was so desperate to get a few more citations for their own team with so little regard for quality or relevance seems to me to be a poor reflection on them and their institution but, more so, a damning indictment of a broken system of academic publishing, and of the reward systems driving academic promotion and recognition. I do blame the reviewer, but I understand the pressures they might have been under to do such a blatantly immoral thing.

As it happens, my paper has more than a thing or two to say about this kind of McNamara phenomenon, whereby the means used to measure success in a system become, and thereby warp, its purpose, because it is among the main reasons that generative AIs pose such a threat. It is easy to forget that the ways we establish goals and measure success in educational systems are no more than signals of a much more complex phenomenon with far more expansive goals that are concerned with helping humans to be, individually and in their cultures and societies, as much as with helping them to do particular things. Generative AIs are great at both generating and displaying those signals – better than most humans in many cases – but that’s all they do: the signals signify nothing. For well-defined tasks with well-defined goals they provide a lot of opportunities for cost-saving, quality improvement, and efficiency and, in many occupations, that can be really useful. If you want to quickly generate some high-quality advertising copy, the intent of which is to sell a product, then it makes good sense to use a generative AI. Not so much in education, though, where it is too easy to forget that learning objectives, learning outcomes, grades, credentials, and so on are not the purposes of learning but just means for and signals of achieving them.

Though there are other big reasons to be very concerned about using generative AIs in education, some of which I explore in the paper, this particular problem is not so much with the AIs themselves as with the technological systems into which they are, piecemeal, inserted. It’s a problem with thinking locally, not globally; of focusing on one part of the technology assembly without acknowledging its role in the whole. Generative AIs could, right now and with little assistance,  perform almost every measurable task in an educational system from (for students) producing essays and exam answers, to (for teachers) writing activities and assignments, or acting as personal tutors. They could do so better than most people. If that is all that matters to us then we might as well therefore remove the teachers and the students from the system because, quite frankly, they only get in the way. This absurd outcome is more or less exactly the end game that will occur though, if we don’t rethink (or double down on existing rethinking of) how education should work and what it is for, beyond the signals that we usually use to evaluate success or intent. Just thinking of ways to use generative AIs to improve our teaching is well-meaning, but it risks destroying the woods by focusing on the trees. We really need to step back a bit and think of why we bother in the first place.

For more on this, and for my tentative partial solutions to these and other related problems, do read the paper!

Abstract and citation

This paper analyzes the ways that the widespread use of generative AIs (GAIs) in education and, more broadly, in contributing to and reflecting the collective intelligence of our species, can and will change us. Methodologically, the paper applies a theoretical model and grounded argument to present a case that GAIs are different in kind from all previous technologies. The model extends Brian Arthur’s insights into the nature of technologies as the orchestration of phenomena to our use by explaining the nature of humans’ participation in their enactment, whether as part of the orchestration (hard technique, where our roles must be performed correctly) or as orchestrators of phenomena (soft technique, performed creatively or idiosyncratically). Education may be seen as a technological process for developing these soft and hard techniques in humans to participate in the technologies, and thus the collective intelligence, of our cultures. Unlike all earlier technologies, by embodying that collective intelligence themselves, GAIs can closely emulate and implement not only the hard technique but also the soft that, until now, was humanity’s sole domain; the very things that technologies enabled us to do can now be done by the technologies themselves. Because they replace things that learners have to do in order to learn and that teachers must do in order to teach, the consequences for what, how, and even whether learning occurs are profound. The paper explores some of these consequences and concludes with theoretically informed approaches that may help us to avert some dangers while benefiting from the strengths of generative AIs. Its distinctive contributions include a novel means of understanding the distinctive differences between GAIs and all other technologies, a characterization of the nature of generative AIs as collectives (forms of collective intelligence), reasons to avoid the use of GAIs to replace teachers, and a theoretically grounded framework to guide adoption of generative AIs in education.

Dron, J. (2023). The Human Nature of Generative AIs and the Technological Nature of Humanity: Implications for Education. Digital, 3(4), 319–335. https://doi.org/10.3390/digital3040020

Originally posted at: https://landing.athabascau.ca/bookmarks/view/21104429/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education

https://jondron.ca/published-in-digital-the-human-nature-of-generative-ais-and-the-technological-nature-of-humanity-implications-for-education/

#AI #cas #complexAdaptiveSystem #digitalEducation #distanceEducation #eLearning #education #GAI #genAI #generativeAI #learning #onlineLearning #openEducation

Steve McCarty 🇯🇵 (@SteveMcCarty@hcommons.social)
2023-10-28

College for intergenerational mobility provides another perspective with which to champion higher education: "The deep inequity of the anti-college movement," Jose Luis Alvarado (The Hill, 10/28/23): thehill.com/opinion/congress-b

We have looked to MOOCs, OERs, open access publications, and online education generally to widen access to higher education for those disadvantaged by the digital divide as well as for learners worldwide who are not affluent enough to access f2f higher education.

Although Dr. Alvarado's article is closer to my experience than you'd imagine, I relied upon the merits of millennia of academia for a series on the academic life, including "The Idea of the University"; download from: hcommons.org/deposits/download

Comments on Dr. Alvarado's article or the above?

#OpenAccess #OER #Education #HigherEducation #HigherEd #AcademicMastodon #AcademicFedi #OnlineLearning #DistanceEducation #OnlineEducation #OpenEducation #multiculturalism

@edutooter @academicchatter @OnlineEducation

Wednesday's Tips (@wtips)
2023-10-21

Distance Education

Distance education: filling gaps and bringing learning to your door. Distance education offers flexibility…

wednesdaystips.com/docs/educac
