Thursday, 27 April 2017

Women in ICT & engineering: #Gender barriers & solutions #educon17

At Educon one of the main sessions focused on the gendered challenges within engineering, which fits with today's ‘girls in ICT’ day. The Educon panel shared their own stories (being female pioneers, lacking role models) and pointed to clear barriers: policies stereotyping gender roles, family values constructed around boys and girls, the need for dedicated female networks, and the clear glass ceiling when looking at leadership roles for women in IT and engineering. As the discussion moved forward, I remembered that this is mirrored in a recent book I reviewed. If you are looking for similar stories, and want to learn how women addressed the professional challenges they faced in classically male-dominated areas, this is a good book.

This book combines the personal and professional journeys of 29 women and two men who all made a career developing and using educational technology (EdTech). The book provides an inspiring account of the challenges the first EdTech women encountered and how they overcame them in order to create a professional space inside an - at that time - male-dominated field. The majority of the EdTech women in this book are connected to the Association for Educational Communications and Technology (AECT), an organisation that started in 1923 as the Department of Visual Instruction of the (US) National Education Association. Each of the chapters of the book is written by the women or men who lived through the actual experience of having to secure a career in the EdTech world. These different narratives provide a rich, at times deeply personal, professional and varied account of what it takes to enter and work within the EdTech field. While reading the book I often nodded, as parts of the journeys shared were recognisable and – it seems – universal to all EdTech professionals. The importance of mentorship, leadership and trust emerges clearly from all the journeys. And in many of them the act of accepting your own realities (being different, coming from a variety of backgrounds, overcoming deep personal trauma) and uplifting your own situation as well as that of others by investing in education makes for an inspiring read.

Overview
Ana Donaldson edited this book. She is a past president of AECT (the Association for Educational Communications and Technology), and in that position she took the initiative to gather journeys from other women who have had an impact on educational technology anywhere from 40 years ago up until 2016. The book is divided into three parts: individual voices, historical perspective and mentoring. The majority of the book is taken up by the part on individual voices, providing 150 pages of personal as well as professional accounts of what it takes to enter the EdTech field and become a renowned EdTech professional. The historical perspective comprises three additions which cover the history of AECT, some of the lesser-known pioneering women in EdTech covered through short vignettes, and a generational focus on EdTech learners and their distinct characteristics put in their technological context. The final part of the book looks at mentoring, although attention to mentoring is already prevalent in many of the individual voices. The mentoring part differs from the mentoring mentioned in the individual voices, as it zooms in on what it takes to be an effective mentor, the necessity of intentional mentorship and the importance of being a role model to new female EdTech students. Each chapter of the book consists of a personal account of life as an EdTech professional. In some cases this personal account is told chronologically, starting from early life right up to current career and status; at other times the focus is more on the professional journey, highlighting detailed career challenges and successes. After each account a selection of publications by the author of that chapter is offered. These publications consist of high-impact journal articles, clearly emphasizing the importance of publishing in high-impact journals if you – as an EdTech academic – want to make a name for yourself or get tenure. At the end of each chapter a brief biography of the author is given, highlighting personal achievements and interests.

An inside look
Reading through the stories of the individual voices, the reader soon finds recurring themes that impacted most of the women who shared their EdTech journeys. The journeys of the women and their consecutive jobs or career titles also reveal a change in jargon: at first ‘audiovisual education’ was used instead of the term educational technology, and the titles covering professional positions involving what is now known as EdTech varied depending on institutes and research programs. From the accounts of those EdTech experts who graduated around the late 60s or early 70s, it becomes clear that the audiovisual side of education was a male-dominated field, including the academic posts investigating any type of technology for education. The women who got EdTech positions often point to strong female role models in their family (mothers, grandmothers) who inspired a new generation of women, as well as open-minded male mentors opening doors, supporting endeavors and showing opportunities. From those early years onward, the women who gained access to EdTech and joined forces to forge lifelong friendships and collaborations seem to have thrived. Daring to build informal connections with those who had gained status and established professionalism helps in getting to grips with the professional, academic as well as personal challenges. More than one author mentions sisterhood as a means to stay motivated and grow stronger, and that is true for all decades covered in this book.

The importance of getting a PhD becomes clear while reading this volume, and it is amazing how many different indirect reasons there are to take up a PhD (at the beginning of their career many of the women wanted to teach in primary or high school, and many of them did teach at some point). But it is clear that a PhD is essential if you want to move forward in an academic environment, or if you want to be taken seriously as a professional. Unfortunately, it also becomes clear from the different stories that obtaining a PhD nowadays is no longer a guarantee of getting a tenured position. When reading the book one understands the importance of getting a formal degree, but many authors also emphasize the importance of accepting yourself. One author summarised it as: “Know what you know and what you do not, know who you are and who you are not, embrace your young-self – and your aging self”, a message that I feel resonates beyond just the women in the EdTech field.
Taking up leadership early on also stands out as one of the common actions that will increase your chances to make it in EdTech. This includes voluntary work such as starting a minority-focused community of practitioners, though a warning is given that we – women – should realise that a lot of the support we offer can be seen as ‘invisible labor’: supporting students from similar minority groups as ourselves demands extra time and effort which other academics do not need to invest. Leadership actions can result in additional expertise in securing funding, co-authoring papers, or creating a community of peers, and all of these have a great impact on increasing career options. Taking up leadership also creates flexibility to move from academia to corporate roles or vice versa, depending on the passion felt for specific EdTech jobs.
Mentorship is without doubt crucial, both for growing as a professional and for offering new opportunities to EdTech students. Mentoring allows rapid growth to take place (learning from experts), it offers opportunities to learn from students (keeping in touch with all developments), it enables actions towards more social justice, and it often results in lifelong networks.

Strengths and weaknesses
The journeys of each of these women are astonishing, and provide a rich texture of diverse backgrounds and opportunities. Some authors mention financial implications and having to work multiple jobs in order to pay for college or university, others mention different cultural backgrounds which influenced their perception of what it takes to get into EdTech, and still others had to find their way against personal hardship, or microaggressions coming from what should have been colleagues… Courage to keep moving forward is clearly present in all journeys.
One idea that is not spelled out is that many of the women shared that they initially did not intend to pursue an academic career, and especially not a PhD. In many cases the idea of obtaining a PhD came from a mentor, a colleague, or chance. This stands in stark contrast to the accounts of those same authors mentioning their male partners, who consciously wanted to get a PhD. A clear reminder of how people are socialised into ‘their place in society’, which forces society’s status quo onto them, even when they are well placed to burst that socialised bubble.
The rise of EdTech reaches beyond the history of EdTech women and the situations they came across while aiming to establish themselves as professionals; nevertheless, I feel that the focus on women adds a rich layer of insight into getting a career going in a more male-oriented, academic field. It shows additional challenges, and therefore additional strengths, that provide insights not only for other women, but also for men coming from different backgrounds and trying to enter academia or a profession which is new to them or their family.
I would have liked to see more EdTech women of color included in this volume; their stories reveal the importance of being part of a sub-group of women for mutual empowerment. Moments of sexism are mentioned by many authors, and each woman of color sharing her journey in the book gave accounts of racism on top of the sexism she sometimes encountered. Though sexism, racism and general challenges are mentioned, the book above all focuses on what helps each one of us (male or female) forward: mentorship, shared family responsibilities, having role models, being part of a network, feeling part of a community of like-minded professionals, and knowing that differences strengthen academia and as a result society as a whole.
Although many voices can be heard, the book does consist of women and men linked to AECT. This limits the scope for readers who are not familiar with this US-based organisation, or who live in other countries with different educational systems, cultural challenges or EdTech support.

Coming from a working-class family myself, without family members knowing what academic jobs are or even what you need to do in order to get one of those jobs, this book makes a difference. It strengthens some of the actions I have taken, and it shows me those options I sometimes do not dare to pursue. The book empowered me, and informed me about my chosen professional field.

In a world where quick opinions seem to reign, this book offers many voices that support attention to the importance of community (in all its variance and diversity), creating a nurturing working environment, and actively working to decrease the hegemony that affects most of us. Anyone wanting to know more about the professional options available in EdTech, the challenges you might face as a woman interested in technology, or a historical perspective on an emerging educational field, will find answers in this book.

Liveblog #Educon17 @Kinshuk1 Enhancing learning through adaptivity and personalization in ubiquitous environments

Kinshuk (http://www.kinshuk.info ) was streamed in live from Austin, Texas. Lately he is also increasingly engaging with industry on EdTech (yes, the bridge between university and industry is tightening).
The learning environment is expanding outside of the classroom, so how can we incorporate learning in all these environments? Some (free) opportunities:
  • A series in the Springer collection on EdTech (look up the book guidelines for this series: I think it is http://www.springer.com/series/11777 ); any new advancements are welcomed.
  • A completely open access journal called ‘Smart Learning Environments’ (Inge, look this up: http://www.springer.com/computer/journal/40561 ), focused on improved learning environments and on transforming traditional environments into online learning environments.
  • The International Association of Smart Learning Environments http://www.iaslo.net ; they look for evidence-based research on the subject.


Current trends in learning
·        Inclusive education
·        Focus on individual strengths and needs
·        Various learning scenarios – in-class and outdoor environments
·        Relevance of the learning scenarios to learners' living and working environments
·        Authentic learning with physical as well as digital resources
Result: a better learning experience due to authentic learning and ubiquitous access to learning. Learning can now be fitted more easily to the real life of the learner. Learning needs to be relevant to the learner, but as a teacher you need to become aware of how to capture attention outside of the classroom.
This means the teachers must become aware of the new teaching/learning opportunities.

Vision
Learning is happening everywhere, at any time, and is highly contextualised.
Seamless integration of learning into every aspect of life, which implies immersive, always-on learning that happens so naturally and in such small chunks that no conscious effort is needed: active learning happens while being engaged in everyday life.
We need to make learning as meaningful as possible. The goal of the learning needs to be put across to all the learners, and the learning needs to be made visible (e.g. Hattie)… but all of this is highly demanding for the teacher. Every student is doing different things, so how can the teacher know that her learners are learning? That is why we are looking for much more data, much more information, and assessment is also moving out of the classroom and out of the formal, classic design of assignments and assessments.

Smart learning analytics is used to discover what type of learning data is coming in: discover, analyze and make sense of student, instruction and environmental data from multiple sources to identify learning traces, in order to facilitate instructional support in authentic learning environments. This also opens up a new type of teaching, namely coaching: giving guidance and personalising the feedback based on the learner data or learner information that is viewed and analysed by the teacher. For example, a flower bed with a placard on what the flowers are, but in the top right corner there is a QR code with additional information on the flowers, embedded in their full cycle, use and systematic botanical information. So the information is delivered in an adaptive way (as complex as the learner wants to view it), and open to all. The learning system provides authentic information within a contextual reality, with the option to zoom in on additional information (look at iSpot as an additional learning scenario).
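To make the flower bed example a bit more tangible, here is a minimal sketch (all identifiers and content are invented for illustration, not from the talk) of how a scanned QR code could resolve to the same botanical content at the depth a learner asks for:

```python
# Minimal sketch: adaptive delivery of the same content at different depths.
# All identifiers and content are hypothetical; a real system would pull the
# content and the learner profile from a database rather than in-memory dicts.

CONTENT = {
    "flowerbed-17": {
        "basic": "Lavandula angustifolia: common lavender, flowers June to August.",
        "intermediate": "Lavender prefers well-drained soil and full sun; "
                        "it is pollinated mainly by bees.",
        "advanced": "Family Lamiaceae; full life cycle, propagation by cuttings, "
                    "essential-oil composition and systematic botanical notes.",
    }
}

def resolve_qr(qr_payload: str, preferred_depth: str = "basic") -> str:
    """Return the content variant matching the depth the learner asked for."""
    item = CONTENT.get(qr_payload)
    if item is None:
        return "No learning content registered for this QR code."
    # Fall back to the simplest variant if the requested depth is not available.
    return item.get(preferred_depth, item["basic"])

print(resolve_qr("flowerbed-17", "advanced"))
```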
Information can now come from different sources: mobiles, the environment, the internet, people… These are like learning traces: small learning impressions that can tell us that learning is actually happening. For instance, looking at paintings in a museum, one painting captures the learner's attention because some things are different from other paintings. The learner might learn something a week later, gets more information on it, and a story can now be shared by the learner with people outside of the classroom. This proves that learning has happened.
But a system needs to be in place to prove or visualise the actual learning that is happening.

Remark on data: because of privacy and policy issues, learners need to be made aware that their data might be used.

How can we design instructional support that will make this type of smart learning happen and make it measurable?

Discover
Past records and real-time observation of: the learner's capabilities, preferences and competencies, the learner's location, the learner's technological use, the technologies surrounding the learner, and the changes that keep happening in the learner's situational context. Knowing the past does not mean that what is happening today differs meaningfully from previous actions, as today's contexts constantly change.
Miller's work was pioneering here (on the limits of information memorisation).
And although the tech can provide the teachers with lots of additional data, the actual learning experience needs to take into account the changing environment and the connected conditions of these environments.

Human-machine learning has an effect on the actual learning process.
Is the learner trying to find new information, is that new information screened critically…
We do have lots of mechanisms that we use to see what the learners are going through and how the learning occurs.
Informal learning happens everywhere, across all potential learning environments, but is there a record of that learning somewhere? Small bits of learning can happen anywhere, but how can we identify them and use them as evidence of learning?

Making sense: learning traces
A learning trace comprises a network of observed study activities that lead to a measurable chunk of learning.
Learning traces are sensed and supply data to learning analytics, where the data is typically big, un- or semi-structured, seemingly unrelated, not quite truthful, and fits multiple models and theories.
What kind of learning is it, and which models can be used to map learning traces to understand that learning is actually happening? Learning traces are also important for understanding personalised learning: the differentiated learning that is happening across the population in all its variety.
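As a rough illustration of the definition above, a learning trace could be represented as a small data structure linking observed activities from different sources; all field names below are my own assumptions, not taken from any specific analytics tool.

```python
# Sketch of a learning trace: a chain of observed study activities from
# multiple sources that together evidence a measurable chunk of learning.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Activity:
    learner_id: str
    source: str          # e.g. "mobile", "museum-visit", "LMS", "social-share"
    description: str
    timestamp: datetime

@dataclass
class LearningTrace:
    activities: list = field(default_factory=list)

    def add(self, activity: Activity) -> None:
        self.activities.append(activity)

    def spans_sources(self) -> set:
        # Traces crossing several sources (a museum visit, a later web search,
        # a shared story) are exactly the kind of evidence described above.
        return {a.source for a in self.activities}
```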

Why learning traces are important
Different students can adopt different learning approaches for the same learning activity
For example: why does a pointed object penetrate better than a blunt one?
A visually oriented learner may choose a different approach than a sensory-oriented learner.

Learner awareness
Personalisation of the learning experience through dynamic learner modeling: performance, meta-cognitive skills, cognitive skills, learning styles, affective state, physiological symptoms (e.g. the learner is doing something in the lab and suddenly the heart rate increases – why? What kind of concept is the learner using, and are there comparable learning situations where this occurred?). All of these are tools that can make teachers more informed, enabling more informed decisions on learning.
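A hedged sketch of what such a dynamic learner model might look like as a data structure (attribute names and the threshold are purely illustrative):

```python
# Illustrative learner model holding the dimensions mentioned above.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LearnerModel:
    learner_id: str
    performance: dict = field(default_factory=dict)           # topic -> score
    metacognitive_skills: dict = field(default_factory=dict)  # skill -> level
    affective_state: str = "neutral"
    heart_rate: Optional[int] = None                          # physiological signal

    def flag_for_teacher(self) -> bool:
        """Surface this learner to the teacher when the physiological data looks unusual."""
        return self.heart_rate is not None and self.heart_rate > 120
```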

Technological awareness
Personalization of learning experience through the identification of technological functionality.
Identifying various device functionalities
Dynamically optimizing the content to suit that functionality
Display capability, audio and video capability… (sketched below)
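A tiny sketch of that capability-based selection (the capability keys and content variants are assumptions, not from the talk):

```python
# Pick a content variant that matches what the learner's device can handle.
def pick_variant(device_capabilities: dict) -> str:
    if device_capabilities.get("video"):
        return "video-lecture.mp4"
    if device_capabilities.get("audio"):
        return "audio-summary.mp3"
    return "text-transcript.html"

print(pick_variant({"video": False, "audio": True}))  # -> audio-summary.mp3
```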

Location awareness
Personalisation through location modelling
Location-based optimal grouping (ad hoc grouping based on mobile location; see the sketch after this list)
Location-based adaptation of learning content

Real-life physical objects
Public databases of POIs
QR codes
Wifi and Bluetooth access point identification
Active and passive RFIDs
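To illustrate the ad hoc grouping idea mentioned in this list, here is a toy sketch that clusters learners whose reported coordinates are close together; a real system would rely on proper geospatial indexing and the positioning sources listed above, and the threshold and data here are invented.

```python
# Toy sketch of location-based ad hoc grouping: learners reporting nearby
# coordinates end up in the same group.
from math import dist

def group_by_location(learners: dict, radius: float = 0.001) -> list:
    groups = []
    for name, pos in learners.items():
        for group in groups:
            if any(dist(pos, learners[other]) <= radius for other in group):
                group.add(name)
                break
        else:
            groups.append({name})
    return groups

print(group_by_location({
    "ana": (37.9838, 23.7275),   # Athens city centre
    "ben": (37.9839, 23.7276),   # a few metres away from ana
    "zoe": (38.0458, 23.8000),   # further out, gets her own group
}))
```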

Surrounding awareness
Learning based on all the surrounding data and context-aware knowledge structures
Identifying specific context-aware knowledge structures among different domains
Identify learning objectives of real interest to the learner
Propose learning activities to the learner
Lead the learner around the learning environment

Skills and knowledge level detection: competency level, confidence level (evidence-based confidence). For instance, using a dashboard to get an idea of learning progress… and of what type of skills are affected.

When teachers feel that it does not affect their workload, they become more open to these new options.


Question from my end on making learning visible: do you have examples of feedback from the learner that make the actual learning visible? You mention how learners learn, but it seems you view it more from a teacher viewpoint than from awareness in the learner.
Answer: analytics are coming from a variety of sources, and at Austin, Texas we also work with Codex and MI-dash to see the learner progress over time, SCRL which uses self-evaluation, learning initiative design…

Wednesday, 26 April 2017

#Mobile #assessment based on self-determination theory of motivation #educon17

Talk given at Educon in Athens, Greece by Stavros Nikou: a really interesting mobile learning addition in the area of vocational learning and assessment. Mobile devices in assessment offer and support new learning pedagogies and new ways of assessing: collaborative and personalised assessments.

The motivational framework follows self-determination theory (http://selfdeterminationtheory.org/theory/), which distinguishes intrinsic and extrinsic motivation. Intrinsic motivation comes from within the person, because the activity is enjoyable; extrinsic motivation is built upon reward or punishment. What they try to do is ignite more intrinsic motivation, as it leads to better understanding and better performance.

There are three elements in the theory – autonomy, competence and relatedness – and all of them impact self-determination. This study tries to use these three elements to increase intrinsic motivation.
Mobile-based assessment motivational framework: the framework is still in a preliminary phase, but of interest. Autonomy: personalised and adaptive guidance, grouping questions into different difficulty levels (adaptive to the learner), location-specific and context-aware.
Competence: provide immediate emotional and cognitive feedback. Drive students to engage in authentic learning activities, with appropriate guidance to support learners.
Preliminary evaluation of the proposed framework: paper-based and mobile-based assessments were used before and after the intervention to test the framework, in an experimental design with an assessment after each week of formal training – two assessments in total for both groups. ANCOVA was used for the data analysis.
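For readers who want to see what this kind of analysis looks like in practice, here is a minimal sketch of an ANCOVA on invented data (group as factor, pre-test score as covariate) using pandas and statsmodels; the column names and numbers are not from the study.

```python
# Hedged sketch of the reported analysis type: post-intervention scores modelled
# by group (paper vs. mobile) with the pre-test as covariate. Data is invented.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "group": ["paper"] * 5 + ["mobile"] * 5,
    "pre":   [3.1, 2.8, 3.4, 3.0, 2.9, 3.2, 2.7, 3.3, 3.1, 2.8],
    "post":  [3.2, 3.0, 3.5, 3.1, 3.0, 3.9, 3.6, 4.1, 3.8, 3.7],
})

model = smf.ols("post ~ C(group) + pre", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the group effect, adjusted for the pre-test
```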

Results: significant differences in autonomy, competence and relatedness. The framework will be expanded with additional mobile learning features and will be used with different students. Future research aims to enhance the framework.
The mobile assessment had a social-media collaborative element in it, and it also made use of more feedback options thanks to the technical possibilities of the mLearning option.


Using #learningAnalytics to inform research and practice #educon17

Talk during Educon2017 by Dragan Gasevic, known for the award-winning work of his team on the LOCO-Analytics software, considered one of the pioneering contributions in the growing area of learning analytics. In 2014 he founded ProSolo Technologies Inc (https://www.youtube.com/watch?v=4ACNKw7A_04), which develops a software solution for tracking, evaluating, and recognizing competences gained through self-directed learning and social interactions.

He jumped onto the stage with a bouncy step and was in good form to get his talk going.

What he understands by learning analytics is the following: shaping the context of learning analytics results in challenges and opportunities. Developing a lifelong learning journey automatically calls for a measuring system that can support and guide the learning experience for individuals.
Active learning also means constant funding, to enable the constant iteration of knowledge, research and tech. But even if you provide new information, there are only limited means to understand who in the room is actually learning something, or not. So addressing the need for meaningful feedback on what is learned is the basis of learning analytics.
Beyond the learning system (e.g. an LMS), socio-economic details of individuals are also used.
No matter which technologies are used, the interaction with these technologies results in digital footprints. Initially, technologists used the digital footprints as a means to adjust the technology, but gradually natural language processing, learning and meaning creation also came to be investigated using these footprints.

Actual applications of learning analytics are given: two well-known examples.
Course Signals from Purdue University: analysing student actions within their LMS (Blackboard), different student variables, and outcome variables for student risk (high, medium, low) provided by algorithms using the data from each student's digital footprint. Teachers and students got ‘traffic light’ alerts. The students who used the signals saw an increase of 10 to 20 percent in student success.
A content analysis of how Course Signals was used showed that summative feedback was much less related to student success, while formative (detailed, specific) feedback did have an immediate effect on learning success.
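The talk did not go into the actual algorithm, but the general ‘traffic light’ pattern can be sketched as a simple risk model over LMS footprint features, with the predicted probability banded into red/yellow/green. This is only an illustration of the idea, not Purdue's Course Signals implementation; the features, data and thresholds are invented.

```python
# Illustrative traffic-light banding of a predicted risk score (scikit-learn).
# Features: [logins_per_week, assignments_submitted, forum_posts]; 1 = at risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([[1, 0, 0], [2, 1, 0], [5, 3, 2], [7, 4, 5], [6, 4, 3], [0, 0, 0]])
y_train = np.array([1, 1, 0, 0, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def traffic_light(features) -> str:
    p = model.predict_proba([features])[0][1]  # probability of being at risk
    if p > 0.66:
        return "red"
    if p > 0.33:
        return "yellow"
    return "green"

print(traffic_light([1, 0, 1]))
```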
University of Michigan's E2Coach (one of the top two public universities in the US). They have large science classrooms, populated by students with very varied science grade backgrounds.
In the E2Coach project they used the idea of ‘better than expected’, so they looked at successful learning patterns: successful students would be adaptive (trying different options to learn) and would self-organise in peer groups to enable content structuring.
Top-performing students were asked to give pointers on what they did to be successful learners. Those pointers were given to new students as feedback on how they could increase learning success, while at the same time giving them the option to learn (self-determination theory). This resulted in about 5 percent improvement in learner success.

Challenges of learning analytics
Four challenges:
Generalisability: we are seeing predictive models for student success, but they only extend to what can be generalised. These models did not transfer well across different contexts, so generalisability was quite low. Some indicators are significant predictors in one context, yet not in another. What is the reason behind this? It means we are now collecting massive amounts of MOOC data to look for specific reasons. But this work is difficult, as we first need to understand which questions to address.
Student agency is also a challenge: how many of the decisions are made by the students themselves? The responsibility for learning is in their hands.
A common myth in learning analytics: the more time students spend on tasks, the more they will learn. Actually, this is not the case; often the reverse. Even time spent with educators frequently turns out to be an indicator of poor learner success.
Feedback presentation: we felt that the only way to give feedback is through visualisation and dashboards, and many vendors involved in learning analytics do focus on dashboards. But these dashboards are sometimes harmful, as students compare themselves with the class performance, resulting in less engagement and learning. Students sometimes invested less time because the dashboards suggested they were doing well, and with less investment came less learning.
Investment and willingness to understand: see http://he-analytics.com and the SHEILA project http://sheilaproject.eu/, in which more than 50 senior leaders were interviewed about their understanding of learning analytics. Institutions hardly provide opportunities to learn what learning analytics are really about. There is a lack of leadership on learning analytics, so in many cases leaders are not sure what it entails, or what to do with it. That results in buying a product… which does not make sense.
Lack of active engagement of all the stakeholders: students are mostly not involved from day one in the development of these learning analytics (no user-centered approach).

Direction for learning analytics
Learning analytics is about learning. So we need to fall back on what we already know about learning, and then design certain types of intervention using learning analytics. Learning analytics is more than data science; data science provides powerful algorithms, machine learning, system dynamics… but we end up with a data-crunching problem, because we need theory (particular approaches: cognitive load, self-regulation), and practice also informs where to go. We need to consider whether the results make sense: which of the correlations are really meaningful? At the same time we need to take into account learning design and the way we are constructing the learning paths for our students. We cannot ignore experimental design if we want meaningful learning analytics; we need to be very specific about study design. Interaction design covers the types of interfaces, but these need to be aligned with pedagogical methods.

How does this map onto the challenges mentioned before?
Generalisability: we need to accept that one size fits all will never work in learning. Different missions, different populations, different models, different legislation… down to the level of individual courses. Differences in instructional design mean that different courses need different approaches; it is all about contextual information. So what shapes our engagement? Social networks work only through what are called weak ties; networks with only strong ties restrain full learning success. Data mining can help us analyse networks: exponential random graphs (not sure here?). Machine learning transfer: using models across different domains; there are recent good developments addressing this.

Student agency: back to established knowledge (a 2006 paper: students use operations to create artefacts for recall, or to provide arguments or critical thinking). Student decisions are based on student conditions: prior knowledge, study skills, motivation… all of these conditions need to be taken into account. Sub-groups of learners can be identified with algorithms. Some students are really active but not productive; some students were only moderately active, yet performed very well in terms of studying. Study skills change and priorities change during learning… which means different learning agency. Desirable difficulties need to be addressed and investigated. There was no significant difference in success between the highly active and moderately active students, which needs to be studied to find the reasons behind it. Learners' motivation changes the most during the day, as can be seen from the literature. So we need to focus on understanding these reasons, and set up interdisciplinary teams to highlight possible reasons while strongly grounding the work in existing theory.

Analytics-based feedback: students need guidance, not only task-specific language indicators. This can be done through semi-automatic teacher triggers that provide more support and guidance, resulting in meaningful feedback used by students (look up research from Sydney, ask reference Inge). Personalised feedback has a significant effect (Inge, again seen in mobimooc). http://ontasklearning.org
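In the spirit of tools like OnTask (the rule format below is my own invention, not OnTask's actual interface), semi-automatic feedback can be sketched as teacher-authored conditions plus message templates that the system personalises from the analytics data:

```python
# Sketch of teacher-authored feedback rules personalised from learner data.
def personalised_feedback(student: dict, rules: list) -> list:
    messages = []
    for rule in rules:
        if rule["condition"](student):                 # teacher-defined condition
            messages.append(rule["message"].format(**student))
    return messages

rules = [
    {"condition": lambda s: s["quiz_avg"] < 50,
     "message": "Hi {name}, your quiz average is {quiz_avg}%. Try the revision set before week 5."},
    {"condition": lambda s: s["video_minutes"] == 0,
     "message": "Hi {name}, you have not opened this week's videos yet; they prepare the next assignment."},
]

print(personalised_feedback({"name": "Eleni", "quiz_avg": 42, "video_minutes": 0}, rules))
```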
Shall we drop the study of visualisation? No, it is significant for study skills and for decision-making on analytics, but we need to focus on which methods work and gradually involve visualisations to learn what works and what does not. Tailor it to specific tasks, and design it differently than up till now.

Development of analytics capacity and culture: ethics and privacy concerns; very few faculty really ask students for feedback. What are the key points for developing a culture? Think about discourse (not only technical specs). We need to understand data, work with IT, and use different types of models, not only data crunching; start from what we know already. Finally, transformation: we need to step away from learning analytics as technology and talk to our stakeholders to know how to act, how to do it, and who is responsible for certain things… the process can be an inclusive adoption process (look it up). We need to think about questions and design strategies, and to work on the whole phenomenon we need to involve the students. Students are highly aware of the usefulness of their data.

If we want to make a significant difference or impact with learning analytics, we need to work together.

Tuesday, 25 April 2017

Liveblog from university to continued professional learning #educon17 #lifelonglearning

Frank Gielen talks on innovation adoption and transformation. This talk is part of the pre-conference talks of the Educon conference in Athens, Greece. The talk looks at how to organise learning (master, professional, PhD…) to gradually move towards lifelong learning.

Skills gap between what the companies want and which human resources and innovations are available.
The human capital is missing frequently, which means that education is increasingly important.
Take, for example, the transformation of an ‘old’ energy approach into sustainable or renewable energy approaches.
So education is core in the innovation process, as you need to train all stakeholders (senior management, workforce on the floor, mid-management…).
Education linked to innovation has two main factors impacting it: speed of adoption (graduates need to be skilled) and timeliness (skills need to be put to use within two months).
So innovation speed is equivalent to training need: learning needs to be adapted to the speed of innovation.

Personalising learning
Starting from the knowledge triangle: education, industry and research as a baseline for higher education goals which needs to be combined in order to create an employable highly trained workforce coming out of higher ed.
Which learning trends are important to stay competitive in the market? Continued Professional Development, and becoming power learners. This means that the human factor needs to keep developing in order to stay on top of a fast-moving field.
No one size fits all, so in education this means personalised learning. The role of the teacher changes: instead of giving a lot of lectures, there are online resources which students know how to use to create a constant baseline, with mentoring added, e.g. a Socratic approach where the teachers stay in close contact with and support the learners.
Solving a challenge also includes having an effect on society.

Merging masters with professional learning
Contemporary learning consists on average of: 70 percent informal learning, 20 percent social learning, 10 percent formal training.
MicroMasters (a short online format, 10-15 ECTS and commonly project-based) are in many cases complementary to the campus teaching, but they enable a blended master. This is something we need to consider as InnoEnergy. In many cases microMasters are linked to deepening learning in a specific field and are frequently based on a general foundation (so there is a need for clear learning paths).
We are shifting towards lifelong learning, blurring the boundaries between master schools, doctoral schools and professional schools.
Learning architecture: MOOCs or microMaster, certified microMaster, blended microMaster with coaching, blended-in-house microMaster with coaching and Bring Your Own Program (BYOP).
A new learning paradigm: personalised, just-in-time learning.
Education is going through a digital transformation. This means that more data is available, which we can start using as a means to support learning. Based on this, personalised learning will become available, and missing skill sets can be identified. Data-driven education comes a bit closer to enabling personalised learning.
Feedback and coaching have the highest learning impact. This means that teachers need to be prepared to become a guide-on-the-side or a good coach.

Learning entrepreneurship
Learners need to learn it. Not all students need to be entrepreneurs, but all of them need to understand an entrepreneurial skill set: seeing opportunities, motivating people, driving change, finding scarce resources, dealing with the uncertainty of innovation. But then how do we measure and assess this?

This means being an early adopter, and being a catalyst for educational innovation.

Friday, 21 April 2017

Novel initiative Teach Out: Fake news detecting #criticalthinking #mooc

If you have just a bit of time this week, and you are interested in new ways of online teaching as well as critical thinking... this is a fabulous initiative. The “Fake news, facts and alternative facts” MOOC is part of a teach out course (brief yet meaningful just-in-time learning initiative focused on a hot topic).

Course starts on 21 April 2017 (today)
Course given by the University of Michigan, USA

This is not just a MOOC; actually, it being a MOOC is the boring part. What is really interesting is the philosophy behind the teach out, and the history behind teach out events. It feels a bit more like activist-driven teaching, admittedly here from a renowned institute.



Brief course description
Learn how to distinguish between credible news sources and identify information biases to become a critical consumer of information.
How can you distinguish between credible information and “fake news?” Reliable information is at the heart of what makes an effective democracy, yet many people find it harder to differentiate trustworthy journalism from propaganda. Increasingly, inaccurate information is shared on social networks and amplified by a growing number of explicitly partisan news outlets. This Teach-Out will examine the processes that generate both accurate and inaccurate news stories and the factors that lead people to believe those stories. 

Participants will gain skills that help them to distinguish fact from fiction.

This course is part of a Teach-Out, which is:
·        an event – it takes place over a fixed, short period of time
·        an opportunity – it is open for free participation to everyone around the world
·        a community – it will be joined by a large number of diverse individuals
·        a conversation – an opportunity to give and take ideas and information from people

The University of Michigan Teach-Out Series provides just-in-time community learning events for participants around the world to come together in conversation with the U-M campus community, including faculty experts. The U-M Teach-Out Series is part of our deep commitment to engage the public in exploring and understanding the problems, events, and phenomena most important to society.

Teach-Outs are short learning experiences, each focused on a specific current issue. Attendees will come together over a few days not only to learn about a subject or event but also to gain skills. Teach-Outs are open to the world and are designed to bring together individuals with wide-ranging perspectives in respectful and deep conversation. These events are an opportunity for diverse learners and a multitude of experts to come together to ask questions of one another and explore new solutions to the pressing concerns of our global community. Come, join the conversation!

(Picture: http://maui.hawaii.edu/hooulu/2017/01/07/the-real-consequences-of-fake-news/ )

Companies should attract more Instructional Designers for training #InstructionalDesign #elearning

Online learning is increasingly pushing university learning and professional training in new directions. This means common ground must be established on what online learning is, which approaches are considered best practices, and which factors need to be taken into account to ensure a positive company-wide uptake of the training. Although online learning has been around for decades, building steadily on previous evidence-based best practices, it is still quite a challenge to organize online learning across multiple partners, let alone across cultures (in the wide variety of definitions that culture can have).

Earlier this month Lionbridge came out with a white paper entitled “Steps for globalizing your eLearning program”. It is a 22-page free eBook, and a way for them to get your contact data. The report is more corporate than academically inclined (the subtitle is ‘save time, money and get better results’), and offers an inside look at how companies see global elearning and which steps to take first. But when reading the report - which does provide useful points - I do feel that corporate learning needs to accept that instructional design expertise is necessary (the experts! the people!) and that these experts need to be attracted by the company, just like top salespeople, marketing, HR… for it is a real profession and it demands more than the capacity to record a movie and put it on YouTube!

In their first step they mention: Creating a globalizing plan
  • Creating business criteria
  • Decide on content types
  • Get cultural input
  • Choose adaptation approach

The report sets global-ready content as a baseline: this section mentions content that is culturally neutral. Personally, I do not believe cultural neutrality is possible; therefore I would suggest using a balanced cultural mix, e.g. mixing cultural depictions or languages, even Englishes (admitting there is more than one type of English and they are all good). On the bonus side, the report also stresses the importance of using culturally native instructional design (yes!), which I think can be learner-driven content that allows the local context to come into the global learning approach. Admittedly, this might result in more time or more cost (depending on who provides that local content), but it also brings the subject matter closer to the learner, which means it brings it closer to the Zone of Proximal Development (Vygotsky), enables the learner to create personal learning Flow (Csikszentmihalyi), or simply allows the learner to think ‘this is something of interest to me, and I can learn this easily’.

In a following step: Plan ahead for globalisation
  • Legal issues: looking at IPR or the actual learning that can be produced. 
  • Technology and infrastructure: infrastructure differs. 
  • Assessment and feedback mechanisms: (yes!) Feedback, very important for all involved
  • Selecting a globalizing partner

The report is brief, so not too much detail is given on what is meant with the different sections, but what I did miss here was the addition of peers for providing feedback, or peer actions to create assessments that are actually contextualized and open to cultural approaches. No mention of the instructional design experts in this section either.
In the third section a quick overview is given of what to take into account while creating global elearning content. Again the focus is on elements and tools (using non-offensive graphics, avoiding culturally heavy analogies, neutral graphics…), not on the actual instruction, which admittedly would take up more than 22 pages, but the instructional approach is to me the source of learning possibilities.

Promoting diverse pedagogy
The final part of the report looks at the team you need, but… still no mention of the instructional design expert (okay, it is a fairly new title, but still!). And no mention of the diversity in pedagogy that could support cultural learning (not every culture is in favor of Socratic approaches, and not every cultural group likes classic lecturing).

Attract instructional designers
While the report makes some brief points of interest, I do feel that it lacks what most reports on training lack: they seem to forget that online instruction is a real job, a real profession with real skills, one which takes years to become good at, just like any STEM or business-oriented job. This does indicate that corporations are acknowledging an interest in online training (and possible profit), but… they still think that it can be built easily and does not require specific expertise.
There is no way around it: if you want quality, you need to attract and use experts. If you want to build high quality online training that will be followed and absorbed by the learner, interactions, knowledge enhancement, neurobiological effects… all of this will matter and needs to be taken into account (or at least one needs to be aware of it).
Now more than ever, you cannot simply ‘produce a video’ and hope people will come. There are too many videos out there, and a video is a media document, not necessarily a learning element. Learning is about thinking about the outcome you want to achieve and then working backwards, breaking the learning process down into meaningful steps. Why do you use a video? Why do you use an MCQ? Does this really result in learning, or simply in checking boxes and consuming visual media?

Building common ground as a first global elearning step
Somehow I feel that the first step should include overall acceptance of a cooperatively built basis:
What are our quality indicators (media quality, content quality, reusability, entrepreneurial effect of the learning elements, addressing global diversity in depicting actors (visual and audio), …)?

Which online learning basics does everyone in the company (and involved in training) need to know? Sharing just-in-time learning (e.g. when encountering a new challenge: take notes of the challenge and the solution), sharing best practices on the job (ideal for mobile options), flipped lectures for training moments (e.g. a case study before training hours, role play during workshops…), best practices for audio recordings… These learning basics can be many things, depending on the training that needs to be created, but they need to be set up collaboratively. If stakeholders feel they will benefit from the training, and they are involved in setting up some ground rules and best practices, they will be invested. It all comes down to: which type of learning is needed, what does this mean in terms of the pedagogical options available and known, and what do the learners need and use.

Tuesday, 21 February 2017

2 Free & useful #TELearning in Higher Ed reports #elearning #education

These two reports give a status of TELearning in 2016: one analysing the Technology Enhanced Learning for Higher Education in the UK (233 pages, with appendixes starting at page 78) and case studies of Technology Enhanced Learning (48 pages, with nice examples). I give a brief summary below.

The reports were produced by UCISA (an Oxford University-based network) representing many major UK universities and higher education colleges, and it states that it has a growing membership among further education colleges, other educational institutions and commercial organisations interested in information systems and technology in UK education.

The used definition of TELearning is: "Any online facility or system that directly supports learning and teaching. This may include a formal VLE (virtual learning environment), e-assessment or e-portfolio software, or lecture capture system, mobile app or collaborative tool that supports student learning. This includes any system that has been developed in-house, as well as commercial or open source tools."

Both reports provide an interesting (though UK-oriented) read. Here is a short overview of what you can find in them:

For the report focusing on TELearning for HE in the UK (based on the TELearning survey), I have put the main conclusions next to the main chapters:

Top 5 challenges facing institutions: Staff Development is the most commonly cited challenge, followed by Electronic Management of Assessment, lecture capture/recording (which continues to move up), technical infrastructure, and legal/policy issues.

Factors encouraging the development of TELearning: enhancing the quality of learning and teaching, meeting student expectations and improving student satisfaction are the most common drivers for institutional TEL provision. The availability of TEL support staff encourages the development of TEL, as do feedback from students, availability of and access to tools, and school/departmental senior management support. In terms of barriers to TELearning: lack of time remains the main barrier, culture continues to be a key barrier (both departmental/school culture and institutional culture), along with internal funding and a lack of internal sources of funding to support development.

Strategic questions to ask when considering or implementing TELearning: Teaching, Learning and Assessment strategies are consolidating, alongside the rise of the student learning experience/student engagement strategy, corporate strategy, and Library and Learning Resources strategies.

TELearning currently in use: the main institutional VLEs remain Blackboard and Moodle.
Moodle remains the most commonly used platform across the sector, but alternative systems such as Canvas by Instructure are rising, as are new platforms, e.g. Joule by Moodlerooms. SharePoint has rapidly declined. There is an increase in the number of institutions using open learning platforms such as FutureLearn and Blackboard's Open Education system. Evaluation activity in reviewing VLE provision: reviews have been conducted over the last two years. TEL services such as lecture capture are the second most commonly reviewed service over the last two years.

Support for TELearning tools: e-submission tools are the most common centrally supported software, ahead of text-matching tools such as Turnitin, SafeAssign and Urkund. Formative and summative e-assessment tools both feature in the Top 5, along with asynchronous communication tools. There is adoption of document-sharing tools across the sector and a steady rise in the use of lecture capture tools. Podcasting tools continue to decline in popularity, and the new response items, electronic exams and learning analytics, appear not to be well established at all as institutional services, with only a handful of institutions currently supporting services in these areas.
Social networking, document sharing and blog tools are the common non-centrally supported tools. TEL tools are being used to support module delivery. Blended learning delivery based on the provision of supplementary learning resources remains the most common use of TEL. Only a small number of institutions actually require students to engage in active learning online across all of their programmes of study. There is increasing institutional engagement in the delivery of fully online courses, with over half of 2016 respondents now involved, and growing adoption of MOOC platforms by institutions, but less than half of respondents are pursuing open course delivery.
There is little change in the range of online services that higher education institutions are optimising for access by mobile devices. Access to course announcements, email services, and course materials and learning resources remain the three leading services optimised for mobile devices. Library services are being optimised, and lecture recordings are being optimised at the same level as in 2014. The most common ways in which institutions are promoting the use of mobile devices are through the establishment of a bring your own device (BYOD) policy and by loaning out devices to staff and students. Funding for mobile learning projects has reduced in scale.
Outsourcing of institutional services grows: student email, e-portfolio systems, VLEs and staff email. The type of outsourcing model depends on the platform being outsourced: a Software as a Service (SaaS) cloud-based model for email services, and an institutionally managed, externally hosted model for TEL-related tools, such as e-portfolios and the VLE for blended and fully online courses.
National conferences/seminars and internal staff development all remain key development activities. There is an increase in the promotion of accreditation activities, in particular for HEA and CMALT accreditation.
Electronic Management of Assessment (EMA) makes the most demands on TEL support teams, as do lecture capture and mobile technologies. The demand from learning analytics and from distance learning/fully online courses continues to increase. A new entry which might be expected to make more demands in the future is accessibility; in particular, demands made by changes to the Disabled Students’ Allowance in the English higher education sector.

A number of appendixes: full data, a longitudinal analysis of TELearning over the past years (going back to 2001), questions that were used for the longitudinal analysis.

The report focusing on the case studies from TELearning:
These case studies are a companion to the report mentioned above. The idea is that the case studies make it possible to probe themes in the data and shed light on TEL trends through the eyes of representative institutions, offering context to the findings of the overall report.
In each of the case studies, the institutions provide answers on the following TELearning sections: the TELearning strategy used, TEL drivers, TEL provision, TEL governance and structures, TEL-specific policies, Competition and Markets Authority (CMA) strategy, the Teaching Excellence Framework, distance learning and open learning, and future challenges. The diversity of the institutes interviewed gives a good perspective on the TEL landscape within higher education in the UK.

Friday, 17 February 2017

Recognising Fake news, the need for media literacy #digitalliteracy #literacy #education

I was working on a blogpost on books focusing on EdTech people (the woman, the tasks…), but then I opened up YouTube and I saw that president Trump had his first solo press conference.

I guess we can all benefit from Mike Caulfield's ebook (127 pages) on web literacy for students (available online, with other versions including PDF), a fabulous book with lots of links and useful actions to become (more) web literate (thank you Stephen Downes for bringing it to my attention).

After watching it, I thought there was a clear need (for me as an avid supporter of education) to refer to initiatives on the topic of real and fake news, because honestly I do not mind if someone calls something fake or real, as long as that statement is followed by clear arguments describing what you think is fake about it, and why. Before doing that, I want to share the reason for this shift in attention.

I love America, for several reasons: where Europe stays divided, the United States has managed to get its states to work together, while leaving enough federal freedom to adapt specific topics to individual states' beliefs; I have worked with and honestly like to work with Americans (of all backgrounds) and American organisations; and truly, I am in complete awe of the Bill of Rights and the way the constitution secures freedom for all. I know that a goal such as ‘freedom for all’ is difficult to attain, but at least it is an openly set vision, put on paper. I mean, I truly respect such a strong incentive to promote freedom for all citizens within a legal framework, and the will to achieve that freedom. And due to this love for the United States, I felt that Trump is okay. In democratic freedom, the outcome might not be to everyone's liking, but… history has shown that democratic freedom can swing in a lot of ways and that this diversity nurtures new ideas and insights along the way.

However, while watching the press conference I got more and more surprised by what was said and how: there were clear discriminatory references, which I do not think befit a President of all the American people. But okay, to each his own, and rhetorical styles can differ (wow, can they differ). Yet the ongoing references to Fake News, which kept coming up as an excuse and were used as a non sequitur at any point during the press conference, just got to me. Manipulation has many faces, and only education can help build critical minds that will be able to judge for themselves, and as such be able to distinguish real from fake news. To me, even if you say ‘this is fake news’, I want to hear exactly what you mean: which part of which news is fake, and why. Enlighten me would be the general idea.

Fake news and believing it: status
A Stanford study released in November 2016 concluded that 82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website. Which seems to indicate that somewhere we are not addressing media or digital literacy very well. On the reasons why this lack of media literacy is occurring, I like the viewpoint of Crystle Martin, who looks at misinformation and Warcraft in this article, saying:
Teaching information literacy, the process of determining the quality and source of information, has been an emphasis of the American Association of School Librarians for decades. However, teaching of information literacy in school has declined as the number of librarians in schools has declined.
Luckily, there are some opinions and initiatives on distinguishing between fake and real news. Danah Boyd had another look at the history of media literacy, focusing on the cultural contexts of information consumption that were created over the last 30 years. Danah shared her conclusions in a blogpost on 17 January 2017, entitled 'Did media literacy backfire?'. She concluded that media literacy had backfired, in part because it was built upon assumptions (e.g. only media X, Y and Z deliver real news) which often do not relate to the thinking of groups of people that prefer other news sites A, B and C.

Danah describes it very well:
Think about how this might play out in communities where the “liberal media” is viewed with disdain as an untrustworthy source of information…or in those where science is seen as contradicting the knowledge of religious people…or where degrees are viewed as a weapon of the elite to justify oppression of working people. Needless to say, not everyone agrees on what makes a trusted source.
The cultural and ethical logic each of us has is instilled in us from a very early age. This also means we look upon specific thinking as being ‘right’ or ‘wrong’. And to be honest, I do not feel this cultural/ethical mindset will keep us from being able to become truly media literate, as long as we talk to people across the board. As long as colliding thoughts fuel a dialogue, we will learn from each other and be able to understand each other in better ways (yes, I am one of those people who think that dialogue helps learning, and results in increased understanding, thank you Socrates).
If this is the case, then we need to do a better job of improving media literacy, including listening to people with other opinions and how they see things. It is a bit like the old days, where people from the neighborhood went to the pub, the barbershop, or any get-together where people with different opinions meet, yet feel appreciated even during heated debates.

Maha Bali, in her blogpost “Fake news, not your main problem” touches on the difficulty of understanding all levels of the reports provided in the news and other media. Sometimes it does demand intellectual background (take the Guardian, I often have to look up definitions, historical fragments etc. to understand a full article, it is tough on time and tough to get through, but … sometimes I think it is worth the effort). Maha Bali is a prolific, and very knowledgeable researcher/educator. She touches on the philosophical implication of ‘post-truth’ and if you are interested, her thesis subject on critical thinking (which she refers to in her blogpost) will probably be a wonderful read (too difficult for me). So, both Maha and Danah refer to the personal being not only political, but also coloring each of our personal critical media literacies. 

If media literacy depends on personally developing skills to distinguish fake (with some truth in it) from real (with some lies in it), I gladly refer to some guidelines provided by Stephen Downes, as they are personal. One of the statements I think is pivotal for distinguishing between fake and real news is understanding that truth is not limited to one or more media papers/sites/organisations; it is about analysing one bit of news at a time. It is not the organisation that is authoritative at all times, it is the single news item that is true, or at least as real as it can get. So, here is the list of actions put forward by Stephen Downes on detecting fake news: trust no one; look for the direct evidence (verification, confirmation, replication, falsification); avoid error (with the major sources of error being prediction, relevance, precision, perspective); take names (based on trust, evidence and errors); and as a final rule he suggests diversifying your sources (which I really believe in, the pub analogy).

Another personal take on detecting fake news comes from Tim O'Reilly who describes a personal story, and while doing so he sheds some light on how an algorithm might be involved. 

Thinking about algorithms, you can also turn to some fake news detectors:

The BS detector: a fabulous extension to the Mozilla browser. Looks at extreme bias, conspiracy theory, junk science, hate group, clickbait, rumor mill… http://bsdetector.tech/

Snopes: started out as a website focused on detecting urban legends, and turned into an amazing fact-checking website (amazing as you can follow the process of how they look at a specific item and then decide whether it is fake). (http://www.snopes.com/)

And finally, for those who like to become practical asap: a lesson plan on fake news provided by KQED http://ww2.kqed.org/lowdown/wp-content/uploads/sites/26/2016/12/Fake-news-lesson-plan.pdf

In my view, the increased acceptance of the idea of fake news is related to the increased divide within society. So, in a way I agree with Danah Boyd: we read and agree with specific people and news sources, and so we filter our sources down to those people and media. Seldom do we read up on sources from media we do not agree with, or people we disagree with. It used to be different, when specific topics were discussed in our community, with a mix of ideas and preferences.
So maybe media literacy could be done at a community level, where everyone gets together and shares their opinion on certain topics. We recreate the local pub or café, where everyone meets and gets into arguments about what they believe (or not). Media literacy – to me – is about embracing diversity of opinion, listening, seeing the arguments from the other side and… making up your own mind again.

So, coming back to President Trump's references to fake news. In terms of increasing media literacy, I do not have a problem with referring to something as fake news; I do have a problem with that claim not being explained: what is fake about it? Why? And with that, I mean a real explanation, not simply repeating ‘this is fake news. It is. I tell you it is’ (feel free to imagine the tone of voice in which such a sentence might be delivered). Give me the facts, because I do want to know why you or anyone else is labeling something as true or false.