In mid-March of this year, some 250 higher education faculty, administrators, programmers and others involved in educational technology policy and development gathered at the University of North Texas for the fourth annual U.S.-China Smart Education conference. It was a fascinating merging of cultures and subjective perspectives centered on some of the distinctly objective aspects of coding, hardware and standards.
UNT was the perfect host for this event: it shared its technology growth vision; engaged its information and technology faculty members and students, including those from China who offered informal translation services for the rest of us; and provided a Texas venue with plenty of western music, high-flying camera drones over the Texas Motor Speedway and an immersion in Texas culture.
The technologies in focus were state-of-the-art augmented and virtual reality supported by artificial intelligence. Clearly, the attending faculty members from universities across Asia, Europe and North America valued the qualities and potential of rich media to support teaching and learning. We talked pedagogy, engagement, adaptive delivery and related strategies that could best be floated on these vehicles of technology. Simulations and immersive activities were among the applications most frequently discussed. We eagerly queued up to test the demonstrations that so richly filled the array of goggles shared by vendors.
On my flight home, I pondered the takeaways from this conference. Certainly, there was universal enthusiasm among those attending the U.S.-China conference for the ways in which these technologies can bring learning experiences to those at a distance. In many ways these technologies can be transformative by personalizing learning, advancing adaptive learning and creating a vast array of simulations that capture the nuances of personal interaction in limitless contexts. In one down-to-earth working example, the Developing Leadership Skills With Virtual Reality simulation presented by Carrie Straub of Mursion showed in real time how supervisor-employee exchanges can be simulated and improved.
Yet, because of the complexity of developing many of these technologies, particularly artificial intelligence, the most important role rests with the corporations that develop them. The immense power of artificial intelligence, and its independence from explicit programming for every situation, requires that effective parameters be put in place to ensure products do not go off the intended tracks. To the extent that corporations rather than universities control the basic assumptions applied within AI algorithms, our programs will carry cultural, racial and gender assumptions that may not be consistent with our own. Amy Webb, in her popular new release, The Big Nine, examines the role of the six American and three Chinese companies that are competing for leadership in AI. They share a profit motive, but their diverse cultural and corporate approaches should give us pause to consider where they will lead us. In an excerpt from the book published in Fast Company, Webb explains the importance of internal corporate governance in the development process:
The Big Nine should develop a process to evaluate the ethical implications of research, workflows, projects, partnerships and products, and that process should be woven in to most of the job functions within the companies. As a gesture of trust, the Big Nine should publish that process so that we can all gain a better understanding of how decisions are made with regards to our data. Either collaboratively or individually, the Big Nine should develop a code of conduct specifically for its AI workers. It should reflect basic human rights and it should also reflect the company’s unique culture and corporate values. And if anyone violates that code, a clear and protective whistle-blowing channel should be open to staff members.
There is much at stake in the development of AI. The “big nine” corporations are the linkages that ideally will bring cultures together and create a compass for development in this field. Action must be taken now to ensure that the underlying assumptions are in the best interests of learners. A first model of a governance framework for AI has been developed by the Personal Data Protection Commission of Singapore. The 27-page instrument is well worth reading to gain a better understanding of AI and its implications.
That document and events such as this one can spark broad, campus-wide discussion and debate about emerging technologies and how we might adopt them, and can begin the process of setting a framework at your own institution for research and development in this field. A framework customized for your university should be in place before the occasions arise in which it will be needed (in some cases, those occasions have already arisen). So begin today to consider these issues; to delay risks unforeseen consequences for your institution and beyond.