Statewide IT Conference (April 11-13, 2023)
Notes on motivation, AI/ChatGPT, and being "tech adjacent."
“You should come down for Statewide IT next week. You’re totally ‘tech adjacent.’”
This from my friend, Adam, a faculty member at IU Southeast who also co-leads the Digital Gardener Initiative and works for UITS (IU’s University Information Technology Services). Adam wears a lot of hats. And as a true tech guy who also happens to be a tenured faculty member, he glides effortlessly between these two worlds.
Me, not so much. “Tech adjacent?” I had my doubts.
But…a day trip to Bloomington with three catered meals, swag, and the potential for fancy cookies? There are worse ways to spend a random Wednesday in April.
Statewide IT, now in its 27th year, is a celebration of technical services and information technology at IU and beyond. The corporate sponsorship alone makes it worth the price of admission (which, somehow, is free for all IU faculty and staff). In the opening keynote, a speaker said “Coca-Cola” six or seven times, half-heartedly joking (but not really?) that it was required as part of the university’s contract with Coke. Amazon, IBM, AT&T, Palo Alto Networks, Elasticsearch. The better the sponsorship, the richer the swag. Or so says the old folk wisdom.
Some of the entries below are very “note-like” and stream-of-consciousness in their approach, and they tend to get more so (as well as more sparse) as the day goes on and my caffeine buzz wears off. This is one of those posts that’s mostly for me, as I tend to use Substack as a bit of an online journal for notes and idea-keeping, so my apologies if this one isn’t as reader-friendly as my usual content.
By the way, it is spring here in Indiana.
![](https://substackcdn.com/image/fetch/w_5760,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6c5a7e35-ea2d-4e5d-8aeb-9fa1ff7ed126_3024x4032.jpeg)
The Four Disciplines of Execution
Kinda sounds like it might be an ancient Chinese manual for smiting one’s mortal enemies. But it’s actually the title of the Wednesday morning keynote, which turned out to be a lot more corporate motivation talk than ancient Chinese martial arts philosophy. Still, it was a useful talk. Keep reading.
The keynote, delivered by an energetic (and possibly over-caffeinated) motivational speaker from FranklinCovey named Chris McChesney, was motivational in the purest sense, but there was substance there. For example, McChesney talked about the twin concepts of the whirlwind—i.e., the daily grind, the mountains of emails, the fires that need putting out, the stuff that has to get done—and the long-term goals we all want to pursue (i.e., finishing a big project or achieving a key performance metric). McChesney talked about strategies for balancing the daily whirlwinds we must endure with the long-term goals we all want to cross off our lists. Here are some of the notes I jotted down:
How do we get the job done in the middle of a 100-mile per hour whirlwind?
What lives at the intersection of really important and I don’t think it’s going to happen?
The Law of Diminishing Returns: the more goals you take on, the fewer you are likely to achieve or execute with excellence.
“There will always be more good ideas than there is capacity to execute.”
Good stuff. But what truly impressed me about McChesney was his dynamic speaking style and his ability to hold the audience’s attention for a solid hour without turning into a parody of that Chris Farley SNL skit. There were a lot of mildly self-deprecating jokes about having seven kids with varying levels of ADHD, attractiveness, and intelligence. The humor was slightly darker than what you would expect from a FranklinCovey speaker at 9:30 in the morning, and the laughs McChesney was getting from this crowd of techies and geeks in car coats seemed truly authentic. Then again, he’s a professional, so I would expect nothing less.
Incorporating AI to Help Students Succeed
A panel discussion moderated by Jay Gladden (IU’s Associate VP for Learning Technologies), featuring Stefano Fiorino (Manager of Research and Analytics, IU’s Office of Institutional Analytics); Joanna Millunchick (Dean, IU’s Luddy School of Informatics, Computing, and Engineering); Clayton Nicholas (Industry Research Development Specialist); and Mark Werling (IU’s Chief Privacy Officer).
ChatGPT, which is now in its fourth iteration, is causing a lot of anxiety among faculty from across the disciplines. And for good reason. Instructors are asking a lot of questions. They don’t know whether to resist or embrace generative AI and its various platformed offspring (ChatGPT, Dall-E, etc.). Some are doing a bit of both. There’s a wide range of early happenings in this space.
This is an exciting time for education because it means that we have to rethink the way that we do things. And ignoring or banning the use of generative AI in the classroom or among students is not an option; like the calculator and the mass-printed book and the number 2 pencil, we have to embrace this new technology 100%, learn how it works, and learn how to live and work with(in) it.
New technologies come along all the time, and there has always been anxiety about what the latest one will do to the university. One of the panelists mentioned medieval professors reading books aloud to students (because books were prohibitively expensive), and the fear that the printing press would spell the end of the medieval university. This didn’t happen, obviously (I mean, Notre Dame is still a medieval university), but it did change the university. So perhaps a better way to frame the anxiety surrounding generative AI would be to ask, “How will this technology change what we do in higher ed, my discipline, the broader field of teaching and learning, etc.?”
Still, I get the impression that many faculty are not impressed or satisfied with this big picture re-framing of the issue. There’s a tremendous amount of hand-wringing over cheating and plagiarism, for example, which I completely understand. But we tend to focus on these negatives at the expense of the transformative possibilities, pedagogical and otherwise, that generative AI affords. As others have pointed out, if all we focus on is the possibility that students are cheating, then we risk missing out on a more substantive engagement with the technology, and we model for students an uncritical and fearful approach to new literacy technologies that is counterproductive.
Instead, how can we bring AI/ChatGPT into the embrace of the (writing) classroom? Just a few ideas that came out of the panel:
Use AI to write a first draft of a paper, then have students perform extensive human revision. Ask students to reflect on this revision process versus their more traditional revision processes.
Use AI to generate ideas for papers and blog posts.
Use ChatGPT to open up discussions about style.
Have students write a summary and then compare their summary to the one generated by ChatGPT.
Encourage students to learn more about how ChatGPT and other forms of generative AI work. Showcase the various online resources (most of which are free) for learning more about large language models and generative AI. Take class time to talk about these technologies, how they work, and how they are changing both work and learning.
Here are some of the questions that were put to the panel. I’ve tried to capture the essence of the responses.
What do you see as the future of the workforce given the potential disruption that AI/ChatGPT can cause?
Industry will need to embrace AI/ChatGPT as well.
Some predictions have estimated that 20-30% of all jobs will just go away.
We need to be sure that we understand the ramifications of new technology.
We need to be clear what our expectations are for workers and students. If instructors don’t want students using ChatGPT, then they should tell them. But this is not likely to be a productive approach. Instructors should bring AI/ChatGPT into the classroom, talk candidly with students about what it’s good at and what it’s not so good at, and learn how to use it well so that they can integrate it into the classroom as a new technology tool.
Who needs outreach? Who needs the help from us right now? How can generative AI help with this outreach?
Academic advisors often have 150 students to keep track of in a given semester, which is a staggering number. But what if we can harness AI to look for patterns in student data and performance metrics (e.g., how often students are logging on to Canvas) or answer simple questions for students (e.g., which class should I take next semester?)? What if advisors can offload some of these rote questions and requests to AI and therefore have more time for the complex discussions that (as yet) only humans are truly capable of having?
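The pattern-spotting piece doesn’t even require fancy AI to sketch out. Here’s a minimal, hypothetical version of the kind of triage an advisor might offload—student names, the Canvas activity numbers, and the cutoff are all made up for illustration:

```python
# Hypothetical Canvas activity data: days each student logged in
# over the last 30 days. A real system would pull this from the LMS.
activity = {
    "alice": 22,
    "ben": 3,
    "carmen": 15,
    "dev": 1,
}

THRESHOLD = 5  # assumed cutoff; a real advising tool would tune this per course


def flag_disengaged(activity, threshold=THRESHOLD):
    """Return students whose login activity falls below the threshold,
    sorted least-active first, so an advisor knows whom to contact."""
    return sorted(
        (name for name, days in activity.items() if days < threshold),
        key=lambda name: activity[name],
    )


print(flag_disengaged(activity))  # → ['dev', 'ben']
```

The point isn’t the ten lines of code; it’s that a machine can do this scan across all 150 advisees every morning, leaving the advisor’s time for the conversations that actually require a human.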
I think a similar set of strategies would work well in the writing classroom, too.
AI/ChatGPT as your own personal concierge. How might this concept work in the realm of teaching, learning, and student success?
The enhanced search capabilities of AI/ChatGPT: what if we can use the tech to call up a document based on a vague description of the document rather than a precise subject heading? What daily benefits might this have for knowledge workers of all kinds?
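That vague-description search idea stuck with me, so here’s a toy sketch of the ranking mechanics. Real semantic search uses learned vector embeddings from a language model, which match on meaning rather than shared words; this crude word-overlap version (with made-up document names and contents) only conveys the basic shape—score the query against every document, return the best match:

```python
from collections import Counter
import math


def embed(text):
    # Toy "embedding": a bag-of-words count vector. Real systems use
    # dense vectors produced by a language model.
    return Counter(text.lower().split())


def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


# Hypothetical document store (names and contents invented for this sketch).
documents = {
    "leave_policy.txt": "policy on vacation sick leave and paid time off for staff",
    "parking_map.txt": "campus parking garages visitor parking and permit zones",
    "travel_form.txt": "form for reimbursement of travel expenses and receipts",
}


def search(query):
    q = embed(query)
    return max(documents, key=lambda name: cosine(q, embed(documents[name])))


# A loose description rather than an exact subject heading:
print(search("which form do I use to get travel expenses paid back"))  # → travel_form.txt
```

With real embeddings, even a query sharing no words with the document (“getting money back for a trip”) would land on the right file—that’s the daily-life payoff for knowledge workers the panel was gesturing at.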
What are the ethical dangers of AI? What about privacy concerns?
Algorithmic bias is an inherent danger in expanding the use and reach of AI. We should continuously be asking ourselves, “Are we being equitable and ethical in our use of these technologies?” In healthcare, for example, if someone feeds a batch of patient data to ChatGPT, asks it for a patient profile, and then uses that profile in some way, is that a breach of ethics? Of privacy?
Transparency is also key here. Echoing the earlier remarks on embracing ChatGPT in the classroom: patients, workers, students, and others should be notified whenever generative AI is in use.
Will AI be a factor in narrowing or increasing the socio-economic divide in higher ed?
These technologies will likely push the rote, menial knowledge work down and elevate the work that only humans can do—tasks that require global intelligence, for example, or tasks that require careful deliberation of competing values.
We tend to project our humanity onto machines, but machines are machines. They are not human. The inter-subjectivity of the human experience cannot be replaced by machines no matter how advanced they may be.
My takeaway from the generative AI panel: let’s not miss the forest for the trees. There’s a temptation to ignore the affordances of AI/ChatGPT because we are so concerned about students using the technology to cheat.
Refocus on Employee Engagement and Well-being
Facilitated by Amy Betule, Learning and Development Manager, IU Human Resources.
Slow down and pay attention to “red thread” moments. I had never heard of this concept before, but a red thread is basically a positive or warm and fuzzy feeling that you get at work: moments where we feel connected and in control of what we are doing, or moments where we get to work with people we enjoy. Think creatively about your work and how you can pull at those “red threads” more and more. Research suggests that if workers can get red thread moments up to a “critical mass” of about 20% of their work, it can make a real difference in how content we are with our jobs.
Start writing down the red threads: what made me feel good about this week? Then start reflecting on this every week.
“Engagement and wellbeing are highly reciprocal, with each influencing the future state of the other to a similar degree.”
Extended Reality (XR) Initiative: Helping Faculty Introduce the Virtual World into their Curriculum
Featuring Randy Newbrough, Todd Kirk, Jeanette Lehr, Kevin Jones, and Jenny Johnson.
Speaking of whirlwinds, this panel presentation was a cyclone of an overview of “XR” or Extended Reality technologies. Here are some notes I jotted down on quick take-aways:
Much like Generative AI/ChatGPT, XR brings with it a host of ethical and pedagogical concerns. If the goal is to re-create or extend reality, then what are the ethics surrounding the recreation of traumatic events?
IUPUI’s University Library houses a 3DVR Lab, which features 15 Meta Quest 2 headsets and one MSI VR One Backpack PC (for mobility), as well as a lot of other equipment that can be used on-site or checked out.
Kevin Jones from IUPUC gave a deliriously awesome presentation, as he is known to do. (Kevin is also a Digital Gardener.) This stuff has definite potential; this is not Second Life 2.0, for those who remember the SL fad from a decade or so ago.
The Web and the Story: How SEO/Content Strategy Increase Engagement and Drive User Experience
Featuring Jesse Lambertson, Cody Weakley, Jamie Witt, and Shea Lazansky.
This was an interesting panel, mostly about how to use SEO to draw eyeballs to your webpage. The panelists all seemed to be in agreement on a few key points that I may (but more likely may not) agree with:
that all IU websites should be written at a 6th grade reading level (!)
that the prospective student audience is always the primary (and most important) audience, and
the implied notion that the primary purpose of a university website is to attract prospective students to your programs and campuses.
This led to a lively discussion and I learned some useful tidbits on SEO and content strategy.
“I Wish My Colleagues Knew”: The Neurodiverse Perspective at IU
Featuring Eric Postel, Systems Administrator, IU Print.
This presentation was all about the sensitivities and subtleties of language as it relates to neurodivergent people (people with ADHD, autism, dyslexia, Tourette syndrome, etc.). I had never considered the challenges that neurodivergent people face in the workplace, and I found this presentation eye-opening. (In fact, some of the topics we touched on made me wonder if I might have some neurodivergent tendencies or qualities myself.)
One major takeaway was that everyone needs to be engaged in trainings on neurodiversity and accommodations for neurodivergent people. I was astonished at how little I knew going into this presentation. IU needs to develop a self-guided, asynchronous module or set of modules to teach people about neurodiversity. I asked whether one already existed, and they pointed me to LinkedIn Learning modules, which are fine, but I think we can do better and create our own.
This was a fun conference. Great food and swag, engaging, relevant panels, and beautiful weather. I would go back today, but apparently there are only a couple of panels on Thursday and most of the fun stuff is already over. I do plan to go back next year, and perhaps even try to present something of my own.
Random thought: I listened to Dylan’s Blonde on Blonde on the way down to Bloomington Wednesday morning and finished it on my drive back (it is a double album, after all). I love Bob Dylan’s music and life’s work. He’s easily one of my top five artists. But I have never really understood the genius of this particular album. Everyone falls all over themselves to tell you what a masterpiece it is—a work of staggering, nearly incomprehensible genius. But I don’t get it. It’s not bad, don’t get me wrong. But I feel like John Wesley Harding or Nashville Skyline are just as good. If you want to talk about a true masterpiece, it would be Blood on the Tracks. Hands down the greatest breakup album ever set to wax.