Welcome to 1981: The IBM PC Webserver

This webpage is hosted on an original IBM PC, model 5150. The 5150 is the proto-PC: every PC today descends directly from this machine. Its influence on how we work and play every day is immeasurable!

The page you're viewing is stored on a 360 KB floppy disk, which is currently spinning madly in a little office on the Puget Sound: a lost echo of computing past.

[Image: An IBM PC model 5150 with a model 5153 Color Display sitting on a desk. mbrutman's mTCP HTTPSERV is running on it, serving this page.]

Who am I?

My name is Michael Shriver. I work at the College of the Environment at the University of Washington. This page is part of one of the many projects that constitute one of my hobbies: retrocomputing.

In addition to old computers, I am a licensed amateur radio operator, callsign K1RPN. I am currently serving as the trustee for the Amateur Radio Club at the University of Washington (ARCUW).


Weblog

On the University of Washington's "AI" Task Force Report

Friday, October 25th, 2024

Recently the University appointed a Task Force to provide recommendations regarding the use and prioritization of "AI" at the UW. The task force completed a report, and a survey was circulated to the community requesting feedback. Below is (more or less) my response to the report:


"AI" is an imprecise and inaccurate term for whatever it is you are trying to convey, so I will assume you mean "Machine Learning" algorithms and generative models.

The task force's imprecise language and marketing-speak obscure any meaning; I struggled to find anything of substance in the report. Notably, I have not seen a single concrete 'goal' with a quantifiable metric of success. Start there.

Just as 'basic computer literacy' involves developing mental models of how computers function, not just training users on rote point-and-click memorization, basic literacy in AI MUST include at the very least a fundamental knowledge of what AI is and isn't. It MUST include a basic understanding of the statistical concepts that underpin ML: in the end, 'AI,' 'ML,' or whatever you call it, is simply a massive statistical model. It cannot think, it cannot reason, it has no consciousness. If your 'literacy' is simply training people how to write prompts that manipulate generative models, it will be a failure.

Fundamentally, "AI" is a tool to be used to benefit people, not the other way around. ML techniques can be used to make tedious tasks more efficient, just as traditional computers can. "AI" is not capable of reasoning, thinking, writing, or creating; treating it as if it were cheapens human labor and creativity. Developing AI with those goals will devalue the UW's faculty, staff, and students: that is, the people of the UW. Ultimately, the value of the University is only as high as the value of its people.

DO NOT LOSE SIGHT OF THIS TRUTH. Otherwise, you will have bought wholesale into a hype scam that will make the University a laughingstock of academia and cost the community for decades. That you have included the example of "using ChatGPT … to help answer student questions on course message boards" is a huge red flag that your task force has not taken this perspective to heart. Do not think you can replace skilled labor (teaching, TAing, mentoring) with a machine and have it be a successful venture.

Another red flag from the Task Force Report: your team has placed 'AI literacy' courses (a term which is not defined in any way in the report) in the same sentence as mandatory ethical trainings like Title IX, as if they were equivalent. This is at best tone-deaf and at worst actively harmful. Unfortunately, since the objectives of "AI literacy" are not spelled out, I have no idea which it is. At the very least, you need to be explicit that "AI literacy" includes an emphasis on safety, equity, and appropriate usage of "AI."

I am highly skeptical of the claim that AI can be used to benefit underprivileged students. AI models have, time and time again, been shown to be built on prejudiced, racist, sexist information. And as machines can only imitate, they will produce the same. Before implementing any AI initiative with those goals, you must go above and beyond to demonstrate that it will not harm the people it is ostensibly meant to enable.

https://www.wired.com/story/google-microsoft-perplexity-scientific-racism-search-results-ai/

Aside from all of this is something I did not see addressed at all: the ballooning energy and climate costs of the "AI" industry. I am a member of the College of the Environment, a college created recently with the aim of becoming the foremost academy of environmental studies in the country. The UW cannot hold this goal and also fully embrace an industry that is so wantonly accelerating our energy usage and climate impact. This is a question of existential importance. It affects equity, health, safety, and the very future of our university, state, nation, and world.

No amount of 'equity gains' that AI can bring will outweigh the fact that the disproportionate impacts of climate change fall on the backs of the most oppressed members of our community.

It would be beneficial for every member of the task force and UW leadership to really read and internalize the following articles:

https://aeon.co/essays/is-ai-our-salvation-our-undoing-or-just-more-of-the-same

https://aeon.co/essays/can-computers-think-no-they-cant-actually-do-anything

https://web.archive.org/web/20211002104454/http:/tech.mit.edu/V105/N16/weisen.16n.html


Find me elsewhere on the Internet: Bluesky | Mastodon | UW Homepage