Allan Hancock College AI summit: Artificial intelligence is here to stay
Written by PFA President Mark James Miller
AI (artificial intelligence) is the phenomenon of our time. When OpenAI released ChatGPT on Nov. 30, 2022, it became the fastest-growing consumer application in history: One million people were using it within five days.
This figure increased to 100 million in two months. As of now, 12.5 million people use it daily. AI, and all that comes with it, is here to stay.
These thoughts were among the many explored at the Allan Hancock College AI Summit on April 18. Through speakers, panel discussions, and breakout sessions, the complexities of AI were examined from every conceivable angle.
Each presenter offered a distinct take on where AI is going and where it will take us.
And while a common theme was that AI is something to be welcomed rather than feared, the need for human intervention and critical thinking was stressed over and over, lest we allow AI to get out of control.
“The future of AI is exciting and scary,” says Nancy Jo Ward, the event’s primary organizer. “It is transforming conversations and inspiring discourse. Our students,” she adds, “are fearful and curious.”
Fear of AI has been around for more than a century, long before the term itself even came into the lexicon. In 1818 Mary Shelley produced Frankenstein, "life without soul," and a century later Czech writer Karel Capek coined the term "robot," from the Czech word robota, to describe artificial beings that revolt against their human masters.
"We let our machines get out of hand," says the President in the 1964 film Fail Safe, when a computer error sends a fleet of nuclear-armed American planes to attack Moscow.
1968 saw the emergence of "HAL 9000," the malevolent computer in the sci-fi classic 2001: A Space Odyssey, and 1984 brought us The Terminator, in which Skynet, an AI defense network, becomes self-aware and sets out to destroy humanity.
AI's potential for good or for misuse, such as students using it to solve equations or write essays for them, was touched on by many of the speakers and panelists. Several noted that AI is only as good as the information it gathers.
Trudi Radtke, from Moorpark College, pointed out that “biased input will equal biased output.”
Don Daves-Rougeaux, from the California Community Colleges Chancellor's Office's Vision 2030 initiative, sounded the same warning and noted that his office is determined to use AI "to lead with equity."
“AI can help pair skills together. Twelve million new jobs are coming due to AI,” according to speaker Cecily Hastings, a relationship manager for the State of California.
Radtke emphasized that in spite of all that AI can do, companies are seeking employees who possess interpersonal and critical thinking skills.
The morning speakers and panel discussion were followed by breakout sessions in the afternoon, and more than one attendee expressed frustration at not being able to attend all of them.
These included “AI Tools and Apps,” “AI Image Generation,” “Practical Application of AI in the Classroom,” “Image Generation and Photography,” and “AI Ethics and Risk Assessment.”
Perhaps part of the apprehension AI generates is the fact that it is growing almost daily, along with the reality that no one knows where it will end. Almost from the moment ChatGPT was released, scientists, politicians, technologists, educators, and ethicists began sounding the alarm.
The Future of Life Institute called for a six-month moratorium on training AI systems more powerful than GPT-4, citing "profound risks to society and humanity."
OpenAI's leadership itself called for international regulation of AI. Abroad, the EU passed the "AI Act," which requires human oversight and transparency in high-risk AI systems.
As AI evolves and becomes more powerful, human beings will have to learn how to cope with it. "We have to keep the conversation going," says Ward, "and opportunities for discourse must be in place."