Boston University (BU) announced that it has formed an AI Task Force that will assess both the benefits and drawbacks of generative artificial intelligence, as well as define pathways for the use of the technology on campus.

“All of us are witnessing in real time the rapid emergence of generative AI across so many aspects of daily life, including its potential to transform the quality of education, learning outcomes, and experiences in both promising and concerning ways,” Kenneth Lutchen, interim university provost and chief academic officer, said in an email sent to faculty and staff.

In a statement, the university said the task force is charged with developing specific recommendations for how generative AI can be used successfully in education and research, as well as policies and best practices regarding the use of AI by students and faculty to help prevent its misuse or negative impact.

The university gave the task force four main objectives to accomplish:

  • Gather information about initiatives throughout BU and effective practices being adopted at other major research universities that are likewise focused on balancing the creative, multidisciplinary, and ethical use of generative AI;
  • Develop specific recommendations for how generative AI can be used to amplify learning outcomes in undergraduate and graduate education and for faculty and students in research;
  • Create a set of recommended policies and best practices that can be adopted – and adapted – university-wide regarding the use of AI by students and faculty and help prevent its misuse or negative impact on learning outcomes or in research; and
  • Lay groundwork for a university-wide repository of examples of positive uses for generative AI that others can adopt, as well as misuses others should want to avoid.

Lutchen acknowledged the potential opportunities for innovation, but said the university had concerns about the challenges AI poses to academic integrity, intellectual property, and job security. There are efforts already underway to create AI guidelines in some parts of the university, but Lutchen said current efforts are not entirely aligned on goals or approaches.

“We are excited for the work of this task force – and the vast and varied expertise these members bring – to help sort through important questions and arrive at shared solutions, best practices, and ethical approaches BU can carry forward as an institution,” he said.

The task force is cochaired by Yannis Paschalidis, a distinguished professor of engineering and director of the Rafik B. Hariri Institute for Computing and Computational Science and Engineering, and Wesley Wildman, a professor of philosophy, theology, and ethics and of computing and data sciences. The task force also consists of 14 members of faculty and administration from across the university, including both the Charles River Campus and the Medical Campus.

The university said the task force is expected to deliver interim recommendations by December 31, and a full report during the spring semester.

“The entire society is fascinated and worried about generative AI models and their impact, and similarly, we should be in academia,” said Paschalidis. “The task force will listen to many expert voices on campus that view these issues from different angles and will seek to synthesize basic high-level guidelines on the impact of generative AI in our teaching and research mission. We will be formulating guidelines and not detailed prescriptions that are unlikely to remain relevant for too long.”

Kate Polit is MeriTalk SLG's Assistant Copy & Production Editor, covering Cybersecurity, Education, Homeland Security, Veterans Affairs