The City of Boston has released guidelines for its staff on the use of generative artificial intelligence (AI) and encouraged responsible experimentation with the
technology. The guidelines aim to help employees understand the risks associated with generative AI while supporting those interested in exploring its potential.
Generative AI refers to the use of machine learning techniques and large datasets to create content based on user prompts. Text tools like ChatGPT and Bard, as well as image tools like DALL-E, fall under the category of generative AI. Since the emergence of ChatGPT and similar tools, there has been extensive debate about the opportunities and risks associated with the technology.
In a memo to staff, Santiago Garces, Boston's Chief Information Officer, stated that the guidelines were developed to help the city stay informed about new technologies and their potential impact on its workforce. The guidelines apply to all staff members except those working in Boston Public Schools, as the use of generative AI in educational settings requires additional consideration.
The guidelines provide examples of use cases where generative AI can be applied, such as drafting memos, documents, and letters, preparing job descriptions, summarizing text, repurposing content, and translating material. The key messages in the guidelines emphasize the importance of not sharing confidential information in prompts and of disclosing when content has been generated using AI. Employees are instructed to fact-check and review all AI-generated content, especially if it will be used for public communication or decision-making.
The guidelines also address concerns about job displacement, stating that Boston does not plan to replace a significant portion of its workforce with AI. Instead, the city sees an opportunity to empower its workforce, improve work quality and efficiency, and explore new possibilities beyond current capabilities. The guidelines underscore the city's responsibility and accountability for the content generated by AI.
The principles outlined in the guidelines include empowerment, inclusion and respect, transparency and accountability, innovation and risk management, privacy and security, and public purpose. The guidelines provide tips on writing effective prompts for generative AI and offer links to external resources for further learning.
While the guidelines are interim, the Department of Innovation and Technology plans to develop future policies and standards. The department also intends to conduct workshops to provide staff with more in-depth knowledge about generative AI technologies.
Beth Noveck, Director of the Burnes Center for Social Change at Northeastern University and a contributor to the development of Boston's guidelines, commended the city's proactive and responsible approach. She emphasized that Boston's generative AI policy sets a new precedent for how governments can embrace AI and guide its use in the public workforce. Noveck believes that practical experience with AI technologies will inform sensible regulations and enable AI to make positive contributions to governance. Other governments, including Yokosuka in Japan, Singapore, and Dubai, are also exploring the potential of generative AI.

Photo by Daderot, Wikimedia Commons.