Artificial intelligence has a range of uses in government. It can be used to further public policy objectives, as well as assist the public to interact with the government. According to the Harvard Business Review, "Applications of artificial intelligence to the public sector are broad and growing, with early experiments taking place around the world." Hila Mehr from the Ash Center for Democratic Governance and Innovation at Harvard University notes that AI in government is not new, with postal services using machine methods in the late 1990s to recognise handwriting on envelopes to automatically route letters. The use of AI in government comes with significant benefits, but also carries risks.
1. Uses of AI in government
The potential uses of AI in government are wide and varied, with Deloitte considering that "Cognitive technologies could eventually revolutionize every facet of government operations". Mehr suggests that six types of government problems are appropriate for AI applications:
Large datasets - where these are too large for employees to work efficiently and multiple datasets could be combined to provide greater insights.
Diverse data - where data takes a variety of forms such as visual and linguistic and needs to be summarised regularly.
Procedural - repetitive tasks where inputs or outputs have a binary answer.
Predictable scenario - historical data makes the situation predictable.
Resource allocation - such as where administrative support is required to complete tasks more quickly.
Expert shortage - where AI could answer basic questions in place of scarce specialists, and niche issues can be learned from existing data.
Mehr states that "While applications of AI in government work have not kept pace with the rapid expansion of AI in the private sector, the potential use cases in the public sector mirror common applications in the private sector."
Potential and actual uses of AI in government can be divided into three broad categories: those that contribute to public policy objectives; those that assist public interactions with the government; and other uses.
1.1. Assisting public interactions with government
AI can be used to assist members of the public to interact with government and access government services, for example by:
Filling out forms
Assisting with searching documents e.g. IP Australia’s trade mark search
Answering questions using virtual assistants or chatbots (see below)
Directing requests to the appropriate area within government
Examples of virtual assistants or chatbots being used by government include the following:
Australia's National Disability Insurance Scheme (NDIS) is developing a virtual assistant called "Nadia" which takes the form of an avatar using the voice of actor Cate Blanchett. Nadia is intended to assist users of the NDIS to navigate the service. Costing some $4.5 million, the project has been postponed following a number of issues. Nadia was developed using IBM Watson; however, the Australian Government is considering other platforms, such as Microsoft Cortana, for its further development.
The Australian Government's Department of Human Services uses virtual assistants on parts of its website to answer questions and encourage users to stay in the digital channel. As at December 2018, a virtual assistant called "Sam" could answer general questions about family, job seeker and student payments and related information. The Department also introduced an internally facing virtual assistant called "MelissHR" to make it easier for departmental staff to access human resources information.
The Australian Taxation Office has had a virtual assistant called "Alex" on its website since February 2016. As at 30 June 2017, Alex could respond to more than 500 questions, had engaged in 1.5 million conversations and resolved over 81% of enquiries at first contact.
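The question-answering behaviour of assistants like those above can be illustrated with a minimal sketch. The FAQ entries and the keyword-overlap matching rule below are hypothetical placeholders for illustration only; they do not reflect the actual data or logic of Alex, Sam, or Nadia, which are built on commercial platforms such as IBM Watson.

```python
# A minimal keyword-matching FAQ assistant, sketching the kind of
# first-contact question answering that government chatbots perform.
# All entries and the matching rule are illustrative assumptions.

FAQ = {
    ("lodge", "tax", "online"): "You can lodge your tax return online.",
    ("student", "payment"): "Student payment questions are answered on the payments page.",
    ("opening", "hours"): "Offices are open 9am-5pm, Monday to Friday.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose keywords best overlap the question,
    or a fallback that hands the enquiry to a human operator."""
    words = set(question.lower().split())
    best_response, best_score = None, 0
    for keywords, response in FAQ.items():
        score = len(words & set(keywords))
        if score > best_score:
            best_response, best_score = response, score
    if best_response is None:
        return "I'm not sure - let me connect you with a staff member."
    return best_response
```

A query such as "Can I lodge my tax form online?" overlaps the first entry on "lodge", "tax" and "online" and receives that answer; a question matching no entry falls through to a human, which is how such assistants keep their unresolved share (e.g. the ~19% Alex did not resolve at first contact) out of the automated channel.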
1.2. Other uses
Other uses of AI in government include:
2. Public Sector Regulation
Public sector regulation of artificial intelligence is considered necessary to both promote and manage AI, but challenging. In 2017 Elon Musk called for regulation of AI development. In response, politicians expressed skepticism about the wisdom of regulating a technology that is still in development. Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty.

Subsequently, the development of public sector strategies for the management and regulation of AI has been increasingly deemed necessary at the local, national, and international levels and in a variety of fields, from public service management to law enforcement, the financial sector, robotics, the military, and international law. For instance, China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, and leading to proposals for global regulation. In the US, steering on security-related AI is provided by the National Security Commission on Artificial Intelligence.
3. Potential benefits
AI offers potential efficiencies and cost savings for the government. For example, Deloitte has estimated that automation could save US Government employees between 96.7 million and 1.2 billion hours a year, resulting in potential savings of between $3.3 billion and $41.1 billion a year. The Harvard Business Review has stated that while this may lead a government to reduce employee numbers, "Governments could instead choose to invest in the quality of its services. They can re-employ workers’ time towards more rewarding work that requires lateral thinking, empathy, and creativity - all things at which humans continue to outperform even the most sophisticated AI program."
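As a rough sanity check on Deloitte's figures, dividing the dollar savings by the hours saved shows both ends of the range imply roughly the same fully loaded hourly labor cost. The ~$34/hour figure is derived here for illustration; Deloitte's report does not state it directly.

```python
# Back-of-the-envelope check of Deloitte's estimate: the implied
# hourly labor cost at each end of the range. The ~$34/hour figure
# is derived from the published numbers, not stated by Deloitte.

hours_low, hours_high = 96.7e6, 1.2e9        # hours automated per year
savings_low, savings_high = 3.3e9, 41.1e9    # dollars saved per year

rate_low = savings_low / hours_low           # implied $/hour, low end
rate_high = savings_high / hours_high        # implied $/hour, high end
```

Both ratios land near $34/hour, suggesting the low and high scenarios differ mainly in how many hours are assumed automatable, not in the assumed cost of an employee-hour.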
4. Potential risks
Potential risks associated with the use of AI in government include AI becoming susceptible to bias, a lack of transparency in how an AI application may make decisions, and unclear accountability for any such decisions.