
Govt technology procurement is a ‘vulnerability’

According to Human Rights Commissioner Ed Santow, the procurement process is a major weakness in the government's development and use of new technologies, and human rights need to be better considered in it.

After three years of research and work, the Australian Human Rights Commission last week released its final report on Human Rights and Technology, with nearly 40 recommendations to the government to ensure that new technologies are fair and human rights are protected.

A number of the recommendations relate to public procurement, which Santow says is currently a significant weakness.

“The government needs to be able to ask the right questions in the procurement process. That is a point of real vulnerability. If you don’t do that right, you can end up with an AI system that does real damage,” Santow told InnovationAus.

Procurement is the next big challenge for government innovation. Photo credit: Sunflowerey / Shutterstock.com

“If you do that right, you can make sure that the procurement process is asking the right questions and providing the right protection, and then you have a really robust system that the government can trust.”

The commission called on the government to direct the Treasury and the Digital Transformation Agency to amend current procurement laws, rules and guidelines to ensure that human rights are protected in the design and development of new technologies.

“It is becoming more and more common to use public procurement processes as a lever to influence behaviour in order to achieve different policy outcomes. It is vital that the government procures AI-powered decision-making systems that are secure and protect human rights,” the report said.

“The Australian government generally works with and relies on the private sector to develop AI-powered decision-making systems. Accordingly, public procurement should focus on ensuring that these systems are secure and respect human rights.”

This should start with a review of current procurement rules and policies to ensure that they reflect a human rights-based approach to new technologies. Safeguards should then be put in place if the government wants to procure an AI-powered decision-making system, with an emphasis on making the tool transparent, explainable and accountable.

The Human Rights Commission cited the UK government’s guidelines for AI procurement, released last year, as an example of a way forward for Australia.

These guidelines are intended to ensure that the risks associated with the technology are identified and managed early in the procurement process, and that the explainability and interpretability of algorithms are treated as design criteria.

There is also a pressing need to improve the public sector’s understanding of these new technologies and the risks involved, Santow said.

“You need to understand high-level strategic risks and opportunities so that you can make a sound decision about where it is safe and effective to use AI in government agencies or in an enterprise,” he said.

“The government needs to understand more precisely where the weaknesses are in the procurement process so that it can execute the process well and address these risks.”

This doesn’t require every officer to have a thorough knowledge of AI, but general training on AI is required, he said.

“You don’t have to be able to take a car apart and build it from scratch, but you do have to know that when you push this lever it goes forward, so you can use it safely. That level of upskilling is important,” said Santow.

“It’s not about making everyone a data scientist or AI expert, but about empowering people to act effectively in a world that is increasingly working with AI.”

The Human Rights Commission also recommended that the Digital Transformation Agency’s Digital Sourcing Framework for ICT procurement be amended to include specific references to protecting human rights and that every provider of an AI-powered decision-making system must conduct a human rights impact assessment.

Government decision-makers should also be given additional guidance to help determine whether an AI-powered decision-making system will support compliance with the legal measures that guarantee a right to redress, and to ensure compliance with relevant Australian and international standards, the report said.

Do you know more? Contact James Riley via email or Signal.
