First published on www.ITSInternational.com
Governments must gain the trust of their citizens when it comes to increasing the use of artificial intelligence (AI), warns a new report.
The Centre for Public Impact (CPI), a think tank founded by consultancy Boston Consulting Group, said that public trust in AI is low. While AI has the potential to make public transport responsive to traveller needs in real time, for example, its influence is viewed negatively by some.
Launching an action plan for governments at the Tallinn Digital Summit in Estonia, CPI said that many governments are not adequately prepared, and are not taking the right steps to engage and inform citizens of where and how AI is being used.
Such information is vital to give AI “trust and legitimacy”, CPI believes. Programme director Danny Buerkli says: “When it comes to AI in government, we either hear hype or horror, but never the reality.”
Its paper ‘How to make AI work in government and for people’ suggests that governments:
- Understand the real needs of your users - identify their actual problems and build systems around them (not around some pretend problem just to use AI)
- Focus on specific and doable tasks
- Build AI literacy in the organisation and the public
- Keep maintaining and improving AI systems - and adapt them to changing circumstances
- Design for and embrace extended scrutiny - be resolutely open with the public, your employees and other governments and organisations about what you are doing
Boston Consulting Group said that a survey of 14,000 internet users in 30 countries found that nearly a third (32%) of citizens are ‘strongly concerned’ that the moral and ethical issues raised by AI have not been resolved.