12 Sep - 23 Sep 2022

E-Consultation: Accessing Information in the Digital Era

Aseem Andrews • 8 September 2022

Accessing Information in the Digital Era: Artificial Intelligence, e-Governance and Access to Information

Artificial Intelligence (AI) and e-Governance can play an important role in improving access to information in our digital world. They can help bridge the digital divide by giving citizens access to tailor-made, accessible information and by making services more efficient. Citizens can access public sector information and services nearly instantly, in a transparent and accessible way.

However, these developments also raise questions about fundamental rights and the ethical use of AI and e-Governance by public institutions. Since AI uses citizen data, how do we protect the privacy of citizens and trust those who are shaping the future of AI? How can we ensure AI and e-Governance are built for the benefit of all, including women, persons with disabilities and marginalized groups?

It is clear that everyone is affected in one way or another, as daily life is increasingly shaped by new technologies.

Participants at the upcoming 2022 global celebration of Universal Access to Information will adopt the Tashkent Declaration, which will reaffirm Member States' commitment to recognising and respecting the fundamental right of access to information, and will cover the principles of good governance and the issues that Artificial Intelligence (AI) and emerging technologies raise for access to information. Developed by experts to address the new challenges posed by digitalization, the Declaration seeks to reaffirm the commitment to the right of access to information, which is crucial to the advancement of human rights and to sustainable development.

Invitation: participate and help pave the way for the future of AI and e-Governance for Access to Information.

UNDP and UNESCO promote multi-stakeholder engagement and accountability by jointly launching this global online eConsultation. This initiative gives any registered Hub user the opportunity to share feedback, views, questions, comments and concerns on the Tashkent Declaration and on how it can best respond to current trends and challenges, in line with Agenda 2030. The Declaration will be adopted during the global celebrations of the International Day for Universal Access to Information (IDUAI) on 28 September 2022.


Post your contribution in the comment box below, guided by the following questions, which have been shared to help frame the discussion. You can also reflect on previously posted comments or on any other related issues.

  • How could AI and e-Governance enhance the right to Access to Information?
  • How much can we rely on AI and e-Governance to access public information, and how can we make sure that AI and e-Governance services do not leave anyone behind?
  • What are the incentives and policies to entice governments and citizens to embrace the latest tools in support of e-Governance and AI?
  • How can citizens’ rights be guaranteed when e-governance services are outsourced? 
    Which mechanisms should be put in place to ensure transparency and accountability of the use of Artificial Intelligence in e-governance and decision making?
  • How can privacy and personal data be protected in the age of digital governance and transparency and open data?

Multiple themes may emerge from the discussion, and there are many different aspects to consider when outlining the opportunities and challenges of accessing information in the digital era. We hope this forum provides a space to discuss your views on how the Tashkent Declaration captures current trends, and how it could be implemented.

Contributions are open until 23 September 2022. We look forward to a fruitful e-discussion with you!


Comments (17)


Dear participants,

Thank you in advance for visiting the page and adding your valuable contributions in support of advancing access to information as integral to Agenda 2030.

We look forward to joining together before and during the International Day for Universal Access to Information to shape the future of our #RightToKnow 2.0. The SDG 16 Hub offers an excellent platform to brainstorm and share views on how to make AI and e-Governance serve the public interest and promote inclusive approaches to access to information, as well as any other concerns and remarks you may have on the matter.

Universal Access to Information is a key pillar of resilient and inclusive knowledge societies and of sustainable development. With 8 years remaining to achieve the SDGs, do not miss the opportunity to contribute to the ATI Declaration, which aims to bring us closer to Agenda 2030 through universal access to information.



Aseem Andrews Moderator

Dear Colleagues,

It gives me great pleasure to co-moderate this eConsultation on a very important topic of great relevance for all of us. We live in the age of an unprecedented Information Big Bang! Experts estimate that, with all the technological advancements taking place, global data is doubling every two years, leading to a revolution in information technology. With faster networks and other technological advancements, this data explosion will grow even faster and bigger in the future.

Artificial intelligence (AI) and other frontier technologies like machine learning are driving big data growth! The growth and deployment of these frontier technologies has thrown up several questions about their ethical use and about how to ensure equitable access to information in a way that does not impede personal privacy or the protection of personal information.

Therefore, as my co-moderator has emphasized, let us not waste this opportunity to contribute to the Tashkent Declaration and to access to information in the Digital Era! We remain available for any questions or comments.


Aseem Andrews


Jaco du Toit

Dear colleagues, we are looking forward to your comments, which will be taken into account during the discussions in Tashkent, Uzbekistan during the celebrations of the International Day of Universal Access to Information on the theme "Artificial Intelligence, e-Governance and Access to Information".

-What is the role of e-Governance and artificial intelligence in ensuring the right to information on an equal basis? 

-How can digitalisation of governance be advanced in a way that increases the effectiveness of decision making in the public interest while still delivering transparency and accountability?  

Let us know what you think.

AHM Bazlur Rahman

Dear Aseem Andrews, INGRID LOUISE & Jaco du Toit,

Greetings from Bangladesh NGOs Network for Radio and Communication (BNNRC)!  http://www.bnnrc.net 

I trust this message finds you and your family members in good health.   

BNNRC’s approach to media development is both knowledge-driven and context-sensitive. It considers the challenges and opportunities created by Bangladesh's rapidly changing media environment, including community radio broadcasting development and giving voices to the voiceless. 

BNNRC has Special Consultative Status with the Economic and Social Council (ECOSOC), is accredited to the World Summit on the Information Society (WSIS) and the SDGs Media Compact of the United Nations, and is a UN WSIS Prize winner (2016) and Champion (2017, 2019, 2020 & 2021) for media development and ICT4D.

We would like to endorse the Tashkent Declaration on Universal Access to Information and would appreciate it if a plan of action for the Declaration were developed.

I hope you and your family members are staying safe and healthy. Please stay safe and take care. 

With best regards, 


AHM. Bazlur Rahman-S21BR | Chief Executive Officer |Bangladesh NGOs Network for Radio and Communication (BNNRC)

[Consultative Status with the ECOSOC of the United Nations & associated with the UN Department of Global Communications]

Policy Research Fellow, Shaping the Future of Media, Information & Entertainment in the Era of the Fourth Industrial Revolution(4th IR)

House: 9/4 Road: 2, Shaymoli, Dhaka-1207| Bangladesh| Phone: +8801711881647 |  +88 02 48116262 | +88 02 9101479 | +88 02 48119374 | ceo@bnnrc.net | bnnrcbd@gmail.com | http://www.bnnrc.net


Allison Cohen

It would be interesting to acknowledge that marginalized populations may associate the government's use of AI with surveillance or breaches of privacy rights, which have disproportionately affected their communities. Therefore, it may be important to emphasize trust building with those populations (in the context of the government's use of AI for ATI), since these communities' involvement in this programming is critical.


As the youth consultations for the final report of UNICEF's Prospects for Children in 2022 suggested, there is much need for caution, as there is a "thin line when security becomes surveillance" that threatens communities' "safety". This sense of perceived breach of human rights has to be addressed, and it varies across regions, cultures and communities.

We have continually witnessed the "gap between communities" in access grow as the digital economy grows. Where access to the internet is an issue, how does the government address the spread of misinformation and involve local leaders and communities in decisions? How do transparency and accountability translate across so many levels so as to reach the "layman"? It would be interesting to see these implications carried into the framework.



Thank you for the opportunity to comment.

(1) "Expressing concern about the persistent divides in society in terms of exercising the right of access to information, to the detriment of women, persons with disabilities and other marginalized groups." The way this is phrased implies that somehow no men experience the divide, and this is not correct. Please consider rethinking this sentence: the persistent divides are to the detriment of all people with poor access to information.


(2) The Declaration does not do enough to recognize pernicious uses of digital formats to spread disinformation, misinformation, and fake news; or simply to foster divisions and unrest.  It could usefully seek language to recognize this, and the constraints this will require regulatory frameworks to impose, while still pushing for information access.


(3) In the same light, the ability to analyse the information you have access to is crucial, and could be better emphasized.

Angharad Devereux

Many thanks for the opportunity to contribute, please find below my comments on the questions posed. For more information on how UNDP utilises data and AI tools to inform programming to prevent violent extremism in a risk-informed manner, please see here.

How could AI and e-Governance enhance the right to Access to Information?

How much can we rely on AI and e-Governance to access public information, and how can we make sure that AI and e-Governance services do not leave anyone behind?

  • Geographic internet coverage must be surveyed, and efforts to enhance it must accompany any e-governance efforts, alongside digital literacy efforts that make access to technology meaningful. Civil society should be assisted to help in these efforts, as it understands the needs of communities in a more nuanced manner. Including these efforts within school curricula can also ensure that more people have access to this type of learning.
  • Face-to-face efforts cannot be replaced, but they can be complemented by e-governance.
  • Algorithmic transparency and platforms with transparent systems must be used for governance-related information. Too much control should not be held by private platforms. While access to information has traditionally been thought to increase the likelihood of finding truth and debunking myths, targeted information can undermine the public's ability to access a wide range of information and opinion.
  • Rigorous risk assessment mechanisms alongside meaningful complaints/feedback procedures must accompany any e-governance efforts, including independent oversight.


What are the incentives and policies to entice governments and citizens to embrace the latest tools in support of e-Governance and AI?

How can citizens’ rights be guaranteed when e-governance services are outsourced? 
Which mechanisms should be put in place to ensure transparency and accountability of the use of Artificial Intelligence in e-governance and decision making?

  • If data is collected to be used to inform decision making, it must be remembered that the reach of digital research is limited. Data can provide insight on the gender, demographic profile, location and other measurable characteristics of followers when they consent to sharing this data on public platforms and using this data is authorised by the Terms of Service of the platform(s) in question, unless other data sharing agreements have been reached. The nature of online data made available from social media platforms is the information that individuals choose to share. Material intentionally shared for public display is a purposefully constructed, digitally mediated identity, or digital avatar.


  • By engaging partners including civil society and tech platforms, new expertise, technology and capacity can be leveraged to enhance the quality, efficiency, legitimacy and relevance of interventions. However, partnering should never be seen as a shifting of responsibility. Rigorous verification of stakeholders' ethical and human-rights standards must come first and foremost in partner consideration. Full models for assessing the ethics and human rights compliance of outsourced companies are needed to standardise private/public agreements. Part of this entails ensuring that stated methodologies are transparent, as are the employment standards of those working for the organisation in question, and that data collection and storage practices are systematically risk-assessed and human-rights-compliant. Funding towards partnerships must be justified against the project objectives, ensuring both quality and value assurance. The nature of the associated opportunities, risks and responsibilities can differ depending on the scale of the partner utilised.
  • The United Nations Strategy and Plan of Action on Hate Speech was developed on the basis of a joint effort by 14 United Nations entities, and tasks the United Nations with addressing “the root causes and drivers of hate speech”, on the one hand, and enabling effective responses to its impact upon societies, on the other. A subsequent Guidance was developed by the United Nations Office on Genocide Prevention and the Responsibility to Protect, the designated United Nations focal point on the Strategy, to provide more detailed advice and direction on how the Strategy should be effectively implemented by United Nations field presences. COMMITMENT 6, Action 17 of that Guidance addresses engagement with private actors.
  • Consideration of whether the business model of partners is in fact at odds with efforts to prevent divisive, harmful and violent content should be incorporated into risk assessment.
  • Partnership with development actors can serve as a veneer of human rights compliance for tech companies; hence, any partnership or agreement should take this into consideration and follow due diligence, which can encourage positive, mutual working relationships.
  • The “Human Rights Due Diligence Training Facilitation Guide” provides flexible training modules that clarify what is required for companies to conduct human rights due diligence. The Guide is complemented by a Human Rights Self-Assessment Training Tool featuring 99 potential business-related human rights risks with references to international human rights instruments and relevant SDGs.  In addition, in order to help guide United Nations staff in undertaking due diligence within partner selection, the United Nations advocates for key messaging when engaging with potential partners in order to ensure that an awareness of the human rights implications of stakeholder practice and policy frames all potential collaboration.


Some considerations when partnering with specific actors are as follows:


Benefits: By engaging partners with monitoring expertise, technology and capacity can be enhanced to increase the quality, efficiency, legitimacy and relevance of interventions. Professional organisations will also have experience in the risks and mitigation strategies of projects that access and utilise online data, as well as in communicating findings in an impactful manner. These organisations will likely also have an understanding of the best ways to access data from the big platforms through experience and established relationships.

Considerations: Ensuring that stated methodologies and associated costs are transparent (i.e. exactly how objectives will be reached and communicated), as are the employment standards of those working for the organisation in question, and that data collection and storage practices are systematically risk-assessed and human rights compliant. Funding to establish and maintain partnerships should be justified against the project objectives, ensuring both quality and value assurance.



Benefits: Much discussion online is highly context-specific and therefore needs local knowledge, and fluency in local languages and dialects. This is particularly beneficial considering the locally relevant, dynamic, and constantly evolving nature of language and dialects. Granular local knowledge can help mitigate some of the biases amplified by machine-learning algorithms. CSOs can also go far in validating data gathered online, which is generally more difficult to achieve than data gathered through traditional collection methods such as interviews.

Considerations: CSOs’ rights and wellbeing must be preserved through any PVE project that utilises online data and AI. Those who work in the highly sensitive area of PVE can be targeted both online and offline, and risk exposure to upsetting material.




  • Society can be polarised through algorithms that encourage users to view agreeable information, leading individuals to consume different sets of facts from one another.
  • Research suggests that more extreme content, pushing closer to the limits of platform terms of service (i.e. the terms and conditions of platform use), will get more views. This is compounded by the fact that recommender systems may in turn promote that extreme content.
  • At the most harmful end of the scale, algorithms have been shown to have the potential to increasingly feed, and indeed have fed, an initial interest in extreme material, aiding the creation of alternative news networks. Algorithms have become more adept at finding ‘rabbit holes’ or ‘filter bubbles’ for individuals to get lost in online, bypassing thoughtful consideration by dramatically amplifying confirmation bias. This occurs through the use of positive intermittent reinforcement techniques that manipulate dopamine release in order to keep users engaged online, and therefore more likely to come into contact with advertisements.
  • Since AI-enabled technologies have the potential to influence individuals’ thoughts, there is a clear relevance to the right to freedom of thought in its internal dimension.
  • Transparency on human rights compliance of these platforms’ terms of service, including user consent for data usage, and redress mechanisms is needed. Additionally, meaningful external, including democratic oversight, is a crucial consideration when using the data of such platforms.
  • The bargaining power of practitioners or CSOs vis-à-vis big tech companies can be limited.

Benefits: Large tech companies often serve as gatekeepers of the data needed to train and develop AI algorithms when tools are being developed in-house, and usually have specialised skills and a higher capacity to design, develop and maintain innovative technological tools to monitor VE trends.


How can privacy and personal data be protected in the age of digital governance and transparency and open data?

Using the guidance of international law:

The United Nations General Assembly Resolution 68/167 on the right to privacy in the digital age describes the “unlawful or arbitrary collection of personal data” as a highly intrusive act that could violate “the rights to privacy and to freedom of expression and may contradict the tenets of a democratic society”. The right to privacy protects each individual’s “private sphere”, an “area of autonomous development, interaction and liberty” where they are safeguarded “from state intervention and from excessive unsolicited intervention by other uninvited individuals”.

While national privacy laws and frameworks vary in content, most follow a set of common principles, including that personal data processing should be “fair, lawful and transparent”, as well as limited to what is “necessary and proportionate to a legitimate aim”.

Therefore, efforts should be made to minimise the collection, storage, and dissemination of personally identifiable information (PII), except where it is warranted, and/or material to the findings of the research.
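As a minimal illustration of what PII minimisation can look like in practice, the sketch below redacts two common identifier types before text is stored or published. The pattern set and placeholder labels are illustrative assumptions only, not any system described in this consultation; real deployments need far broader detection (names, addresses, ID numbers) and human review.

```python
import re

# Illustrative patterns only; real PII detection needs far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace each matched PII span with its category placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.org or +880 1711 000000."))
# → Contact [EMAIL] or [PHONE].
```

Redacting at collection time, rather than filtering at publication time, keeps PII out of downstream storage entirely, which aligns with the minimisation principle above.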

This use of publicly made data is supported, for example, by Article 9.2(e) of the European Union’s General Data Protection Regulation (GDPR), allowing for the processing of what it terms “special categories of personal data” if it has been “manifestly made public by the data subject”.

Reem El Sawy

Thank you for the opportunity to contribute, kindly find below my insights

- The internet and digital information outlets, AI, social media, etc. have had an impact on the speed and scale of access to information, and although they have contributed to inclusion, transparency and participation in the governance realm, they pose risks around information integrity that can be detrimental to the democratic process. Thus, more aggressive efforts are required to specifically tackle disinformation

- There should be a means to verify information, and its methodology should be publicly available, to help build more trust between citizens and governments

- Investing in capacity building and awareness raising on how to identify and combat disinformation is very important

- Working on strengthening the social contract in communities and building trust between government institutions and communities is crucial before utilizing AI in public institutions, for two reasons: 1) citizens may view such tools as government surveillance tools, which can deepen the sense of mistrust, and 2) citizens may not trust publicly available information, especially in contexts that lean more towards covert operations and sealed information, particularly when those sources of information are governmental ones

- There will be resistance from government institutions to deploying such measures and making their data publicly available; thus, there should be a mechanism, beyond laws or decrees, to incentivize governments to maintain more open information outlets

- Behavioral Insights (BI) is a tool that can be utilized to encourage people to use and accept digital tools for access to information

- Tools and information outlets should be accessible (especially to PWDs) to ensure inclusivity and no one is left behind. They should also be context-specific to cater to each community’s own circumstances

- Tailored incentives and approaches should be offered to different sectors of society, for example youth, as they constitute a rather large segment of the population, are the most likely to make use of digital tools, and can have an impact on how efforts progress

- Strengthening the infrastructure required for deploying such tools is a must; some areas do not have internet access, in which case a different means should be made available. Bridging the digital divide through digital literacy efforts is important to ensure we do not exacerbate existing inequalities



Dear colleagues, 

ARTICLE 19 welcomes the opportunity to submit comments through this e-consultation. However, we note that consultations of this type would be better run through a questionnaire or other means of contribution that allow more unpacking. The questions on AI, e-governance and ATI are very broad and do not allow a meaningful and insightful discussion of substantive issues.

Furthermore, the same space is also hosting comments related to the Tashkent Declaration on Universal Access to Information. We are concerned about the very weak language used in the declaration regarding the right of access to information and its implementation by key stakeholders, including inter-governmental organisations, Member States and civil society. Such language does not advance international standards; it even falls short of existing standards. As a global NGO engaged in promoting and advancing international standards on freedom of expression and the right to information, we strongly encourage UNESCO to take this declaration as a serious opportunity to advance such standards, and not to treat this open process as a box-ticking exercise.

Among other things, the working language and the texts open for comment should at least be translated into the six official UNESCO languages, to ensure diverse and inclusive participation and to reduce the linguistic barriers that are at the centre of UNESCO's mandate.

We remain at your disposal and we look forward to engaging and contributing to the discussion.

Ilaria Fevola

Jaco du Toit

Dear Ilaria, thank you for your comments. Please note that we are also running a Guestbook, in the format of a questionnaire, and comments are welcome there. It is indeed a broad subject, but we hope the discussions will enable us to focus on a rights-based approach and inclusion.

For the Tashkent declaration, please do not hesitate to suggest any changes to the language as it relates to the right of access to information and its implementation by key stakeholders including inter-governmental organizations, Member States and the civil society.

Regarding the language versions of the Declaration, a French version is already circulating and a Spanish version will follow soon. 

  • How could AI and e-Governance enhance the right to Access to Information?

Artificial intelligence accelerates the process of e-Governance. Big data have become an integral part of governance and decision making. AI and e-Governance have created smooth channels for information to flow worldwide. Today, access to information is no longer only a national matter; it is taking on global dimensions. Especially when it comes to proactive disclosure, AI and e-Governance have created easy online platforms where everyone can obtain and request information. Through online systems, information is either published or can be requested, sometimes with an automatic reply, and on many occasions the information is provided. But at the same time, risks remain:

1 - First of all, how the process can be checked to prevent the poisoning of information in its different forms.

2 - Second, how to ensure information is not abused to control human behaviour; states may create platforms to control populations through sensors (the so-called "Orwellian state"). China is among the first to have used information delivered to mobile phones for such control.

  • How much can we rely on AI and e-Governance to access public information, and how can we make sure that AI and e-Governance services do not leave anyone behind?

AI and e-Governance must be considered in their regional context. Although there is no doubt that both play a mechanical role, or could be called the main tools for creating simple online systems, there are still unsolved questions in different contexts. Based on my experience, AI and e-Governance would not be enough to access public information, for the following reasons:

  1. In contexts like Afghanistan, Uzbekistan and others, the social capacity does not yet exist. In other words, people still prefer to ask in person, or governments do not rely on online systems. Also, governments upload only limited information, which is often not useful. Therefore, other mechanisms and alternatives should still be considered.
  2. Sometimes, broken links and the complexity of systems deter most requesters.

It also depends on the region: many countries still have largely uneducated populations, in many countries people still lack access to free internet or even a cell phone, and persons with disabilities face additional barriers.

  • What are the incentives and policies to entice governments and citizens to embrace the latest tools in support of e-Governance and AI?

The first policy for governments would be a clear e-Governance law. The second step should be compliance with other state laws: in many countries, courts do not recognise WhatsApp, email and other app chats as evidence. The economic aspects of e-Governance also matter greatly for states, and the savings it brings are a big deal in terms of organisational budgets; this could be considered an incentive. Also, a strategic timeline linked to development goals would help persuade states still struggling with different issues.

Also, many countries still lack stable archive systems; e-Governance and AI make it possible to create systems that ensure documents are preserved.

For citizens, access to the internet and fair connectivity is an issue. Also, states must be reliable and responsive and must not abuse these tools. If e-Governance and AI are used only in favour of those in power, citizens will definitely not rely on them. Fighting corruption, holding politicians accountable and enabling the monitoring of public spending through easy access would be very encouraging. But undemocratic countries know how e-Governance and AI can undermine their dictatorship, so this requires deeper brainstorming.

  • How can citizens’ rights be guaranteed when e-governance services are outsourced? 
    Which mechanisms should be put in place to ensure transparency and accountability of the use of Artificial Intelligence in e-governance and decision making?

First of all, there should be a clear definition of which parts of e-Governance can be outsourced. I think there should be some limits: private information, or certain parts of it, should not be outsourced, for example information regarding DNA and genetics, or criminal records. Citizens' information should be categorised, and outsourcing decided on the basis of those categories. Information could be abused or leaked, especially in fragile states.

That is the hardest part of AI in e-Governance. But there are still some ways to put mechanisms in place:

  1. A monitoring mechanism; a separate body is needed to monitor if decisions are made transparently.
  2. Official platforms shall be monitored internally

Unfortunately, corruption is always a risk, and the consequences could be catastrophic if the process is not transparent.


  • How can privacy and personal data be protected in the age of digital governance and transparency and open data?

First of all, it depends entirely on how stable the systems are. Secondly, it differs between countries: many countries are not able to protect their data, which can easily be hacked or leaked.

  1. Today, registering and leaving a lot of personal information has become a normal part of life: entering an exhibition or a hotel, you have to register. There is no single guide for citizens on what kind of information they should or should not provide. I think that, for the protection of privacy, there should be guidelines for the private sector on what kinds of information it may collect. At the same time, legal awareness is needed on how people can protect their own privacy and why this is so important for everyone, especially in developing countries.
  2. Clear definitions, and people's consent, are needed before their data can be used for scientific purposes or any research. People are often unaware that their data is used for research and similar activities, which can be a cause of disclosure.
  3. An international legal framework is needed for states that abuse data for their political agendas.
Mert Atay

Dear colleagues, 

Greetings from UNDP IICPSD, SDG AI Lab and I hope this message finds all of you well! As a lab, we would like to thank you for this amazing opportunity to contribute to the discussion. I am Mert Atay, a Data Science Fellow at SDG AI Lab, and in this post, I will be sharing my personal thoughts and ideas on the matter as more of a technical person.  

In today’s world, e-Governance and AI are important actors that can strengthen numerous governance procedures. Although at first sight they may seem like a mere enhancement of current procedures, the rapid daily growth in the amount of information and the interconnected expansion of these technologies, as Aseem mentioned in his comment, make e-Governance and AI vital for governments to be able to carry out their functions and responsibilities. Nowadays more governments are implementing e-Governance technologies, as it is crucial for them to adapt to these rapid changes and adopt evolving policies that answer citizens’ ever-changing needs. However, the introduction of these technologies also opens new discussions, one of which is the topic of Access to Information. For this reason, we value and endorse the Tashkent Declaration on recognizing and respecting the fundamental right of access to information. 

  • How could AI and e-Governance enhance the right to Access to Information? 

An important capability of AI and e-Governance is their ability to process and connect vast amounts of data. These technologies not only ease access to information but also enable its efficient processing, allowing AI to discover underlying relationships in the data and link items together. These linkages and insights can enhance the right of access to information, especially where the volume of data is too great for individuals to inspect, or to draw expert conclusions from, on their own. For instance, AI-powered chatbots can greatly help in finding relevant information in a large data store. Additionally, relationships between pieces of data often prove important: accessing one item may require accessing another, something easily neglected or overlooked in a traditional governance procedure. As citizens become more aware of how information is connected, they also become more aware of the information they can request and should be able to access. 
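As a toy illustration of the retrieval step such a chatbot depends on, the sketch below ranks a handful of hypothetical public-service record titles by keyword overlap with a citizen’s query. This is only a minimal sketch under invented data; real systems use more sophisticated techniques such as TF-IDF weighting or neural embeddings.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z0-9]+", text.lower())

def rank_documents(query, documents):
    """Rank documents by how often the query's terms occur in them.

    Returns (score, document) pairs, best match first.
    """
    query_terms = set(tokenize(query))
    scored = []
    for doc in documents:
        doc_counts = Counter(tokenize(doc))
        # Score: total occurrences of query terms in this document.
        score = sum(doc_counts[term] for term in query_terms)
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

# Invented record titles standing in for a public-information catalogue.
records = [
    "How to request a copy of your civil registry record online",
    "Road maintenance schedule for the northern district",
    "Requesting public records: fees, deadlines and appeal procedures",
]
best_score, best_doc = rank_documents("request public records", records)[0]
print(best_doc)
```

Even this crude overlap score surfaces the most relevant record; a deployed chatbot would add stemming (so that "request" matches "requesting") and semantic matching on top of the same basic ranking idea.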

  • How much can we rely on AI and e-Governance to access public information, and how can we make sure that AI and e-Governance services do not leave anyone behind? 

AI and e-Governance are key actors in accessing public information, as the internet connects many technologies and devices today. With the proliferation of connected personal devices and the growth of digitally native generations, more people are gaining an online presence every day. Therefore, providing people with access to the internet and educating them about the ethical and responsible use of the technology are important steps toward not leaving anyone behind. In addition, we also need to make information available and accessible to older generations.  

  • What are the incentives and policies to entice governments and citizens to embrace the latest tools in support of e-Governance and AI? 

The most important incentive would be building a competent, highly functional e-Governance system with a strong infrastructure. Doing so can demonstrate to both citizens and officials the power and capability of AI and e-Governance to improve, ease and enhance governance processes, leading them to realize that AI and e-Governance are vital in the modern world. 

  • How can citizen’s rights be guaranteed when e-governance services are outsourced? Which mechanisms should be put in place to ensure transparency and accountability of the use of Artificial Intelligence in e-governance and decision making? 

I believe outsourcing would make guaranteeing citizens’ rights more difficult and should be employed in a limited fashion (and only by an ethical, transparent and competent government), as it introduces a new intermediary between citizens and their data. In such a case, the necessary privacy, security and data-protection laws and policies should be applied by the officials. These policies should encourage the minimization of personal, private data and subject the process to monitoring against clear standards. Citizens should be openly informed about how their data is used and processed for the said services, and they should be aware of the officials’ responsibilities in guaranteeing their rights. However, I would like to underline the importance of governments’ ethics, transparency and competency once again.  

  • How can privacy and personal data be protected in the age of digital governance and transparency and open data? 

As technologies become more interconnected, privacy and personal-data issues inevitably arise. On the technical side, personal data can be strongly protected by employing sound cryptography and computer-security measures. However, which data to protect is a topic of discussion. I believe this question should be addressed by officials and legislators, and which aspects of the internet and its data can be considered public needs to be clearly defined (similar to the distinction between public space and private property). Citizens should also be informed about the development of such policies and should be fully aware of the public/personal data distinction.
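One concrete technique from that technical toolbox is pseudonymization: replacing direct identifiers with keyed digests so that records can still be linked for legitimate processing without exposing the underlying identity. The sketch below is only an illustration of the idea using a keyed HMAC-SHA-256; the secret key, field names and record are invented, and a production system would keep the key in a key-management service rather than in source code.

```python
import hmac
import hashlib

# Invented secret key, for illustration only. In production this would
# live in a key-management system, never in source code.
SECRET_KEY = b"example-pseudonymization-key"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed HMAC-SHA-256 digest.

    The same identifier always maps to the same token, so records can
    still be joined across datasets, but the token cannot be reversed
    to recover the identifier without the secret key.
    """
    return hmac.new(
        SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256
    ).hexdigest()

# A hypothetical service record containing a direct identifier.
record = {"national_id": "1234567890", "service": "permit-renewal"}

# Publish or share only the pseudonymized version.
safe_record = {**record, "national_id": pseudonymize(record["national_id"])}
print(safe_record)
```

The design choice here is deliberate: a keyed HMAC rather than a plain hash, because a plain hash of a low-entropy identifier (such as a national ID with a known format) can be reversed by brute force, whereas the keyed version cannot be attacked without the key.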


We hope our post proves to be beneficial to the discussion and we would like to thank you colleagues once again for the opportunity to join the conversation and thank all of you for your time reading our message.

We look forward to the International Day for Universal Access to Information!

Best Regards.

Roger Duthie - ICTJ

Dear Colleagues,

Thank you for the opportunity to contribute to the discussion. The Tashkent Declaration on Universal Access to Information highlights the importance of the right to access to information to civil society, human rights, democracy, and sustainable development. It also explicitly connects access to information to the right to know the truth about gross violations of human rights and international humanitarian law. This is a connection that ICTJ (www.ictj.org) believes warrants further attention and improved understanding. 

Especially in contexts of transition from repression and violent conflict, the truth about such violations is integral to all dimensions of transitional justice, including acknowledgement, accountability, redress, and prevention of recurrence. All such efforts involve both technical work to gather and share information and more political work to use that information to develop narratives and interventions around responsibility, harm, and change.

Transitional justice processes can contribute to the right to access to information in the context of digitalization by adopting new tools for documenting violations and analyzing data; digitalizing archives; protecting advocates and data; and communicating with and increasing awareness, access, and participation of victims. They can also take steps to confront the broader challenges to the truth by countering dis/misinformation, facilitating digital media literacy and education, and catalyzing regulations, legislation, and standards that constitute the relevant legal and policy frameworks. 

Roger Duthie, ICTJ


Dear colleagues, 

Greetings from the Centre for Human Rights (CFHR), Pakistan, and I hope this message finds all of you well! At the outset, we would like to thank you for this amazing opportunity to contribute to this important discussion on the role of Artificial Intelligence (AI) and e-Governance.

Foreseeing the all-encompassing role that AI will come to play in the contemporary world, in 2021 a fellow at CFHR authored a report that initiated the conversation on the risks associated with the use of AI in Pakistan’s judiciary, in particular algorithmic unfairness in automated decision-making affecting the constitutional guarantees of equality and non-discrimination in Pakistan. In the same year, CFHR inaugurated a new research division with the creation of the Institute for Responsible Artificial Intelligence & Human Rights. The Institute aims to pioneer the legal and policy ecosystem for artificial intelligence in Pakistan to ensure a sustainable and human rights-compliant future.

How could AI and e-Governance enhance the right to Access to Information?

The provision of services to the public on a 24/7 basis enhances this right, while automation can also help narrow the digital divide by identifying the areas with the most potential that need infrastructure investment to boost growth. In the case of Pakistan, a developing country, we can clearly see how the implementation of AI and e-Governance can help mitigate corruption, the mismanagement of resources, and the costs associated with public institutions. Furthermore, automated processes aid transparency and the detection of discrepancies, which would make governmental institutions stronger, more sustainable and more effective. However, this is a long and tedious process, as many developing countries like Pakistan have not yet digitized the majority of their institutions to the point where AI can be implemented effectively and efficiently.

How much can we rely on AI and e-Governance to access public information, and how can we make sure that AI and e-Governance services do not leave anyone behind?

If we talk about AI and e-Governance in Pakistan’s context, Pakistan has two million pending cases, which not only overwhelm its justice system but also delay access to justice, with detrimental impacts on society. In 2019, the former Chief Justice of Pakistan, Asif Saeed Khosa, announced the use of AI to aid decision-making. Although the project is still in the development stages, it involves building an intelligent knowledge-based system that speeds up the whole court process and is continually updated with precedents, cases and judgments. Providing knowledge of similar judgments, precedents and cases, along with recommendations for judgments, has the potential to ensure speedier justice in Pakistan.

However, as with any opportunity, there are associated risks, including the looming threat that decisions made by intelligent, data-driven algorithmic systems pose to the right to equality and non-discrimination. This threatens, among other things, Sustainable Development Goals 10 and 16. Before using AI in legal decisions, it is imperative to develop an ethical framework for AI implementation that removes factors such as discrimination and bias, in particular by ensuring algorithmic fairness in the design and development of AI systems for decision-making in institutions and sectors that have historically been shown not to be rights-neutral. It is also important to note that algorithmic bias does not arise by random chance or through AI autonomy; rather, it is a systematic error that stems from the algorithm itself. To put it simply, human or institutional bias causes bias in AI, resulting in repeated discrimination, unfairness and inequality against some people or groups more than others. It is therefore vital to prevent both explicit and implicit bias for AI to operate accurately and fairly.

In addition, given Pakistan’s present flood situation, AI and e-Governance can also improve the efficiency of state institutions and the identification of disaster-related risks, supporting timely mitigation strategies and policies. Rather than allowing the unequal distribution of aid, AI can help identify areas of need so that resources are distributed equitably and no one is left behind, ending the bias of favouring certain areas to win the popular vote.

What are the incentives and policies to entice governments and citizens to embrace the latest tools in support of e-Governance and AI?

The ability to automate tasks, save costs and resources, operate under tight budgets, and provide enhanced access to quality public services is a strong enticement for governments. At the same time, the fact that AI diminishes the scope for manipulating data to the point where corruption and mismanagement become difficult to detect is another. Looking at the larger picture, digitized data that is available at a single click makes processes much faster and helps lighten the burden on governmental institutions. Furthermore, Pakistan, one of the countries most vulnerable to climate change, could respond rapidly to disasters with the help of predictive analytics.

How can privacy and personal data be protected in the age of digital governance and transparency and open data?

In an analysis of Pakistan’s cyber laws, we found that current laws, policies and guidelines do not yet recognize these new technological challenges, and none deal specifically with algorithmic bias in Pakistan. The current digital policy only discusses how to encourage the development and use of AI; it does not discuss its risks. A separate policy should explicitly codify themes such as transparency, accountability and fairness in AI technology. Since governmental institutions have access to data pertaining to the public and to businesses, it is all the more important to safeguard that data. Considering that Pakistan has yet to digitize the majority of its institutions, it is vital to use AI-based integrated protection systems that work autonomously and do not require human intervention. Pakistan does have cyber laws, but they exist only to prevent criminal behaviour online; as for data protection and privacy laws, there are none in force, only a few in the pipeline, which makes the matter all the more sensitive and puts the general public at risk.


We look forward to engaging and contributing to the discussion and hope to provide insights that can help the discussion move forward.

Haris Ahmad Khan,




Connor Rees

Dear Colleagues,

Thank you for the opportunity to contribute to this discussion.

My interpretation is that the questions regarding the relationship between AI and the right of access to information, access to public information, inclusivity, citizens’ rights, transparency and accountability, and data protection are all captured in the design phase of an AI system or tool. 

Perhaps a useful point of discussion to capture these questions is what tools can be brought into the design phase of an AI system to maximise its utility while respecting the diversity of the end user or beneficiary. In line with this interpretation, a topic worth broaching is the design and implementation of measurable ethical AI principles. A framework of measurable ethical AI principles would capture the majority of these topics within a single project. The format of such a document can be seen in the examples listed below. I will only add that each of these could be improved by expanding on the measurability of each principle, to ensure that these principles (and the topics of your questions) can be measured and improved.

UNESCO: https://unesdoc.unesco.org/ark:/48223/pf0000381137
World Economic Forum:  https://www.weforum.org/agenda/2021/06/ethical-principles-for-ai/
The Institute for Ethical AI and ML https://ethical.institute/principles.html

Best wishes,
