
Global AI: Inclusive and Ethical Practices for AI Development and Use

Artificial intelligence (AI) is rapidly advancing and being integrated into various industries, from healthcare to finance to entertainment. While AI has the potential to improve efficiency, accuracy, and innovation, it also poses ethical challenges and risks of bias and discrimination. Therefore, it is crucial to ensure that AI development and use are inclusive and ethical, considering diverse user experiences and perspectives, and addressing potential biases and consequences.

Research Best Practices

To ensure that AI research is safe, ethical, and responsible on a global scale, a comprehensive approach is necessary. Efforts to regulate and promote responsible AI are underway among global organizations, conferences, and corporations. UNESCO has compiled a collection of guidance and documents in its Digital Library, providing valuable resources for AI practitioners. The World Health Organization has developed guidelines specifically for the use of AI in healthcare. The European Union has taken a comprehensive approach with the EU Artificial Intelligence Act, which includes social, safety, and economic regulations to ensure responsible AI. arXiv.org, hosted by Cornell University, offers access to a vast collection of global AI research papers. Additionally, the International Organization for Standardization has published guiding standards for risk management and AI quality management, addressing the unique challenges of ethical AI use.

On the domestic front, the US Government is actively developing a framework to regulate and govern AI use, including the Blueprint for an AI Bill of Rights. The National Institute of Standards and Technology plays a crucial role in advancing the security and implementation of AI systems for the federal government by publishing risk management frameworks, AI standards, and fundamental AI research.

The AI industry is also actively promoting responsible research and development. OpenAI, for example, regularly shares safety research and blog posts outlining its efforts toward responsible AI use. Anthropic is another significant player in the generative AI space, working to educate the public on deploying safe and accountable AI models. Microsoft has contributed to the responsible AI landscape by establishing frameworks and standards for its Azure AI services.

Across all these organizations, four major themes emerge as instrumental for safe and ethical AI research: ensuring system transparency, protecting the public's privacy, promoting inclusivity and diversity, and maintaining accountability through human involvement or oversight.

Bixal's user experience (UX) experts conduct research to understand how organizations and individuals are using these guidelines to operationalize AI adoption. While these resources provide a solid foundation, there is still work to be done in better understanding the practical application of these principles and guidelines.

International Engagement

Successful models from other technologies in international development and relations can be studied and replicated. Bixal employs a human-centered design approach to explore how existing partnerships have (or have not) worked for end users. In the healthcare data space, there are a few examples to draw upon. One notable example is the Health Data Collaborative Digital Health & Interoperability subgroup. This collaborative effort brings together donors, governments, and technical experts from 100 countries to optimize the use of health information technology and reduce duplication and fragmentation. This model can serve as a valuable guide in examining how AI can be applied in a coordinated and contextually appropriate manner in different country contexts.

While the Health Data Collaborative primarily focuses on the policy level, there are also models that concentrate on the service delivery level. The Mayo Clinic has developed a model that, although not international, offers valuable lessons applicable to any context. The clinic has extensively explored the use of AI in healthcare service delivery and research while maintaining the highest ethical standards, including thorough evaluations of data privacy, biases, and the accuracy of AI outputs. To accomplish this, it collaborates with a diverse team of practitioners, researchers, data ethicists, legal experts, and independent review boards, fostering continuous improvement of its model. The Mayo Clinic also leverages context-specific models to address the unique needs of different populations. Bixal's team has begun researching these models to understand the best practices for ensuring success for their clients and internal operations.

Foundation Models

Foundation model developers have a responsibility to adhere to responsible AI practices. This includes training models on diverse and representative datasets to minimize biases, ensuring transparency in how the models work and make decisions, and rigorously testing edge cases to confirm the models handle unexpected inputs effectively. Comprehensive documentation of a model's capabilities, limitations, training data, and recommended usage is essential for downstream implementers. Developers must also consider the ethical implications of their models, such as privacy concerns and the potential for misuse, and implement safeguards to mitigate these risks. Engaging with a diverse range of stakeholders, including ethicists, domain experts, and potential users, is crucial to fully understand the broader implications of the technology.
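
As a minimal illustration of the documentation described above, the sketch below captures those fields in a simple Python structure, loosely modeled on the common "model card" practice; the class and field names are hypothetical, not drawn from any specific developer's format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical structure for the documentation a foundation
    model developer might publish for downstream implementers."""
    name: str
    capabilities: list[str]      # tasks the model handles well
    limitations: list[str]       # known failure modes and edge cases
    training_data: str           # description of data sources and coverage
    intended_use: str            # recommended contexts and usage practices
    known_biases: list[str] = field(default_factory=list)

# Illustrative values only.
card = ModelCard(
    name="example-foundation-model",
    capabilities=["summarization", "question answering"],
    limitations=["may produce inaccurate output for low-resource languages"],
    training_data="Publicly available web text; coverage skews English.",
    intended_use="General-purpose text generation with human review.",
    known_biases=["underrepresents offline and non-English populations"],
)
```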

When using foundation models, it is critical to recognize their limitations and deploy them in suitable contexts. Fine-tuning models for specific tasks requires a deep understanding of both the foundational and application domains; this understanding ensures that the models are used appropriately and effectively downstream. Implementers are responsible for incorporating ethical considerations to avoid propagating biases and harming individuals when deploying these models. Additionally, models should be designed to provide accurate results, and a downstream system should be in place to collect and analyze user feedback.

User feedback plays a vital role in helping developers identify issues with a model's performance or impact. When implementers communicate these issues promptly, developers can improve the model and provide a better user experience. Continuous monitoring of the model's performance is also essential to detect potential issues before they escalate into more significant problems.
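
A rough sketch of such a downstream feedback loop, assuming an in-memory store and an illustrative rating threshold for flagging a model for review (the names and values here are hypothetical, not from any particular platform):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Feedback:
    model_output: str
    user_rating: int      # e.g., 1 (poor) to 5 (good)
    comment: str = ""

class FeedbackMonitor:
    """Collects user feedback and flags the model for review when
    the average of recent ratings falls below a threshold."""

    def __init__(self, window: int = 100, alert_threshold: float = 3.0):
        self.recent = deque(maxlen=window)   # keep only the latest ratings
        self.alert_threshold = alert_threshold

    def record(self, fb: Feedback) -> None:
        self.recent.append(fb)

    def needs_review(self) -> bool:
        if not self.recent:
            return False
        avg = sum(fb.user_rating for fb in self.recent) / len(self.recent)
        return avg < self.alert_threshold

monitor = FeedbackMonitor()
monitor.record(Feedback("model answer...", user_rating=2, comment="inaccurate"))
if monitor.needs_review():
    print("Recent feedback suggests the model needs review.")
```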

Foundation model developers should strive to provide reliable and high-performing models that can be easily adapted to various tasks. This includes offering clear instructions, best practices, and support for fine-tuning the models to specific domains. Sharing details about the training process, data sources, and model decision-making is crucial to establish trust and enable effective fine-tuning. Developers should also provide guidance on ethical use, highlighting potential risks and recommended precautions against misuse. Active communities or forums where users can exchange experiences, seek assistance, and collaborate on problem-solving can further strengthen the development and implementation of foundation models.

Bixal has direct experience in performing this work for its partners.

Human Impacts

Safe and ethical research into the human impacts of AI systems is crucial for mitigating harm and informing equitable and just AI policies. However, for research to be effective, it must address historical and current power imbalances and amplify the voices of those who have been excluded from the development and management of AI systems. AI researchers need a deep understanding of the communities they study, or they should collaborate with experts from those communities, taking into account the needs and norms of different global contexts. Ethical research frameworks should go beyond avoiding harm and actively promote collective well-being and community empowerment. Failure to address biases and power imbalances within AI systems will perpetuate existing inequities.

Global research partnerships or arrangements (such as the Distributed AI Research Institute and the Abundant Intelligences research program) are essential in studying the human impacts of AI because they consider the intersection of global and local contexts. Cultural awareness and sensitivity are particularly crucial when researching the impact of AI on mental health, as conceptions of mental health vary across cultures and are context dependent. It is important to recognize that research practices deemed ethical in one cultural context, such as compensation for participation or consent processes, may not be viewed the same way in others.

Bixal's Human Experience Team is guided by ethical research principles from thought leaders like Alba Villamil, as reflected in 18F's User Experience Guide. The team continuously engages in discussions on ethical and safe research practices in its work with the federal government, keeping the communities impacted by its projects at the center. This moves beyond the "do no harm" baseline, employing trauma-informed practices that prioritize the care and respect of research participants. Cultural sensitivity and collaborative approaches are used to empower communities and promote healing. This is particularly important in researching the human impacts of AI, as the technology has the potential to help the “historically dispossessed to reassert their culture, their voice, and their right to determine their own future” (MIT Technology Review). One example is Te Hiku Media in New Zealand, which preserves the Māori language while ensuring community sovereignty over its data. There is also much to learn from negative examples of AI research and development that have worsened inequalities, such as biased facial recognition technology or exploitative labor arrangements in data labeling.

In summary, conducting safe and ethical research into the human impacts of AI requires a deep understanding of power dynamics and cultural context, and a commitment to promoting community well-being. Bixal exemplifies these principles through its ethical research practices and its commitment to centering the voices of impacted communities.

Enabling Infrastructure

Cloud services such as Amazon Web Services and Azure AI provide researchers with scalable, powerful platforms for advanced AI modeling and analysis. These services are Federal Risk and Authorization Management Program (FedRAMP) authorized, ensuring high levels of safety and security, and they grant immediate access to the computing resources and tools that complex AI work requires.

However, cloud-based services present challenges related to data transparency and control. FedRAMP-authorized services and government community clouds such as AWS GovCloud and Azure GCC High address this by providing better visibility into, and control over, how data is tracked, used, and stored within the appropriate geographic region. This allows developers, product managers, and information security staff to focus data governance at the software level rather than managing the hardware itself.
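
As one illustration of keeping data within the appropriate boundary, the snippet below pins an AWS session to a GovCloud region using the boto3 SDK; the bucket and file names are placeholders, and real use would require GovCloud credentials and permissions.

```python
import boto3  # AWS SDK for Python

# Pin the session to a GovCloud region so data stays within the
# intended geographic and compliance boundary.
session = boto3.Session(region_name="us-gov-west-1")
s3 = session.client("s3")

# Hypothetical bucket and object names, for illustration only.
s3.upload_file("dataset.csv", "example-research-bucket", "raw/dataset.csv")
```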

Securing confidential and sensitive on-premises data throughout the AI data lifecycle requires strengthened access controls and robust security protocols. This approach provides complete visibility and control over AI computing resources, enabling organizations to monitor and log interactions between humans and machines to ensure compliance and security. However, it requires significant investment in hardware, expert personnel, regular maintenance, audits, and consistent supervision to maintain a high level of security.
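
The sketch below illustrates, under simplified assumptions, the kind of access control and human-machine interaction logging described above; the roles and policy are hypothetical, and a production system would integrate with existing identity management and tamper-evident log storage.

```python
import logging
from datetime import datetime, timezone

# Audit log for interactions with an on-premises AI resource.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

AUTHORIZED_ROLES = {"researcher", "ml-engineer"}  # hypothetical roles

def access_model(user: str, role: str, action: str) -> bool:
    """Check role-based access and log every attempt for compliance review."""
    allowed = role in AUTHORIZED_ROLES
    logging.info(
        "%s | user=%s role=%s action=%s allowed=%s",
        datetime.now(timezone.utc).isoformat(), user, role, action, allowed,
    )
    return allowed

if access_model("asmith", "researcher", "run-inference"):
    pass  # proceed with the model call
```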

The decision between cloud services and on-premises infrastructure depends on the host country's hosting requirements, the need for data control and security, and the ease of access to powerful computing resources. A hybrid model can be a viable solution: cloud services for less-sensitive tasks, with on-premises resources reserved for highly sensitive or classified data. This approach allows researchers to maintain both agility and security in their AI research.

To effectively implement this strategy, it is imperative to:

  • Establish clear data governance policies to guide processing on cloud services and on-premises systems.
  • Use strong encryption and anonymization techniques when using cloud-based platforms to enhance data privacy (a minimal sketch follows this list).
  • Develop clear contingency plans and conduct regular security audits to monitor and evaluate data processing, regardless of location.
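
One possible approach to the second point, sketched with the open-source cryptography package (Fernet symmetric encryption) and a salted one-way hash for direct identifiers; key handling is deliberately simplified here, and a real deployment would use a managed key store.

```python
import hashlib
import os
from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would live in a managed key store,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

def anonymize_id(user_id: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

salt = os.urandom(16)
record = f"{anonymize_id('user-1234', salt)},survey response text"

# Only the ciphertext leaves for the cloud platform;
# decryption stays with the key holder.
ciphertext = fernet.encrypt(record.encode())
plaintext = fernet.decrypt(ciphertext).decode()
```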

While cloud services offer advantages in computational power and convenience, stringent security measures must be built into AI research infrastructure to ensure full transparency and control over data. An intelligently designed hybrid infrastructure can provide the most secure and efficient environment for advancing AI research.

Global Equity Considerations

AI holds great promise for making digital resources more inclusive, particularly for marginalized populations such as women, youth in developing economies, and people with disabilities. However, the current state of AI reflects a digital landscape filled with ableism, biased data, and exclusionary practices. This is emphasized in a 2019 report from the AI Now Institute: “Centering disability in discussions of AI can help refocus and refine our work to remediate AI’s harms, moving away from a narrow focus on technology and its potential to ‘solve’ and ‘assist,’ and toward approaches that account for social context, history, and the power structures within which AI is produced and deployed.”

An estimated 15 percent of the world's population experiences some form of disability, a figure projected to rise, and a significant portion of this group lives in underserved regions. It is also crucial to acknowledge that AI, which relies primarily on online data sources, fails to adequately represent the nearly three billion people who remain offline. Until these biases are corrected, AI cannot be considered fully inclusive.

At Bixal, we put "people absolutely first" in our product design and development, incorporating diverse user experiences, particularly those of underrepresented groups, in our mission to eliminate biases, barriers, and assumptions. Inclusive research practices are pivotal to our approach, with partnerships formed with organizations representing various marginalized communities, ensuring a broad spectrum of perspectives. Through these partnerships, we advise federal entities on conducting inclusive research and translating insights into actionable design enhancements for national digital platforms.

We collaborate with local advocacy groups in developing economies, fostering personal relationships to recruit participants for inclusive research initiatives. Bixal has a global network of local partners in the digital realm that we draw upon for our work.

We recognize that AI outcomes are shaped by underlying data, which often reflects biases against marginalized individuals. With careful guidance, however, AI's vast potential can be harnessed to transform big data into a more inclusive and representative landscape, one that amplifies historically underserved voices and ensures that digital resources are truly accessible to all.

The following Bixal team members contributed to the writing of this article: Jeff Fortune, senior director of artificial intelligence and data engineering; Russell Flench, director of design operations; Annie Schwartz, interim VP of data; Sofya Savkina, experience research manager; Amy Cole, digital accessibility manager; Carolyn Pollack, UX research manager; Liz Mason, MEL director; and Ewa Beaujon, content manager.
