Follow these 5 principles to make AI more inclusive for everyone


Opinions expressed by Entrepreneur contributors are their own.

From creating just-for-fun images of the Pope to algorithms that help sort job applications and ease the burden on hiring managers, artificial intelligence programs have taken over the public consciousness and the business world. However, it is vital not to overlook the potentially deep-seated ethical issues associated with these tools.

These advanced technology tools generate content by drawing on existing data and other materials, but if those sources are even partially the result of race or gender bias, for example, AI is likely to repeat it. For those of us who want to live in a world where diversity, equity and inclusion (DEI) are at the forefront of emerging technology, we should all be concerned about how AI systems create content and what impact their output has on society.

So whether you're a developer, an AI startup entrepreneur or just a concerned citizen like me, consider these principles that can be integrated into AI applications and programs to ensure they create more ethical and equitable outcomes.

Related: What will it take to build a truly ethical AI? These 3 tips can help

1. Create user-centered design

User-centered design ensures that the software you are developing is inclusive of its users. This may include features such as voice interactions and screen-reader compatibility that assist those with visual impairments. Speech recognition models, meanwhile, can be made more inclusive of different types of voices (such as women's voices or accents from around the world).

Simply put, developers need to pay attention to who their AI systems are aimed at — and think beyond the pool of engineers who created them. This is especially vital if the developers and/or the company's founders hope to scale the products globally.

2. Build a diverse team of reviewers and decision makers

The development team of an AI application or program is essential, not only in its creation, but also from a review and decision-making perspective. A 2023 report published by New York University's AI Now Institute described the lack of diversity at multiple levels of AI development. Among its remarkable statistics: at least 80% of AI professors are men, and less than 20% of AI researchers at the world's top tech companies are women. Without proper checks, balances and representation in development, we run the serious risk of feeding AI programs dated and/or biased data that perpetuates unfair tropes about certain groups.

3. Audit datasets and create accountability structures

It's not necessarily anyone's direct fault if older data perpetuating prejudice is present, but it IS someone's fault if the records aren't checked regularly. To ensure that AI is producing the highest-quality results with DEI in mind, developers must carefully evaluate and analyze the information they are using. They should ask: How old is it? Where does it come from? What does it contain? Is it still ethical and accurate today? Perhaps most importantly, developers must ensure that their datasets steer AI toward a positive future for DEI rather than a negative one sourced from the past.
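One small, concrete way to start the auditing habit described above is to check how each demographic category is represented in a dataset before training on it. The sketch below is a minimal illustration, not a complete fairness audit; the field name "accent" and the sample records are hypothetical, and the 10% threshold is an arbitrary placeholder a team would set for itself.

```python
from collections import Counter

def audit_representation(records, field, threshold=0.10):
    """Return the share of each category in `field` that falls below `threshold`."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items() if n / total < threshold}

# Hypothetical sample of speech-training records with a self-reported accent label.
sample = (
    [{"accent": "US"}] * 90
    + [{"accent": "UK"}] * 7
    + [{"accent": "Nigerian"}] * 3
)

underrepresented = audit_representation(sample, "accent")
print(underrepresented)  # {'UK': 0.07, 'Nigerian': 0.03}
```

Flagged categories like these are a signal to collect more material (principle 4) rather than to train as-is; rerunning a check like this on a schedule is one simple accountability structure.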

Related: These entrepreneurs are confronting biases in artificial intelligence

4. Collect and curate diverse data

If, after auditing the information an AI program is using, you notice inconsistencies and/or biases, work to gather better material. This is easier said than done: collecting data takes months, even years, but it is well worth the effort.

To help drive this process, if you're an entrepreneur running an AI startup and have the resources for research and development, create projects in which team members generate new data representing diverse voices, faces and attributes. This will result in more relevant source material for apps and programs that we can all benefit from — essentially creating a brighter future that portrays diverse individuals as multidimensional rather than one-sided or otherwise simplified.

Related: Artificial intelligence can be racist, sexist and creepy. Here are 5 ways you can counteract this in your enterprise

5. Engage in AI ethics training on bias and inclusion

As a DEI consultant and the proud creator of the LinkedIn course "Navigating AI Through an Intersectional DEI Lens," I have learned the power of a DEI focus in AI development and the positive ripple effects it has.

If you or your team are building a list of related tasks for developers, reviewers and others, I recommend organizing relevant ethics training, including online courses that can help you solve problems in real time.

Sometimes all you need is a coach to walk you through the process and tackle each problem one at a time, creating a sustainable outcome that produces more inclusive, diverse and ethical AI data and programs.

Related: 6 traits you need to succeed in the AI-accelerated workplace

Those of us — developers, entrepreneurs and others — who care about reducing bias in AI should use our collective energy to train ourselves to build diverse teams of reviewers who can screen and audit data, and to focus on designs that make programs more inclusive and accessible. The result will be a landscape that represents a wider range of users, as well as better content.
