
Lenovo: AI Needs To Deal With Gender Bias - Or It Will Never Reach Its Potential

NORTHAMPTON, MA / ACCESSWIRE / June 13, 2023 / Lenovo

Ada Lopez, Senior Manager, Global Product Diversity Office, Lenovo

The past year has seen artificial intelligence (AI) become a dinner-table topic of conversation around the world, thanks to bots such as ChatGPT, which dazzles users with its ability to compose lifelike text and even computer code. But what happens when AI makes wrong decisions?

It's a serious issue. Bias - and gender bias in particular - is common in AI systems, leading to a variety of harms, from discrimination and reduced transparency, to security and privacy issues. In the worst cases, wrong AI decisions could damage careers and even cost lives. Without dealing with AI's bias problem, we risk an imbalanced future - one in which AI will never reach its full potential as a tool for the greater good.

AI is only as good as the data sets it is trained on. Much data is skewed towards men, as is the language used in everything from online news articles to books. Research shows that training AI on Google News data leads to associating men with roles such as 'captain' and 'financier', whereas women are associated with 'receptionist' and 'homemaker'.
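
As a rough illustration of how such associations can be measured, the sketch below compares occupation vectors against a "gender direction" in a word-embedding space. The vectors are tiny, made-up toy values used purely for demonstration; in practice one would load real, high-dimensional embeddings (for example word2vec or GloVe trained on a news corpus) and the words chosen here simply mirror the examples in the research cited above.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional embeddings, invented for illustration only --
# real embeddings would come from a trained model such as word2vec.
emb = {
    "he":           np.array([ 0.9, 0.1, 0.2, 0.0]),
    "she":          np.array([-0.9, 0.1, 0.2, 0.0]),
    "captain":      np.array([ 0.7, 0.5, 0.1, 0.2]),
    "financier":    np.array([ 0.6, 0.4, 0.3, 0.1]),
    "receptionist": np.array([-0.6, 0.5, 0.2, 0.1]),
    "homemaker":    np.array([-0.7, 0.4, 0.1, 0.2]),
}

# A simple "gender direction": he minus she.
gender_direction = emb["he"] - emb["she"]

# Positive scores lean towards "he", negative scores towards "she".
for word in ("captain", "financier", "receptionist", "homemaker"):
    print(f"{word:>12}: {cosine(emb[word], gender_direction):+.2f}")
```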

As a result, many AI systems, trained on such biased data and often created by largely male teams, have performed noticeably worse for women: credit card algorithms that appear to offer more generous credit to men, and screening tools for everything from COVID to liver disease. These are areas where a wrong decision can damage a person's financial or physical health.

This is compounded by the fact that just 22% of professionals in AI and data science are women, according to the World Economic Forum's research. Gender itself is also becoming a more complex topic, thanks to non-binary and transgender expressions, leading to more potential for bias in many different forms.

AI is a powerful tool that offers us the chance to solve previously unsolvable problems from cancer to climate change - but unless the bias issue is addressed, AI risks being untrustworthy, and ultimately irrelevant. If AI professionals cannot confront the bias issue, these tools will not be useful, and the artificial intelligence industry risks another 'AI Winter', as seen in the 1970s, when interest in the technology dried up.

Dealing with data

Going forward, businesses will increasingly rely on AI technology to turn their data into value. According to Lenovo's Data for Humanity report, 88% of business leaders say that AI technology will be an important factor in helping their organisation unlock the value of its data over the next five years.

So how will business leaders deal with the problem of bias? For the first time in history, we have this powerful technology that is entirely created from our own understanding of the world. AI is a mirror that we hold up to ourselves. We shouldn't be shocked by what we see in this mirror. Instead, we should use this knowledge to change the way we do things. That starts with ensuring that the way our organisations work is fair in terms of gender representation and inclusion - but also by paying attention to how data is collected and used.

Whenever you start collecting data, processing it, or using it, you risk inserting bias. Bias can creep in anywhere: if there is more data for one gender, for example, or if questions were written by men.

For business leaders, thinking about where data comes from, how it's used, and how bias can be combatted will become increasingly important.
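
A minimal, concrete version of that check is simply to look at how groups are represented in a training set, and whether the recorded outcome already differs between them, before any model is trained. The sketch below uses a small illustrative table with assumed `gender` and `approved` columns and an arbitrary 30% representation threshold; none of this is a standard, it only shows the kind of question worth asking of real data.

```python
import pandas as pd

# Illustrative stand-in for a real training set; in practice this
# would be loaded from the organisation's own records.
df = pd.DataFrame({
    "gender":   ["m", "m", "m", "m", "m", "m", "m", "m", "f", "f"],
    "approved": [  1,   1,   0,   1,   1,   0,   1,   1,   1,   0],
})

# 1) Representation: is one group heavily over-represented?
representation = df["gender"].value_counts(normalize=True)
print("Share of rows per group:")
print(representation)

# 2) Base rates: does the recorded outcome already differ by group?
print("\nApproval rate per group:")
print(df.groupby("gender")["approved"].mean())

# A crude flag, with an arbitrary 30% threshold chosen for illustration.
if representation.min() < 0.30:
    print("\nWarning: the smallest group is under-represented; a model "
          "trained on this data may underperform for that group.")
```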

Technical solutions will also play an important part. Data scientists don't have the luxury of going through every line of text used to train a model.

There are two solutions to this: one is having many more people test the model and spot problems. But the better solution is to have more efficient tools to find bias, either in the data the AI is fed or in the model itself. With ChatGPT, for example, the researchers use a machine learning model to annotate potentially problematic data. The AI community needs to focus on this. Tools that provide greater transparency into the way AI works will also be important.
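
On the model side, one deliberately simple check is demographic parity: comparing the rate of favourable predictions across groups. The sketch below is a hand-rolled version using only NumPy, with invented prediction and group arrays; dedicated toolkits such as Fairlearn or AIF360 provide more complete metrics, and the threshold for what counts as an acceptable gap is a policy choice, not something the code can decide.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap between the highest and lowest positive-prediction rate per group."""
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative model outputs: 1 = favourable decision (e.g. credit granted).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

gap, rates = demographic_parity_difference(y_pred, group)
print("Positive-prediction rate per group:", rates)
print(f"Demographic parity difference: {gap:.2f}")
# A gap near 0 means both groups receive favourable decisions at similar
# rates; a large gap is a signal to investigate the data and the model.
```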

Understanding bias

It also helps if we consider the broader context. The tools we use today are already creating bias in the models we will apply in the future. We might think that we have 'solved' a bias issue now, but in 50 years, for example, new tools or pieces of evidence might change completely how we look at certain things. This was the case with the history of Rett syndrome diagnosis, where data was primarily collected from girls. The lack of data on boys with the disorder introduced bias into data modelling several years later and led to inaccurate diagnoses and treatment recommendations for boys.

Similarly, in 100 years, humans might work for only three days a week. That would mean that data from now is skewed towards a five-day way of looking at things. Data scientists and business leaders must take context into account. Understanding social context is equally important for businesses operating in multiple territories today.

Mastering such issues will be one of the touchstones of responsible AI. For business leaders using AI technology, being conscious of these issues will grow in importance, along with public and regulatory interest. According to Gartner, by next year 60% of AI providers will offer ways of mitigating possible harm alongside the technology itself.

Business leaders must plan thoroughly for responsible AI and create their own definition of what this means for their organisation, by identifying the risks and assessing where bias can creep in. They need to engage with stakeholders to understand potential problems and determine how to move forward with best practices. Using AI responsibly will be a long journey, and one that will require constant attention from leadership.

The rewards of using AI responsibly, and rooting out bias wherever it creeps in, will be considerable, allowing business leaders to improve their reputation for trust, fairness and accountability, while delivering real value to their organisation, to customers and to society as a whole.

Businesses need to deal with this at board level to ensure bias is dealt with and AI is used responsibly across the whole organisation. This could include launching their own Responsible AI board to ensure that all AI applications are evaluated for bias and other problems. Leaders also need to address the broader problem of women in STEM, particularly in data science. Women - especially those in leadership roles - will be central to solving the issue of gender bias in AI.

An AI-driven future

Understanding the problem of gender bias and working towards effective ways of dealing with it will be vitally important to forward-thinking organisations hoping to use AI to unlock the value of their data.

Thinking carefully about how AI is used across an organisation, using tools to detect bias and ensure transparency, will help. But business leaders also need to take a broader view of where their data comes from, how it is used, and what steps are being taken to avoid bias. Doing so will be essential to unlocking the value of their data - and creating an inclusive future where AI can work to its fullest potential.

View additional multimedia and more ESG storytelling from Lenovo on 3blmedia.com.

Contact Info:
Spokesperson: Lenovo
Website: https://www.3blmedia.com/profiles/lenovo
Email: info@3blmedia.com

SOURCE: Lenovo

View source version on accesswire.com:
https://www.accesswire.com/761013/AI-Needs-To-Deal-With-Gender-Bias-Or-It-Will-Never-Reach-Its-Potential

© 2023 ACCESS Newswire