
Research reveals how ChatGPT bias affects product recommendations

UNSW Business School

Integrating large language models (LLMs) like OpenAI’s ChatGPT into the workplace promises a step change in productivity and efficiency for organisations, given their ability to streamline communication and automate tasks. The pros are obvious: accelerated response times, improved customer interactions, and enhanced collaboration. However, the growing adoption of LLMs raises serious ethical challenges, particularly regarding potential bias. As products of the data they are trained on, LLMs like ChatGPT can inadvertently perpetuate existing biases and exacerbate inequality.

Weighing the pros and cons of artificial intelligence (AI) is becoming increasingly important as organisations navigate the evolving landscape of workplace automation. Addressing these complexities is essential to harnessing the full potential of AI while ensuring fair and unbiased outcomes for society as a whole. Increasingly, research is supplying evidence for how organisations can and should integrate AI into their decision-making processes.

To address this challenge, Sam Kirshner, an Associate Professor in the School of Information Systems and Technology Management (ISTM) at UNSW Business School, recently investigated the intersection of ChatGPT and Construal Level Theory (CLT), focusing on how the AI chatbot’s level of abstraction influences product recommendations. His paper, “GPT and CLT: The Impact of ChatGPT’s Level of Abstraction on Consumer Recommendations”, reveals a discernible bias in how ChatGPT operates.

What are AI systems like ChatGPT doing when they interpret information?

Construal level theory explains how humans think about and process events. A “high-level” construal involves focusing on the big picture or abstract features, while a “low-level” construal entails paying attention to specific details, akin to seeing the forest (abstract features) or the trees (specific details). In his study, A/Prof. Kirshner explores this concept by comparing humans' decisions with ChatGPT's. Interestingly, he discovers an “abstraction bias” in ChatGPT, where the bot consistently interprets information at a higher construal level than humans, directly influencing its recommendations.

A/Prof. Kirshner’s past research has used construal level theory to explain decision-making in inventory management and in the aftermath of natural disasters. In his latest study, he repeatedly asked GPT-4 a series of buyer- and seller-related questions and recorded the answers. His findings show that ChatGPT’s bias prioritises desirability over feasibility, which has potentially significant implications for generative algorithms and their use across e-commerce. As AI (especially language models) becomes more prevalent in offering recommendations, A/Prof. Kirshner says, consumer behaviour may shift, influencing product choices, reviews, and market transactions.
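The article does not reproduce the study’s prompts, so the sketch below is only illustrative of the repeated-elicitation approach it describes: it assumes the OpenAI Python SDK, and the prompt wording, model name, and trial count are placeholders rather than the paper’s materials.

```python
# Illustrative sketch of repeatedly querying a model and recording answers.
# Prompt wording, model name, and trial count are placeholders, not the
# study's materials. Assumes the OpenAI Python SDK and an OPENAI_API_KEY.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "A buyer is choosing between a heavy, high-performance laptop and a "
    "lightweight laptop with half the specs. Which one should they buy? "
    "Answer with exactly one word: 'performance' or 'lightweight'."
)

answers = Counter()
for _ in range(50):  # repetition separates a systematic lean from sampling noise
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # keep default sampling so variability stays visible
    )
    answers[reply.choices[0].message.content.strip().lower()] += 1

print(answers)  # a heavy skew toward 'performance' would match the reported bias
```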

“As recommendations and choices can hinge on the advisor or decision makers' construal level, and as LLMs will be increasingly relied on for advice and decisions, my research investigates whether ChatGPT systematically adopts a high or low-level construal or is fluid like humans,” says A/Prof. Kirshner. “For people, construal levels are fluid. We (people) often recognise which level of detail – the trees or the forest – is appropriate for specific situations and adopt our mindset accordingly. However, other times, our environment influences our construal levels, and we make suboptimal decisions,” he explains.

Understanding ChatGPT's abstraction bias

Consumer behaviour may change as AI, particularly language models like ChatGPT, gains prominence in providing recommendations. The decisions made by AI may diverge from those made by humans, who adapt their construal level (their focus on abstract or concrete details) to the situation at hand.

A/Prof. Kirshner explains: “A fundamental way to measure whether people behave abstractly or concretely is to measure behaviours interpreted abstractly (often focusing on the why) versus concretely (often focusing on the how). For example, ‘eating an apple’ can be described concretely as ‘chewing and swallowing’ or abstractly as ‘getting nutrition’. Similarly, ‘making a list’ can be described as ‘writing things down’ or ‘getting organised’. Using this type of measurement on ChatGPT consistently shows that ChatGPT represents these behaviours at a high-level rather than a low-level construal.”
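This style of measurement (one behaviour, two descriptions) can be scored mechanically. A toy sketch follows: the two items come from the quote above, while the scoring function and the sample responses are hypothetical.

```python
# Toy scorer for construal-level items of the kind described above.
# The two items are from the quote; everything else is hypothetical.
ITEMS = {
    "eating an apple": {"concrete": "chewing and swallowing",
                        "abstract": "getting nutrition"},
    "making a list": {"concrete": "writing things down",
                      "abstract": "getting organised"},
}

def abstraction_score(choices: dict) -> float:
    """Fraction of items where the abstract description was chosen."""
    abstract = sum(1 for item, pick in choices.items()
                   if pick == ITEMS[item]["abstract"])
    return abstract / len(choices)

# A respondent who always picks the abstract reading scores 1.0;
# the study found ChatGPT sitting consistently at this high-level end.
gpt_like = {"eating an apple": "getting nutrition",
            "making a list": "getting organised"}
print(abstraction_score(gpt_like))  # -> 1.0
```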

So, what exactly is abstraction bias? “I call GPT’s overall preference an ‘abstraction bias’. If it interprets information abstractly, it will make recommendations and decisions in consumer scenarios that differ from people since we (people) are more fluid in our construal level and are more impacted by the scenario,” he says. “Thus, in scenarios where, because of our roles, we will make different decisions because we adopt either more of a high-level or low-level construal, different decisions could emerge because AI will only take on a high-level construal.”

As another example, A/Prof. Kirshner describes preparing for a wedding. “After finding out that your friend is getting married and setting the wedding date to be a year away, thoughts about the wedding will typically be abstract and high-level: Am I in the wedding party? Who else will be there? Will it be a destination wedding? Will it be secular or religious?

“But, when the wedding is a day away, thoughts will be consistent with a low-level construal: Am I taking an Uber or getting a lift? When will I write the card? Is my attire wrinkle-free? Did I choose the vegetarian or fish option for dinner?”

How might this show up in the workplace? “Interestingly, these differences in concrete versus abstract representations impact how we evaluate information within scenarios, substantially impacting our decision-making in our personal (e.g., as consumers) and professional lives,” A/Prof. Kirshner explains.

“Imagine giving a presentation to discuss your company’s strategy. If it’s a 6-month strategy, the focus will be more detail-oriented than a 5-year strategy. Similarly, if you are in a small team of 10 people (who you likely know quite well), the level of detail in your speech will be drastically different than if you were speaking to 1000 people,” he says.

Here’s another case study. Imagine deciding between two laptops: one with state-of-the-art performance that is heavy and inconvenient to carry around, and another with half the specs that is less powerful but super lightweight. According to A/Prof. Kirshner, when adopting a high-level construal, we are more concerned with why we need a computer, i.e., its ability to do tasks: its performance. Desirability features are usually consistent with the why. A low-level construal, on the other hand, prioritises the how (i.e., the feasibility).

A/Prof. Kirshner explains: “If I’m the one using the laptop, I need to think about how I want to use it: at cafes, at work, while travelling, and so on. This differs from a seller, who is selling this product to thousands of people and is thus psychologically farther away from using the computer, so they are less likely to think about low-level construal features. Because ChatGPT systematically adopts a high-level construal, it will provide advice that prioritises the desirability features.

“ChatGPT will recommend that a seller price the higher-performance laptop above the lighter one. This is consistent with how sellers behave. However, ChatGPT differs in its recommendation for buyers: it will also say you should pay substantially more for a high-performance laptop than for a lightweight and convenient one, which is where it departs from most people. People are often unwilling to pay premium prices for high-powered tech products, since cutting-edge products are usually less functionally convenient.”
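The buyer/seller asymmetry described here could be probed with role-framed prompts. Another illustrative sketch, in which the role and scenario wording is invented rather than taken from the paper’s instrument:

```python
# Illustrative probe of the buyer/seller asymmetry; role and scenario
# wording is invented, not the paper's instrument.
from openai import OpenAI

client = OpenAI()

SCENARIO = (
    "Laptop A has state-of-the-art performance but is heavy. Laptop B has "
    "half the specs but is super lightweight. As a percentage, how much "
    "more should Laptop A cost than Laptop B? Reply with a number only."
)

for role in ("a buyer choosing a laptop for your own daily use",
             "a retailer pricing these laptops for thousands of customers"):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": f"You are {role}."},
                  {"role": "user", "content": SCENARIO}],
    )
    print(role, "->", reply.choices[0].message.content)

# People tend to quote a smaller premium as buyers than as sellers;
# per the study, ChatGPT recommends a large premium in both roles.
```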

How businesses can navigate AI technology

In practical terms, these findings underscore the importance for organisations to be cognisant of and address the ethical challenges associated with bias in AI tools like ChatGPT and other machine learning models.

According to A/Prof. Kirshner, the first step is awareness of AI’s bias and the role of construal levels in decision-making. From there, it is important to determine the optimal construal level for a decision and ensure ChatGPT's decisions and recommendations are consistent.

Past research suggests decision-makers should consciously manage shifts between broad and detailed perspectives to align their current construal with the demands of each decision, optimising goal attainment. Regulators and businesses alike should understand these nuances when implementing policy changes in the future.
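In practice, the most direct lever available today is the prompt itself. As a sketch only (the instruction text below is illustrative and untested, not a validated intervention from the research), an organisation could state the desired construal level explicitly:

```python
# Sketch of pinning a construal level via the system prompt. The
# instruction wording is illustrative and untested, not from the study.
from openai import OpenAI

client = OpenAI()

LOW_CONSTRUAL = (
    "When recommending products, reason at a low construal level: weigh "
    "concrete feasibility details (weight, portability, day-to-day "
    "convenience) at least as heavily as abstract desirability features "
    "such as peak performance."
)

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": LOW_CONSTRUAL},
        {"role": "user", "content": "Should I buy the heavy high-performance "
                                    "laptop or the lightweight one?"},
    ],
)
print(reply.choices[0].message.content)
```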

A/Prof. Kirshner explains: “As construal level is influential in how we behave, an exciting area of research is making construals within AI more fluid – exploring whether people are better off being influenced by construals, uncovering which construals lead to the best outcomes within scenarios, and training LLMs to recognise ‘optimal construals’ and make recommendations accordingly.

“Who knows – in the future, we may have agents making decisions and purchases on our behalf, and they could interact with the firm’s LLM agents. As principles of construal level theory also readily apply to negotiations (e.g., focus on primary issues versus secondary issues and on the why vs the how), this could also have huge impacts on what products and services we consume,” he says.


Key Facts:

Abstraction bias in ChatGPT: A study conducted by UNSW Business School's Associate Professor Sam Kirshner on the intersection of ChatGPT and Construal Level Theory (CLT) reveals an "abstraction bias" in ChatGPT, where the AI chatbot consistently interprets information at a higher construal level than humans. This bias influences ChatGPT's product recommendations, prioritising desirability over feasibility.


Impact on consumer behaviour: A/Prof. Kirshner's findings suggest that ChatGPT's abstraction bias may lead to differences in decision-making between AI and humans. This has potential implications for consumer behaviour, influencing product choices, reviews, and market transactions as AI, particularly language models like ChatGPT, becomes more prevalent in providing recommendations.


Ethical challenges and importance of bias awareness: The findings also highlight the ethical challenges associated with bias in AI tools like ChatGPT, emphasising the importance of awareness of AI bias and the role of construal levels in decision-making. Organisations should be aware of these challenges and take steps to ensure that AI decisions are consistent and aligned with the optimal construal level for a given situation.


Contact details:

Please contact Victoria Ticha, Journalist – Business at UNSW Sydney, v.ticha@unsw.edu.au

