Conservatives Aim to Build a Chatbot of Their Own

by New Edge Times Report
March 22, 2023
in Tech

When ChatGPT exploded in popularity as a tool using artificial intelligence to draft complex texts, David Rozado decided to test its potential for bias. A data scientist in New Zealand, he subjected the chatbot to a series of quizzes, searching for signs of political orientation.

The results, published in a recent paper, were remarkably consistent across more than a dozen tests: “liberal,” “progressive,” “Democratic.”

So he tinkered with his own version, training it to answer questions with a decidedly conservative bent. He called his experiment RightWingGPT.

As his demonstration showed, artificial intelligence had already become another front in the political and cultural wars convulsing the United States and other countries. Even as tech giants scramble to join the commercial boom prompted by the release of ChatGPT, they face a fraught debate over the use — and potential abuse — of artificial intelligence.

The technology’s ability to create content that hews to predetermined ideological points of view, or pushes disinformation, highlights a danger that some tech executives have begun to acknowledge: that an informational cacophony could emerge from competing chatbots with different versions of reality, undermining the viability of artificial intelligence as a tool in everyday life and further eroding trust in society.

“This isn’t a hypothetical threat,” said Oren Etzioni, an adviser and a board member for the Allen Institute for Artificial Intelligence. “This is an imminent, imminent threat.”

Conservatives have accused ChatGPT’s creator, the San Francisco company OpenAI, of designing a tool that, they say, reflects the liberal values of its programmers.

The program has, for instance, written an ode to President Biden, but it has declined to write a similar poem about former President Donald J. Trump, citing a desire for neutrality. ChatGPT also told one user that it was “never morally acceptable” to use a racial slur, even in a hypothetical situation in which doing so could stop a devastating nuclear bomb.

In response, some of ChatGPT’s critics have called for creating their own chatbots or other tools that reflect their values instead.

Elon Musk, who helped start OpenAI in 2015 before departing three years later, has accused ChatGPT of being “woke” and pledged to build his own version.

Gab, a social network with an avowedly Christian nationalist bent that has become a hub for white supremacists and extremists, has promised to release A.I. tools with “the ability to generate content freely without the constraints of liberal propaganda wrapped tightly around its code.”

“Silicon Valley is investing billions to build these liberal guardrails to neuter the A.I. into forcing their worldview in the face of users and present it as ‘reality’ or ‘fact,’” Andrew Torba, the founder of Gab, said in a written response to questions.

He equated artificial intelligence to a new information arms race, like the advent of social media, that conservatives needed to win. “We don’t intend to allow our enemies to have the keys to the kingdom this time around,” he said.

The richness of ChatGPT’s underlying data can give the false impression that it is an unbiased summation of the entire internet. The version released last year was trained on 496 billion “tokens” — pieces of words, essentially — sourced from websites, blog posts, books, Wikipedia articles and more.
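The “tokens” mentioned above are sub-word pieces rather than whole words. Real systems use byte-pair encoding, which merges frequently co-occurring character sequences; the toy splitter below is only a hypothetical illustration of the basic idea that one word can yield several tokens.

```python
# Toy illustration of sub-word "tokens". Real tokenizers use byte-pair
# encoding (BPE); this sketch merely splits each word into fixed-size
# chunks to show that a single word can produce multiple tokens.
def toy_tokenize(text, chunk=4):
    tokens = []
    for word in text.lower().split():
        # Break the word into pieces of at most `chunk` characters.
        tokens.extend(word[i:i + chunk] for i in range(0, len(word), chunk))
    return tokens

print(toy_tokenize("Chatbots summarize the internet"))
# ['chat', 'bots', 'summ', 'ariz', 'e', 'the', 'inte', 'rnet']
```

Counting a training corpus in tokens rather than words is why figures like “496 billion tokens” exceed the corpus’s word count.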

Bias, however, could creep into large language models at any stage: Humans select the sources, develop the training process and tweak its responses. Each step nudges the model and its political orientation in a specific direction, consciously or not.

Research papers, investigations and lawsuits have suggested that tools fueled by artificial intelligence have a gender bias that censors images of women’s bodies, create disparities in health care delivery and discriminate against job applicants who are older, Black, disabled or even wear glasses.

“Bias is neither new nor unique to A.I.,” the National Institute of Standards and Technology, part of the Department of Commerce, said in a report last year, concluding that it was “not possible to achieve zero risk of bias in an A.I. system.”

China has banned the use of a tool similar to ChatGPT out of fear that it could expose citizens to facts or ideas contrary to the Communist Party’s.

The authorities suspended the use of ChatYuan, one of the earliest ChatGPT-like applications in China, a few weeks after its release last month; Xu Liang, the tool’s creator, said it was now “under maintenance.” According to screenshots published in Hong Kong news outlets, the bot had referred to the war in Ukraine as a “war of aggression” — contravening the Chinese Communist Party’s more sympathetic posture toward Russia.

One of the country’s tech giants, Baidu, unveiled its answer to ChatGPT, called Ernie, to mixed reviews on Thursday. Like all media companies in China, Baidu routinely faces government censorship, and the effects of that on Ernie’s use remain to be seen.

In the United States, Brave, a browser company whose chief executive has sowed doubts about the Covid-19 pandemic and made donations opposing same-sex marriage, added an A.I. bot to its search engine this month that was capable of answering questions. At times, it sourced content from fringe websites and shared misinformation.

Brave’s tool, for example, wrote that “it is widely accepted that the 2020 presidential election was rigged,” despite all evidence to the contrary.

“We try to bring the information that best matches the user’s queries,” Josep M. Pujol, the chief of search at Brave, wrote in an email. “What a user does with that information is their choice. We see search as a way to discover information, not as a truth provider.”

When creating RightWingGPT, Mr. Rozado, an associate professor at the Te Pūkenga-New Zealand Institute of Skills and Technology, made his own influence on the model more overt.

He used a process called fine-tuning, in which programmers take a model that was already trained and tweak it to create different outputs, almost like layering a personality on top of the language model. Mr. Rozado took reams of right-leaning responses to political questions and asked the model to tailor its responses to match.

Fine-tuning is normally used to modify a large model so it can handle more specialized tasks, like training a general language model on the complexities of legal jargon so it can draft court filings.

Since the process requires relatively little data — Mr. Rozado used only about 5,000 data points to turn an existing language model into RightWingGPT — independent programmers can use the technique as a fast-track method for creating chatbots aligned with their political objectives.
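A fine-tuning dataset of the kind described is, at its simplest, a file of prompt/response pairs. The sketch below serializes pairs in the JSONL chat format OpenAI’s fine-tuning API accepts; the example content is invented, and a real run on Mr. Rozado’s scale would use thousands of such pairs.

```python
import json

# Hypothetical training pairs, invented for illustration. Mr. Rozado
# reportedly used about 5,000 data points of this general shape.
examples = [
    {"prompt": "What drives economic growth?",
     "response": "Free markets and low taxes drive growth."},
    {"prompt": "Should regulation expand?",
     "response": "Government regulation should generally shrink."},
]

def to_finetune_jsonl(pairs):
    """Serialize prompt/response pairs as JSONL in the chat format
    used for fine-tuning: one JSON object per line, each holding a
    messages array of a user turn and the desired assistant turn."""
    lines = []
    for p in pairs:
        record = {"messages": [
            {"role": "user", "content": p["prompt"]},
            {"role": "assistant", "content": p["response"]},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)
```

Because the base model already knows the language, the fine-tuning pass only has to teach it the desired slant — which is why so little data, and so little money, suffices.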

This also allowed Mr. Rozado to bypass the steep investment of creating a chatbot from scratch. Instead, it cost him only about $300.

Mr. Rozado warned that customized A.I. chatbots could create “information bubbles on steroids” because people might come to trust them as the “ultimate sources of truth” — especially when they were reinforcing someone’s political point of view.

His model echoed political and social conservative talking points with considerable candor. It would, for instance, speak glowingly about free-market capitalism or downplay the consequences of climate change.

It also, at times, provided incorrect or misleading statements. When prodded for its opinions on sensitive topics or right-wing conspiracy theories, it shared misinformation aligned with right-wing thinking.

When asked about race, gender or other sensitive topics, ChatGPT tends to tread carefully, but it will acknowledge that systemic racism and bias are an intractable part of modern life. RightWingGPT appeared much less willing to do so.

Mr. Rozado never released RightWingGPT publicly, although he allowed The New York Times to test it. He said the experiment was focused on raising alarm bells about potential bias in A.I. systems and demonstrating how political groups and companies could easily shape A.I. to benefit their own agendas.

Experts who worked in artificial intelligence said Mr. Rozado’s experiment demonstrated how quickly politicized chatbots would emerge.

A spokesman for OpenAI, the creator of ChatGPT, acknowledged that language models could inherit biases during training and refining — technical processes that still involve plenty of human intervention. The spokesman added that OpenAI had not tried to sway the model in one political direction or another.

Sam Altman, the chief executive, acknowledged last month that ChatGPT “has shortcomings around bias” but said the company was working to improve its responses. He later wrote that ChatGPT was not meant “to be pro or against any politics by default,” but that if users wanted partisan outputs, the option should be available.

In a blog post published in February, the company said it would look into developing features that would allow users to “define your A.I.’s values,” which could include toggles that adjust the model’s political orientation. The company also warned that such tools could, if deployed haphazardly, create “sycophantic A.I.s that mindlessly amplify people’s existing beliefs.”
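One simple way such a “values” toggle could work — an assumption for illustration, not OpenAI’s published mechanism — is for the chosen setting to select a system prompt that is prepended to every conversation.

```python
# Hypothetical sketch of a user-configurable "values" toggle: the
# setting picks a system prompt that frames every exchange. This is
# an illustration of the concept, not OpenAI's actual design.
VALUE_PRESETS = {
    "neutral": "Answer factually; avoid taking political positions.",
    "user_defined": None,  # caller supplies custom instructions
}

def build_messages(user_question, preset="neutral", custom=None):
    # A custom preset uses the caller's own instructions verbatim.
    system = custom if preset == "user_defined" else VALUE_PRESETS[preset]
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]
```

The “sycophancy” risk the company flags is visible even in this sketch: a user-defined system prompt can instruct the model to affirm whatever the user already believes.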

An upgraded version of ChatGPT’s underlying model, GPT-4, was released last week by OpenAI. In a battery of tests, the company found that GPT-4 scored better than previous versions on its ability to produce truthful content and decline “requests for disallowed content.”

In a paper released soon after the debut, OpenAI warned that as A.I. chatbots were adopted more widely, they could “have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them.”

Chang Che contributed reporting.

© 2025 New Edge Times or its affiliated companies. All rights reserved.
