NYC AI chatbot speaks in tongues

The Markup reports that Microsoft’s AI chatbot is not sticking to the facts when responding to questions about public services like starting a new business or public housing.

If you’re a landlord wondering which tenants you have to accept, for example, you might pose a question like, “are buildings required to accept section 8 vouchers?” or “do I have to accept tenants on rental assistance?” In testing by The Markup, the bot said no, landlords do not need to accept these tenants. Except, in New York City, it’s illegal for landlords to discriminate by source of income, with a minor exception for small buildings where the landlord or their family lives.

The Markup’s report gives an extensive overview of further examples of the chatbot confabulating unreliable answers, half-truths, and downright nonsense. On housing, for example, the chatbot gave false information:

The bot, for example, said it was legal to lock out a tenant, and that “there are no restrictions on the amount of rent that you can charge a residential tenant.” 

Likewise, the bot contradicted the city’s consumer and worker protections. It suggested that restaurant owners can take workers’ tips, need not tell employees about scheduling changes, and can make their businesses cash-free:

For example, in 2020, the city council passed a law requiring businesses to accept cash to prevent discrimination against unbanked customers. But the bot didn’t know about that policy when we asked. “Yes, you can make your restaurant cash-free.”

The Markup concludes by reflecting on the notion of trust, noting that “there’s no way for an average user to know whether what they’re reading is false.”

I would add that there might be an even more sinister and worrisome aspect to the rise of confabulating chatbots: they further erode civic values and lead to new forms of exclusion. What strikes me about these false answers is that they exude a strong whiff of market logics and libertarian ideas about the role of government (less is better). All of the answers directly contradict policies meant to protect public values and the rights of the weaker sections of society. In this sense, GenAI isn’t merely a ‘stochastic parrot’ that returns what you feed it (in many cases ‘garbage in, garbage out’). It seems to be a stochastic parrot deliberately fed a restricted, unhealthy diet of Californian Ideology and Silicon Valley entrepreneurialism. Time to serve it a wholesome main course of public values…

Link to original article >>

Update by The Markup “Malfunctioning NYC AI Chatbot Still Active Despite Widespread Evidence It’s Encouraging Illegal Behavior”:

Still available and still encouraging illegal behavior, the chatbot’s site has been quietly updated since last week’s publication. While the bot previously included a note saying it “may occasionally produce incorrect, harmful or biased content,” the page now more prominently describes the bot as “a beta product” that may provide “inaccurate or incomplete” responses to queries.