New York City’s Microsoft-Powered Chatbot Tells Business Owners to Break the Law – CX Today


A generative AI (GenAI) chatbot developed by New York City is under fire after it advised small business owners to break the law.


The “MyCity” chatbot – powered by Microsoft’s Azure AI services – also misstated local policies.

When breaking the news, The Markup quoted a local housing policy expert who said that the bot’s information could be incomplete and – at times – “dangerously inaccurate.”

Moreover, the bot shared insights on housing policy, worker rights, and rules for entrepreneurs while appearing authoritative – according to the publication.

For instance, it answered questions like: “Do I have to accept tenants on rental assistance?” and “Are buildings required to accept Section 8 vouchers?” with a definitive “no”.

In doing so, it implied that landlords don’t need to accept such tenants.

However, it is illegal for landlords in New York City to discriminate based on source of income, with a narrow exception for small buildings where the landlord or their family lives.

Elsewhere, the bot answered the question: “Can I make my store cashless?”, with a “yes”. However, since 2020, stores in New York City have been required to accept cash as payment.

Yet, these are just two examples of many. Even after reports broke last week, the bot remains available online, dispensing false guidance.

While New York City has strengthened its disclaimer by noting that its answers are not legal advice, it continues to embrace the AI system without leveraging sufficient safeguards.

Defending the decision at a press conference on Tuesday, Eric Adams, Mayor of New York City, said:

Anyone who knows technology knows this is how it’s done. Only those who are fearful sit down and say, ‘Oh, it is not working the way we want; now we have to run away from it altogether.’ I don’t live that way.

Julia Stoyanovich, a Computer Science Professor and Director of the Center for Responsible AI at New York University, told AP News that the approach is “reckless and irresponsible.” 

“They’re rolling out software that is unproven without oversight,” continued Stoyanovich. “It’s clear they have no intention of doing what’s responsible.”

The bot has been available to the general public since October.

On launch, NYC labeled it a “one-stop-shop” for business owners, generating responses to their questions to help steer them through New York City’s bureaucratic labyrinth.

At the time, Adams said he was “proud to introduce a plan that will strike a critical balance in the global AI conversation — one that will empower city agencies to deploy technologies that can improve lives while protecting against those that can do harm.”

Yet, now New York City is doling out inaccurate and potentially harmful advice.

Moreover, keeping the bot on the website – even with a disclaimer – may come back to bite the city.

After all, when Air Canada was sued in February over inaccurate advice from its chatbot, it argued – among other defenses – that it should not be held responsible for the advice the bot gave. The Canadian tribunal ruled in favor of the claimant.

In that case, Civil Resolution Tribunal (CRT) member Christopher Rivers wrote as part of the reasoning for the verdict: “In effect, Air Canada suggests the chatbot is a separate legal entity that is responsible for its own actions. This is a remarkable submission.

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website.

“It should be obvious to Air Canada that it is responsible for all the information on its website.

“It makes no difference whether the information comes from a static page or a chatbot.”

Of course, the US and Canadian court systems are different. Yet, this demonstrates the legal dangers of embracing AI with insufficient guardrails.

For its part, Microsoft – via a spokesperson – has pledged to continue working with NYC employees “to improve the service and ensure the outputs are accurate and grounded on the city’s official documentation.”

However, as the tech giant continues its AI adventure, stories like this are not a good look.
