Internal staff reports plead with Google: 'Please don't launch' Bard AI.
From the outside, Google Bard looks like a rushed product trying to compete with ChatGPT, and some Googlers share those sentiments. A new report from Bloomberg, based on interviews with 18 current and former employees, contains a slew of damning comments and concerns about AI ethics teams being "disempowered and demoralized" so Google could get Bard out the door.
According to the report, Google employees were asked to test a pre-release version of Bard and give feedback, which was largely ignored so that Bard could launch sooner. Internal discussions viewed by Bloomberg called Bard "cringeworthy" and "a pathological liar." When one employee asked Bard how to land a plane, it gave incorrect instructions that would have led to a crash. Another employee asked for diving instructions and got answers that "would likely result in serious injury or death." One employee summed up Bard's problems in a February post titled "Bard is worse than useless: please don't launch." Bard launched in March anyway.
You could probably say many of the same things about the AI competitor Google is chasing, OpenAI's ChatGPT. Both can produce biased or false information and hallucinate incorrect answers. Google is well behind, and the company is reportedly panicking over ChatGPT's ability to answer questions people might otherwise type into Google Search. OpenAI has itself been criticized for taking a lax approach to AI safety and ethics. That leaves Google in a difficult position: if its only goal is to appease the stock market and catch up with ChatGPT, slowing down to consider ethical issues works against that goal.
Meredith Whittaker, a former Google executive and president of the Signal Foundation, told Bloomberg that "artificial intelligence ethics have taken a back seat at Google," warning that if ethics aren't prioritized over profit and growth, they ultimately won't work. Several of Google's AI ethics leaders have been fired or have left the company in recent years, and Bloomberg says Google's AI ethics reviews today are "almost entirely voluntary."
Employees who try to slow down a release at Google over ethics concerns apparently do so at some risk to their careers. The report says, "One former employee said they asked to work on fairness in machine learning and were routinely discouraged — to the point that it affected their performance evaluations. Managers protested that it interfered with their 'real work.'"