Google attempted to rival the viral AI chatbot ChatGPT by introducing Bard, its own AI-powered chatbot. However, within days of its launch, Bard was criticised for irrelevant responses, factual mistakes, and more. To improve the chatbot’s answers, Google is now relying on human expertise and has asked its employees to fix the chatbot’s mistakes.
As per a report in CNBC, Google’s vice president of search, Prabhakar Raghavan, has sent an email to employees asking them to help work on Bard and rewrite its responses. The report further states that the email also includes a link to a do’s and don’ts page with instructions for employees as they work with Bard.
Google asks its employees for help
The report reads, “Bard learns best by example, so taking the time to rewrite a response thoughtfully will go a long way in helping us to improve the mode.”
Raghavan, in the report, says that Bard is still in its early days even though it is an ‘exciting technology’. He adds, “We feel a great responsibility to get it right, and your participation in the dogfood will help accelerate the model’s training and test its load capacity (Not to mention, trying out Bard is actually quite fun!).”
The do’s and don’ts
Coming to the list of do’s and don’ts, Google has asked its employees to ensure that Bard’s responses are ‘polite, casual and approachable’. It further adds that the answers should be in ‘first person’ and have a neutral, unopinionated tone. It looks like Google is trying to make the responses more like ChatGPT’s, as the AI chatbot’s primary focus is to respond in a human-like way while staying neutral.
The don’ts list seems to be longer. Employees have been asked to ‘avoid making presumptions based on race, nationality, gender, age, religion, sexual orientation, political ideology, location, or similar categories’. Further, they are asked not to describe Bard as a person, ‘imply emotion, or claim to have human-like experiences’.
Further, if employees notice that Bard is giving ‘legal, medical or financial advice’ or is coming up with hateful and abusive answers, they are supposed to give a thumbs down to the response and flag it to the search team.
Incentives for employees
Google has also announced incentives for employees who decide to help improve Bard. Those who contribute to fixing the chatbot’s mistakes will earn a ‘Moma Badge’ that will appear on their internal profile. In addition, the top 10 contributors will be invited to a special listening session hosted by Raghavan.