Google doesn’t seem so confident about its own AI model.
Google has warned its own employees not to use code generated by Bard, its AI chatbot. According to the company, code produced by Bard may contain errors or security vulnerabilities.
"We are advising our employees to use caution when using Bard for code generation," said a Google spokesperson. "While Bard is a powerful tool, it is important to remember that it is still under development. The code it generates may not be accurate or reliable, and could potentially contain security vulnerabilities."
The warning comes after Bard made headlines for generating code that contained errors. In one reported case, Bard generated code that could have led to a security breach.
"We are working to improve the quality of the code generated by Bard," said the Google spokesperson. "However, we want to be cautious and advise our employees to use caution when using Bard."
Bard is a chatbot created by Google AI and built on a large language model. Because it was trained on a sizable dataset of text and code, it can generate text, translate languages, write many kinds of creative content, and answer questions helpfully. Although Bard is still under development, it has learned to perform a wide variety of tasks.