Saturday, November 15, 2025

Stanford researchers take down their ChatGPT-like AI, Alpaca, less than a week after launch


Artificial intelligence (AI) researchers at Stanford developed their ChatGPT-like chatbot demo, Alpaca, in less than two months, but have now ended the demo, citing hosting costs and the shortcomings of the large language model's (LLM) content filters and behavior.

According to JEE News, the takedown was announced less than a week after the chatbot was released.

The source code for Stanford's ChatGPT-like model, developed for less than $600, is publicly available.

According to the researchers, their chatbot's performance was similar to that of OpenAI's GPT-3.5.

The researchers said in their announcement that Alpaca is intended only for academic research, with no general release planned for the near future.

Alpaca researcher Tatsunori Hashimoto of the Department of Computer Science said: "We think the interesting work is in developing methods on top of Alpaca [since the dataset itself is just a combination of known ideas], so we don't have current plans around creating more datasets of the same kind or scaling up the model."

Alpaca was built on Meta AI's LLaMA 7B model, and its training data was generated with a method known as self-instruct.
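The self-instruct recipe can be sketched roughly as follows. This is a simplified illustration rather than the Alpaca team's actual pipeline, and `query_teacher_model` is a hypothetical stand-in for an API call to a stronger model; here it fabricates placeholder tasks so the sketch runs on its own.

```python
import itertools
import random

# A few hand-written seed tasks; the real Alpaca pipeline started from
# 175 human-written seed instructions.
SEED_TASKS = [
    {"instruction": "Translate 'hello' into French.", "output": "bonjour"},
    {"instruction": "List three primary colors.", "output": "red, yellow, blue"},
    {"instruction": "What is 2 + 2?", "output": "4"},
]

def build_prompt(examples):
    """Format in-context examples and ask the teacher for a new task."""
    lines = [f"Instruction: {ex['instruction']}\nOutput: {ex['output']}"
             for ex in examples]
    lines.append("Instruction:")  # the teacher model continues from here
    return "\n\n".join(lines)

_ids = itertools.count(1)

def query_teacher_model(prompt):
    """Hypothetical stand-in for a call to a stronger LLM.
    Here it just fabricates a numbered placeholder task."""
    return {"instruction": f"Generated task {next(_ids)}",
            "output": "placeholder answer"}

def self_instruct(n_new, rng=None):
    """Grow the task pool by repeatedly prompting the teacher with a
    random sample of existing tasks, de-duplicating as we go."""
    rng = rng or random.Random(0)
    pool = list(SEED_TASKS)
    seen = {t["instruction"] for t in pool}
    while len(pool) < len(SEED_TASKS) + n_new:
        prompt = build_prompt(rng.sample(pool, k=min(3, len(pool))))
        task = query_teacher_model(prompt)
        if task["instruction"] not in seen:  # crude de-duplication
            seen.add(task["instruction"])
            pool.append(task)
    return pool

dataset = self_instruct(n_new=5)
print(len(dataset))  # 8 instruction/output pairs ready for fine-tuning
```

The appeal of the approach is its cost: a small pool of human-written seeds is expanded cheaply by machine, which is how the team kept the total bill under $600.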

"As soon as the LLaMA model came out, the race was on," noted Associate Professor Douwe Kiela.

Kiela, who previously worked as an AI researcher at Facebook, said that "someone was going to be the first to instruction-tune the model, and so the Alpaca team was the first … and that is one of the reasons it went viral."

“It’s a really, really cool, simple idea, and they executed it really well.”

"The LLaMA base model is trained on Internet data to predict the next word, and instruction fine-tuning modifies the model to prefer completions that follow instructions over those that don't," Hashimoto said.
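In practice, this usually means reusing the same next-token training objective but computing the loss only on the response tokens, so the model learns to complete instructions rather than merely continue text. A minimal sketch of that label masking, using a toy whitespace tokenizer as a stand-in for the real LLaMA tokenizer:

```python
# Many training frameworks use -100 as the label for positions
# that should be skipped when computing the loss.
IGNORE_INDEX = -100

def tokenize(text):
    """Toy whitespace tokenizer standing in for the real LLaMA tokenizer."""
    return text.split()

def build_training_example(instruction, response):
    """Concatenate prompt and response into one token sequence, and mask
    the prompt positions so the next-token loss is computed only on the
    response tokens."""
    prompt_tokens = tokenize(f"Instruction: {instruction} Response:")
    response_tokens = tokenize(response)
    input_tokens = prompt_tokens + response_tokens
    labels = [IGNORE_INDEX] * len(prompt_tokens) + response_tokens
    return input_tokens, labels

tokens, labels = build_training_example("Name a primary color.", "Red.")
print(tokens)
print(labels)
```

The base model's pre-training and this fine-tuning stage share the same loss function; only the masking (and the curated instruction data) differs, which is why the method is comparatively cheap.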

Alpaca's source code is available on GitHub, a source code sharing platform, and has been viewed 17,500 times. More than 2,400 people have used the code for their own models.

"I think much of Alpaca's observed performance comes from LLaMA, and so the underlying language model is still a major limitation," Hashimoto said.

As the use of artificial intelligence systems grows by the day, scientists and experts are debating whether companies should publish the source code, training data, and methods used to train their AI models, and how transparent the technology should be overall.

"I think the safest way to move forward with this technology is to make sure it doesn't end up in too few hands," he said.

"We need places like Stanford, which are openly doing innovative research on these big language models. So I thought it was very encouraging that Stanford is still one of the big players in this large language model space," Kiela noted.
