April 28, 2023

The age of giant AI models is already over, says OpenAI's CEO

Sam Altman says that future strides in artificial intelligence will require new ideas

Artificial intelligence (AI) has become one of the hottest topics in tech, and one of the companies at the forefront of AI research is OpenAI, led by CEO Sam Altman. OpenAI's chatbot, ChatGPT, has gained widespread attention for its impressive capabilities, but Altman has recently cautioned that the company's research strategy of creating ever-larger language models has reached its limits.

OpenAI has made significant strides in language processing with GPT-4, its latest language model. The model was likely trained on trillions of words of text, using many thousands of computer chips, at a cost of over $100 million. OpenAI's approach has been to take existing machine learning algorithms and scale them up to previously unimagined sizes, resulting in impressive advances in the field of AI.

Despite this success, Altman believes that the era of creating giant language models has come to an end. He recently spoke at an event held at MIT, where he suggested that OpenAI will need to find other ways to improve the technology. Altman's statement marks a significant turning point in the race to develop and deploy new AI algorithms.

Since the launch of ChatGPT in November, other tech giants such as Microsoft and Google have been quick to develop their own chatbots using similar technology. Meanwhile, numerous startups have been investing heavily in building even larger algorithms in an effort to catch up with OpenAI's technology.

Altman's declaration suggests that GPT-4 could be the last major advance to emerge from OpenAI's strategy of making the models bigger and feeding them more data. He did not say what kind of research strategies or techniques might take its place, but OpenAI's own estimates suggest diminishing returns on scaling up model size. There are also physical limits to how many data centers the company can build and how quickly it can build them.

According to Nick Frosst, a co-founder of Cohere who previously worked on AI at Google, progress on transformers, the type of machine learning model at the heart of GPT-4 and its rivals, lies beyond scaling. Frosst suggests that new model designs or architectures, along with further tuning based on human feedback, are promising directions that many researchers are already exploring.

OpenAI's family of language algorithms, including GPT-2, GPT-3, and now GPT-4, are artificial neural networks: software loosely inspired by the way neurons work together and trained to predict the words that should follow a given string of text. OpenAI's researchers found that scaling up made the models more coherent, with GPT-2 having 1.5 billion parameters in its largest form and GPT-3 a whopping 175 billion.
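For readers curious what "predicting the next word" looks like in practice, here is a minimal sketch using the openly released 124-million-parameter version of GPT-2 via the Hugging Face transformers library. It illustrates the core mechanism these models share, not OpenAI's actual training or inference setup, and the prompt is purely illustrative.

```python
# Minimal sketch: next-word prediction with the openly released GPT-2
# (124M-parameter variant) via Hugging Face's `transformers` library.
# This shows the basic mechanism, not OpenAI's production systems.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Artificial intelligence will"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model returns a score (logit) for every vocabulary token
    # at every position in the input.
    logits = model(**inputs).logits

# The predicted next word is the highest-scoring token after the
# final position of the prompt.
next_token_id = int(logits[0, -1].argmax())
print("Next word:", tokenizer.decode(next_token_id))

# Parameter count is the knob OpenAI kept turning: this variant has
# ~124 million; GPT-2's largest form had 1.5 billion, GPT-3 175 billion.
print(f"Parameters: {model.num_parameters():,}")
```

Scaling that last number up, along with the training data, is precisely the strategy Altman now says has run its course.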

When OpenAI announced GPT-4, many expected it to be a model of vertigo-inducing size and complexity. However, Altman's recent statement suggests that OpenAI is moving away from this strategy, and researchers are now exploring other avenues to improve AI technology. While the future of AI remains uncertain, it's clear that companies like OpenAI will continue to push the boundaries of what's possible.

Neil Hodgson-Coyle
Editorial chief at TechNews180