
Large-Scale Language Models Broaden AI Applications in Industry and Enterprise



This article is part of the VB Lab Insights series on AI sponsored by Microsoft and Nvidia.


In July 2022, the AI world and global mass media buzzed over the release of DALL-E 2, a generative AI model with 3.5 billion parameters developed by OpenAI. Then came ChatGPT, an interactive conversational large language model developed and trained by OpenAI.

Before this, flashy text-to-image models had attracted most of the media and industry attention. But the expansion of OpenAI’s public trial of a new conversational chatbot in December 2022 brought another kind of large language model (LLM) into the spotlight.

An LLM is a learning algorithm that can recognize, summarize, translate, predict and generate language by training on very large text-based datasets with little supervision. LLMs handle a wide range of tasks, such as answering customer questions or recognizing and generating text, sound and images with high accuracy. Beyond text-to-image, a growing number of modalities include text-to-text, text-to-3D, text-to-video, digital biology and more.

Quietly expanding the impact of artificial intelligence

Over the past two years, LLM neural networks have been quietly expanding the impact of AI in fields and functions such as healthcare, gaming, finance and robotics, as well as in enterprise development of software and machine learning.

“Large language models have proven to be flexible and capable of answering deep domain questions, translating languages, understanding and summarizing documents, writing stories and computing programs,” said Bryan Catanzaro, VP of Applied Deep Learning Research, Nvidia.

The arrival of ChatGPT marks the definitive emergence of LLMs and the transformer neural networks that underpin generative AI, which are increasingly hailed as revolutionary disruptors of AI, including enterprise applications.

Large language models (LLMs): increasingly sophisticated – and powerful – for all. Credit: Nvidia

‘AI-first’ infrastructure supports enterprise-grade LLMs

The idea originated in an influential 2017 research paper on transformers and took off a year later with the open-source release of BERT (Bidirectional Encoder Representations from Transformers), followed by OpenAI’s GPT models. As the complexity and size of these pretrained models has grown (recently by a factor of 10 per year), so has their power and popularity. Today, the world’s largest models, PaLM 540B and Megatron 530B, are both LLMs.

In fact, as one of the newest and most powerful classes of models, LLMs are increasingly replacing convolutional and recurrent networks. A key advance is the combination of specialized AI hardware, scale-friendly architectures, frameworks, customizable models and automation with a robust “AI-first” infrastructure. This makes it possible to deploy and scale production-ready LLMs on public and private clouds and via APIs across a wide range of mainstream commercial and enterprise-level applications.

An LLM can help businesses marshal intelligence by applying knowledge learned across multiple fields, Catanzaro said. Doing so helps accelerate innovation and unlock the value of AI at a scale previously available only on supercomputers.

Striking new examples abound. Tabnine, for example, created an AI assistant for software developers that runs multiple LLMs. The Tel Aviv-based company says it helps more than one million developers worldwide program faster in 20 programming languages and 15 editors, thanks to whole-line and full-function code completion that automates up to 30% of their code.

Tokyo-based Rinna uses LLMs to create chatbots used by millions of people in Japan, as well as tools that let developers build custom bots and AI-powered characters.

One of the most famous and mature examples is Microsoft Translator. The Azure-based service, with billions of parameters, came into focus a decade ago for helping disaster relief workers understand Haitian Creole while responding to a magnitude 7.0 earthquake. The free personal translator app continues to evolve and now supports text, speech, conversations, camera photos and screenshots in over 70 languages.

Overcome tough challenges with new targeted technologies

Transformer models apply an evolving set of mathematical techniques called attention or self-attention to detect the subtle ways in which even distant data elements in a series influence each other.
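To make that idea concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation described above. It is an illustrative toy under simplifying assumptions, not how production LLMs are implemented: real models use learned weights, many attention heads and deep stacks of transformer layers.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) token embeddings; w_*: projection matrices."""
    q = x @ w_q                      # queries
    k = x @ w_k                      # keys
    v = x @ w_v                      # values
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v               # each output mixes information from all tokens

# Toy example: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```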

How large models divide the work in practice is straightforward. Case in point: text generation and decoding are handled by GPT-3, an autoregressive language model that uses deep learning to produce human-like text. Codex, a descendant of GPT-3, handles tasks such as writing code, adding comments and rewriting code for efficiency. The new NLLB-200 model can handle translation across more than 200 languages.
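As a hedged illustration of autoregressive generation, the snippet below uses the openly downloadable GPT-2 (a smaller predecessor of GPT-3) through the Hugging Face transformers library, since GPT-3 itself is served via OpenAI’s hosted API. The decoding loop is the same idea: predict the next token, append it, repeat.

```python
# Sketch of autoregressive text generation with an openly available model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Large language models are transforming enterprise software because",
    max_new_tokens=40,   # generate up to 40 new tokens
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.8,     # soften the next-token distribution
)
print(out[0]["generated_text"])
```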

The rapid progress of the past five years has been driven in large part by the desire to create larger, more powerful networks with less effort.

Large language models (LLMs) have grown exponentially in size and power over the past four years. Image credit: Microsoft/Nvidia

Despite rapid and impressive advances in technology, scale and performance, LLMs and complex natural language models have been difficult to develop, train, deploy and maintain, leaving them impractical or inaccessible for most organizations.

When creating large models from scratch, or customizing and fine-tuning them for specific use cases, challenges arise quickly. Most critically, processing large volumes of free-form text documents requires significant time and computing power, typically on GPUs.

Dave Salvator, Director of Accelerated Computing at Nvidia, explains: “What is needed is massive computing power to train and deploy LLMs. Performance directly impacts the overall cost of training an LLM and the cost of deploying an LLM-enabled product or service into production. Specialized software for distributed training and inference of these models, using multiple GPUs across multiple nodes in a cluster, is also essential. And since models and user needs vary in size, complexity and intensity, the flexibility to scale up or down is another key factor.”
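As a rough sketch of the kind of multi-GPU distributed training Salvator describes, the example below uses PyTorch’s DistributedDataParallel with one process per GPU. It is a toy stand-in with a placeholder model, not the Nvidia NeMo Megatron workflow, which adds tensor and pipeline parallelism for models with hundreds of billions of parameters.

```python
# Minimal multi-GPU data-parallel training sketch.
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")                 # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a transformer
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                          # toy training loop
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        loss.backward()                             # gradients all-reduced across GPUs
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```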

The latter is especially important. Commercial adoption of LLMs depends on a highly scalable infrastructure with the computational power to deliver results in real time, plus an efficient inference-serving solution. The ongoing partnership between Microsoft and Nvidia is working to help enterprises meet these daunting demands. The two companies are collaborating on products and integrations for training and deploying LLMs with billions and even trillions of parameters. One key is more tightly coupling the containerized Nvidia NeMo Megatron framework and other targeted products with Microsoft Azure AI infrastructure, which can deliver 95% scaling efficiency on 1,400 GPUs.
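For context, scaling efficiency is typically the observed speedup divided by the ideal linear speedup for the number of GPUs used. The throughput figures in this short sketch are hypothetical, chosen only to show how a 95% figure would be computed.

```python
# Illustrative calculation of scaling efficiency (hypothetical numbers).
def scaling_efficiency(throughput_1_gpu, throughput_n_gpus, n_gpus):
    speedup = throughput_n_gpus / throughput_1_gpu   # observed speedup
    return speedup / n_gpus                          # fraction of ideal linear scaling

# Hypothetical: 1 GPU at 100 samples/s; 1,400 GPUs at 133,000 samples/s.
print(f"{scaling_efficiency(100, 133_000, 1_400):.0%}")  # -> 95%
```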

Accelerate AI development and innovation in life sciences

Accelerating the development of software and AI applications is emerging as a high-value use case, as Tabnine discovered. Today’s generative AI techniques augment the efforts of software engineers to optimize productivity and accuracy.

Natural Language Processing Cloud is an advanced software service that helps organizations fine-tune and deploy AI models; its LLMs make it easy to understand text and to generate and extract entities, without DevOps overhead.

While LLMs help AI understand human language, they’re not limited to it. New developments make it easier to train large-scale neural networks on biomolecular and chemical data. The ability to understand these “languages” lets researchers develop and deploy AI that discovers new patterns and insights in biological sequences and human health. Thanks to these capabilities, top biotech and pharmaceutical companies have adopted Nvidia’s upcoming BioNeMo service to accelerate drug discovery research.

“With the widespread adoption of large language models in the protein domain, the ability to efficiently train LLMs and quickly adapt model architectures becomes important,” explained Istvan Redl, head of machine learning at Peptone, a biotech startup in the Nvidia Inception program. “We believe these two engineering aspects—scalability and rapid experimentation—are exactly what the BioNeMo framework can deliver.”

Research from Rostlab at the Technical University of Munich, and work by teams from Harvard, Yale and NYU, among others, is helping scientists understand proteins and DNA/RNA, and generate chemical structures de novo.

What’s next

The creation of specialized frameworks, servers, software and tools has made LLMs more feasible and accessible, driving new use cases. These advances are already fueling a wave of innovation in AI and machine learning. The much-anticipated release of GPT-4 is likely to reinforce the growing belief that “transformer AI” represents a major advance that will fundamentally change how AI systems are trained and built.

For businesses, LLMs promise to boost AI adoption, which is often hampered by a shortage of workers able to build models. With just a few hundred prompts, a base LLM can be put to work by organizations without deep AI expertise, a huge advantage.
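The sketch below shows what that prompt-based adaptation can look like in practice: a handful of worked examples are placed directly in the prompt and the model continues the pattern, with no retraining. The task, comments and labels here are hypothetical placeholders.

```python
# Few-shot prompting sketch: examples go in the prompt, not into training.
examples = [
    ("The delivery arrived two weeks late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]
query = "The new dashboard is confusing to navigate."

prompt = "Classify the sentiment of each customer comment.\n\n"
for text, label in examples:
    prompt += f"Comment: {text}\nSentiment: {label}\n\n"
prompt += f"Comment: {query}\nSentiment:"

# This string would then be sent to an LLM API or a locally hosted model.
print(prompt)
```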

Many analysts predict that LLM technology and the industry around it will continue to mature and evolve rapidly over the next decade. A number of new large models have emerged in the last year, including Megatron-Turing NLG, a 530-billion-parameter LLM released by Microsoft and Nvidia. The model is used internally in various applications to reduce risk and identify fraud, reduce customer complaints, increase automation and analyze customer sentiment.

Ongoing research and commercialization are expected to lead to a variety of new models and applications in computational photography, education and mobile user-interaction experiences. A running count of industry startups includes more than 150 in the field of generative AI alone.

“Customers are already auto-generating text on the gigantic GPT-3 model with unrivaled scale, accuracy and latency. The combination of Nvidia NeMo Megatron and Azure’s infrastructure provides the scalability and adaptability needed to solve ever-changing problems, and enormous potential. The future of LLMs has never been brighter, as Microsoft is committed to always bringing the latest products to the cloud, such as the latest GPUs or models with trillions of parameters,” said Hugo Affaticati, Technical Program Manager for AI and HPC Benchmarking at Microsoft.

Robot control is a particularly promising frontier. Researchers are now using transformer-based models to teach robots used in manufacturing, construction, autonomous driving and personal assistants. Some believe that powerful LLMs will continue to replace traditional convolutional AI models. A good example is TimeSformer, designed by researchers at Meta AI and Dartmouth, which uses transformers to analyze video.

In fact, the “foundation models” of transformer AI represent a potentially massive paradigm shift. Unlike most of today’s LLMs, which are built and maintained for specific tasks, a single foundation model can be adapted to solve a wide variety of tasks. Stanford University, for example, recently created a new center to explore their impact.

“The size and scope of foundation models over the past few years have expanded our imagination of what’s possible,” Stanford researchers wrote recently, promising “widely beneficial applications to society.”

For businesses, the practical value certainly goes well beyond generating “art” images of Darth Vader ice fishing.


VB Lab Insights content is created in partnership with a company that paid for the post or has a business relationship with VentureBeat, and they are clearly labeled throughout. For more information, please contact sales@venturebeat.com.

