
OpenAI launches GPT-4o Long Output with 16X token capacity

By admin | July 30, 2024 (Updated: July 31, 2024)



OpenAI is reportedly facing a cash crunch, but that isn't stopping the preeminent generative AI company from continuing to release a steady stream of new models and updates.

Yesterday, the company quietly posted a webpage announcing a new large language model (LLM): GPT-4o Long Output, a variation on its signature GPT-4o model from May, but with a massively extended output size: up to 64,000 output tokens instead of GPT-4o's initial 4,000, a 16-fold increase.

Tokens, as you may recall, are the numerical representations of concepts, grammatical constructions, and combinations of letters and numbers that an LLM organizes behind the scenes based on their semantic meaning.

The word "Hello" is one token, for example, but so too is "hi." You can see an interactive demo of tokens in action via OpenAI's Tokenizer here. Machine learning researcher Simon Willison also has a good interactive token encoder/decoder.
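To make the token concept concrete, here is a minimal sketch using tiktoken, OpenAI's open-source tokenizer library (the library is not mentioned in the article itself); it assumes the "o200k_base" encoding used by the GPT-4o model family.

```python
# Minimal sketch: counting tokens locally with tiktoken (pip install tiktoken).
# Assumes the "o200k_base" encoding, which the GPT-4o model family uses.
import tiktoken

enc = tiktoken.get_encoding("o200k_base")

for text in ["Hello", "hi", "Hello, how are you today?"]:
    token_ids = enc.encode(text)
    print(f"{text!r} -> {token_ids} ({len(token_ids)} token(s))")
```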

By offering a 16X increase in output tokens with the new GPT-4o Long Output variant, OpenAI is now giving users, and more specifically third-party developers building atop its application programming interface (API), the chance to have the chatbot return far longer responses, up to roughly a 200-page novel in length.

Why is OpenAI launching a longer-output model?

OpenAI's decision to introduce this extended output capability stems from customer feedback indicating a need for longer output contexts.

An OpenAI spokesperson explained to VentureBeat: "We heard feedback from our customers that they'd like a longer output context. We're always testing new ways we can best serve our customers' needs."

The alpha testing phase is expected to last for a few weeks, allowing OpenAI to gather data on how effectively the extended output meets user needs.

This enhanced capability is particularly advantageous for applications requiring detailed and extensive output, such as code editing and writing improvement.

By offering longer outputs, the GPT-4o model can provide more comprehensive and nuanced responses, which can significantly benefit these use cases.

Difference between context and output

Since launch, GPT-4o has offered a maximum 128,000-token context window: the number of tokens the model can handle in any one interaction, including both input and output tokens.

For GPT-4o Long Output, this maximum context window remains 128,000.

So how is OpenAI able to increase the number of output tokens 16-fold, from 4,000 to 64,000, while keeping the overall context window at 128,000?

It all comes down to some simple math: even though the original GPT-4o from May had a total context window of 128,000 tokens, its single output message was limited to 4,000.

Similarly, for the new GPT-4o mini, the total context window is 128,000 tokens but the maximum output has been raised to 16,000 tokens.

That means for GPT-4o, a user can provide up to 124,000 tokens as input and receive at most 4,000 tokens of output from the model in a single interaction. They can also provide more tokens as input but receive fewer as output, as long as the total still adds up to no more than 128,000 tokens.

For GPT-4o mini, the user can provide up to 112,000 tokens as input in order to get the maximum output of 16,000 tokens back.

For GPT-4o Long Output, the total context window is still capped at 128,000. Yet now the user can provide up to 64,000 tokens' worth of input in exchange for a maximum of 64,000 tokens back out, that is, if the user or the developer of an application built atop it wants to prioritize longer LLM responses while limiting the inputs.
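The arithmetic above can be summarized in a few lines. This is just a sketch of the shared input/output budget using the figures quoted in the article; the model names are labels, not official API identifiers.

```python
# Sketch of the shared 128,000-token budget: input + output <= context window.
CONTEXT_WINDOW = 128_000

MAX_OUTPUT = {                # maximum output tokens per model, per the article
    "GPT-4o": 4_000,
    "GPT-4o mini": 16_000,
    "GPT-4o Long Output": 64_000,
}

for model, max_out in MAX_OUTPUT.items():
    max_in = CONTEXT_WINDOW - max_out   # room left for input at the output cap
    print(f"{model}: up to {max_in:,} input tokens for {max_out:,} output tokens")
```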

In all cases, the user or developer must make a choice or trade-off: do they want to sacrifice some input tokens in favor of longer outputs while still staying within 128,000 tokens total? For users who want longer answers, GPT-4o Long Output now offers this as an option.
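In practice, that trade-off is expressed through the output cap on an API request. The sketch below uses the OpenAI Python SDK; the model identifier "gpt-4o-64k-output-alpha" is an assumption (the article does not name one), and access is currently limited to alpha testers.

```python
# Hedged sketch: requesting a long response while respecting the shared
# 128,000-token context window. The model name below is assumed, not confirmed
# by the article, and the alpha is restricted to trusted partners.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-64k-output-alpha",   # assumed alpha model identifier
    messages=[
        {"role": "user", "content": "Refactor this module and explain every change in detail: ..."},
    ],
    max_tokens=64_000,  # output cap; prompt tokens + output cap must fit in 128,000
)

print(response.choices[0].message.content)
```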

Priced aggressively and affordably

The new GPT-4o Long Output model is priced as follows:

    • $6 USD per 1 million input tokens
    • $18 per 1 million output tokens

Compare that to regular GPT-4o pricing, which is $5 per million input tokens and $15 per million output tokens, or even the new GPT-4o mini at $0.15 per million input and $0.60 per million output, and you can see it is priced fairly aggressively, continuing OpenAI's recent refrain that it wants to make powerful AI affordable and accessible to broad swaths of the developer user base.
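As a quick back-of-the-envelope comparison using the per-million-token prices quoted above (a sketch only; actual billing may differ):

```python
# Rough cost comparison from the quoted prices: (input $, output $) per 1M tokens.
PRICES = {
    "GPT-4o Long Output": (6.00, 18.00),
    "GPT-4o":             (5.00, 15.00),
    "GPT-4o mini":        (0.15, 0.60),
}

def cost(model: str, input_tokens: int, output_tokens: int) -> float:
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: 64,000 tokens in and 64,000 tokens out (only Long Output can actually
# return 64,000 output tokens; the others are shown purely for price comparison).
for model in PRICES:
    print(f"{model}: ${cost(model, 64_000, 64_000):.2f}")
```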

At present, access to this experimental model is limited to a small group of trusted partners. The spokesperson added: "We're conducting alpha testing for a few weeks with a small number of trusted partners to see if longer outputs help their use cases."

Depending on the outcomes of this testing phase, OpenAI may consider expanding access to a broader customer base.

    Future prospects

The ongoing alpha test will provide valuable insights into the practical applications and potential benefits of the extended output model.

If feedback from the initial group of partners is positive, OpenAI may consider making this capability more widely available, enabling a broader range of users to benefit from the improved output capabilities.

Clearly, with the GPT-4o Long Output model, OpenAI hopes to address an even wider range of customer requests and power applications requiring detailed responses.
