Each time you use your voice to send a message on a Samsung Galaxy phone or to activate a Google Home device, you’re using tools Chanwoo Kim helped develop. The former executive vice president of Samsung Research’s Global AI Centers specializes in end-to-end speech recognition, end-to-end text-to-speech tools, and language modeling.
“The most rewarding part of my career is helping to develop technologies that my family and friends use and enjoy,” Kim says.
He recently left Samsung to continue his work in the field at Korea University, in Seoul, where he leads the university’s speech and language processing laboratory. A professor of artificial intelligence, he says he is passionate about educating the next generation of tech leaders.
“I’m excited to have my own lab at the university and to guide students in research,” he says.
Bringing Google Home to market
When Amazon announced in 2014 that it was developing smart speakers with AI assistive technology, a device now known as the Echo, Google decided to develop its own version. Kim saw a role for his expertise in the endeavor: he holds a Ph.D. in language and information technology from Carnegie Mellon, where he specialized in robust speech recognition. Friends of his who were working on such projects at Google in Mountain View, Calif., encouraged him to apply for a software engineering job there. He left Microsoft in Seattle, where he had worked for three years as a software development engineer and speech scientist.
After joining Google’s acoustic modeling team in 2013, he worked to ensure that the company’s AI assistive technology, used in Google Home products, could perform in the presence of background noise.
Chanwoo Kim
Employer
Korea University in Seoul
Title
Director of the speech and language processing lab and professor of artificial intelligence
Member grade
Member
Alma maters
Seoul National University; Carnegie Mellon
He led an effort to improve Google Home’s speech-recognition algorithms, including the use of acoustic modeling, which allows a device to interpret the relationship between speech and phonemes (the phonetic units of a language).
“When people used the speech-recognition function on their phones, they were standing only about 1 meter away from the device at most,” he says. “For the speaker, my team and I had to make sure it understood the user when they were talking from farther away.”
Kim proposed using large-scale data augmentation that simulates far-field speech to strengthen the device’s speech-recognition capabilities. Data augmentation analyzes the training data already collected and artificially generates additional training data to improve recognition accuracy.
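The core idea can be sketched in a few lines: take a clean utterance, convolve it with a room impulse response to simulate reverberation, then mix in noise at a chosen signal-to-noise ratio. This is a minimal illustration of the technique, not Google’s actual pipeline; the signals and impulse response below are synthetic placeholders.

```python
import numpy as np

def simulate_far_field(clean, rir, noise, snr_db):
    """Simulate a far-field recording: convolve clean speech with a
    room impulse response (reverberation), then add background noise
    scaled to a target signal-to-noise ratio."""
    reverberant = np.convolve(clean, rir)[: len(clean)]
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise[: len(reverberant)] ** 2) + 1e-12
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scale = np.sqrt(target_noise_power / noise_power)
    return reverberant + scale * noise[: len(reverberant)]

# Synthetic placeholders: white noise stands in for speech, and an
# exponentially decaying tail stands in for a measured impulse response.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)        # 1 second at 16 kHz
rir = np.exp(-np.arange(2000) / 400.0)    # toy room impulse response
noise = rng.standard_normal(16000)
augmented = simulate_far_field(clean, rir, noise, snr_db=10)
```

Run over a large corpus with many impulse responses and noise types, this kind of simulation multiplies the amount of far-field-like training data without any new recordings.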
His contributions enabled the company to launch its first Google Home product, a smart speaker, in 2016.
“That was a really rewarding experience,” he says.
That same year, Kim moved up to senior software engineer and continued improving the algorithms Google Home uses for large-scale data augmentation. He also developed technologies to reduce the time and computing power used by the neural network and to improve multi-microphone beamforming for far-field speech recognition.
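Beamforming combines the signals from several microphones to emphasize sound arriving from a target direction. The simplest variant, delay-and-sum, can be sketched as follows; production systems typically use adaptive beamformers, and the two-microphone setup here is a synthetic toy example.

```python
import numpy as np

def delay_and_sum(mic_signals, delays):
    """Delay-and-sum beamforming: time-align each microphone signal
    toward the target direction, then average. np.roll wraps around,
    which is fine for this toy example but not for real audio."""
    aligned = [np.roll(sig, -d) for sig, d in zip(mic_signals, delays)]
    return np.mean(aligned, axis=0)

# Two microphones; the second hears the source 4 samples later.
rng = np.random.default_rng(0)
src = rng.standard_normal(16000)
mics = [src, np.roll(src, 4)]
enhanced = delay_and_sum(mics, delays=[0, 4])
```

Because the aligned copies add coherently while noise from other directions does not, the averaged output favors the target speaker.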
Kim, who grew up in South Korea, missed his family, and in 2018 he moved back, joining Samsung as vice president of its AI Center in Seoul.
When he joined Samsung, he aimed to develop end-to-end speech-recognition and text-to-speech engines for the company’s products, focusing on on-device processing. To help him reach his goals, he founded a speech processing lab and led a team of researchers developing neural networks to replace the conventional speech-recognition systems then used by Samsung’s AI devices.
“The most rewarding part of my work is helping to develop technologies that my family and friends use and enjoy.”
Those systems included an acoustic model, a language model, a pronunciation model, a weighted finite-state transducer, and an inverse text normalizer. The language model looks at the relationships between the words the user speaks, while the pronunciation model acts as a dictionary. The inverse text normalizer, most often used by speech-to-text tools on phones, converts spoken expressions into their written forms.
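To illustrate what inverse text normalization does, here is a toy, rule-based sketch that turns a spoken-form phrase such as “twenty three dollars” into its written form “$23”. Production systems implement this with weighted finite-state transducers; the vocabulary and rules below are hypothetical.

```python
# A toy, rule-based inverse text normalizer. Real systems cover dates,
# times, currencies, and more, compiled into finite-state transducers.
NUMBER_WORDS = {
    "zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
    "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9,
    "ten": 10, "twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
}

def inverse_normalize(spoken: str) -> str:
    """Convert spoken-form tokens into written form,
    e.g. 'twenty three dollars' -> '$23'."""
    out, number = [], None
    for tok in spoken.lower().split():
        if tok in NUMBER_WORDS:
            number = (number or 0) + NUMBER_WORDS[tok]
        elif tok == "dollars" and number is not None:
            out.append(f"${number}")   # attach the currency symbol
            number = None
        else:
            if number is not None:     # flush a pending number
                out.append(str(number))
                number = None
            out.append(tok)
    if number is not None:
        out.append(str(number))
    return " ".join(out)

print(inverse_normalize("send twenty three dollars tomorrow"))
# -> "send $23 tomorrow"
```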
Because those components were cumbersome, it was not possible to develop an accurate, on-device speech-recognition system using conventional technology, Kim says. An end-to-end neural network could complete all of the tasks and “greatly simplify speech-recognition systems,” he says.
Chanwoo Kim [top row, seventh from the right] with some of the members of his speech processing lab at Samsung Research. Chanwoo Kim
He and his team used a streaming attention-based approach to develop their model. An input sequence, the spoken words, is encoded and then decoded into a target sequence with the help of a context vector: a numeric representation of words generated by a pretrained deep-learning model for machine translation.
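In attention-based models, the context vector at each decoding step is a weighted average of the encoder’s outputs, with weights reflecting how well each encoded frame matches the current decoder state. A minimal dot-product-attention sketch, using random placeholder vectors rather than the team’s actual model:

```python
import numpy as np

def attention_context(decoder_state, encoder_outputs):
    """Dot-product attention: score each encoder frame against the
    current decoder state, softmax over time, and take the weighted
    average of the encoder outputs as the context vector."""
    scores = encoder_outputs @ decoder_state        # shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                        # softmax over frames
    return weights @ encoder_outputs                # shape (D,)

rng = np.random.default_rng(0)
encoder_outputs = rng.standard_normal((50, 8))  # 50 frames, 8-dim states
decoder_state = rng.standard_normal(8)
context = attention_context(decoder_state, encoder_outputs)
```

In a streaming system the attention window is restricted so the decoder can emit words before the whole utterance has been heard.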
The model was commercialized in 2019 and is now part of Samsung’s Galaxy phones. That same year, a cloud version of the system was commercialized; it is used by the phones’ digital assistant, Bixby.
Kim’s team continued to improve speech-recognition and text-to-speech systems in other products, and every year they commercialized a new engine.
Those engines include power-normalized cepstral coefficients (PNCC), which improve the accuracy of speech recognition in environments with disturbances such as additive noise, changes in the signal, multiple speakers, and reverberation. The technique suppresses the effects of background noise by using statistics to estimate its characteristics. It is now used in a variety of Samsung products, including air conditioners, cellphones, and robotic vacuum cleaners.
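Two ingredients of PNCC are a statistics-based estimate of the background level, which is subtracted from each frequency channel, and a power-law nonlinearity (an exponent of roughly 1/15) in place of the usual logarithm. The sketch below illustrates only those two ideas, on a random placeholder spectrogram; it is not the full published algorithm.

```python
import numpy as np

def pncc_like_features(power_spectrogram, power=1 / 15, floor_percentile=20):
    """A simplified sketch of two PNCC ingredients: subtract a
    statistics-based noise-floor estimate per channel, then apply a
    power-law nonlinearity instead of the usual logarithm."""
    # Estimate each channel's noise floor from its low-power frames.
    floor = np.percentile(power_spectrogram, floor_percentile, axis=0)
    denoised = np.maximum(power_spectrogram - floor, 1e-10)
    # Power-law compression, roughly matching human loudness perception.
    return denoised ** power

rng = np.random.default_rng(0)
spec = rng.random((100, 40)) + 0.1   # placeholder: 100 frames, 40 channels
feats = pncc_like_features(spec)
```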
Samsung promoted Kim in 2021 to executive vice president of its six Global AI Centers, located in Cambridge, England; Montreal; Seoul; Silicon Valley; New York; and Toronto.
In that role he oversaw research on incorporating artificial intelligence and machine learning into Samsung products. He is the youngest person to have been an executive vice president at the company.
He also led the development of Samsung’s generative large language models, which evolved into Samsung Gauss, a suite of generative AI models that can produce code, images, and text.
In March he left the company to join Korea University as a professor of artificial intelligence, which he says is a dream come true.
“When I first started my doctoral work, my dream was to pursue a career in academia,” Kim says. “But after earning my Ph.D., I found myself drawn to the impact my research could have on real products, so I decided to go into industry.”
He says he was excited to join Korea University because “it has a strong presence in artificial intelligence” and is one of the top universities in the country.
Kim says his research will focus on generative speech models, multimodal processing, and integrating generative speech with language models.
Chasing his dream at Carnegie Mellon
Kim’s father was an electrical engineer, and from a young age Kim wanted to follow in his footsteps, he says. He attended a science-focused high school in Seoul to get a head start on engineering topics and programming. He earned his bachelor’s and master’s degrees in electrical engineering from Seoul National University in 1998 and 2001, respectively.
Kim had long hoped to earn a doctorate from a U.S. university because he felt it would give him more opportunities.
And that’s exactly what he did. He left for Pittsburgh in 2005 to pursue a Ph.D. in language and information technology at Carnegie Mellon.
“I decided to major in speech recognition because I was interested in raising the standard of quality,” he says. “I also liked that the field is multifaceted; I could work on hardware or software, and easily shift focus from real-time signal processing to image signal processing or another sector of the field.”
Kim did his doctoral work under the guidance of IEEE Life Fellow Richard Stern, who is probably best known for his theoretical work on how the human brain compares the sound arriving at each ear to judge where the sound is coming from.
“At the time, I wanted to improve the accuracy of automatic speech-recognition systems in noisy environments or when there were multiple speakers,” he says. He developed several signal-processing algorithms that used mathematical representations built from knowledge of how humans process auditory information.
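One classic building block of such binaural processing is estimating the interaural time difference: the lag between the two ear signals, found by cross-correlating them. The sketch below is a textbook illustration with synthetic signals, not Stern’s actual model.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference by finding the lag that
    maximizes the cross-correlation of the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / fs                            # delay in seconds

fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)
delay = 8                                  # right ear hears it 8 samples late
left = src
right = np.concatenate([np.zeros(delay), src[:-delay]])
itd = estimate_itd(left, right, fs)        # negative: right signal lags
```

The sign and size of the estimated lag indicate which side the source is on and roughly how far off-center it is.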
Kim earned his Ph.D. in 2010 and joined Microsoft in Seattle as a software development engineer and speech scientist. He worked at Microsoft for three years before joining Google.
Access to trustworthy information
Kim joined IEEE as a doctoral student so that he could present his research papers at IEEE conferences. In 2016 a paper he wrote with Stern was published in IEEE/ACM Transactions on Audio, Speech, and Language Processing. It won them the IEEE Signal Processing Society’s 2019 Best Paper Award. Kim says he felt honored to receive the “prestigious award.”
Kim maintains his IEEE membership in part because, he says, IEEE is a trustworthy source of information, and it gives him access to the latest technical knowledge.
Another benefit of membership is IEEE’s global network, Kim says.
“By being a member, I have the opportunity to meet other engineers in my field,” he says.
He is a regular attendee of the annual IEEE International Conference on Acoustics, Speech, and Signal Processing. This year he is the technical program committee’s vice chair for the meeting, which is scheduled for next month in Seoul.