
Each time you use your voice to send a message on a Samsung Galaxy phone or activate a Google Home device, you’re using tools Chanwoo Kim helped develop. The former executive vice president of Samsung Research’s Global AI Centers specializes in end-to-end speech recognition, end-to-end text-to-speech tools, and language modeling.
“The most rewarding part of my career is helping to develop technologies that my friends and family members use and enjoy,” Kim says.
He recently left Samsung to continue his work in the field at Korea University, in Seoul, leading the school’s speech and language processing laboratory. A professor of artificial intelligence, he says he is passionate about teaching the next generation of tech leaders.
“I’m excited to have my own lab at the university and to guide students in research,” he says.
Bringing Google Home to market
When Amazon announced in 2014 that it was developing smart speakers with AI assistive technology, a device now known as the Echo, Google decided to develop its own version. Kim saw a role for his expertise in the endeavor: he has a Ph.D. in language and information technology from Carnegie Mellon, and he specialized in robust speech recognition. Friends of his who were working on such projects at Google in Mountain View, Calif., encouraged him to apply for a software engineering job there. He left Microsoft in Seattle, where he had worked for three years as a software development engineer and speech scientist.
After joining Google’s acoustic modeling team in 2013, he worked to ensure that the company’s AI assistive technology, used in Google Home products, could perform in the presence of background noise.
Chanwoo Kim
Employer
Korea University in Seoul
Title
Director of the speech and language processing lab and professor of artificial intelligence
Member grade
Member
Alma maters
Seoul National University; Carnegie Mellon
He led an effort to improve Google Home’s speech-recognition algorithms, including the use of acoustic modeling, which allows a device to interpret the relationship between speech and phonemes (phonetic units in languages).
“When people used the speech-recognition function on their mobile phones, they were standing only about 1 meter away from the device at most,” he says. “For the speaker, my team and I had to make sure it understood the user when they were talking from farther away.”
Kim proposed using large-scale data augmentation that simulates far-field speech data to enhance the system’s speech-recognition capabilities. Data augmentation analyzes the training data already acquired and artificially generates more training data to improve recognition accuracy.
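Far-field augmentation of this kind is often built by convolving clean, close-talk speech with a room impulse response and mixing in noise at a target signal-to-noise ratio. A minimal sketch of that idea (the function name and the simple SNR mixing are illustrative assumptions, not Kim’s actual pipeline):

```python
import numpy as np

def simulate_far_field(clean, rir, noise, snr_db):
    """Simulate a far-field recording from close-talk speech.

    clean:  1-D array of clean (near-field) speech samples
    rir:    1-D array holding a room impulse response
    noise:  1-D array of background noise, at least as long as `clean`
    snr_db: target signal-to-noise ratio, in decibels
    """
    # Reverberate: convolve the clean speech with the room impulse
    # response, keeping the original length.
    reverberant = np.convolve(clean, rir)[: len(clean)]

    # Scale the noise so the mixture hits the requested SNR.
    sig_power = np.mean(reverberant ** 2)
    noise_seg = noise[: len(reverberant)]
    gain = np.sqrt(sig_power / (np.mean(noise_seg ** 2) * 10 ** (snr_db / 10)))
    return reverberant + gain * noise_seg
```

Running this over a large corpus of clean utterances, with many impulse responses and noise types, yields the kind of simulated far-field training data the paragraph above describes.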
His contributions enabled the company to launch its first Google Home product, a smart speaker, in 2016.
“That was a really rewarding experience,” he says.
That same year, Kim moved up to senior software engineer and continued improving the algorithms used by Google Home for large-scale data augmentation. He also further developed technologies to reduce the time and computing power used by the neural network and to improve multi-microphone beamforming for far-field speech recognition.
Kim, who grew up in South Korea, missed his family, and in 2018 he moved back, joining Samsung as vice president of its AI Center in Seoul.
When he joined Samsung, he aimed to develop end-to-end speech-recognition and text-to-speech engines for the company’s products, focusing on on-device processing. To help him reach his goals, he founded a speech processing lab and led a team of researchers developing neural networks to replace the conventional speech-recognition systems then used by Samsung’s AI devices.
“The most rewarding part of my work is helping to develop technologies that my friends and family members use and enjoy.”
Those systems included an acoustic model, a language model, a pronunciation model, a weighted finite-state transducer, and an inverse text normalizer. The language model looks at the relationships among the words being spoken by the user, while the pronunciation model acts as a dictionary. The inverse text normalizer, most often used by speech-to-text tools on phones, converts spoken-form words into written expressions.
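The job of an inverse text normalizer can be illustrated with a toy rule table. Production systems are typically built as weighted finite-state transducers rather than a dictionary lookup, so the hypothetical `inverse_normalize` helper below is only a sketch of the input/output behavior:

```python
def inverse_normalize(text):
    """Rewrite spoken-form tokens as written forms, token by token.

    A real inverse text normalizer is usually a weighted finite-state
    transducer handling numbers, dates, times, and more; this lookup
    table only shows what the transformation looks like.
    """
    table = {
        "zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
        "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9",
    }
    return " ".join(table.get(token, token) for token in text.split())
```

For example, `inverse_normalize("meet me at three")` produces `"meet me at 3"`, turning the recognizer’s spoken-form output into the written form a user expects to see.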
Because those components were cumbersome, it was not possible to develop an accurate, on-device speech-recognition system using conventional technology, Kim says. An end-to-end neural network could complete all of those tasks and “vastly simplify speech-recognition systems,” he says.
Chanwoo Kim [top row, seventh from the right] with some of the members of his speech processing lab at Samsung Research. Chanwoo Kim
He and his team used a streaming attention-based approach to develop their model. An input sequence (the spoken words) is encoded, then decoded into a target sequence with the help of a context vector, a numeric representation of words generated by a pretrained deep-learning model for machine translation.
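At its core, the context vector in such an attention-based encoder-decoder is a weighted average of the encoder outputs, with weights set by how well each output matches the current decoder state. A minimal dot-product-attention sketch, assumed for illustration only (the streaming model Kim’s team shipped is more elaborate):

```python
import numpy as np

def attention_context(decoder_state, encoder_states):
    """Compute a context vector by dot-product attention.

    decoder_state:  (d,) current decoder hidden state
    encoder_states: (T, d) encoded input sequence, one row per frame
    Returns the (d,) context vector: an attention-weighted average
    of the encoder states.
    """
    scores = encoder_states @ decoder_state   # (T,) similarity scores
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ encoder_states           # weighted average, shape (d,)
```

The decoder calls this at every output step, so the context vector shifts its focus across the input as decoding proceeds.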
The model was commercialized in 2019 and is now part of Samsung’s Galaxy phones. That same year, a cloud version of the system was commercialized and is used by the phone’s virtual assistant, Bixby.
Kim’s team continued to improve speech-recognition and text-to-speech systems in other products, and every year they commercialized a new engine.
Those include power-normalized cepstral coefficients, which improve the accuracy of speech recognition in environments with disturbances such as additive noise, changes in the signal, multiple speakers, and reverberation. The technique suppresses the effects of background noise by using statistics to estimate its characteristics. It is now used in a variety of Samsung products including air conditioners, cellphones, and robotic vacuum cleaners.
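One distinctive step in power-normalized cepstral coefficients is replacing the log compression of MFCC-style features with a power-law nonlinearity (exponent 1/15 in the published PNCC work), which keeps low-energy, noise-dominated channels from being exaggerated. A sketch of just that step; the full PNCC pipeline also involves gammatone filtering and medium-duration bias subtraction, omitted here:

```python
import numpy as np

def compress_log(power, floor=1e-10):
    """Conventional log compression, as used in MFCC-style features."""
    return np.log(np.maximum(power, floor))

def compress_power_law(power, exponent=1.0 / 15.0):
    """Power-law compression as in PNCC: differences between very small
    (noise-dominated) channel powers are shrunk rather than blown up,
    as the log would do."""
    return np.power(power, exponent)
```

For two quiet channel powers of 1e-4 and 1e-8, the log values differ by about 9.2, while the power-law values differ by only about 0.25, so background-noise fluctuations contribute far less to the final features.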
Samsung promoted Kim in 2021 to executive vice president of its six Global AI Centers, located in Cambridge, England; Montreal; Seoul; Silicon Valley; New York; and Toronto.
In that role he oversaw research on incorporating artificial intelligence and machine learning into Samsung products. He is the youngest person to be an executive vice president at the company.
He also led the development of Samsung’s generative large language models, which evolved into Samsung Gauss. The suite of generative AI models can generate code, images, and text.
In March he left the company to join Korea University as a professor of artificial intelligence, which he says is a dream come true.
“When I first started my doctoral work, my dream was to pursue a career in academia,” Kim says. “But after earning my Ph.D., I found myself drawn to the impact my research could have on real products, so I decided to go into industry.”
He says he was excited to join Korea University because “it has a strong presence in artificial intelligence” and is one of the top universities in the country.
Kim says his research will focus on generative speech models, multimodal processing, and integrating generative speech with language models.
Chasing his dream at Carnegie Mellon
Kim’s father was an electrical engineer, and from a young age Kim wanted to follow in his footsteps, he says. He attended a science-focused high school in Seoul to get a head start on engineering topics and programming. He earned his bachelor’s and master’s degrees in electrical engineering from Seoul National University in 1998 and 2001, respectively.
Kim had long hoped to earn a doctoral degree from a U.S. university because he felt it would give him more opportunities.
And that’s exactly what he did. He left for Pittsburgh in 2005 to pursue a Ph.D. in language and information technology at Carnegie Mellon.
“I decided to major in speech recognition because I was interested in raising the standard of quality,” he says. “I also liked that the field is multifaceted; I could work on hardware or software and easily shift focus from real-time signal processing to image signal processing or another part of the field.”
Kim did his doctoral work under the guidance of IEEE Life Fellow Richard Stern, who is probably best known for his theoretical work on how the human brain compares the sound arriving at each ear to determine where the sound is coming from.
“At the time, I wanted to improve the accuracy of automatic speech-recognition systems in noisy environments or when there were multiple speakers,” he says. He developed several signal-processing algorithms that used mathematical representations built from knowledge of how humans process auditory information.
Kim earned his Ph.D. in 2010 and joined Microsoft in Seattle as a software development engineer and speech scientist. He worked at Microsoft for three years before joining Google.
Access to trustworthy information
Kim joined IEEE when he was a doctoral student so he could present his research papers at IEEE conferences. In 2016 a paper he wrote with Stern was published in the IEEE/ACM Transactions on Audio, Speech, and Language Processing. It won them the IEEE Signal Processing Society’s 2019 Best Paper Award, and Kim says he felt honored to receive the “prestigious award.”
Kim maintains his IEEE membership partly because, he says, IEEE is a trustworthy source of information, and it gives him access to the latest technical knowledge.
Another benefit of membership is IEEE’s global network, Kim says.
“By being a member, I have the opportunity to meet other engineers in my field,” he says.
He is a regular attendee of the annual IEEE International Conference on Acoustics, Speech, and Signal Processing. This year he is the technical program committee’s vice chair for the meeting, which is scheduled for next month in Seoul.