Alibaba staff offers glimpse into life of building LLM in China

Chinese tech companies are amassing resources and talent to narrow the gap with OpenAI, and the daily experience of researchers on both sides of the Pacific can be surprisingly similar. A recent X post from an Alibaba researcher offers a rare glimpse into the life of developing large language models at the ecommerce firm, which is among a raft of Chinese internet giants striving to match the capabilities of ChatGPT.

Binyuan Hui, a natural language processing researcher on Qwen, Alibaba's large language model team, shared his daily schedule on X, mirroring a post by OpenAI researcher Jason Wei that went viral recently.

The parallel glimpse into their typical days reveals striking similarities, with wake-up times at 9 a.m. and bedtimes around 1 a.m. Both start the day with meetings, followed by stretches of coding, model training and brainstorming with colleagues. Even after getting home, both continue to run experiments at night and ponder ways to improve their models right up until bedtime.

One notable difference is that Hui, the Alibaba employee, mentioned reading research papers and browsing X to catch up on "what is happening in the world." And as a commenter pointed out, Hui doesn't have a glass of wine when he gets home, as Wei does.

This intense work regime is not unusual in China's current LLM space, where graduates of top universities are joining tech companies in droves to build competitive AI models. To a certain extent, Hui's demanding schedule reflects a personal drive to match, if not outpace, Silicon Valley companies in the AI space. It seems different from the involuntary "996" work hours associated with more "traditional" types of Chinese internet businesses that involve heavy operations, such as video games and ecommerce.

Indeed, even renowned AI investor and computer scientist Kai-Fu Lee puts in an incredible amount of effort. When I interviewed Lee about his newly minted LLM unicorn 01.AI in November, he admitted that late hours were the norm, but said employees were working hard willingly. That day, one of his staff had messaged him at 2:15 a.m. to express excitement about being part of 01.AI's mission.

Such a work ethic partly explains the speed at which China's tech firms are able to introduce LLMs. Qwen, for example, has open sourced a series of foundation models trained on both English and Chinese data. The largest has 72 billion parameters, the values a model learns from its training data that determine its ability to generate contextually relevant responses. The team has also been quick to introduce commercial applications: last April, Alibaba began integrating Qwen into its enterprise communication platform DingTalk and online marketplace Tmall.

No clear leader has emerged in China's LLM space so far, and venture capital firms and corporate investors are spreading their bets across multiple contenders. Besides building its own LLMs in-house, Alibaba has been aggressively investing in startups such as Moonshot AI, Zhipu AI, Baichuan and 01.AI.

Facing competition, Alibaba has been trying to carve out a niche, and its multilingual move could become a selling point. In December, the company released an LLM for several Southeast Asian languages. Called SeaLLM, the model is capable of processing information in Vietnamese, Indonesian, Thai, Malay, Khmer, Lao, Tagalog and Burmese. Through its cloud computing business and acquisition of ecommerce platform Lazada, Alibaba has established a sizable footprint in the region and can potentially introduce SeaLLM to these services down the road.



