Cheetah Mobile Inc (CMCM) 2024 Q1 Earnings Call Transcript

Full original text

Disclaimer: the Chinese translation was produced by Google Translate and is for reference only; please rely on the English original.

  • Operator

  • Good day and welcome to the Cheetah Mobile first-quarter 2024 earnings conference call. (Operator Instructions) Please note this event is being recorded. I would now like to turn the conference over to Helen, Investor Relations for Cheetah Mobile.

  • Helen Zhu - IR

  • Thank you, operator. Welcome to Cheetah Mobile's first-quarter 2024 earnings conference call. With us today are our company's Chairman and CEO, Mr. Fu Sheng; and Director and CFO, Mr. Thomas Ren. Following management's prepared remarks, we will conduct the Q&A session.

  • Before we begin, I refer you to the Safe Harbor statement in our earnings release, which also applies to our conference call today as we will make forward-looking statements.

  • At this time, I would now like to turn the conference call over to Chairman and CEO, Mr. Fu Sheng. Please go ahead, Fu.

  • Sheng Fu - Chief Executive Officer, Chairman of the Board of Directors

  • Hello, everyone. Thank you for joining us today. This is our first earnings call since November 2021, and we are excited to share our progress as we resume our quarterly updates. Cheetah Mobile is making changes: we are moving our focus from 2C to 2B. In Q1, revenue from our AI and others segment, which is enterprise-focused, increased by 62% compared to last year and 36% from the previous quarter. These revenues now make up 43% of our total revenue. We expect this to grow to about 50% by the end of the year, a significant step in our transformation.

  • Our recent acquisition of Beijing OrionStar, an AI service provider, was an important move. It gave us a skilled sales team, strong ties with business customers, and end-to-end capabilities for LLMs, including model training, fine-tuning, developing LLM-based apps, and enhancing service robots, a new touchpoint for interacting with end users and customers in the AI era. With OrionStar, we are now focused on building enterprise apps with LLMs and introducing LLM-powered robots for specific business needs.

  • We see two main reasons for this focus. First, market opportunity. Unlike the competitive 2C market, enterprises are increasingly choosing LLM-based apps on private clouds due to data security concerns. However, they face challenges in developing such apps, which presents a substantial opportunity in China's enterprise sector.

  • Second, synergies. Bringing together Cheetah and OrionStar allows us to combine our enterprise relationships with AI skills, better capturing the market opportunity. By selling robots to businesses, we can also find new ways to use LLMs to improve efficiency. We are using a product-driven approach to enhance LLM capabilities. This is why we focus on the 10B-parameter LLM segment and avoid large upfront investments in GPUs. We believe that chasing ever-larger parameter counts is unnecessary, and enterprises can deploy and use 10B LLMs on private clouds at a lower cost.

  • Over the past few months, we trained our 14B-parameter foundation models from scratch; they have been approved by authorities for large-scale rollout and rank among the top of various lists. Additionally, we are fine-tuning nearly all leading open-source foundation models to offer more options for our customers, all without significantly increasing costs.

  • Furthermore, we have seen positive developments from integrating LLM-based apps into our service robots. In particular, our delivery robots can now interact better with users, leading to increased demand, especially in Japan and South Korea. Currently, our overseas revenue has surpassed domestic revenue and continues to grow steadily. With LLMs, we believe the features of our service robots will expand even further.

  • I would also like to highlight how we help our customers use LLM-based apps efficiently. For example, we helped Chengdu University develop LLM-based features for its app, improving the user experience. We also developed LLM-powered customer service features for another customer's products, including WeChat mini programs, apps, and our service robots. This service is now available in [Ucha], helping local residents apply for housing funds. We are also working with enterprises in China's franchise industry to improve their management with LLM-based apps.

  • In the early stage of LLM-based app development, we work closely with our customers to understand their needs, identify areas for improvement with LLMs, find the most appropriate LLM to fine-tune, and develop custom apps. This process helps us standardize some LLM-based apps and capabilities, particularly in customer service, enterprise management, and franchise chains, which we can replicate to more customers.

  • As a result, we are closely monitoring customer feedback and satisfaction. Additionally, these applications can be incorporated into our service robots. Our long-term business model in the LLM era will involve selling robots and offering value-added services. As we focus on building LLM-based apps for enterprises, we will shift resources from our agency and Internet businesses to the AI business. This will improve the operating margin of our Internet business, which we use as a financial performance metric.

  • In summary, LLMs are a once-in-a-generation opportunity. With OrionStar on board and a clear strategy, we are confident in our direction. We would like to emphasize that we do not want to set short-term revenue growth targets; instead, we are aggressively prioritizing customer satisfaction and building lighthouse projects. By doing so, we believe we will establish a new growth engine that drives sustainable long-term growth in both revenue and margins over time. All we need is a bit of patience. We thank all our dedicated employees for their hard work in making this happen.

  • Thank you. And Thomas?

  • Jintao Ren - Chief Financial Officer, Director

  • Thank you, [Fu Sheng]. Hello, everyone on the call. Please note that unless stated otherwise, all amounts are in RMB terms. Today, I am going to talk about two topics: first, our continued investment in large language models, or LLMs, which resulted in a widened operating loss for the quarter even as total revenue resumed its increase; second, our healthy balance sheet.

  • First, we are investing in LLMs. We aim to help enterprises quickly develop LLM-based apps. As Fu Sheng mentioned in his remarks, our acquisition of OrionStar has allowed service robots to become a key revenue contributor to the AI and others segment. In Q1 2024, revenues from AI and others increased by 62% year over year and 36% quarter over quarter to RMB81 million, accounting for 43% of total revenue in the period.

  • Driven by contributions from Beijing OrionStar, our total revenue increased by 12% year over year and 14% quarter over quarter to RMB190 million. The acquisition also allows the Cheetah and OrionStar teams to work together more efficiently to capture the opportunity in LLMs as we help Chinese enterprises develop apps that boost productivity. We expect this to lead to substantial revenue growth over time.

  • In addition, LLMs are enabling us to improve the product experience provided by our service robots, which are now more capable of answering users' different inquiries. This enhancement has strengthened our competitiveness and should drive the sale of our service robots over time.

  • In Q1 2024, our total non-GAAP costs and expenses increased 21% year over year and 19% quarter over quarter, and non-GAAP operating loss was RMB66 million in the quarter, up from RMB42 million in the same period last year and RMB49 million in the previous quarter. This is primarily due to the investments in LLMs mentioned earlier.

  • Through Beijing OrionStar, we acquired many R&D talents and 2B sales personnel, who are very important for us to capitalize on the opportunity in this sector. As of March 31, 2024, we had about 860 employees, up from about 720 a year ago. We are also renting GPUs for model training and fine-tuning. Excluding the impact of the aforementioned investments in LLMs, our costs and expenses as well as our margins remained stable.

  • For example, excluding SBC, the operating margin of our Internet business was 7.9% in the quarter, up from 3.1% in the same quarter last year, as we continue to review our product portfolio and remove products that do not address user pain points. We will continue this approach moving forward.

  • At the same time, we will continue to invest in talent, both R&D staff specialized in LLMs and 2B sales personnel, to help us seize the LLM opportunity and build a new growth engine for Cheetah. These investments will be backed by our strong cash reserves. Meanwhile, we will continue to increase the operating profit of our Internet business.

  • Secondly, Cheetah Mobile has a healthy balance sheet. As of March 31, 2024, we had cash and cash equivalents and short-term investments of about USD250 million. In addition, we had a USD130 million of long-term investments, which includes several holdings in well-known entities such as [Lingxishenzhi].

  • Lastly, in line with the practice of comparable China-based companies listed in the US capital market, we have decided not to provide revenue guidance going forward. Thank you.

  • Helen Zhu - IR

  • For today's call, management will answer questions in Chinese, and an AI agent will translate management's comments into English on a separate line. Please note the translation is for convenience purposes only.

  • In case of any discrepancy, management's statements in Chinese will prevail. If you are unable to hear the English translation, a transcript in English will be available on our IR website within seven working days. Thank you so much. Operator, we can now take questions. Thank you.

  • Operator

  • (Operator Instructions) Ladies and gentlemen, please stand by for the English translation of the question-and-answer session.

  • Unidentified Participant

  • The first question is: what are Cheetah's plans and goals for 2024? Which areas does it plan to focus on: customer base, technology, or products?

  • So I think our goal is to thoroughly implement our strategic transformation. That is to say, over the past several years, Cheetah Mobile has gradually shifted from a company focused on the consumer market to a company focused on the enterprise market and building those capabilities. The main focus of this 2B shift is artificial intelligence and large models, which are a technological wave. What we really need to focus on is doing a good job in the application of artificial intelligence and building in this direction.

  • This direction is the core strategy of our entire company, and we have set a new company slogan: we want to become a provider of new intelligent productivity tools in the era of artificial intelligence. Of course, this time the productivity tools mainly refer to the 2B industry.

  • Regarding the specific points you mentioned, we think the key is still the product. Although artificial intelligence is very popular, there are not many truly useful products. The technology behind large models may be very powerful, but there are not many good cases that enterprise users can actually use now. We see that we need to do a solid job on enterprise applications and use artificial intelligence to make them effective.

  • In addition, we will [equip] our service robots with our large models so that they can have better interaction capabilities and better self-awareness and judgment capabilities, and be used by our enterprise customers in more scenarios. I think it will be very good if we can do a solid job on this in 2024.

  • The second question is that Cheetah Mobile has been a company focused on the 2C business. Now the company wants to move into 2B, do private deployment of large models, and make robots. Where does your confidence come from?

  • The 2B business is different from the 2C business, and it may require spending a lot of energy maintaining customers and customer relationships. How will President Fu approach this? To start with, OrionStar was originally invested in by Cheetah, and I also spent a lot of energy assisting OrionStar with its 2B work from the time that effort began.

  • So in this process, we had a team, and we also participated in some of the work. We also gained a lot of experience in the process of integrating OrionStar. As for the 2B business you mentioned, we spent a long time learning, including what you said about spending a lot of energy maintaining customers and customer relationships. I think the most important thing is to build a set of organizational capabilities suitable for 2B.

  • We have also spent a lot of time over the past six months on various aspects of this. One more thing I want to say is that in addition to integrating OrionStar, we actually did a lot of 2B work a few years ago, including a business called [Quayon], which provides enterprises with cloud services from Amazon and Google. We are also a partner of Google in China; not a gold-medal partner, but a partner nonetheless. So in fact, at that time we had already begun to continuously explore how to communicate with 2B customers and how our organization can adapt to such a 2B market.

  • It should be said that the transformation from 2C to 2B does indeed involve a lot of so-called transformation pain. But we, including myself and our management, have spent a lot of energy not only learning but also practicing. As you said, maintaining customer relationships is indeed time-consuming in 2B. Of course, now that our organization has been built, we have something similar to an 'iron triangle' sales model, and we have a dedicated [AR] position to serve our customers.

  • And I myself spend time not so much maintaining customer relationships as communicating with customers to learn their needs, because only when I, as the top leader, understand the needs of customers can we do a good job in the 2B business. This is also what we have learned in the past few years.

  • As for this layout, I think our idea today is, first of all, to build benchmark customers, and we now have several top customers in this industry to whom we are delivering. This delivery is crucial to us. Although we are doing 2B, the 2C mindset of attaching importance to the user experience is still our lifeblood, and we will definitely make our customers feel that the services and products we provide are enough to satisfy them. After building the benchmark customers, as I just said, we can standardize some of our own standard components and then replicate them.

  • Secondly, because of how our organization is constructed, many of our customer relationships can be leveraged in the large-model business and also in the robot business. There are many cross-selling scenarios where customers purchase large models and machines together. So I think Cheetah Mobile is now in a start-up period again.

  • We can't say there is any particularly big layout. The more important thing is to choose the artificial intelligence front and then serve customers well. Under the tide of artificial intelligence, we really build applications in a down-to-earth manner to make customers feel satisfied.

  • The third question is that the company's accounts receivable, prepayment, and accounts payable are relatively large. Can you explain what business this is caused by? How will the company manage receivables and payables?

  • Thank you for your question. The large amounts in those several accounts are all related to our advertising agency business. Cheetah's advertising agency business helps many Chinese advertisers purchase advertisements on several relatively large overseas platforms, such as online broadcasting platforms. Since the revenue we recognize is only the advertising agency fee, the full amount of the customers' advertisement purchases and our payments to the advertising platforms are recorded in the prepayment and payable accounts you just mentioned.

  • This business is actually also a 2B business, and we have been operating it for nearly 10 years. During these 10 years, we have formed a very strict mechanism to evaluate the credit performance of advertisers and manage the receivable and payable periods. We remain very confident in the cash management of this business.

  • This question is about how the company plans to make Orion unlock value for Cheetah's shareholders and whether they will consider listing Orion separately.

  • You mentioned Orion. In fact, after our acquisition, most of the group's business is now placed in the Orion entity. However, as the listed company, Cheetah Mobile is always committed to creating the greatest value for Cheetah Mobile's shareholders.

  • Regarding the planning for Orion, we will comprehensively evaluate various capital-operation opportunities, including the possibility of listing the subsidiary separately or conducting independent financing. Our goal is to unlock the market value of Orion's business performance in effective ways, thereby further enhancing the stock price of the entire company. In every decision, we will fully consider the market environment, company strategy, and the long-term interests of Cheetah Mobile's shareholders, to ensure that every step we take brings the greatest return to shareholders.

  • The first question is that many cloud vendors did project-based work in previous years, for which they were criticized. What innovation does Cheetah have in the private deployment of large models? What is the revenue and profit margin level of the projects we are doing? And why do enterprise applications of large models require private deployment?

  • Many cloud vendors provide standardized models, fine-tuning tools, and application development tools for enterprises to use, and a certain scale of invocation volume has formed. At the same time, the inference cost of cloud vendors' large models is constantly falling, and some models are even directly free. In this case, why do enterprises still need private deployment? What types of enterprises are suitable for private deployment of large models?

  • I will answer briefly, first regarding the project-based work of cloud vendors. To be honest, I don't know much about it, because as far as I know, some cloud vendor projects are particularly large; a huge private cloud deployment is actually not the same concept as the deployment we are talking about today. In fact, cloud vendors have been providing their customers with good enough deployment services. It's just that for companies like Amazon, the deployment and personnel costs are very high, so they let partners complete the work, just as we went and undertook many such projects. And because such partners have lower costs, there is also decent profit in it.

  • Regarding the revenue and profit margin of our projects today, frankly speaking, the real work we do helping enterprises with private deployment is still at the benchmark-building stage. As I just mentioned in the prepared remarks, we are not considering this too much right now, out of consideration for users. The model we are launching now is more about being able to work with partners. When we have a particularly clear picture, we will talk about it again.

  • And why do large enterprises require private deployment of large models? Because the larger the enterprise, the more consideration it gives to data security. Today, when you really use a large model, what you actually pass to it is a lot of internal enterprise documents, especially sensitive ones. The vast majority of enterprises are very concerned about this, because the [intelligence] of large models comes from data. After the data on the Internet is exhausted, the data inside enterprises is also a very important data source.

  • So at least at the customer level, we see quite a lot of concern, and basically companies of a certain size all require private deployment of large models. The difference between us and the previous cloud work is that when we do private deployment: first, the cost of privately deploying a large model itself is not high. It is not a complex system to deploy; in fact, today you can basically deploy a large model by setting up a few servers, so the cost of the deployment itself is very low.

  • Secondly, regarding project-based work: today we are doing AI implementation when we deploy large models, and large models have a different character from previous technologies, because their reasoning and comprehension abilities are relatively strong. This makes it easier for us to approach customers in other industries than in the previous era, not only of cloud but also of SaaS. In fact, our ability to cross domains has been greatly enhanced compared with before.

  • That is to say, in the past, if I didn't know enough about an industry, it was actually very difficult for me to serve it. But large models understand things on their own; what I mean is that they grasp a lot of professional knowledge by themselves. So our workload, not so much the deployment itself as helping customers build applications, will be much less than before. And once these strong ties are formed, replicability will also be much stronger.

  • For example, we just mentioned a government project we did for housing provident funds, which took us a long time. But when the second customer came, our deployment might take only two or three weeks to complete. Thirdly, you asked about the fact that the inference cost of cloud vendors' large models is constantly decreasing and some models are even directly free.

  • Most of the free models now are open-source models, and when using open-source models, the vast majority of customers we encounter require private deployment. Private deployment is divided into two types. One is private deployment within the internal network, which is required by enterprises with high security needs; the other is private deployment in the cloud. That is, I deploy a model in the cloud, but the model can only be used by me, with no data crossing over.

  • This kind of deployment is also our private deployment. So what you just asked about mainly comes down to data security considerations. Of course, if you use the large model a lot, it is not just one machine; but the cost of such deployment is constantly decreasing, so this still has some advantages.

  • Third, as to what kinds of enterprises are suitable: we think that the larger the enterprise, the more it needs data security. So conversely, this is also a good thing for us, since more of the demand comes from large enterprises with strong payment capabilities.

  • This question is about how you think about the relationship between the robotics business and large models, and what promotional effect the company's all-in bet on large-model enterprise applications has on the robotics business today.

  • The customer base of our robots is enterprise users. After the acquisition, OrionStar not only has an agency system, that is, its agents, but also does a lot of enterprise informatization implementation and deployment services. So just from the channel perspective, a large part of it can be reused.

  • Secondly, from a technical point of view, the large model is the brain of the robot. In the past, except for industrial machinery, robots in the robotics industry did not develop well; one of the important reasons is that the brainpower of the machine was limited. The reason the robotics industry is so popular now is, in fact, the breakthrough of large models, which brings enhanced decision-making and judgment capabilities.

  • Now everyone believes that the robotics industry, whether service robots, industrial robots, or even humanoid robots, will have a bright future.

  • So what is the relationship between the two? On the one hand, as we just mentioned, the customer relationships can be reused, and a large part can be reused for enterprise customers in the 2B segment.

  • Once you establish a connection, you will find that customers who buy robots are also interested in large models. When you talk to them about this, they will feel that you can also help them improve many other functions.

  • Secondly, and more importantly, if we do not develop large-model capabilities, our robots will lose their competitiveness in the long run, because we are not just a hardware manufacturer but really focus on their autonomous decision-making capabilities.

  • Now, through the training, fine-tuning, and application of large models, we have a clear understanding of how to apply the capabilities of the large model to robots, and we have already started some training in this area internally. It is no longer just about training the large model, but about combining robotics with the large model and the language model. This capability will continue to expand.

  • At this stage, what we can see is that in the past few years we did a lot of voice interaction, but the growth was not good enough because the user base was not large enough, and when questions went beyond its scope, it could not answer. Now, with large language models, the smoothness and satisfaction of the communication have been greatly improved.

  • We have also disclosed our data to the market. At that time, we helped customers with (spoken in foreign language), and we were able to achieve an accuracy rate of about 97%. This shows that when a robot can answer with such a high accuracy rate during an explanation, its practicality is equivalent to that of a human.

  • And this increase in accuracy is not achieved by a large amount of manual work, as in the past, but by putting some documents into it, after which it can reach a high level of accuracy. Therefore, at this stage, it is obvious that demand for robots in the (spoken in foreign language) area will increase, especially since the large model is itself versatile.

  • In the past, the rollout of our robots overseas was mostly slow, because crossing languages was a lot of work for us. But now, thanks to the large model, it is naturally multilingual, so as a next step we will also launch language-interactive robots overseas. Of course, in the long run, we are also doing some training on robotic arms to enable robots to do physical work, but this still needs some time.

  • Thank you. My question is about chips. Against the background of high-end chips being restricted in China, will Cheetah continue to train its own large model? When fine-tuning and iterating models for enterprise customers, how does Cheetah solve the chip problem?

  • This business was founded in 2016, and we were already doing artificial intelligence back then. In 2017, Cheetah shouted out the slogan of (inaudible) and also collaborated a lot with [six stars] on AI. So our experience didn't start last year. Although the large language model has some characteristics different from previous models, the underlying neural network, the transformer, is something we already used very early on, and some of it was used in our speech models.

  • So our entire team's understanding of the transformer comes from long-term accumulation. As for fine-tuning, I think the competitive advantage comes more from granularity, that is, being able to do it finely enough, because fine-tuning itself involves preparing hundreds of thousands of pieces of corpus and refining that data according to the scenario.

  • It also requires a lot of careful and detailed management, as well as communication about users' needs. If we talk about competitive advantages in such a fierce market, it is very difficult for a company to say it has any unique, insurmountable advantage in technology.

  • I think our advantages come more from the combination with customers in the market. That is, what we really focus on is the process of rapid iteration rather than some single point that we can do and others can't. So we constantly emphasize the importance of users, word of mouth, and the implementation of projects.

  • Then you asked about private deployment itself. In fact, the private deployment of large models is not difficult at all. What we really do is not the private deployment of large models, but, after deploying into the user's network, doing the corresponding (inaudible) work according to the user's business characteristics and needs. The difficulty lies in the fact that today's model capabilities have not reached the level of a universal AI.

  • As someone asked before: today's large language models have a certain reasoning ability, but there is a large gap compared with the needs of enterprise scenarios.

  • What is needed here is to build the application. The competitive advantage in building applications is your insight into the customers' needs and the use of a whole set of technical means to provide a solution for that demand, because the customer only cares about whether the result is satisfactory, not whether it is solved by the model or by other technologies in the application.

  • We have found through practice that when a general-purpose model is really used in many enterprises just to do professional knowledge Q&A, customer satisfaction is not good. This is our own practice. So it is necessary to customize some (inaudible) according to the customers' needs and let the (inaudible) work together with the large model.

  • Only after this collaborative work can users really reach the so-called digital-employee level. In fact, there seems to be a lack of real solutions in the market that can truly satisfy customers; this is our understanding of the market. So when you ask about competitive advantage, in fact we are exploring the depth of customers' needs and getting the details right, as well as recruiting the relevant talent. On this point, first of all, regarding our leader, since it involves us, we will not say too much, but he has published papers and has sufficient academic and industrial foresight.

  • Then, in terms of the specific implementers and algorithm engineers, there is a considerable reserve of talent in China, and it is not too difficult to recruit such people in the market. We are not going to compete on the large-parameter training of large models, so our demand for so-called top talent is not that high.

  • We are more about combining the already abundant capabilities of large models to provide our enterprise customers with a set of solutions. This is our focus. The next question is about private deployments and how to solve the problem of continuous model iteration: suppose we selected an open-source base model for a customer based on the enterprise scenario, did fine-tuning, deployment, and application development, and the enterprise started to use it.

  • But the base model is now evolving very quickly. When the base model is updated, will it completely overwrite the capabilities of the model you fine-tuned for the enterprise?

  • Yes. Thank you very much for your question. There are a few concepts here that I would like to explain first, namely, fine-tuning and application. These are two different concepts. In fact, in most enterprise scenarios, there is no need to do large-scale fine-tuning specifically for the enterprise, because current models above 10 billion parameters already have strong basic capabilities.

  • We now believe that the basic capabilities of a model at this scale can basically meet the requirements of most enterprise applications. Moreover, most enterprises rarely have enough data for fine-tuning a large model to bring meaningful improvement. Our current approach is to use what we call an application suite to combine the model with the needs of the enterprise.

  • When the model is updated, the application suite will not become obsolete, because most of it is combined with the enterprise's internal systems. For example, you ask the large model a question such as "How do I handle this document today?" and it will ask, "What documents do you need to provide? Please tell me your ID number."

  • After you tell it, it will query the internal interface with that ID number. This is part of the application. Once this application interface is written, updating the model again does not affect it at all. That's the first point.

  • Secondly, after the model's capabilities are enhanced, the smoothness of the application, that is, the accuracy rate and other aspects of the user experience, will improve. I don't think there is any conflict here. But it now seems that no matter how much a model's capabilities improve, it cannot know the needs of every enterprise.

  • If you look closely, the various needs of enterprises are all different. Today's models are trained on Internet data and, for example, have no idea what Cheetah Mobile's administrative needs are, what its internal documents are, or what a given employee needs.

  • These are all things that need to be solved by applications. We especially welcome the several new models that have launched recently, because some of our applications can be built even better on top of them. This is different from before because, first, when the model is upgraded, the API interface will not change much.

  • Second, we also interact with the model a great deal through what are called prompts, and this is not affected after the model is upgraded. The model and the application suite are in a complementary relationship. And for a long time to come, it seems unlikely that a model by itself can simply be deployed online and used directly by users. In fact, there are quite a lot of opportunities and great demand for enterprise-level applications here.
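The two points above, a stable API surface plus prompts owned by the application suite, can be sketched as follows. This is an illustrative pattern under stated assumptions, not Cheetah Mobile's code: the suite depends only on a minimal text-completion interface, so the underlying base model (self-trained or open-source, old or upgraded) can be swapped without touching application logic. `ChatModel`, `ModelV1`, `ModelV2`, and the template are all hypothetical.

```python
# Hypothetical sketch: the application suite depends only on a stable
# completion interface, so the base model can be upgraded or swapped
# (e.g., to a newer open-source model) without changing application code.

from typing import Protocol

class ChatModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class ModelV1:
    def complete(self, prompt: str) -> str:
        return f"[v1] answer to: {prompt}"

class ModelV2:  # an upgraded or swapped-in open-source base model
    def complete(self, prompt: str) -> str:
        return f"[v2] better answer to: {prompt}"

# The prompt is owned by the application suite, not the model.
PROMPT_TEMPLATE = "You are an enterprise assistant for {company}. Question: {question}"

def answer(model: ChatModel, question: str) -> str:
    return model.complete(PROMPT_TEMPLATE.format(company="ACME Corp", question=question))

print(answer(ModelV1(), "What is today's leave policy?"))
print(answer(ModelV2(), "What is today's leave policy?"))  # same call site, new model
```

The call site stays identical across model versions; only answer quality changes, which matches the "complementary relationship" described in the transcript.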

  • My question is about chips. Against the background of high-end chips being restricted in China, will Cheetah continue to train its own large model? And when fine-tuning and iterating models for enterprise customers, how does Cheetah solve the chip problem for the large model?

  • As we just mentioned, our model is at a scale of just over 10 billion parameters, and we are still iterating on it, but the parameter count will not grow much larger, because we are focused on enterprises' actual costs: a model of this size, if actually put to use, probably requires only a single higher-end server.

  • These choices are driven by users' needs. That is the first point. Compared with many companies pursuing general-purpose or larger-parameter large models, our demand for chips is not that great.

  • Secondly, we think that the training of large models today is already quite mature, and we don't need to do repetitive large-scale construction here. In fact, we anticipated early on that the open-source community would be very prosperous and that more and more open-source models with good enough performance would emerge.

  • Today, that is indeed the case. It also means that when choosing a model we can provide customers not only our own model but also many open-source models directly, so we are not so worried about the chip issue. And going forward, we will focus our own training more narrowly.

  • In our own training, beyond the large language model that was just asked about, we may focus more on upgrading skills and robot collaboration, because we think the supply of language models is already sufficient to meet market demand, and we are building more applications on top of them. We are not (inaudible) changing the capabilities of the original model itself. So we are not so worried about the chip issue.

  • Helen Zhu - IR

  • Thank you so much, everyone, from the Cheetah Mobile IR team. The live interpretation on this call was AI translation, powered by an [ATP] developed by our own team, so thank you for your understanding. We will have the English transcript available as soon as we can and will post it on our IR website within seven working days. Again, thank you so much.

  • And if you have any further questions, please let our team know. Thank you so much. Bye-bye.

  • Operator

  • The conference has now concluded. We thank you for attending today's presentation. You may now disconnect your lines and have a nice day.