Major transformation in the field of Generative AI

A brief summary of the major transformation in the field of Generative AI

In 2023, generative AI took the consumer market by storm, reaching a record number of users in record time. As we now observe the exponential growth of generative AI opportunities in the corporate sector in 2024, some pessimists argue that generative AI will never scale. At CBOT, however, having followed this field since GPT-2, we have successfully reaped the results of our R&D efforts in this area.

We have launched the CBOT GPT module and provide it to our business partners; it integrates both with existing powerful global LLMs (Large Language Models) and with the CBOT LLM, which runs on-premises on organizations’ own servers.

In order to define CBOT’s product development strategy in the field of generative AI, we held discussions in the first half of 2024 with each of our partners, all of whom are leaders in their respective industries, to understand their approaches. We witnessed how significantly the attitude toward GenAI has changed in the last six months.

From CBOT’s perspective, we are sharing a brief summary of the major transformation in the field of generative AI.

Enjoy reading!

Leaders’ initial hesitation to use generative AI has now been replaced by curiosity.

That curiosity is visible in practice: leaders are increasing the number of applications using open-source models, bringing pilot projects to production, and expanding their budgets.

Most corporate companies still lack the knowledge and the right technical skills needed to implement and scale generative AI.

At CBOT, we conduct numerous informational meetings with all stakeholders in the ecosystem to explain that having just a model provider API is not enough to use generative AI in corporate solutions. To achieve successful results, it’s not only platforms but also specialized skills that are needed to correctly design, maintain, scale, and report on the infrastructure components.

In response to this need, we have established a dedicated team at CBOT to help our partners train their own GenAI models and support their projects. This team goes beyond applying existing models and provides professional services for company-specific model development with CBOT LLM.

Resource Allocation: Budgets Are Increasing Rapidly and Are Being Allocated to More Permanent Software and Licensing Items.

In 2023, GenAI experiments conducted with foundational model APIs showed us highly interesting and promising results. Following these low-budget studies, as we entered 2024, we observed that many companies were increasing their resources for GenAI projects. Projects developed on the CBOT Platform using hybrid NLP + LLM structures are drawing particular attention. Companies’ GenAI spending has increased 2 to 5 times compared to last year, and we believe they will see a substantial return on these investments.

Most of the GenAI spending in 2023 was funded by “innovation” budgets and “one-time” funds. This isn’t too surprising, as everyone initially wants to test and see results. In 2024, however, the situation has changed slightly: many leaders are now shifting these expenses to permanent software, licensing, and operational budgets.

At CBOT, we foresee that by 2025, only a small portion of GenAI spending will come from innovation budgets, making it increasingly critical for leaders who have not yet invested to make their decisions.

Leaders’ GenAI budgets are primarily aimed at efficiency and cost savings.

In 2024, we observe that many leaders are directing their GenAI budgets towards customer service to reduce personnel costs. This trend is rapidly expanding the use cases of GenAI, exciting us about what the future might hold. We have increased our R&D investments in this area, estimating that savings in LLM-supported customer service could reach up to 90% per call with the right strategies.

Thanks to the “AI CoWorkers” feature developed within the CBOT GPT module, we have enabled the implementation of generative AI-based employees in many different fields, not just customer service.

Here are some example use cases:

The Role of ROI is Becoming More Evident

We frequently observe that leaders are increasingly focusing on calculating the ROI of productivity gains delivered by AI. Although they still have some questions about indicators like NPS and customer satisfaction, they are searching for clearer and more definitive methods to evaluate more tangible outcomes such as revenue growth, cost savings, and efficiency.

We believe that the importance of ROI will further increase in the next 2-3 years, and its role will become more prominent as leaders implement AI projects. In this process, employees reporting that they use their time more efficiently will be the most important positive indicator.

Models: Organizations Are Leaning Towards Multiple Models and the Open Source World.

As we entered 2024, most corporate companies were experimenting with one or at most two models, usually those from OpenAI. Today, however, company leaders are keen to test multiple models and use some of them concurrently in production environments. With the rapid changes in technology, the ultimate goal is to test the latest models and open-source solutions, quickly switch when needed, and achieve the highest efficiency.

Our discussions clearly show that interest in open-source models among leaders is growing. Organizations are becoming more willing to use or transition to open-source models when they see that these models, customized to their specific needs, offer performance comparable to closed-source models. This indicates a significant increase in open-source usage in the coming periods.

As of 2023, we see that 80% of GenAI projects are conducted with closed-source models and 20% with open-source models. In the future, we foresee this balance equalizing to 50% closed-source and 50% open-source.

Currently, as a Microsoft partner, CBOT implements most of its projects, as expected, with market-leading Azure OpenAI and OpenAI models. However, the GPT Module of the CBOT Platform is designed with the foresight that more models will become widespread and the demand for training open-source models will increase in the future. This way, we aim to offer our users full flexibility in model selection.

Control Requests, Sensitive Use Cases, and Corporate Data Security Concerns

We observe that demands for control and customization are the main factors driving the use of open-source models; this shows that the cost advantage offered by ready-made solutions is no longer as prominent. Organizations are ready to take the necessary steps to ensure the security of sensitive data, understand why models produce specific outputs, and make fine adjustments according to specific use cases. The approach of “getting the right answer in a secure environment is well worth the cost” allows leaders to proceed, aware of the costs they will face.

Especially institutions like banks and public organizations that are under strict regulatory oversight, as well as companies where IP is at the core of their business model, are hesitant to share their sensitive data with closed-source model providers due to data security concerns. These institutions prioritize protecting their sensitive data in a more secure environment and therefore maintain a conservative stance.

At CBOT, we employ two different methods for institutions with such security sensitivities:

1. CBOT LLM

Customizing open-source models to be institution-specific and deploying them on-premises, fully closed to external access.

2. CBOT Data Control CoWorker

Before data from the on-premises system is sent to a closed-source model, it is passed through a control point that identifies data that should not be sent, masking it before transmission or blocking the transmission entirely.
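
As a rough illustration of how such a control point can work, the sketch below scans an outgoing prompt, masks recognizable sensitive values, and blocks the request entirely if too much sensitive content is found. The pattern list, threshold, and function names are hypothetical assumptions for this sketch, not the actual CBOT Data Control CoWorker implementation.

```python
# Hypothetical sketch of a pre-send control point; the patterns, threshold,
# and names are illustrative assumptions, not CBOT's actual implementation.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "national_id": re.compile(r"\b\d{11}\b"),  # e.g. an 11-digit ID number
}

def mask_or_block(prompt: str, block_threshold: int = 5):
    """Mask known sensitive patterns; return None (block) if too many are found."""
    hits = 0
    masked = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        hits += len(pattern.findall(masked))
        masked = pattern.sub(f"[{label.upper()}_MASKED]", masked)
    if hits > block_threshold:
        return None  # too much sensitive content: do not send externally at all
    return masked

safe_prompt = mask_or_block("Contact jane.doe@example.com about customer 12345678901")
if safe_prompt is not None:
    pass  # forward safe_prompt to the closed-source model API
```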

Leaders Mostly Customize Existing Models Instead of Redesigning Them Entirely

In 2023, foundation models such as OpenAI’s GPT-4 and Meta’s LLaMA made a big splash and became the focal point of discussions in the industry. These models demonstrated the power of large language models and had a decisive impact on companies’ AI investments.

As we move into 2024, we see that organizations are still eager to customize AI models. However, with the rapid proliferation of high-quality open-source models, it’s evident that many companies prefer to adapt existing models to their needs rather than training their own large language models from scratch. In this context, methods like Retrieval-Augmented Generation (RAG) ground existing models in an organization’s own data, achieving stronger performance in specific areas without retraining. Customizing models offered through platforms like Hugging Face has become a common strategy among organizations.
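
To make the retrieval idea concrete, here is a minimal RAG sketch. It assumes the open-source sentence-transformers package; the embedding model name, the document snippets, and the prompt wording are illustrative assumptions, not a specific CBOT project.

```python
# Minimal RAG sketch: embed company documents, retrieve the closest passages,
# and build a grounded prompt for whichever LLM the project uses.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Refunds are processed within 14 days of the request.",
    "Premium members can change flights free of charge.",
    "Baggage allowance is 20 kg on domestic routes.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the top_k passages most similar to the question."""
    q_vec = encoder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec  # cosine similarity on normalized vectors
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

question = "How long does a refund take?"
context = "\n".join(retrieve(question))
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# `prompt` is then sent to the chosen open- or closed-source LLM.
```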

We anticipate that this trend will continue in 2024 and beyond. Companies will likely continue choosing to optimize existing models rather than developing new ones from scratch. This strategy offers time and cost advantages while providing opportunities for faster and more effective solutions.

The decision-makers we spoke to prioritize reasoning ability, reliability, and ease of cloud-based access in their AI model preferences. However, they are also open to models that offer different advantages.

At CBOT, we have assembled Turkey’s most experienced team in AI model customization. Since 2019, we have maintained our leadership in the AI field based on NLP and deep learning, and we continue to do so in generative AI. Through strong collaborations with academic circles, we successfully integrate the latest technologies into our projects.

Most Models Have Similar Performance, and Rapid Model Changes Have Gained Importance

At CBOT, we achieved remarkable results in all our tests with various models. While the ecosystem often evaluates models based on general performance metrics, we compared both open-source and closed-source models according to our criteria.

Although closed-source models often perform better in external tests, we have given high scores to open-source models because they can be more easily adapted to specific needs. For example, we observed that adapted versions of Mistral and LLaMA performed as well as OpenAI in some areas. This shows us that model performances are converging, creating a broader range of capable models.

The CBOT Platform offers the ability to switch models in projects in seconds. This allows us to test new models quickly and deploy the most suitable one for various needs instantly. We have designed model switching to be done with just an API change, ensuring quick and seamless transitions. Additionally, we have created “model gardens” to store models suitable for different needs, reducing provider dependency and allowing flexible adaptation to market innovations.
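
To illustrate what switch-by-configuration can look like, the sketch below keeps a small registry of OpenAI-compatible chat endpoints; changing the model a project uses then becomes a one-line change of the registry key. The registry entries, URLs, and environment-variable names are hypothetical assumptions, not the CBOT Platform’s actual schema.

```python
# Hypothetical "model garden": a registry of interchangeable chat endpoints
# that expose an OpenAI-compatible /v1/chat/completions API.
import os
import requests

MODEL_GARDEN = {
    "gpt-4o": {
        "url": "https://api.openai.com/v1/chat/completions",
        "key_env": "OPENAI_API_KEY",
    },
    "mistral-local": {  # e.g. an on-premises open-source model server
        "url": "http://llm.internal:8000/v1/chat/completions",
        "key_env": "INTERNAL_LLM_KEY",
    },
}

def chat(model_name: str, user_message: str) -> str:
    """Send one chat turn to whichever endpoint the registry points at."""
    entry = MODEL_GARDEN[model_name]
    response = requests.post(
        entry["url"],
        headers={"Authorization": f"Bearer {os.environ[entry['key_env']]}"},
        json={
            "model": model_name,
            "messages": [{"role": "user", "content": user_message}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Switching providers or models is a one-line change: pick a different key.
answer = chat("mistral-local", "Summarize today's open support tickets.")
```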

Turning Potential into Performance: More Moves to Production

We have seen that those new to generative AI technology initially focused on writing code with their own resources rather than purchasing platforms to implement GenAI projects. Organizations hesitated to purchase platforms due to a lack of sufficient success stories on corporate platforms and a limited understanding of the technology’s boundaries.

The main reason for this trend was the widespread perception that “thanks to foundational models and APIs, it’s easier to build our own AI applications.” However, it soon became clear that the time and effort spent on these projects often did not yield the expected results.

Ultimately, they realized how platforms like CBOT Platform, which go beyond the “LLM + UI” approach and reshape corporate workflows using proprietary data, make the process much easier. With this awareness, those wanting to turn potential into actual performance began favoring platform solutions, particularly for their features such as secure use of proprietary data, accelerated workflows, and flexibility in model transitions.

Organizations Prefer to Test Internally Before Moving to External Use

At CBOT, we see that organizations are inclined to try GenAI technology in internal processes first and then move toward external use cases. Two major concerns underpin this approach: the first is the risk of hallucinations and security vulnerabilities created by GenAI; the second is the potential negative impact this technology may have on public relations in sensitive sectors like healthcare and finance.

In the first three quarters of 2024, most of the demands we met were for internal efficiency-enhancing applications and areas where human intervention is critical. For example, AI-based personal assistants developed to improve employee experience helped employees manage their daily tasks and meetings more efficiently. Additionally, the digital workers we call “AI CoWorkers” significantly increased operational efficiency by automating repetitive tasks in process automation projects.

These experiences show that companies are willing to be more cautious when using GenAI technology in external projects.

Final Word: The Market Size is Growing Rapidly

At CBOT, we foresee that total spending on GenAI model APIs and adaptations will show significant growth in 2025. This indicates that the current market has a high potential for expansion, and organizations are rapidly increasing their investments in this area.

In particular, we observed that in the third quarter of 2024, companies accelerated their pace of exploring and implementing GenAI solutions. Projects and agreements are now finalized much faster and on a larger scale.

Organizations are expanding their budgets and restructuring their application strategies with various models to best capitalize on the opportunities provided by GenAI. We anticipate that this trend will continue beyond 2025, with even more progress in production processes.

This rapidly developing market extends beyond the foundational model layer, covering a broad spectrum from adaptation services and tools to model servers, application development processes, and bespoke AI applications.

At CBOT, we are excited to contribute to exploring the new opportunities this dynamic and growing market offers our partners. With our extensive expertise and experience in this field, we remain committed to providing you with the highest quality solutions.
