

Harnessing Artificial Intelligence for Knowledge Management

By Liz Mason, Director of Monitoring, Evaluation, and Learning (MEL).

In the world of knowledge management (KM), artificial intelligence (AI) is becoming an increasingly valuable tool for accelerating the extraction of key insights from research and for distilling analysis and data into concise summaries. In this blog post, Bixal’s director of monitoring, evaluation, and learning (MEL), Liz Mason, shares insights gained through Bixal’s exploration of AI, including design elements, the importance of engaging subject matter experts, and the level of investment necessary for successful implementation. Read on to discover how to better use AI to enhance your organization's knowledge management activities.

Design Elements

When comparing commercial software with open-source solutions, we encounter various trade-offs relating to data security, reliability, and cost-effectiveness. When working with sensitive or private data, using commercial vendors such as OpenAI, Microsoft Azure, and Google necessitates sending data into their systems. In our work, we used the open-source Mistral 7B Instruct LLM as a foundation model, which we adapted for Retrieval-Augmented Generation (RAG) and document analysis. Running the model locally kept the data secure. The open-source approach also kept costs down and allowed us to run experiments without license fees or the need to obtain vendor approvals.
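To illustrate the retrieval step that feeds a locally hosted model in a RAG pipeline, here is a minimal, self-contained sketch in plain Python. It ranks documents against a query using bag-of-words cosine similarity; a production pipeline would use learned embeddings and a vector store, and the local model call itself is out of scope. All function names and document text are illustrative, not drawn from the actual pilot.

```python
from collections import Counter
from math import sqrt

def bow_vector(text):
    """Lowercased bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query.

    In a full RAG pipeline, these chunks would be passed as context to a
    locally hosted model (e.g., Mistral 7B Instruct), so the documents
    never leave the organization's own infrastructure.
    """
    q = bow_vector(query)
    ranked = sorted(documents, key=lambda d: cosine(q, bow_vector(d)), reverse=True)
    return ranked[:k]

docs = [
    "Crop rotation improves soil health over successive seasons.",
    "Quarterly budget reports are due at the end of each fiscal quarter.",
    "Livestock vaccination schedules reduce disease outbreaks.",
]
top = retrieve("How does crop rotation affect soil health?", docs, k=1)
```

Because everything runs in-process, no query or document content is sent to an external service, which is the core of the data-security argument above.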

In terms of reliability, both options offer unique advantages. Localized enterprise installations tend to have fewer “hallucination” issues — plausible-sounding but incorrect or fabricated output — since customization is streamlined during implementation. Nevertheless, open-source projects benefit from large communities constantly addressing bugs and inconsistencies across diverse use cases. Furthermore, incorporating references to external sources into generated content improves credibility and transparency, a task that is simpler with open-source tools explicitly designed for source attribution.
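One lightweight way to support source attribution is to label each retrieved chunk with an identifier and instruct the model to cite those labels in its answer. The sketch below only assembles such a prompt; the model call is omitted, and the function name, prompt wording, and source IDs are all illustrative rather than tied to any particular tool.

```python
def prompt_with_sources(question, chunks):
    """Build a prompt whose context chunks carry bracketed source labels,
    so the model can be instructed to cite them after each claim.

    `chunks` is a list of (source_id, text) pairs, e.g., produced by a
    retrieval step over an organization's document store.
    """
    context = "\n".join(f"[{src}] {text}" for src, text in chunks)
    return (
        "Answer using only the context below. Cite the bracketed source "
        "ID after each claim.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

example = prompt_with_sources(
    "What irrigation method reduced water use?",
    [("trial-report-04", "Drip irrigation plots used less water than flood-irrigated plots.")],
)
```

Keeping the source IDs in the prompt makes it straightforward to verify each generated claim against the original document, which supports the credibility and transparency goals described above.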

Ultimately, the choice between enterprise and open-source platforms requires careful consideration based on organizational requirements and priorities.

Subject Matter Experts (SMEs) 

Including SMEs was instrumental in improving the quality of outputs during the pilot phases. In an agricultural research-focused pilot, we integrated SMEs — such as agricultural and livestock experts — into the review process from the outset, ensuring these specialists could provide valuable insights promptly and significantly enhancing the relevance and comprehensibility of the outputs. The SMEs offered clear guidance on expectations and use cases, which helped us refine our prompt development and data-processing techniques and underscored the value of SME involvement in prompt engineering. Through this engagement, we fine-tuned our approach, improving overall outcomes and highlighting the impact expert insight can have on the quality and usefulness of technological solutions.
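As a sketch of how expert guidance can feed directly into prompt engineering, the rules SMEs articulate during review can be embedded into a reusable prompt template. The guideline text and function name below are illustrative assumptions, not drawn from the actual pilot:

```python
def build_summary_prompt(document_text, sme_guidance):
    """Compose a summarization prompt that embeds expert review criteria.

    `sme_guidance` is a list of plain-language rules collected from
    subject matter experts during output review (illustrative values).
    """
    rules = "\n".join(f"- {rule}" for rule in sme_guidance)
    return (
        "Summarize the research excerpt for agricultural program staff.\n"
        f"Follow these expert guidelines:\n{rules}\n\n"
        f"Excerpt:\n{document_text}\n\nSummary:"
    )

guidance = [
    "Use common names for crops and livestock breeds.",
    "State sample sizes when reporting findings.",
    "Flag results that apply only to a specific region.",
]
prompt = build_summary_prompt("Trial plots in two districts showed higher maize yields.", guidance)
```

Treating SME feedback as data in this way means each review cycle updates the guideline list rather than requiring ad hoc rewrites of every prompt.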

Level of Investment

When implementing AI-driven approaches in KM, it is crucial to invest in labor and infrastructure. Our pilot strategy allowed our client to explore potential future applications of AI in KM activities while remaining mindful of timeline and resource constraints. To move these pilots to a more advanced stage of implementation, it is essential to focus on enhancing outputs and thoroughly evaluating the training needed to maintain accuracy and efficiency.

Refining LLMs for domain-specific tasks requires a significant upfront investment, including large volumes of high-quality, human-annotated data. It also requires robust server and cloud infrastructure with specialized resources capable of supporting roughly two weeks of continuous operation. Additionally, careful analysis of the LLM is crucial to mitigate biases and ensure the generation of relevant content. Although this initial investment may seem substantial, it is a strategic one that leads to long-term savings: by producing more consistent and efficient results, it lays the foundation for scalable and sustainable AI applications in KM. Ultimately, the upfront financial and resource commitments are offset by enhanced operational efficiency and a reduced need for ongoing manual intervention, making a compelling case for front-end investment in AI technologies.
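To give a sense of the data-preparation side of that investment, human-annotated examples are typically serialized into an instruction-tuning format such as JSONL before fine-tuning an open model. This is a minimal sketch; the field names and example content are illustrative, and the schema should match whatever training framework is actually used.

```python
import json

def to_jsonl(records):
    """Serialize annotated examples into instruction-tuning JSONL:
    one JSON object per line with instruction/input/output fields
    (field names are illustrative; match your framework's schema).
    """
    return "\n".join(
        json.dumps({
            "instruction": r["instruction"],
            "input": r.get("input", ""),
            "output": r["output"],
        })
        for r in records
    )

examples = [
    {
        "instruction": "Summarize the key finding.",
        "input": "Trial plots with drip irrigation used 30% less water.",
        "output": "Drip irrigation reduced water use by 30% in trial plots.",
    },
]
jsonl = to_jsonl(examples)
```

Each line of the resulting file is an independent training example, which is why assembling thousands of such records with careful human annotation dominates the upfront cost described above.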

