AI
Chain of Thought + Reinforcement Learning: Innovations in DeepSeek-R1 and Kimi k1.5 Papers
·1302 words·7 mins
AI · LLM · CoT · Reinforcement Learning · DeepSeek · Kimi · Model Distillation · Chain of Thought
An in-depth analysis of the reasoning breakthroughs in DeepSeek-R1 and Kimi k1.5, exploring how DeepSeek enhances reasoning through the GRPO algorithm and model distillation, and how Kimi innovates in long Chain-of-Thought reasoning and reinforcement learning.
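As a quick illustration of the group-relative idea behind GRPO mentioned above, here is a minimal sketch (my own, not code from either paper) of how rewards for a group of answers sampled from the same prompt can be normalized into advantages without a separate value model:

```python
import numpy as np

def grpo_advantages(rewards):
    """Group-relative advantages in the spirit of GRPO: each sampled answer's
    reward is normalized by the mean and std of its own group, so no
    critic/value model is required."""
    rewards = np.asarray(rewards, dtype=np.float64)
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

# e.g. rule-based rewards for 4 answers sampled from the same prompt
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # ~ [ 1, -1, -1,  1 ]
```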
Building a LightRAG Knowledge Base with TiDB Vector
·1445 words·7 mins
RAG · LLM · AI · TiDB · Engineering Practice
While reviewing LightRAG, I found that its persistence support was still limited and was missing the most important backend, TiDB (not really). So I took some time to contribute TiDB support and write it up.
From Paper to Source Code: A Detailed Explanation of RAG Algorithms
·9743 words·46 mins
RAG · LLM · AI
This article explores the architectural design and concrete code implementations of RAG algorithms through a reading of their papers and source code. It focuses on GraphRAG, LightRAG, and RAPTOR, and also touches on Anthropic's Contextual Retrieval and methods for evaluating RAG systems. It concludes by recommending that the method be chosen according to the size of the knowledge base.
Rerank Models
·2502 words·12 mins
search · AI · RAG
With the popularity of the Transformer architecture, many Embedding and Rerank models are now built on it. Taking this opportunity, this article traces the history of the research, surveys the architectures adopted by several well-known Rerank models and the companies behind them, and finally returns to the question of whether a Rerank stage is worth adding in RAG scenarios.
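As a concrete reference for the cross-encoder design that most Rerank models share, here is a minimal sketch using the sentence-transformers CrossEncoder API; the model name and example texts are my own illustrative choices, not taken from the article:

```python
from sentence_transformers import CrossEncoder

# A cross-encoder rerank model scores each (query, document) pair jointly,
# unlike an embedding (bi-encoder) model that encodes query and document separately.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # illustrative model choice

query = "how does reranking help RAG?"
candidates = [
    "Reranking re-scores the retriever's top-k passages with a stronger model.",
    "TiDB is a distributed SQL database.",
]
scores = model.predict([(query, doc) for doc in candidates])
reranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
```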
Mixture of Experts (MoE) Model Notes
·1388 words·7 mins
MoE · Large Model · AI · Paper Reading
This article organizes the core concepts of the Mixture of Experts (MoE) model and introduces the architectures, characteristics, and optimization methods of several open-source MoE models, such as GShard, Switch Transformers, DeepSeek-MoE, and LLaMA-MoE.
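To illustrate the expert-routing idea these models share, here is a minimal PyTorch sketch of top-2 gating; it is my own simplification (no capacity limits or load-balancing loss), not code from any of the papers above:

```python
import torch
import torch.nn.functional as F

def top2_moe(x, router, experts):
    """Route each token to its two highest-scoring experts and combine
    the expert outputs with the renormalized gate weights."""
    logits = router(x)                          # [tokens, n_experts]
    gates = F.softmax(logits, dim=-1)
    weights, idx = gates.topk(2, dim=-1)        # top-2 experts per token
    weights = weights / weights.sum(dim=-1, keepdim=True)
    out = torch.zeros_like(x)
    for slot in range(2):
        for e, expert in enumerate(experts):
            mask = idx[:, slot] == e            # tokens whose slot-th choice is expert e
            if mask.any():
                out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
    return out

# Toy usage: 16 tokens, hidden size 8, 4 experts (each a simple Linear layer).
d, n_experts = 8, 4
experts = torch.nn.ModuleList([torch.nn.Linear(d, d) for _ in range(n_experts)])
router = torch.nn.Linear(d, n_experts)
y = top2_moe(torch.randn(16, d), router, experts)
```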