[Request] Feature requests for the upcoming new knowledge base #6054
Comments
👀 @BryceWG Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
🥰 Requirement description
It is generally understood that a knowledge base mainly relies on RAG technology, as opposed to using files directly as context.
My idea is to add a new capability to the knowledge base: when attaching files in a conversation, allow choosing a 'file' that is already in the knowledge base directly as context, while of course keeping the option of using the 'knowledge base' itself as context. This effectively gives the knowledge base a cloud-drive role and gives its files a quick way to be pulled into a chat.
🧐 Solution
When attaching files in a conversation, allow choosing a 'file' that is already in the knowledge base and using it directly as context.
📝 Supplementary information
No response
In fact, the current interaction already supports this, but because the earlier RAG-based approach could not do full-text injection, the results were not ideal. The 2.0 release will add full-text injection, which should greatly improve results in scenarios that need full-text citation.
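To illustrate the difference being described, here is a minimal TypeScript sketch of the two context-building modes. All names here (FileRecord, buildContext, retrieveChunks) are hypothetical rather than lobe-chat's actual code, and the retriever is a naive keyword scorer standing in for embedding search.

```ts
// Hypothetical record for a knowledge-base file; not a real lobe-chat type.
interface FileRecord {
  name: string;
  fullText: string; // parsed plain text of the whole file
  chunks: string[]; // pre-split chunks used for RAG retrieval
}

// Naive stand-in retriever: scores chunks by keyword overlap with the query.
// A real implementation would use embedding similarity instead.
function retrieveChunks(file: FileRecord, query: string, k: number): string[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return [...file.chunks]
    .map((chunk) => ({
      chunk,
      score: terms.filter((t) => chunk.toLowerCase().includes(t)).length,
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map((x) => x.chunk);
}

function buildContext(file: FileRecord, query: string, mode: 'rag' | 'full-text'): string {
  if (mode === 'full-text') {
    // Full-text injection: the whole document goes into the prompt,
    // so the model can quote or reason over any part of it.
    return `<file name="${file.name}">\n${file.fullText}\n</file>`;
  }
  // RAG: only the top-k retrieved chunks are injected, which saves tokens
  // but can miss content in scenarios that need full-text citation.
  return retrieveChunks(file, query, 5)
    .map((chunk, i) => `<chunk index="${i}">\n${chunk}\n</chunk>`)
    .join('\n');
}
```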
Same request.
To add to this: right now, after a file is uploaded, lobe-chat has to wait for vectorization to finish before the file is handed to the model. Some PPT or PDF files with fairly complex charts fail vectorization with an error. Other similar platforms such as cherry studio seem to send the entire file to the model instead, which is faster and reads the content more accurately.
I have used cherry as well. Only a few providers' APIs support receiving files directly; as the author said, for APIs without a dedicated adaptation the file content is actually parsed locally and then sent as text.
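A rough sketch of that branching, assuming a hypothetical provider descriptor: the `supportsNativeFileInput` flag, `uploadFile` method, and the UTF-8 stand-in parser are all illustrative, not real lobe-chat or provider APIs.

```ts
import { readFile } from 'node:fs/promises';

// Stand-in parser: real code would use a PDF/PPT parser; here we just read the file as UTF-8.
async function parseToText(path: string): Promise<string> {
  return readFile(path, 'utf8');
}

// Hypothetical provider descriptor.
interface Provider {
  id: string;
  supportsNativeFileInput: boolean; // e.g. providers with a Gemini-style Files API
  uploadFile?: (path: string) => Promise<{ fileRef: string }>;
}

type MessagePart =
  | { type: 'text'; text: string }
  | { type: 'file'; fileRef: string };

// Decide how a user-attached file reaches the model.
async function attachFile(provider: Provider, path: string): Promise<MessagePart> {
  if (provider.supportsNativeFileInput && provider.uploadFile) {
    // The few providers with native file support: hand them the raw file.
    const { fileRef } = await provider.uploadFile(path);
    return { type: 'file', fileRef };
  }
  // Everyone else: parse locally and inline the text into the prompt.
  const text = await parseToText(path);
  return { type: 'text', text };
}
```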
1. Suggest integrating first-tier document parsing APIs such as Doc2X to improve the parsing accuracy of knowledge-base documents.
I hope a future version adds an entry in the knowledge base settings for choosing the embedding model, so users can pick the online or local model they want to use to build the knowledge base. The approach in the help docs of changing environment variables is too cumbersome; I tried it and some models still throw errors. Later on, I also hope the knowledge base can accept a web page URL directly and automatically embed the page content into the knowledge base.
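As a rough sketch of what a settings-driven embedding model picker could look like instead of environment variables, assuming an OpenAI-compatible endpoint for both online and local models; the `EmbeddingSettings` shape is hypothetical, not lobe-chat's actual configuration.

```ts
import OpenAI from 'openai';

// Hypothetical settings the knowledge-base UI could expose instead of env vars.
interface EmbeddingSettings {
  provider: 'openai' | 'ollama'; // online vs. local
  model: string;                 // e.g. 'text-embedding-3-small' or 'nomic-embed-text'
  baseURL?: string;              // e.g. 'http://localhost:11434/v1' for a local Ollama server
  apiKey?: string;
}

// Build an OpenAI-compatible client from whatever the user picked in settings.
function createEmbeddingClient(settings: EmbeddingSettings): OpenAI {
  return new OpenAI({
    apiKey: settings.apiKey ?? 'ollama', // local servers usually ignore the key
    baseURL: settings.baseURL,           // undefined -> default OpenAI endpoint
  });
}

// Embed a batch of texts (e.g. document chunks or a fetched web page) with the chosen model.
async function embed(settings: EmbeddingSettings, texts: string[]): Promise<number[][]> {
  const client = createEmbeddingClient(settings);
  const res = await client.embeddings.create({ model: settings.model, input: texts });
  return res.data.map((d) => d.embedding);
}
```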
I hope it becomes possible to create documents online and edit their content online in markdown format, with the saved data vectorized manually. During a conversation you could then tick a specific document or a document directory, which is essentially AI notes plus chat.
Providers like Gemini offer a file upload interface. For full-text injection, I hope there is an option to call that interface directly to get better performance.
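For reference, a minimal sketch of the kind of provider-side file API being referred to, written against Google's `@google/genai` JavaScript SDK; the model id and the exact call shapes are assumptions to verify against the current SDK documentation.

```ts
import { GoogleGenAI, createPartFromUri, createUserContent } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function askAboutFile(path: string, question: string): Promise<string | undefined> {
  // Upload the raw file to the provider's Files API instead of parsing it locally.
  const file = await ai.files.upload({ file: path });

  // Reference the uploaded file by URI in the prompt.
  const response = await ai.models.generateContent({
    model: 'gemini-2.0-flash', // assumed model id; substitute any current one
    contents: createUserContent([
      createPartFromUri(file.uri!, file.mimeType!),
      question,
    ]),
  });
  return response.text;
}
```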
My personal suggestion is to add a panel to the sidebar of the chat interface containing a list of the current conversation's files/knowledge bases with checkboxes, so that for each message you can choose to send only some files, or run RAG retrieval over only some files (perhaps an interaction similar to NotebookLM).
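A tiny sketch of the per-conversation selection state such a panel might track; the shape is purely illustrative and not an actual lobe-chat type.

```ts
// Illustrative state for a NotebookLM-style source panel.
interface ConversationSourcePanel {
  conversationId: string;
  sources: Array<{
    id: string;
    kind: 'file' | 'knowledgeBase';
    name: string;
    checked: boolean;          // unchecked sources are excluded from this turn
    mode: 'full-text' | 'rag'; // how a checked source is fed to the model
  }>;
}

// Only the checked sources participate in context building for the next message.
const activeSources = (panel: ConversationSourcePanel) =>
  panel.sources.filter((s) => s.checked);
```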
I suggest adding these features to the knowledge base.
@arvinxx : Taking this issue as a chance to collect everyone's requests. If there is anything you are unhappy with in the current knowledge base, feel free to raise it here. The knowledge base 2.0 overhaul starts in March, and I will read every request you post.