
dataset_service.py 122KB

Latest commit: Introduce Plugins (#13836) · 8 months ago
import copy
import datetime
import json
import logging
import secrets
import time
import uuid
from collections import Counter
from typing import Any, Literal, Optional

import sqlalchemy as sa
from sqlalchemy import exists, func, select
from sqlalchemy.orm import Session
from werkzeug.exceptions import NotFound

from configs import dify_config
from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError
from core.model_manager import ModelManager
from core.model_runtime.entities.model_entities import ModelType
from core.plugin.entities.plugin import ModelProviderID
from core.rag.index_processor.constant.built_in_field import BuiltInField
from core.rag.index_processor.constant.index_type import IndexType
from core.rag.retrieval.retrieval_methods import RetrievalMethod
from events.dataset_event import dataset_was_deleted
from events.document_event import document_was_deleted
from extensions.ext_database import db
from extensions.ext_redis import redis_client
from libs import helper
from libs.datetime_utils import naive_utc_now
from libs.login import current_user
from models.account import Account, TenantAccountRole
from models.dataset import (
    AppDatasetJoin,
    ChildChunk,
    Dataset,
    DatasetAutoDisableLog,
    DatasetCollectionBinding,
    DatasetPermission,
    DatasetPermissionEnum,
    DatasetProcessRule,
    DatasetQuery,
    Document,
    DocumentSegment,
    ExternalKnowledgeBindings,
)
from models.model import UploadFile
from models.source import DataSourceOauthBinding
from services.entities.knowledge_entities.knowledge_entities import (
    ChildChunkUpdateArgs,
    KnowledgeConfig,
    RerankingModel,
    RetrievalModel,
    SegmentUpdateArgs,
)
from services.errors.account import NoPermissionError
from services.errors.chunk import ChildChunkDeleteIndexError, ChildChunkIndexingError
from services.errors.dataset import DatasetNameDuplicateError
from services.errors.document import DocumentIndexingError
from services.errors.file import FileNotExistsError
from services.external_knowledge_service import ExternalDatasetService
from services.feature_service import FeatureModel, FeatureService
from services.tag_service import TagService
from services.vector_service import VectorService
from tasks.add_document_to_index_task import add_document_to_index_task
from tasks.batch_clean_document_task import batch_clean_document_task
from tasks.clean_notion_document_task import clean_notion_document_task
from tasks.deal_dataset_vector_index_task import deal_dataset_vector_index_task
from tasks.delete_segment_from_index_task import delete_segment_from_index_task
from tasks.disable_segment_from_index_task import disable_segment_from_index_task
from tasks.disable_segments_from_index_task import disable_segments_from_index_task
from tasks.document_indexing_task import document_indexing_task
from tasks.document_indexing_update_task import document_indexing_update_task
from tasks.duplicate_document_indexing_task import duplicate_document_indexing_task
from tasks.enable_segments_to_index_task import enable_segments_to_index_task
from tasks.recover_document_indexing_task import recover_document_indexing_task
from tasks.remove_document_from_index_task import remove_document_from_index_task
from tasks.retry_document_indexing_task import retry_document_indexing_task
from tasks.sync_website_document_indexing_task import sync_website_document_indexing_task

logger = logging.getLogger(__name__)
class DatasetService:
    @staticmethod
    def get_datasets(page, per_page, tenant_id=None, user=None, search=None, tag_ids=None, include_all=False):
        query = select(Dataset).where(Dataset.tenant_id == tenant_id).order_by(Dataset.created_at.desc())

        if user:
            # get permitted dataset ids
            dataset_permission = (
                db.session.query(DatasetPermission).filter_by(account_id=user.id, tenant_id=tenant_id).all()
            )
            permitted_dataset_ids = {dp.dataset_id for dp in dataset_permission} if dataset_permission else None

            if user.current_role == TenantAccountRole.DATASET_OPERATOR:
                # only show datasets that the user has permission to access
                # Check if permitted_dataset_ids is not empty to avoid WHERE false condition
                if permitted_dataset_ids and len(permitted_dataset_ids) > 0:
                    query = query.where(Dataset.id.in_(permitted_dataset_ids))
                else:
                    return [], 0
            else:
                if user.current_role != TenantAccountRole.OWNER or not include_all:
                    # show all datasets that the user has permission to access
                    # Check if permitted_dataset_ids is not empty to avoid WHERE false condition
                    if permitted_dataset_ids and len(permitted_dataset_ids) > 0:
                        query = query.where(
                            db.or_(
                                Dataset.permission == DatasetPermissionEnum.ALL_TEAM,
                                db.and_(
                                    Dataset.permission == DatasetPermissionEnum.ONLY_ME, Dataset.created_by == user.id
                                ),
                                db.and_(
                                    Dataset.permission == DatasetPermissionEnum.PARTIAL_TEAM,
                                    Dataset.id.in_(permitted_dataset_ids),
                                ),
                            )
                        )
                    else:
                        query = query.where(
                            db.or_(
                                Dataset.permission == DatasetPermissionEnum.ALL_TEAM,
                                db.and_(
                                    Dataset.permission == DatasetPermissionEnum.ONLY_ME, Dataset.created_by == user.id
                                ),
                            )
                        )
        else:
            # if no user, only show datasets that are shared with all team members
            query = query.where(Dataset.permission == DatasetPermissionEnum.ALL_TEAM)

        if search:
            query = query.where(Dataset.name.ilike(f"%{search}%"))

        # Check if tag_ids is not empty to avoid WHERE false condition
        if tag_ids and len(tag_ids) > 0:
            if tenant_id is not None:
                target_ids = TagService.get_target_ids_by_tag_ids(
                    "knowledge",
                    tenant_id,
                    tag_ids,
                )
            else:
                target_ids = []
            if target_ids and len(target_ids) > 0:
                query = query.where(Dataset.id.in_(target_ids))
            else:
                return [], 0

        datasets = db.paginate(select=query, page=page, per_page=per_page, max_per_page=100, error_out=False)

        return datasets.items, datasets.total
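
    # Illustrative usage sketch (not part of the original file; `tenant` and
    # `operator` are hypothetical objects):
    #
    #     items, total = DatasetService.get_datasets(
    #         page=1, per_page=20, tenant_id=tenant.id, user=operator, search="faq"
    #     )
    #
    # Returns at most `per_page` Dataset rows plus the total match count. Note the
    # early ([], 0) returns: a DATASET_OPERATOR with no DatasetPermission rows, or a
    # tag filter that matches nothing, short-circuits instead of emitting a
    # WHERE-false query.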

    @staticmethod
    def get_process_rules(dataset_id):
        # get the latest process rule
        dataset_process_rule = (
            db.session.query(DatasetProcessRule)
            .where(DatasetProcessRule.dataset_id == dataset_id)
            .order_by(DatasetProcessRule.created_at.desc())
            .limit(1)
            .one_or_none()
        )
        if dataset_process_rule:
            mode = dataset_process_rule.mode
            rules = dataset_process_rule.rules_dict
        else:
            mode = DocumentService.DEFAULT_RULES["mode"]
            rules = DocumentService.DEFAULT_RULES["rules"]

        return {"mode": mode, "rules": rules}

    @staticmethod
    def get_datasets_by_ids(ids, tenant_id):
        # Check if ids is not empty to avoid WHERE false condition
        if not ids or len(ids) == 0:
            return [], 0
        stmt = select(Dataset).where(Dataset.id.in_(ids), Dataset.tenant_id == tenant_id)

        datasets = db.paginate(select=stmt, page=1, per_page=len(ids), max_per_page=len(ids), error_out=False)

        return datasets.items, datasets.total

    @staticmethod
    def create_empty_dataset(
        tenant_id: str,
        name: str,
        description: Optional[str],
        indexing_technique: Optional[str],
        account: Account,
        permission: Optional[str] = None,
        provider: str = "vendor",
        external_knowledge_api_id: Optional[str] = None,
        external_knowledge_id: Optional[str] = None,
        embedding_model_provider: Optional[str] = None,
        embedding_model_name: Optional[str] = None,
        retrieval_model: Optional[RetrievalModel] = None,
    ):
        # check if dataset name already exists
        if db.session.query(Dataset).filter_by(name=name, tenant_id=tenant_id).first():
            raise DatasetNameDuplicateError(f"Dataset with name {name} already exists.")
        embedding_model = None
        if indexing_technique == "high_quality":
            model_manager = ModelManager()
            if embedding_model_provider and embedding_model_name:
                # check if embedding model setting is valid
                DatasetService.check_embedding_model_setting(tenant_id, embedding_model_provider, embedding_model_name)
                embedding_model = model_manager.get_model_instance(
                    tenant_id=tenant_id,
                    provider=embedding_model_provider,
                    model_type=ModelType.TEXT_EMBEDDING,
                    model=embedding_model_name,
                )
            else:
                embedding_model = model_manager.get_default_model_instance(
                    tenant_id=tenant_id, model_type=ModelType.TEXT_EMBEDDING
                )
            if retrieval_model and retrieval_model.reranking_model:
                if (
                    retrieval_model.reranking_model.reranking_provider_name
                    and retrieval_model.reranking_model.reranking_model_name
                ):
                    # check if reranking model setting is valid
                    DatasetService.check_reranking_model_setting(
                        tenant_id,
                        retrieval_model.reranking_model.reranking_provider_name,
                        retrieval_model.reranking_model.reranking_model_name,
                    )
        dataset = Dataset(name=name, indexing_technique=indexing_technique)
        # dataset = Dataset(name=name, provider=provider, config=config)
        dataset.description = description
        dataset.created_by = account.id
        dataset.updated_by = account.id
        dataset.tenant_id = tenant_id
        dataset.embedding_model_provider = embedding_model.provider if embedding_model else None  # type: ignore
        dataset.embedding_model = embedding_model.model if embedding_model else None  # type: ignore
        dataset.retrieval_model = retrieval_model.model_dump() if retrieval_model else None  # type: ignore
        dataset.permission = permission or DatasetPermissionEnum.ONLY_ME
        dataset.provider = provider
        db.session.add(dataset)
        db.session.flush()

        if provider == "external" and external_knowledge_api_id:
            external_knowledge_api = ExternalDatasetService.get_external_knowledge_api(external_knowledge_api_id)
            if not external_knowledge_api:
                raise ValueError("External API template not found.")
            external_knowledge_binding = ExternalKnowledgeBindings(
                tenant_id=tenant_id,
                dataset_id=dataset.id,
                external_knowledge_api_id=external_knowledge_api_id,
                external_knowledge_id=external_knowledge_id,
                created_by=account.id,
            )
            db.session.add(external_knowledge_binding)

        db.session.commit()
        return dataset
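
    # Illustrative sketch (assumption, not from the original source): creating a
    # "high_quality" dataset validates the embedding (and optional reranking) model
    # before any row is committed, so a misconfigured provider fails fast. The
    # `tenant` and name below are hypothetical:
    #
    #     dataset = DatasetService.create_empty_dataset(
    #         tenant_id=tenant.id,
    #         name="support-kb",
    #         description=None,
    #         indexing_technique="high_quality",
    #         account=account,
    #     )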

    @staticmethod
    def get_dataset(dataset_id) -> Optional[Dataset]:
        dataset: Optional[Dataset] = db.session.query(Dataset).filter_by(id=dataset_id).first()
        return dataset

    @staticmethod
    def check_doc_form(dataset: Dataset, doc_form: str):
        if dataset.doc_form and doc_form != dataset.doc_form:
            raise ValueError("doc_form is different from the dataset doc_form.")

    @staticmethod
    def check_dataset_model_setting(dataset):
        if dataset.indexing_technique == "high_quality":
            try:
                model_manager = ModelManager()
                model_manager.get_model_instance(
                    tenant_id=dataset.tenant_id,
                    provider=dataset.embedding_model_provider,
                    model_type=ModelType.TEXT_EMBEDDING,
                    model=dataset.embedding_model,
                )
            except LLMBadRequestError:
                raise ValueError(
                    "No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
                )
            except ProviderTokenNotInitError as ex:
                raise ValueError(f"The dataset is unavailable, due to: {ex.description}")

    @staticmethod
    def check_embedding_model_setting(tenant_id: str, embedding_model_provider: str, embedding_model: str):
        try:
            model_manager = ModelManager()
            model_manager.get_model_instance(
                tenant_id=tenant_id,
                provider=embedding_model_provider,
                model_type=ModelType.TEXT_EMBEDDING,
                model=embedding_model,
            )
        except LLMBadRequestError:
            raise ValueError(
                "No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
            )
        except ProviderTokenNotInitError as ex:
            raise ValueError(ex.description)

    @staticmethod
    def check_reranking_model_setting(tenant_id: str, reranking_model_provider: str, reranking_model: str):
        try:
            model_manager = ModelManager()
            model_manager.get_model_instance(
                tenant_id=tenant_id,
                provider=reranking_model_provider,
                model_type=ModelType.RERANK,
                model=reranking_model,
            )
        except LLMBadRequestError:
            raise ValueError(
                "No Rerank Model available. Please configure a valid provider in the Settings -> Model Provider."
            )
        except ProviderTokenNotInitError as ex:
            raise ValueError(ex.description)
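
    # The three check_* helpers above share one pattern: resolve the model through
    # ModelManager and translate provider-layer failures (LLMBadRequestError,
    # ProviderTokenNotInitError) into plain ValueErrors that API callers can surface
    # directly, so configuration problems are caught before any indexing work starts.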

    @staticmethod
    def update_dataset(dataset_id, data, user):
        """
        Update dataset configuration and settings.

        Args:
            dataset_id: The unique identifier of the dataset to update
            data: Dictionary containing the update data
            user: The user performing the update operation

        Returns:
            Dataset: The updated dataset object

        Raises:
            ValueError: If dataset not found or validation fails
            NoPermissionError: If user lacks permission to update the dataset
        """
        # Retrieve and validate dataset existence
        dataset = DatasetService.get_dataset(dataset_id)
        if not dataset:
            raise ValueError("Dataset not found")

        # Verify user has permission to update this dataset
        DatasetService.check_dataset_permission(dataset, user)

        # Handle external dataset updates
        if dataset.provider == "external":
            return DatasetService._update_external_dataset(dataset, data, user)
        else:
            return DatasetService._update_internal_dataset(dataset, data, user)

    @staticmethod
    def _update_external_dataset(dataset, data, user):
        """
        Update external dataset configuration.

        Args:
            dataset: The dataset object to update
            data: Update data dictionary
            user: User performing the update

        Returns:
            Dataset: Updated dataset object
        """
        # Update retrieval model if provided
        external_retrieval_model = data.get("external_retrieval_model", None)
        if external_retrieval_model:
            dataset.retrieval_model = external_retrieval_model

        # Update basic dataset properties
        dataset.name = data.get("name", dataset.name)
        dataset.description = data.get("description", dataset.description)

        # Update permission if provided
        permission = data.get("permission")
        if permission:
            dataset.permission = permission

        # Validate and update external knowledge configuration
        external_knowledge_id = data.get("external_knowledge_id", None)
        external_knowledge_api_id = data.get("external_knowledge_api_id", None)
        if not external_knowledge_id:
            raise ValueError("External knowledge id is required.")
        if not external_knowledge_api_id:
            raise ValueError("External knowledge api id is required.")

        # Update metadata fields
        dataset.updated_by = user.id if user else None
        dataset.updated_at = naive_utc_now()
        db.session.add(dataset)

        # Update external knowledge binding
        DatasetService._update_external_knowledge_binding(dataset.id, external_knowledge_id, external_knowledge_api_id)

        # Commit changes to database
        db.session.commit()

        return dataset

    @staticmethod
    def _update_external_knowledge_binding(dataset_id, external_knowledge_id, external_knowledge_api_id):
        """
        Update external knowledge binding configuration.

        Args:
            dataset_id: Dataset identifier
            external_knowledge_id: External knowledge identifier
            external_knowledge_api_id: External knowledge API identifier
        """
        with Session(db.engine) as session:
            external_knowledge_binding = (
                session.query(ExternalKnowledgeBindings).filter_by(dataset_id=dataset_id).first()
            )
        if not external_knowledge_binding:
            raise ValueError("External knowledge binding not found.")

        # Update binding if values have changed
        if (
            external_knowledge_binding.external_knowledge_id != external_knowledge_id
            or external_knowledge_binding.external_knowledge_api_id != external_knowledge_api_id
        ):
            external_knowledge_binding.external_knowledge_id = external_knowledge_id
            external_knowledge_binding.external_knowledge_api_id = external_knowledge_api_id
            db.session.add(external_knowledge_binding)

    @staticmethod
    def _update_internal_dataset(dataset, data, user):
        """
        Update internal dataset configuration.

        Args:
            dataset: The dataset object to update
            data: Update data dictionary
            user: User performing the update

        Returns:
            Dataset: Updated dataset object
        """
        # Remove external-specific fields from update data
        data.pop("partial_member_list", None)
        data.pop("external_knowledge_api_id", None)
        data.pop("external_knowledge_id", None)
        data.pop("external_retrieval_model", None)

        # Filter out None values except for description field
        filtered_data = {k: v for k, v in data.items() if v is not None or k == "description"}

        # Handle indexing technique changes and embedding model updates
        action = DatasetService._handle_indexing_technique_change(dataset, data, filtered_data)

        # Add metadata fields
        filtered_data["updated_by"] = user.id
        filtered_data["updated_at"] = naive_utc_now()

        # update Retrieval model
        filtered_data["retrieval_model"] = data["retrieval_model"]

        # Update dataset in database
        db.session.query(Dataset).filter_by(id=dataset.id).update(filtered_data)
        db.session.commit()

        # Trigger vector index task if indexing technique changed
        if action:
            deal_dataset_vector_index_task.delay(dataset.id, action)

        return dataset
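
    # The `action` string returned by _handle_indexing_technique_change drives the
    # async vector-index rebuild above: "add" (switch to high_quality, embed
    # everything), "remove" (switch to economy, drop vectors), "update" (embedding
    # model changed), or None (no index work needed).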

    @staticmethod
    def _handle_indexing_technique_change(dataset, data, filtered_data):
        """
        Handle changes in indexing technique and configure embedding models accordingly.

        Args:
            dataset: Current dataset object
            data: Update data dictionary
            filtered_data: Filtered update data

        Returns:
            str: Action to perform ('add', 'remove', 'update', or None)
        """
        if dataset.indexing_technique != data["indexing_technique"]:
            if data["indexing_technique"] == "economy":
                # Remove embedding model configuration for economy mode
                filtered_data["embedding_model"] = None
                filtered_data["embedding_model_provider"] = None
                filtered_data["collection_binding_id"] = None
                return "remove"
            elif data["indexing_technique"] == "high_quality":
                # Configure embedding model for high quality mode
                DatasetService._configure_embedding_model_for_high_quality(data, filtered_data)
                return "add"
        else:
            # Handle embedding model updates when indexing technique remains the same
            return DatasetService._handle_embedding_model_update_when_technique_unchanged(dataset, data, filtered_data)
        return None

    @staticmethod
    def _configure_embedding_model_for_high_quality(data, filtered_data):
        """
        Configure embedding model settings for high quality indexing.

        Args:
            data: Update data dictionary
            filtered_data: Filtered update data to modify
        """
        # assert isinstance(current_user, Account) and current_user.current_tenant_id is not None
        try:
            model_manager = ModelManager()
            assert isinstance(current_user, Account)
            assert current_user.current_tenant_id is not None
            embedding_model = model_manager.get_model_instance(
                tenant_id=current_user.current_tenant_id,
                provider=data["embedding_model_provider"],
                model_type=ModelType.TEXT_EMBEDDING,
                model=data["embedding_model"],
            )
            filtered_data["embedding_model"] = embedding_model.model
            filtered_data["embedding_model_provider"] = embedding_model.provider
            dataset_collection_binding = DatasetCollectionBindingService.get_dataset_collection_binding(
                embedding_model.provider, embedding_model.model
            )
            filtered_data["collection_binding_id"] = dataset_collection_binding.id
        except LLMBadRequestError:
            raise ValueError(
                "No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
            )
        except ProviderTokenNotInitError as ex:
            raise ValueError(ex.description)

    @staticmethod
    def _handle_embedding_model_update_when_technique_unchanged(dataset, data, filtered_data):
        """
        Handle embedding model updates when indexing technique remains the same.

        Args:
            dataset: Current dataset object
            data: Update data dictionary
            filtered_data: Filtered update data to modify

        Returns:
            str: Action to perform ('update' or None)
        """
        # Skip embedding model checks if not provided in the update request
        if (
            "embedding_model_provider" not in data
            or "embedding_model" not in data
            or not data.get("embedding_model_provider")
            or not data.get("embedding_model")
        ):
            DatasetService._preserve_existing_embedding_settings(dataset, filtered_data)
            return None
        else:
            return DatasetService._update_embedding_model_settings(dataset, data, filtered_data)

    @staticmethod
    def _preserve_existing_embedding_settings(dataset, filtered_data):
        """
        Preserve existing embedding model settings when not provided in update.

        Args:
            dataset: Current dataset object
            filtered_data: Filtered update data to modify
        """
        # If the dataset already has embedding model settings, use those
        if dataset.embedding_model_provider and dataset.embedding_model:
            filtered_data["embedding_model_provider"] = dataset.embedding_model_provider
            filtered_data["embedding_model"] = dataset.embedding_model
            # If collection_binding_id exists, keep it too
            if dataset.collection_binding_id:
                filtered_data["collection_binding_id"] = dataset.collection_binding_id
        # Otherwise, don't try to update embedding model settings at all
        # Remove these fields from filtered_data if they exist but are None/empty
        if "embedding_model_provider" in filtered_data and not filtered_data["embedding_model_provider"]:
            del filtered_data["embedding_model_provider"]
        if "embedding_model" in filtered_data and not filtered_data["embedding_model"]:
            del filtered_data["embedding_model"]

    @staticmethod
    def _update_embedding_model_settings(dataset, data, filtered_data):
        """
        Update embedding model settings with new values.

        Args:
            dataset: Current dataset object
            data: Update data dictionary
            filtered_data: Filtered update data to modify

        Returns:
            str: Action to perform ('update' or None)
        """
        try:
            # Compare current and new model provider settings
            current_provider_str = (
                str(ModelProviderID(dataset.embedding_model_provider)) if dataset.embedding_model_provider else None
            )
            new_provider_str = (
                str(ModelProviderID(data["embedding_model_provider"])) if data["embedding_model_provider"] else None
            )

            # Only update if values are different
            if current_provider_str != new_provider_str or data["embedding_model"] != dataset.embedding_model:
                DatasetService._apply_new_embedding_settings(dataset, data, filtered_data)
                return "update"
        except LLMBadRequestError:
            raise ValueError(
                "No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
            )
        except ProviderTokenNotInitError as ex:
            raise ValueError(ex.description)
        return None

    @staticmethod
    def _apply_new_embedding_settings(dataset, data, filtered_data):
        """
        Apply new embedding model settings to the dataset.

        Args:
            dataset: Current dataset object
            data: Update data dictionary
            filtered_data: Filtered update data to modify
        """
        # assert isinstance(current_user, Account) and current_user.current_tenant_id is not None
        model_manager = ModelManager()
        try:
            assert isinstance(current_user, Account)
            assert current_user.current_tenant_id is not None
            embedding_model = model_manager.get_model_instance(
                tenant_id=current_user.current_tenant_id,
                provider=data["embedding_model_provider"],
                model_type=ModelType.TEXT_EMBEDDING,
                model=data["embedding_model"],
            )
        except ProviderTokenNotInitError:
            # If we can't get the embedding model, preserve existing settings
            logger.warning(
                "Failed to initialize embedding model %s/%s, preserving existing settings",
                data["embedding_model_provider"],
                data["embedding_model"],
            )
            if dataset.embedding_model_provider and dataset.embedding_model:
                filtered_data["embedding_model_provider"] = dataset.embedding_model_provider
                filtered_data["embedding_model"] = dataset.embedding_model
                if dataset.collection_binding_id:
                    filtered_data["collection_binding_id"] = dataset.collection_binding_id
            # Skip the rest of the embedding model update
            return

        # Apply new embedding model settings
        filtered_data["embedding_model"] = embedding_model.model
        filtered_data["embedding_model_provider"] = embedding_model.provider
        dataset_collection_binding = DatasetCollectionBindingService.get_dataset_collection_binding(
            embedding_model.provider, embedding_model.model
        )
        filtered_data["collection_binding_id"] = dataset_collection_binding.id

    @staticmethod
    def delete_dataset(dataset_id, user):
        dataset = DatasetService.get_dataset(dataset_id)

        if dataset is None:
            return False

        DatasetService.check_dataset_permission(dataset, user)

        dataset_was_deleted.send(dataset)

        db.session.delete(dataset)
        db.session.commit()
        return True

    @staticmethod
    def dataset_use_check(dataset_id) -> bool:
        stmt = select(exists().where(AppDatasetJoin.dataset_id == dataset_id))
        return db.session.execute(stmt).scalar_one()

    @staticmethod
    def check_dataset_permission(dataset, user):
        if dataset.tenant_id != user.current_tenant_id:
            logger.debug("User %s does not have permission to access dataset %s", user.id, dataset.id)
            raise NoPermissionError("You do not have permission to access this dataset.")
        if user.current_role != TenantAccountRole.OWNER:
            if dataset.permission == DatasetPermissionEnum.ONLY_ME and dataset.created_by != user.id:
                logger.debug("User %s does not have permission to access dataset %s", user.id, dataset.id)
                raise NoPermissionError("You do not have permission to access this dataset.")
            if dataset.permission == DatasetPermissionEnum.PARTIAL_TEAM:
                # For partial team permission, user needs explicit permission or be the creator
                if dataset.created_by != user.id:
                    user_permission = (
                        db.session.query(DatasetPermission).filter_by(dataset_id=dataset.id, account_id=user.id).first()
                    )
                    if not user_permission:
                        logger.debug("User %s does not have permission to access dataset %s", user.id, dataset.id)
                        raise NoPermissionError("You do not have permission to access this dataset.")
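
    # Permission summary for the check above (a reading aid derived from the code,
    # not an authoritative spec): cross-tenant access always raises; an OWNER passes
    # within the tenant; ONLY_ME admits only the creator; PARTIAL_TEAM admits the
    # creator or anyone with an explicit DatasetPermission row.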

    @staticmethod
    def check_dataset_operator_permission(user: Optional[Account] = None, dataset: Optional[Dataset] = None):
        if not dataset:
            raise ValueError("Dataset not found")

        if not user:
            raise ValueError("User not found")

        if user.current_role != TenantAccountRole.OWNER:
            if dataset.permission == DatasetPermissionEnum.ONLY_ME:
                if dataset.created_by != user.id:
                    raise NoPermissionError("You do not have permission to access this dataset.")
            elif dataset.permission == DatasetPermissionEnum.PARTIAL_TEAM:
                if not any(
                    dp.dataset_id == dataset.id
                    for dp in db.session.query(DatasetPermission).filter_by(account_id=user.id).all()
                ):
                    raise NoPermissionError("You do not have permission to access this dataset.")

    @staticmethod
    def get_dataset_queries(dataset_id: str, page: int, per_page: int):
        stmt = select(DatasetQuery).filter_by(dataset_id=dataset_id).order_by(db.desc(DatasetQuery.created_at))

        dataset_queries = db.paginate(select=stmt, page=page, per_page=per_page, max_per_page=100, error_out=False)

        return dataset_queries.items, dataset_queries.total

    @staticmethod
    def get_related_apps(dataset_id: str):
        return (
            db.session.query(AppDatasetJoin)
            .where(AppDatasetJoin.dataset_id == dataset_id)
            .order_by(db.desc(AppDatasetJoin.created_at))
            .all()
        )

    @staticmethod
    def get_dataset_auto_disable_logs(dataset_id: str):
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None
        features = FeatureService.get_features(current_user.current_tenant_id)
        if not features.billing.enabled or features.billing.subscription.plan == "sandbox":
            return {
                "document_ids": [],
                "count": 0,
            }
        # get recent 30 days auto disable logs
        start_date = datetime.datetime.now() - datetime.timedelta(days=30)
        dataset_auto_disable_logs = (
            db.session.query(DatasetAutoDisableLog)
            .where(
                DatasetAutoDisableLog.dataset_id == dataset_id,
                DatasetAutoDisableLog.created_at >= start_date,
            )
            .all()
        )
        if dataset_auto_disable_logs:
            return {
                "document_ids": [log.document_id for log in dataset_auto_disable_logs],
                "count": len(dataset_auto_disable_logs),
            }
        return {
            "document_ids": [],
            "count": 0,
        }


class DocumentService:
    DEFAULT_RULES: dict[str, Any] = {
        "mode": "custom",
        "rules": {
            "pre_processing_rules": [
                {"id": "remove_extra_spaces", "enabled": True},
                {"id": "remove_urls_emails", "enabled": False},
            ],
            "segmentation": {"delimiter": "\n", "max_tokens": 1024, "chunk_overlap": 50},
        },
        "limits": {
            "indexing_max_segmentation_tokens_length": dify_config.INDEXING_MAX_SEGMENTATION_TOKENS_LENGTH,
        },
    }

    DOCUMENT_METADATA_SCHEMA: dict[str, Any] = {
        "book": {
            "title": str,
            "language": str,
            "author": str,
            "publisher": str,
            "publication_date": str,
            "isbn": str,
            "category": str,
        },
        "web_page": {
            "title": str,
            "url": str,
            "language": str,
            "publish_date": str,
            "author/publisher": str,
            "topic/keywords": str,
            "description": str,
        },
        "paper": {
            "title": str,
            "language": str,
            "author": str,
            "publish_date": str,
            "journal/conference_name": str,
            "volume/issue/page_numbers": str,
            "doi": str,
            "topic/keywords": str,
            "abstract": str,
        },
        "social_media_post": {
            "platform": str,
            "author/username": str,
            "publish_date": str,
            "post_url": str,
            "topic/tags": str,
        },
        "wikipedia_entry": {
            "title": str,
            "language": str,
            "web_page_url": str,
            "last_edit_date": str,
            "editor/contributor": str,
            "summary/introduction": str,
        },
        "personal_document": {
            "title": str,
            "author": str,
            "creation_date": str,
            "last_modified_date": str,
            "document_type": str,
            "tags/category": str,
        },
        "business_document": {
            "title": str,
            "author": str,
            "creation_date": str,
            "last_modified_date": str,
            "document_type": str,
            "department/team": str,
        },
        "im_chat_log": {
            "chat_platform": str,
            "chat_participants/group_name": str,
            "start_date": str,
            "end_date": str,
            "summary": str,
        },
        "synced_from_notion": {
            "title": str,
            "language": str,
            "author/creator": str,
            "creation_date": str,
            "last_modified_date": str,
            "notion_page_link": str,
            "category/tags": str,
            "description": str,
        },
        "synced_from_github": {
            "repository_name": str,
            "repository_description": str,
            "repository_owner/organization": str,
            "code_filename": str,
            "code_file_path": str,
            "programming_language": str,
            "github_link": str,
            "open_source_license": str,
            "commit_date": str,
            "commit_author": str,
        },
        "others": dict,
    }

    @staticmethod
    def get_document(dataset_id: str, document_id: Optional[str] = None) -> Optional[Document]:
        if document_id:
            document = (
                db.session.query(Document).where(Document.id == document_id, Document.dataset_id == dataset_id).first()
            )
            return document
        else:
            return None

    @staticmethod
    def get_document_by_id(document_id: str) -> Optional[Document]:
        document = db.session.query(Document).where(Document.id == document_id).first()

        return document

    @staticmethod
    def get_document_by_ids(document_ids: list[str]) -> list[Document]:
        documents = (
            db.session.query(Document)
            .where(
                Document.id.in_(document_ids),
                Document.enabled == True,
                Document.indexing_status == "completed",
                Document.archived == False,
            )
            .all()
        )
        return documents

    @staticmethod
    def get_document_by_dataset_id(dataset_id: str) -> list[Document]:
        documents = (
            db.session.query(Document)
            .where(
                Document.dataset_id == dataset_id,
                Document.enabled == True,
            )
            .all()
        )
        return documents

    @staticmethod
    def get_working_documents_by_dataset_id(dataset_id: str) -> list[Document]:
        documents = (
            db.session.query(Document)
            .where(
                Document.dataset_id == dataset_id,
                Document.enabled == True,
                Document.indexing_status == "completed",
                Document.archived == False,
            )
            .all()
        )
        return documents

    @staticmethod
    def get_error_documents_by_dataset_id(dataset_id: str) -> list[Document]:
        documents = (
            db.session.query(Document)
            .where(Document.dataset_id == dataset_id, Document.indexing_status.in_(["error", "paused"]))
            .all()
        )
        return documents

    @staticmethod
    def get_batch_documents(dataset_id: str, batch: str) -> list[Document]:
        assert isinstance(current_user, Account)
        documents = (
            db.session.query(Document)
            .where(
                Document.batch == batch,
                Document.dataset_id == dataset_id,
                Document.tenant_id == current_user.current_tenant_id,
            )
            .all()
        )
        return documents

    @staticmethod
    def get_document_file_detail(file_id: str):
        file_detail = db.session.query(UploadFile).where(UploadFile.id == file_id).one_or_none()
        return file_detail

    @staticmethod
    def check_archived(document):
        if document.archived:
            return True
        else:
            return False

    @staticmethod
    def delete_document(document):
        # trigger document_was_deleted signal
        file_id = None
        if document.data_source_type == "upload_file":
            if document.data_source_info:
                data_source_info = document.data_source_info_dict
                if data_source_info and "upload_file_id" in data_source_info:
                    file_id = data_source_info["upload_file_id"]
        document_was_deleted.send(
            document.id, dataset_id=document.dataset_id, doc_form=document.doc_form, file_id=file_id
        )

        db.session.delete(document)
        db.session.commit()

    @staticmethod
    def delete_documents(dataset: Dataset, document_ids: list[str]):
        # Check if document_ids is not empty to avoid WHERE false condition
        if not document_ids or len(document_ids) == 0:
            return
        documents = db.session.query(Document).where(Document.id.in_(document_ids)).all()
        file_ids = [
            document.data_source_info_dict["upload_file_id"]
            for document in documents
            if document.data_source_type == "upload_file" and document.data_source_info_dict
        ]

        if dataset.doc_form is not None:
            batch_clean_document_task.delay(document_ids, dataset.id, dataset.doc_form, file_ids)

        for document in documents:
            db.session.delete(document)
        db.session.commit()

    @staticmethod
    def rename_document(dataset_id: str, document_id: str, name: str) -> Document:
        assert isinstance(current_user, Account)
        dataset = DatasetService.get_dataset(dataset_id)
        if not dataset:
            raise ValueError("Dataset not found.")

        document = DocumentService.get_document(dataset_id, document_id)

        if not document:
            raise ValueError("Document not found.")

        if document.tenant_id != current_user.current_tenant_id:
            raise ValueError("No permission.")

        if dataset.built_in_field_enabled:
            if document.doc_metadata:
                doc_metadata = copy.deepcopy(document.doc_metadata)
                doc_metadata[BuiltInField.document_name.value] = name
                document.doc_metadata = doc_metadata

        document.name = name

        db.session.add(document)
        db.session.commit()

        return document

    @staticmethod
    def pause_document(document):
        if document.indexing_status not in {"waiting", "parsing", "cleaning", "splitting", "indexing"}:
            raise DocumentIndexingError()
        # update document to be paused
        assert current_user is not None
        document.is_paused = True
        document.paused_by = current_user.id
        document.paused_at = naive_utc_now()

        db.session.add(document)
        db.session.commit()
        # set document paused flag
        indexing_cache_key = f"document_{document.id}_is_paused"
        redis_client.setnx(indexing_cache_key, "True")
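
    # The Redis key `document_{id}_is_paused` set above is the cross-process pause
    # signal: indexing workers presumably check it and stop processing, and
    # recover_document() below deletes it before re-queuing the indexing task.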

    @staticmethod
    def recover_document(document):
        if not document.is_paused:
            raise DocumentIndexingError()
        # update document to be recover
        document.is_paused = False
        document.paused_by = None
        document.paused_at = None

        db.session.add(document)
        db.session.commit()
        # delete paused flag
        indexing_cache_key = f"document_{document.id}_is_paused"
        redis_client.delete(indexing_cache_key)
        # trigger async task
        recover_document_indexing_task.delay(document.dataset_id, document.id)

    @staticmethod
    def retry_document(dataset_id: str, documents: list[Document]):
        for document in documents:
            # add retry flag
            retry_indexing_cache_key = f"document_{document.id}_is_retried"
            cache_result = redis_client.get(retry_indexing_cache_key)
            if cache_result is not None:
                raise ValueError("Document is being retried, please try again later")
            # retry document indexing
            document.indexing_status = "waiting"
            db.session.add(document)
            db.session.commit()

            redis_client.setex(retry_indexing_cache_key, 600, 1)
        # trigger async task
        document_ids = [document.id for document in documents]
        retry_document_indexing_task.delay(dataset_id, document_ids)

    @staticmethod
    def sync_website_document(dataset_id: str, document: Document):
        # add sync flag
        sync_indexing_cache_key = f"document_{document.id}_is_sync"
        cache_result = redis_client.get(sync_indexing_cache_key)
        if cache_result is not None:
            raise ValueError("Document is being synced, please try again later")
        # sync document indexing
        document.indexing_status = "waiting"
        data_source_info = document.data_source_info_dict
        if data_source_info:
            data_source_info["mode"] = "scrape"
            document.data_source_info = json.dumps(data_source_info, ensure_ascii=False)
        db.session.add(document)
        db.session.commit()

        redis_client.setex(sync_indexing_cache_key, 600, 1)

        sync_website_document_indexing_task.delay(dataset_id, document.id)

    @staticmethod
    def get_documents_position(dataset_id):
        document = (
            db.session.query(Document).filter_by(dataset_id=dataset_id).order_by(Document.position.desc()).first()
        )
        if document:
            return document.position + 1
        else:
            return 1

    @staticmethod
    def save_document_with_dataset_id(
        dataset: Dataset,
        knowledge_config: KnowledgeConfig,
        account: Account | Any,
        dataset_process_rule: Optional[DatasetProcessRule] = None,
        created_from: str = "web",
    ) -> tuple[list[Document], str]:
        # check doc_form
        DatasetService.check_doc_form(dataset, knowledge_config.doc_form)
        # check document limit
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None

        features = FeatureService.get_features(current_user.current_tenant_id)

        if features.billing.enabled:
            if not knowledge_config.original_document_id:
                count = 0
                if knowledge_config.data_source:
                    if knowledge_config.data_source.info_list.data_source_type == "upload_file":
                        upload_file_list = knowledge_config.data_source.info_list.file_info_list.file_ids  # type: ignore
                        count = len(upload_file_list)
                    elif knowledge_config.data_source.info_list.data_source_type == "notion_import":
                        notion_info_list = knowledge_config.data_source.info_list.notion_info_list
                        for notion_info in notion_info_list:  # type: ignore
                            count = count + len(notion_info.pages)
                    elif knowledge_config.data_source.info_list.data_source_type == "website_crawl":
                        website_info = knowledge_config.data_source.info_list.website_info_list
                        count = len(website_info.urls)  # type: ignore
                batch_upload_limit = int(dify_config.BATCH_UPLOAD_LIMIT)
                if features.billing.subscription.plan == "sandbox" and count > 1:
                    raise ValueError("Your current plan does not support batch upload, please upgrade your plan.")
                if count > batch_upload_limit:
                    raise ValueError(f"You have reached the batch upload limit of {batch_upload_limit}.")

                DocumentService.check_documents_upload_quota(count, features)

        # if dataset is empty, update dataset data_source_type
        if not dataset.data_source_type:
            dataset.data_source_type = knowledge_config.data_source.info_list.data_source_type  # type: ignore

        if not dataset.indexing_technique:
            if knowledge_config.indexing_technique not in Dataset.INDEXING_TECHNIQUE_LIST:
                raise ValueError("Indexing technique is invalid")

            dataset.indexing_technique = knowledge_config.indexing_technique
            if knowledge_config.indexing_technique == "high_quality":
                model_manager = ModelManager()
                if knowledge_config.embedding_model and knowledge_config.embedding_model_provider:
                    dataset_embedding_model = knowledge_config.embedding_model
                    dataset_embedding_model_provider = knowledge_config.embedding_model_provider
                else:
                    embedding_model = model_manager.get_default_model_instance(
                        tenant_id=current_user.current_tenant_id, model_type=ModelType.TEXT_EMBEDDING
                    )
                    dataset_embedding_model = embedding_model.model
                    dataset_embedding_model_provider = embedding_model.provider
                dataset.embedding_model = dataset_embedding_model
                dataset.embedding_model_provider = dataset_embedding_model_provider
                dataset_collection_binding = DatasetCollectionBindingService.get_dataset_collection_binding(
                    dataset_embedding_model_provider, dataset_embedding_model
                )
                dataset.collection_binding_id = dataset_collection_binding.id
                if not dataset.retrieval_model:
                    default_retrieval_model = {
                        "search_method": RetrievalMethod.SEMANTIC_SEARCH.value,
                        "reranking_enable": False,
                        "reranking_model": {"reranking_provider_name": "", "reranking_model_name": ""},
                        "top_k": 4,
                        "score_threshold_enabled": False,
                    }

                    dataset.retrieval_model = (
                        knowledge_config.retrieval_model.model_dump()
                        if knowledge_config.retrieval_model
                        else default_retrieval_model
                    )  # type: ignore

        documents = []
        if knowledge_config.original_document_id:
            document = DocumentService.update_document_with_dataset_id(dataset, knowledge_config, account)
            documents.append(document)
            batch = document.batch
        else:
            batch = time.strftime("%Y%m%d%H%M%S") + str(100000 + secrets.randbelow(exclusive_upper_bound=900000))
            # save process rule
            if not dataset_process_rule:
                process_rule = knowledge_config.process_rule
                if process_rule:
                    if process_rule.mode in ("custom", "hierarchical"):
                        if process_rule.rules:
                            dataset_process_rule = DatasetProcessRule(
                                dataset_id=dataset.id,
                                mode=process_rule.mode,
                                rules=process_rule.rules.model_dump_json() if process_rule.rules else None,
                                created_by=account.id,
                            )
                        else:
                            dataset_process_rule = dataset.latest_process_rule
                            if not dataset_process_rule:
                                raise ValueError("No process rule found.")
                    elif process_rule.mode == "automatic":
                        dataset_process_rule = DatasetProcessRule(
                            dataset_id=dataset.id,
                            mode=process_rule.mode,
                            rules=json.dumps(DatasetProcessRule.AUTOMATIC_RULES),
                            created_by=account.id,
                        )
                    else:
                        logger.warning(
                            "Invalid process rule mode: %s, can not find dataset process rule",
                            process_rule.mode,
                        )
                        return [], ""
                db.session.add(dataset_process_rule)
                db.session.commit()
            lock_name = f"add_document_lock_dataset_id_{dataset.id}"
            with redis_client.lock(lock_name, timeout=600):
                position = DocumentService.get_documents_position(dataset.id)
                document_ids = []
                duplicate_document_ids = []
                if knowledge_config.data_source.info_list.data_source_type == "upload_file":  # type: ignore
                    upload_file_list = knowledge_config.data_source.info_list.file_info_list.file_ids  # type: ignore
                    for file_id in upload_file_list:
                        file = (
                            db.session.query(UploadFile)
                            .where(UploadFile.tenant_id == dataset.tenant_id, UploadFile.id == file_id)
                            .first()
                        )

                        # raise error if file not found
                        if not file:
                            raise FileNotExistsError()

                        file_name = file.name
                        data_source_info = {
                            "upload_file_id": file_id,
                        }
                        # check duplicate
                        if knowledge_config.duplicate:
                            document = (
                                db.session.query(Document)
                                .filter_by(
                                    dataset_id=dataset.id,
                                    tenant_id=current_user.current_tenant_id,
                                    data_source_type="upload_file",
                                    enabled=True,
                                    name=file_name,
                                )
                                .first()
                            )
                            if document:
                                document.dataset_process_rule_id = dataset_process_rule.id  # type: ignore
                                document.updated_at = naive_utc_now()
                                document.created_from = created_from
                                document.doc_form = knowledge_config.doc_form
                                document.doc_language = knowledge_config.doc_language
                                document.data_source_info = json.dumps(data_source_info)
                                document.batch = batch
                                document.indexing_status = "waiting"
                                db.session.add(document)
                                documents.append(document)
                                duplicate_document_ids.append(document.id)
                                continue
                        document = DocumentService.build_document(
                            dataset,
                            dataset_process_rule.id,  # type: ignore
                            knowledge_config.data_source.info_list.data_source_type,  # type: ignore
                            knowledge_config.doc_form,
                            knowledge_config.doc_language,
                            data_source_info,
                            created_from,
                            position,
                            account,
                            file_name,
                            batch,
                        )
                        db.session.add(document)
                        db.session.flush()
                        document_ids.append(document.id)
                        documents.append(document)
                        position += 1
                elif knowledge_config.data_source.info_list.data_source_type == "notion_import":  # type: ignore
                    notion_info_list = knowledge_config.data_source.info_list.notion_info_list  # type: ignore
                    if not notion_info_list:
                        raise ValueError("No notion info list found.")
                    exist_page_ids = []
                    exist_document = {}
                    documents = (
                        db.session.query(Document)
                        .filter_by(
                            dataset_id=dataset.id,
                            tenant_id=current_user.current_tenant_id,
                            data_source_type="notion_import",
                            enabled=True,
                        )
                        .all()
                    )
                    if documents:
                        for document in documents:
                            data_source_info = json.loads(document.data_source_info)
                            exist_page_ids.append(data_source_info["notion_page_id"])
                            exist_document[data_source_info["notion_page_id"]] = document.id
                    for notion_info in notion_info_list:
                        workspace_id = notion_info.workspace_id
                        data_source_binding = (
                            db.session.query(DataSourceOauthBinding)
                            .where(
                                db.and_(
                                    DataSourceOauthBinding.tenant_id == current_user.current_tenant_id,
                                    DataSourceOauthBinding.provider == "notion",
                                    DataSourceOauthBinding.disabled == False,
                                    DataSourceOauthBinding.source_info["workspace_id"] == f'"{workspace_id}"',
                                )
                            )
                            .first()
                        )
                        if not data_source_binding:
                            raise ValueError("Data source binding not found.")
                        for page in notion_info.pages:
                            if page.page_id not in exist_page_ids:
                                data_source_info = {
                                    "notion_workspace_id": workspace_id,
                                    "notion_page_id": page.page_id,
                                    "notion_page_icon": page.page_icon.model_dump() if page.page_icon else None,
                                    "type": page.type,
                                }
                                # Truncate page name to 255 characters to prevent DB field length errors
                                truncated_page_name = page.page_name[:255] if page.page_name else "nopagename"
                                document = DocumentService.build_document(
                                    dataset,
                                    dataset_process_rule.id,  # type: ignore
                                    knowledge_config.data_source.info_list.data_source_type,  # type: ignore
                                    knowledge_config.doc_form,
                                    knowledge_config.doc_language,
                                    data_source_info,
                                    created_from,
                                    position,
                                    account,
                                    truncated_page_name,
                                    batch,
                                )
                                db.session.add(document)
                                db.session.flush()
                                document_ids.append(document.id)
                                documents.append(document)
                                position += 1
                            else:
                                exist_document.pop(page.page_id)
                    # delete not selected documents
                    if len(exist_document) > 0:
                        clean_notion_document_task.delay(list(exist_document.values()), dataset.id)
                elif knowledge_config.data_source.info_list.data_source_type == "website_crawl":  # type: ignore
                    website_info = knowledge_config.data_source.info_list.website_info_list  # type: ignore
                    if not website_info:
                        raise ValueError("No website info list found.")
                    urls = website_info.urls
                    for url in urls:
                        data_source_info = {
                            "url": url,
                            "provider": website_info.provider,
                            "job_id": website_info.job_id,
                            "only_main_content": website_info.only_main_content,
                            "mode": "crawl",
                        }
                        if len(url) > 255:
                            document_name = url[:200] + "..."
                        else:
                            document_name = url
                        document = DocumentService.build_document(
                            dataset,
                            dataset_process_rule.id,  # type: ignore
                            knowledge_config.data_source.info_list.data_source_type,  # type: ignore
                            knowledge_config.doc_form,
                            knowledge_config.doc_language,
                            data_source_info,
                            created_from,
                            position,
                            account,
                            document_name,
                            batch,
                        )
                        db.session.add(document)
                        db.session.flush()
                        document_ids.append(document.id)
                        documents.append(document)
                        position += 1
                db.session.commit()

            # trigger async task
            if document_ids:
                document_indexing_task.delay(dataset.id, document_ids)
            if duplicate_document_ids:
                duplicate_document_indexing_task.delay(dataset.id, duplicate_document_ids)

        return documents, batch
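
    # Illustrative call sketch for the upload-file path (an assumption, not from the
    # original source; `dataset`, `config`, and `account` are hypothetical objects):
    #
    #     documents, batch = DocumentService.save_document_with_dataset_id(
    #         dataset, config, account, created_from="api"
    #     )
    #
    # All documents saved in one call share the timestamp-based `batch` id. New
    # documents are queued on document_indexing_task, while re-uploaded duplicates
    # go through duplicate_document_indexing_task instead.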

    @staticmethod
    def check_documents_upload_quota(count: int, features: FeatureModel):
        can_upload_size = features.documents_upload_quota.limit - features.documents_upload_quota.size
        if count > can_upload_size:
            raise ValueError(
                f"You have reached the limit of your subscription. Only {can_upload_size} documents can be uploaded."
            )

    @staticmethod
    def build_document(
        dataset: Dataset,
        process_rule_id: str,
        data_source_type: str,
        document_form: str,
        document_language: str,
        data_source_info: dict,
        created_from: str,
        position: int,
        account: Account,
        name: str,
        batch: str,
    ):
        document = Document(
            tenant_id=dataset.tenant_id,
            dataset_id=dataset.id,
            position=position,
            data_source_type=data_source_type,
            data_source_info=json.dumps(data_source_info),
            dataset_process_rule_id=process_rule_id,
            batch=batch,
            name=name,
            created_from=created_from,
            created_by=account.id,
            doc_form=document_form,
            doc_language=document_language,
        )
        doc_metadata = {}
        if dataset.built_in_field_enabled:
            doc_metadata = {
                BuiltInField.document_name: name,
                BuiltInField.uploader: account.name,
                BuiltInField.upload_date: datetime.datetime.now(datetime.UTC).strftime("%Y-%m-%d %H:%M:%S"),
                BuiltInField.last_update_date: datetime.datetime.now(datetime.UTC).strftime("%Y-%m-%d %H:%M:%S"),
                BuiltInField.source: data_source_type,
            }
        if doc_metadata:
            document.doc_metadata = doc_metadata
        return document
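
    # build_document only constructs the ORM object; callers add/flush it themselves
    # (see save_document_with_dataset_id above), which keeps the position counter and
    # batch id consistent across a single locked save operation.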

    @staticmethod
    def get_tenant_documents_count():
        assert isinstance(current_user, Account)
        documents_count = (
            db.session.query(Document)
            .where(
                Document.completed_at.isnot(None),
                Document.enabled == True,
                Document.archived == False,
                Document.tenant_id == current_user.current_tenant_id,
            )
            .count()
        )
        return documents_count

    @staticmethod
    def update_document_with_dataset_id(
        dataset: Dataset,
        document_data: KnowledgeConfig,
        account: Account,
        dataset_process_rule: Optional[DatasetProcessRule] = None,
        created_from: str = "web",
    ):
        assert isinstance(current_user, Account)
        DatasetService.check_dataset_model_setting(dataset)
        document = DocumentService.get_document(dataset.id, document_data.original_document_id)
        if document is None:
            raise NotFound("Document not found")
        if document.display_status != "available":
            raise ValueError("Document is not available")
        # save process rule
        if document_data.process_rule:
            process_rule = document_data.process_rule
            if process_rule.mode in {"custom", "hierarchical"}:
                dataset_process_rule = DatasetProcessRule(
                    dataset_id=dataset.id,
                    mode=process_rule.mode,
                    rules=process_rule.rules.model_dump_json() if process_rule.rules else None,
                    created_by=account.id,
                )
            elif process_rule.mode == "automatic":
                dataset_process_rule = DatasetProcessRule(
                    dataset_id=dataset.id,
                    mode=process_rule.mode,
                    rules=json.dumps(DatasetProcessRule.AUTOMATIC_RULES),
                    created_by=account.id,
                )
            if dataset_process_rule is not None:
                db.session.add(dataset_process_rule)
                db.session.commit()
                document.dataset_process_rule_id = dataset_process_rule.id
  1365. # update document data source
  1366. if document_data.data_source:
  1367. file_name = ""
  1368. data_source_info = {}
  1369. if document_data.data_source.info_list.data_source_type == "upload_file":
  1370. if not document_data.data_source.info_list.file_info_list:
  1371. raise ValueError("No file info list found.")
  1372. upload_file_list = document_data.data_source.info_list.file_info_list.file_ids
  1373. for file_id in upload_file_list:
  1374. file = (
  1375. db.session.query(UploadFile)
  1376. .where(UploadFile.tenant_id == dataset.tenant_id, UploadFile.id == file_id)
  1377. .first()
  1378. )
  1379. # raise error if file not found
  1380. if not file:
  1381. raise FileNotExistsError()
  1382. file_name = file.name
  1383. data_source_info = {
  1384. "upload_file_id": file_id,
  1385. }
  1386. elif document_data.data_source.info_list.data_source_type == "notion_import":
  1387. if not document_data.data_source.info_list.notion_info_list:
  1388. raise ValueError("No notion info list found.")
  1389. notion_info_list = document_data.data_source.info_list.notion_info_list
  1390. for notion_info in notion_info_list:
  1391. workspace_id = notion_info.workspace_id
  1392. data_source_binding = (
  1393. db.session.query(DataSourceOauthBinding)
  1394. .where(
  1395. sa.and_(
  1396. DataSourceOauthBinding.tenant_id == current_user.current_tenant_id,
  1397. DataSourceOauthBinding.provider == "notion",
  1398. DataSourceOauthBinding.disabled == False,
  1399. DataSourceOauthBinding.source_info["workspace_id"] == f'"{workspace_id}"',
  1400. )
  1401. )
  1402. .first()
  1403. )
  1404. if not data_source_binding:
  1405. raise ValueError("Data source binding not found.")
  1406. for page in notion_info.pages:
  1407. data_source_info = {
  1408. "notion_workspace_id": workspace_id,
  1409. "notion_page_id": page.page_id,
  1410. "notion_page_icon": page.page_icon.model_dump() if page.page_icon else None, # type: ignore
  1411. "type": page.type,
  1412. }
  1413. elif document_data.data_source.info_list.data_source_type == "website_crawl":
  1414. website_info = document_data.data_source.info_list.website_info_list
  1415. if website_info:
  1416. urls = website_info.urls
  1417. for url in urls:
  1418. data_source_info = {
  1419. "url": url,
  1420. "provider": website_info.provider,
  1421. "job_id": website_info.job_id,
  1422. "only_main_content": website_info.only_main_content, # type: ignore
  1423. "mode": "crawl",
  1424. }
  1425. document.data_source_type = document_data.data_source.info_list.data_source_type
  1426. document.data_source_info = json.dumps(data_source_info)
  1427. document.name = file_name
  1428. # update document name
  1429. if document_data.name:
  1430. document.name = document_data.name
  1431. # update document to be waiting
  1432. document.indexing_status = "waiting"
  1433. document.completed_at = None
  1434. document.processing_started_at = None
  1435. document.parsing_completed_at = None
  1436. document.cleaning_completed_at = None
  1437. document.splitting_completed_at = None
  1438. document.updated_at = naive_utc_now()
  1439. document.created_from = created_from
  1440. document.doc_form = document_data.doc_form
  1441. db.session.add(document)
  1442. db.session.commit()
  1443. # update document segment
  1444. db.session.query(DocumentSegment).filter_by(document_id=document.id).update(
  1445. {DocumentSegment.status: "re_segment"}
  1446. ) # type: ignore
  1447. db.session.commit()
  1448. # trigger async task
  1449. document_indexing_update_task.delay(document.dataset_id, document.id)
  1450. return document
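
    # Illustrative usage sketch (hypothetical payload, assumes a Flask app
    # context): re-ingest an existing document with an automatic process rule.
    #
    #     config = KnowledgeConfig.model_validate({
    #         "original_document_id": document_id,
    #         "process_rule": {"mode": "automatic"},
    #     })
    #     document = DocumentService.update_document_with_dataset_id(dataset, config, account)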

    @staticmethod
    def save_document_without_dataset_id(tenant_id: str, knowledge_config: KnowledgeConfig, account: Account):
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None
        features = FeatureService.get_features(current_user.current_tenant_id)
        if features.billing.enabled:
            count = 0
            if knowledge_config.data_source.info_list.data_source_type == "upload_file":  # type: ignore
                upload_file_list = (
                    knowledge_config.data_source.info_list.file_info_list.file_ids  # type: ignore
                    if knowledge_config.data_source.info_list.file_info_list  # type: ignore
                    else []
                )
                count = len(upload_file_list)
            elif knowledge_config.data_source.info_list.data_source_type == "notion_import":  # type: ignore
                notion_info_list = knowledge_config.data_source.info_list.notion_info_list  # type: ignore
                if notion_info_list:
                    for notion_info in notion_info_list:
                        count = count + len(notion_info.pages)
            elif knowledge_config.data_source.info_list.data_source_type == "website_crawl":  # type: ignore
                website_info = knowledge_config.data_source.info_list.website_info_list  # type: ignore
                if website_info:
                    count = len(website_info.urls)
            if features.billing.subscription.plan == "sandbox" and count > 1:
                raise ValueError("Your current plan does not support batch upload, please upgrade your plan.")
            batch_upload_limit = int(dify_config.BATCH_UPLOAD_LIMIT)
            if count > batch_upload_limit:
                raise ValueError(f"You have reached the batch upload limit of {batch_upload_limit}.")
            DocumentService.check_documents_upload_quota(count, features)
        dataset_collection_binding_id = None
        retrieval_model = None
        if knowledge_config.indexing_technique == "high_quality":
            dataset_collection_binding = DatasetCollectionBindingService.get_dataset_collection_binding(
                knowledge_config.embedding_model_provider,  # type: ignore
                knowledge_config.embedding_model,  # type: ignore
            )
            dataset_collection_binding_id = dataset_collection_binding.id
            if knowledge_config.retrieval_model:
                retrieval_model = knowledge_config.retrieval_model
            else:
                retrieval_model = RetrievalModel(
                    search_method=RetrievalMethod.SEMANTIC_SEARCH.value,
                    reranking_enable=False,
                    reranking_model=RerankingModel(reranking_provider_name="", reranking_model_name=""),
                    top_k=4,
                    score_threshold_enabled=False,
                )
        # save dataset
        dataset = Dataset(
            tenant_id=tenant_id,
            name="",
            data_source_type=knowledge_config.data_source.info_list.data_source_type,  # type: ignore
            indexing_technique=knowledge_config.indexing_technique,
            created_by=account.id,
            embedding_model=knowledge_config.embedding_model,
            embedding_model_provider=knowledge_config.embedding_model_provider,
            collection_binding_id=dataset_collection_binding_id,
            retrieval_model=retrieval_model.model_dump() if retrieval_model else None,
        )
        db.session.add(dataset)  # type: ignore
        db.session.flush()
        documents, batch = DocumentService.save_document_with_dataset_id(dataset, knowledge_config, account)
        cut_length = 18
        cut_name = documents[0].name[:cut_length]
        dataset.name = cut_name + "..."
        dataset.description = "useful for when you want to answer queries about the " + documents[0].name
        db.session.commit()
        return dataset, documents, batch
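
    # Illustrative usage sketch (hypothetical names): create a dataset and its
    # first documents in one call; the dataset name is derived from the first
    # document's name, truncated to 18 characters.
    #
    #     dataset, documents, batch = DocumentService.save_document_without_dataset_id(
    #         tenant_id=current_user.current_tenant_id,
    #         knowledge_config=knowledge_config,
    #         account=current_user,
    #     )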

    @classmethod
    def document_create_args_validate(cls, knowledge_config: KnowledgeConfig):
        if not knowledge_config.data_source and not knowledge_config.process_rule:
            raise ValueError("Data source or Process rule is required")
        else:
            if knowledge_config.data_source:
                DocumentService.data_source_args_validate(knowledge_config)
            if knowledge_config.process_rule:
                DocumentService.process_rule_args_validate(knowledge_config)

    @classmethod
    def data_source_args_validate(cls, knowledge_config: KnowledgeConfig):
        if not knowledge_config.data_source:
            raise ValueError("Data source is required")
        # validate the info list before dereferencing its attributes
        if not knowledge_config.data_source.info_list:
            raise ValueError("Data source info is required")
        if knowledge_config.data_source.info_list.data_source_type not in Document.DATA_SOURCES:
            raise ValueError("Data source type is invalid")
        if knowledge_config.data_source.info_list.data_source_type == "upload_file":
            if not knowledge_config.data_source.info_list.file_info_list:
                raise ValueError("File source info is required")
        if knowledge_config.data_source.info_list.data_source_type == "notion_import":
            if not knowledge_config.data_source.info_list.notion_info_list:
                raise ValueError("Notion source info is required")
        if knowledge_config.data_source.info_list.data_source_type == "website_crawl":
            if not knowledge_config.data_source.info_list.website_info_list:
                raise ValueError("Website source info is required")

    @classmethod
    def process_rule_args_validate(cls, knowledge_config: KnowledgeConfig):
        if not knowledge_config.process_rule:
            raise ValueError("Process rule is required")
        if not knowledge_config.process_rule.mode:
            raise ValueError("Process rule mode is required")
        if knowledge_config.process_rule.mode not in DatasetProcessRule.MODES:
            raise ValueError("Process rule mode is invalid")
        if knowledge_config.process_rule.mode == "automatic":
            knowledge_config.process_rule.rules = None
        else:
            if not knowledge_config.process_rule.rules:
                raise ValueError("Process rule rules is required")
            if knowledge_config.process_rule.rules.pre_processing_rules is None:
                raise ValueError("Process rule pre_processing_rules is required")
            unique_pre_processing_rule_dicts = {}
            for pre_processing_rule in knowledge_config.process_rule.rules.pre_processing_rules:
                if not pre_processing_rule.id:
                    raise ValueError("Process rule pre_processing_rules id is required")
                if not isinstance(pre_processing_rule.enabled, bool):
                    raise ValueError("Process rule pre_processing_rules enabled is invalid")
                unique_pre_processing_rule_dicts[pre_processing_rule.id] = pre_processing_rule
            knowledge_config.process_rule.rules.pre_processing_rules = list(unique_pre_processing_rule_dicts.values())
            if not knowledge_config.process_rule.rules.segmentation:
                raise ValueError("Process rule segmentation is required")
            if not knowledge_config.process_rule.rules.segmentation.separator:
                raise ValueError("Process rule segmentation separator is required")
            if not isinstance(knowledge_config.process_rule.rules.segmentation.separator, str):
                raise ValueError("Process rule segmentation separator is invalid")
            if not (
                knowledge_config.process_rule.mode == "hierarchical"
                and knowledge_config.process_rule.rules.parent_mode == "full-doc"
            ):
                if not knowledge_config.process_rule.rules.segmentation.max_tokens:
                    raise ValueError("Process rule segmentation max_tokens is required")
                if not isinstance(knowledge_config.process_rule.rules.segmentation.max_tokens, int):
                    raise ValueError("Process rule segmentation max_tokens is invalid")
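
    # A custom-mode process rule that passes the validation above might look like
    # this (illustrative values; rule ids must be valid pre-processing rule ids):
    #
    #     {
    #         "mode": "custom",
    #         "rules": {
    #             "pre_processing_rules": [
    #                 {"id": "remove_extra_spaces", "enabled": True},
    #             ],
    #             "segmentation": {"separator": "\n", "max_tokens": 500},
    #         },
    #     }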

    @classmethod
    def estimate_args_validate(cls, args: dict):
        if "info_list" not in args or not args["info_list"]:
            raise ValueError("Data source info is required")
        if not isinstance(args["info_list"], dict):
            raise ValueError("Data info is invalid")
        if "process_rule" not in args or not args["process_rule"]:
            raise ValueError("Process rule is required")
        if not isinstance(args["process_rule"], dict):
            raise ValueError("Process rule is invalid")
        if "mode" not in args["process_rule"] or not args["process_rule"]["mode"]:
            raise ValueError("Process rule mode is required")
        if args["process_rule"]["mode"] not in DatasetProcessRule.MODES:
            raise ValueError("Process rule mode is invalid")
        if args["process_rule"]["mode"] == "automatic":
            args["process_rule"]["rules"] = {}
        else:
            if "rules" not in args["process_rule"] or not args["process_rule"]["rules"]:
                raise ValueError("Process rule rules is required")
            if not isinstance(args["process_rule"]["rules"], dict):
                raise ValueError("Process rule rules is invalid")
            if (
                "pre_processing_rules" not in args["process_rule"]["rules"]
                or args["process_rule"]["rules"]["pre_processing_rules"] is None
            ):
                raise ValueError("Process rule pre_processing_rules is required")
            if not isinstance(args["process_rule"]["rules"]["pre_processing_rules"], list):
                raise ValueError("Process rule pre_processing_rules is invalid")
            unique_pre_processing_rule_dicts = {}
            for pre_processing_rule in args["process_rule"]["rules"]["pre_processing_rules"]:
                if "id" not in pre_processing_rule or not pre_processing_rule["id"]:
                    raise ValueError("Process rule pre_processing_rules id is required")
                if pre_processing_rule["id"] not in DatasetProcessRule.PRE_PROCESSING_RULES:
                    raise ValueError("Process rule pre_processing_rules id is invalid")
                if "enabled" not in pre_processing_rule or pre_processing_rule["enabled"] is None:
                    raise ValueError("Process rule pre_processing_rules enabled is required")
                if not isinstance(pre_processing_rule["enabled"], bool):
                    raise ValueError("Process rule pre_processing_rules enabled is invalid")
                unique_pre_processing_rule_dicts[pre_processing_rule["id"]] = pre_processing_rule
            args["process_rule"]["rules"]["pre_processing_rules"] = list(unique_pre_processing_rule_dicts.values())
            if (
                "segmentation" not in args["process_rule"]["rules"]
                or args["process_rule"]["rules"]["segmentation"] is None
            ):
                raise ValueError("Process rule segmentation is required")
            if not isinstance(args["process_rule"]["rules"]["segmentation"], dict):
                raise ValueError("Process rule segmentation is invalid")
            if (
                "separator" not in args["process_rule"]["rules"]["segmentation"]
                or not args["process_rule"]["rules"]["segmentation"]["separator"]
            ):
                raise ValueError("Process rule segmentation separator is required")
            if not isinstance(args["process_rule"]["rules"]["segmentation"]["separator"], str):
                raise ValueError("Process rule segmentation separator is invalid")
            if (
                "max_tokens" not in args["process_rule"]["rules"]["segmentation"]
                or not args["process_rule"]["rules"]["segmentation"]["max_tokens"]
            ):
                raise ValueError("Process rule segmentation max_tokens is required")
            if not isinstance(args["process_rule"]["rules"]["segmentation"]["max_tokens"], int):
                raise ValueError("Process rule segmentation max_tokens is invalid")
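
    # An `args` payload that passes estimate_args_validate might look like this
    # (illustrative values only; "info_list" is validated here only as a dict):
    #
    #     {
    #         "info_list": {"data_source_type": "upload_file", "file_info_list": {...}},
    #         "process_rule": {
    #             "mode": "custom",
    #             "rules": {
    #                 "pre_processing_rules": [{"id": "remove_extra_spaces", "enabled": True}],
    #                 "segmentation": {"separator": "\n", "max_tokens": 500},
    #             },
    #         },
    #     }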

    @staticmethod
    def batch_update_document_status(
        dataset: Dataset, document_ids: list[str], action: Literal["enable", "disable", "archive", "un_archive"], user
    ):
        """
        Batch update document status.

        Args:
            dataset (Dataset): The dataset object
            document_ids (list[str]): List of document IDs to update
            action (Literal["enable", "disable", "archive", "un_archive"]): Action to perform
            user: Current user performing the action

        Raises:
            DocumentIndexingError: If a document is being indexed or is not in the correct state
            ValueError: If the action is invalid
        """
        if not document_ids:
            return
        # Early validation of action parameter
        valid_actions = ["enable", "disable", "archive", "un_archive"]
        if action not in valid_actions:
            raise ValueError(f"Invalid action: {action}. Must be one of {valid_actions}")
        documents_to_update = []
        # First pass: validate all documents and prepare updates
        for document_id in document_ids:
            document = DocumentService.get_document(dataset.id, document_id)
            if not document:
                continue
            # Check if document is being indexed
            indexing_cache_key = f"document_{document.id}_indexing"
            cache_result = redis_client.get(indexing_cache_key)
            if cache_result is not None:
                raise DocumentIndexingError(f"Document:{document.name} is being indexed, please try again later")
            # Prepare update based on action
            update_info = DocumentService._prepare_document_status_update(document, action, user)
            if update_info:
                documents_to_update.append(update_info)
        # Second pass: apply all updates in a single transaction
        if documents_to_update:
            try:
                for update_info in documents_to_update:
                    document = update_info["document"]
                    updates = update_info["updates"]
                    # Apply updates to the document
                    for field, value in updates.items():
                        setattr(document, field, value)
                    db.session.add(document)
                # Batch commit all changes
                db.session.commit()
            except Exception as e:
                # Rollback on any error
                db.session.rollback()
                raise e
            # Execute async tasks and set Redis cache after successful commit
            # propagation_error captures the last error raised while dispatching async tasks
            propagation_error = None
            for update_info in documents_to_update:
                try:
                    # Execute async tasks after successful commit
                    if update_info["async_task"]:
                        task_info = update_info["async_task"]
                        task_func = task_info["function"]
                        task_args = task_info["args"]
                        task_func.delay(*task_args)
                except Exception as e:
                    # Log the error but do not roll back the transaction
                    logger.exception("Error executing async task for document %s", update_info["document"].id)
                    # don't raise the error immediately, but capture it for later
                    propagation_error = e
                try:
                    # Set Redis cache if needed after successful commit
                    if update_info["set_cache"]:
                        document = update_info["document"]
                        indexing_cache_key = f"document_{document.id}_indexing"
                        redis_client.setex(indexing_cache_key, 600, 1)
                except Exception:
                    # Log the error but do not roll back the transaction
                    logger.exception("Error setting cache for document %s", update_info["document"].id)
            # Raise any propagation error after all updates
            if propagation_error:
                raise propagation_error
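
    # Illustrative usage sketch (hypothetical names, assumes a Flask app context):
    #
    #     DocumentService.batch_update_document_status(dataset, doc_ids, "archive", current_user)
    #
    # The two-pass design above keeps validation and DB writes in a single
    # transaction, then dispatches Celery tasks and Redis cache writes only after
    # the commit has succeeded.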

    @staticmethod
    def _prepare_document_status_update(
        document: Document, action: Literal["enable", "disable", "archive", "un_archive"], user
    ):
        """Prepare document status update information.

        Args:
            document: Document object to update
            action: Action to perform
            user: Current user

        Returns:
            dict: Update information or None if no update needed
        """
        now = naive_utc_now()
        if action == "enable":
            return DocumentService._prepare_enable_update(document, now)
        elif action == "disable":
            return DocumentService._prepare_disable_update(document, user, now)
        elif action == "archive":
            return DocumentService._prepare_archive_update(document, user, now)
        elif action == "un_archive":
            return DocumentService._prepare_unarchive_update(document, now)
        return None

    @staticmethod
    def _prepare_enable_update(document, now):
        """Prepare updates for enabling a document."""
        if document.enabled:
            return None
        return {
            "document": document,
            "updates": {"enabled": True, "disabled_at": None, "disabled_by": None, "updated_at": now},
            "async_task": {"function": add_document_to_index_task, "args": [document.id]},
            "set_cache": True,
        }

    @staticmethod
    def _prepare_disable_update(document, user, now):
        """Prepare updates for disabling a document."""
        if not document.completed_at or document.indexing_status != "completed":
            raise DocumentIndexingError(f"Document: {document.name} is not completed.")
        if not document.enabled:
            return None
        return {
            "document": document,
            "updates": {"enabled": False, "disabled_at": now, "disabled_by": user.id, "updated_at": now},
            "async_task": {"function": remove_document_from_index_task, "args": [document.id]},
            "set_cache": True,
        }

    @staticmethod
    def _prepare_archive_update(document, user, now):
        """Prepare updates for archiving a document."""
        if document.archived:
            return None
        update_info = {
            "document": document,
            "updates": {"archived": True, "archived_at": now, "archived_by": user.id, "updated_at": now},
            "async_task": None,
            "set_cache": False,
        }
        # Only set async task and cache if document is currently enabled
        if document.enabled:
            update_info["async_task"] = {"function": remove_document_from_index_task, "args": [document.id]}
            update_info["set_cache"] = True
        return update_info

    @staticmethod
    def _prepare_unarchive_update(document, now):
        """Prepare updates for unarchiving a document."""
        if not document.archived:
            return None
        update_info = {
            "document": document,
            "updates": {"archived": False, "archived_at": None, "archived_by": None, "updated_at": now},
            "async_task": None,
            "set_cache": False,
        }
        # Only re-index if the document is currently enabled
        if document.enabled:
            update_info["async_task"] = {"function": add_document_to_index_task, "args": [document.id]}
            update_info["set_cache"] = True
        return update_info
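
    # For reference, each _prepare_* helper above returns a dict of this shape
    # (a sketch inferred from the code; the service uses a plain dict rather
    # than a declared type):
    #
    #     {
    #         "document": document,                                   # ORM object to mutate
    #         "updates": {"enabled": True, ...},                      # fields applied via setattr
    #         "async_task": {"function": task, "args": [document.id]} or None,
    #         "set_cache": True,                                      # write the indexing guard key
    #     }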


class SegmentService:
    @classmethod
    def segment_create_args_validate(cls, args: dict, document: Document):
        if document.doc_form == "qa_model":
            if "answer" not in args or not args["answer"]:
                raise ValueError("Answer is required")
            if not args["answer"].strip():
                raise ValueError("Answer is empty")
        if "content" not in args or not args["content"] or not args["content"].strip():
            raise ValueError("Content is empty")

    @classmethod
    def create_segment(cls, args: dict, document: Document, dataset: Dataset):
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None
        content = args["content"]
        doc_id = str(uuid.uuid4())
        segment_hash = helper.generate_text_hash(content)
        tokens = 0
        if dataset.indexing_technique == "high_quality":
            model_manager = ModelManager()
            embedding_model = model_manager.get_model_instance(
                tenant_id=current_user.current_tenant_id,
                provider=dataset.embedding_model_provider,
                model_type=ModelType.TEXT_EMBEDDING,
                model=dataset.embedding_model,
            )
            # calculate the number of embedding tokens
            tokens = embedding_model.get_text_embedding_num_tokens(texts=[content])[0]
        lock_name = f"add_segment_lock_document_id_{document.id}"
        with redis_client.lock(lock_name, timeout=600):
            max_position = (
                db.session.query(func.max(DocumentSegment.position))
                .where(DocumentSegment.document_id == document.id)
                .scalar()
            )
            segment_document = DocumentSegment(
                tenant_id=current_user.current_tenant_id,
                dataset_id=document.dataset_id,
                document_id=document.id,
                index_node_id=doc_id,
                index_node_hash=segment_hash,
                position=max_position + 1 if max_position else 1,
                content=content,
                word_count=len(content),
                tokens=tokens,
                status="completed",
                indexing_at=naive_utc_now(),
                completed_at=naive_utc_now(),
                created_by=current_user.id,
            )
            if document.doc_form == "qa_model":
                segment_document.word_count += len(args["answer"])
                segment_document.answer = args["answer"]
            db.session.add(segment_document)
            # update document word count
            assert document.word_count is not None
            document.word_count += segment_document.word_count
            db.session.add(document)
            db.session.commit()
            # save vector index; "keywords" is optional, so pass None when absent
            try:
                VectorService.create_segments_vector(
                    [args.get("keywords")], [segment_document], dataset, document.doc_form
                )
            except Exception as e:
                logger.exception("create segment index failed")
                segment_document.enabled = False
                segment_document.disabled_at = naive_utc_now()
                segment_document.status = "error"
                segment_document.error = str(e)
                db.session.commit()
            segment = db.session.query(DocumentSegment).where(DocumentSegment.id == segment_document.id).first()
            return segment
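
    # Illustrative usage sketch (hypothetical names): "keywords" is optional and
    # only consulted when building the vector index.
    #
    #     segment = SegmentService.create_segment(
    #         {"content": "Some chunk text", "keywords": ["chunk"]}, document, dataset
    #     )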

    @classmethod
    def multi_create_segment(cls, segments: list, document: Document, dataset: Dataset):
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None
        lock_name = f"multi_add_segment_lock_document_id_{document.id}"
        increment_word_count = 0
        with redis_client.lock(lock_name, timeout=600):
            embedding_model = None
            if dataset.indexing_technique == "high_quality":
                model_manager = ModelManager()
                embedding_model = model_manager.get_model_instance(
                    tenant_id=current_user.current_tenant_id,
                    provider=dataset.embedding_model_provider,
                    model_type=ModelType.TEXT_EMBEDDING,
                    model=dataset.embedding_model,
                )
            max_position = (
                db.session.query(func.max(DocumentSegment.position))
                .where(DocumentSegment.document_id == document.id)
                .scalar()
            )
            pre_segment_data_list = []
            segment_data_list = []
            keywords_list = []
            position = max_position + 1 if max_position else 1
            for segment_item in segments:
                content = segment_item["content"]
                doc_id = str(uuid.uuid4())
                segment_hash = helper.generate_text_hash(content)
                tokens = 0
                if dataset.indexing_technique == "high_quality" and embedding_model:
                    # calculate the number of embedding tokens
                    if document.doc_form == "qa_model":
                        tokens = embedding_model.get_text_embedding_num_tokens(
                            texts=[content + segment_item["answer"]]
                        )[0]
                    else:
                        tokens = embedding_model.get_text_embedding_num_tokens(texts=[content])[0]
                segment_document = DocumentSegment(
                    tenant_id=current_user.current_tenant_id,
                    dataset_id=document.dataset_id,
                    document_id=document.id,
                    index_node_id=doc_id,
                    index_node_hash=segment_hash,
                    position=position,
                    content=content,
                    word_count=len(content),
                    tokens=tokens,
                    keywords=segment_item.get("keywords", []),
                    status="completed",
                    indexing_at=naive_utc_now(),
                    completed_at=naive_utc_now(),
                    created_by=current_user.id,
                )
                if document.doc_form == "qa_model":
                    segment_document.answer = segment_item["answer"]
                    segment_document.word_count += len(segment_item["answer"])
                increment_word_count += segment_document.word_count
                db.session.add(segment_document)
                segment_data_list.append(segment_document)
                position += 1
                pre_segment_data_list.append(segment_document)
                if "keywords" in segment_item:
                    keywords_list.append(segment_item["keywords"])
                else:
                    keywords_list.append(None)
            # update document word count
            assert document.word_count is not None
            document.word_count += increment_word_count
            db.session.add(document)
            try:
                # save vector index
                VectorService.create_segments_vector(keywords_list, pre_segment_data_list, dataset, document.doc_form)
            except Exception as e:
                logger.exception("create segment index failed")
                for segment_document in segment_data_list:
                    segment_document.enabled = False
                    segment_document.disabled_at = naive_utc_now()
                    segment_document.status = "error"
                    segment_document.error = str(e)
            db.session.commit()
            return segment_data_list
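
    # Illustrative usage sketch: bulk-adding segments under one lock; positions
    # continue from the current maximum and the vector index is written once for
    # the whole batch (document and dataset are hypothetical ORM objects).
    #
    #     created = SegmentService.multi_create_segment(
    #         [{"content": "first chunk"}, {"content": "second chunk"}], document, dataset
    #     )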

    @classmethod
    def update_segment(cls, args: SegmentUpdateArgs, segment: DocumentSegment, document: Document, dataset: Dataset):
        assert isinstance(current_user, Account)
        assert current_user.current_tenant_id is not None
        indexing_cache_key = f"segment_{segment.id}_indexing"
        cache_result = redis_client.get(indexing_cache_key)
        if cache_result is not None:
            raise ValueError("Segment is indexing, please try again later")
        if args.enabled is not None:
            action = args.enabled
            if segment.enabled != action:
                if not action:
                    segment.enabled = action
                    segment.disabled_at = naive_utc_now()
                    segment.disabled_by = current_user.id
                    db.session.add(segment)
                    db.session.commit()
                    # Set cache to prevent indexing the same segment multiple times
                    redis_client.setex(indexing_cache_key, 600, 1)
                    disable_segment_from_index_task.delay(segment.id)
                    return segment
        if not segment.enabled:
            if args.enabled is not None:
                if not args.enabled:
                    raise ValueError("Can't update disabled segment")
            else:
                raise ValueError("Can't update disabled segment")
        try:
            word_count_change = segment.word_count
            content = args.content or segment.content
            if segment.content == content:
                segment.word_count = len(content)
                if document.doc_form == "qa_model":
                    segment.answer = args.answer
                    segment.word_count += len(args.answer) if args.answer else 0
                word_count_change = segment.word_count - word_count_change
                keyword_changed = False
                if args.keywords:
                    if Counter(segment.keywords) != Counter(args.keywords):
                        segment.keywords = args.keywords
                        keyword_changed = True
                segment.enabled = True
                segment.disabled_at = None
                segment.disabled_by = None
                db.session.add(segment)
                db.session.commit()
                # update document word count
                if word_count_change != 0:
                    assert document.word_count is not None
                    document.word_count = max(0, document.word_count + word_count_change)
                    db.session.add(document)
                # update segment index task
                if document.doc_form == IndexType.PARENT_CHILD_INDEX and args.regenerate_child_chunks:
                    # regenerate child chunks
                    # get embedding model instance
                    if dataset.indexing_technique == "high_quality":
                        # check embedding model setting
                        model_manager = ModelManager()
                        if dataset.embedding_model_provider:
                            embedding_model_instance = model_manager.get_model_instance(
                                tenant_id=dataset.tenant_id,
                                provider=dataset.embedding_model_provider,
                                model_type=ModelType.TEXT_EMBEDDING,
                                model=dataset.embedding_model,
                            )
                        else:
                            embedding_model_instance = model_manager.get_default_model_instance(
                                tenant_id=dataset.tenant_id,
                                model_type=ModelType.TEXT_EMBEDDING,
                            )
                    else:
                        raise ValueError("The knowledge base index technique is not high quality!")
                    # get the process rule
                    processing_rule = (
                        db.session.query(DatasetProcessRule)
                        .where(DatasetProcessRule.id == document.dataset_process_rule_id)
                        .first()
                    )
                    if not processing_rule:
                        raise ValueError("No processing rule found.")
                    VectorService.generate_child_chunks(
                        segment, document, dataset, embedding_model_instance, processing_rule, True
                    )
                elif document.doc_form in (IndexType.PARAGRAPH_INDEX, IndexType.QA_INDEX):
                    if args.enabled or keyword_changed:
                        # update segment vector index
                        VectorService.update_segment_vector(args.keywords, segment, dataset)
            else:
                segment_hash = helper.generate_text_hash(content)
                tokens = 0
                if dataset.indexing_technique == "high_quality":
                    model_manager = ModelManager()
                    embedding_model = model_manager.get_model_instance(
                        tenant_id=current_user.current_tenant_id,
                        provider=dataset.embedding_model_provider,
                        model_type=ModelType.TEXT_EMBEDDING,
                        model=dataset.embedding_model,
                    )
                    # calculate the number of embedding tokens
                    if document.doc_form == "qa_model":
                        segment.answer = args.answer
                        tokens = embedding_model.get_text_embedding_num_tokens(texts=[content + segment.answer])[0]  # type: ignore
                    else:
                        tokens = embedding_model.get_text_embedding_num_tokens(texts=[content])[0]
                segment.content = content
                segment.index_node_hash = segment_hash
                segment.word_count = len(content)
                segment.tokens = tokens
                segment.status = "completed"
                segment.indexing_at = naive_utc_now()
                segment.completed_at = naive_utc_now()
                segment.updated_by = current_user.id
                segment.updated_at = naive_utc_now()
                segment.enabled = True
                segment.disabled_at = None
                segment.disabled_by = None
                if document.doc_form == "qa_model":
                    segment.answer = args.answer
                    segment.word_count += len(args.answer) if args.answer else 0
                word_count_change = segment.word_count - word_count_change
                # update document word count
                if word_count_change != 0:
                    assert document.word_count is not None
                    document.word_count = max(0, document.word_count + word_count_change)
                    db.session.add(document)
                db.session.add(segment)
                db.session.commit()
                if document.doc_form == IndexType.PARENT_CHILD_INDEX and args.regenerate_child_chunks:
                    # get embedding model instance
                    if dataset.indexing_technique == "high_quality":
                        # check embedding model setting
                        model_manager = ModelManager()
                        if dataset.embedding_model_provider:
                            embedding_model_instance = model_manager.get_model_instance(
                                tenant_id=dataset.tenant_id,
                                provider=dataset.embedding_model_provider,
                                model_type=ModelType.TEXT_EMBEDDING,
                                model=dataset.embedding_model,
                            )
                        else:
                            embedding_model_instance = model_manager.get_default_model_instance(
                                tenant_id=dataset.tenant_id,
                                model_type=ModelType.TEXT_EMBEDDING,
                            )
                    else:
                        raise ValueError("The knowledge base index technique is not high quality!")
                    # get the process rule
                    processing_rule = (
                        db.session.query(DatasetProcessRule)
                        .where(DatasetProcessRule.id == document.dataset_process_rule_id)
                        .first()
                    )
                    if not processing_rule:
                        raise ValueError("No processing rule found.")
                    VectorService.generate_child_chunks(
                        segment, document, dataset, embedding_model_instance, processing_rule, True
                    )
                elif document.doc_form in (IndexType.PARAGRAPH_INDEX, IndexType.QA_INDEX):
                    # update segment vector index
                    VectorService.update_segment_vector(args.keywords, segment, dataset)
        except Exception as e:
            logger.exception("update segment index failed")
            segment.enabled = False
            segment.disabled_at = naive_utc_now()
            segment.status = "error"
            segment.error = str(e)
            db.session.commit()
        new_segment = db.session.query(DocumentSegment).where(DocumentSegment.id == segment.id).first()
        return new_segment
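
    # Illustrative usage sketch: changing a segment's content takes the
    # re-embedding branch above (new hash, token count, and vector); submitting
    # identical content only refreshes keywords/answer. Field names on
    # SegmentUpdateArgs are inferred from their usage in this method.
    #
    #     new_segment = SegmentService.update_segment(
    #         SegmentUpdateArgs(content="revised chunk text"), segment, document, dataset
    #     )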

    @classmethod
    def delete_segment(cls, segment: DocumentSegment, document: Document, dataset: Dataset):
        indexing_cache_key = f"segment_{segment.id}_delete_indexing"
        cache_result = redis_client.get(indexing_cache_key)
        if cache_result is not None:
            raise ValueError("Segment is deleting.")
        # an enabled segment's vector index must be deleted as well
        if segment.enabled:
            # send delete segment index task
            redis_client.setex(indexing_cache_key, 600, 1)
            delete_segment_from_index_task.delay([segment.index_node_id], dataset.id, document.id)
        db.session.delete(segment)
        # update document word count
        assert document.word_count is not None
        document.word_count -= segment.word_count
        db.session.add(document)
        db.session.commit()

    @classmethod
    def delete_segments(cls, segment_ids: list, document: Document, dataset: Dataset):
        assert isinstance(current_user, Account)
        segments = (
            db.session.query(DocumentSegment.index_node_id, DocumentSegment.word_count)
            .where(
                DocumentSegment.id.in_(segment_ids),
                DocumentSegment.dataset_id == dataset.id,
                DocumentSegment.document_id == document.id,
                DocumentSegment.tenant_id == current_user.current_tenant_id,
            )
            .all()
        )
        if not segments:
            return
        index_node_ids = [seg.index_node_id for seg in segments]
        total_words = sum(seg.word_count for seg in segments)
        document.word_count = (
            document.word_count - total_words if document.word_count and document.word_count > total_words else 0
        )
        db.session.add(document)
        delete_segment_from_index_task.delay(index_node_ids, dataset.id, document.id)
        db.session.query(DocumentSegment).where(DocumentSegment.id.in_(segment_ids)).delete()
        db.session.commit()

    @classmethod
    def update_segments_status(
        cls, segment_ids: list, action: Literal["enable", "disable"], dataset: Dataset, document: Document
    ):
        assert current_user is not None
        # Check if segment_ids is not empty to avoid WHERE false condition
        if not segment_ids:
            return
        if action == "enable":
            segments = (
                db.session.query(DocumentSegment)
                .where(
                    DocumentSegment.id.in_(segment_ids),
                    DocumentSegment.dataset_id == dataset.id,
                    DocumentSegment.document_id == document.id,
                    DocumentSegment.enabled == False,
                )
                .all()
            )
            if not segments:
                return
            real_deal_segment_ids = []
            for segment in segments:
                indexing_cache_key = f"segment_{segment.id}_indexing"
                cache_result = redis_client.get(indexing_cache_key)
                if cache_result is not None:
                    continue
                segment.enabled = True
                segment.disabled_at = None
                segment.disabled_by = None
                db.session.add(segment)
                real_deal_segment_ids.append(segment.id)
            db.session.commit()
            enable_segments_to_index_task.delay(real_deal_segment_ids, dataset.id, document.id)
        elif action == "disable":
            segments = (
                db.session.query(DocumentSegment)
                .where(
                    DocumentSegment.id.in_(segment_ids),
                    DocumentSegment.dataset_id == dataset.id,
                    DocumentSegment.document_id == document.id,
                    DocumentSegment.enabled == True,
                )
                .all()
            )
            if not segments:
                return
            real_deal_segment_ids = []
            for segment in segments:
                indexing_cache_key = f"segment_{segment.id}_indexing"
                cache_result = redis_client.get(indexing_cache_key)
                if cache_result is not None:
                    continue
                segment.enabled = False
                segment.disabled_at = naive_utc_now()
                segment.disabled_by = current_user.id
                db.session.add(segment)
                real_deal_segment_ids.append(segment.id)
            db.session.commit()
            disable_segments_from_index_task.delay(real_deal_segment_ids, dataset.id, document.id)

    @classmethod
    def create_child_chunk(
        cls, content: str, segment: DocumentSegment, document: Document, dataset: Dataset
    ) -> ChildChunk:
        assert isinstance(current_user, Account)
        lock_name = f"add_child_lock_{segment.id}"
        with redis_client.lock(lock_name, timeout=20):
            index_node_id = str(uuid.uuid4())
            index_node_hash = helper.generate_text_hash(content)
            max_position = (
                db.session.query(func.max(ChildChunk.position))
                .where(
                    ChildChunk.tenant_id == current_user.current_tenant_id,
                    ChildChunk.dataset_id == dataset.id,
                    ChildChunk.document_id == document.id,
                    ChildChunk.segment_id == segment.id,
                )
                .scalar()
            )
            child_chunk = ChildChunk(
                tenant_id=current_user.current_tenant_id,
                dataset_id=dataset.id,
                document_id=document.id,
                segment_id=segment.id,
                position=max_position + 1 if max_position else 1,
                index_node_id=index_node_id,
                index_node_hash=index_node_hash,
                content=content,
                word_count=len(content),
                type="customized",
                created_by=current_user.id,
            )
            db.session.add(child_chunk)
            # save vector index
            try:
                VectorService.create_child_chunk_vector(child_chunk, dataset)
            except Exception as e:
                logger.exception("create child chunk index failed")
                db.session.rollback()
                raise ChildChunkIndexingError(str(e))
            db.session.commit()
            return child_chunk

    @classmethod
    def update_child_chunks(
        cls,
        child_chunks_update_args: list[ChildChunkUpdateArgs],
        segment: DocumentSegment,
        document: Document,
        dataset: Dataset,
    ) -> list[ChildChunk]:
        assert isinstance(current_user, Account)
        child_chunks = (
            db.session.query(ChildChunk)
            .where(
                ChildChunk.dataset_id == dataset.id,
                ChildChunk.document_id == document.id,
                ChildChunk.segment_id == segment.id,
            )
            .all()
        )
        child_chunks_map = {chunk.id: chunk for chunk in child_chunks}
        new_child_chunks, update_child_chunks, delete_child_chunks, new_child_chunks_args = [], [], [], []
        for child_chunk_update_args in child_chunks_update_args:
            if child_chunk_update_args.id:
                child_chunk = child_chunks_map.pop(child_chunk_update_args.id, None)
                if child_chunk:
                    if child_chunk.content != child_chunk_update_args.content:
                        child_chunk.content = child_chunk_update_args.content
                        child_chunk.word_count = len(child_chunk.content)
                        child_chunk.updated_by = current_user.id
                        child_chunk.updated_at = naive_utc_now()
                        child_chunk.type = "customized"
                        update_child_chunks.append(child_chunk)
            else:
                new_child_chunks_args.append(child_chunk_update_args)
        if child_chunks_map:
            delete_child_chunks = list(child_chunks_map.values())
        try:
            if update_child_chunks:
                db.session.bulk_save_objects(update_child_chunks)
            if delete_child_chunks:
                for child_chunk in delete_child_chunks:
                    db.session.delete(child_chunk)
            if new_child_chunks_args:
                child_chunk_count = len(child_chunks)
                for position, args in enumerate(new_child_chunks_args, start=child_chunk_count + 1):
                    index_node_id = str(uuid.uuid4())
                    index_node_hash = helper.generate_text_hash(args.content)
                    child_chunk = ChildChunk(
                        tenant_id=current_user.current_tenant_id,
                        dataset_id=dataset.id,
                        document_id=document.id,
                        segment_id=segment.id,
                        position=position,
                        index_node_id=index_node_id,
                        index_node_hash=index_node_hash,
                        content=args.content,
                        word_count=len(args.content),
                        type="customized",
                        created_by=current_user.id,
                    )
                    db.session.add(child_chunk)
                    db.session.flush()
                    new_child_chunks.append(child_chunk)
            VectorService.update_child_chunk_vector(new_child_chunks, update_child_chunks, delete_child_chunks, dataset)
            db.session.commit()
        except Exception as e:
            logger.exception("update child chunk index failed")
            db.session.rollback()
            raise ChildChunkIndexingError(str(e))
        return sorted(new_child_chunks + update_child_chunks, key=lambda x: x.position)
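
    # Illustrative usage sketch: update_child_chunks upserts by id — an arg with a
    # known id updates that chunk, an arg without an id appends a new chunk, and
    # any existing chunk not referenced at all is deleted (names are hypothetical).
    #
    #     chunks = SegmentService.update_child_chunks(
    #         [
    #             ChildChunkUpdateArgs(id=existing_chunk_id, content="edited text"),
    #             ChildChunkUpdateArgs(id=None, content="brand-new chunk"),
    #         ],
    #         segment, document, dataset,
    #     )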

    @classmethod
    def update_child_chunk(
        cls,
        content: str,
        child_chunk: ChildChunk,
        segment: DocumentSegment,
        document: Document,
        dataset: Dataset,
    ) -> ChildChunk:
        assert current_user is not None
        try:
            child_chunk.content = content
            child_chunk.word_count = len(content)
            child_chunk.updated_by = current_user.id
            child_chunk.updated_at = naive_utc_now()
            child_chunk.type = "customized"
            db.session.add(child_chunk)
            VectorService.update_child_chunk_vector([], [child_chunk], [], dataset)
            db.session.commit()
        except Exception as e:
            logger.exception("update child chunk index failed")
            db.session.rollback()
            raise ChildChunkIndexingError(str(e))
        return child_chunk

    @classmethod
    def delete_child_chunk(cls, child_chunk: ChildChunk, dataset: Dataset):
        db.session.delete(child_chunk)
        try:
            VectorService.delete_child_chunk_vector(child_chunk, dataset)
        except Exception as e:
            logger.exception("delete child chunk index failed")
            db.session.rollback()
            raise ChildChunkDeleteIndexError(str(e))
        db.session.commit()

    @classmethod
    def get_child_chunks(
        cls, segment_id: str, document_id: str, dataset_id: str, page: int, limit: int, keyword: Optional[str] = None
    ):
        assert isinstance(current_user, Account)
        query = (
            select(ChildChunk)
            .filter_by(
                tenant_id=current_user.current_tenant_id,
                dataset_id=dataset_id,
                document_id=document_id,
                segment_id=segment_id,
            )
            .order_by(ChildChunk.position.asc())
        )
        if keyword:
            query = query.where(ChildChunk.content.ilike(f"%{keyword}%"))
        return db.paginate(select=query, page=page, per_page=limit, max_per_page=100, error_out=False)

    @classmethod
    def get_child_chunk_by_id(cls, child_chunk_id: str, tenant_id: str) -> Optional[ChildChunk]:
        """Get a child chunk by its ID."""
        result = (
            db.session.query(ChildChunk)
            .where(ChildChunk.id == child_chunk_id, ChildChunk.tenant_id == tenant_id)
            .first()
        )
        return result if isinstance(result, ChildChunk) else None

    @classmethod
    def get_segments(
        cls,
        document_id: str,
        tenant_id: str,
        status_list: list[str] | None = None,
        keyword: str | None = None,
        page: int = 1,
        limit: int = 20,
    ):
        """Get segments for a document with optional filtering."""
        query = select(DocumentSegment).where(
            DocumentSegment.document_id == document_id, DocumentSegment.tenant_id == tenant_id
        )
        # Check if status_list is not empty to avoid WHERE false condition
        if status_list:
            query = query.where(DocumentSegment.status.in_(status_list))
        if keyword:
            query = query.where(DocumentSegment.content.ilike(f"%{keyword}%"))
        query = query.order_by(DocumentSegment.position.asc())
        paginated_segments = db.paginate(select=query, page=page, per_page=limit, max_per_page=100, error_out=False)
        return paginated_segments.items, paginated_segments.total
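
    # Illustrative usage sketch: paginated listing of a document's segments with
    # optional status and keyword filters (names are hypothetical).
    #
    #     items, total = SegmentService.get_segments(
    #         document_id=document.id,
    #         tenant_id=tenant_id,
    #         status_list=["completed"],
    #         keyword="refund policy",
    #     )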

    @classmethod
    def get_segment_by_id(cls, segment_id: str, tenant_id: str) -> Optional[DocumentSegment]:
        """Get a segment by its ID."""
        result = (
            db.session.query(DocumentSegment)
            .where(DocumentSegment.id == segment_id, DocumentSegment.tenant_id == tenant_id)
            .first()
        )
        return result if isinstance(result, DocumentSegment) else None


class DatasetCollectionBindingService:
    @classmethod
    def get_dataset_collection_binding(
        cls, provider_name: str, model_name: str, collection_type: str = "dataset"
    ) -> DatasetCollectionBinding:
        dataset_collection_binding = (
            db.session.query(DatasetCollectionBinding)
            .where(
                DatasetCollectionBinding.provider_name == provider_name,
                DatasetCollectionBinding.model_name == model_name,
                DatasetCollectionBinding.type == collection_type,
            )
            .order_by(DatasetCollectionBinding.created_at)
            .first()
        )
        if not dataset_collection_binding:
            dataset_collection_binding = DatasetCollectionBinding(
                provider_name=provider_name,
                model_name=model_name,
                collection_name=Dataset.gen_collection_name_by_id(str(uuid.uuid4())),
                type=collection_type,
            )
            db.session.add(dataset_collection_binding)
            db.session.commit()
        return dataset_collection_binding
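
    # Note: get_dataset_collection_binding is get-or-create — if no binding exists
    # for the (provider_name, model_name, collection_type) triple, one is created
    # with a freshly generated collection name. Illustrative call (provider and
    # model strings are examples only):
    #
    #     binding = DatasetCollectionBindingService.get_dataset_collection_binding(
    #         "openai", "text-embedding-3-large"
    #     )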

    @classmethod
    def get_dataset_collection_binding_by_id_and_type(
        cls, collection_binding_id: str, collection_type: str = "dataset"
    ) -> DatasetCollectionBinding:
        dataset_collection_binding = (
            db.session.query(DatasetCollectionBinding)
            .where(
                DatasetCollectionBinding.id == collection_binding_id, DatasetCollectionBinding.type == collection_type
            )
            .order_by(DatasetCollectionBinding.created_at)
            .first()
        )
        if not dataset_collection_binding:
            raise ValueError("Dataset collection binding not found")
        return dataset_collection_binding


class DatasetPermissionService:
    @classmethod
    def get_dataset_partial_member_list(cls, dataset_id):
        user_list_query = (
            db.session.query(
                DatasetPermission.account_id,
            )
            .where(DatasetPermission.dataset_id == dataset_id)
            .all()
        )
        user_list = []
        for user in user_list_query:
            user_list.append(user.account_id)
        return user_list

    @classmethod
    def update_partial_member_list(cls, tenant_id, dataset_id, user_list):
        try:
            db.session.query(DatasetPermission).where(DatasetPermission.dataset_id == dataset_id).delete()
            permissions = []
            for user in user_list:
                permission = DatasetPermission(
                    tenant_id=tenant_id,
                    dataset_id=dataset_id,
                    account_id=user["user_id"],
                )
                permissions.append(permission)
            db.session.add_all(permissions)
            db.session.commit()
        except Exception as e:
            db.session.rollback()
            raise e

    @classmethod
    def check_permission(cls, user, dataset, requested_permission, requested_partial_member_list):
        if not user.is_dataset_editor:
            raise NoPermissionError("User does not have permission to edit this dataset.")
        if user.is_dataset_operator and dataset.permission != requested_permission:
            raise NoPermissionError("Dataset operators cannot change the dataset permissions.")
        if user.is_dataset_operator and requested_permission == "partial_members":
            if not requested_partial_member_list:
                raise ValueError("Partial member list is required when setting to partial members.")
            local_member_list = cls.get_dataset_partial_member_list(dataset.id)
            request_member_list = [member["user_id"] for member in requested_partial_member_list]
            if set(local_member_list) != set(request_member_list):
                raise ValueError("Dataset operators cannot change the dataset permissions.")

    @classmethod
    def clear_partial_member_list(cls, dataset_id):
        try:
            db.session.query(DatasetPermission).where(DatasetPermission.dataset_id == dataset_id).delete()
            db.session.commit()
        except Exception as e:
            db.session.rollback()
            raise e
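
# Illustrative usage sketch (hypothetical names, assumes a Flask app context):
# replace a dataset's partial-member list after checking the caller may do so.
#
#     DatasetPermissionService.check_permission(user, dataset, "partial_members", member_list)
#     DatasetPermissionService.update_partial_member_list(
#         tenant_id, dataset.id, [{"user_id": "user-a"}, {"user_id": "user-b"}]
#     )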