import flask_restx
from flask import request
from flask_login import current_user
from flask_restx import Resource, marshal, marshal_with, reqparse
from werkzeug.exceptions import Forbidden, NotFound

import services
from configs import dify_config
from controllers.console import api
from controllers.console.apikey import api_key_fields, api_key_list
from controllers.console.app.error import ProviderNotInitializeError
from controllers.console.datasets.error import DatasetInUseError, DatasetNameDuplicateError, IndexingEstimateError
from controllers.console.wraps import (
    account_initialization_required,
    cloud_edition_billing_rate_limit_check,
    enterprise_license_required,
    setup_required,
)
from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError
from core.indexing_runner import IndexingRunner
from core.model_runtime.entities.model_entities import ModelType
from core.provider_manager import ProviderManager
from core.rag.datasource.vdb.vector_type import VectorType
from core.rag.extractor.entity.datasource_type import DatasourceType
from core.rag.extractor.entity.extract_setting import ExtractSetting
from core.rag.retrieval.retrieval_methods import RetrievalMethod
from extensions.ext_database import db
from fields.app_fields import related_app_list
from fields.dataset_fields import dataset_detail_fields, dataset_query_detail_fields
from fields.document_fields import document_status_fields
from libs.login import login_required
from models import ApiToken, Dataset, Document, DocumentSegment, UploadFile
from models.dataset import DatasetPermissionEnum
from models.provider_ids import ModelProviderID
from services.dataset_service import DatasetPermissionService, DatasetService, DocumentService


def _validate_name(name):
    if not name or len(name) < 1 or len(name) > 40:
        raise ValueError("Name must be between 1 to 40 characters.")
    return name


def _validate_description_length(description):
    if description and len(description) > 400:
        raise ValueError("Description cannot exceed 400 characters.")
    return description
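

# Dataset collection endpoints: GET lists the current tenant's datasets (with pagination,
# keyword/tag filters and an embedding-availability flag per dataset); POST creates an
# empty dataset and requires a dataset-editor role.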
class DatasetListApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    @enterprise_license_required
    def get(self):
        page = request.args.get("page", default=1, type=int)
        limit = request.args.get("limit", default=20, type=int)
        ids = request.args.getlist("ids")
        # provider = request.args.get("provider", default="vendor")
        search = request.args.get("keyword", default=None, type=str)
        tag_ids = request.args.getlist("tag_ids")
        include_all = request.args.get("include_all", default="false").lower() == "true"

        if ids:
            datasets, total = DatasetService.get_datasets_by_ids(ids, current_user.current_tenant_id)
        else:
            datasets, total = DatasetService.get_datasets(
                page, limit, current_user.current_tenant_id, current_user, search, tag_ids, include_all
            )

        # check embedding setting
        provider_manager = ProviderManager()
        configurations = provider_manager.get_configurations(tenant_id=current_user.current_tenant_id)

        embedding_models = configurations.get_models(model_type=ModelType.TEXT_EMBEDDING, only_active=True)

        model_names = []
        for embedding_model in embedding_models:
            model_names.append(f"{embedding_model.model}:{embedding_model.provider.provider}")

        data = marshal(datasets, dataset_detail_fields)
        for item in data:
            # convert embedding_model_provider to plugin standard format
            if item["indexing_technique"] == "high_quality" and item["embedding_model_provider"]:
                item["embedding_model_provider"] = str(ModelProviderID(item["embedding_model_provider"]))
                item_model = f"{item['embedding_model']}:{item['embedding_model_provider']}"
                if item_model in model_names:
                    item["embedding_available"] = True
                else:
                    item["embedding_available"] = False
            else:
                item["embedding_available"] = True

            if item.get("permission") == "partial_members":
                part_users_list = DatasetPermissionService.get_dataset_partial_member_list(item["id"])
                item.update({"partial_member_list": part_users_list})
            else:
                item.update({"partial_member_list": []})

        response = {"data": data, "has_more": len(datasets) == limit, "limit": limit, "total": total, "page": page}
        return response, 200

    @setup_required
    @login_required
    @account_initialization_required
    @cloud_edition_billing_rate_limit_check("knowledge")
    def post(self):
        parser = reqparse.RequestParser()
        parser.add_argument(
            "name",
            nullable=False,
            required=True,
            help="type is required. Name must be between 1 to 40 characters.",
            type=_validate_name,
        )
        parser.add_argument(
            "description",
            type=_validate_description_length,
            nullable=True,
            required=False,
            default="",
        )
        parser.add_argument(
            "indexing_technique",
            type=str,
            location="json",
            choices=Dataset.INDEXING_TECHNIQUE_LIST,
            nullable=True,
            help="Invalid indexing technique.",
        )
        parser.add_argument(
            "external_knowledge_api_id",
            type=str,
            nullable=True,
            required=False,
        )
        parser.add_argument(
            "provider",
            type=str,
            nullable=True,
            choices=Dataset.PROVIDER_LIST,
            required=False,
            default="vendor",
        )
        parser.add_argument(
            "external_knowledge_id",
            type=str,
            nullable=True,
            required=False,
        )
        args = parser.parse_args()

        # The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
        if not current_user.is_dataset_editor:
            raise Forbidden()

        try:
            dataset = DatasetService.create_empty_dataset(
                tenant_id=current_user.current_tenant_id,
                name=args["name"],
                description=args["description"],
                indexing_technique=args["indexing_technique"],
                account=current_user,
                permission=DatasetPermissionEnum.ONLY_ME,
                provider=args["provider"],
                external_knowledge_api_id=args["external_knowledge_api_id"],
                external_knowledge_id=args["external_knowledge_id"],
            )
        except services.errors.dataset.DatasetNameDuplicateError:
            raise DatasetNameDuplicateError()

        return marshal(dataset, dataset_detail_fields), 201
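

# Single-dataset endpoints: GET returns dataset detail (normalising the embedding model
# provider to the plugin format and flagging embedding availability), PATCH updates
# settings and partial-member permissions, DELETE removes the dataset.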
class DatasetApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")
        try:
            DatasetService.check_dataset_permission(dataset, current_user)
        except services.errors.account.NoPermissionError as e:
            raise Forbidden(str(e))
        data = marshal(dataset, dataset_detail_fields)
        if dataset.indexing_technique == "high_quality":
            if dataset.embedding_model_provider:
                provider_id = ModelProviderID(dataset.embedding_model_provider)
                data["embedding_model_provider"] = str(provider_id)
        if data.get("permission") == "partial_members":
            part_users_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)
            data.update({"partial_member_list": part_users_list})

        # check embedding setting
        provider_manager = ProviderManager()
        configurations = provider_manager.get_configurations(tenant_id=current_user.current_tenant_id)

        embedding_models = configurations.get_models(model_type=ModelType.TEXT_EMBEDDING, only_active=True)

        model_names = []
        for embedding_model in embedding_models:
            model_names.append(f"{embedding_model.model}:{embedding_model.provider.provider}")

        if data["indexing_technique"] == "high_quality":
            item_model = f"{data['embedding_model']}:{data['embedding_model_provider']}"
            if item_model in model_names:
                data["embedding_available"] = True
            else:
                data["embedding_available"] = False
        else:
            data["embedding_available"] = True

        return data, 200

    @setup_required
    @login_required
    @account_initialization_required
    @cloud_edition_billing_rate_limit_check("knowledge")
    def patch(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")

        parser = reqparse.RequestParser()
        parser.add_argument(
            "name",
            nullable=False,
            help="type is required. Name must be between 1 to 40 characters.",
            type=_validate_name,
        )
        parser.add_argument("description", location="json", store_missing=False, type=_validate_description_length)
        parser.add_argument(
            "indexing_technique",
            type=str,
            location="json",
            choices=Dataset.INDEXING_TECHNIQUE_LIST,
            nullable=True,
            help="Invalid indexing technique.",
        )
        parser.add_argument(
            "permission",
            type=str,
            location="json",
            choices=(DatasetPermissionEnum.ONLY_ME, DatasetPermissionEnum.ALL_TEAM, DatasetPermissionEnum.PARTIAL_TEAM),
            help="Invalid permission.",
        )
        parser.add_argument("embedding_model", type=str, location="json", help="Invalid embedding model.")
        parser.add_argument(
            "embedding_model_provider", type=str, location="json", help="Invalid embedding model provider."
        )
        parser.add_argument("retrieval_model", type=dict, location="json", help="Invalid retrieval model.")
        parser.add_argument("partial_member_list", type=list, location="json", help="Invalid parent user list.")
        parser.add_argument(
            "external_retrieval_model",
            type=dict,
            required=False,
            nullable=True,
            location="json",
            help="Invalid external retrieval model.",
        )
        parser.add_argument(
            "external_knowledge_id",
            type=str,
            required=False,
            nullable=True,
            location="json",
            help="Invalid external knowledge id.",
        )
        parser.add_argument(
            "external_knowledge_api_id",
            type=str,
            required=False,
            nullable=True,
            location="json",
            help="Invalid external knowledge api id.",
        )
        parser.add_argument(
            "icon_info",
            type=dict,
            required=False,
            nullable=True,
            location="json",
            help="Invalid icon info.",
        )
        args = parser.parse_args()
        data = request.get_json()

        # check embedding model setting
        if (
            data.get("indexing_technique") == "high_quality"
            and data.get("embedding_model_provider") is not None
            and data.get("embedding_model") is not None
        ):
            DatasetService.check_embedding_model_setting(
                dataset.tenant_id, data.get("embedding_model_provider"), data.get("embedding_model")
            )

        # The role of the current user in the ta table must be admin, owner, editor, or dataset_operator
        DatasetPermissionService.check_permission(
            current_user, dataset, data.get("permission"), data.get("partial_member_list")
        )

        dataset = DatasetService.update_dataset(dataset_id_str, args, current_user)

        if dataset is None:
            raise NotFound("Dataset not found.")

        result_data = marshal(dataset, dataset_detail_fields)
        tenant_id = current_user.current_tenant_id

        if data.get("partial_member_list") and data.get("permission") == "partial_members":
            DatasetPermissionService.update_partial_member_list(
                tenant_id, dataset_id_str, data.get("partial_member_list")
            )
        # clear partial member list when permission is only_me or all_team_members
        elif (
            data.get("permission") == DatasetPermissionEnum.ONLY_ME
            or data.get("permission") == DatasetPermissionEnum.ALL_TEAM
        ):
            DatasetPermissionService.clear_partial_member_list(dataset_id_str)

        partial_member_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)
        result_data.update({"partial_member_list": partial_member_list})

        return result_data, 200

    @setup_required
    @login_required
    @account_initialization_required
    @cloud_edition_billing_rate_limit_check("knowledge")
    def delete(self, dataset_id):
        dataset_id_str = str(dataset_id)

        # The role of the current user in the ta table must be admin, owner, or editor
        if not current_user.is_editor or current_user.is_dataset_operator:
            raise Forbidden()

        try:
            if DatasetService.delete_dataset(dataset_id_str, current_user):
                DatasetPermissionService.clear_partial_member_list(dataset_id_str)
                return {"result": "success"}, 204
            else:
                raise NotFound("Dataset not found.")
        except services.errors.dataset.DatasetInUseError:
            raise DatasetInUseError()


class DatasetUseCheckApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)

        dataset_is_using = DatasetService.dataset_use_check(dataset_id_str)
        return {"is_using": dataset_is_using}, 200


class DatasetQueryApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")

        try:
            DatasetService.check_dataset_permission(dataset, current_user)
        except services.errors.account.NoPermissionError as e:
            raise Forbidden(str(e))

        page = request.args.get("page", default=1, type=int)
        limit = request.args.get("limit", default=20, type=int)

        dataset_queries, total = DatasetService.get_dataset_queries(dataset_id=dataset.id, page=page, per_page=limit)

        response = {
            "data": marshal(dataset_queries, dataset_query_detail_fields),
            "has_more": len(dataset_queries) == limit,
            "limit": limit,
            "total": total,
            "page": page,
        }
        return response, 200
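

# Indexing estimate: builds ExtractSetting objects for the requested data source
# (uploaded files, Notion pages or website crawls) and asks IndexingRunner for a
# pre-import estimate.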
class DatasetIndexingEstimateApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def post(self):
        parser = reqparse.RequestParser()
        parser.add_argument("info_list", type=dict, required=True, nullable=True, location="json")
        parser.add_argument("process_rule", type=dict, required=True, nullable=True, location="json")
        parser.add_argument(
            "indexing_technique",
            type=str,
            required=True,
            choices=Dataset.INDEXING_TECHNIQUE_LIST,
            nullable=True,
            location="json",
        )
        parser.add_argument("doc_form", type=str, default="text_model", required=False, nullable=False, location="json")
        parser.add_argument("dataset_id", type=str, required=False, nullable=False, location="json")
        parser.add_argument(
            "doc_language", type=str, default="English", required=False, nullable=False, location="json"
        )
        args = parser.parse_args()
        # validate args
        DocumentService.estimate_args_validate(args)
        extract_settings = []
        if args["info_list"]["data_source_type"] == "upload_file":
            file_ids = args["info_list"]["file_info_list"]["file_ids"]
            file_details = (
                db.session.query(UploadFile)
                .where(UploadFile.tenant_id == current_user.current_tenant_id, UploadFile.id.in_(file_ids))
                .all()
            )

            if file_details is None:
                raise NotFound("File not found.")

            if file_details:
                for file_detail in file_details:
                    extract_setting = ExtractSetting(
                        datasource_type=DatasourceType.FILE.value,
                        upload_file=file_detail,
                        document_model=args["doc_form"],
                    )
                    extract_settings.append(extract_setting)
        elif args["info_list"]["data_source_type"] == "notion_import":
            notion_info_list = args["info_list"]["notion_info_list"]
            for notion_info in notion_info_list:
                workspace_id = notion_info["workspace_id"]
                credential_id = notion_info.get("credential_id")
                for page in notion_info["pages"]:
                    extract_setting = ExtractSetting(
                        datasource_type=DatasourceType.NOTION.value,
                        notion_info={
                            "credential_id": credential_id,
                            "notion_workspace_id": workspace_id,
                            "notion_obj_id": page["page_id"],
                            "notion_page_type": page["type"],
                            "tenant_id": current_user.current_tenant_id,
                        },
                        document_model=args["doc_form"],
                    )
                    extract_settings.append(extract_setting)
        elif args["info_list"]["data_source_type"] == "website_crawl":
            website_info_list = args["info_list"]["website_info_list"]
            for url in website_info_list["urls"]:
                extract_setting = ExtractSetting(
                    datasource_type=DatasourceType.WEBSITE.value,
                    website_info={
                        "provider": website_info_list["provider"],
                        "job_id": website_info_list["job_id"],
                        "url": url,
                        "tenant_id": current_user.current_tenant_id,
                        "mode": "crawl",
                        "only_main_content": website_info_list["only_main_content"],
                    },
                    document_model=args["doc_form"],
                )
                extract_settings.append(extract_setting)
        else:
            raise ValueError("Data source type not support")
        indexing_runner = IndexingRunner()
        try:
            response = indexing_runner.indexing_estimate(
                current_user.current_tenant_id,
                extract_settings,
                args["process_rule"],
                args["doc_form"],
                args["doc_language"],
                args["dataset_id"],
                args["indexing_technique"],
            )
        except LLMBadRequestError:
            raise ProviderNotInitializeError(
                "No Embedding Model available. Please configure a valid provider in the Settings -> Model Provider."
            )
        except ProviderTokenNotInitError as ex:
            raise ProviderNotInitializeError(ex.description)
        except Exception as e:
            raise IndexingEstimateError(str(e))

        return response.model_dump(), 200


class DatasetRelatedAppListApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    @marshal_with(related_app_list)
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")

        try:
            DatasetService.check_dataset_permission(dataset, current_user)
        except services.errors.account.NoPermissionError as e:
            raise Forbidden(str(e))

        app_dataset_joins = DatasetService.get_related_apps(dataset.id)

        related_apps = []
        for app_dataset_join in app_dataset_joins:
            app_model = app_dataset_join.app
            if app_model:
                related_apps.append(app_model)

        return {"data": related_apps, "total": len(related_apps)}, 200


class DatasetIndexingStatusApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id = str(dataset_id)
        documents = (
            db.session.query(Document)
            .where(Document.dataset_id == dataset_id, Document.tenant_id == current_user.current_tenant_id)
            .all()
        )
        documents_status = []
        for document in documents:
            completed_segments = (
                db.session.query(DocumentSegment)
                .where(
                    DocumentSegment.completed_at.isnot(None),
                    DocumentSegment.document_id == str(document.id),
                    DocumentSegment.status != "re_segment",
                )
                .count()
            )
            total_segments = (
                db.session.query(DocumentSegment)
                .where(DocumentSegment.document_id == str(document.id), DocumentSegment.status != "re_segment")
                .count()
            )
            # Create a dictionary with document attributes and additional fields
            document_dict = {
                "id": document.id,
                "indexing_status": document.indexing_status,
                "processing_started_at": document.processing_started_at,
                "parsing_completed_at": document.parsing_completed_at,
                "cleaning_completed_at": document.cleaning_completed_at,
                "splitting_completed_at": document.splitting_completed_at,
                "completed_at": document.completed_at,
                "paused_at": document.paused_at,
                "error": document.error,
                "stopped_at": document.stopped_at,
                "completed_segments": completed_segments,
                "total_segments": total_segments,
            }
            documents_status.append(marshal(document_dict, document_status_fields))
        data = {"data": documents_status}
        return data, 200
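

# Dataset API-key management: listing, creation (admin/owner only, capped at max_keys)
# and deletion of "dataset-" prefixed service tokens.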
class DatasetApiKeyApi(Resource):
    max_keys = 10
    token_prefix = "dataset-"
    resource_type = "dataset"

    @setup_required
    @login_required
    @account_initialization_required
    @marshal_with(api_key_list)
    def get(self):
        keys = (
            db.session.query(ApiToken)
            .where(ApiToken.type == self.resource_type, ApiToken.tenant_id == current_user.current_tenant_id)
            .all()
        )
        return {"items": keys}

    @setup_required
    @login_required
    @account_initialization_required
    @marshal_with(api_key_fields)
    def post(self):
        # The role of the current user in the ta table must be admin or owner
        if not current_user.is_admin_or_owner:
            raise Forbidden()

        current_key_count = (
            db.session.query(ApiToken)
            .where(ApiToken.type == self.resource_type, ApiToken.tenant_id == current_user.current_tenant_id)
            .count()
        )

        if current_key_count >= self.max_keys:
            flask_restx.abort(
                400,
                message=f"Cannot create more than {self.max_keys} API keys for this resource type.",
                code="max_keys_exceeded",
            )

        key = ApiToken.generate_api_key(self.token_prefix, 24)
        api_token = ApiToken()
        api_token.tenant_id = current_user.current_tenant_id
        api_token.token = key
        api_token.type = self.resource_type
        db.session.add(api_token)
        db.session.commit()
        return api_token, 200


class DatasetApiDeleteApi(Resource):
    resource_type = "dataset"

    @setup_required
    @login_required
    @account_initialization_required
    def delete(self, api_key_id):
        api_key_id = str(api_key_id)

        # The role of the current user in the ta table must be admin or owner
        if not current_user.is_admin_or_owner:
            raise Forbidden()

        key = (
            db.session.query(ApiToken)
            .where(
                ApiToken.tenant_id == current_user.current_tenant_id,
                ApiToken.type == self.resource_type,
                ApiToken.id == api_key_id,
            )
            .first()
        )

        if key is None:
            flask_restx.abort(404, message="API key not found")

        db.session.query(ApiToken).where(ApiToken.id == api_key_id).delete()
        db.session.commit()

        return {"result": "success"}, 204


class DatasetApiBaseUrlApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self):
        return {"api_base_url": (dify_config.SERVICE_API_URL or request.host_url.rstrip("/")) + "/v1"}
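

# Retrieval-setting endpoints: report which retrieval methods (semantic, full-text,
# hybrid) the configured vector store supports; the mock variant answers for a vector
# store type passed in the URL instead of the configured one.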
class DatasetRetrievalSettingApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self):
        vector_type = dify_config.VECTOR_STORE
        match vector_type:
            case (
                VectorType.RELYT
                | VectorType.TIDB_VECTOR
                | VectorType.CHROMA
                | VectorType.PGVECTO_RS
                | VectorType.BAIDU
                | VectorType.VIKINGDB
                | VectorType.UPSTASH
            ):
                return {"retrieval_method": [RetrievalMethod.SEMANTIC_SEARCH.value]}
            case (
                VectorType.QDRANT
                | VectorType.WEAVIATE
                | VectorType.OPENSEARCH
                | VectorType.ANALYTICDB
                | VectorType.MYSCALE
                | VectorType.ORACLE
                | VectorType.ELASTICSEARCH
                | VectorType.ELASTICSEARCH_JA
                | VectorType.PGVECTOR
                | VectorType.VASTBASE
                | VectorType.TIDB_ON_QDRANT
                | VectorType.LINDORM
                | VectorType.COUCHBASE
                | VectorType.MILVUS
                | VectorType.OPENGAUSS
                | VectorType.OCEANBASE
                | VectorType.TABLESTORE
                | VectorType.HUAWEI_CLOUD
                | VectorType.TENCENT
                | VectorType.MATRIXONE
                | VectorType.CLICKZETTA
            ):
                return {
                    "retrieval_method": [
                        RetrievalMethod.SEMANTIC_SEARCH.value,
                        RetrievalMethod.FULL_TEXT_SEARCH.value,
                        RetrievalMethod.HYBRID_SEARCH.value,
                    ]
                }
            case _:
                raise ValueError(f"Unsupported vector db type {vector_type}.")


class DatasetRetrievalSettingMockApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, vector_type):
        match vector_type:
            case (
                VectorType.MILVUS
                | VectorType.RELYT
                | VectorType.TIDB_VECTOR
                | VectorType.CHROMA
                | VectorType.PGVECTO_RS
                | VectorType.BAIDU
                | VectorType.VIKINGDB
                | VectorType.UPSTASH
            ):
                return {"retrieval_method": [RetrievalMethod.SEMANTIC_SEARCH.value]}
            case (
                VectorType.QDRANT
                | VectorType.WEAVIATE
                | VectorType.OPENSEARCH
                | VectorType.ANALYTICDB
                | VectorType.MYSCALE
                | VectorType.ORACLE
                | VectorType.ELASTICSEARCH
                | VectorType.ELASTICSEARCH_JA
                | VectorType.COUCHBASE
                | VectorType.PGVECTOR
                | VectorType.VASTBASE
                | VectorType.LINDORM
                | VectorType.OPENGAUSS
                | VectorType.OCEANBASE
                | VectorType.TABLESTORE
                | VectorType.TENCENT
                | VectorType.HUAWEI_CLOUD
                | VectorType.MATRIXONE
                | VectorType.CLICKZETTA
            ):
                return {
                    "retrieval_method": [
                        RetrievalMethod.SEMANTIC_SEARCH.value,
                        RetrievalMethod.FULL_TEXT_SEARCH.value,
                        RetrievalMethod.HYBRID_SEARCH.value,
                    ]
                }
            case _:
                raise ValueError(f"Unsupported vector db type {vector_type}.")


class DatasetErrorDocs(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")
        results = DocumentService.get_error_documents_by_dataset_id(dataset_id_str)

        return {"data": [marshal(item, document_status_fields) for item in results], "total": len(results)}, 200


class DatasetPermissionUserListApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")
        try:
            DatasetService.check_dataset_permission(dataset, current_user)
        except services.errors.account.NoPermissionError as e:
            raise Forbidden(str(e))

        partial_members_list = DatasetPermissionService.get_dataset_partial_member_list(dataset_id_str)

        return {
            "data": partial_members_list,
        }, 200


class DatasetAutoDisableLogApi(Resource):
    @setup_required
    @login_required
    @account_initialization_required
    def get(self, dataset_id):
        dataset_id_str = str(dataset_id)
        dataset = DatasetService.get_dataset(dataset_id_str)
        if dataset is None:
            raise NotFound("Dataset not found.")

        return DatasetService.get_dataset_auto_disable_logs(dataset_id_str), 200
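

# Route registration for the console datasets API.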
api.add_resource(DatasetListApi, "/datasets")
api.add_resource(DatasetApi, "/datasets/<uuid:dataset_id>")
api.add_resource(DatasetUseCheckApi, "/datasets/<uuid:dataset_id>/use-check")
api.add_resource(DatasetQueryApi, "/datasets/<uuid:dataset_id>/queries")
api.add_resource(DatasetErrorDocs, "/datasets/<uuid:dataset_id>/error-docs")
api.add_resource(DatasetIndexingEstimateApi, "/datasets/indexing-estimate")
api.add_resource(DatasetRelatedAppListApi, "/datasets/<uuid:dataset_id>/related-apps")
api.add_resource(DatasetIndexingStatusApi, "/datasets/<uuid:dataset_id>/indexing-status")
api.add_resource(DatasetApiKeyApi, "/datasets/api-keys")
api.add_resource(DatasetApiDeleteApi, "/datasets/api-keys/<uuid:api_key_id>")
api.add_resource(DatasetApiBaseUrlApi, "/datasets/api-base-info")
api.add_resource(DatasetRetrievalSettingApi, "/datasets/retrieval-setting")
api.add_resource(DatasetRetrievalSettingMockApi, "/datasets/retrieval-setting/<string:vector_type>")
api.add_resource(DatasetPermissionUserListApi, "/datasets/<uuid:dataset_id>/permission-part-users")
api.add_resource(DatasetAutoDisableLogApi, "/datasets/<uuid:dataset_id>/auto-disable-logs")