Filtering free torrents with flexget
This is based on a GitHub project whose author no longer seems to be updating it. I only use it to RSS the free torrents on 皇后, but there was a bug; fortunately another developer fixed it and submitted a PR. I'm backing the fix up here so I don't forget it later. If you don't know how to use flexget yet, start with: First, in the flexget dir […]
Filtering torrents by size with flexget
I previously shared how to use flexget with a PT client for automatic downloading; see: Today I'll cover how to have flexget automatically download torrents of a specified size. The configuration file follows:
```yaml
tasks:
  hdsky:
    rss: https://xxxx
    content_size:     # enable size filtering
      min: 10240      # skip files smaller than 10240 MB
      max: 999900     # skip files larger than 999900 MB
      strict: no      # leave this as-is
    download: /Users/mcj/Downloads/CCC不备份/flexget/hdsky
```
The configuration above is self-explanatory; you need […]
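The min/max logic of the size filter is simple enough to sketch in Python. This is a hypothetical helper mirroring the values in the config above, not flexget's actual code:

```python
def passes_content_size(size_mb, min_mb=10240, max_mb=999900):
    """Accept only torrents whose size in MB falls within [min_mb, max_mb],
    mimicking flexget's content_size min/max settings."""
    return min_mb <= size_mb <= max_mb

print(passes_content_size(20000))    # within range -> True
print(passes_content_size(500))      # too small -> False
print(passes_content_size(1000000))  # too large -> False
```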
An annotated Flexget configuration file
Based on another developer's config; I'll update it from time to time.
```yaml
# Remove the comments before use to avoid problems. Text after a # is a comment.
# Delete this block if you don't want the Web-UI
web_server:
  bind: 0.0.0.0
  port: 50001        # Web-UI listening port
#  ssl_certificate: '/etc/ssl/private/myCert.pem'  # SSL cert path; uncomment to enable https
#  ssl_private_key: '/etc/ssl/private/myKey.key'   # SSL key path; uncomment to enable https
  web_ui: yes        # enable the Web-UI
  base_url: /flex    # URL suffix
  run_v2: yes        # run the V2 UI

# Scheduler: periodically fetch the RSS of the listed tasks. Delete this block if you
# don't need automation. More at https://flexget.com/Plugins/Daemon/scheduler
schedules:
  - tasks: [myrssfeed, task_b]   # run the myrssfeed and task_b tasks
    schedule:
      minute: "*/30"             # fetch every 30 minutes
  - tasks: [task_c, task_d]
    schedule:
      minute: "*/30"
      hour: 22,23                # fetch daily at 22:30 and 23:30

# Task list
tasks:                           # keep as-is
  myrssfeed:                     # task name; change the part before the colon
    rss: http://mysite.com/myfeed.rss   # RSS URL
    accept_all: no               # download everything? If you don't want filtering,
                                 # set this to yes and delete everything before download
    if:                          # enable conditional filtering
      - "'ABC' in title": accept # download if the title contains ABC
      - "'DEF' in title": reject # skip if the title contains DEF
    content_size:                # enable size filtering
      min: 2048                  # skip files smaller than 2048 MB
      max: 9999                  # skip files larger than 9999 MB
      strict: no                 # leave this as-is
    download: /path/of/your/torrents/download-dir/   # flexget's torrent download directory
    # Below, deluge adds torrents automatically via RPC; transmission is similar,
    # search for it yourself
    deluge:
      host: localhost            # do not change
      port: 13222                # the port the daemon listens on
      user: localclient          # do not change
      pass: dsad5a6s5d6as        # the password digest
      # Running `cat ~/.config/deluge/auth` yields
      # localclient:446d2cd96bfc7e15003fab1f11e9238b94671521:10
      # where 446d2cd96bfc7e15003fab1f11e9238b94671521 is the password digest
```
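As the deluge comments note, the `pass` value is the middle field of the line in `~/.config/deluge/auth`. A quick Python snippet, using the sample line from those comments, shows which field to copy:

```python
# Sample line as produced by `cat ~/.config/deluge/auth`
auth_line = "localclient:446d2cd96bfc7e15003fab1f11e9238b94671521:10"

# The file format is user:password-digest:auth-level
user, digest, level = auth_line.split(":")
print(digest)  # the value to put in the deluge `pass` field
```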
Another configuration:
```yaml
# Preset templates
templates:
  # Free-space template: stop RSS downloads when the free space under `path`
  # drops below the value of `space`
  freespace:
    free_space:
      path: /home/SCRIPTUSERNAME
      space: 10240

  # qBittorrent template: writing `qb` later means "push the torrent to qBittorrent".
  # The same goes for tr / de / rt below.
  # The script already fills in your username and password; unless you changed the
  # account, password, or port yourself, these client settings need no edits.
  qb:
    qbittorrent:
      path: /home/SCRIPTUSERNAME/qbittorrent/download/
      host: localhost
      port: 2017
      username: SCRIPTUSERNAME
      password: SCRIPTPASSWORD
  tr:
    transmission:
      path: /home/SCRIPTUSERNAME/transmission/download/
      host: localhost
      port: 9099
      username: SCRIPTUSERNAME
      password: SCRIPTPASSWORD
  de:
    deluge:
      path: /home/SCRIPTUSERNAME/deluge/download/
      host: localhost
      port: 58846
      username: SCRIPTUSERNAME
      password: SCRIPTPASSWORD

  # Size-filter template: min/max are the smallest/largest acceptable torrent sizes, in MB.
  # strict defaults to yes, i.e. "don't download if the size cannot be determined";
  # here it is changed to no.
  # In other words, this block means: only download torrents of 6000-666666 MB.
  size:
    content_size:
      min: 6000
      max: 666666
      strict: no

# Tasks
tasks:
  # Web-HDSky is the task name; call it almost anything you like
  Web-HDSky:
    # Change the RSS link to your actual one
    rss: https://hdsky.me/torrentrss.php
    # HDSWEB uses identical titles when posting single episodes, so after one download
    # flexget would treat newly posted episodes with the same title as already seen.
    # To avoid this, configure the seen plugin to compare only the URL.
    seen:
      fields:
        - url
    # Regular expressions: download (accept) torrents whose title contains HDSWEB;
    # write reject for the ones you don't want
    regexp:
      accept:
        - HDSWEB
    # Use the `de` template defined above
    template: de
    # Size filtering can also be set per task instead of via the template
    content_size:
      min: 3000
      max: 500000
      strict: no
    # The settings below make deluge tag torrents from this task with the WEB-DL label,
    # cap upload speed at 100 MiB/s (to avoid a speeding ban), and move finished
    # downloads to /mnt/HDSky/HDSWEB
    deluge:
      label: WEB-DL
      # Limit upload speed to 100 MiB/s in case of being auto-banned
      max_up_speed: 102400
      move_completed_path: /mnt/HDSky/HDSWEB

  ADC-AnimeBD-JPN:
    rss: http://asiandvdclub.org/rss.xml
    if:
      - "'Anime' and 'AVC' in title": accept
      - "'subs only' in title": reject
      - "'Custom' in description": reject
    # Combined, these three filters mean: download torrents whose title contains
    # Anime and AVC but not "subs only", and exclude torrents whose description
    # page contains "Custom". That is roughly: RSS Japanese anime Blu-rays
    # (non-JP discs, DIY discs, and DVDs are all filtered out).
    # ADC's RSS needs cookies; we use the headers plugin to add them.
    # See the other tutorial for how to obtain cookies.
    headers:
      Cookie: "uid=12345; pass=abcdefg"
    # Rewrite RSS links: replace description-page links like
    # http://asiandvdclub.org/details.php?id=123456
    # with download links like http://asiandvdclub.org/download.php?id=123456
    urlrewrite:
      sitename:
        regexp: 'http://asiandvdclub.org/details.php\?id=(?P<id>\d+)'
        format: 'http://asiandvdclub.org/download.php?id=\g<id>'
    qbittorrent:
      label: ADC
      # No need to throttle when racing on ADC; this limit is only here to show that
      # flexget can set a per-torrent speed limit when adding to qBittorrent
      maxdownspeed: 30000

# Flexget Web-UI settings; you can leave them alone
web_server:
  port: 6566
  web_ui: yes
#  base_url: /flexget
# base_url is for a reverse proxy; uncomment it if you need one. With rTorrent
# installed (no rTorrent means no nginx), the Flexget Web-UI URL becomes
# https://<your-seedbox-IP>/flexget
# schedules is disabled here, i.e. RSS is not running; see below for how to enable it
schedules: no
```
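The urlrewrite rule for ADC can be checked locally with Python's `re` module, since flexget's `regexp`/`format` pair behaves like a regex substitution:

```python
import re

# A details-page link as it appears in the ADC RSS feed
details_url = "http://asiandvdclub.org/details.php?id=123456"

# Apply the same pattern/replacement as the urlrewrite block
download_url = re.sub(
    r'http://asiandvdclub\.org/details\.php\?id=(?P<id>\d+)',
    r'http://asiandvdclub.org/download.php?id=\g<id>',
    details_url,
)
print(download_url)  # http://asiandvdclub.org/download.php?id=123456
```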
This article was last updated on August 21, 2021; it has been more than […]
Pandas installation error on Python 2
Installing Pandas on Python 2.7 fails with: Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-juzWZl/pandas/ […]
Typecho styling roundup
I previously covered some of the styling tweaks made to this site. Having recently started building a Typecho blog, I'm collecting some plug-and-play functions and modules here for reuse on other blogs. 5. Counting totals of posts, categories, comments, and pages […]
A deep dive into Typecho's Widget_Archive
What Widget_Archive is: Widget_Archive is one of the most important components in Typecho; essentially all rendering of post and page content depends on it. The Widget_Archive class lives in /var/Widget/Archive.php, and […]
API usage of Widget_Archive in Typecho
For a more principled explanation, see "A deep dive into Typecho's Widget_Archive"; this post mainly lists and explains the Widget_Archive APIs. Widget_Archive is the main entry point for loading theme files, including index.php/se […]
AssertionError: Default process group is not initialized
detectron2 reports the following error:
```
Traceback (most recent call last):
  File "projects/SparseRCNN/train_net.py", line 143, in <module>
    args=(args,),
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/launch.py", line 62, in launch
    main_func(*args)
  File "projects/SparseRCNN/train_net.py", line 131, in main
    return trainer.train()
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/defaults.py", line 419, in train
    super().train(self.start_iter, self.max_iter)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/train_loop.py", line 134, in train
    self.run_step()
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/defaults.py", line 429, in run_step
    self._trainer.run_step()
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/train_loop.py", line 228, in run_step
    loss_dict = self.model(data)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/projects/SparseRCNN/sparsercnn/detector.py", line 121, in forward
    src = self.backbone(images.tensor)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/modeling/backbone/fpn.py", line 127, in forward
    bottom_up_features = self.bottom_up(x)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/modeling/backbone/resnet.py", line 434, in forward
    x = self.stem(x)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/modeling/backbone/resnet.py", line 356, in forward
    x = self.conv1(x)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/layers/wrappers.py", line 80, in forward
    x = self.norm(x)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/nn/modules/batchnorm.py", line 493, in forward
    world_size = torch.distributed.get_world_size(process_group)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 620, in get_world_size
    return _get_group_size(group)
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 219, in _get_group_size
    _check_default_pg()
  File "/home/ubuntu/anaconda3/envs/sparse/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 210, in _check_default_pg
    "Default process group is not initialized"
AssertionError: Default process group is not initialized
```
Fix: open detectron2/engine/launch.py (e.g. with vi) and change `if world_size > 1:` to `if worl […]`
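The assertion fires because a SyncBatchNorm layer calls into torch.distributed while no process group has been initialized (typical of single-GPU runs). A common defensive pattern, sketched here for illustration (this is not the launch.py code being patched, and the helper name is made up), is to fall back to a world size of 1 whenever distributed mode isn't active:

```python
def safe_world_size():
    """Return torch.distributed's world size, falling back to 1 when
    torch is missing, distributed is unavailable, or no process group
    has been initialized."""
    try:
        import torch.distributed as dist
        if dist.is_available() and dist.is_initialized():
            return dist.get_world_size()
    except ImportError:
        pass
    return 1

print(safe_world_size())  # 1 in a plain single-process run
```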
Typecho plugin development tutorial: Hello World
Learning any language starts with Hello World, and this post is no exception: we, too, begin our journey with Hello World. Basic structure 1. File structure First, the files that make up a plugin.
```
HelloWorld           the plugin folder
|
|——Plugin.php        the plugin's core file
```
The plugin fil […]
ValueError: tuple.index(x): x not in tuple
When training detectron2 on a custom dataset, it raises: ValueError: tuple.index(x): x not in tuple
```
Traceback (most recent call last):
  File "projects/SparseRCNN/train_net.py", line 143, in <module>
    args=(args,),
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/launch.py", line 62, in launch
    main_func(*args)
  File "projects/SparseRCNN/train_net.py", line 129, in main
    trainer = Trainer(cfg)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/engine/defaults.py", line 284, in __init__
    data_loader = self.build_train_loader(cfg)
  File "projects/SparseRCNN/train_net.py", line 53, in build_train_loader
    return build_detection_train_loader(cfg, mapper=mapper)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/config/config.py", line 201, in wrapped
    explicit_args = _get_args_from_config(from_config, *args, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/config/config.py", line 236, in _get_args_from_config
    ret = from_config_func(*args, **kwargs)
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/build.py", line 309, in _train_loader_from_config
    proposal_files=cfg.DATASETS.PROPOSAL_FILES_TRAIN if cfg.MODEL.LOAD_PROPOSALS else None,
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/build.py", line 222, in get_detection_dataset_dicts
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/build.py", line 222, in <listcomp>
    dataset_dicts = [DatasetCatalog.get(dataset_name) for dataset_name in dataset_names]
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/catalog.py", line 58, in get
    return f()
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/datasets/pascal_voc.py", line 78, in <lambda>
    DatasetCatalog.register(name, lambda: load_voc_instances(dirname, split, class_names))
  File "/home/ubuntu/bigdisk/part2/SparseR-CNN/detectron2/data/datasets/pascal_voc.py", line 70, in load_voc_instances
    {"category_id": class_names.index(cls), "bbox": bbox, "bbox_mode": BoxMode.XYXY_ABS}
ValueError: tuple.index(x): x not in tuple
```
The cause is a mismatch between the dataset's categories and the registered class names; change: […]
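The exception itself just means that a category name read from the annotations is missing from the registered class tuple, as a minimal reproduction shows (the class names here are made up for illustration):

```python
# Hypothetical registered class list for a VOC-style dataset
class_names = ("cat", "dog")

try:
    # A category that appears in the annotation XML but was never registered
    class_names.index("bird")
except ValueError as err:
    print(err)  # e.g. tuple.index(x): x not in tuple
```

The fix is to make the registered class names cover every category that actually appears in the annotation files.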