Dataset columns:
- id: string (length 4–10)
- text: string (length 4–2.14M)
- source: string (2 classes: gharchive/issue, gharchive/pull-request)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
2268691566
Question about poses_bounds.npy in the ExBluRF dataset

Thanks for your great work! There are poses_bounds.npy and poses_bounds_subframes.npy in each scene of your dataset. Are they estimated from blurry training views + sharp test views, or from sharp (ground-truth) training views + sharp test views? The reason I am asking is that when I tried to estimate camera poses from the blurry training views using COLMAP, COLMAP failed to estimate camera poses. I suspect the training views contain such severe blur that COLMAP cannot extract feature points. I therefore want to figure out whether estimating camera poses from the blurry images of the ExBluRF dataset is possible at all. Thank you in advance.

Hi, thanks for the attention. The camera extrinsics in poses_bounds.npy are estimated from SHARP training views + sharp GT views, for the very reason you have pointed out: it is challenging to extract feature points from extremely blurry observations, and the points that are obtained are also erroneous. ExBluRF does not assume noisy initial poses caused by blur (which, we have to admit, is an experimental/non-practical setting). We have been aware of that problem and completed new work, DeblurGS, to address the noisy-pose problem: link. Although the readme and code of DeblurGS are still being cleaned up, visits are always welcome.

I appreciate your kind response! All my questions have been clarified :)
gharchive/issue
2024-04-29T10:42:34
2025-04-01T04:36:00.974194
{ "authors": [ "hmyang0727", "taekkii" ], "repo": "taekkii/ExBluRF", "url": "https://github.com/taekkii/ExBluRF/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1826543366
Fix 'change root directory' TODO + show path in file explorer

Note that if the path starts with the home directory, it will be collapsed to ~.

After chatting with Marwan, we will move forward with this PR to see how the change feels. However, we will eventually want to move this call into the root readDirectory call and update the parent. We should persist the user's root directory with the username, and reset it when the username changes.
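The home-directory collapse described above is a small, self-contained transform; a minimal TypeScript sketch of the idea (function and helper names are my own, not the extension's actual code):

import * as os from 'os';
import * as path from 'path';

// Collapse an absolute path to "~" notation when it lives under the home
// directory. A sketch only; the real extension may treat remote hosts differently.
function collapseHome(filePath: string): string {
  const home = os.homedir();
  if (filePath === home) {
    return '~';
  }
  if (filePath.startsWith(home + path.sep)) {
    return '~' + filePath.slice(home.length);
  }
  return filePath;
}

console.log(collapseHome('/home/alice/projects')); // "~/projects"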
gharchive/pull-request
2023-07-28T14:36:43
2025-04-01T04:36:01.090629
{ "authors": [ "marwan-at-work", "tylersmalley" ], "repo": "tailscale-dev/vscode-tailscale", "url": "https://github.com/tailscale-dev/vscode-tailscale/pull/129", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
491825236
Can't use node built in modules (fs, path, etc) in config files

@serh11p's fix for watching modules in config files causes errors when require'ing node builtins like fs, path, etc.

(node:55727) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, open 'fs'
    at Object.openSync (fs.js:448:3)
    at Object.readFileSync (fs.js:348:35)
    at createModule (node_modules/tailwindcss/lib/lib/getModuleDependencies.js:19:30)
    at mdl.requires.forEach.dep (node_modules/tailwindcss/lib/lib/getModuleDependencies.js:42:25)
    at Array.forEach (<anonymous>)
    at getModuleDependencies (node_modules/tailwindcss/lib/lib/getModuleDependencies.js:34:18)
    at node_modules/tailwindcss/lib/lib/registerConfigAsDependency.js:20:40

Really! Thanks for the issue. I will fix it and create a PR in the morning! <3

Resolved by #1121 👍
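The trace shows the dependency walker treating the module name fs as a file path. I don't know the exact contents of the #1121 patch, but the usual guard looks like this sketch (names mirror getModuleDependencies.js for illustration only):

import { builtinModules } from 'module'; // available since Node 9.3
import * as fs from 'fs';

// Hypothetical version of createModule: skip Node builtins ("fs", "path", ...)
// before reading from disk, since fs.readFileSync('fs') throws exactly the
// ENOENT seen in the stack trace above.
function createModule(file: string): { file: string; source: string } | null {
  if (builtinModules.includes(file)) {
    return null; // builtin — nothing on disk to watch
  }
  return { file, source: fs.readFileSync(file, 'utf-8') };
}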
gharchive/issue
2019-09-10T18:18:25
2025-04-01T04:36:01.160317
{ "authors": [ "adamwathan", "serh11p", "snaptopixel" ], "repo": "tailwindcss/tailwindcss", "url": "https://github.com/tailwindcss/tailwindcss/issues/1119", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
833091902
Translate doesn't appear to support arbitrary values

<div class="mr-[-9.5rem]"></div>
<div class="translate-x-[-9.5rem]"></div>

The first <div> works as expected, but the second one does not appear to have the corresponding Tailwind utility class generated.

Just tested and it is generated — are you just missing the transform class for it to take effect?

<div class="transform translate-x-[-9.5rem]"></div>

Hm, weird, I have transform-gpu on it; will investigate further and see if there's something weird in my build process, thanks.

Just some weirdness in my build process, sorry for bothering! @adamwathan p.s. tailwind-jit is absolutely wonderful ✨ thanks for all your work!

Cool, no problem — glad it is working! 🙌
gharchive/issue
2021-03-16T18:35:58
2025-04-01T04:36:01.175621
{ "authors": [ "adamwathan", "pheuter" ], "repo": "tailwindlabs/tailwindcss-jit", "url": "https://github.com/tailwindlabs/tailwindcss-jit/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
55675612
If I have installed two different douban.fm packages, how do I choose which one to launch?

turingou/douban.fm
taizilongxu/douban.fm

These two have the same launch command — what should I do?

Uninstall one of them~~ Or you can clone this project and run it with $ python douban.py
gharchive/issue
2015-01-27T21:16:58
2025-04-01T04:36:01.194699
{ "authors": [ "ShiftWang", "taizilongxu" ], "repo": "taizilongxu/douban.fm", "url": "https://github.com/taizilongxu/douban.fm/issues/29", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1269587551
Need a way to allow service.mock suffix

Currently when I enter the service.mock suffix it doesn't match. I have to enter just the mock suffix, which isn't accurate enough.

This issue has been resolved in v1.0.4.

@MatanYadaev Sorry for the late response. This issue has been resolved in v1.0.4.
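For reference, allowing a compound suffix through the rule's options would presumably look like this sketch (the rule and option names here are assumptions based on the plugin's naming — check its README for the exact shape):

// .eslintrc.js — sketch only; option names are assumptions
module.exports = {
  plugins: ['angular-file-naming'],
  rules: {
    'angular-file-naming/service-filename-suffix': [
      'error',
      { suffixes: ['service', 'service.mock'] },
    ],
  },
};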
gharchive/issue
2022-06-13T15:09:28
2025-04-01T04:36:01.208879
{ "authors": [ "MatanYadaev", "takuya-nakayasu" ], "repo": "takuya-nakayasu/eslint-plugin-angular-file-naming", "url": "https://github.com/takuya-nakayasu/eslint-plugin-angular-file-naming/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
350367760
vorax4 won't start

Hello! Today an odd thing happened when I tried to use vorax4. I'm receiving the following message:

Initializing connection...
/usr/share/ruby/logger.rb:746:in `initialize'
/usr/share/ruby/logger.rb:746:in `open'
/usr/share/ruby/logger.rb:746:in `open_logfile'
/usr/share/ruby/logger.rb:738:in `set_dev'
/usr/share/ruby/logger.rb:673:in `initialize'
/usr/share/ruby/logger.rb:387:in `new'
/usr/share/ruby/logger.rb:387:in `initialize'
/home/fahad/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess.rb:36:in `new'
/home/fahad/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess.rb:36:in `logger'
/home/fahad/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess/abstract_process.rb:184:in `log'
/home/fahad/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess/unix/process.rb:35:in `exited?'
/home/fahad/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess/abstract_process.rb:134:in `alive?'
eval:33:in `sqlplus_alive?'
eval:14:in `with_sqlplus'
eval:2:in `'

Error detected while processing function 125_OpenNode[2]..302[8]..316[12]..124_Connect[7]..vorax#sqlplus#Connect[14]..vorax#sqlplus#Initialize[7]..vorax#sqlplus#ExecImmediate[11]..vorax#ruby#SqlplusExec: line 34: TypeError: no implicit conversion of Object into String
Error detected while processing function 125_OpenNode[2]..302: line 8: E171: Missing :endif

Would you please help me resolve this error? My work is currently really dependent on your plugin.

Hello, what has changed since before, when Vorax could start? It's an odd error...

Really, I don't know. Sqlplus is working fine from the terminal, and from vim.

Hi talek, is there a way to solve this issue? Thank you.

Hi, please tell me more about your configuration:
*) OS
*) ruby version
*) Vim version
*) Vorax version
Also, do you use an OS-provided vim with ruby support or did you compile Vim yourself?

OS Information
LSB Version: :core-4.1-amd64:core-4.1-noarch
Distributor ID: Fedora
Description: Fedora release 28 (Twenty Eight)
Release: 28
Codename: TwentyEight

Kernel Information
4.17.18-200.fc28.x86_64

Ruby Version
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]

Vim Version
VIM - Vi IMproved 8.1 (2018 May 18, compiled Aug 13 2018 13:12:23), included patches 1-279, modified and compiled by bugzilla@redhat.com. Huge version with GTK3 GUI. (The full `vim --version` feature matrix — notably including +ruby/dyn, +python3/dyn, +perl/dyn and +lua/dyn — and the compilation/linking flag dump followed here.)

Vorax Version
vorax_version = "4.3.55"

Also, I'm using an OS-provided vim with ruby support.

Ok, I was able to reproduce the issue on a Fedora system. Apparently the problem is lurking down in the childprocess library. What I did was to go directly into ~/.gem/ruby/gems/childprocess-0.9.0/lib/childprocess.rb and change line 36 as shown below:

The funny part is that the error appears only within the ruby code invoked from Vim, not when run directly as a plain/regular ruby script. Please check if this workaround can be used on your system.

Thank you for your note, I'm looking into it now. It is working fine now. I really appreciate it.
gharchive/issue
2018-08-14T10:23:31
2025-04-01T04:36:01.234124
{ "authors": [ "fahad3git", "talek" ], "repo": "talek/vorax4", "url": "https://github.com/talek/vorax4/issues/86", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1679385553
hotfix/dependency update

Changes/Commits:
- def: update flutter_local_notifications to the latest version — updated to ^14.0.0-dev.2 from ^13.0.0
- def: update url_launcher and firebase dependencies — url_launcher to ^6.1.10 from ^6.1.9, firebase_core to ^2.10.0 from ^2.4.0, firebase_messaging to ^14.4.1 from ^14.2.1

flutter test runs fine, and so does the example.

First of all, thank you for your contribution. We can't accept any contributions, no matter how small, if you haven't signed the CLA that you find in this repo, so please sign either the individual or the corporate CLA and mail it to legal@talkjs.com. That being said, we can't publish packages that depend on prereleases, so the flutter_local_notifications package cannot be at version 14.0.0-dev.2. Are you doing something that depends on that particular version that is not possible on version 13.0.0?

Ok, I'm going to close this pull request. Thanks.
gharchive/pull-request
2023-04-22T07:10:06
2025-04-01T04:36:01.241916
{ "authors": [ "bugnano", "theiskaa" ], "repo": "talkjs/talkjs-flutter", "url": "https://github.com/talkjs/talkjs-flutter/pull/31", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2039697652
redoc breaks when there is Json

The following code will break redoc:

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct Request {}

pub async fn request(
    Json(request): Json<Request>,
) -> impl IntoApiResponse {
    (StatusCode::OK, Json(())).into_response()
}

ApiRouter::new()
    .api_route("/foo", post(request))

I got:

Something went wrong...
Invalid reference token: definitions

Stack trace:
Error: Invalid reference token: definitions
    at o.get (http://localhost:3300/redoc:18:261574)
    at Yc (http://localhost:3300/redoc:49:71447)
    at eu (http://localhost:3300/redoc:49:76077)
    at nu.generateExample (http://localhost:3300/redoc:49:78343)
    at new nu (http://localhost:3300/redoc:49:77771)
    at http://localhost:3300/redoc:49:78906
    at Array.map (<anonymous>)
    at new au (http://localhost:3300/redoc:49:78877)
    at new su (http://localhost:3300/redoc:49:79673)
    at get requestBody (http://localhost:3300/redoc:49:82742)

ReDoc Version: 2.0.0
Commit: 5fb4daa

It is caused by #/definitions/ being the default definition path when aide::gen::extract_module(true) is not called. The docs of aide::gen::extract_module say the default is enabled, but SchemaSettings is not fully initialized in GenContext::new().
gharchive/issue
2023-12-13T13:18:16
2025-04-01T04:36:01.276196
{ "authors": [ "JakkuSakura" ], "repo": "tamasfe/aide", "url": "https://github.com/tamasfe/aide/issues/95", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2222746215
WWW'24 Papers about Machine Unlearning

Unlink to Unlearn: Simplifying Edge Unlearning in GNNs — https://arxiv.org/abs/2402.10695

Paper added.
gharchive/issue
2024-04-03T12:02:58
2025-04-01T04:36:01.285742
{ "authors": [ "adasken", "tamlhp" ], "repo": "tamlhp/awesome-machine-unlearning", "url": "https://github.com/tamlhp/awesome-machine-unlearning/issues/31", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
156603338
Periodic reporting params

Periodic reporting params added.

What's the purpose of CommandHandler? I think it complicates things. I've put some comments in the diff as well.

Regarding the CommandHandler: if there is an interface or base class, the handlers can be treated in a general way (in a list, in this case). If a new handler is implemented in the future, only one additional line is necessary in the SendCommandFragment. Something like this: commandHandlers.add(new NewCustomCommandHandler());

I still don't like the idea of CommandHandler. First of all, the name is confusing — the class basically updates the UI. Also, instead of having some mapping, you go through all the command handlers and call them all. Method names are weird as well. Let's go through each method:
- constructor — doesn't do anything useful
- onCommandSelected — this is the main method, but I think it doesn't have enough complexity to justify a separate class hierarchy to handle it
- onCommandNothingSelected — duplicates onCommandSelected
- onCommandAddParameters — a very simple method as well

You still have widgets in the fragment, so you are not actually decoupling the fragment from the command types. I think for now we can keep all the logic in the fragment (this is also consistent with the web app).

Changes are done.

Merged. Thanks for the pull request.
gharchive/pull-request
2016-05-24T20:38:01
2025-04-01T04:36:01.292923
{ "authors": [ "gaborgsomogyi", "tananaev" ], "repo": "tananaev/traccar-manager-android", "url": "https://github.com/tananaev/traccar-manager-android/pull/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
56786994
Version number is not provided

Hello! I am building traccar from source files on Ubuntu 12.04 LTS 32-bit. When I run the package.sh file according to step 13 at http://www.traccar.org/docs/build.jsp, I get this message: "Version number is not provided". Please suggest what I am missing.

You need to provide a version number as a command line argument.

Which version number? Please give me more details.

I simply passed 1.0 as the version number on the command line and the zip file was generated. But when I install traccar and start the daemon with the following command...

sudo /opt/traccar/bin/traccar start
[sudo] password for user:
Starting traccar...
Waiting for traccar..................
WARNING: traccar may have failed to start.

...and I can't open localhost:8082.

Have you fixed the issue?

It has been fixed automatically by putting a version number after the command. Or are you asking about something else?

You said you are getting "WARNING: traccar may have failed to start".

Yes sir, I was getting this message when starting the traccar daemon (sudo /opt/traccar/bin/traccar start). When I looked in the log files, there was a license error. After searching for this type of error, I found that I had installed the professional version of wrapper-delta-pack by mistake. I removed it and installed the community version, which is free to use, and that solved it.
gharchive/issue
2015-02-06T09:02:53
2025-04-01T04:36:01.297705
{ "authors": [ "khansquare", "tananaev" ], "repo": "tananaev/traccar", "url": "https://github.com/tananaev/traccar/issues/1064", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
463276631
A new question

Have you implemented this algorithm in other deep learning frameworks, such as PyTorch or Caffe?

I have only implemented this algorithm in TensorFlow. If I find implementations in other frameworks, I will post references in the readme.
gharchive/issue
2019-07-02T14:54:26
2025-04-01T04:36:01.298687
{ "authors": [ "tancik", "tongshiwen" ], "repo": "tancik/StegaStamp", "url": "https://github.com/tancik/StegaStamp/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1379110313
[NEW ICON] F# / FSharp

Skill Name: F#

Why? F# is an excellent functional programming language running on the .NET platform.

Reference Image

Please add this.
gharchive/issue
2022-09-20T09:33:23
2025-04-01T04:36:01.300494
{ "authors": [ "A-Boring-Square", "Kyocius" ], "repo": "tandpfun/skill-icons", "url": "https://github.com/tandpfun/skill-icons/issues/205", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
156542462
Link to import page

It looks like we have a page for scene import, but I don't see a link to it anywhere. We should add it to the nav (and possibly link elsewhere too). https://github.com/tangrams/tangram-docs/blob/gh-pages/pages/import.md

Resolved with https://github.com/mapzen/mapzen-docs-generator/pull/149 and 565ea5f09299797b415e13d3888c6e34c5d3a537
gharchive/issue
2016-05-24T15:46:43
2025-04-01T04:36:01.331657
{ "authors": [ "bcamper", "meetar" ], "repo": "tangrams/tangram-docs", "url": "https://github.com/tangrams/tangram-docs/issues/96", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2568300225
Add contributor.md file

I would like to add this because this file is a crucial document in open-source repositories, providing clear guidelines and instructions for anyone interested in contributing to the project. It outlines how contributors can get involved, the contribution process, and the repository's coding standards and rules. Please assign me this task.

Hi @tanishaness, could you please assign this to me?

Hi @tanishaness, I noticed that my contributions are not reflected on the leaderboard, and I would like to kindly request your assistance in rectifying this issue. It's important for me to have my contributions acknowledged, and I believe they have met the required criteria. Thank you for your attention to this matter. I appreciate your support, and I look forward to your response.
gharchive/issue
2024-10-05T20:12:11
2025-04-01T04:36:01.336762
{ "authors": [ "Amulya-B28", "Varsha567" ], "repo": "tanishaness/SPROCTOR", "url": "https://github.com/tanishaness/SPROCTOR/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1893824715
About the performance gap with the released checkpoints

Thanks for your great work. I have two questions:

1. With the same Kinetics-400 validation set (19796 videos) as that of mmaction, the same setting as your configs/recognition/vit/vitclip_base_k400.py (32 x 3 x 1 views during testing), and the checkpoint vit_b_clip_32frame_k400.pth you provided, my evaluation results on the Kinetics-400 validation set are 83.34 (acc@1) and 96.45 (acc@5), which is lower than your results given in README.md, i.e., 84.7 (acc@1) and 96.7 (acc@5). Is there any possible reason for the gap (e.g., do you have a smaller Kinetics-400 validation set due to expired links)?

2. The checkpoint vit_b_clip_32frame_diving48.pth you provided is tested on 32 x 1 x 1 views, according to README.md, but the views in configs/recognition/vit/vitclip_base_diving48.py are 32 x 1 x 3. My evaluation results are 88.43 (acc@1, 32 x 1 x 3) and 88.32 (acc@1, 32 x 1 x 1), which is lower than your result given in README.md, i.e., 88.9 (acc@1, 32 x 1 x 1). Is there any possible reason for the gap?

I am also confused about the following mismatch: the checkpoint vit_b_clip_32frame_k700.pth you provided is tested on 32 x 3 x 3 views, according to README.md, but the views in configs/recognition/vit/vitclip_base_k700.py are 8 x 3 x 3. With the same Kinetics-700 validation set (34824 videos) as that of mmaction, the checkpoint vit_b_clip_32frame_k700.pth you provided, and 32 x 3 x 3 testing views, my evaluation result on the Kinetics-700 validation set is 75.78 (acc@1), which is lower than your result given in README.md, i.e., 76.9 (acc@1). Is there any possible reason for the gap?

Hi @yangbang18, thanks for your interest in our work.

1. We have 19404 validation videos. We are using the Kinetics-400 dataset from here.
2. It may be caused by differences in environment and device. The config is an example; you can modify frames and frame_interval for different settings.
3. I don't have access to the K700 dataset now. We were downloading the K700 dataset following this.

Sorry, I can't visit your Kinetics-400 link (even with a VPN). BTW, I have some new findings recently:

1. With the same Kinetics-400 validation set (19796 videos) as that of mmaction, I reproduced the training process on 8 V100s with configs/recognition/vit/vitclip_base_k400.py, which produces 83.36 (acc@1) and 96.41 (acc@5) under 32 x 3 x 1 views. These results are similar to those of the checkpoint vit_b_clip_32frame_k400.pth you provided.

2. With your acc@1 (84.9% according to the paper) reported on 19404 videos, the performance of the model on my validation set (19796 videos) would fall in the range [(19404 * 84.9% + 392 * 0%) / 19796 = 83.2%, (19404 * 84.9% + 392 * 100%) / 19796 = 85.2%]. Given that my reproduced 83.36 is close to the lower bound (83.2), I suspect the 392 (19796 - 19404) videos missing from your validation set are hard for the model to classify.

3. About the claim "It may be caused by the difference of environment and device": I also gave this a try. I evaluated the released vit_b_clip_32frame_k400.pth checkpoint on a V100 and a 4090. Both devices gave the same results.

Hi, the link is from Academic Torrents. The link is provided in MMAction2. You may try another VPN. I will check the results on Diving48.

I downloaded Kinetics-400 at https://opendatalab.com/OpenMMLab/Kinetics-400, the same data as MMAction2 (i.e., the same number of training/validation videos). So did the Kinetics-700.

I can reproduce the Diving48 results by training, so you can overlook this part.

Hello @yangbang18, I've been trying to reproduce the Diving48 results by training recently, but I can't obtain the reported results. Could you kindly provide your settings, configuration, or log? Thank you.
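The interval in point 2 above is just the two extreme cases for the 392 videos absent from the authors' split — assume the model gets all of them wrong (lower bound) or all of them right (upper bound):

\[
\text{acc@1} \in \left[\frac{19404 \cdot 0.849 + 392 \cdot 0}{19796},\ \frac{19404 \cdot 0.849 + 392 \cdot 1}{19796}\right] \approx [83.2\%,\ 85.2\%]
\]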
gharchive/issue
2023-09-13T06:20:33
2025-04-01T04:36:01.375514
{ "authors": [ "hsi-che-lin", "taoyang1122", "yangbang18" ], "repo": "taoyang1122/adapt-image-models", "url": "https://github.com/taoyang1122/adapt-image-models/issues/35", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
172399295
After package update

[11:39:52] 'wiredep' errored after 91 ms
[11:39:52] TypeError: Cannot read property 'length' of undefined
    at flattenGlob (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/glob2base/index.js:9:25)
    at setToBase (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/glob2base/index.js:48:12)
    at module.exports (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/glob2base/index.js:56:19)
    at Object.gs.createStream (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/gulp/node_modules/glob-stream/index.js:34:42)
    at Object.gs.create (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/gulp/node_modules/glob-stream/index.js:76:43)
    at Gulp.src (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/gulp/node_modules/vinyl-fs/lib/src/index.js:33:23)
    at Gulp.<anonymous> (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/gulpfile.js:276:15)
    at module.exports (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/orchestrator/lib/runTask.js:34:7)
    at Gulp.Orchestrator._runTask (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/orchestrator/index.js:273:3)
    at Gulp.Orchestrator._runStep (/Users/alexander.sokolovskiy/projects/people/web/app/themes/people/node_modules/orchestrator/index.js:214:10)

[11:39:52] 'build' errored after 93 ms
[11:39:52] TypeError in plugin 'run-sequence(wiredep)'
Message: Cannot read property 'length' of undefined
Stack: same trace as above

[11:39:52] Finished 'default' after 96 ms

All configs are the defaults from https://github.com/roots/sage
gharchive/issue
2016-08-22T08:51:57
2025-04-01T04:36:01.380716
{ "authors": [ "Phalconline" ], "repo": "taptapship/wiredep", "url": "https://github.com/taptapship/wiredep/issues/254", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2700666313
🛑 Tardis Wiki is down

In be5fc48, Tardis Wiki (https://tardis.wiki/wiki/) was down:
HTTP code: 522
Response time: 19870 ms

Resolved: Tardis Wiki is back up in b1b4271 after 1 hour, 5 minutes.
gharchive/issue
2024-11-28T03:48:24
2025-04-01T04:36:01.417158
{ "authors": [ "Bongo50" ], "repo": "tardis-wiki/uptime", "url": "https://github.com/tardis-wiki/uptime/issues/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2398140714
feat: add log4rs logger

The purpose is to supplement the log crate by adding configurability through a yaml file. With one central logger, we are able to save data with more granularity, including separate output files rather than merely runtime logs. Reuses the log4rs implementation from the Tari L1 chain.

Resolves #17
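For readers unfamiliar with log4rs, the kind of yaml file the PR describes typically looks like this minimal sketch (the appender names and log path here are illustrative assumptions, not taken from this PR):

# log4rs.yml — illustrative sketch only
appenders:
  stdout:
    kind: console
  p2pool_file:
    kind: file
    path: log/p2pool.log
root:
  level: info
  appenders:
    - stdout
    - p2pool_file

The Rust side would then load this once at startup; log4rs provides an init_file helper for exactly that purpose.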
gharchive/pull-request
2024-07-09T12:54:56
2025-04-01T04:36:01.421002
{ "authors": [ "therealdannzor" ], "repo": "tari-project/sha-p2pool", "url": "https://github.com/tari-project/sha-p2pool/pull/18", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2130152868
fix: header sync and lingering volumes

Description
- Show header sync status
- Clean up anonymous volumes that some containers (tor 👀) leave lingering after shutdown

Motivation and Context
No lingering volumes and a more accurate setup.

How Has This Been Tested?
Manually

ACK
gharchive/pull-request
2024-02-12T13:35:04
2025-04-01T04:36:01.422841
{ "authors": [ "CjS77", "brianp" ], "repo": "tari-project/tari-launchpad", "url": "https://github.com/tari-project/tari-launchpad/pull/317", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
259198151
SyntaxError: Unexpected token =>

ElasticDump version: 3.3.1
Elasticsearch version: 1.7.3
Full command: elasticdump --input=http://localhost:9200/mean --output=/home/user/ElasticExportedData/elastic_mean_data.json --type=data

/home/user/.nvm/versions/node/v0.12.17/lib/node_modules/elasticdump/elasticdump.js:18
  .reduce((params, param) => {
                          ^^
SyntaxError: Unexpected token =>
    at exports.runInThisContext (vm.js:73:16)
    at Module._compile (module.js:443:25)
    at Object.Module._extensions..js (module.js:478:10)
    at Module.load (module.js:355:32)
    at Function.Module._load (module.js:310:12)
    at Module.require (module.js:365:17)
    at require (module.js:384:17)
    at Object.<anonymous> (/home/user/.nvm/versions/node/v0.12.17/lib/node_modules/elasticdump/bin/elasticdump:5:19)
    at Module._compile (module.js:460:26)
    at Object.Module._extensions..js (module.js:478:10)

Elasticdump requires Node v4 or higher (codified here). You are using v0.12. NPM should have warned you of this when it installed the project.
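The "codified here" requirement refers to the standard package.json engines field; declaring a minimum Node version looks roughly like this sketch (the exact range elasticdump uses may differ):

{
  "name": "elasticdump",
  "engines": {
    "node": ">=4.0.0"
  }
}

npm prints an unsupported-engine warning at install time when the running Node doesn't satisfy this range, which is the warning the maintainer refers to. Arrow functions like (params, param) => { ... } simply don't parse on Node 0.12, hence the SyntaxError.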
gharchive/issue
2017-09-20T15:06:35
2025-04-01T04:36:01.470747
{ "authors": [ "GITburakdeniz", "evantahler" ], "repo": "taskrabbit/elasticsearch-dump", "url": "https://github.com/taskrabbit/elasticsearch-dump/issues/349", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2251672302
Megathread: Streamer issues, DXFeed licensing, missing event types & symbols

Hi folks, I wanted to let everyone know about some developments that will affect a lot of users of the SDK. Recently, a lot of users have been having issues with the streamer (#135, #137, #141). It turns out this is due to some changes that have been implemented upstream in the API. For a bit of background reading see the Javascript SDK's page: tastytrade/tastytrade-api-js#7, tastytrade/tastytrade-api-js#9, tastytrade/tastytrade-api-js#40.

Long story short, there were two types of streamers supported in this SDK, the DXFeed and the DXLink streamer. DXFeed was an older protocol which now appears to be completely defunct. DXLink is the new protocol, and most users had already migrated to it. Not too big of a deal up to this point.

However, it appears that Tastytrade is now providing two different websockets for use with DXLink. The first, obtained by hitting the /quote-streamer-tokens endpoint, has been around for a long time. It is the endpoint used by official Tastytrade applications (I confirmed it's still being used by the Tasty web platform with a MITM just now), and doesn't have many restrictions with respect to event types, instrument types, etc. The second, newer one, /api-quote-tokens, is designed specifically for us API users. Apparently, the /quote-streamer-tokens endpoint is now reserved for internal usage, and it is considered a violation of DXFeed TOS to use it. According to Devin Moss from TT: "Distributing quote data is something that puts tastytrade at some risk of breeching [sic] licensing agreements, especially if they were to make it available to all users with API access. tastytrade has no control over what API users do with those quotes, which poses a real possibility that abusers could use the data to source their own proprietary projects."

Well, there are definitely plenty of people using the data "to source their own proprietary projects," though I thought that was kind of the whole purpose of the API... It certainly appears most users are using it alongside Tastytrade in a way that seems to fit within the API's intended purpose.

So where does that leave us? To start off, some "normal" users will be unaffected. Those who are not trading indices or futures and don't use greeks will hopefully be able to use the new websocket without problems. It's already mostly implemented in #140. However, there are many users who are relying on the streamer for other kinds of instruments (especially SPX, afaict) or events, like greeks. These users could continue to use the old DXLink websocket; however, Devin has also said: "For the time being, please only use the [/api-quote-tokens] endpoint. API users who rely on the /quote-streamer-tokens endpoint will be flagged and put on delayed quotes."

So this is pretty bad news for a lot of users. I guess it was too good to be true that simply creating a Tastytrade account was enough to get real-time data for any instrument you could want? Anyways, that's pretty much where we're at for now. It doesn't seem like Tastytrade will change this, but maybe if we get enough people to email them something can be worked out. Hypothetically, someone could also try to get around the restrictions on the /quote-streamer-tokens endpoint. This project has been using the default Python requests user agent for all REST requests, so I'd imagine that's probably how we're getting flagged. Purely hypothetical, of course ;)

So here's the plan. The main branch and the v8.0+ releases for this repository will follow the DXFeed TOS. However, I've created a branch advanced-streamer for people who need the old functionality, which I'll also release periodically as v7.1+. I'll do my best to keep it working if possible.

Thanks for all your hard work and the update. I view the loss of the ability to get greeks as an absolute disaster!

This is a good overview, @Graeme22. By the way, some users are completely fine with delayed quotes for automated trading via the API. There is also a technical problem with using dxfeed streaming quotes within API services on a backend: client.connect('wss://demo.dxfeed.com/dxlink-ws') — unlike the previous implementation, this is strictly designed for web and app clients. It is still possible to use it in the backend, but it looks awkward. I hope TT will figure out some pricing for API streaming. Of course, there is an avenue to sign up directly with DXFeed; the problem is that the whole industry, DXFeed included, is not particularly interested in catering to small API development shops. I'm also considering abstracting the quote service in the backend, so any provider can be plugged in if needed. The problem is that the TT backend is too tied to the formats and structures of dxfeed; I suspect it will be very hard to separate them. I haven't looked into that too much yet. It's interesting to hear what others are planning to do.

Please consider a larger buffer for the advanced streamer, because subscribing to TAS for the whole SPX chain is unstable with too many subscribe calls. Also, consider sending a different user-agent header instead of python requests.

So that's actually not in my control at all. Personally I haven't had problems with the whole SPX chain, but you could consider stitching together two separate streamers if it's not working for you. The v7.1 release is already using a random, realistic user agent!

Are you subscribing to TAS of the full chain?

Nope, I was doing Greeks and quotes.

I was using the old repo under the old tastyworks package before switching to DXLinkStreamer here, and I was able to chunk 10k symbols per subscribe call with the old streamer. With the new streamer I can do a max of 2k before I hit the buffer size. Not sure if this is the reason for the streamer dropping more often (sending too many subscription calls). Also, the user agent is not randomized for logging in to tastytrade (TT could be flagging via that).

Not much that can be done here, though since it's something you only have to do once it shouldn't be too big of a deal? Granted, I don't know your use case. Ha, good point about the login user agent! Would you be able to do a PR to fix that?

Hi all, can someone please confirm if VXX, UVIX, and VIXY are working? I've been facing issues with an older version of the API (6.4) that has become very unreliable in receiving those quotes for the past few days. It was working fine for many months.

That's probably the issue. Could you see if the latest (7.2) fixes it? Those are ETFs, so there's no reason they shouldn't work normally.

Thank you for your note, I'm looking into it now. I need to perform minor surgery on it, as I had forked the repository to implement a mechanism that inserts an SSL certificate into the Streamer class to connect with the websocket. By the way, this would be a great, fundamental feature. Additionally, it's not so easy to test, since it sometimes works and sometimes doesn't over the past few days.

You're welcome to open a PR, though I'm not sure why we'd need to secure quote streams — maybe you could enlighten me on that?

I had to set up a complete RnD project on Windows Server 2022 to figure out why the websocket for live data was delivering no data or unreliable data. The same code ran fine on my Windows 11 computer, Linux, and Mac. As soon as I switched to Windows Server 2022, it didn't work. I received Python error messages indicating that there was something wrong with the SSL certificates, or perhaps they weren't properly found on the server. Consequently, I assumed that the upstream library needed them. After I added these cert.pem files to the websocket.connect, it worked flawlessly for months for me and many of my colleagues.

Got an email from TT — seems like the new API should now support indices and futures? Can anyone confirm?

As of now, I was able to get quotes for futures options but not indices. Greeks did not work.

Interesting! Could you open a PR to add this behavior optionally? E.g. with a flag to turn it on. And maybe add what you just said to the docs on the streamer as well?

Are you still able to get the greeks? If yes, with the 7.1 branch?

Is there a way to get the bid and ask price of an asset without using the streamer? I can't get the quote examples from the docs to run using the DXLinkStreamer; I get "TastytradeError: Connection timed out".

Nope, the streamer is required for live prices. What version of the SDK are you using?

Hi @Graeme22, I was using version 8.0, from a Jupyter notebook. I just tried running it from an IDE and now I see that the issue is related to an SSL certificate: SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000). I installed certificates, and now it's working. Thanks!

Looks like the reported issue which I highlighted 2 weeks ago...

Could be! @ferdousbhai what OS are you on? @kaidaniel82 could you open a PR with your fix?

MacOS

import ssl
ssl_context = ssl.create_default_context(cafile=cert_path)
async with DXLinkStreamer(session, ssl_context=ssl_context) as data_feed:

Thanks @kaidaniel82! Did you find that it was necessary to use specific settings for the SSL context? Or was it just the presence of the context that made everything work?

I believe my suggestion and the one from certifi are essentially the same. certifi packages the certificates in the site-packages folder and provides the path to them via its where() function. I have also used this method elsewhere in my app, specifically when using a Discord API in my product, where I use certifi. Due to the development history, I previously downloaded the certificates from Mozilla for the Tasty API and stored them in my app as files. These are, to my knowledge, the same certificates that certifi includes. Both methods should work and, as far as I understand, have the same effect.

Can anyone confirm if the streamer is working for options on indices (e.g. SPY)?

Hi, the streamer works with every symbol that can be traded on TastyTrade. So SPY and other indices are available.

I have not been able to stream SPY options quotes. The program just hangs awaiting a quote from the streamer.

Can you post your code, SDK version, etc.?

session = ProductionSession(username, password)
async with DXLinkStreamer(session) as streamer:
    # Initial subscription to SPY
    print("Subscribing to stream...")
    await streamer.subscribe(EventType.QUOTE, [symbol])
    print(f"Subscribed to {symbol} stream.")
    quote = await streamer.get_event(EventType.QUOTE)
    print(f"Receiving quotes: {quote.eventSymbol} has ask price of {quote.askPrice}")

The program hangs forever on the quote line.

And what are you using as symbol? It would be helpful to give the full function and the call to that function.

Here is a simplified version of the code, but it captures all the essentials:

import asyncio
import os
import datetime as dt
import pandas as pd
from tastytrade import ProductionSession, Account, DXLinkStreamer
from tastytrade.dxfeed import EventType
from datetime import datetime
import pandas_market_calendars as mcal
import pytz
from dotenv import load_dotenv

async def main(session, symbol, market_close):
    async with DXLinkStreamer(session) as streamer:
        # Initial subscription to SPY
        print("Subscribing to stream...")
        await streamer.subscribe(EventType.QUOTE, [symbol])
        print(f"Subscribed to {symbol} stream.")
        # Continuously listen for quotes
        print("Starting while loop for stream.")
        # resp = streamer._connect()
        while dt.datetime.utcnow() < market_close:
            # Get the current time in that timezone
            # new_york_time = datetime.now(new_york_tz)
            # Market close given in UTC time
            print("In the loop, awaiting quote...")
            print(f"The event type is {EventType.QUOTE}")
            # resp = streamer.listen(EventType.QUOTE)
            quote = await streamer.get_event(EventType.QUOTE)
            print(f"Receiving quotes: {quote.eventSymbol} has ask price of {quote.askPrice}")

load_dotenv()
symbol = 'SPY'
# symbol = 'TSLA'
username = os.environ.get('TastyTrade_Username')
password = os.environ.get('TastyTrade_Password')
account_num = os.environ.get('TastyTrade_AccountNum')
session = ProductionSession(username, password)
expiration = dt.date.today() + dt.timedelta(days=1)

# Create a calendar object for NYSE
nyse = mcal.get_calendar('NYSE')
# Create a timezone object for New York
new_york_tz = pytz.timezone('America/New_York')
current_time_ny = datetime.now(new_york_tz)
current_date_ny = current_time_ny.date()

# Get the trading days for the current month
# Adjust the start and end dates as needed to cover the current date
SPY_time_offset = 15  # SPY options close 15 minutes AFTER market
start_date = current_date_ny.replace(day=1)
end_date = current_date_ny.replace(day=1).replace(month=current_date_ny.month % 12 + 1) - dt.timedelta(days=1)
schedule = nyse.schedule(start_date=start_date, end_date=end_date)
day_schedule = schedule[schedule.index == pd.to_datetime(current_date_ny)]
market_close = day_schedule['market_close'].iloc[0]
market_close = market_close.replace(tzinfo=None)  # This is given in UTC
market_close = market_close + dt.timedelta(minutes=SPY_time_offset)

# Run Streamer
if __name__ == "__main__":
    asyncio.run(main(session=session, symbol=symbol, market_close=market_close))

I ran your code and I'm getting the SPY quote back. Ran it on both branches (master and advanced_stream):

Quote(eventSymbol='SPY', eventTime=0, sequence=0, timeNanoPart=0, bidTime=0, bidExchangeCode='Q', askTime=0, askExchangeCode='Q', bidPrice=Decimal('540.71'), askPrice=Decimal('540.74'), bidSize=721, askSize=1037)

Hmmm, I'm not sure what is wrong on my end in that case.

Ok thanks, I uninstalled the package and re-installed. Working again, appreciate the help.

Great! By the way, some of the functions you were creating are already included in the SDK; you may find them useful: https://tastyworks-api.readthedocs.io/en/latest/tastytrade.html#tastytrade.utils.now_in_new_york

Am I correct in saying that at the moment the DXLink WebSocket can only be used to stream Quote data and none of the other market events, like Greeks or Profile, which users of the tasty API have access to? Do I need to go through DXFeed to get the option chain, for instance? Not directly related to the SDK, but I would love any general pointers.

Nope! As long as you're using the SDK, it'll behave like one of the official Tasty clients and you can get quotes, greeks, candles and more with no problems.

Thanks Graeme... I guess nobody knows when they will add Greeks to their DXLink offering? From the tasty documentation, which I only discovered today:

Firstly, thanks for all your effort. I ran into this issue as well on Mac, but I was able to fix it this way. I just have a question: if I want to deploy my app with this DXLinkStreamer on a cloud such as GCP or AWS, do I still need this bit or not?

Hi, glad to help! I think probably not; I've been deploying to Heroku without any issues, so I suspect it's a Mac-only thing.

@Graeme22 I ran into this issue on my Windows server. This forced me to use the method I described...

Confirmed not Mac-only! TL;DR use Linux 😁

Hey @Graeme22, ran into a warning regarding this: DeprecationWarning: ssl.SSLContext() without protocol argument is deprecated, from async with DXLinkStreamer(s, ssl.SSLContext(cafile=certifi.where())) as streamer:. I suppose we just ignore it? Thanks

Yup! If you ever get an actual error instead of a warning, you can follow the instructions here: https://stackoverflow.com/a/72810634/13938613

Thanks Graeme... I guess nobody knows when they will add Greeks to their DXLink offering?

fyi, greeks are available. 🤫 https://github.com/tastyware/tastytrade/blob/97e1bc6632cfd4a15721da816085eb906a02bcb0/docs/data-streamer.rst#L76

Hi @Graeme22, I am thinking about building an option strategy scanner. In this app I would need to subscribe to Quotes and Greeks for 220,000 option contracts, and I'm thinking of creating as many parallel streamers/websocket connections as there are instruments, ca. 180–190. Do you think it is feasible to get quotes and greeks for 220k contracts every 1–5 minutes (the less the better)? Is there any other bottleneck when using simultaneous connections and subscribing to ca. 2000–2500 streamer symbols each? Have you tried the API in such a high-performance environment?

Closing this, as Tastytrade has been letting people use the streamer with full capabilities for quite a while now. If at some point something changes, Tasty adds greeks, or licensing becomes a problem again, I'll reopen.

I can't seem to get Candle or Quote data for SPX/SPXW/XSP. I've tried using the Go SDK, the JavaScript SDK, and sending the websocket events without using an SDK at all, and nothing returns. Here is an example with the JavaScript SDK — have I missed something?

import { DXLinkWebSocketClient, DXLinkFeed, FeedContract, FeedDataFormat } from "@dxfeed/dxlink-api";

const client = new DXLinkWebSocketClient();
client.connect("wss://tasty-openapi-ws.dxfeed.com/realtime");
// Token returned from /api-quote-tokens
client.setAuthToken("...");

const feed = new DXLinkFeed(client, FeedContract.AUTO);
feed.configure({
  acceptAggregationPeriod: 10,
  acceptDataFormat: FeedDataFormat.COMPACT,
  acceptEventFields: { Candle: ["eventSymbol", "open", "close", "high", "low", "volume"] },
});
feed.addSubscriptions({ type: "Candle", symbol: "SPX" });
feed.addEventListener((events) => console.log(events));

setInterval(() => {}, 1 << 30);
gharchive/issue
2024-04-18T22:16:23
2025-04-01T04:36:01.509415
{ "authors": [ "DustinJSilk", "Graeme22", "OperationalFallacy", "Quenos", "ferdousbhai", "fisher8-jake", "inanisvitae", "joshuakoh7", "kaidaniel82", "leaph", "marwinsteiner", "pangyuteng", "some-person-0" ], "repo": "tastyware/tastytrade", "url": "https://github.com/tastyware/tastytrade/issues/142", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1182786716
Adding a shortcut to vote

On the original Trello voting power-up, you can vote on a card by pressing the key v while hovering over it, or while inside a card. It would be handy to have the same option in the Leaner Coffee power-up. Thank you.

Hi, Amanda. Unfortunately, that's not technically possible (yet). It's been asked before (see #37 and #40), so you're not alone in wishing this functionality were available. I bumped this old thread in Trello's community forums; I'll wait a few more days to see whether anyone from Atlassian gives us an update.

Alas, it appears it's still not going to be possible, and won't be for a while. 😕 (screenshots from the community forums, link in the previous post)
gharchive/issue
2022-03-28T02:21:01
2025-04-01T04:36:01.514169
{ "authors": [ "amandavarella", "tatablack" ], "repo": "tatablack/leaner-coffee-powerup", "url": "https://github.com/tatablack/leaner-coffee-powerup/issues/55", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
797287837
I'd like to change the background color applied after a Misskey reaction

Current situation: on Misskey, the background color applied after adding a reaction (the light-blue area in the image below) does not appear to be changeable from the settings. Reactions are a frequently used feature, so I think they have a fairly large impact on the app's appearance. Thank you for your consideration.

Addressed in https://github.com/tateisu/SubwayTooter/commit/4cf16c6ee890a7d492cd90bf9bc68dffff879e9d. It should be released eventually.

Thank you for addressing this!
gharchive/issue
2021-01-30T00:27:17
2025-04-01T04:36:01.516932
{ "authors": [ "kintsuba", "tateisu" ], "repo": "tateisu/SubwayTooter", "url": "https://github.com/tateisu/SubwayTooter/issues/150", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1921770209
feat/user-feedback

Please review before merging.

:tada: This PR is included in version 1.72.0 :tada:

The release is available on GitHub release

Your semantic-release bot :package::rocket:
gharchive/pull-request
2023-10-02T11:53:59
2025-04-01T04:36:01.537917
{ "authors": [ "romain-cambonie" ], "repo": "taxi-gestion/client", "url": "https://github.com/taxi-gestion/client/pull/133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
473074199
Track Assist move to infer moveset probabilities

Sample messages:

|move|p1a: Delcatty|Assist|p1a: Delcatty
|move|p1a: Delcatty|Leech Seed|p2a: Heatran|[from]Assist

Assist randomly calls a move from a party pokemon's moveset (even fainted pokemon). If the called move is not already known to be in any of the user's party movesets, consider it as a possibility for any non-complete moveset. A move called by Assist means that one of the user's party pokemon definitely has this move; what isn't definite is which pokemon has the move, or whether multiple pokemon have it. Not sure if it would be useful to track this.

Obsoleted by #321. Keeping things simple, and this case is too rare/obscure to be seen in actual play.
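Since this repo is TypeScript, the inference described above might look like the following sketch (the type and method names here are invented for illustration and don't match the project's actual data structures):

// Hypothetical moveset tracker: each party slot keeps confirmed moves plus a
// set of still-possible moves for its unknown slots.
interface MovesetInfo {
  confirmed: Set<string>;
  possible: Set<string>; // candidates for the remaining unknown slots
  isComplete(): boolean;
}

// Called when we see "|move|...|X|...|[from]Assist".
function onAssistMove(party: MovesetInfo[], move: string): void {
  const alreadyKnown = party.some(m => m.confirmed.has(move));
  if (alreadyKnown) return; // no new information gained
  // Someone in the party definitely has this move, but we don't know who:
  // mark it as a possibility for every moveset that still has open slots.
  for (const moveset of party) {
    if (!moveset.isComplete()) moveset.possible.add(move);
  }
}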
gharchive/issue
2019-07-25T21:17:04
2025-04-01T04:36:01.539411
{ "authors": [ "taylorhansen" ], "repo": "taylorhansen/pokemonshowdown-ai", "url": "https://github.com/taylorhansen/pokemonshowdown-ai/issues/83", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
497791543
ignore the contents of markdown blockquotes > none of this should be spell checked Blockquotes, right? Or are we talking about code blocks? If it's the former, I wouldn't be averse to adding a way to configure the tool to ignore the contents of blockquotes. The latter should already be ignored. Blockquotes. I thought about this more and I don't know if it makes sense to add a configuration option for this. It seems like ignoring blockquotes would be a relatively rarely-used feature.
gharchive/issue
2019-09-24T16:19:20
2025-04-01T04:36:01.643028
{ "authors": [ "hkiang01", "tbroadley" ], "repo": "tbroadley/spellchecker-cli", "url": "https://github.com/tbroadley/spellchecker-cli/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2312633588
Create 2 warehouses for each site On creation of a Site document, two warehouses must be created: one with the same name as the site, and the other with the suffix '-Scrap'. For example, if the Site name is Dummy, the warehouse names will be Dummy and Dummy-Scrap. Okay sir, I will do it.
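A rough sketch of how this could be wired up if this is a Frappe/ERPNext app (the hook layout, module path, and field names below are assumptions for illustration, not the repository's actual code):

# hooks.py (assumed layout; module path is hypothetical)
doc_events = {
    "Site": {"after_insert": "hrcinfra.utils.create_site_warehouses"},
}

# hrcinfra/utils.py (hypothetical module)
import frappe

def create_site_warehouses(doc, method=None):
    # One warehouse named after the site, one with the '-Scrap' suffix.
    for name in (doc.name, f"{doc.name}-Scrap"):
        if not frappe.db.exists("Warehouse", {"warehouse_name": name}):
            frappe.get_doc({
                "doctype": "Warehouse",
                "warehouse_name": name,
                "company": doc.get("company"),
            }).insert(ignore_permissions=True)

Registering the handler on after_insert (rather than validate) keeps the warehouses from being created when the Site document fails validation.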
gharchive/issue
2024-05-23T11:11:14
2025-04-01T04:36:01.707067
{ "authors": [ "abdul-hannan-shaikh", "partiksha021" ], "repo": "tcbinfotechpvtltd/hrcinfra", "url": "https://github.com/tcbinfotechpvtltd/hrcinfra/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1710790843
doesn't build

Starting Performing "debug" build using /usr/bin/dmd for x86_64.
Building bc-string 1.3.3: building configuration [default]
/home/user/.dub/packages/bc-string-1.3.3/bc-string/source/bc/core/demangle.d(917,37): Error: returning `this.parseTypeFunction(name, IsDelegate.no)` escapes a reference to parameter `this`
/home/user/.dub/packages/bc-string-1.3.3/bc-string/source/bc/core/demangle.d(782,12): perhaps change the `return scope` into `scope return`
/home/user/.dub/packages/bc-string-1.3.3/bc-string/source/bc/core/demangle.d(2036,14): Error: template instance `bc.core.demangle.Demangle!(NoHooks)` error instantiating
Error /usr/bin/dmd failed with exit code 1.

On line 782, transposing `scope` and `return` (i.e. changing `return scope` to `scope return`, as the compiler hint suggests) removes the failure.
https://github.com/tchaloupka/bc-string/pull/6
gharchive/issue
2023-05-15T20:39:27
2025-04-01T04:36:01.712791
{ "authors": [ "redthing1" ], "repo": "tchaloupka/bc-string", "url": "https://github.com/tchaloupka/bc-string/issues/5", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
92680995
iterator - string - test description is wrong The last test on the iterator - string kata is wrong; I think it should say done=true, which is also what the assert asks for. Great project by the way, would love to hear if you need any help. Oh right, the text above is wrong. Thanks for finding it. If you want to, you can write katas, see github.com/tddbin/katas. PRs very welcome. Fix is on the way. Wolfram. does this fix it? https://github.com/tddbin/katas/commit/9be329e87e991e88b647c5af1a8b8b7d855151f8 closing it, hope that's ok great, thanks!
gharchive/issue
2015-07-02T16:37:29
2025-04-01T04:36:01.776934
{ "authors": [ "chiptus", "tddbin", "wolframkriesing" ], "repo": "tddbin/tddbin-frontend", "url": "https://github.com/tddbin/tddbin-frontend/issues/17", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
595877574
#400 Wrong file identifier/HTTP URL provided error when sending an image second time from Telegram X (first one was sent with timer and expired) Steps:
1. Send a new image with a timer (first-time upload).
2. Wait for the peer to view the image and the timer to expire.
3. Upload the same image again without a timer.
This was fixed server-side.
gharchive/issue
2020-04-07T13:41:54
2025-04-01T04:36:01.781840
{ "authors": [ "Sajilck", "levlam" ], "repo": "tdlib/td", "url": "https://github.com/tdlib/td/issues/991", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
61151777
In RDF, should SKOS be used for ac:layer In the trivial RDF rendering at http://species-id.net/wiki/Audubon_Core_Term_List_RDF_Version, the layer is represented as an ac term. Should it be rendered as a SKOS property instead? Original issue reported on code.google.com by morris.bob on 1 Oct 2012 at 2:03 OK, currently there is a term called tdwgutility:layer that is being used to provide the information about the layer, with additional terms for "required" and "repeatable" See https://github.com/tdwg/rs.tdwg.org/blob/split_ac_terms/Iptc4xmpExt-for-ac/Iptc4xmpExt-for-ac-column-mappings.csv for example. The actual sorting into categories is done by tdwgutility:organizedInClass, information that's used to categorize terms in the autogenerated page https://github.com/tdwg/ac/blob/documentation-conversion/doc/termlist.md See https://github.com/tdwg/ac/blob/documentation-conversion/code/build_page.py for how it's being done. I don't think we need to go down the full SKOS route with AC terms since SKOS collections are intended for concepts (i.e. controlled vocabulary terms) rather than properties as we have in AC.
gharchive/issue
2015-03-13T17:34:00
2025-04-01T04:36:01.786298
{ "authors": [ "GoogleCodeExporter", "baskaufs" ], "repo": "tdwg/ac", "url": "https://github.com/tdwg/ac/issues/26", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
364517702
SIFT install on Ubuntu 18.04.1 LTS Incomplete due to Failures
I'm trying to install SIFT on Ubuntu 18.04.1 LTS and getting the following results.

Incomplete due to Failures -- Success: 199, Failure: 82
List of Failures (first 10 only)
NOTE: First failure is generally the root cause.
IMPORTANT: If opening a ticket, please include this information.
- ID: python-software-properties
  SLS: sift.packages.python-software-properties
  Run#: 0
  Comment: Problem encountered installing package(s). Additional info follows:

It appears the failure is due to python-software-properties, so I tried to find and install this package manually and found...

$ apt-get install python-software-properties
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package python-software-properties is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or is only available from another source
However the following packages replace it: software-properties-common
E: Package 'python-software-properties' has no installation candidate

When I tried to install the replacement (software-properties-common), I got...

$ apt-get install software-properties-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
software-properties-common is already the newest version (0.96.24.32.5).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

Is there something I can do to work around these errors and package changes, or is this something that needs to be fixed on the SIFT package end of things?

18.04 is not yet supported.

any idea when it will be?

I'm having the same issues with 18.04. I am going to try to use an earlier version of Ubuntu. I did re-read the instructions and it does state to download Ubuntu 16.04.

18.04 is still not supported 100%.

When can we expect 18.04 to be supported?

Beta is up. Instructions are at https://github.com/teamdfir/sift-saltstack
gharchive/issue
2018-09-27T14:56:49
2025-04-01T04:36:01.848870
{ "authors": [ "DataLoreSpecialist", "ekristen", "jma4227", "qiMakur" ], "repo": "teamdfir/sift", "url": "https://github.com/teamdfir/sift/issues/312", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1731647833
news added

Description
As already mentioned in #ISSUE_NUMBER, this PR tackles:
...
...
...
In particular, the ...

Checklist
[ ] I followed the indications in the CONTRIBUTING
[ ] The documentation related to the proposed change has been updated accordingly (also comments in code).
[ ] Have you written new tests for your core changes, as applicable?
[ ] Have you successfully run tests with your changes locally?
[ ] Ready for review! :rocket:

Fixes
Fixes #

@bfabio when you have a moment, thanks
gharchive/pull-request
2023-05-30T07:05:33
2025-04-01T04:36:01.855878
{ "authors": [ "danieledebernardinDTD" ], "repo": "teamdigitale/padigitale2026.gov.it-site", "url": "https://github.com/teamdigitale/padigitale2026.gov.it-site/pull/654", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1077474802
🛑 PMS Application - https://app.staging.greyfinch.com/ is down In 0616db3, PMS Application - https://app.staging.greyfinch.com/ (https://app.staging.greyfinch.com/) was down: HTTP code: 503 Response time: 172 ms Resolved: PMS Application - https://app.staging.greyfinch.com/ is back up in ff7fb30.
gharchive/issue
2021-12-11T08:57:34
2025-04-01T04:36:01.862435
{ "authors": [ "neil-buckley" ], "repo": "teamgreyfinch/staging-status", "url": "https://github.com/teamgreyfinch/staging-status/issues/731", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
591724029
Unable to install hephy workflow
While following the docs on setting up Workflow, I ran the following command, which is failing. Looks like this is deprecated and needs to be replaced with apps/v1?

➜ helm install hephy/workflow --namespace deis --set router.host_port.enabled=true
Error: validation failed: [unable to recognize "": no matches for kind "DaemonSet" in version "extensions/v1beta1", unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"]

The issue is for sure the deprecated extensions/v1beta1 API; we need to add support for apps/v1 (which has not landed in a Workflow release yet.)

You're absolutely right, but for now there are two options:

You can install an earlier kubelet version; the deprecation was made final in 1.16.0, so you can still use the latest Workflow release, v2.21.4, with kubelet control plane 1.15.10.

You can still enable those deprecated API groups, at least up to Kubelet 1.17.x (not sure about 1.18), by passing a runtime-config at (I think) kubelet start time:

--runtime-config=extensions/v1beta1/daemonsets=true,extensions/v1beta1/deployments=true,extensions/v1beta1/replicasets=true,extensions/v1beta1/networkpolicies=true

This is a priority issue for us, thanks for reporting it; I think we will keep this open even though we are already aware, and there are other issues representing this problem, because it is a blocker for new users who might not know anything about API deprecations yet, (in some near future state we'd like to hopefully be able to keep it that way at least until you know the ropes!)

@kingdonb Thanks for the quick reply. I have created a new minikube cluster with v1.15.11, and when installing Workflow, it fails with

➜ ~ helm install hephy/workflow --namespace deis --set router.host_port.enabled=true
NAME: pruning-orangutan
Error: transport is closing

@ChillarAnand We discovered this the other day as well, sorry I did not include it in my reply. For some reason, Hephy Workflow also does not currently appear to work with Helm 2.16.x. Helm 2.15.x and Helm 3.x are both OK with the latest Hephy Workflow, from my testing at least. Not sure why Helm 2.16 has a problem, but it does seem to have more than one problem here, perhaps trying to warn us about problems in our chart before similar breaking changes land in Helm 3? We've also had verification that Helm 2.14.x works fine. Also found that Helm 3.1.2 seemingly works fine. We did not have to change our charts at all to accommodate the major upgrade. We've seen different issues with Helm 2.16.
If you look at your kube-system tiller deployment pod's logs after transport is closing, you will probably see that it's crashing with something like this:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x14e314f]
goroutine 832 [running]:
k8s.io/helm/pkg/kube.getSelectorFromObject(0x1c01c40, 0xc00087a000, 0xc00087a000, 0x0)
    /go/src/k8s.io/helm/pkg/kube/client.go:924 +0x26f
k8s.io/helm/pkg/kube.(*Client).getSelectRelationPod(0xc00030a3a0, 0xc000238e00, 0xc000af8030, 0xc000b07770, 0x18, 0xc0000ea648)
    /go/src/k8s.io/helm/pkg/kube/client.go:1104 +0x195
k8s.io/helm/pkg/kube.(*Client).Get.func2(0xc000238e00, 0x0, 0x0)
    /go/src/k8s.io/helm/pkg/kube/client.go:366 +0xd1
k8s.io/helm/pkg/kube.batchPerform.func1(0xc00008f0e0, 0xc00079aea0, 0xc0003e0e10, 0xc000238e00)
    /go/src/k8s.io/helm/pkg/kube/client.go:752 +0x30
created by k8s.io/helm/pkg/kube.batchPerform
    /go/src/k8s.io/helm/pkg/kube/client.go:751 +0xb8

We saw different errors when users bumped their Helm client back down to an earlier version but forgot to helm init --upgrade to bring tiller version to the same mark. Make sure your Helm server and client versions match in helm version output if you are still on Helm 2.x. It does not seem right to me that charts which worked fine in 2.15.x are broken in 2.16.x, but I also thought the Helm 2.x release series would be over at the end of the 2.15.x line. I will see if the Helm team can help, I can't say if there's a good explanation for this. @bacongobbler ?

It also seemed that Helm 2.16.x was pulling a much older version of Hephy Workflow from the chart museum library. Please note the latest version is v2.21.4, not v2.20.1. Maybe this is sorting differently because our chart version is not strictly semver (the v is not in the semver spec)?

$ helm ls
NAME   REVISION  UPDATED                   STATUS    CHART             APP VERSION  NAMESPACE
hephy  2         Sat Mar 28 10:40:57 2020  DEPLOYED  workflow-v2.20.1               deis

I looked through some other current charts and they all seemed to be compliant with this strict interpretation of semver, so maybe this is something we need to change on our end. I did not find a chart maintainer guide to moving to Helm 3 but I'm sure there is such a guide, my goog-fu is the problem. From the Helm.sh home page it looks like Helm 2.15 is not quite advertised anymore, only 2.16 and 2.14, it's not clear to me why this is either. (I'd recommend going with Helm 3, as it is the latest version and worked without issues under our supported k8s versions.)

@kingdonb Thanks for the detailed explanation and suggesting a solution. I upgraded Helm to 3.x and it still failed to install Workflow. However, downgrading Helm to 2.15.2 worked. It installed Workflow. Some of the pods are in CrashLoopBackOff state.
➜ k8s kd get po
NAME                                    READY  STATUS             RESTARTS  AGE
deis-builder-57cf7db484-v7dp5           0/1    Running            2         67m
deis-controller-7fbd88b5f8-gdf76        1/1    Running            3         67m
deis-database-66f68b9776-ws4lr          1/1    Running            0         67m
deis-logger-58bb65fdd7-hhkjl            1/1    Running            2         67m
deis-logger-fluentd-bk8p6               0/1    CrashLoopBackOff   18        67m
deis-logger-redis-5976d6ddc7-66jqr      1/1    Running            0         67m
deis-minio-54f57f88b8-ppp5x             1/1    Running            0         67m
deis-monitor-grafana-6454b46bd8-67j6m   1/1    Running            0         67m
deis-monitor-influxdb-69f85d6d56-jv8ph  1/1    Running            0         67m
deis-monitor-telegraf-d9xqc             0/1    RunContainerError  18        67m
deis-nsqd-6dc776449-5hcrp               1/1    Running            0         67m
deis-registry-677fdfd6cf-6hgxw          1/1    Running            1         67m
deis-registry-proxy-r5x82               1/1    Running            0         67m
deis-router-f9998457c-8c4sz             0/1    CrashLoopBackOff   25        67m
deis-workflow-manager-5564f67d4-2lxb9   1/1    Running            0         67m

➜ k8s kd logs deis-router-f9998457c-8c4sz
2020/04/02 14:19:38 INFO: Starting nginx...
2020/04/02 14:19:38 INFO: nginx started.
2020/04/02 14:19:38 Error building model; not modifying certs or configuration: deployments.extensions "deis-router" is forbidden: User "system:serviceaccount:deis:deis-router" cannot get resource "deployments" in API group "extensions" in the namespace "deis".
2020/04/02 14:19:48 Error building model; not modifying certs or configuration: deployments.extensions "deis-router" is forbidden: User "system:serviceaccount:deis:deis-router" cannot get resource "deployments" in API group "extensions" in the namespace "deis".
2020/04/02 14:19:58 Error building model; not modifying certs or configuration: deployments.extensions "deis-router" is forbidden: User "system:serviceaccount:deis:deis-router" cannot get resource "deployments" in API group "extensions" in the namespace "deis".
2020/04/02 14:20:08 Error building model; not modifying certs or configuration: deployments.extensions "deis-router" is forbidden: User "system:serviceaccount:deis:deis-router" cannot get resource "deployments" in API group "extensions" in the namespace "deis".
➜ k8s kd logs deis-logger-fluentd-bk8p6
2020-04-02 14:26:57 +0000 [info]: parsing config file is succeeded path="/fluentd/etc/fluentd.conf"
#<Thread:0x00005634bf0d48d8@/usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-kubernetes_metadata_filter-2.1.4/lib/fluent/plugin/filter_kubernetes_metadata.rb:243 run> terminated with exception (report_on_exception is true):
/usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-kubernetes_metadata_filter-2.1.4/lib/fluent/plugin/kubernetes_metadata_watch_namespaces.rb:34:in `rescue in start_namespace_watch': start_namespace_watch: Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.96.0.1:443: namespaces is forbidden: User "system:serviceaccount:deis:deis-logger-fluentd" cannot list resource "namespaces" in API group "" at the cluster scope ({"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces is forbidden: User \"system:serviceaccount:deis:deis-logger-fluentd\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope","reason":"Forbidden","details":{"kind":"namespaces"},"code":403} (Fluent::ConfigError)
)
from /usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-kubernetes_metadata_filter-2.1.4/lib/fluent/plugin/kubernetes_metadata_watch_namespaces.rb:27:in `start_namespace_watch'
from /usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-kubernetes_metadata_filter-2.1.4/lib/fluent/plugin/filter_kubernetes_metadata.rb:243:in `block in configure'
2020-04-02 14:26:58 +0000 [error]: config error file="/fluentd/etc/fluentd.conf" error_class=Fluent::ConfigError error="start_namespace_watch: Exception encountered setting up namespace watch from Kubernetes API v1 endpoint https://10.96.0.1:443: namespaces is forbidden: User \"system:serviceaccount:deis:deis-logger-fluentd\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope ({\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"namespaces is forbidden: User \\\"system:serviceaccount:deis:deis-logger-fluentd\\\" cannot list resource \\\"namespaces\\\" in API group \\\"\\\" at the cluster scope\",\"reason\":\"Forbidden\",\"details\":{\"kind\":\"namespaces\"},\"code\":403}\n)"

➜ k8s kd describe po deis-monitor-telegraf-d9xqc
Events:
  Type     Reason   Age                    From          Message
  ----     ------   ----                   ----          -------
  Warning  BackOff  4m51s (x292 over 69m)  kubelet, m01  Back-off restarting failed container

I can't say if there's a good explanation for this. @bacongobbler ?

The error you are seeing seems related to https://github.com/helm/helm/issues/7812, @kingdonb. Kubernetes broke backward compatibility somewhere around 1.14/1.15, which in turn broke helm status. In order to fix that, we had to change some bits around how we generate internal kubernetes objects, which likely caused this error to crop up. I'd follow up there and see if you can find out anything more on what causes this (or if you can identify a fix). https://github.com/helm/helm/pull/7840 could potentially fix this issue, so I'd try that as well.

Thanks, @bacongobbler. That fix is on 2.16.x and we can't use 2.16.x to install Workflow. I have upgraded to Helm 3.1.2. Do you know how to install Workflow with it? This doesn't seem to work.

➜ k8s helm install hephy/workflow --namespace deis --set router.host_port.enabled=true --generate-name
Error: create: failed to create: namespaces "deis" not found

Run kubectl create namespace deis, or use the --create-namespace flag.
Thanks, @bacongobbler. With a new minikube k8s cluster and Helm 3.1.2, it is still in the same state.

➜ k8s kd get po
NAME                                    READY  STATUS            RESTARTS  AGE
deis-builder-57cf7db484-x8rgw           0/1    Running           0         10m
deis-controller-7fbd88b5f8-ps86w        1/1    Running           4         10m
deis-database-66f68b9776-5tdk7          1/1    Running           0         10m
deis-logger-58bb65fdd7-sgdbm            1/1    Running           0         10m
deis-logger-fluentd-8gc7d               0/1    CrashLoopBackOff  6         10m
deis-logger-redis-5976d6ddc7-nt9pr      1/1    Running           0         10m
deis-minio-54f57f88b8-m7w9d             1/1    Running           0         10m
deis-monitor-grafana-6454b46bd8-w2v45   1/1    Running           0         10m
deis-monitor-influxdb-69f85d6d56-pttcd  1/1    Running           0         10m
deis-monitor-telegraf-h4hlx             0/1    CrashLoopBackOff  6         10m
deis-nsqd-6dc776449-ph75x               1/1    Running           0         10m
deis-registry-677fdfd6cf-xhnbg          1/1    Running           1         10m
deis-registry-proxy-x7b9w               1/1    Running           0         10m
deis-router-f9998457c-5blcr             0/1    CrashLoopBackOff  7         10m
deis-workflow-manager-5564f67d4-9z2tz   1/1    Running           0         10m

Thanks @bacongobbler! That's super helpful. I will take a look at those, and follow up with my own testing to see if the issue can be resolved through that PR.

@ChillarAnand The error you're seeing now, using both Helm 2.15 and Helm 3.x, is a failure to enable RBAC when your cluster has RBAC requirements enabled on it. When you run "helm install" or "helm upgrade --install", please try adding --set global.use_rbac=true, or otherwise find a way to set this in your Helm chart values. It probably makes sense to enable RBAC by default for Workflow in the next minor release, given we will be adding support for K8s 1.16+. I think it's probably true that by now, everyone and their uncles and aunts must have RBAC enabled by default on their clusters. I'm not sure if there are any tools like minikube at this point that don't enable RBAC by default, so we should follow suit.

Thanks, @kingdonb. I have created a new cluster and installed Workflow with

➜ k8s helm install hephy/workflow --namespace deis --set router.host_port.enabled=true --set global.use_rbac=true --generate-name
➜ k8s kd get po
NAME                                    READY  STATUS            RESTARTS  AGE
deis-builder-57cf7db484-m6nk8           1/1    Running           2         8m56s
deis-controller-7fbd88b5f8-2mxhx        1/1    Running           3         8m56s
deis-database-66f68b9776-drxlv          1/1    Running           0         8m56s
deis-logger-58bb65fdd7-8rr8g            1/1    Running           4         8m56s
deis-logger-fluentd-8grzg               1/1    Running           0         8m56s
deis-logger-redis-5976d6ddc7-zmn7x      1/1    Running           0         8m56s
deis-minio-54f57f88b8-sz75w             1/1    Running           0         8m56s
deis-monitor-grafana-6454b46bd8-hjkbg   1/1    Running           0         8m56s
deis-monitor-influxdb-69f85d6d56-mvr6v  1/1    Running           0         8m56s
deis-monitor-telegraf-ngkcd             0/1    CrashLoopBackOff  6         8m56s
deis-nsqd-6dc776449-szfcz               1/1    Running           0         8m56s
deis-registry-677fdfd6cf-4m9ds          1/1    Running           0         8m55s
deis-registry-proxy-tqvrj               1/1    Running           0         8m56s
deis-router-f9998457c-q7vdb             1/1    Running           0         8m56s
deis-workflow-manager-5564f67d4-m5f9j   1/1    Running           0         8m56s

Only telegraf seems to be failing.
➜ k8s kd logs deis-monitor-telegraf-ngkcd
➜ k8s kd describe pod deis-monitor-telegraf-ngkcd
Type     Reason     Age    From               Message
----     ------     ----   ----               -------
Normal   Scheduled  9m50s  default-scheduler  Successfully assigned deis/deis-monitor-telegraf-ngkcd to m01
Normal   Pulling    9m47s  kubelet, m01       Pulling image "hephy/telegraf:v2.11.1"
Normal   Pulled     9m20s  kubelet, m01       Successfully pulled image "hephy/telegraf:v2.11.1"
Warning  Failed     9m19s  kubelet, m01       Error: failed to start container "deis-monitor-telegraf": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/8abb7fb589f34edb2b51e69c96571d67f1875b203b3f902138e799a7d760a4fe/merged\\\" at \\\"/var/lib/docker/overlay2/8abb7fb589f34edb2b51e69c96571d67f1875b203b3f902138e799a7d760a4fe/merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning  Failed     9m18s  kubelet, m01       Error: failed to start container "deis-monitor-telegraf": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/fad0bddb48f64da2412b5f33df6008162f5deb16f0599f8185f14add402ee1cd/merged\\\" at \\\"/var/lib/docker/overlay2/fad0bddb48f64da2412b5f33df6008162f5deb16f0599f8185f14add402ee1cd/merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning  Failed     9m5s   kubelet, m01       Error: failed to start container "deis-monitor-telegraf": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/fe5f770220eb9f91cebfa3431519d2d9c897c226ddc3ff37ab6a80d6e045af64/merged\\\" at \\\"/var/lib/docker/overlay2/fe5f770220eb9f91cebfa3431519d2d9c897c226ddc3ff37ab6a80d6e045af64/merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning  Failed     8m39s  kubelet, m01       Error: failed to start container "deis-monitor-telegraf": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/7fb8f9ec70239bdfefbcbd319227d84ab26e3fe912a83e1a1ae8477386be3d95/merged\\\" at \\\"/var/lib/docker/overlay2/7fb8f9ec70239bdfefbcbd319227d84ab26e3fe912a83e1a1ae8477386be3d95/merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Normal   Pulled     7m47s (x4 over 9m19s)  kubelet, m01  Container image "hephy/telegraf:v2.11.1" already present on machine
Normal   Created    7m46s (x5 over 9m19s)  kubelet, m01  Created container deis-monitor-telegraf
Warning  Failed     7m46s  kubelet, m01       Error: failed to start container "deis-monitor-telegraf": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/21795f591164fb6fff57b6a090a7ff9830019ee183b0d30f9dc50bafecd41c72/merged\\\" at \\\"/var/lib/docker/overlay2/21795f591164fb6fff57b6a090a7ff9830019ee183b0d30f9dc50bafecd41c72/merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning  BackOff    4m46s (x20 over 8m55s)  kubelet, m01  Back-off restarting failed container

Thanks @ChillarAnand – can you give some details about your cluster and how it was provisioned? I feel like I have seen this one before but am drawing a blank here. Is it a managed cluster, and if so, what service provider is managing it? If not a managed service, then what process have you used to stand it up (kops, kubeadm, kind, k3s)? I think the last time I saw this telegraf issue reported it was a kind user, trying to build a process for their CI system to create Workflow deployments in an automated way. I'm not finding any notes about how they resolved the telegraf CrashLoopBackOff. I wonder, if you are using kind (please confirm or deny), is it possible that /var/run/utmp does not exist on the machine that runs kind? That's what the error message in the log you posted seems to suggest, but I'm not sure what that means or why it would be a problem. Sure seems clear, you can't mount a non-directory or non-existent path to a directory path in the container. (But why is /var/run/utmp being mounted, and what will be the other results of this failure? I can't answer these, I don't know enough about the monitor+logger+telegraf stack.) It looks like telegraf is part of the deis-monitor stack that sits between nsq and influxdb, probably involved in collecting metrics, so while you have this failure, you may not be able to use at least the deis logs feature, or possibly some data will be missing in grafana, but everything else should work.

mounting \\\"/var/run/utmp\\\" to rootfs \\\"/var/lib/docker/overlay2/.../merged\\\" at \\\"/var/lib/docker/overlay2/.../merged/run/utmp\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type

This seems to be a common issue that is reported against a number of projects that are using telegraf; it looks like maybe we just need to make sure we upgrade to the latest telegraf in the next Workflow release.

I don't know, and it's not clear to me where this failure is coming from. I am testing this locally using a minikube cluster, starting the cluster with

$ minikube start --memory=8g

We will try to repro on a minikube setup then, before our next release, to see if there's something we can do about telegraf. Thanks for your persistence and for making clear reports; this will be helpful for others who might trip over the same things between now and whenever we can make that release!
On a Mac minikube cluster, Workflow is installing without any errors. On an AWS EC2 (Ubuntu 18.04) minikube cluster, the above-mentioned issue is happening. On an AWS EKS cluster, installing Workflow has been stuck at the following step for ~30 minutes. After uninstalling Workflow without deleting the cluster, using the following commands, and then reinstalling, it worked. Not sure what might be the issue.

➜ k8s k delete namespace deis
➜ k8s k delete clusterrole deis:deis-router deis:deis-workflow-manager deis:deis-controller deis:deis-logger-fluentd deis:deis-router deis:deis-builder
➜ k8s k delete clusterrolebinding deis:deis-router deis:deis-workflow-manager deis:deis-controller deis:deis-logger-fluentd deis:deis-router deis:deis-builder

Hi @ChillarAnand, from the looks of it the registry proxy is not able to resolve the internal IP address of the registry pod, likely because it's not deployed as a ClusterIP service. When working with EKS or k8s on AWS it is best to configure the ECR registry instead of using the built-in registry component anyway. Here are some instructions for that: https://docs.teamhephy.com/installing-workflow/configuring-registry/

Unable to update the registry as helm inspect seems to be failing.

➜ k8s helm repo list
NAME      URL
hephy     https://charts.teamhephy.com/
appscode  https://charts.appscode.com/stable/
➜ k8s helm ls
NAME      NAMESPACE  REVISION  UPDATED                                STATUS    CHART               APP VERSION
rabbitmq  default    1         2020-04-07 18:11:34.278973 +0530 IST  deployed  rabbitmq-ha-1.44.1  3.8.0
➜ k8s helm inspect values helpy/workflow
Error: failed to download "helpy/workflow" (hint: running `helm repo update` may help)
➜ k8s helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "appscode" chart repository
...Successfully got an update from the "hephy" chart repository
Update Complete. ⎈ Happy Helming!⎈
➜ k8s helm inspect values helpy/workflow
Error: failed to download "helpy/workflow" (hint: running `helm repo update` may help)

Is kops still the recommended way to install Hephy on AWS? Now there is eksctl, which will spin up the cluster with just one command. I am closing this as we figured out the issue and you were able to install. Please open another issue if having other problems.
gharchive/issue
2020-04-01T08:25:19
2025-04-01T04:36:01.889818
{ "authors": [ "ChillarAnand", "Cryptophobia", "bacongobbler", "kingdonb" ], "repo": "teamhephy/workflow", "url": "https://github.com/teamhephy/workflow/issues/115", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
91287161
within_iframe needs a name/id
Hello and thank you for your good work :) I'm in the situation that I want to test interaction with an iframe that is not completely under my control and that has neither an id nor a name (I create a div with an id, and some 3rd-party JavaScript puts the iframe inside the div). That works when I run with Chrome, but not when I run with poltergeist (1.6.0), sadly.

By the looks of it, only iframes with names/ids are supported: https://github.com/teampoltergeist/poltergeist/blob/master/lib/capybara/poltergeist/browser.rb#L125-L130

Is there a way to make it work just given a Capybara node? Would be happy to help on this. E.g. current code that works with Chrome but not poltergeist is:

iframe = find('#id-of-the-div iframe')
within_frame iframe do
  # MAGIC
end

As a workaround I was thinking about adding a name/id to the iframe myself, but it seems that doesn't work either, due to #559. Thanks! Tobi

We're also encountering this for our project at https://github.com/EFForg/phantom-of-the-capitol, would love to see it fixed.

I had the same issue, and found an (admittedly not very nice) workaround: the iframes can be referenced using an integer ID, and the library I was interacting with was nice enough to always add the iframes in the same order. That way I could do the following:

within_frame(0) { fill_in('cardpan', with: "4111111111111111") }
within_frame(1) { select("1", from: "cardexpiremonth") }
within_frame(2) { select("2020", from: "cardexpireyear") }
within_frame(3) { fill_in('cardcvc2', with: "123") }
within_frame(4) { select('Visa', from: "cardtype") }

Should be fixed by PR #674 that was merged
2015-06-26T16:00:35
2025-04-01T04:36:01.895990
{ "authors": [ "Hainish", "PragTob", "mamhoff", "twalpole" ], "repo": "teampoltergeist/poltergeist", "url": "https://github.com/teampoltergeist/poltergeist/issues/630", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
248246931
Broken rails UJS loading when use arrow functions When you have form with remote: true and add some js function, which use arrow function (just have it somewhere, no even need to call it) and submit such form in feature test then you get an unknown format error while in browser form works fine. Using standard function instead of arrow do not lead to such behavior. phantomJS version: 2.1.1 poltergeist version: 1.15.0 system: Ubuntu 14.04.5 LTS debug output and stacktrace: vagrant@vagrant-ubuntu-trusty-64:/vagrant/poltergeist_bug$ rspec spec/features/submit.rb {"id":"8c523b10-0493-4f9d-a188-034bf1fad9ed","name":"set_debug","args":[true]} {"command_id":"8c523b10-0493-4f9d-a188-034bf1fad9ed","response":true} {"id":"8a56cbad-53bc-4428-bed6-f1df09df42be","name":"visit","args":["http://127.0.0.1:33684/"]} {"command_id":"8a56cbad-53bc-4428-bed6-f1df09df42be","response":{"status":"success"}} {"id":"af32d7ce-0c14-4ed2-96bb-6178e009b821","name":"find","args":["xpath",".//input[((((./@type = 'submit') or (./@type = 'reset')) or (./@type = 'image')) or (./@type = 'button'))][((./@id = 'Save') or ((./@value = 'Save') or (./@title = 'Save')))] | .//input[(./@type = 'image')][(./@alt = 'Save')] | .//button[(((./@id = 'Save') or ((./@value = 'Save') or (./@title = 'Save'))) or ((normalize-space(string(.)) = 'Save') or .//img[(./@alt = 'Save')]))] | .//input[(./@type = 'image')][(./@alt = 'Save')]"]} {"command_id":"af32d7ce-0c14-4ed2-96bb-6178e009b821","response":{"page_id":1,"ids":[0]}} {"id":"5d8110c2-99db-46ec-b411-4d74746b6ebf","name":"visible","args":[1,0]} {"command_id":"5d8110c2-99db-46ec-b411-4d74746b6ebf","response":true} {"id":"77eae35b-869f-4676-8b3d-343af33b321d","name":"disabled","args":[1,0]} {"command_id":"77eae35b-869f-4676-8b3d-343af33b321d","response":false} {"id":"ef6ac9a2-e386-4bd3-8758-6511ed01aa76","name":"click","args":[1,0]} {"command_id":"ef6ac9a2-e386-4bd3-8758-6511ed01aa76","response":{"position":{"x":33.5,"y":42.5}}} {"id":"41b4a5e0-bce7-45a6-b3f7-ca01358e9dc8","name":"find","args":["xpath","/html"]} {"command_id":"41b4a5e0-bce7-45a6-b3f7-ca01358e9dc8","response":{"page_id":2,"ids":[0]}} {"id":"f7062392-1d40-4beb-aab0-acaf2271a54b","name":"visible","args":[2,0]} {"command_id":"f7062392-1d40-4beb-aab0-acaf2271a54b","response":true} {"id":"9a403d5a-4bc6-4e54-8737-275454fef778","name":"visible_text","args":[2,0]} {"command_id":"9a403d5a-4bc6-4e54-8737-275454fef778","response":"Internal Server Error\n\nBaseController#create is missing a template for this request format and variant. 
request.formats: [\"text/html\"] request.variant: []\nWEBrick/1.3.1 (Ruby/2.3.1/2016-04-26) at 127.0.0.1:33684"} {"id":"a178fdd3-20ea-42d6-aaec-f57359098a0e","name":"find","args":["xpath","/html"]} {"command_id":"a178fdd3-20ea-42d6-aaec-f57359098a0e","response":{"page_id":2,"ids":[1]}} {"id":"cf860fbe-ecef-46fe-b378-840e43c3ae34","name":"visible","args":[2,1]} {"command_id":"cf860fbe-ecef-46fe-b378-840e43c3ae34","response":true} {"id":"47859a23-5e3a-4880-b3b0-32f38319ef81","name":"all_text","args":[2,1]} {"command_id":"47859a23-5e3a-4880-b3b0-32f38319ef81","response":"Internal Server Error\n \n Internal Server Error\n BaseController#create is missing a template for this request format and variant.\n\nrequest.formats: [\"text/html\"]\nrequest.variant: []\n \n \n WEBrick/1.3.1 (Ruby/2.3.1/2016-04-26) at\n 127.0.0.1:33684\n \n \n\n"} {"id":"52d65b8b-be30-43f9-b0a0-40896a235e7e","name":"reset","args":[]} {"command_id":"52d65b8b-be30-43f9-b0a0-40896a235e7e","response":true} F Failures: 1) Submit form via AJAX submit Failure/Error: raise ActionController::UnknownFormat, message ActionController::UnknownFormat: BaseController#create is missing a template for this request format and variant. request.formats: ["text/html"] request.variant: [] # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/etag.rb:25:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/conditional_get.rb:38:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/head.rb:12:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/session/abstract/id.rb:232:in `context' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/session/abstract/id.rb:226:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/railties-5.0.5/lib/rails/rack/logger.rb:36:in `call_app' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/railties-5.0.5/lib/rails/rack/logger.rb:24:in `block in call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/railties-5.0.5/lib/rails/rack/logger.rb:24:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/method_override.rb:22:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/runtime.rb:22:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/sendfile.rb:111:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/railties-5.0.5/lib/rails/engine.rb:522:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/urlmap.rb:68:in `block in call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/urlmap.rb:53:in `each' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/urlmap.rb:53:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/capybara-2.15.1/lib/capybara/server.rb:44:in `call' # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/rack-2.0.3/lib/rack/handler/webrick.rb:86:in `service' # ------------------ # --- Caused by: --- # Capybara::ExpectationNotMet: # expected to find text "form submitted succesfully" in "Internal Server Error BaseController#create is missing a template for this request format and variant. request.formats: [\"text/html\"] request.variant: [] WEBrick/1.3.1 (Ruby/2.3.1/2016-04-26) at 127.0.0.1:33684" # /home/vagrant/.rvm/gems/ruby-2.3.1/gems/capybara-2.15.1/lib/capybara/node/matchers.rb:617:in `block in assert_text' Repository to reproduce with more verbose description - https://github.com/sd-kin/poltergeist_bug/tree/master P.S. Sorry for my english PhantomJS 2.1.1 only supports up to ES5 - therefore it doesn’t support arrow functions. 
Either try the PhantomJS 2.5 beta or transpile your code to ES5
gharchive/issue
2017-08-06T13:22:28
2025-04-01T04:36:01.902120
{ "authors": [ "sd-kin", "twalpole" ], "repo": "teampoltergeist/poltergeist", "url": "https://github.com/teampoltergeist/poltergeist/issues/903", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
995627389
Batch OCR Invoice PDFs and extract text to input into web based fields - can but hard
Hi! I am currently trying to develop an RPA system where invoices that were scanned manually (into PDF) are OCR-ed and specific text is then extracted for filling up fields in a web-based form. I was wondering if TagUI is able to do this? A little background: I'm currently a uni student interning and was given this task to perform. I came across your tool while studying and was really impressed, and am trying to use it for the task listed above. Thank you for your help!!

Hi @Domsdorm, see the below link for a full solution and demo of what you mentioned. It is an automation script to solve the Automation Anywhere Week 4 RPA challenge. The tough part is converting from the unstructured image data into structured data. This is very tough to get right, and involves a lot of trial and error and work. You can see the below link to know more about the considerations and options to do this. https://github.com/kelaberetiv/TagUI/issues/1093#issuecomment-907996687

Thanks @kensoh for the reply! Currently I am trying to bypass the firewall that my company has by using the steps you told me about in the Telegram group. I will update if I run into any troubles when doing the code. Really appreciate you taking your own free time to help. Cheers!

Currently I am trying to run the code for the Week 4 RPA challenge, but OpenJDK is needed; is there any way to bypass this? Also, is there a way to check in the script if the invoice data (for example the invoice number) is correctly being pulled?

For the 1st question, OpenJDK / Java 64-bit is needed to do the part on opening the file explorer to choose a file. But there is a workaround: you can use r.upload() to choose the file without opening the file browser (a criterion needed by the organiser for the challenge). See this solution from another user - https://github.com/DanielCCF/BotGamesAA/blob/master/Week4/Solution-Python.py#L125

For the 2nd question, this RPA package requires the user to know Python. I'm assuming that you are new to Python, that's why you ask this. The answer is already written in the Python script itself: the OCR of the image files and the extraction of the individual data like the invoice number. This automation is hard to understand and do without Python knowledge. Most of it is Python programming knowledge; only some parts are RPA concepts related to this tool.

Yupp, I'm not from a CS background but am interested in learning more. Thank you for your patience tho 😅 Currently I am trying to select a dropdown option (Incoming WO). I am able to select the main header but am unable to select the Incoming WO option.

r.click('//*[@aria-haspopup="hauptMenu:submenu:12"]')
r.wait(1)
r.click('//*[@tabindex=""-1">Incoming WO<"]')

Is there a way to select that option? Or by using r.click(x, y)? If using r.click(x, y) is possible, how do I know which x, y values to input?

You can try if using r.select() works - see more on usage and examples in the API section.
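For readers landing here, a hedged sketch of the two workarounds mentioned above, using the RPA for Python package (the URL and the selectors below are placeholders, not the asker's actual page):

import rpa as r

r.init(visual_automation=True)
r.url('https://example.com/form')  # placeholder URL

# r.upload() picks a file without opening the OS file dialog,
# so the OpenJDK-backed file explorer step is not needed.
r.upload('input[type="file"]', '/path/to/invoice.pdf')

# r.select() chooses a dropdown option by its visible text,
# which is usually more robust than guessing r.click(x, y) coordinates.
r.select('//select[@id="work-order"]', 'Incoming WO')

r.close()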
gharchive/issue
2021-09-14T06:08:34
2025-04-01T04:36:01.925916
{ "authors": [ "Domsdorm", "kensoh" ], "repo": "tebelorg/RPA-Python", "url": "https://github.com/tebelorg/RPA-Python/issues/304", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
3119644
Option to work with Twitter Bootstrap Refs Issue #10 - adds the option in the config to make the HTML work with Twitter Bootstrap 2.0. I just released tabulous version 1.2.0, which adds support for Twitter Bootstrap version 2. Fantastic! Thanks for all the great work!
gharchive/issue
2012-02-07T05:51:30
2025-04-01T04:36:01.928174
{ "authors": [ "climyao", "edave", "techiferous" ], "repo": "techiferous/tabulous", "url": "https://github.com/techiferous/tabulous/issues/12", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
918204515
fix: PTP edition tags remove freeleech Also sort editionTags in alphabetical order. As I recall, there should also be halfLeech. Added.
gharchive/pull-request
2021-06-11T04:20:43
2025-04-01T04:36:01.940073
{ "authors": [ "sabersalv", "techmovie" ], "repo": "techmovie/easy-upload", "url": "https://github.com/techmovie/easy-upload/pull/114", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
486178638
Getting error in angular-2-daterangepicker module while making build in Angular 2
Hello everyone, I have used the npm package angular-2-daterangepicker and followed the steps given here (https://www.npmjs.com/package/angular-2-daterangepicker). This date range picker is working fine on my local machine, but while making the build for production it gives the error below.

ERROR in : Unexpected value 'DaterangepickerModule in C:/xxx/node_modules/angular-2-daterangepicker/index.js' imported by the module 'AppModule in C:/xxx/src/app/app.module.ts'. Please add a @NgModule annotation.

In app.module.ts I have code like this.

import { DaterangepickerModule } from 'angular-2-daterangepicker';

@NgModule({
  declarations: [],
  imports: [
    DaterangepickerModule
  ]
})
export class AppModule { }

If there is someone who can solve this problem, please help me find a solution. I have found a temporary workaround for making a build while searching for a solution to the same problem, i.e. (ng build --prod --aot=false --build-optimizer=false), but I think it is not the right way to create a build.

This is fixed in v2.1.1 https://www.npmjs.com/package/angular-datetimerangepicker
gharchive/issue
2019-08-28T06:10:46
2025-04-01T04:36:01.945385
{ "authors": [ "anand171", "technikhil314" ], "repo": "technikhil314/angular-components", "url": "https://github.com/technikhil314/angular-components/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
506569568
Update constants.js Update my e-mail address @20manas We will be fetching members details from a backend. Thanks.
gharchive/pull-request
2019-10-14T10:30:59
2025-04-01T04:36:01.947263
{ "authors": [ "20manas", "dwivediabhimanyu" ], "repo": "technojam/technojam-frontend", "url": "https://github.com/technojam/technojam-frontend/pull/190", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1334731115
fix: ensure that feature state is only set with valid ID This PR fixes an error that was showing up in the console: "Error: The feature id parameter must be provided." We had tried setting the tree circle's state to selected. However, because of a wrong if/else condition, the ID that was to be selected was undefined. I've now changed the conditional logic so that this error doesn't happen again. :tada: This PR is included in version 1.0.0-staging.1 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2022-08-10T14:37:19
2025-04-01T04:36:01.951080
{ "authors": [ "dnsos", "tsboter" ], "repo": "technologiestiftung/treewatch-frontend", "url": "https://github.com/technologiestiftung/treewatch-frontend/pull/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
115343745
Problem in clustering intersections
Hi, I'm running OpenStreetMap.jl in Julia 0.3.10 x64 on Win 7. When I was using the code below to filter and cluster intersections:

nodes, hwys, builds, feats = getOSMData(MAP_FILENAME)
highway_sets = findHighwaySets(hwys)
intersection_mapping = findIntersectionClusters(nodes, intersections, highway_sets, max_dist=15)

Julia shows:

ERROR: 'findIntersectionClusters' has no method matching findIntersectionClusters(::Dict{Int64,LLA}, ::Dict{Int64,Intersection}, ::Array{HighwaySet,1})

In intersections.jl, it shows:

function findIntersectionClusters( nodes::Dict{Int,ENU},
                                   intersections_in::Dict{Int,Intersection},
                                   highway_clusters::Vector{HighwaySet};
                                   max_dist=15.0 )

It seems findIntersectionClusters does not accept the input (Dict, Dict, Array), or does OpenStreetMap.jl not support 64-bit? Any suggestion is appreciated. Thanks!

My bad, ENU(nodes::Dict{Int,LLA}, reference::LLA) is needed to convert LLA to ENU first... Perhaps it's worth mentioning in the documentation?

Thanks for pointing that out, I'll update the documentation soon!
gharchive/issue
2015-11-05T18:24:51
2025-04-01T04:36:02.007783
{ "authors": [ "liyan2015", "tedsteiner" ], "repo": "tedsteiner/OpenStreetMap.jl", "url": "https://github.com/tedsteiner/OpenStreetMap.jl/issues/78", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2082684416
Drop connection closed messages from test output
A follow-up for the closed #2524. We decided to go with editing the output. We avoid splitting the output and use regular expressions, to make sure the code stays fast and memory-efficient in case the output is large. Resolves #2429

Pull Request Checklist
[x] implement the feature
[x] extend the test coverage

Looks good, but pre-commit seems to be unhappy.
/packit test --identifier full
/packit test --identifier full
For reviewers: tests fail due to a known issue being fixed in #2621
/packit test --identifier full
/packit test --identifier full
testing against new images: https://gitlab.com/testing-farm/infrastructure/-/merge_requests/417
/packit test --identifier full
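A rough sketch of the regex-based approach described above (the pattern below is an assumption for illustration; it is not the exact expression merged into tmt):

import re

# Compiled once; re.sub() scans the whole string in a single pass instead of
# splitting it into lines, which stays cheap even for very large outputs.
CONNECTION_CLOSED = re.compile(
    r'^(?:Shared connection to \S+ closed\.|Connection to \S+ closed\.)\r?\n',
    re.MULTILINE)

def drop_connection_closed(output: str) -> str:
    # Remove the whole matched line, including its trailing newline.
    return CONNECTION_CLOSED.sub('', output)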
gharchive/pull-request
2024-01-15T21:12:16
2025-04-01T04:36:02.015400
{ "authors": [ "psss", "thrix" ], "repo": "teemtee/tmt", "url": "https://github.com/teemtee/tmt/pull/2617", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
339993107
Modifications not respected on macOS 10.13.6 KE: 12.1.0 macOS: 10.13.6 I recently updated macOS to 10.13.6. My configuration settings are still present, but the OS does not recognize or utilize them. My re-mapped external devices are now back to factory default configuration(s). This issue seems to be resolved by updating to 12.1.4b Confirmed that 12.1.4 fixes issues with the latest High Sierra 10.13.6. In case anyone else has issues finding the latest version ("Check for updates" doesn't download it and it doesn't appear to be tagged or available via official download page) it's here: https://github.com/tekezo/pqrs.org/tree/master/webroot/osx/karabiner/files
gharchive/issue
2018-07-10T20:23:13
2025-04-01T04:36:02.030465
{ "authors": [ "PSalant726", "jbiel" ], "repo": "tekezo/Karabiner-Elements", "url": "https://github.com/tekezo/Karabiner-Elements/issues/1489", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
243047575
Example JSON: opening new windows of applications via json shell_command This JSON example lets you open a new iTerm2 window via "left control, left shift, and `" or a new Chrome window via "left control, left shift, and 1", using the new shell_command in the complex modifications json element. (The ability to launch new windows, and not just applications, via keyboard bindings is much harder on Mac than Linux, and may be useful for recent converts like me trying to salvage their old Linux habits via Karabiner). Thanks! Please add it to the following repository if you want. https://github.com/pqrs-org/KE-complex_modifications/ Thanks for the application, and thanks for the suggestion. The requested PR: https://github.com/pqrs-org/KE-complex_modifications/pull/37
gharchive/pull-request
2017-07-14T16:33:34
2025-04-01T04:36:02.032778
{ "authors": [ "smlewis", "tekezo" ], "repo": "tekezo/Karabiner-Elements", "url": "https://github.com/tekezo/Karabiner-Elements/pull/837", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1411755128
chelits er kau chelits er kau created by johnbent@gmail.com on 2017-01-07 14:22:18 johnbent@gmail.com replied, I never heard 'chelits er kau/kemiu' until today when Eli asked Charl what it means. I guess he hears kids say it a lot. Charl says it means "you're in trouble". I can't find it anywhere. Seems like you could say 'chelits er ngii/ngak/kemam/kid/tir' but Charl and Eli say they've never heard those. They do say it can be used by itself ('chelits') but always directed towards the listener. Anyone confirm? Spelling seem ok? Native word or borrowed? Other forms? Other usages? mngiruchelbad@gmail.com replied, Alits er kau!!!! Children word for uhoh. johnbent@gmail.com replied, Is it chalits or alits? How can we figure it out? mngiruchelbad@gmail.com replied, Both ways, but it seems people these days are getting away from ch 😢 johnbent@gmail.com replied, Aleks/Justin, which would conform more closely with Josephs? Chalits or alits? palau371@gmail.com replied, I guess it's related to ALII, that's why I would add it as ALITS. but some Palauans like to pronounce CH in the beginning of many other words. we can see it comparing some PDEF's with ENG's
gharchive/issue
2022-10-17T15:09:44
2025-04-01T04:36:02.041262
{ "authors": [ "johnbent" ], "repo": "tekinged/missing", "url": "https://github.com/tekinged/missing/issues/184", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1412436950
ngoterebakel ngoterebakel created by jimgeselbracht@yahoo.com on 2021-09-13 16:47:25 jimgeselbracht@yahoo.com replied, In the song "Imeyungs a Delad a Di Mel Tang," the last verse has: Ulekum meng ngoterebakel a beluu / me sel bor ngii me sel kngar ngii / a dekersii el me dekiar / e kaiuekeed kau me ngak / e kaiuekeed kau me ngak. Tekinged currently has "oterebakel" as accordion, but not ngoterebakel. Yoichi Rengiil informs me that the translation of this first phrase is "if only the villages could trade places," with ngoterebakel meaning "trade places" or "interchange." The word for accordion (oterebakel) is taken from the fact that when playing the accordion, they go back and forth, sort of trading places. I wonder if ngoterebakel should be spelled differently. jlukesemiwo@gmail.com replied, Jim, I think "ngoterebakel" is actually just "ng oterebakel" so there's no different spelling here. It's just how it's written. You know how "ng" is added to the beginning of several words and when we say them, it sounds like the "ng" is attached to the word. Some examples: Ngulmeklatk, ngulmekedong, ngulbengeang, ngolekngemed, ngomekrur, etc. jimgeselbracht@yahoo.com replied, Thanks Jelga. Since oterebakel is in the dictionary as a noun (n.a.s, accordion), shouldn't there be an alternate definition as a verb? Also, two dumb questions: in the song the lyric is "Ulekum meng ng oterebakel a beluu." 1) why wouldn't it be "te oterebakel a beluu" since the lyric is talking about the two villages trading places (Imeyungs as the son and Ngerbungs as the daughter)? and 2) why are there two "ng" in a row?
gharchive/issue
2022-10-18T01:25:47
2025-04-01T04:36:02.045619
{ "authors": [ "johnbent" ], "repo": "tekinged/missing", "url": "https://github.com/tekinged/missing/issues/346", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1283745422
TEP-0090: Matrix - Implement isSuccessful for Runs Changes TEP-0090: Matrix proposed executing a PipelineTask in parallel TaskRuns and Runs with substitutions from combinations of Parameters in a Matrix. In this change, we implement the isSuccessful member function of ResolvedPipelineRunTask for matrixed Runs. If the ResolvedPipelineRunTask is matrixed, it is successful only if all of its Runs completed successfully. /kind feature Submitter Checklist As the author of this PR, please check off the items in this checklist: [n/a] Docs included if any changes are user facing [x] Tests included if any functionality added or changed [x] Follows the commit message standard [x] Meets the Tekton contributor standards (including functionality, content, code) [x] Release notes block below has been filled in (if there are no user facing changes, use release note "NONE") Release Notes Matrixed `PipelineTasks` with `Custom Tasks` are successful when all `Runs` have completed successfully. Related PR: https://github.com/tektoncd/pipeline/pull/4980 /approve /test pull-tekton-pipeline-integration-tests Tagging a PR as a feature without release notes is a little misleading, please add a note if possible 🙏 thank you @jerop 👎 /lgtm @pritidesai had not included release notes because it's an implementation detail of a larger feature - added a note describing what this PR is doing though - or maybe what I should do is remove the "feature" label? 🤔 yeah that would be helpful. I generally like to collect notes for the feature PRs while creating a GitHub release page and making sure they are highlighted.
gharchive/pull-request
2022-06-24T13:20:05
2025-04-01T04:36:02.070839
{ "authors": [ "abayer", "jerop", "pritidesai" ], "repo": "tektoncd/pipeline", "url": "https://github.com/tektoncd/pipeline/pull/5035", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
707649161
Add LoginUrl Closes #172 Thanks for the contribution, @TheNeikos! Could you please also write a working example that I can run locally and test? Examples go in the lib/examples folder
gharchive/pull-request
2020-09-23T20:13:10
2025-04-01T04:36:02.097528
{ "authors": [ "TheNeikos", "gugahoa" ], "repo": "telegram-rs/telegram-bot", "url": "https://github.com/telegram-rs/telegram-bot/pull/212", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
96695778
Error on "make test" after npm install telehash-c (OSX) ld: can't open output file for writing: bin/test_lib_base32, errno=2 for architecture x86_64 clang: error: linker command failed with exit code 1 (use -v to see invocation) make[1]: *** [bin/test_lib_base32] Error 1 make: *** [test] Error 2 The reason: There is a telehash-c/test/bin directory in the telehash-c repository. However, there is no telehash-c/test/bin directory after an npm install telehash-c on OSX. Does the .gitignore file in test/bin cause the directory to be skipped? This is more of rant than a direct answer to your question. Perhaps I'm old fashioned, but for C code, I think autotools is the way-to-go to build the tarball as a released package vs. pulling from Github. (Github supports features so you can upload the tarball as a feature, which is just on S3, and have automated things pull it down). I think @quartzjer would blow a gasket if I turned telehash-c into an autotools project but, one could specify in autoconf/automake what files were going into the distribution rather then relying on the version control's system of ignoring files. /rant I wouldn't blow a gasket, but I have a lot of autotools scar tissue that I'm not keen on dealing with :) Ages ago my goal was to have telehash-c be two projects, a lower level pure C one similar to sqlite.c that can be embedded into anything, and a higher level project w/ all the unixy and friendly/wrapper stuff (and yes, autotools), but it isn't so clear if that's the best way to organize it still... thoughts welcome! The two approaches makes sense. But like sqlite, they offer both the source code that you can download & they are in the OS vendors packing system. So, I don't think they are incompatible. I think you'd want to structure the project as autotools overall. Then add a target that makes the one-c-file like sqlite does. But autotools makes it much easier to package into debs and rpms. Ultimately, a user should be able to apt-get install telehash-c. The bastard child is arduino however. That could also been in the autotools configure. So, you download the tarball and you make arduino which should probably produce an arduino library. The releasable thing should be a tarball however, not the github project. I can fork the repo and try to put an autotools wrapper to let you see how the unixy would work... fyi @jbdatko, my long term goals for telehash-c as a codebase have evolved to it being primarily for embedded usage, and not aimed to easily accommodate traditional desktop/server apps or as a shared lib. I would much rather energy be put into dedicated implementations for them using newer (safer) things like rust, go, c#, node, etc.
gharchive/issue
2015-07-23T00:12:28
2025-04-01T04:36:02.172443
{ "authors": [ "cfvaughnii", "jbdatko", "quartzjer" ], "repo": "telehash/telehash-c", "url": "https://github.com/telehash/telehash-c/issues/63", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1960588353
Update User.py Profile no longer works. Profile works.
gharchive/pull-request
2023-10-25T05:47:43
2025-04-01T04:36:02.227181
{ "authors": [ "teleportx", "yawaflua" ], "repo": "teleportx/Py-SPW", "url": "https://github.com/teleportx/Py-SPW/pull/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
348297706
Should include common guidelines and workflow examples Right now there's always some initial friction in moving from a WIP module into a RELEASE one. I would suggest we move some of Kristian's best practices and some of the common mistakes into some kind of GUIDELINES.md file, where we can record design choices and preferences that the team has. Use variable names/descriptions from the official resource as much as possible. Do not prefix all variable names with lambda_ when this is mainly a lambda function. At least to me, lambda_ is moot when most of the variables are passed only to the lambda resource Second this! Should be a living document that we can collaborate on to keep things somewhat consistent between modules (which makes them easier to use also) 👍
gharchive/issue
2018-08-07T12:23:47
2025-04-01T04:36:02.274266
{ "authors": [ "itsdalmo", "rickardl" ], "repo": "telia-oss/terraform-module-template", "url": "https://github.com/telia-oss/terraform-module-template/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
474587725
Create PSR Tracker PSR tracker to update/initialize PSR in DB and (future) check for GitHub version/changes. done
gharchive/issue
2019-07-30T13:32:39
2025-04-01T04:36:02.275086
{ "authors": [ "mdcoon" ], "repo": "tellor-io/TellorMiner", "url": "https://github.com/tellor-io/TellorMiner/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1388501318
Design new create ballot Is your feature request related to a problem? Please describe. The ballot creation form is hard to use and not intuitive at all. Also, it's tiny and everything is crowded. Describe the solution you'd like Design that uses the whole screen and helps the user to focus on creating their ballot. Acceptance criteria The design should assume the user doesn't have previous knowledge of how polls work The design must work for mobile, tablet and desktop Design focused on polls ❓ Reflect current naming (e.g. DAO instead of treasury) Indicate to the user how each voting type works Indicate to the user how each DAO works Max and min options with explanation If the user is not in any DAO, display Join a DAO & create poll instead of Create Poll. This should have an explanation of why joining a DAO is required. Add explanation for stake/liquid choices in ballot configuration when treasury allows a choice. Use another component to create options that look more friendly. This issue is already addressed in #79, maybe we can rename the issue and continue with it @poplexity should we only focus on polls for this form? We're not entirely sure. I will suggest splitting the Ballot creation form into several steps using this wonderful component "Stepper" https://quasar.dev/vue-components/stepper 1 - Ballot main info Title Description Image URL (with preview) file.pdf 2 - DAO selection (x) Telos community ( ) Custom DAO "information about the selected option" If Custom DAO is selected then show list of current registered DAOs for user Button "Join new DAO" to redirect the user to DAOs section Vote with: ( ) Liquid tokens ( ) Staked token (x) Both 3 - Voting options (x) Simple Yes or No ( ) Yes/No/Abstain ( ) Multiple option "information about the selected option" if any of the first two are selected, then show two or three fields to edit the default words if Multiple Options is selected, then show an editable item list like a simple TODO-list app: input for the displayable text of the new option "add" button list of already created items with a cross button to delete it from the list button: "clear list" checkbox: [x] "Voters can only choose one option" if the user unchecks it, then we show: "information about 'min' and 'max' fields" input "Min" input "max" 4 - Open Voting checkbox: [x] "Open voting right away" if the user keeps the option checked then show input: "Ending date" 5 - Finish show preview with the info alert about the cost in TLOS Button "Confirm" There's a video for the current approach: https://github.com/telosnetwork/app-telos-native/pull/188
gharchive/issue
2022-09-27T23:20:14
2025-04-01T04:36:02.288465
{ "authors": [ "Viterbo", "karynemayer" ], "repo": "telosnetwork/app-telos-native", "url": "https://github.com/telosnetwork/app-telos-native/issues/146", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2376532220
🛑 MYSITE2 is down In 8979fd2, MYSITE2 ($SECRET_SITE_OBOERU) was down: HTTP code: 0 Response time: 0 ms Resolved: MYSITE2 is back up in 5870835 after 46 minutes.
gharchive/issue
2024-06-27T01:35:26
2025-04-01T04:36:02.321152
{ "authors": [ "templain" ], "repo": "templain/mywatcher", "url": "https://github.com/templain/mywatcher/issues/3407", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2286379211
Java SDK seems to not play well with Replay tests. Setting child workflow ID using the Temporal library seems to be throwing a non-deterministic error Details attached on the temporal forum support request - https://community.temporal.io/t/workflowreplayer-replayworkflowexecution-throwing-nondeterministicexception-when-reading-workflow-id-from-workflow/12040/6 Copied from the post above ======== Here is a sample repo which replicates the error. The repo also has replay tests and sample workflow history if required under the src/test folder https://github.com/gauravojha/learningTemporal/tree/master So what we are trying to do is (Also mentioned here, hope this helps - learningTemporal/src/main/java/dev/ojha/learningtemporal/workflows/HelloWorldWorkflowImpl.java at master · gauravojha/learningTemporal · GitHub) Have a parent workflow trigger a child workflow What we are trying to do is reuse the parent workflow's ID (this we get by using the provided Temporal utility method - Workflow.getInfo().getWorkflowId()), and then use this as a prefix and just append "-child" to it. So for instance if the parent workflow ID is "helloWorld", we are assigning our child workflow ID as "helloWorld-child", as this simplifies some use cases of tracing and audit for us. When we do this, and replay the payload through the replay tests, we get a non-deterministic error. I have attached the trace for this specific case at the end Failure handling event 5 of type 'EVENT_TYPE_START_CHILD_WORKFLOW_EXECUTION_INITIATED' during replay. [TMPRL1100] Command COMMAND_TYPE_START_CHILD_WORKFLOW_EXECUTION doesn't match event EVENT_TYPE_START_CHILD_WORKFLOW_EXECUTION_INITIATED with EventId=5 on check workflowId with an expected value 'workflow_id_in_replay-child' and an actual value 'HelloWorldWorkflow-child' However, if we don't reuse the parent's workflow ID, and let temporal use a random ID as the workflow ID of the child, then the replay tests pass. How are you replaying history? If you are using WorkflowReplayer.replayWorkflowExecution then when you construct WorkflowExecutionHistory you need to specify the workflow ID @Quinn-With-Two-Ns sorry, I am not sure I understand.. I am not doing anything complicated in my code if you see it. My workflow implementation is pretty simple, it only triggers a child workflow. I am not doing any special replaying in there. Any replay is being done in the replay test which is written using instructions at https://docs.temporal.io/dev-guide/java/durable-execution#add-replay-test This is the workflow implementation - https://github.com/gauravojha/learningTemporal/blob/652d2ba76e80b7d1a96946fae42b3d1dec0b3b3f/src/main/java/dev/ojha/learningtemporal/workflows/HelloWorldWorkflowImpl.java#L35-L40 This is the only place where any replay happens (in the WorkflowReplayer in a unit test) - https://github.com/gauravojha/learningTemporal/blob/652d2ba76e80b7d1a96946fae42b3d1dec0b3b3f/src/test/java/dev/ojha/learningtemporal/TemporalTesting.java#L45 From the forum question where this is originally posted, it seems to be a bug in the SDK? Got it.. Let me try @Quinn-With-Two-Ns, and thank you for pointing in the right direction .. I couldn't find this recommendation in the replay test documentation.
If this is a critical statement, I feel this should be mentioned in the documentation, since as a user I may not end up taking the first recommendation from the doc? You need to specify a workflow ID when using the replayer since it is not always in the history. Yes, I agree we should improve the documentation here. Passing the workflow ID is only important if the workflow uses the workflow ID as part of its logic, which most don't.
gharchive/issue
2024-05-08T20:19:31
2025-04-01T04:36:02.341238
{ "authors": [ "Quinn-With-Two-Ns", "gauravojha" ], "repo": "temporalio/sdk-java", "url": "https://github.com/temporalio/sdk-java/issues/2057", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
797469306
Possibility to set middleware to tenant routes This PR adds the possibility to set middleware on tenant-specific routes. Reason In my tenant-specific views, when I try to access the global variable $errors, it's undefined. That is because the dynamically loaded routes.php file is missing the web group middleware. Applying the changes from this PR, that problem is solved. Can't you just manually add the middleware with a route group in the tenant routes file? We also don't add a controller prefix (before laravel 8 at least) or any of the other stuff that routes/web.php gets. I'm not specifically opposed to this, but I'm not sure it's really adding any benefit since it's already possible to do what you're asking.
gharchive/pull-request
2021-01-30T16:17:41
2025-04-01T04:36:02.394138
{ "authors": [ "dneykov", "fletch3555" ], "repo": "tenancy/multi-tenant", "url": "https://github.com/tenancy/multi-tenant/pull/979", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
640246213
Unable to restore model using crf_decode System information OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Cataline 10.15.3 TensorFlow version and how it was installed (source or binary): 2.2.0 (from pypi) TensorFlow-Addons version and how it was installed (source or binary): 0.7.0 (from pypi) Python version: 3.7.4 Is GPU used? (yes/no): no Describe the bug When trying to load a model with tf.keras.models.load_model which uses tfa.text.crf.crf_decode it fails on restoring the custom object. This seems to occur as it can't build the object because the init method takes extra arguments. Below there is a simple (standalone) example that fails and the traceback of the error. Code to reproduce the issue import numpy as np import tensorflow as tf import tensorflow_addons as tfa class CRFLoss: def __init__(self, transition): self.transition = transition self.__name__ = "CRFLoss" def __call__(self, true, pred): log_likelihood, _ = tfa.text.crf.crf_log_likelihood( inputs=pred, tag_indices=true, sequence_lengths=tf.ones(tf.shape(true)[0]) * 10, transition_params=self.transition, ) return tf.reduce_mean(-log_likelihood) def get_config(self): """Return helper config.""" config = {"transition": self.transition.numpy().tolist()} return config @classmethod def from_config(cls, config): config["transition"] = tf.constant(config["transition"], dtype=tf.float32) return cls(**config) def create_model(): input_tensor = tf.keras.Input(shape=(10, 3), dtype=tf.float32, name="input_tensor") seq_len = tf.keras.Input(shape=(), dtype=tf.int32, name="seq_len") transition = tf.constant([[1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=tf.float32) output = tf.multiply(input_tensor, tf.constant(1.0)) decoded, _ = tfa.text.crf.crf_decode(input_tensor, transition, seq_len) loss = CRFLoss(transition) model = tf.keras.Model(inputs=[input_tensor, seq_len], outputs=[output, decoded], name="example_model") model.compile(optimizer="Adam", loss={"tf_op_layer_Mul": loss}) return model x_data = { "input_tensor": np.random.random_sample((5, 10, 3)).astype(dtype=np.float32), "seq_len": np.array([10] * 5, dtype=np.int32), } y_data = {"tf_op_layer_Mul": np.random.randint(0, 3, (5, 10))} model = create_model() model.fit(x_data, y_data) model.predict({"input_tensor": tf.expand_dims(x_data["input_tensor"][0], 0), "seq_len": np.array([10])}) tf.saved_model.save(model, "./example_model/") tf.keras.backend.clear_session() model = tf.keras.models.load_model( "./example_model/", custom_objects={"CrfDecodeForwardRnnCell": tfa.text.crf.CrfDecodeForwardRnnCell, "CRFLoss": CRFLoss} ) model.fit(x_data, y_data) model.predict({"input_tensor": tf.expand_dims(x_data["input_tensor"][0], 0), "seq_len": np.array([10])}) Other info / logs Traceback (most recent call last): File "addon_issue_snippet.py", line 60, in <module> custom_objects={"CrfDecodeForwardRnnCell": tfa.text.crf.CrfDecodeForwardRnnCell, "CRFLoss": CRFLoss} File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 190, in load_model return saved_model_load.load(filepath, compile) File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 116, in load model = tf_load.load_internal(path, loader_cls=KerasObjectLoader) File "venv/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal export_dir) File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 188, in __init__ super(KerasObjectLoader, self).__init__(*args, **kwargs) File 
"venv/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py", line 123, in __init__ self._load_all() File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 209, in _load_all self._layer_nodes = self._load_layers() File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 309, in _load_layers layers[node_id] = self._load_layer(proto.user_object, node_id) File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 337, in _load_layer obj, setter = revive_custom_object(proto.identifier, metadata) File "venv/lib/python3.7/site-packages/tensorflow/python/keras/saving/saved_model/load.py", line 782, in revive_custom_object .format(identifier)) ValueError: Unable to restore custom object of type _tf_keras_rnn_layer currently. Please make sure that the layer implements `get_config`and `from_config` when saving. In addition, please use the `custom_objects` arg when calling `load_model()`. Possible solution I've tried to debug this a bit and the solution I've found to work is to implement get_config and from_config on tfa.text.crf.CrfDecodeForwardRnnCell. For now I'm patching those methods in the object but it isn't very clean. If the proposed solution is good I'd gladly add the methods and create a PR. Hi @AntPeixe, your solution is correct IMO. Feel free to open an PR and request my review. Thank you!
gharchive/issue
2020-06-17T08:27:26
2025-04-01T04:36:02.435586
{ "authors": [ "AntPeixe", "WindQAQ" ], "repo": "tensorflow/addons", "url": "https://github.com/tensorflow/addons/issues/1934", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
297645286
Where to see who wins when playing with gnu go Hi, I know this is a very naive question. I am new to Go. I played a game using my model against gnu go following the command in the readme (the twogtp command) of this repository. Anyone can tell me how to check who wins? The gnugo.sgf is below: (;FF[4]CA[UTF-8]AP[gogui-twogtp:1.4.9]SZ[9] KM[6.5]PB[GNU Go]PW[Somebot-000001-model]DT[2018-02-15] C[Black command: gnugo --mode gtp White command: python3 main.py gtp -l models/000001-model Black version: 3.8 White version: 0.1 Result[Black]: B+78.5 Result[White]: ? Host: xliu-linux3 (Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz) Date: February 15, 2018 4:09:07 PM PST] ;B[ee];W[dg];B[cf];W[di];B[cd];W[fa];B[gb];W[ig];B[gg];W[ei] ;B[ff];W[gi];B[hh];W[fc];B[fb];W[eh];B[cg];W[if];B[he];W[cc] ;B[bc];W[ie];B[hd];W[da];B[dc];W[ih];B[ch];W[ga];B[ha];W[ad] ;B[ed];W[be];B[bd];W[de];B[df];W[fg];B[hi];W[eb];B[hb];W[gd] ;B[db];W[ef];B[gh];W[bh];B[ci];W[ca];B[cb];W[ac];B[ab];W[af] ;B[ce];W[id];B[ic];W[ag];B[ba];W[gf];B[fe];W[ah];B[fh];W[ec] ;B[hf];W[ii];B[hg];W[ae];B[hc];W[ig];B[dd];W[if];B[];W[ge] ;B[];W[ie];B[];W[id];B[];W[ib];B[];W[]) And there is a gnugo.dat file: Black: GNU Go BlackCommand: gnugo --mode gtp BlackLabel: GNU Go BlackVersion: 3.8 Date: February 15, 2018 4:08:51 PM PST Host: xliu-linux3 (Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz) Komi: 6.5 Referee: - Size: 9 White: Somebot-000001-model WhiteCommand: python3 main.py gtp -l models/000001-model WhiteLabel: Somebot-000001-model WhiteVersion: 0.1 Xml: 0 #GAME RES_B RES_W RES_R ALT DUP LEN TIME_B TIME_W CPU_B CPU_W ERR ERR_MSG 0 B+78.5 ? ? 0 - 78 10.1 4.8 10.2 0 0 at the top of the sgf it says: Result[Black]: B+78.5 More details on the sgf specifications can be found here
gharchive/issue
2018-02-16T00:28:27
2025-04-01T04:36:02.473712
{ "authors": [ "amj", "weedwind" ], "repo": "tensorflow/minigo", "url": "https://github.com/tensorflow/minigo/issues/94", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
273205377
get the decoder hidden states after decoding After I pass an explicit output layer like here, I see that the decoder output after dynamic_decode is the output distribution of size |V| where V is the vocab. How can I recover the decoder hidden states? A follow up question: In tf.contrib.seq2seq.BasicDecoder, the output_layer parameter is optional. Then, while doing greedy decoding, if I don't pass any value for that parameter, will it perform argmax on RNN hidden states and pass the output to the next time step decoder input (which is actually unintended)? Need attention, please! @rajarsheem Yes, the outputs of the dynamic_decode in the NMT codebase are the vocab logits. If you don't give BasicDecoder an output_layer and are using GreedyEmbeddingHelper, I think it will use RNN hidden states as the logits, and argmax on the hidden state to try to get a word id. Not passing the output_layer is useful for using other helpers, such as ScheduledOutputTrainingHelper You may implement a custom helper that takes the output layer, generates the vocab logits within the helper, and returns both vocab logits and hidden states. It's kind of strange that the hidden states are, by default, not exposed :/ "I think it will use RNN hidden states as the logits, and argmax on the hidden state to try to get a word id." It is quite undesirable. @rajarsheem During training, we can apply the output_layer after all time steps finished here because we have the word ids in the target language. So the outputs here contain the rnn outputs (which is the h state when using LSTM). During inference, we have to pass the rnn outputs through the output layer at each time step to get the next word id. @oahziur I don't get your first point. How can we compute the hidden states of all steps in the first place without using the output layer and taking the argmax to feed in the next step input? @rajarsheem we don't feed hidden state argmax because we have the target ids. See how the TrainingHelper is created https://github.com/tensorflow/nmt/blob/master/nmt/model.py#L373. Yeah, I get your point. But if I am not using teacher forcing (i.e., using GreedyEmbeddingHelper), I would want my predicted ids to be used. And for that, I would need the output layer to be used as a part of the decoder. So, I need to hack my way into it to use output_layer as a part of the decoder and also make dynamic_decode return hidden states. Any suggestions about what the flow should be? @rajarsheem Yes, I think you can implement a custom GreedyEmbeddingHelper (which accepts an output layer), so you don't need to pass in the output layer to the BasicDecoder. For example, you can insert code before here to convert the rnn_outputs to logits. This is what I did: added a new attribute final_output in the BasicDecoderOutput namedtuple that shall store outputs whenever there is an output_layer in BasicDecoder. In the step() of BasicDecoder, final_outputs, which is the linearly transformed cell_outputs, is what goes inside sample and is also sent as a parameter to outputs, which is essentially a BasicDecoderOutput and is returned. A few other changes were there. Consequently, when dynamic_decode returns a BasicDecoderOutput, it already has an attribute final_output holding the unnormalized logits, with rnn_output being the cell output. @oahziur @rajarsheem Could you help with a similar issue #298? Thanks. +1 because users may have use cases where the decoder's outputs are needed without being passed through the final feedforward layer. 
Example: the decoder uses scheduled sampling, so the dense layer is needed, but the user wants to use sampled softmax and hence needs the rnn outputs without being passed through the dense layer. @rajarsheem I had to create an adapted version of BasicDecoder for similar reasons. Often one wants an RNN to output not only logits but also some additional loss terms or summary metrics. The functionality of the output_layer is to extract logits from a cell output, which is necessary for the mechanics in the step function. However, if an output_layer is present, step then currently returns the logits (unfortunately called cell_outputs as well) rather than the original outputs.
gharchive/issue
2017-11-12T06:01:01
2025-04-01T04:36:02.643815
{ "authors": [ "danielwatson6", "oahziur", "peterpan2018", "rajarsheem", "schmiflo" ], "repo": "tensorflow/nmt", "url": "https://github.com/tensorflow/nmt/issues/170", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
132753871
Build pip package from source failed I got the following error while building the pip package from the master version tensorflow/core/BUILD:75:1: Target '//google/protobuf:protobuf_python_genproto' is not visible from target '//tensorflow/core:protos_all_py_genproto'. Check the visibility declaration of the former target if you think the dependency is legitimate. Bazel version 0.1.4 and 0.1.5 @damienmg: any ideas here? Did something change about visibility targets? What Bazel version should we use? The site says 0.1.1 If you are building from HEAD, you need to look at the 'master' version of the docs, not 0.6.0. Which says: "Follow instructions here to install the dependencies for bazel. Then download the latest stable bazel version using the installer for your system and run the installer as mentioned there" And currently bazel stable is 0.1.5. (Bazel is evolving very quickly, along with us, so there's some unavoidable churn at the moment). Successfully built bleeding edge TensorFlow pip package using 0.1.4 just now. Does anyone have this problem and manage to fix it? I used both bazel-0.1.5-installer-linux-x86_64.sh and bazel-0.1.4-installer-linux-x86_64.sh with no luck. The version of Tensorflow I tried is c4207090d68151fb967672a90ea6d574e62cfba0 (the latest master version at the moment) What version of protobuf do you have? Did you accidentally move the submodule ref in the tensorflow repo to something too new or too old? I finally managed to build it with a fresh clone of the repository. Maybe I had an old build artifact that broke the build. My settings: bazel 0.1.5, python 3.4 I think the submodule pointer to protobuf may have been confused. Closing. @martinwicke This is my protobuf version: git submodule status +55ad57a235c009d0414aed1781072adda0c89137 google/protobuf (v3.0.0-alpha-4-179-g55ad57a) I didn't recall moving any submodule ref. When I reverted to some previous Tensorflow version (on Feb 8), e.g., df66b9fe049cb17e58454f309b662bdaf0d14fdb, I didn't have any problem with compiling. I did a fresh clone as @jrabary did and it works fine now. Thanks, mates!
gharchive/issue
2016-02-10T16:43:49
2025-04-01T04:36:02.688438
{ "authors": [ "Dringite", "jrabary", "martinwicke", "mondatu", "vrv" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/1041", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
317705993
use freeze_graph and report TypeError: main() takes exactly 1 argument (0 given) ubuntu@vm:~/code/tensorflow/tensorflow$ freeze_graph \ --input_graph=/tmp/mobilenet_v1_1.0_224.pb --input_meta_graph=/tmp/mobilenet_v1_1.0_224.ckpt.meta --input_binary=true --input_checkpoint=/tmp/checkpoints/mobilenet_v1_1.0_224.ckpt --output_node_names=MobileNetV1/Predictions/Reshape_1 --output_graph=/tmp/mobilenet_v1_1.0_224_frozen.pb /usr/local/lib/python2.7/dist-packages/h5py/init.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type. from ._conv import register_converters as _register_converters Traceback (most recent call last): File "/usr/local/bin/freeze_graph", line 11, in sys.exit(main()) TypeError: main() takes exactly 1 argument (0 given) Try input_meta_graph=/tmp/mobilenet_v1_1.0_224.ckpt instead of .meta I found a solution: python -m tensorflow.python.tools.freeze_graph --input_graph=/tmp/mobilenet_v1_1.0_224.pb --input_checkpoint=/tmp/checkpoints/mobilenet_v1_1.0_224.ckpt --input_binary=true --output_graph=/tmp/mobilenet_v1_1.0_224_frozen.pb --output_node_names=MobilenetV1/Predictions/Reshape_1
gharchive/issue
2018-04-25T16:37:45
2025-04-01T04:36:02.694023
{ "authors": [ "Swordancer", "dinglong1020" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/18865", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
340247849
tf.keras multi input models don't work when using tf.data.Dataset System information Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.13.5 and Debian GNU/Linux 9 (stretch) TensorFlow installed from (source or binary): binary TensorFlow version (use command below): v1.9.0-rc2-359-g95cfd8b3d9 1.10.0-dev20180711 also reproduces on v1.9.0 Python version: 3.6.5 and 3.5.3 Bazel version (if compiling from source): GCC/Compiler version (if compiling from source): CUDA/cuDNN version: None GPU model and memory: None Exact command to reproduce: see below Describe the problem tf.keras multi input models don't work when used together with tf.data.Dataset due to broken input validation checks. This problem reproduces both on tf@1.9.0 and the latest nightly. @fchollet Do you have any ideas what's going on here, or am I missing something obvious? 
Source code / logs Multi input model Consider the following toy model: import numpy as np import tensorflow as tf from tensorflow import keras data_a = np.array([300, 455, 350, 560, 700, 800, 200, 250], dtype=np.float32) labels = np.array([455, 350, 560, 700, 800, 200, 250, 300], dtype=np.float32) data_b = np.array([200, 255, 350, 470, 600, 300, 344, 322], dtype=np.float32) data_a = np.reshape(data_a, (8, 1, 1)) data_b = np.reshape(data_b, (8, 1, 1)) x = keras.layers.Input(shape=(1, 1), name='input_x') y = keras.layers.Input(shape=(1, 1), name='input_y') admi = keras.layers.LSTM(40, return_sequences=False)(x) pla = keras.layers.LSTM(40, return_sequences=False)(y) out = keras.layers.concatenate([admi, pla], axis=-1) output = keras.layers.Dense(1, activation='sigmoid')(out) model = keras.models.Model(inputs=[x, y], outputs=output) model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) Using numpy data When fitting using numpy data this works as expected when passing a list or dictionary of inputs: model.fit([data_a, data_b], labels, batch_size=2, epochs=10) model.fit({'input_x': data_a, 'input_y': data_b}, labels, batch_size=2, epochs=10) Using tf.data.Dataset.from_tensor_slices dictionary When trying the same with a tf.data.Dataset the following fails due to incorrect input validation: dataset = tf.data.Dataset.from_tensor_slices(({'input_x': data_a, 'input_y': data_b}, labels)).batch(2).repeat() model.fit(dataset, epochs=10, steps_per_epoch=4) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-6-d35bacd274cc> in <module>() 1 dataset = tf.data.Dataset.from_tensor_slices(({'input_x': data_a, 'input_y': data_b}, labels)).batch(2).repeat() ----> 2 model.fit(dataset, epochs=10, steps_per_epoch=4) /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 1276 steps_name='steps_per_epoch', 1277 steps=steps_per_epoch, -> 1278 validation_split=validation_split) 1279 1280 # Prepare validation data. /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split) 915 feed_output_shapes, 916 check_batch_axis=False, # Don't enforce the batch size. --> 917 exception_prefix='target') 918 919 # Generate sample-wise weight values given the `sample_weight` and /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 180 ': expected ' + names[i] + ' to have ' + 181 str(len(shape)) + ' dimensions, but got array ' --> 182 'with shape ' + str(data_shape)) 183 if not check_batch_axis: 184 data_shape = data_shape[1:] ValueError: Error when checking target: expected dense to have 2 dimensions, but got array with shape (None,) Using tf.data.Dataset.from_generator dictionary However using the same network together with tf.data.Dataset.from_generator works. 
Probably because less validation is done: def generator(): while True: for i in np.random.permutation(8): yield {'input_x': data_a[i], 'input_y': data_b[i]}, labels[i] dataset = tf.data.Dataset.from_generator(generator, ({'input_x': tf.float32, 'input_y': tf.float32}, tf.float32)).batch(2) model.fit(dataset, epochs=10, steps_per_epoch=4) Using tf.data.Dataset tuple Passing the multi-input as a tuple to the model both datasets generated with from_tensor_slices and from_generator fail: dataset = tf.data.Dataset.from_tensor_slices(((data_a, data_b), labels)).batch(2).repeat() model.fit(dataset, epochs=10, steps_per_epoch=4) def generator(): while True: for i in np.random.permutation(8): yield (data_a[i], data_b[i]), labels[i] dataset = tf.data.Dataset.from_generator(generator, ((tf.float32, tf.float32), tf.float32)).batch(2) model.fit(dataset, epochs=10, steps_per_epoch=4) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-7-512a95f0c2a7> in <module>() 1 dataset = tf.data.Dataset.from_tensor_slices(((data_a, data_b), labels)).batch(2).repeat() ----> 2 model.fit(dataset, epochs=10, steps_per_epoch=4) /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs) 1276 steps_name='steps_per_epoch', 1277 steps=steps_per_epoch, -> 1278 validation_split=validation_split) 1279 1280 # Prepare validation data. /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split) 876 feed_input_shapes, 877 check_batch_axis=False, # Don't enforce the batch size. --> 878 exception_prefix='input') 879 880 if y is not None: /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix) 141 data = data.values if data.__class__.__name__ == 'DataFrame' else data 142 data = [data] --> 143 data = [standardize_single_array(x) for x in data] 144 145 if len(data) != len(names): /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in <listcomp>(.0) 141 data = data.values if data.__class__.__name__ == 'DataFrame' else data 142 data = [data] --> 143 data = [standardize_single_array(x) for x in data] 144 145 if len(data) != len(names): /usr/local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_single_array(x) 79 elif tensor_util.is_tensor(x): 80 return x ---> 81 elif x.ndim == 1: 82 x = np.expand_dims(x, 1) 83 return x AttributeError: 'tuple' object has no attribute 'ndim' I have the same problem and I I have also multiple input dataset. But not sure if this problem caused by the multiple input datset. And I am using tensorflow 1.9 In order to be able to use dataset iterator in model.fit So If I do the following : dataset = tf.data.TFRecordDataset(train.tf_records).map(_parse_function).batch(20).repeat() model.fit(dataset) I got : AttributeError: "'RepeatDataset' object has no attribute 'ndim'" Nagging Assignee @rohan100jain: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. I could reproduce the error. 
Thanks for taking the time and reproducing it. Did you have a chance to checked out my fix in #20753? Theres also a related PR that adds support for using tuples as multi dim inputs: #20136 Nagging Assignee @fchollet: It has been 17 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. My situation seems similar. The iterator of dataset fed to model.fit is made from tf.data.Dataset.zip() xy_ds = ( tf.data.Dataset.zip((audio_ds, label_ds)) .batch( batch_size=batch_size, # drop_remainder=True if is_training else False ) .repeat(repeat) .prefetch(tf.contrib.data.AUTOTUNE) ) Both audio_ds (input) and label_ds (output) are instances of tf.data.TextLineDataset. tf.data.TextLineDataset(id_path) .map(load_audio, num_parallel_calls=N_READ_THREAD) Before fed to the model, its iterator is created. tr_iterator = tr_set.make_one_shot_iterator() tr_iterator.get_next() (<tf.Tensor 'IteratorGetNext:0' shape=<unknown> dtype=float32>, <tf.Tensor 'IteratorGetNext:1' shape=<unknown> dtype=float32>) And this is the error message when model.fit() is called. File "data_io.py", line 127, in <module> model.fit( File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 950, in fit batch_size=batch_size) File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 749, in _standardize_user_data exception_prefix='input') File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py", line 91, in standardize_input_data data = [standardize_single_array(x) for x in data] File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py", line 91, in <listcomp> data = [standardize_single_array(x) for x in data] File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py", line 26, in standardize_single_array elif x.ndim == 1: AttributeError: 'Iterator' object has no attribute 'ndim' tensorflow: 1.9.0, keras:2.2.2 I think I discovered the problem fro my situation. The problem was I am using the standalone Keras. Not the one imported from tendorflow. So the new features of feeding the iterator directly to model.fit() is valid only when you are using tf.Keras not the standalone Keras. @was84san Wow, same here, and now it seems solved. Thanks! Nagging Assignee @fchollet: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. @lgeiger Did you figure out the solution for multiple input using tf.Dataset API Hi, @was84san. As you mentioned, I am using tf.kera. But the problem still exists. Do you have any idea? Thanks! Which problem exactly!, feeding multiple inputs, or feeding the iterator directly to model.fit. I figure out only the last one. @hhwxxx I was also unable to use model.fit() with a nested Dataset iterator for multi-input and multi-output models (while using tf.keras) on version 1.10. Installing tf-gpu-nightly (my specific version is now 1.12.0-dev20180918) seemed to resolve this problem for me. The final example at this website is interesting: https://www.tensorflow.org/guide/datasets def dataset_input_fn(): filenames = ["/var/data/file1.tfrecord", "/var/data/file2.tfrecord"] dataset = tf.data.TFRecordDataset(filenames) # Use `tf.parse_single_example()` to extract data from a `tf.Example` # protocol buffer, and perform any additional per-record preprocessing. 
def parser(record): keys_to_features = { "image_data": tf.FixedLenFeature((), tf.string, default_value=""), "date_time": tf.FixedLenFeature((), tf.int64, default_value=""), "label": tf.FixedLenFeature((), tf.int64, default_value=tf.zeros([], dtype=tf.int64)), } parsed = tf.parse_single_example(record, keys_to_features) # Perform additional preprocessing on the parsed data. image = tf.image.decode_jpeg(parsed["image_data"]) image = tf.reshape(image, [299, 299, 1]) label = tf.cast(parsed["label"], tf.int32) return {"image_data": image, "date_time": parsed["date_time"]}, label # Use `Dataset.map()` to build a pair of a feature dictionary and a label # tensor for each example. dataset = dataset.map(parser) dataset = dataset.shuffle(buffer_size=10000) dataset = dataset.batch(32) dataset = dataset.repeat(num_epochs) # Each element of `dataset` is tuple containing a dictionary of features # (in which each value is a batch of values for that feature), and a batch of # labels. return dataset now, how to define a model that accepts and trains correctly with that datase? Is the full example available somewhere? Is it necessary to use dataset.make_one_shot_iterator()? @jashshopin Thanks for pointing that out, apparently you can pass the zipped dataset directly into model.fit(). The example should still work for those who might want to use a one-shot iterator or initializable iterator as well. thanks @gabrielibagon. I have something like that here. Although I used the keras generator format because I could not deal with a video input pipeline using tensorflow methods. I might refactor it to tf.Dataset but it's working for now. :) Nagging Assignee @fchollet: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly. This isn't an issue on tensorflow 1.12 and above anymore. Thanks for the help everybody. @ Igeiger, I tried to pass multiple inputs as a list of tf.dataset api to model fit directly, like this model.fit ( [dataset1_iterator, dataset2_iterator] , .....) then I got this error ` /home/wassan/tensorflow/venv/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, cla$s_weight, batch_size, check_steps, steps_name, steps, validation_split) 990 x, y, sample_weight = next_element 991 x, y, sample_weights = self._standardize_weights(x, y, sample_weight, --> 992 class_weight, batch_size) 993 return x, y, sample_weights 994 /home/wassan/tensorflow/venv/lib/python2.7/site-packages/tensorflow/python/keras/engine/training.pyc in _standardize_weights(self, x, y, sample_weight, class$weight, batch_size) 1115 feed_input_shapes, 1116 check_batch_axis=False, # Don't enforce the batch size. 
-> 1117 exception_prefix='input') 1118 1119 if y is not None: /home/wassan/tensorflow/venv/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_utils.pyc in standardize_input_data(data, names, shapes, che$k_batch_axis, exception_prefix) 282 data = data.values if data.class.name == 'DataFrame' else data 283 data = [data] --> 284 data = [standardize_single_array(x) for x in data] 285 286 if len(data) != len(names): /home/wassan/tensorflow/venv/lib/python2.7/site-packages/tensorflow/python/keras/engine/training_utils.pyc in standardize_single_array(x) 216 if x is None: 217 return None --> 218 if x.shape is not None and len(x.shape) == 1: 219 if tensor_util.is_tensor(x): 220 return array_ops.expand_dims(x, axis=1) AttributeError: 'PrefetchDataset' object has no attribute 'shape' And this is with tensorflow 1.12, so how you can pass multiple input using tf.dataset api with model fit not with model.fit_generator? ` @ Igeiger, I tried to pass multiple inputs as a list of tf.dataset api to model fit directly, like this model.fit ( [dataset1_iterator, dataset2_iterator] , .....) Returning a list of tensors in a single dataset and then passing it to model.fit should work. Checkout this example: https://colab.research.google.com/drive/1h3FUGBhVsXnj6oEE3JDnC0WRFF-Zu__c#scrollTo=cjvaKWOqAQ3e @lgeiger what about using dictionaries as targets? https://github.com/tensorflow/tensorflow/issues/25299#issue-404556539 I can confirm this works in tensorflow 2.0.0-rc0. Multiple input and output, even without all the zipping: (train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data() ds = tf.data.Dataset.from_tensor_slices( ((train_images, dummydata), train_images) ds.shuffle(TRAIN_BUF).repeat().batch(BATCH_SIZE) model.fit(train_dataset, steps_per_epoch=n_trainsamples//BATCH_SIZE) This still seems broken to me (in tensorflow 2.0.0-rc0). See this snippet: import tensorflow as tf from tensorflow import keras inputs = [keras.Input((1,), name="a"), keras.Input((1,), name="b")] outputs = inputs[0] + inputs[1] model = keras.Model(inputs=inputs, outputs=outputs) list_input = [tf.zeros((10, 1)), tf.ones((10, 1))] dict_input = {"a": tf.zeros((10, 1)), "b": tf.ones((10, 1))} print(model.predict(list_input)) print(model.predict(dict_input)) print(model.predict(tf.data.Dataset.from_tensors(dict_input))) # error here print(model.predict(tf.data.Dataset.from_tensors(list_input))) which gives ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [<tf.Tensor: id=47, shape=(2, 10, 1), dtype=float32, numpy= array([[[0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.], [0.]],... @drasmuss workaround could be to convert list to dictionary in Dataset ds=tf.data.Dataset.from_tensors(list_input) def to_dict(lst): return {'a':lst[0], 'b':lst[1]} ds=ds.map(to_dict) print(model.predict(ds)) I think I discovered the problem fro my situation. The problem was I am using the standalone Keras. Not the one imported from tendorflow. So the new features of feeding the iterator directly to model.fit() is valid only when you are usingtf.Kerasnot the standalone Keras. Thx, my problem solved! Just have changed import keras import tensorflow as tf to import tensorflow as tf from tensorflow import keras @johngrabner The problem in your code is that your output_type and output_shape definitions differ. 
Changing the output_type to ((tensorflow.float32, tensorflow.float32), tensorflow.float32) should to the trick. For the sake of completeness, here is a minimal example of a dataset that expects two inputs (shapes (1,32) and (1,128)): import tensorflow as tf import numpy as np def random_generator(): for i in range(100): x1, x2, y = np.random.random((1,32)), np.random.random((1,128)), np.random.random((1,1)) yield (x1, x2), y toy_dataset = tf.data.Dataset.from_generator( random_generator, output_types=((tf.float32, tf.float32), tf.float32), output_shapes=(((1,32), (1,128)), (1,1)) ) def ready_for_training(bit_0, bit_1, bit_2, bit_3, bit_4, bit_5, bit_6, bit_7, bit_8, bit_9, bit_10, bit_11, bit_12, bit_13, bit_14, bit_15, bit_16, bit_17, bit_18, bit_19, bit_20, bit_21, bit_22, bit_23, bit_24, bit_25, bit_26, bit_27, bit_28, bit_29, bit_30, bit_31, bit_32, bit_33, bit_34, bit_35, bit_36, bit_37, bit_38, bit_39, bit_40, bit_41, bit_42, bit_43, bit_44, bit_45, bit_46, bit_47, bit_48, bit_49, bit_50, bit_51, bit_52, bit_53, bit_54, bit_55, bit_56, bit_57, bit_58, bit_59, bit_60, bit_61, bit_62, bit_63, batch_size, duration, memory_size, arbitration_id): sequence_length=all_ids_length[arbitration_id] * duration # in 1 second how many messages of that specific ID max_element_size=int(len(bit_0)/(sequence_length+1)) max_element_size=max_element_size*(sequence_length+1) # maximum number of elements that can be created bit_0 = bit_0[:max_element_size];bit_1 = bit_1[:max_element_size];bit_2 = bit_2[:max_element_size];bit_3 = bit_3[:max_element_size] bit_4 = bit_4[:max_element_size];bit_5 = bit_5[:max_element_size];bit_6 = bit_6[:max_element_size];bit_7 = bit_7[:max_element_size] bit_8 = bit_8[:max_element_size];bit_9 = bit_9[:max_element_size];bit_10 = bit_10[:max_element_size];bit_11 = bit_11[:max_element_size] bit_12 = bit_12[:max_element_size];bit_13 = bit_13[:max_element_size];bit_14 = bit_14[:max_element_size];bit_15 = bit_15[:max_element_size] bit_16 = bit_16[:max_element_size];bit_17 = bit_17[:max_element_size];bit_18 = bit_18[:max_element_size];bit_19 = bit_19[:max_element_size] bit_20 = bit_20[:max_element_size];bit_21 = bit_21[:max_element_size];bit_22 = bit_22[:max_element_size];bit_23 = bit_23[:max_element_size] bit_24 = bit_24[:max_element_size];bit_25 = bit_25[:max_element_size];bit_26 = bit_26[:max_element_size];bit_27 = bit_27[:max_element_size] bit_28 = bit_28[:max_element_size];bit_29 = bit_29[:max_element_size];bit_30 = bit_30[:max_element_size];bit_31 = bit_31[:max_element_size] bit_32 = bit_32[:max_element_size];bit_33 = bit_33[:max_element_size];bit_34 = bit_34[:max_element_size];bit_35 = bit_35[:max_element_size] bit_36 = bit_36[:max_element_size];bit_37 = bit_37[:max_element_size];bit_38 = bit_38[:max_element_size];bit_39 = bit_39[:max_element_size] bit_40 = bit_40[:max_element_size];bit_41 = bit_41[:max_element_size];bit_42 = bit_42[:max_element_size];bit_43 = bit_43[:max_element_size] bit_44 = bit_44[:max_element_size];bit_45 = bit_45[:max_element_size];bit_46 = bit_46[:max_element_size];bit_47 = bit_47[:max_element_size] bit_48 = bit_48[:max_element_size];bit_49 = bit_49[:max_element_size];bit_50 = bit_50[:max_element_size];bit_51 = bit_51[:max_element_size] bit_52 = bit_52[:max_element_size];bit_53 = bit_53[:max_element_size];bit_54 = bit_54[:max_element_size];bit_55 = bit_55[:max_element_size] bit_56 = bit_56[:max_element_size];bit_57 = bit_57[:max_element_size];bit_58 = bit_58[:max_element_size];bit_59 = bit_59[:max_element_size] bit_60 = bit_60[:max_element_size];bit_61 
= bit_61[:max_element_size];bit_62 = bit_62[:max_element_size];bit_63 = bit_63[:max_element_size] bit_0 = np.array(bit_0).reshape(-1, sequence_length + 1);bit_1 = np.array(bit_1).reshape(-1, sequence_length + 1) bit_2 = np.array(bit_2).reshape(-1, sequence_length + 1);bit_3 = np.array(bit_3).reshape(-1, sequence_length + 1) bit_4 = np.array(bit_4).reshape(-1, sequence_length + 1);bit_5 = np.array(bit_5).reshape(-1, sequence_length + 1) bit_6 = np.array(bit_6).reshape(-1, sequence_length + 1);bit_7 = np.array(bit_7).reshape(-1, sequence_length + 1) bit_8 = np.array(bit_8).reshape(-1, sequence_length + 1);bit_9 = np.array(bit_9).reshape(-1, sequence_length + 1) bit_10 = np.array(bit_10).reshape(-1, sequence_length + 1);bit_11 = np.array(bit_11).reshape(-1, sequence_length + 1) bit_12 = np.array(bit_12).reshape(-1, sequence_length + 1);bit_13 = np.array(bit_13).reshape(-1, sequence_length + 1) bit_14 = np.array(bit_14).reshape(-1, sequence_length + 1);bit_15 = np.array(bit_15).reshape(-1, sequence_length + 1) bit_16 = np.array(bit_16).reshape(-1, sequence_length + 1);bit_17 = np.array(bit_17).reshape(-1, sequence_length + 1) bit_18 = np.array(bit_18).reshape(-1, sequence_length + 1);bit_19 = np.array(bit_19).reshape(-1, sequence_length + 1) bit_20 = np.array(bit_20).reshape(-1, sequence_length + 1);bit_21 = np.array(bit_21).reshape(-1, sequence_length + 1) bit_22 = np.array(bit_22).reshape(-1, sequence_length + 1);bit_23 = np.array(bit_23).reshape(-1, sequence_length + 1) bit_24 = np.array(bit_24).reshape(-1, sequence_length + 1);bit_25 = np.array(bit_25).reshape(-1, sequence_length + 1) bit_26 = np.array(bit_26).reshape(-1, sequence_length + 1);bit_27 = np.array(bit_27).reshape(-1, sequence_length + 1) bit_28 = np.array(bit_28).reshape(-1, sequence_length + 1);bit_29 = np.array(bit_29).reshape(-1, sequence_length + 1) bit_30 = np.array(bit_30).reshape(-1, sequence_length + 1);bit_31 = np.array(bit_31).reshape(-1, sequence_length + 1) bit_32 = np.array(bit_32).reshape(-1, sequence_length + 1);bit_33 = np.array(bit_33).reshape(-1, sequence_length + 1) bit_34 = np.array(bit_34).reshape(-1, sequence_length + 1);bit_35 = np.array(bit_35).reshape(-1, sequence_length + 1) bit_36 = np.array(bit_36).reshape(-1, sequence_length + 1);bit_37 = np.array(bit_37).reshape(-1, sequence_length + 1) bit_38 = np.array(bit_38).reshape(-1, sequence_length + 1);bit_39 = np.array(bit_39).reshape(-1, sequence_length + 1) bit_40 = np.array(bit_40).reshape(-1, sequence_length + 1);bit_41 = np.array(bit_41).reshape(-1, sequence_length + 1) bit_42 = np.array(bit_42).reshape(-1, sequence_length + 1);bit_43 = np.array(bit_43).reshape(-1, sequence_length + 1) bit_44 = np.array(bit_44).reshape(-1, sequence_length + 1);bit_45 = np.array(bit_45).reshape(-1, sequence_length + 1) bit_46 = np.array(bit_46).reshape(-1, sequence_length + 1);bit_47 = np.array(bit_47).reshape(-1, sequence_length + 1) bit_48 = np.array(bit_48).reshape(-1, sequence_length + 1);bit_49 = np.array(bit_49).reshape(-1, sequence_length + 1) bit_50 = np.array(bit_50).reshape(-1, sequence_length + 1);bit_51 = np.array(bit_51).reshape(-1, sequence_length + 1) bit_52 = np.array(bit_52).reshape(-1, sequence_length + 1);bit_53 = np.array(bit_53).reshape(-1, sequence_length + 1) bit_54 = np.array(bit_54).reshape(-1, sequence_length + 1);bit_55 = np.array(bit_55).reshape(-1, sequence_length + 1) bit_56 = np.array(bit_56).reshape(-1, sequence_length + 1);bit_57 = np.array(bit_57).reshape(-1, sequence_length + 1) bit_58 = np.array(bit_58).reshape(-1, sequence_length 
+ 1);bit_59 = np.array(bit_59).reshape(-1, sequence_length + 1) bit_60 = np.array(bit_60).reshape(-1, sequence_length + 1);bit_61 = np.array(bit_61).reshape(-1, sequence_length + 1) bit_62 = np.array(bit_62).reshape(-1, sequence_length + 1);bit_63 = np.array(bit_63).reshape(-1, sequence_length + 1) bit_0_input, bit_0_output = bit_0[:, :-1], bit_0[:, 1:];bit_1_input, bit_1_output = bit_1[:, :-1], bit_1[:, 1:] bit_2_input, bit_2_output = bit_2[:, :-1], bit_2[:, 1:];bit_3_input, bit_3_output = bit_3[:, :-1], bit_3[:, 1:] bit_4_input, bit_4_output = bit_4[:, :-1], bit_4[:, 1:];bit_5_input, bit_5_output = bit_5[:, :-1], bit_5[:, 1:] bit_6_input, bit_6_output = bit_6[:, :-1], bit_6[:, 1:];bit_7_input, bit_7_output = bit_7[:, :-1], bit_7[:, 1:] bit_8_input, bit_8_output = bit_8[:, :-1], bit_8[:, 1:];bit_9_input, bit_9_output = bit_9[:, :-1], bit_9[:, 1:] bit_10_input, bit_10_output = bit_10[:, :-1], bit_10[:, 1:];bit_11_input, bit_11_output = bit_11[:, :-1], bit_11[:, 1:] bit_12_input, bit_12_output = bit_12[:, :-1], bit_12[:, 1:];bit_13_input, bit_13_output = bit_13[:, :-1], bit_13[:, 1:] bit_14_input, bit_14_output = bit_14[:, :-1], bit_14[:, 1:];bit_15_input, bit_15_output = bit_15[:, :-1], bit_15[:, 1:] bit_16_input, bit_16_output = bit_16[:, :-1], bit_16[:, 1:];bit_17_input, bit_17_output = bit_17[:, :-1], bit_17[:, 1:] bit_18_input, bit_18_output = bit_18[:, :-1], bit_18[:, 1:];bit_19_input, bit_19_output = bit_19[:, :-1], bit_19[:, 1:] bit_20_input, bit_20_output = bit_20[:, :-1], bit_20[:, 1:];bit_21_input, bit_21_output = bit_21[:, :-1], bit_21[:, 1:] bit_22_input, bit_22_output = bit_22[:, :-1], bit_22[:, 1:];bit_23_input, bit_23_output = bit_23[:, :-1], bit_23[:, 1:] bit_24_input, bit_24_output = bit_24[:, :-1], bit_24[:, 1:];bit_25_input, bit_25_output = bit_25[:, :-1], bit_25[:, 1:] bit_26_input, bit_26_output = bit_26[:, :-1], bit_26[:, 1:];bit_27_input, bit_27_output = bit_27[:, :-1], bit_27[:, 1:] bit_28_input, bit_28_output = bit_28[:, :-1], bit_28[:, 1:];bit_29_input, bit_29_output = bit_29[:, :-1], bit_29[:, 1:] bit_30_input, bit_30_output = bit_30[:, :-1], bit_30[:, 1:];bit_31_input, bit_31_output = bit_31[:, :-1], bit_31[:, 1:] bit_32_input, bit_32_output = bit_32[:, :-1], bit_32[:, 1:];bit_33_input, bit_33_output = bit_33[:, :-1], bit_33[:, 1:] bit_34_input, bit_34_output = bit_34[:, :-1], bit_34[:, 1:];bit_35_input, bit_35_output = bit_35[:, :-1], bit_35[:, 1:] bit_36_input, bit_36_output = bit_36[:, :-1], bit_36[:, 1:];bit_37_input, bit_37_output = bit_37[:, :-1], bit_37[:, 1:] bit_38_input, bit_38_output = bit_38[:, :-1], bit_38[:, 1:];bit_39_input, bit_39_output = bit_39[:, :-1], bit_39[:, 1:] bit_40_input, bit_40_output = bit_40[:, :-1], bit_40[:, 1:];bit_41_input, bit_41_output = bit_41[:, :-1], bit_41[:, 1:] bit_42_input, bit_42_output = bit_42[:, :-1], bit_42[:, 1:];bit_43_input, bit_43_output = bit_43[:, :-1], bit_43[:, 1:] bit_44_input, bit_44_output = bit_44[:, :-1], bit_44[:, 1:];bit_45_input, bit_45_output = bit_45[:, :-1], bit_45[:, 1:] bit_46_input, bit_46_output = bit_46[:, :-1], bit_46[:, 1:];bit_47_input, bit_47_output = bit_47[:, :-1], bit_47[:, 1:] bit_48_input, bit_48_output = bit_48[:, :-1], bit_48[:, 1:];bit_49_input, bit_49_output = bit_49[:, :-1], bit_49[:, 1:] bit_50_input, bit_50_output = bit_50[:, :-1], bit_50[:, 1:];bit_51_input, bit_51_output = bit_51[:, :-1], bit_51[:, 1:] bit_52_input, bit_52_output = bit_52[:, :-1], bit_52[:, 1:];bit_53_input, bit_53_output = bit_53[:, :-1], bit_53[:, 1:] bit_54_input, bit_54_output = bit_54[:, :-1], 
bit_54[:, 1:];bit_55_input, bit_55_output = bit_55[:, :-1], bit_55[:, 1:] bit_56_input, bit_56_output = bit_56[:, :-1], bit_56[:, 1:];bit_57_input, bit_57_output = bit_57[:, :-1], bit_57[:, 1:] bit_58_input, bit_58_output = bit_58[:, :-1], bit_58[:, 1:];bit_59_input, bit_59_output = bit_59[:, :-1], bit_59[:, 1:] bit_60_input, bit_60_output = bit_60[:, :-1], bit_60[:, 1:];bit_61_input, bit_61_output = bit_61[:, :-1], bit_61[:, 1:] bit_62_input, bit_62_output = bit_62[:, :-1], bit_62[:, 1:];bit_63_input, bit_63_output = bit_63[:, :-1], bit_63[:, 1:] inputs = tf.data.Dataset.from_tensor_slices((bit_0_input,bit_1_input,bit_2_input,bit_3_input,bit_4_input,bit_5_input,bit_6_input, bit_7_input,bit_8_input,bit_9_input,bit_10_input,bit_11_input,bit_12_input,bit_13_input, bit_14_input,bit_15_input,bit_16_input,bit_17_input,bit_18_input,bit_19_input,bit_20_input, bit_21_input,bit_22_input,bit_23_input,bit_24_input,bit_25_input,bit_26_input,bit_27_input, bit_28_input,bit_29_input,bit_30_input,bit_31_input,bit_32_input,bit_33_input,bit_34_input, bit_35_input,bit_36_input,bit_37_input,bit_38_input,bit_39_input,bit_40_input,bit_41_input, bit_42_input,bit_43_input,bit_44_input,bit_45_input,bit_46_input,bit_47_input,bit_48_input, bit_49_input,bit_50_input,bit_51_input,bit_52_input,bit_53_input,bit_54_input,bit_55_input, bit_56_input,bit_57_input,bit_58_input,bit_59_input,bit_60_input,bit_61_input,bit_62_input, bit_63_input)) outputs= tf.data.Dataset.from_tensor_slices((bit_0_output,bit_1_output,bit_2_output,bit_3_output,bit_4_output,bit_5_output,bit_6_output, bit_7_output,bit_8_output,bit_9_output,bit_10_output,bit_11_output,bit_12_output,bit_13_output, bit_14_output,bit_15_output,bit_16_output,bit_17_output,bit_18_output,bit_19_output,bit_20_output, bit_21_output,bit_22_output,bit_23_output,bit_24_output,bit_25_output,bit_26_output,bit_27_output, bit_28_output,bit_29_output,bit_30_output,bit_31_output,bit_32_output,bit_33_output,bit_34_output, bit_35_output,bit_36_output,bit_37_output,bit_38_output,bit_39_output,bit_40_output,bit_41_output, bit_42_output,bit_43_output,bit_44_output,bit_45_output,bit_46_output,bit_47_output,bit_48_output, bit_49_output,bit_50_output,bit_51_output,bit_52_output,bit_53_output,bit_54_output,bit_55_output, bit_56_output,bit_57_output,bit_58_output,bit_59_output,bit_60_output,bit_61_output,bit_62_output, bit_63_output)) train_dataset = tf.data.Dataset.zip((inputs, outputs)).batch(batch_size, drop_remainder=True).shuffle(memory_size) return train_dataset hey! guys. I have been in trouble, the error below was thrown when the model with double inputs predicted. 
Traceback (most recent call last):
  File "practice.py", line 279, in <module>
    action = np.argmax([0.1, 1, 0.2]*agent.get_qs(current_state))
  File "practice.py", line 186, in get_qs
    return self.model.predict(state)[0]
  File "C:\Users\liuzhen\.conda\envs\python37\lib\site-packages\keras\engine\training.py", line 1380, in predict
    x, _, _ = self._standardize_user_data(x)
  ...
  File "C:\Users\liuzhen\.conda\envs\python37\lib\site-packages\keras\engine\training_utils.py", line 30, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'list' object has no attribute 'ndim'
The 'state' there is a list of two nd-arrays, and the model was built as
model = Model(inputs=[input1, input2], outputs=predictions)
I would really appreciate it if anyone is willing to give some tips.
hey! consider trying this:
relu = tf.keras.activations.relu
layers = tf.keras.layers

## Input 1
inputs = layers.Input(shape=(1,))  # first input, for 'data_a'
outputs = layers.Dense(128, activation=relu)(inputs)
model_data_a = tf.keras.Model(inputs, outputs)  # build 'model_data_a' for 'data_a'

## Input 2
inputs = layers.Input(shape=(1,))  # second input, for 'data_b'
outputs = layers.Dense(128, activation=relu)(inputs)
model_data_b = tf.keras.Model(inputs, outputs)  # build 'model_data_b' for 'data_b'

## Combined model: concatenate the two branch outputs
inputs = layers.Concatenate()([model_data_a.output, model_data_b.output])
outputs = layers.Dense(1, activation=relu)(inputs)
model = tf.keras.Model([model_data_a.input, model_data_b.input], outputs)  # both inputs

## Compile and fit the model
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
              loss=tf.keras.losses.mae,
              metrics=tf.keras.metrics.mse)
model.fit([data_a, data_b], labels, epochs=5)
hope this helps!
@MatanSandori I think this method will work only if the inputs are of the same shape?
# Yield two parts: a dict of named inputs and one label.
def generator():
    for index, row in df.iterrows():
        yield (
            {
                'float_input': row['float_col'],
                'int_input': row['int_col'],
                'str_input': row['str_col'],
                'list_input': row['list_col']
            },
            row['int_col']
        )

# Use output_signature to describe the multiple inputs (multiple outputs
# should work the same way, but I didn't try it).
dataset = tf.data.Dataset.from_generator(generator, output_signature=(
    {
        'float_input': tf.TensorSpec(shape=(), dtype=tf.float32),
        'int_input': tf.TensorSpec(shape=(), dtype=tf.int32),
        'str_input': tf.TensorSpec(shape=(), dtype=tf.string),
        'list_input': tf.TensorSpec(shape=(2,), dtype=tf.int32)
    },
    tf.TensorSpec(shape=(), dtype=tf.int32)
))
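A minimal sketch of what the answers above boil down to for the original predict() crash: a multi-input Keras model wants a list with one numpy array per input, each carrying a batch axis. The layer shapes here are made up for illustration, not taken from the poster's network:
import numpy as np
import tensorflow as tf

# Two-input functional model (shapes are assumptions).
in_a = tf.keras.layers.Input(shape=(4,))
in_b = tf.keras.layers.Input(shape=(6,))
merged = tf.keras.layers.Concatenate()([in_a, in_b])
out = tf.keras.layers.Dense(3)(merged)
model = tf.keras.Model(inputs=[in_a, in_b], outputs=out)

# predict() wants a list of arrays, one per input, each with a batch axis;
# passing a bare Python list of 1-D arrays is what triggers the .ndim error.
state_a = np.random.rand(4).astype(np.float32)
state_b = np.random.rand(6).astype(np.float32)
q_values = model.predict([state_a[None, :], state_b[None, :]])[0]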
gharchive/issue
2018-07-11T13:42:33
2025-04-01T04:36:02.739929
{ "authors": [ "DavidPL1", "MatanSandori", "StenkinVlad", "drasmuss", "dstkibbrom", "gabrielibagon", "hhwxxx", "jashshopin", "jiunyen-ching", "keunwoochoi", "kristofgiber", "lgeiger", "mindis", "ricoms", "rmgogogo", "rohan100jain", "srcolinas", "tensorflowbutler", "tinmodeHuang", "was84san" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/20698", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
358354612
TENSORFLOW ISSUES
I've been trying to install TensorFlow for the past few hours and keep getting this error:
>>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<
Traceback (most recent call last):
  File "/anaconda2/lib/python2.7/site-packages/conda/exceptions.py", line 812, in __call__
    return func(*args, **kwargs)
  ...
  File "/anaconda2/lib/python2.7/site-packages/conda/common/compat.py", line 121, in open
    errors=errors, newline=newline, closefd=closefd)
IOError: [Errno 13] Permission denied: '/anaconda2/pkgs/pyspark-2.3.0-py27_0.tar.bz2'

$ /anaconda2/bin/conda install tensorflow
environment variables:
  CIO_TEST=
  CONDA_ROOT=/anaconda2
  PATH=/Library/Frameworks/Python.framework/Versions/3.7/bin:/anaconda2/bin:/Users/asy/Downloads/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/TeX/texbin
  REQUESTS_CA_BUNDLE=
  SSL_CERT_FILE=
  active environment : None
  user config file : /Users/asy/.condarc
  populated config files : /Users/asy/.condarc
  conda version : 4.5.0
  conda-build version : 3.0.27
  python version : 2.7.14.final.0
  base environment : /anaconda2 (writable)
  channel URLs : https://repo.anaconda.com/pkgs/main/osx-64 (plus the matching noarch, free, r, and pro channels)
  package cache : /anaconda2/pkgs, /Users/asy/.conda/pkgs
  envs directories : /anaconda2/envs, /Users/asy/.conda/envs
  platform : osx-64
  user-agent : conda/4.5.0 requests/2.18.4 CPython/2.7.14 Darwin/16.7.0 OSX/10.12.6
  UID:GID : 301705310:285281272
  netrc file : None
  offline mode : False
This is what happens when I go down the conda route. When I go through the pip route, this happens:
Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: u'/private/var/folders/k7/xthqkfr968g8c83fbtdm1jfc8zq_2y/T/pip-unpack-yhcLfC/tensorflow-1.10.1-cp27-cp27m-macosx_10_11_x86_64.whl'
Consider using the --user option or check the permissions.
Please assist.
Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks. Have I written custom code / OS Platform and Distribution / TensorFlow installed from / TensorFlow version / Bazel version / CUDA/cuDNN version / GPU model and memory / Exact command to reproduce / Mobile device
This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. If you think we've misinterpreted a bug, please comment again with a clear explanation, as well as all of the information requested in the issue template. Thanks!
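For readers landing here with the same Errno 13: both failures above are filesystem-permission problems on a shared Anaconda install, not TensorFlow bugs. Two commonly suggested fixes, assuming the paths shown above, are installing into the user site-packages with `pip install --user tensorflow`, or reclaiming ownership of the package cache with `sudo chown -R "$(whoami)" /anaconda2/pkgs` before retrying `conda install tensorflow`.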
gharchive/issue
2018-09-09T06:40:56
2025-04-01T04:36:02.754325
{ "authors": [ "acsy0802", "robieta", "tensorflowbutler" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/22165", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
493660936
Keras code freezes on GPU
Please go to Stack Overflow for help and support: https://stackoverflow.com/questions/tagged/tensorflow If you open a GitHub issue, here is our policy:
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Ubuntu 18.04
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version: 1.14
- Python version: 3.6.3
- CUDA/cuDNN version: CUDA 10.2, cuDNN 7.6.2.24-1+cuda10.0
- GPU model and memory: GeForce GTX 1080 Ti
- Exact command to reproduce: it is a lot of code and data that I am trying to dumb down to something easily reproducible.
Describe the problem
My Keras model training freezes up. By "freeze up" I mean I cannot Ctrl-C out of the code; I can Ctrl-Z and kill. The training never proceeds: it just sits at that point, the Python process goes to 100% and the GPU goes to 100%. The stack trace included below seems to indicate the code is waiting on something.
nvidia-smi (two snapshots taken a minute apart, identical):
Sat Sep 14 12:29:32 2019
NVIDIA-SMI 430.26, Driver Version 430.26, CUDA Version 10.2
GPU 0 GeForce GTX 1080 Ti: 38% fan, 65C, P2, 82W / 250W, 6555MiB / 11177MiB, 100% GPU-Util
Processes: PID 3499, type C, python, 6545MiB
top snapshot:
top - 12:30:48 up 9 days, 15:45, 3 users, load average: 1.04, 1.48, 1.44
PID 3499, user jostheim, 99.7 %CPU, 7.9 %MEM, TIME+ 22:56.33, python
Source code / logs
GDB traces are here: https://pastebin.com/qcpHGadT
@jostheim, hi, can you please provide simple and standalone code so we can reproduce the reported issue from our end? Thanks!
Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
2019-09-14T20:01:20
2025-04-01T04:36:02.763302
{ "authors": [ "jostheim", "oanush" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/32529", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
498396406
Re-emerged Issue #31509 - BaseCollectiveExecutor::StartAbort Out of range: The previous issue described in #31509 was fixed, but I am now experiencing exactly the same issue with all the same setup using the latest nightly build of TF2.0 when using tf.keras.optimizers.Adam @oracle3001 I am not seeing any issue with tf.keras.optimizers.Adam in latest TF 2.0.0-rc2 version.Please, find the gist here. Thanks! I am having the exact same problem using this mock model. I am using tf 2.0.0 release. On windows import matplotlib.pyplot as plt import numpy as np import tensorflow as tf if __name__ == '__main__': x = tf.random.normal((14000, 30, 1)) y = tf.ones_like(x) discriminator = tf.keras.models.Sequential([ tf.keras.layers.LSTM(100, input_shape=(30, 1), return_sequences=True), tf.keras.layers.LSTM(100, recurrent_dropout=0.4, dropout=0.4, return_sequences=True) ]) discriminator.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.001)) dataset = tf.data.Dataset.from_tensor_slices((x, y)) dataset = dataset.batch(64) discriminator.fit(dataset, epochs=2) `` @duysPES I am able to execute the code successfully in colab using TF 2.0.0-rc2 .Please, find the gist here.Thanks! I am also having this message feeding a dataset into a 1D Convnet. Happens on my Mac with tf version 2.0.0-rc2. Not reproducible on Colab. import numpy as np import tensorflow as tf def create_timeseries_element(): # returns a random time series of 100 intervals, each with 3 features, # and a random one-hot array of 5 entries data = np.random.rand(100,3) label = np.eye(5, dtype='int')[np.random.choice(5)] return data, label def data_generator(): d, l = create_timeseries_element() yield (d, l) model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(128, 9, activation='relu', input_shape=(100, 3)), tf.keras.layers.Conv1D(128, 9, activation='relu'), tf.keras.layers.MaxPooling1D(2), tf.keras.layers.Conv1D(256, 5, activation='relu'), tf.keras.layers.Conv1D(256, 5, activation='relu'), tf.keras.layers.GlobalAveragePooling1D(), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(5, activation='softmax')]) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) ds = tf.data.Dataset.from_generator(data_generator, output_types=(tf.float32, tf.int32), output_shapes=(tf.TensorShape([100, 3]), tf.TensorShape([5]))) model.fit(ds.batch(32)) I am having an issue similar to this and tried to run in Collab just to get an un-ending runtime. I asked my question in full on SO here. I had some numpy arrays that were trained in keras in the previous version of tf and now have to rewrite my model. Got way worse accuracy so I am thinking I need to switch to tf.data.Dataset. 
So I did: train_dataset = tf.data.Dataset.from_tensor_slices((X_train_deleted_nans, y_train_no_nans)) train_dataset = train_dataset.shuffle(SHUFFLE_CONST).batch(BATCH_SIZE) model.summary() gave me: BatchDataset shapes: ((None, 2756), (None,)), types: (tf.float64, tf.int64) Model: sequential Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 1379) 3801903 _________________________________________________________________ dropout (Dropout) (None, 1379) 0 _________________________________________________________________ dense_1 (Dense) (None, 1379) 1903020 _________________________________________________________________ dropout_1 (Dropout) (None, 1379) 0 _________________________________________________________________ dense_2 (Dense) (None, 1379) 1903020 _________________________________________________________________ dropout_2 (Dropout) (None, 1379) 0 _________________________________________________________________ dense_3 (Dense) (None, 1379) 1903020 _________________________________________________________________ dropout_3 (Dropout) (None, 1379) 0 _________________________________________________________________ dense_4 (Dense) (None, 1) 1380 ================================================================= Total params: 9,512,343 Trainable params: 9,512,343 Non-trainable params: 0 model.compile(optimizer=adam, loss=bce, metrics=['accuracy']) model.fit(train_dataset, epochs=1000, verbose=0) Once the training starts I get this warning error: 2019-10-04 23:47:56.691434: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] I am having the same issue as above on TF 2.0 release. Is this a bug with tensorflow or is there an issue with the code? It seems everybody who is having this issue is using Windows. I presume that must have something to do with it? I am having the issue on a Mac with the latest version of MacOS I am having the same problem after porting my code from 1.14 to 2.0. I am running on UBUNTU 18.04 (not only a windows problem). It occurs for me during both training and predict. (so not linked to the optimiser). I do NOT get the problem if i hide the GPU. I do get the problem if I expose the GPU. I think I may have found why it is complaining. However, I have no idea how to fix it. While training, we all get the IteratorGetNext Error: Sequence out of range. I noticed that let’s say I have a dataset size of 60,000 with a batch size of 64 that would require floor(60000/64)= 937 to iterate through entire dataset for one epoch. However, when training using .fit(verbose=1) i botice that it attempts to iterate through the dataset 938 (most likely a rounding error because 60000/64=937.5) and thus I get this error. Can someone please confirm this is the case for you as well? Thanks Good idea. I think you can use the take statement on the dataset to limit yourself to the first 64*937 records avoiding any need to round at the end On Mon, 7 Oct 2019, 23:02 duysqubix notifications@github.com wrote: I think I may have found why it is complaining. However, I have no idea how to fix it. While training, we all get the IteratorGetNext Error: Sequence out of range. I noticed that let’s say I have a dataset size of 60,000 with a batch size of 64 that would require floor(60000/64)= 937 to iterate through entire dataset for one epoch. 
However, when training using .fit(verbose=1) i botice that it attempts to iterate through the dataset 938 (most likely a rounding error because 60000/64=937.5) and thus I get this error. Can someone please confirm this is the case for you as well? Thanks — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/tensorflow/tensorflow/issues/32817?email_source=notifications&email_token=ABW6MMFS2UIXVTALDOPPVSLQNOPYDA5CNFSM4I2PHNA2YY3PNVWWK3TUL52HS4DFVREXG43VMVBW63LNMVXHJKTDN5WW2ZLOORPWSZGOEARYYMI#issuecomment-539200561, or mute the thread https://github.com/notifications/unsubscribe-auth/ABW6MMANBQ3NCFAKJLMWHH3QNOPYDANCNFSM4I2PHNAQ . I'm experiencing a similar problem to @duysqubix with my code in that I have a number of samples that doesn't neatly divide by the batch size. @duysqubix code works for me and the error disappears if I repeat the dataset and specify steps_per_epoch. I'm seeing this on Ubuntu 18.04, so definitely not a Windows only problem. I see this issue with both the tensorflow 2 release and the tensorflow 2 RC2. Trying @mtpgva's advice above and using a take a number of samples that are divisible by the batch size I find that I still get the same message, even when using the simplified example provided by @duysqubix: import tensorflow as tf data = tf.random.normal((60000,30,4)) ground_truth = tf.ones((60000,1)) dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).take(512).batch(64) model = tf.keras.models.Sequential([ tf.keras.layers.Dense(1, activation='softmax') ]) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) #predefined model here: input: [?, 30,4] output: [?,1] model.fit(dataset, epochs=5) Epoch 1/5 2019-10-08 09:01:00.212603: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 8/Unknown - 1s 84ms/step - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.443158: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[IteratorGetNext/_16]] 2019-10-08 09:01:00.443241: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 8/8 [==============================] - 1s 85ms/step - loss: 102.0359 - accuracy: 1.0000 Epoch 2/5 1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.502043: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[Shape/_4]] 2019-10-08 09:01:00.502100: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 8/8 [==============================] - 0s 7ms/step - loss: 102.0359 - accuracy: 1.0000 Epoch 3/5 1/8 [==>...........................] 
- ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.544339: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[IteratorGetNext/_16]] 2019-10-08 09:01:00.544373: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 8/8 [==============================] - 0s 5ms/step - loss: 102.0359 - accuracy: 1.0000 Epoch 4/5 1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.587002: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[IteratorGetNext/_16]] 2019-10-08 09:01:00.587044: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 8/8 [==============================] - 0s 5ms/step - loss: 102.0359 - accuracy: 1.0000 Epoch 5/5 1/8 [==>...........................] - ETA: 0s - loss: 102.0359 - accuracy: 1.00002019-10-08 09:01:00.631688: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[Shape/_4]] 2019-10-08 09:01:00.631740: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 8/8 [==============================] - 0s 6ms/step - loss: 102.0359 - accuracy: 1.0000 I also tried using the drop_remainder=True argument on .batch but still get the error message: import tensorflow as tf data = tf.random.normal((60000,30,4)) ground_truth = tf.ones((60000,1)) dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).batch(64, drop_remainder=True) model = tf.keras.models.Sequential([ tf.keras.layers.Dense(1, activation='softmax') ]) model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy']) #predefined model here: input: [?, 30,4] output: [?,1] model.fit(dataset, epochs=5) Epoch 1/5 2019-10-08 09:03:47.431058: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0 937/Unknown - 3s 3ms/step - loss: 102.0359 - accuracy: 1.00002019-10-08 09:03:50.275433: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[IteratorGetNext/_2]] 2019-10-08 09:03:50.275587: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 937/937 [==============================] - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000 Epoch 2/5 919/937 [============================>.] 
I'm experiencing a similar problem to @duysqubix with my code, in that I have a number of samples that doesn't neatly divide by the batch size. @duysqubix's fix works for me: the error disappears if I repeat the dataset and specify steps_per_epoch. I'm seeing this on Ubuntu 18.04, so it is definitely not a Windows-only problem.
I see this issue with both the TensorFlow 2 release and the TensorFlow 2 RC2. Trying @mtpgva's advice above and using take to keep a number of samples that is divisible by the batch size, I find that I still get the same message, even with the simplified example provided by @duysqubix:
import tensorflow as tf

data = tf.random.normal((60000, 30, 4))
ground_truth = tf.ones((60000, 1))
dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).take(512).batch(64)

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(dataset, epochs=5)
Epoch 1/5
8/Unknown - 1s 84ms/step - loss: 102.0359 - accuracy: 1.0000
W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]]
8/8 [==============================] - 1s 85ms/step - loss: 102.0359 - accuracy: 1.0000
(the same pair of Out of range warnings is printed at the end of each of the five epochs)
I also tried using the drop_remainder=True argument on .batch but still get the message:
dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).batch(64, drop_remainder=True)
model.fit(dataset, epochs=5)
Epoch 1/5
937/Unknown - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000
W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]]
937/937 [==============================] - 3s 3ms/step - loss: 102.0359 - accuracy: 1.0000
(again repeated at the end of every epoch)
I think we may need to summon @fchollet
@duysqubix Your suggestion fixed my issue!
@oracle3001 Can you please let us know if the issue still persists? Please close the issue if it was resolved already. Thanks!
Any update?
@oracle3001 Can you please let us know if the issue still persists? Please close the issue if it was resolved already. Thanks!
This fixes my issue; however, surely the final batch size being reduced should not create an issue? Repeating part of the first batch of data should not be the solution, surely?
@oracle3001 Can you please let us know if the issue still persists? Please close the issue if it was resolved already. Thanks!
Yes, it still persists... see all the other posts with the same issue.
Hi, I agree with @BeWe11, the issue is still there. Moreover, if you are using a Dataset.from_generator() pipeline, the number of actual steps can only be computed on the first epoch.
- ETA: 0s - loss: 0.46322019-11-15 18:57:31.342946: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence 219/219 [==============================] - 17s 79ms/step - loss: 0.4632 Same issue with a trivial example : import numpy as np import tensorflow as tf x = np.array([1, 5, 8, 9 ,10, 15,13, 3,-2],np.float32) y = np.array([-2,-5, -7, -12 ,-15, -5, -12,-10,-5],np.float32) taille_lot = 2 num_epochs = 3 dataset = tf.data.Dataset.from_tensor_slices(( x , y )) dataset = dataset.shuffle(5000).batch(taille_lot) model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(1, kernel_initializer='uniform', activation='linear',input_shape=(1,))) model.layers[0].set_weights([np.array([[7.3]],np.float32),np.array([5.5],np.float32)]) model.layers[0](np.array([[3]])) model.compile(optimizer='sgd', loss='mse') model.fit(dataset, epochs=num_epochs, verbose=0) print(model.layers[0].get_weights()) Results : 2019-11-19 22:12:24.480601: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 2019-11-19 22:12:24.517314: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 2019-11-19 22:12:24.551855: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [array([[-0.9485246]], dtype=float32), array([4.253337], dtype=float32)] >>> I think An example is missing in Doc : import numpy as np import tensorflow as tf def prepara_data(x, y, num_epochs, batch_size): dataset = tf.data.Dataset.from_tensor_slices(( x , y )) dataset = dataset.batch(batch_size, drop_remainder=True).repeat(num_epochs) return dataset batch_size = 2 num_epochs = 3 x = np.array([1, 2, 3, 4 ,5, 6, 7, 8, 9],np.float32) y = 2 * x + 1 dataset = prepara_data(x, y, num_epochs, batch_size) iterateur = dataset.__iter__() for i in range(0,1000): try: x_batch , y_batch = iterateur.get_next() print("iter ", i,'->',x_batch) except: print ("Exception Iteration ", i," max = ",num_epochs * x.shape[0] // batch_size ) break dataset = prepara_data(x, y, num_epochs, batch_size) model = tf.keras.Sequential() model.add(tf.keras.layers.Dense(1, kernel_initializer='uniform', activation='linear',input_shape=(1,))) model.layers[0].set_weights([np.array([[7.3]],np.float32),np.array([5.5],np.float32)]) model.layers[0](np.array([[3]])) model.compile(optimizer='sgd', loss='mse') model.fit(dataset, epochs=num_epochs ,steps_per_epoch=x.shape[0] // batch_size, verbose=0) print(model.layers[0].get_weights()) you must set drop_remainder to True in batch method argument and set steps_per_epoch=x.shape[0] // batch_size in fit method @sharkdtu import numpy as np import tensorflow as tf if __name__ == '__main__': x = tf.random.normal((14000, 30, 1)) y = tf.ones_like(x) discriminator = tf.keras.models.Sequential([ tf.keras.layers.LSTM(100, input_shape=(30, 1), return_sequences=True), tf.keras.layers.LSTM(100, recurrent_dropout=0.4, dropout=0.4, return_sequences=True) ]) discriminator.compile(loss='binary_crossentropy', optimizer=tf.keras.optimizers.Adam(lr=0.001)) dataset = tf.data.Dataset.from_tensor_slices((x, y)) batch_size = 64 num_epoch = 2 dataset = dataset.batch(batch_size, drop_remainder=True).repeat(num_epoch) discriminator.fit(dataset, epochs=num_epoch, steps_per_epoch=x.shape[0] // 
batch_size) @LaurentBerger thx, but i don't think it is a good idea for forcing users to set the steps_per_epoch. @LaurentBerger sometimes we even don't know the full size of the data. @sharkdtu I changed code and use tf.data.experimental.cardinality issue here @npuichigo Why? (of course I must shuffle data before fit but it's only an example) 将全量数据集切分成多个batch,对模型进行分批迭代训练,是常规做法,先理解一下两个概念: [batch_size:每个batch的数据量大小] [batches:整个数据集按照batch_size切分后的batch数量] keras的fit函数的部分参数如下: model.fit(self, x=None, y=None,epochs=1,steps_per_epoch=None) [epoch:迭代次数,一次迭代可以粗糙的理解成使用一个batch的训练数据对model进行训练。 The number of iterations, one iteration can be roughly understood as using a batch of training data to train the model.] [steps_per_epoch:每次迭代被model消费的batches,可以粗糙的理解为将固定数量的batch合并成一个更大的bigger_batch,然后用bigger_batch对model进行训练,训练结束即为完成一个epoch。 Each iteration of the batches consumed by the model. It can be roughly seen as a combination of a fixed number of batches into a bigger batch named bigger_batch, and then the model is trained with the bigger_batch. The end of the training is the completion of an epoch.] 个人看法:这是model训练过程中训练数据不足的问题,可以将model的训练过程理解为生产者与消费者的关系。tensorflow2.0的dataset集成了generator的功能,可直接作为吐出训练数据的生成器. dataset不断提供数据,model训练过程不断消费数据,一旦dataset没有数据提供并且model的训练过程还没结束,就会报错,所以需要确保epochs*steps_per_epoch <= dataset所能提供的batches。 你可以根据经验确定batch_size和steps_per_epoch,然后对全量数据集使用repeat()来避免model训练过程中数据不足的问题。如果觉得没有必要对batch再做处理,可令steps_per_epoch=1。 My View:This is the problem of insufficient training data in the model training process, which can be seen as the relationship between producers and consumers. Tensorflow2.0 Dataset integrates the function of generator and can be used to spit out training data directly.Dataset provides data continuously, and model training process consumes data continuously. Once dataset does not provide data and model training process is not finished, an error will be reported. Therefore, it is necessary to ensure that epochs*steps_per_epoch is less than the size of batches provided by dataset.You can determine batch_size and steps_per_epoch based on experience, and then use repeat() for Dataset to avoid data shortage during model training.If you don't think it's necessary to deal with the batch again, you can make steps_per_epoch = 1. 
验证过程: train_data = tf.random.normal((5,4))#5个4维特征向量 label = tf.ones((5,1))#5个类别标签 dataset = tf.data.Dataset.from_tensor_slices((data, label)) dataset <TensorSliceDataset shapes: ((4,), (1,)), types: (tf.float32, tf.float32)> 全量数据集dataset按照batch_size进行切分,如果最后一个batch的数量不足batch_size,根据drop_remainder判断是否将其丢弃。在上述例子中,train_data和label构成的Dataset包含5个用来训练model的张量,暂且称之为train张量,train张量又包含2个张量:1个4维特征向量和1个label。 dataset = dataset.batch(batch_size, drop_remainder=True).repeat(2) dataset <RepeatDataset shapes: ((2, 4), (2, 1)), types: (tf.float32, tf.float32)> 调用batch()进行切分,batch_size=2,drop_remainder=True,可知batches==2,每个batch包含2个train张量,最后一个batch的大小为1,丢弃;repeat(2)后batches==4。 model.fit(dataset, epochs=4, steps_per_epoch=1) #fit函数中的x和y参数代表特征向量和类别,可直接用Dataset类型的变量赋值 dataset有4个batch,batch_size == 2,每次使用1个batch的数据训练model(bigger_batch_size == batch_size x 1 == 2),可以迭代4次 model.fit(dataset, epochs=1, steps_per_epoch=4) dataset有4个batch,batch_size == 2,每次使用4个batch的数据训练model(bigger_batch_size == batch_size x 4 == 6),可以迭代1次 完整的验证代码如下: import tensorflow as tf tf.__version__ def build_model(): model = tf.keras.Sequential([ tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), tf.keras.layers.Dense(10, activation=tf.nn.relu), tf.keras.layers.Dense(3, activation='softmax')]) model.compile( optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'] ) return model def check_data_batch_size(dataset): #iterator = iter(dataset) iterator = dataset.__iter__() i=0 try: while i<100: #data = next(iterator) data = iterator.get_next() i += 1 print('id:',i) print('data:',data) except Exception as e: print(repr(e)) return i batch_size = 2 data = tf.random.normal((5,4)) label = tf.ones((5,1)) dataset = tf.data.Dataset.from_tensor_slices((data, label)) dataset = dataset.batch(2, drop_remainder=True).repeat(2) batches = check_data_batch_size(dataset) print('batches:',batches) model = build_model() model.fit(dataset, epochs=2, steps_per_epoch=2) 将全量数据集切分成多个batch,对模型进行分批迭代训练,是常规做法,先理解一下两个概念: * [batch_size:每个batch的数据量大小] * [batches:整个数据集按照batch_size切分后的batch数量] keras的fit函数的部分参数如下: model.fit(self, x=None, y=None,epochs=1,steps_per_epoch=None) * [epoch:迭代次数,一次迭代可以粗糙的理解成使用一个batch的训练数据对model进行训练。 The number of iterations, one iteration can be roughly understood as using a batch of training data to train the model.] * [steps_per_epoch:每次迭代被model消费的batches,可以粗糙的理解为将固定数量的batch合并成一个更大的bigger_batch,然后用bigger_batch对model进行训练,训练结束即为完成一个epoch。 Each iteration of the batches consumed by the model. It can be roughly seen as a combination of a fixed number of batches into a bigger batch named bigger_batch, and then the model is trained with the bigger_batch. The end of the training is the completion of an epoch.] * 个人看法:这是model训练过程中训练数据不足的问题,可以将model的训练过程理解为生产者与消费者的关系。tensorflow2.0的dataset集成了generator的功能,可直接作为吐出训练数据的生成器. dataset不断提供数据,model训练过程不断消费数据,一旦dataset没有数据提供并且model的训练过程还没结束,就会报错,所以需要确保epochs*steps_per_epoch <= dataset所能提供的batches。 你可以根据经验确定batch_size和steps_per_epoch,然后对全量数据集使用repeat()来避免model训练过程中数据不足的问题。如果觉得没有必要对batch再做处理,可令steps_per_epoch=1。 * My View:This is the problem of insufficient training data in the model training process, which can be seen as the relationship between producers and consumers. Tensorflow2.0 Dataset integrates the function of generator and can be used to spit out training data directly.Dataset provides data continuously, and model training process consumes data continuously. Once dataset does not provide data and model training process is not finished, an error will be reported. 
Therefore, you have to ensure that epochs * steps_per_epoch is no larger than the number of batches the dataset can provide. You can pick batch_size and steps_per_epoch from experience and then call repeat() on the full dataset to avoid running short of data during training. If you don't think the batches need further handling, you can set steps_per_epoch = 1.
Verification:
train_data = tf.random.normal((5, 4))  # five 4-dimensional feature vectors
label = tf.ones((5, 1))                # five class labels
dataset = tf.data.Dataset.from_tensor_slices((data, label))
dataset
<TensorSliceDataset shapes: ((4,), (1,)), types: (tf.float32, tf.float32)>
The full dataset is split by batch_size; if the last batch holds fewer than batch_size samples, drop_remainder decides whether it is dropped. In the example above, the Dataset built from train_data and label contains five training tensors, each holding one 4-dimensional feature vector and one label.
dataset = dataset.batch(batch_size, drop_remainder=True).repeat(2)
dataset
<RepeatDataset shapes: ((2, 4), (2, 1)), types: (tf.float32, tf.float32)>
After batch() with batch_size=2 and drop_remainder=True, batches == 2 (each batch holds two training tensors, and the final batch of size 1 is dropped); after repeat(2), batches == 4.
model.fit(dataset, epochs=4, steps_per_epoch=1)  # the x and y arguments of fit() accept a Dataset directly
The dataset has 4 batches with batch_size == 2; consuming 1 batch per epoch (bigger_batch_size == 2) allows 4 epochs.
model.fit(dataset, epochs=1, steps_per_epoch=4)
The dataset has 4 batches; consuming 4 batches per epoch (bigger_batch_size == 8) allows 1 epoch.
The full verification code:
import tensorflow as tf

def build_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)),
        tf.keras.layers.Dense(10, activation=tf.nn.relu),
        tf.keras.layers.Dense(3, activation='softmax')])
    model.compile(optimizer='Adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

def check_data_batch_size(dataset):
    iterator = dataset.__iter__()
    i = 0
    try:
        while i < 100:
            data = iterator.get_next()
            i += 1
            print('id:', i)
            print('data:', data)
    except Exception as e:
        print(repr(e))
    return i

batch_size = 2
data = tf.random.normal((5, 4))
label = tf.ones((5, 1))
dataset = tf.data.Dataset.from_tensor_slices((data, label))
dataset = dataset.batch(2, drop_remainder=True).repeat(2)
batches = check_data_batch_size(dataset)
print('batches:', batches)
model = build_model()
model.fit(dataset, epochs=2, steps_per_epoch=2)
Is this reply related to the original question?
I just mentioned my understanding of the two fit() parameters, epochs and steps_per_epoch, in a Keras model, why they cause this error, and my way of handling it. I'm sorry to confuse you.
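A sketch tying together the workaround and the objection above: repeat() keeps the pipeline from running dry, while tf.data.experimental.cardinality asks the pipeline itself for the step count instead of requiring the user to know the dataset size. The tensors and the compiled model are placeholders from the earlier examples; for from_generator pipelines the cardinality is unknown and you would still have to count one pass yourself:
import tensorflow as tf

batch_size = 64
data = tf.random.normal((60000, 30, 4))
labels = tf.ones((60000, 1))

batched = (tf.data.Dataset.from_tensor_slices((data, labels))
           .batch(batch_size, drop_remainder=True))

# Ask the pipeline how many batches one pass contains; the sentinel
# UNKNOWN_CARDINALITY is returned for pipelines such as from_generator.
steps = int(tf.data.experimental.cardinality(batched).numpy())
assert steps != tf.data.experimental.UNKNOWN_CARDINALITY

# repeat() with no argument loops forever; steps_per_epoch bounds each epoch.
model.fit(batched.repeat(), epochs=5, steps_per_epoch=steps)  # model compiled as above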
@duysqubix Brilliant, your suggestion fixed my issue. And for other people facing this issue, just for the record:
# loss, acc = net.evaluate(tst_set)        # do not use this with a repeating dataset
loss, acc = net.evaluate(tst_set, steps=3)  # e.g., 3
I got the same problem with the TensorFlow 2 GPU build on CentOS. Does anyone know how to fix this?
This issue should be fixed by the pull request for issue https://github.com/tensorflow/tensorflow/issues/35314 . The warning was actually propagated up from C++, and Python was passing it forward. But there is really no problem here: no issues with training or anything, according to that issue. The solution was that Google lowered the logging level to ignore these warnings. The change is in the TF 2.0 nightly and will be widely available in the next release, but you can use TF nightly to get the benefit now. So this issue can probably be closed.
TensorFlow 2.1 (stable) has been released; does anyone know if this warning is fixed in the new version?
I have the same problem when practicing the code from the official tutorial. I am using Catalina 10.15, Python 3.7.6, TF 2.1.0.
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples_batch[:3])

model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(train_data.shuffle(10000).batch(512),
                    epochs=20,
                    validation_data=validation_data.batch(512),
                    verbose=1)

results = model.evaluate(test_data.batch(512), verbose=2)
for name, value in zip(model.metrics_names, results):
    print("%s: %.3f" % (name, value))
29/30 [============================>.] - ETA: 0s - loss: 0.2012 - accuracy: 0.9289
2020-01-13 13:53:00.393082: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]]
I also have this warning in TF 2.1.0. model.predict(ds.batch(1)) works but gives this warning:
2020-03-11 17:04:24.760612: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]]
I have a similar error but can't seem to find it reported anywhere else. Here is the traceback:
Train on 2737611 samples, validate on 2737612 samples
Epoch 1/123 ... Epoch 6/123
2020-08-20 22:56:33.810266: W tensorflow/core/common_runtime/base_collective_executor.cc:217] BaseCollectiveExecutor::StartAbort Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
  [[{{node metrics/tp/assert_greater_equal/Assert/AssertGuard/else/_1/Assert}}]]
WARNING:tensorflow:Can save best model only with val_precision available, skipping.
Traceback (most recent call last):
  File "tf_working.py", line 399, in <module>
    action = keras_auto_tuner(training_df, '1week_target_class')
  File "tf_working.py", line 382, in keras_auto_tuner
    validation_data=(val_features, y_val))
  ...  (kerastuner and tensorflow framework frames)
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
  (0) Invalid argument: assertion failed: [predictions must be >= 0] [Condition x >= y did not hold element-wise:] [x (sequential/dense_10/Sigmoid:0) = ] [[nan][nan][nan]...] [y (metrics/tp/Cast_2/x:0) = ] [0]
  (1) Invalid argument: same assertion failure
0 successful operations. 0 derived errors ignored.
[Op:__inference_distributed_function_222355] Function call stack: distributed_function -> distributed_function Can anyone confirm what the result of this behavior is? I'm confused whether its a logging error, or whether the final batch does not get train/evaluated. For example, imagine I had 100 samples with a batch size of 52. Would I be training on a batch of 50 and 48 (expected behavior), or would I train on 50 and then just fail to fill the next batch and move to the next epoch? This is especially scary in a validation batch and I would be terrified to find that I have a variable validation set (especially if you shuffle!). There is alot of discussion in many spots, but no clear indication of the significance of this error. Some would have you believe it is just a warning. I am on tensorflow==2.1.0. 我想我可能已经找到了为什么要抱怨。但是,我不知道如何解决它。在训练过程中,我们都会收到IteratorGetNext错误:序列超出范围。 我注意到,假设我的数据集大小为60,000,而批处理大小为64,则需要floor(60000/64)= 937才能遍历整个数据集一个时期。但是,当使用.fit(verbose = 1)进行训练时,我注意到它尝试遍历数据集938(很可能是舍入错误,因为60000/64 = 937.5),因此我得到了这个错误。有人可以请您确认这种情况吗?谢谢 编辑: 因此,在构建tf.data.Dataset时,我找到了一种解决方法,请确保添加.repeat()方法,因为程序会抱怨您用完了数据,并且在使用.fit()时 ~添加以下内容~: 这是一个完整的示例,可以正常工作。 这将导致错误: 数据 = TF。随机的。正常((60000,30,4)) ground_truth = TF。那些((60000,1)) 的数据集 = TF。数据。数据集。from_tensor_slices((data,ground_truth))。批(64) #predefined模型在这里:输入:[?,30,4]输出:[?,1] 模型。适合(资料集,历元= 5) ''' 938 /未知-16秒17毫秒/步-丢失:0.02172019-10-07 14:49:49.928619:W tensorflow / core / common_runtime / base_collective_executor.cc:216] BaseCollectiveExecutor :: StartAbort超出范围:序列结束 [ [{{node IteratorGetNext}}]] [[Shape / _2]] 2019-10-07 14:49:49.928619:W tensorflow / core / common_runtime / base_collective_executor.cc:216] BaseCollectiveExecutor :: StartAbort超出范围:结束于序列 [[{{node IteratorGetNext}}]] 938/938 [=============================]-16s 17ms /步进-亏损:0.0217 时代2/5 935/938 [===========================>。]-ETA:0秒-损失:2.2229e-062019-10-07 14:49:59.722216:W tensorflow / core / common_runtime / base_collective_executor.cc:216] BaseCollectiveExecutor :: StartAbort超出范围:序列结束 [[{{node IteratorGetNext}}]] 2019-10-07 14:49:59.722218 :W tensorflow / core / common_runtime / base_collective_executor.cc:216] BaseCollectiveExecutor :: StartAbort超出范围:序列结束 [[{{node IteratorGetNext}}] [[Shape / _2]] ''' 这是变通办法。 batch_size = 64 数据 = tf。随机的。正常((60000,30,4)) ground_truth = TF。那些((60000,1)) 的数据集 = TF。数据。数据集。from_tensor_slices((data,ground_truth))。批处理(batch_size)。重复() #predefined模型在这里:输入:[?,30,4]输出:[?,1] 模型。配合(数据集,历元= 5,steps_per_epoch =数据。形状[ 0 ] //的batch_size) ''' 937/937 [=============================]-15s 16ms / step-损失:0.0135 Epoch 2 / 5 937/937 [==============================]-10s 10ms / step-损耗:1.4460e-05 时代3 / 937/937 [=============================]-10s 11ms / step-损失:4.3097e-06 时期4/5 937/937 [=============================]-10s 10ms / step-损耗:1.8212e-06 时代5/5 ''' `` I think I may have found why it is complaining. However, I have no idea how to fix it. While training, we all get the IteratorGetNext Error: Sequence out of range. I noticed that let’s say I have a dataset size of 60,000 with a batch size of 64 that would require floor(60000/64)= 937 to iterate through the entire dataset for one epoch. However, when training using .fit(verbose=1) I noticed that it attempts to iterate through the dataset 938 (most likely a rounding error because 60000/64=937.5) and thus I get this error. Can someone please confirm this is the case for you as well? 
Thanks Edit: So I found a way around this when building the tf.data.Dataset, make sure to add the .repeat() method because the program will complain that you ran out of data, and when using .fit() ~add the following~: Here is a full example that got it working. This will cause the error: data = tf.random.normal((60000,30,4)) ground_truth = tf.ones((60000,1)) dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).batch(64) #predefined model here: input: [?, 30,4] output: [?,1] model.fit(dataset, epochs=5) ''' 938/Unknown - 16s 17ms/step - loss: 0.02172019-10-07 14:49:49.928619: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[Shape/_2]] 2019-10-07 14:49:49.928619: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 938/938 [==============================] - 16s 17ms/step - loss: 0.0217 Epoch 2/5 935/938 [============================>.] - ETA: 0s - loss: 2.2229e-062019-10-07 14:49:59.722216: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] 2019-10-07 14:49:59.722218: W tensorflow/core/common_runtime/base_collective_executor.cc:216] BaseCollectiveExecutor::StartAbort Out of range: End of sequence [[{{node IteratorGetNext}}]] [[Shape/_2]] ''' This is the work around. batch_size = 64 data = tf.random.normal((60000,30,4)) ground_truth = tf.ones((60000,1)) dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).batch(batch_size).repeat() #predefined model here: input: [?, 30,4] output: [?,1] model.fit(dataset, epochs=5, steps_per_epoch=data.shape[0]//batch_size) ''' 937/937 [==============================] - 15s 16ms/step - loss: 0.0135 Epoch 2/5 937/937 [==============================] - 10s 10ms/step - loss: 1.4460e-05 Epoch 3/5 937/937 [==============================] - 10s 11ms/step - loss: 4.3097e-06 Epoch 4/5 937/937 [==============================] - 10s 10ms/step - loss: 1.8212e-06 Epoch 5/5 ''' `` hi,I am a freshman in DL from CHina ,I meet the same error like you.Through search the interent,I found the answer:you need add the 'repeat()',but rember that don't input function parameter,then you need to add "step_per_epoch" in fit(),and it's value is "x_train//batchszie''.it works in my project,I hope it can help you to solve your problem .my English is poor,don't mind! 
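As a further illustration of the same point (my own sketch, not from the posters above; assumes TF 2.x), dropping the final partial batch avoids the fractional step count entirely, with no repeat() or steps_per_epoch needed:

import tensorflow as tf

data = tf.random.normal((60000, 30, 4))
ground_truth = tf.ones((60000, 1))
# drop_remainder=True discards the last partial batch, so every epoch has a
# whole number of steps and the iterator never runs past the end of the data.
dataset = tf.data.Dataset.from_tensor_slices((data, ground_truth)).batch(64, drop_remainder=True)
# model.fit(dataset, epochs=5)  # no steps_per_epoch needed

Note that on a finite (non-repeating) dataset the "End of sequence" message is only a log line, not a failure, and later TF releases lowered its logging level as noted earlier in this thread.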
I tried to run on Colab with TF v2.5 and faced a different error; please find the gist here. Thanks!
gharchive/issue
2019-09-25T16:31:29
2025-04-01T04:36:02.850647
{ "authors": [ "00krishna", "600DZY", "AhmUgEk", "AndreaRigoni", "Andy1997-WQ", "CorbinXL", "LaurentBerger", "Xiaohui-Z", "bw4sz", "duysPES", "duysqubix", "ericvoots", "eustomaqua", "ismael-elatifi", "juliangall", "mtpgva", "npuichigo", "oracle3001", "raceee", "ravikyram", "samueljackson92", "sharkdtu", "sushreebarsa", "zhulingchen" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/32817", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
512802317
add a new class for implementing Early Stopping
Please make sure that this is a feature request. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:feature_template

System information
TensorFlow version (you are using): 2.0, but this is for Keras.
Are you willing to contribute it (Yes/No): yes. I have already implemented it and now I am using it.

Describe the feature and the current behavior/state.
I want to add a new class for implementing Early Stopping, based on three metrics from the paper "Early Stopping - but when?" (Lutz Prechelt, University of Karlsruhe).

Generalization Loss (GL). GL measures how much the current validation loss exceeds the lowest validation loss seen so far. Training stops as soon as GL exceeds a certain threshold.
Progress Quotient (PQ). It considers a training strip of length k. Progress measures how much larger the average training error during the strip is than the minimum training error during the strip. The quotient is GL divided by Progress, and training stops as soon as PQ exceeds a certain threshold.
UP. This one is simple: training stops when the generalization error has increased in s successive strips.

Combining these metrics, there are 5 modes for doing early stopping: 1, 2, 3, 1+3, 2+3. Currently the EarlyStopping class of Keras can only stop training based on patience and delta, whereas these are more advanced and useful early stopping methods.

Will this change the current API? How? No. Only a new class will be added.

Who will benefit from this feature? Basically everyone using Keras.

Any other info. This idea comes from the paper above; I just implemented it. For further info, please refer to the paper. early stopping - but when.pdf

@wubowen416, Sorry for the delayed response. In your comment you stated:

Are you willing to contribute it (Yes/No): yes. I have already implemented it and now I am using it.

Since you have already implemented it and are using it, can you please submit a PR so that the respective engineers can review it? Thanks!
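As an illustration of the GL criterion described above, here is a minimal sketch as a custom Keras callback (my own hypothetical implementation, not the author's PR; it assumes a validation set so that val_loss appears in logs):

import tensorflow as tf

class GLEarlyStopping(tf.keras.callbacks.Callback):
    # GL(t) = 100 * (val_loss(t) / best_val_loss_so_far - 1), per Prechelt's paper.
    def __init__(self, threshold=5.0):
        super().__init__()
        self.threshold = threshold
        self.best = float('inf')

    def on_epoch_end(self, epoch, logs=None):
        val_loss = (logs or {}).get('val_loss')
        if val_loss is None:
            return
        self.best = min(self.best, val_loss)
        gl = 100.0 * (val_loss / self.best - 1.0)
        if gl > self.threshold:
            self.model.stop_training = True

# usage: model.fit(x, y, validation_split=0.2, callbacks=[GLEarlyStopping(threshold=5.0)])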
gharchive/issue
2019-10-26T07:21:25
2025-04-01T04:36:02.860913
{ "authors": [ "rmothukuru", "wubowen416" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/33739", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
642918258
Coral edge TPU Classification coordinates
Hi, I am running my object classification on a Raspberry Pi 4 Model B with a Coral Edge TPU. I am using this command to classify the image:

model.classify_with_image(frame, threshold=args["confidence"])

It works perfectly, but it does not give me coordinates the way model.detect_with_image() does. Is there any way I can get the coordinates?

From the official documentation:

detection: detect_with_image(img, threshold=0.1, top_k=3, keep_aspect_ratio=False, relative_coord=True, resample=0)
classification: classify_with_image(img, threshold=0.1, top_k=3, resample=0)

An image classification model only predicts the probability of the image representing a particular class, so you can expect the output to be an array of probabilities between 0 and 1 with no location information; to get coordinates you need a detection model and detect_with_image().
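For illustration, a minimal sketch of getting coordinates with a detection model under the legacy edgetpu Python API whose signature is quoted above (the model path, input file, and result attributes are my assumptions about that older API, so treat this as a sketch):

from edgetpu.detection.engine import DetectionEngine
from PIL import Image

engine = DetectionEngine('ssd_mobilenet_v2_coco_quant_edgetpu.tflite')  # hypothetical model path
img = Image.open('frame.jpg')  # hypothetical input frame

# Unlike classify_with_image, detect_with_image returns candidates with boxes.
for obj in engine.detect_with_image(img, threshold=0.4, keep_aspect_ratio=True,
                                    relative_coord=False, top_k=10):
    print(obj.label_id, obj.score, obj.bounding_box)  # bounding_box ~ [[x1, y1], [x2, y2]]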
gharchive/issue
2020-06-22T09:41:58
2025-04-01T04:36:02.863193
{ "authors": [ "MevadaRavikumar", "ymodak" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/40663", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
791784468
Loading model from GCS takes longer than copying it to local and then loading the model
System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): x86_64 GNU/Linux
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary): binary
TensorFlow version (use command below): 2.3
Python version: 3.7

Describe the current behavior

new_model = tf.keras.models.load_model(
    'gs://benchmarking_test/bert_en_uncased_L-12_H-768_A-12_2'
)

takes about 30 min or so, whereas the following works in less than a minute:

!gsutil cp -r gs://benchmarking_test/bert_en_uncased_L-12_H-768_A-12_2 local_path
new_model = tf.keras.models.load_model(
    'local_path'
)

Describe the expected behavior
Both should take a similar time.

@sumitbinnani Can you please share a colab link or simple standalone code with supporting files to reproduce the issue in our environment? It helps us in localizing the issue faster. Also, can you try with the latest stable version TF 2.4 or the nightly version and see if you are facing the same issue? There are a lot of performance improvements in the latest versions of TF. Thanks!
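As an illustration of the copy-then-load workaround without shelling out to gsutil, here is a minimal sketch using tf.io.gfile (my own code, not from the report; the local path is hypothetical):

import os
import tensorflow as tf

def copy_dir_from_gcs(gcs_dir, local_dir):
    # Mirror a SavedModel directory from GCS onto local disk.
    for dirpath, _, filenames in tf.io.gfile.walk(gcs_dir):
        rel = os.path.relpath(dirpath, gcs_dir)
        dest = local_dir if rel == '.' else os.path.join(local_dir, rel)
        tf.io.gfile.makedirs(dest)
        for fname in filenames:
            tf.io.gfile.copy(os.path.join(dirpath, fname),
                             os.path.join(dest, fname), overwrite=True)

copy_dir_from_gcs('gs://benchmarking_test/bert_en_uncased_L-12_H-768_A-12_2',
                  '/tmp/local_model')  # local path is hypothetical
new_model = tf.keras.models.load_model('/tmp/local_model')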
gharchive/issue
2021-01-22T07:52:01
2025-04-01T04:36:02.868834
{ "authors": [ "ravikyram", "sumitbinnani" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/46597", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
793094265
Keras model could save to h5 but not SavedModel, call() missing argument
I am using TF 2.2; the model is a Keras model written with tf.keras.engine.training_v1.Model. I can save the model to h5 using model.save(save_format='h5'), but if I save to SavedModel with just model.save(), the error is:

TypeError: call() missing 1 required positional argument: 'state'

How can that be? The big question here is: why does a Keras model that works well and saves to .h5 hit this error when saved to SavedModel?

Full stack:

Traceback (most recent call last):
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/contextlib.py", line 99, in __exit__
    self.gen.throw(type, value, traceback)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/backend.py", line 423, in learning_phase_scope
    yield
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save.py", line 78, in save
    save_lib.save(model, filepath, signatures, options)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 951, in save
    obj, export_dir, signatures, options, meta_graph_def)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 1008, in _build_meta_graph
    checkpoint_graph_view)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/saved_model/signature_serialization.py", line 75, in find_function_to_export
    functions = saveable_view.list_functions(saveable_view.root)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/saved_model/save.py", line 143, in list_functions
    self._serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1656, in _list_functions_for_serialization
    Model, self)._list_functions_for_serialization(serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 2439, in _list_functions_for_serialization
    .list_functions_for_serialization(serialization_cache))
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/base_serialization.py", line 87, in list_functions_for_serialization
    fns = self.functions_to_serialize(serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 77, in functions_to_serialize
    serialization_cache).functions_to_serialize)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
    serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/model_serialization.py", line 53, in _get_serialized_attributes_internal
    serialization_cache))
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
    functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 153, in wrap_layer_functions
    original_fns = _replace_child_layer_functions(layer, serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 272, in _replace_child_layer_functions
    serialization_cache).functions)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
    serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
    functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 153, in wrap_layer_functions
    original_fns = _replace_child_layer_functions(layer, serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 272, in _replace_child_layer_functions
    serialization_cache).functions)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 92, in _get_serialized_attributes
    serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/layer_serialization.py", line 101, in _get_serialized_attributes_internal
    functions = save_impl.wrap_layer_functions(self.obj, serialization_cache)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 191, in wrap_layer_functions
    fn.get_concrete_function()
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 546, in get_concrete_function
    self.call_collection.add_trace(*args, **kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 421, in add_trace
    fn.get_concrete_function(*args, **kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 547, in get_concrete_function
    return super(LayerCall, self).get_concrete_function(*args, **kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 959, in get_concrete_function
    concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 865, in _get_concrete_function_garbage_collected
    self._initialize(args, kwargs, add_initializers_to=initializers)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 506, in _initialize
    *args, **kwds))
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2446, in _get_concrete_function_internal_garbage_collected
    graph_function, _, _ = self._maybe_define_function(args, kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2777, in _maybe_define_function
    graph_function = self._create_graph_function(args, kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2667, in _create_graph_function
    capture_by_value=self._capture_by_value),
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 981, in func_graph_from_py_func
    func_outputs = python_func(*func_args, **func_kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 441, in wrapped_fn
    return weak_wrapped_fn().__wrapped__(*args, **kwds)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 524, in wrapper
    ret = method(*args, **kwargs)
  File "/home/litchy/anaconda3/envs/rec/lib/python3.6/site-packages/tensorflow/python/keras/saving/saved_model/save_impl.py", line 566, in call_and_return_conditional_losses
    return layer_call(inputs, *args, **kwargs), layer.get_losses_for(inputs)
TypeError: call() missing 1 required positional argument: 'state'

The model is not short, so I am not posting it. If you need it, I will post it.

@Litchilitchy Please share a simple indented standalone code for us to replicate the issue faced.

I am using the code from https://github.com/shenweichen/DeepCTR (the deepctr package):

import numpy as np
import tensorflow as tf
from deepctr.feature_column import SparseFeat, VarLenSparseFeat, DenseFeat, get_feature_names
from deepctr.models import DIEN

def get_xy_fd(use_neg=False, hash_flag=False, get_test=False):
    feature_columns = [SparseFeat('user', 3, embedding_dim=10, use_hash=hash_flag),
                       SparseFeat('gender', 2, embedding_dim=4, use_hash=hash_flag),
                       SparseFeat('item_id', 3 + 1, embedding_dim=8, use_hash=hash_flag),
                       SparseFeat('cate_id', 2 + 1, embedding_dim=4, use_hash=hash_flag),
                       DenseFeat('pay_score', 1)]
    feature_columns += [
        VarLenSparseFeat(SparseFeat('hist_item_id', vocabulary_size=3 + 1, embedding_dim=8, embedding_name='item_id'),
                         maxlen=4, length_name="seq_length"),
        VarLenSparseFeat(SparseFeat('hist_cate_id', 2 + 1, embedding_dim=4, embedding_name='cate_id'),
                         maxlen=4, length_name="seq_length")]
    behavior_feature_list = ["item_id", "cate_id"]
    uid = np.array([0, 1, 2])
    ugender = np.array([0, 1, 0])
    iid = np.array([1, 2, 3])  # 0 is mask value
    cate_id = np.array([1, 2, 2])  # 0 is mask value
    score = np.array([0.1, 0.2, 0.3])
    hist_iid = np.array([[1, 2, 3, 0], [1, 2, 3, 0], [1, 2, 0, 0]])
    hist_cate_id = np.array([[1, 2, 2, 0], [1, 2, 2, 0], [1, 2, 0, 0]])
    behavior_length = np.array([3, 3, 2])
    feature_dict = {'user': uid, 'gender': ugender, 'item_id': iid, 'cate_id': cate_id,
                    'hist_item_id': hist_iid, 'hist_cate_id': hist_cate_id,
                    'pay_score': score, "seq_length": behavior_length}
    if use_neg:
        feature_dict['neg_hist_item_id'] = np.array([[1, 2, 3, 0], [1, 2, 3, 0], [1, 2, 0, 0]])
        feature_dict['neg_hist_cate_id'] = np.array([[1, 2, 2, 0], [1, 2, 2, 0], [1, 2, 0, 0]])
        feature_columns += [
            VarLenSparseFeat(SparseFeat('neg_hist_item_id', vocabulary_size=3 + 1, embedding_dim=8, embedding_name='item_id'),
                             maxlen=4, length_name="seq_length"),
            VarLenSparseFeat(SparseFeat('neg_hist_cate_id', 2 + 1, embedding_dim=4, embedding_name='cate_id'),
                             maxlen=4, length_name="seq_length")]
    x = {name: feature_dict[name] for name in get_feature_names(feature_columns)}
    y = np.array([1, 0, 1])
    if not get_test:
        return x, y, feature_columns, behavior_feature_list
    else:
        return x

if __name__ == "__main__":
    if tf.__version__ >= '2.0.0':
        tf.compat.v1.disable_eager_execution()
    USE_NEG = True
    x, y, feature_columns, behavior_feature_list = get_xy_fd(use_neg=USE_NEG)
    model = DIEN(feature_columns, behavior_feature_list, dnn_hidden_units=[4, 4, 4],
                 dnn_dropout=0.6, gru_type="AUGRU", use_negsampling=USE_NEG)
    model.compile('adam', 'binary_crossentropy', metrics=['binary_crossentropy'])
    history = model.fit(x, y, verbose=1, epochs=2, validation_split=0.5)
    model.save("tf_dien")

@Litchilitchy Please share a colab gist with the issue reported.

I choose to close the issue because the code is too much; I will open another one if I can find a minimal reproduction.
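For context, SavedModel export traces each layer's call(), so a required extra positional argument (like state here) commonly produces exactly this TypeError, while h5 saving never traces call(). A minimal sketch of the usual workaround, giving the extra argument a default (hypothetical layer, not DeepCTR's actual code):

import tensorflow as tf

class StatefulLayer(tf.keras.layers.Layer):
    # Making `state` optional lets the SavedModel tracer call the layer with
    # inputs only; this is why save_format='h5' can succeed while the default
    # SavedModel path fails on the same model.
    def call(self, inputs, state=None):
        if state is None:
            state = tf.zeros_like(inputs)
        return inputs + state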
gharchive/issue
2021-01-25T06:57:12
2025-04-01T04:36:02.878403
{ "authors": [ "Litchilitchy", "Saduf2019" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/46652", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
813781673
Value error with DELF
Hi, I use the following code to compute DELF:

import argparse
from glob import glob

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.python.framework import ops
from tensorflow.python.framework.ops import disable_eager_execution
from tqdm import tqdm

disable_eager_execution()
tf.compat.v1.disable_v2_behavior()


class DeepDELF:
    def __init__(self, input_path):
        ops.reset_default_graph()
        m = hub.Module('https://tfhub.dev/google/delf/1')
        # The module operates on a single image at a time, so define a placeholder to
        # feed an arbitrary image in.
        self.image_placeholder = tf.compat.v1.placeholder(
            tf.float32, shape=(None, None, 3), name='input_image')
        module_inputs = {
            'image': self.image_placeholder,
            'score_threshold': 100.0,
            'image_scales': [0.25, 0.3536, 0.5, 0.7071, 1.0, 1.4142, 2.0],
            'max_feature_num': 1000,
        }
        self.module_outputs = m(module_inputs, as_dict=True)
        self.image_tf = self.image_input_fn(glob(input_path + '/*'))
        self.path_list = glob(input_path + '/*')

    def extract(self):
        with tf.compat.v1.train.MonitoredSession() as sess:
            results_dict = {}  # Stores the locations and their descriptors for each image
            for image_path in tqdm(self.path_list):
                image = sess.run(self.image_tf)
                print('Extracting locations and descriptors from %s' % image_path)
                results_dict[image_path] = sess.run(
                    [self.module_outputs['locations'], self.module_outputs['descriptors']],
                    feed_dict={self.image_placeholder: image})
            return results_dict

    def image_input_fn(self, image_files):
        filename_queue = tf.compat.v1.train.string_input_producer(
            image_files, shuffle=False)
        reader = tf.compat.v1.WholeFileReader()
        _, value = reader.read(filename_queue)
        image_tf = tf.image.decode_jpeg(value, channels=3)
        return tf.image.convert_image_dtype(image_tf, tf.float32)


def main(args):
    path = args['input_path']
    extractor = None
    extractor = DeepDELF(path)
    results_dict = extractor.extract()
    results_dict2 = extractor.extract()
    print("Shape feature: ", results_dict.keys())
    print("Shape feature 2: ", results_dict2.keys())


def args_parser():
    parser = argparse.ArgumentParser(description="Methods extract image.")
    parser.add_argument('-i', '--input_path', help="The path of the input image.")
    return vars(parser.parse_args())


if __name__ == "__main__":
    args = args_parser()
    # End default optional arguments
    # Print info arguments
    print("Extract feature from image.".upper().center(100))
    print(str("-" * 63).center(100))
    print("|{:<30}:\n|{:<30}|".format("Image path", args['input_path']).center(100))
    print(str("-" * 63).center(100))
    main(args)

Unfortunately, it throws the following error:

raise ValueError(not_null_err)
ValueError: string_input_producer requires a non-null input tensor

How can I fix this?

@Zumbalamambo, TensorFlow Hub issues are tracked in the tensorflow/hub repo. Could you please submit a new issue from this link and fill in the template, so that we can track the issue there. Thanks!
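As a side note, "string_input_producer requires a non-null input tensor" is the message this op raises when the file list is empty, so a common cause is a glob that matched nothing. A minimal sketch of a guard (my own illustration, not part of the original report):

from glob import glob

image_files = glob(input_path + '/*')  # input_path as parsed above
if not image_files:
    raise ValueError('No images found under %r; check --input_path' % input_path)
# only now build the input queue from a non-empty list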
gharchive/issue
2021-02-22T19:09:40
2025-04-01T04:36:02.881809
{ "authors": [ "Zumbalamambo", "amahendrakar" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/47322", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
816532506
Install GPU Ubuntu 16.04 misses crucial steps: "unable to locate package nvidia-driver-450"
URL(s) with the issue: https://www.tensorflow.org/install/gpu?hl=ur#ubuntu_1604_cuda_110

Description of issue (what needs changing): the 19th line,

sudo apt-get install --no-install-recommends nvidia-driver-450

leads to the error:

E: Unable to locate package nvidia-driver-450

Was able to reproduce the issue. Please take a look at the below screenshot for reference. Thanks!

@GrigoriiTarasov, With reference to issue #44301, ignoring the sudo apt-get install --no-install-recommends nvidia-driver-450 command seems to work in this case. Could you please skip that particular step and check if you are able to install TensorFlow? Thanks!

@amahendrakar Thanks a lot! I confirm that skipping the 19th line works.
gharchive/issue
2021-02-25T15:17:18
2025-04-01T04:36:02.886119
{ "authors": [ "GrigoriiTarasov", "amahendrakar" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/47402", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
819617982
Cosine annealing learning rate scheduler with minimum learning rate boundary
System information
TensorFlow version (you are using): TensorFlow 2.x
Are you willing to contribute it (Yes/No): Yes

Describe the feature and the current behavior/state.
In the nightly build, we have an API called tf.keras.optimizers.schedules.CosineDecay which schedules a decaying learning rate. However, we cannot set the range over which the learning rate decays. The current API lets us start the learning rate at alpha and then decrease it in a cosine manner without a meaningful lower boundary. My proposal is to decrease the learning rate incrementally in a cosine manner from alpha to beta.
Note: alpha is the initial learning rate, beta the final learning rate.

Will this change the current API? How? It could be implemented in tf.keras.optimizers.schedules.CosineDecay or under a different API name.

Who will benefit from this feature? Anyone who wants to use a cosine annealing learning rate and is worried about the vanishing gradient issue.

Could I start working on this?

@hfahrudin Sure. Thanks for your contributions. You can raise a PR to update it. Thanks!

@jvishnuvardhan Should I make another class, or just edit this API?

I already issued the pull request, but it seems that I issued it against a protected branch. Which branch should I use for this PR?

Always the master branch.

Hi @hfahrudin, I took a look at the PR you submitted. Upon looking again at the math, it does seem that the existing implementation provides a minimum/final learning rate, but it's defined as a fraction of the initial learning rate (not as the actual final learning rate). Another point that struck me is that your PR changes the calculation (whereas I initially thought it would just add an optional minimum); this makes it backwards incompatible and makes me very hesitant to accept it unless it actually fixes a bug.

@deeb02 Thanks for the review. I will keep it in mind in the future.

Absolutely. Thank you for trying to make Keras better! Keep looking for things that need to be improved!
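For reference, a minimal sketch of expressing a true final learning rate with the existing schedule, following the review comment above that its minimum is a fraction of the initial rate (my own illustration; the numbers are hypothetical):

import tensorflow as tf

initial_lr = 1e-3  # "alpha" in the proposal above
final_lr = 1e-5    # "beta" in the proposal above

# CosineDecay's `alpha` argument is the final rate expressed as a fraction of
# `initial_learning_rate`, so passing final_lr / initial_lr pins the floor at beta.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=initial_lr,
    decay_steps=10000,
    alpha=final_lr / initial_lr)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)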
gharchive/issue
2021-03-02T05:00:27
2025-04-01T04:36:02.891919
{ "authors": [ "deeb02", "hfahrudin", "jvishnuvardhan", "mihaimaruseac" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/47493", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
998501427
TensorBoard won't load log files
Please make sure that this is a bug. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:bug_template

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 20.04
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary): source
TensorFlow version (use command below): 2.5.1
Python version: 3.8.5
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: cuDNN v8100
GPU model and memory: 3080 10 GB

I am trying to explore the log files generated by

tf.debugging.experimental.enable_dump_debug_info(
    "/tmp/tfdbg2_logdir",
    tensor_debug_mode="FULL_HEALTH",
    circular_buffer_size=-1)

but after launching TensorBoard 2.6 with the following command

tensorboard --logdir /tmp/tfdbg2_logdir/ --load_fast=false

I get a message that there is no data to be loaded. I have checked, and there are log files in this directory.

@vulkomilev, This issue is more suitable for the TensorBoard repo. Please post it on the tensorflow/tensorboard repo from here. Thanks!
gharchive/issue
2021-09-16T18:22:42
2025-04-01T04:36:02.898008
{ "authors": [ "tilakrayal", "vulkomilev" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/52038", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1165103194
Got this warning while using @tf.function on tensorflow_probability.
I am trying to train a reinforcement learning agent on BipedalWalker. To speed things up I used the @tf.function wrapper on my gradient calculations, and then I got this warning.

System information
Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes.
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04 LTS
Device: Lenovo Legion Y540
TensorFlow installed from (source or binary): conda
TensorFlow version (use command below): v2.4
TensorFlow-Probability: v0.12.2
Python version: v3.9
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: 10.1/7.6.5
GPU model and memory: Nvidia GeForce GTX 1650 4 GB

Describe the current behavior

WARNING:tensorflow:AutoGraph could not transform <bound method A3CAgent.act of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f20540f1940>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING: AutoGraph could not transform <bound method A3CAgent.act of <tensorflow.python.eager.function.TfMethodTarget object at 0x7f20540f1940>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: module 'gast' has no attribute 'Index'
To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert
WARNING:tensorflow:From /home/himanshu/anaconda3/envs/myenv/lib/python3.9/site-packages/tensorflow_probability/python/distributions/distribution.py:298: calling MultivariateNormalDiag.__init__ (from tensorflow_probability.python.distributions.mvn_diag) with scale_identity_multiplier is deprecated and will be removed after 2020-01-01.
Instructions for updating:
`scale_identity_multiplier` is deprecated; please combine it with `scale_diag` directly instead.
WARNING:tensorflow:From /home/himanshu/anaconda3/envs/myenv/lib/python3.9/site-packages/tensorflow/python/ops/linalg/linear_operator_diag.py:167: calling LinearOperator.__init__ (from tensorflow.python.ops.linalg.linear_operator) with graph_parents is deprecated and will be removed in a future version.
Instructions for updating:
Do not pass `graph_parents`. They will no longer be used.

Describe the expected behavior
TensorFlow should not give these warnings.

Do you want to contribute a PR? (yes/no): no
Briefly describe your candidate solution (if contributing): None

Standalone code to reproduce the issue
Link to the Jupyter notebook can be found here.

No, v2.8 is not supported by CUDA 10.1; that's why I have not checked it.

Can you switch to CUDA 11.2 and cuDNN 8.1 for TF 2.8 and let us know whether the issue still persists?

I tried, but when I install those versions of CUDA and cuDNN on my machine, TensorFlow is not able to access the GPU.
@HimGautam, These are deprecation warnings, which we can suppress using one of the below code snippets:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf

OR

import warnings
warnings.filterwarnings("ignore")
import tensorflow as tf

OK, got it. I think when I update to a higher version of TensorFlow these warnings will go away automatically. Thanks for the assist.
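For completeness, the AutoGraph warning itself names a per-function silencer; a minimal sketch (hypothetical function, not the reporter's agent code). The underlying cause, module 'gast' has no attribute 'Index', is typically a gast package newer than the one that TF release expects, so pinning gast may also make that warning disappear:

import tensorflow as tf

@tf.autograph.experimental.do_not_convert
def act(state):
    # Runs as plain Python inside tf.function, skipping AutoGraph conversion
    # (and therefore the "could not transform" warning) for this function only.
    return state * 2.0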
gharchive/issue
2022-03-10T11:30:11
2025-04-01T04:36:02.905318
{ "authors": [ "HimGautam", "gadagashwini", "mohantym" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/55190", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
123427730
[feature request] numpy style shape sugar
I'd like to suggest a numpy-like shape read-only property for tf.Tensor, something like:

@property
def shape(self):
    return tuple(self.get_shape().as_list())

I'd prefer not to have two ways to access (subtly different forms of) the same information. However, I wouldn't be opposed to changing Tensor.get_shape() to be a Tensor.shape property. Ideally you could use a (fully defined) TensorShape anywhere a list of integers is accepted. Would that work for your purposes?

It is OK for me even now as is; the question is what the optimal / general way is ;)
In my opinion, there are two possible solutions based on what you said, depending on the criterion of whether the TensorShape object can be used to initialize other objects or be assigned to other variables, and whether it carries more information than just a tuple of integers:
(1) if TensorShape has practical significance as is: implement both methods; maybe reimplement get_shape as a property (called something like tshape, for tensor shape).
(2) if TensorShape is used only for internal convenience: change to the above suggested property.

Are there any plans on changing this?

@DSLituiev's comment sounds very reasonable to me. There are no plans to change the use of TensorShape, and we're happy to accept suggestions/PRs that would make TensorShape usable wherever a list of integers would be.

What is the use of TensorShape? Is it ever used downstream of get_shape()? As I understand, it cannot even be fed into the shape parameter upon tensor initialization. Do you imply that you'll oppose .shape returning a simple tuple because there is a downstream component that makes use of the TensorShape data structure, which holds something other than what is contained in the tuple?

Tensors in TensorFlow can have dynamic shapes. TensorShape is used throughout the Python API to represent shapes that might have one or more unknown dimensions, or even an unknown rank, and to combine such shapes easily. So, yes, I'd be opposed to having two slightly incompatible properties/methods on Tensor that convey the same information but cannot be used the same way. If we add Tensor.shape, it should return TensorShape, and we should deprecate Tensor.get_shape() at the same time. Which tensor initialization function doesn't support TensorShape objects as the shape argument? We should fix that.

@mrry I totally agree that we shouldn't have both .shape and .get_shape(). However, here are some problems with the current TensorShape. Also, I think it's worth switching to the shorter and Numpy-equivalent .shape now (rather than waiting and breaking more code later).
>>> import tensorflow as tf
>>> a = tf.placeholder(tf.float32, [None, None, None])
>>> b = tf.reshape(a, [a.get_shape()[0], a.get_shape()[1] // 2, -1])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1092, in reshape
    name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/op_def_library.py", line 411, in apply_op
    as_ref=input_arg.is_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 179, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 162, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 332, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 272, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got Dimension(None) of type 'Dimension' instead.

>>> import tensorflow as tf
>>> a = tf.placeholder(tf.float32, [None, None, None])
>>> b = tf.reshape(a, [a.get_shape()[0], 10, -1])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1092, in reshape
    name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/op_def_library.py", line 411, in apply_op
    as_ref=input_arg.is_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 179, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/ops/constant_op.py", line 162, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape))
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 332, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/usr/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 272, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got Dimension(None) of type 'Dimension' instead.
>>>
We can't—or, at least, really shouldn't—accept a contribution on this until the internal refactoring is done. However, the internal refactoring is P3, so it might be some time before it rises to the top of the pile. Sounds good, let's leave it as is. @mrry Any news on the internal refactoring? @mrry friendly ping? This remains blocked on a low-priority internal cleanup. I'd be happy to hand it off to somebody who is looking for something to do, but—due to the nature of the conflict—they would need to be a Google employee, @mrry Maybe I can help? I'm currently interning at Google MTV, feel free to ping me. @danijar I'll be in touch! What's the status of this? @yifeif may be able to help. That would be perfect. I don't have the time for it, unfortunately. Can you contact @mrry? Fixed by commit above.
gharchive/issue
2015-12-22T07:06:51
2025-04-01T04:36:02.918485
{ "authors": [ "DSLituiev", "alextp", "danijar", "drpngx", "girving", "martinwicke", "mrry" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/586", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1678883296
ValueError: Unexpected result of predict_function (Empty batch_outputs)
I have the below model I'm working on. The intention is to forecast the 'Index' field based on the impacts from the fields A, B, C and D. The 'Date' field is of the type 'MM/DD/YYYY'.

# Import necessary libraries
import pandas as pd
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, LSTM

# Load the dataset
df = pd.read_csv('New Final.csv', usecols=['Date', 'Index', 'A', 'B', 'C', 'D'])
df = df.sort_values('Date')
df = df.set_index('Date')

# Normalize the data using MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(df)

# Split data into training and testing sets
training_data_len = int(len(scaled_data) * 0.8)
train_data = scaled_data[0:training_data_len, :]
test_data = scaled_data[training_data_len:, :]

# Prepare the data for training
def create_dataset(dataset, time_step=1):
    data_X, data_y = [], []
    for i in range(len(dataset) - time_step):
        a = dataset[i:(i + time_step), :]
        data_X.append(a)
        data_y.append(dataset[i + time_step, 0])
    return np.array(data_X), np.array(data_y)

time_steps = 60
X_train, y_train = create_dataset(train_data, time_steps)
X_test, y_test = create_dataset(test_data, time_steps)

# Build the model
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(LSTM(units=50, return_sequences=True))
model.add(LSTM(units=50))
model.add(Dense(units=1))

# Compile the model
# model.compile(optimizer='adam', loss='mean_squared_error')
model.compile(loss='mean_squared_error', optimizer='adam', run_eagerly=True)

# Train the model
model.fit(X_train, y_train, epochs=100, batch_size=32)

# Evaluate the model
train_predictions = model.predict(X_train)
test_predictions = model.predict(X_test, batch_size=1)

# Convert the predictions back to original form
train_predictions = scaler.inverse_transform(train_predictions)
y_train = scaler.inverse_transform([y_train])
test_predictions = scaler.inverse_transform(test_predictions)
y_test = scaler.inverse_transform([y_test])

# Create a dataframe to store the predictions
train_predict_df = pd.DataFrame(train_predictions, columns=['IndexI'], index=df.iloc[time_steps:training_data_len, :].index)
test_predict_df = pd.DataFrame(test_predictions, columns=['Index'], index=df.iloc[training_data_len+time_steps:-1, :].index)

# Merge the predicted values with the original dataset
df_train_predict = pd.concat([df.iloc[time_steps:training_data_len, :], train_predict_df], axis=1)
df_test_predict = pd.concat([df.iloc[training_data_len+time_steps:-1, :], test_predict_df], axis=1)

# Import the necessary libraries
import matplotlib.pyplot as plt

# Define the x-axis labels
x_labels = []
for year in range(2009, 2023):
    for week in range(1, 53):
        x_labels.append(str(year) + '-W' + str(week))

# Create the plot
plt.plot(df['Index'].values, color='purple')
plt.plot(df_train_predict['Index'], color='green')
plt.plot(df_test_predict['Index'], color='yellow')

# Set the x-axis labels
plt.xticks(np.arange(0, len(df), 52), x_labels[::52], rotation=90)

# Set the plot title and axis labels
plt.title('Actual vs. Predicted Index Values')
plt.xlabel('Weeks of the Year')
plt.ylabel('SCFI Values')

# Display the plot
plt.show()

The training outcomes I get are below.

I'm getting the following error when I'm trying to predict with the model.

Please tell me the possible reason for the above error and how I can fix it.
@Erangi2020, I tried to execute the mentioned code and it was failing due to a different error; kindly find the gist of it here. Also, have you had a chance to take a look at the PRs below, which were raised for a similar error and are open and under review with the developers?
https://github.com/keras-team/keras/pull/16216
https://github.com/keras-team/keras/pull/18042
https://github.com/keras-team/keras/issues/16202
Requesting you to follow the same issue for updates. Thank you!
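For what it's worth, one common trigger for "Unexpected result of predict_function (Empty batch_outputs)" is calling predict() on an empty array; in the code above that happens whenever the test split has no more than time_steps rows, since create_dataset then returns empty arrays. A minimal sketch of a guard (my own illustration of that hypothesis, not a confirmed fix for this report):

# Hypothetical guard before predicting; test_data / time_steps as defined above.
assert len(test_data) > time_steps, (
    'test split too small: %d rows, need more than time_steps=%d'
    % (len(test_data), time_steps))
test_predictions = model.predict(X_test, batch_size=1)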
gharchive/issue
2023-04-21T17:49:11
2025-04-01T04:36:02.934674
{ "authors": [ "Erangi2020", "tilakrayal" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/60394", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1746807275
tf-mlir-translate and flatbuffer_translate failure for the ERF function

For TF
Results INVALID_MLIR test_erf_1_f32:
Error 1 running command:

/tensorflow/bazel-bin/tensorflow/compiler/mlir/tf-mlir-translate --graphdef-to-mlir --tf-enable-shape-inference-on-import --tf-output-arrays=result erf/test_erf_1_f32/model.pb -o erf/test_erf_1_f32/test_tf.preopt.mlir --tf-input-arrays placeholder_0 --tf-input-shapes 1,

You can find the dummy tf erf model here: https://github.com/Jerry-Ge/tfl_models/blob/main/erf_model.pb

For TFL
After running

/tensorflow/compiler/mlir/lite/flatbuffer_translate --tflite-flatbuffer-to-mlir erf/test_erf_1_f32/model.tflite --output-arrays=PartitionedCall:0 -o erf/test_erf_1_f32/test_tflite.preopt.mlir

You can find the dummy tfl erf model here: https://github.com/Jerry-Ge/tfl_models/blob/main/erf_model.tflite

It is generating a tfl.custom operator here, which is not tfl.erf:

module attributes {tf_saved_model.semantics, tfl.description = "MLIR Converted.", tfl.schema_version = 3 : i32} {
  func.func @main(%arg0: tensor<1xf32> {tf_saved_model.index_path = ["placeholder_0"]}) -> (tensor<1xf32> {tf_saved_model.index_path = ["output_0"]}) attributes {tf.entry_function = {inputs = "serving_default_placeholder_0:0", outputs = "PartitionedCall:0"}, tf_saved_model.exported_names = ["serving_default"]} {
    %0 = "tfl.custom"(%arg0) {custom_code = "FlexErf", custom_option = #tfl<const_bytes : "0x03457266001212034572661A002A070A0154120230013200000219151414042801">} : (tensor<1xf32>) -> tensor<1xf32>
    return %0 : tensor<1xf32>
  }
}

I think a fair amount of support for the erf function is still required in MLIR.
Related ticket: https://github.com/tensorflow/tensorflow/issues/60663

Hi @Jerry-Ge, I tried both the commands, but I'm getting different errors... did you upload the right files?

../tensorflow/bazel-bin/tensorflow/compiler/mlir/tf-mlir-translate --graphdef-to-mlir --tf-enable-shape-inference-on-import --tf-output-arrays=result erf_model.pb -o test_tf.preopt.mlir --tf-input-arrays=placeholder_0 --tf-input-shapes=1
2023-06-08 23:20:41.144947: E tensorflow/compiler/mlir/tensorflow/utils/import_utils.cc:48] Error parsing Protobuf
2023-06-08 23:20:41.145026: E tensorflow/compiler/mlir/tensorflow/translate/tf_mlir_translate.cc:129] Graph import failed: INVALID_ARGUMENT: Could not parse input proto

../tensorflow/bazel-bin/tensorflow/compiler/mlir/lite/flatbuffer_translate --tflite-flatbuffer-to-mlir erf_model.tflite --output-arrays=PartitionedCall:0 -o test_tflite.preopt.mlir
ERROR: The model is not a valid Flatbuffer buffer
erf_model.tflite:0:0: error: couldn't parse flatbuffer

Alternatively, did I translate your commands incorrectly for my environment?

Thanks @pkgoogle for helping on this. I cloned the model repo again and still couldn't reproduce those Protobuf/Flatbuffer errors. Also double-checked your command, and that looks fine.
A few options:

1. Maybe something went wrong when you downloaded the model files, causing some corruption.
2. It's a very dummy model and you can easily generate it with the following definition:

```python
@tf.function(input_signature=[tf.TensorSpec(shape=[1, ], dtype=tf.float32)])
def erf(self, x):
    return tf.math.erf(x)
```

Some more progress/issues for TF: in the tf model generated, the `result` output seems to be missing, and I get the following error:

```
INVALID_ARGUMENT: Output result was not found in graph
```

by running this command:

```
tensorflow/bazel-bin/tensorflow/compiler/mlir/tf-mlir-translate --graphdef-to-mlir --tf-enable-shape-inference-on-import --tf-output-arrays result erf_model.pb -o test_tf.preopt.mlir --tf-input-arrays placeholder_0 --tf-input-shapes=1
```

If we remove the `--tf-output-arrays result` flag, this works and I can see the `tf.Erf` in the generated IR. For TFL, the same error as described at the top.

@Jerry-Ge, what branch are you working out of? Master? Nightly? A release branch?

Master.

So I went to the exact same commit and recompiled those binaries, and I was able to get the tflite command to work:

```
../tensorflow/bazel-bin/tensorflow/compiler/mlir/lite/flatbuffer_translate --tflite-flatbuffer-to-mlir erf.tflite --output-arrays=PartitionedCall:0 -o test_tflite.preopt.mlir
```

test_tflite.preopt.mlir:

```mlir
module attributes {tf_saved_model.semantics, tfl.description = "MLIR Converted.", tfl.schema_version = 3 : i32} {
  func.func @main(%arg0: tensor<1xf32> {tf_saved_model.index_path = ["x"]}) -> (tensor<1xf32> {tf_saved_model.index_path = ["output_0"]}) attributes {tf.entry_function = {inputs = "serving_default_x:0", outputs = "PartitionedCall:0"}, tf_saved_model.exported_names = ["serving_default"]} {
    %0 = "tfl.custom"(%arg0) {custom_code = "FlexErf", custom_option = #tfl<const_bytes : "0x03457266001212034572661A002A070A0154120230013200000219151414042801">} : (tensor<1xf32>) -> tensor<1xf32>
    return %0 : tensor<1xf32>
  }
}
```

I'm still getting the previous error with the "regular" TF version:

```
../tensorflow/bazel-bin/tensorflow/compiler/mlir/tf-mlir-translate --graphdef-to-mlir --tf-enable-shape-inference-on-import --tf-output-arrays=result saved_model.pb -o test_tf.preopt.mlir --tf-input-arrays=placeholder_0 --tf-input-shapes=1
2023-06-09 21:00:13.685096: E tensorflow/compiler/mlir/tensorflow/utils/import_utils.cc:48] Error parsing Protobuf
2023-06-09 21:00:13.685157: E tensorflow/compiler/mlir/tensorflow/translate/tf_mlir_translate.cc:129] Graph import failed: INVALID_ARGUMENT: Could not parse input proto
```

My exact code to produce the .pb and .tflite files:

create_model.py:

```python
import tensorflow as tf

class Erfer(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[1, ], dtype=tf.float32)])
    def erf(self, x):
        return tf.math.erf(x)

model = Erfer()
tf.saved_model.save(model, 'saved_model')
```

and convert_model.py:

```python
import tensorflow as tf

# Path to the saved model directory
saved_model_dir = 'saved_model'

# Convert the saved model to TFLite format
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS     # enable TensorFlow ops.
]
tflite_model = converter.convert()

# Save the TFLite model to a file
tflite_model_file = 'erf.tflite'
with open(tflite_model_file, 'wb') as f:
    f.write(tflite_model)
```

So I believe the flatbuffer-to-MLIR output is expected, given that this is a FlexOp, i.e. it is not natively built for TFLite (it essentially just makes it work with the original TF implementation). Is this ticket a feature request to make ERF a "BUILTIN" TFLite op, or just to make ERF "work"? i.e., do you run into any errors when attempting to use the .tflite model? Let me know if you need any clarity on what I'm asking.

TF: It's quite weird for TF. My code looks very similar, except I'm using something like this:

```python
model = Model()
concrete_function = model.erf.get_concrete_function()
tf.io.write_graph(concrete_function.graph, ".", "model_file_name", True)
```

TFL: For TFLite, I'm requesting to make ERF a "BUILTIN" op, if I understood that correctly. i.e., I want to see tfl.erf in the MLIR file, where currently it's a custom op.

Hi @haozha111, would you know if/when ERF will make it into TFLite as a BUILTIN op? Thanks!

Hi all, bringing this up again to see if there are any updates.

@pkgoogle seems no responses for a long time.

Fixed it.
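For anyone landing here who just needs the Flex model to run (as opposed to a builtin kernel), here is a minimal sketch using the Python interpreter; it assumes the full TensorFlow pip package is installed, which bundles the Flex delegate so SELECT_TF_OPS models run:

```python
import numpy as np
import tensorflow as tf

# Run the converted model containing the FlexErf custom op.
interpreter = tf.lite.Interpreter(model_path="erf.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.array([0.5], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))  # approx [0.5205], i.e. erf(0.5)
```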
gharchive/issue
2023-06-07T23:07:13
2025-04-01T04:36:02.948065
{ "authors": [ "Jerry-Ge", "pkgoogle" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/60809", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2185477718
tf.raw_ops.DatasetToTFRecord: Aborted (core dumped)

Issue type: Bug
Have you reproduced the bug with TensorFlow Nightly? Yes
Source: binary
TensorFlow version: tf 2.15
Custom code: Yes
OS platform and distribution: Ubuntu 20.04
Mobile device: No response
Python version: 3.9
Bazel version: No response
GCC/compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current behavior?

Under specific input, tf.raw_ops.DatasetToTFRecord encounters "Aborted (core dumped)".

Standalone code to reproduce the issue

```python
import tensorflow as tf

input_data = [[1, 2], [3, 4], [5, 6]]
input_data_strings = [[str(d) for d in inner_list] for inner_list in input_data]
dataset = tf.data.Dataset.from_tensor_slices(input_data_strings)
filename = "output.tfrecord"
tf.raw_ops.DatasetToTFRecord(input_dataset=dataset._variant_tensor, filename=filename, compression_type="")
```

Relevant log output

```
2024-03-14 05:41:32.476705: F tensorflow/core/framework/tensor.cc:852] Check failed: 1 == NumElements() (1 vs. 2)Must have a one element tensor
Aborted (core dumped)
```

@SuryanarayanaY I was able to replicate the issue here. Thank you!
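The check that fires ("Must have a one element tensor") matches the op's contract of one scalar string per record, while the dataset above yields length-2 vectors. As a sketch of a working variant (assuming the intent is one record per row), the vectors can be serialized to scalar strings first and written through the public wrapper:

```python
import tensorflow as tf

input_data = [[1, 2], [3, 4], [5, 6]]
ds = tf.data.Dataset.from_tensor_slices(input_data)
ds = ds.map(tf.io.serialize_tensor)  # each [2]-vector -> one scalar string
tf.data.experimental.TFRecordWriter("output.tfrecord").write(ds)
```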
gharchive/issue
2024-03-14T05:41:47
2025-04-01T04:36:02.953595
{ "authors": [ "nonenone12135", "sushreebarsa" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/63689", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
197592069
Feature Request: Gradient for SVD op

The gradient for the SVD op would be very useful so that it could be used in networks and cost functions. Currently, when trying to use SVD I get the following:

```
LookupError: No gradient defined for operation 'Svd' (op type: Svd)
```

So my request is for the gradient for the SVD op. The algorithm is in section 3.2 of "An extended collection of matrix derivative results for forward and reverse mode algorithmic differentiation".

We are currently working on this internally and I've heard it may be close. @rmlarsen knows more.

@aselle @rmlarsen Any update on this?

As mentioned, this is underway internally. I believe both the person working on it and I are just back from vacation. I assume this will be available within the coming month.

Hi @rmlarsen, I am looking through the rc-v1.0 and I don't see the SvdGrad op registered here. Is there somewhere else I should look for it?

Yes, this would help experiment with spectral methods.

@rmlarsen Is this still in active development?

Is there any update on this? It would be super useful for our work!

@rmlarsen what is the update on this?

There is a current implementation already but it is not yet complete. I was busy with other projects but I will try to come back to this next week.

@cdiwork would also be v. interested in this.

@cdiwork actually happy to help if there's much more to do. I'm looking for a numerically stable gradient for the pseudoinverse, and this would completely solve my problem.

Hello, any news on this feature? Would be very helpful to me too.

@cdiwork @rmlarsen @aselle hey fellas, any update on this feature?

Hello, I was also wondering if there was any progress on this? Would be great to have this functionality.

We might be able to create a differentiable op using this, though it's not exactly efficient:

```python
def svd(A, compute_uv=True, name=None):
    # since dA = dUSVt + UdSVt + USdVt
    # we can simply recompute each matrix using A = USVt
    # while blocking gradients to the original op.
    _, M, N = A.get_shape().as_list()
    P = min(M, N)
    S0, U0, V0 = map(tf.stop_gradient, tf.svd(A, full_matrices=True, name=name))
    # A = USVt
    # S = inv(U) A inv(Vt)
    S = tf.batch_matmul(tf.matrix_inverse(U0), tf.batch_matmul(A, tf.matrix_inverse(V0)))
    S = tf.matrix_diag_part(S)
    if not compute_uv:
        return S
    # U = A inv(SVt)
    Sv = tf.pad(tf.expand_dims(S0, 2), [[0, 0], [0, N - P], [0, 0]])
    SVTi = tf.matrix_inverse(Sv * tf.transpose(V0, (0, 2, 1)))
    U = tf.batch_matmul(A, SVTi)
    U = U[:, :M, :P]
    # Vt = inv(US) A
    Su = tf.pad(tf.expand_dims(S0, 1), [[0, 0], [0, 0], [0, M - P]])
    V = tf.transpose(tf.batch_matmul(tf.matrix_inverse(Su * U0), A), (0, 2, 1))
    V = V[:, :N, :P]
    return S, U, V
```

```python
def svd(A, full_matrices=False, compute_uv=True, name=None):
    # since dA = dUSVt + UdSVt + USdVt
    # we can simply recompute each matrix using A = USVt
    # while blocking gradients to the original op.
    _, M, N = A.get_shape().as_list()
    P = min(M, N)
    S0, U0, V0 = map(tf.stop_gradient, tf.svd(A, full_matrices=True, name=name))
    Ui, Vti = map(tf.matrix_inverse, [U0, tf.transpose(V0, (0, 2, 1))])
    # A = USVt
    # S = UiAVti
    S = tf.matmul(Ui, tf.matmul(A, Vti))
    S = tf.matrix_diag_part(S)
    if not compute_uv:
        return S
    Si = tf.pad(tf.matrix_diag(1 / S0), [[0, 0], [0, N - P], [0, M - P]])
    # U = AVtiSi
    U = tf.matmul(A, tf.matmul(Vti, Si))
    U = U if full_matrices else U[:, :M, :P]
    # Vt = SiUiA
    V = tf.transpose(tf.matmul(Si, tf.matmul(Ui, A)), (0, 2, 1))
    V = V if full_matrices else V[:, :N, :P]
    return S, U, V
```

Hello @kofd, this idea did not pass gradient checks for me.
Is it because the orthonormality constraints aren't imposed? Did this work for you? (I tried it on a different framework and coded it up independently.)

Hi, any news?

Hi, I have composed one gradient function based on the Matrix-backpropagation paper. Hope it helps.

```python
@tf.RegisterGradient('Svd')
def gradient_svd(op, grad_s, grad_u, grad_v):
    """
    Define the gradient for SVD.

    References
        Ionescu, C., et al, Matrix Backpropagation for Deep Networks with Structured Layers

    Parameters
    ----------
    op
    grad_s
    grad_u
    grad_v

    Returns
    -------
    """
    s, u, v = op.outputs
    v_t = tf.transpose(v, [0, 2, 1])

    with tf.name_scope('K'):
        K = get_eigen_K(s, True)

    inner = matrix_symmetric(K * tf.matmul(v_t, grad_v))

    # Create the shape accordingly.
    u_shape = u.get_shape()[1].value
    v_shape = v.get_shape()[1].value

    # Recover the complete S matrices and its gradient
    eye_mat = tf.eye(v_shape, u_shape)
    realS = tf.matmul(tf.reshape(tf.matrix_diag(s), [-1, v_shape]), eye_mat)
    realS = tf.transpose(tf.reshape(realS, [-1, v_shape, u_shape]), [0, 2, 1])

    real_grad_S = tf.matmul(tf.reshape(tf.matrix_diag(grad_s), [-1, v_shape]), eye_mat)
    real_grad_S = tf.transpose(tf.reshape(real_grad_S, [-1, v_shape, u_shape]), [0, 2, 1])

    dxdz = tf.matmul(u, tf.matmul(2 * tf.matmul(realS, inner) + real_grad_S, v_t))
    return dxdz
```

@kcyu2014 Why don't you make a PR?

@albertpumarola Sorry, I forgot it and now it's updated :)

+1 would be very useful :)

@rmlarsen Yes, please add this feature; super helpful for matrix nuclear norm.

@rmlarsen Have you tested the code @kcyu2014 contributed? Does it work?

@albertpumarola I tried it and it didn't work for me :/ Can find logs later if it's helpful for people.

The implementation by @kcyu2014 does not have gradients for U, only for S and V (those seem to agree with numerical gradients though).

I need this feature badly. Could someone get it done fast?

Hi, any update about this? I tried the code by @kcyu2014 but unfortunately it didn't work properly.

@rmlarsen. Any update?

Sorry for the lack of progress on this. I will try to set aside a few days to get this in now. Especially now that we have GPU support for all the linear algebra ops (minus complex SVD), this is a gaping hole.

here is an implementation that should work for square matrices.

@psycharo seems to work, but sometimes the loss goes to NaN when using SVD in the loss (nuclear norm).

@hicham-eyeem -- TensorFlow SVD has some bugs that cause NaNs sometimes -- https://github.com/tensorflow/tensorflow/issues/9234, you could double check if this is fixed using the numpy version.

@yaroslavvb ah ok, thank you for pointing that out; actually, even adding some regularisation doesn't help. Do you know the reason why it would give NaNs sometimes? I guess we can also avoid using SVD by using a matrix factorization formulation if it's used in the loss function, since the matrix factorization formulation would require only matmul and transpose ops (+ some constraints that can be linearized with a proximal form).

FYI: I believe I have a working version now. I'll send it through internal code review today. Stay tuned.

FYI: I have an initial version of this out for review internally.

The code was submitted and should appear on github within a day or so. Let me close this and open a new issue for extending support for more general matrices.

@caisq thanks for the quick push!
New issue is https://github.com/tensorflow/tensorflow/issues/13641

@rmlarsen was the formula from https://people.maths.ox.ac.uk/gilesm/files/NA-08-01.pdf used, or was a different formula used?

> Hi, I have composed one gradient function based on the Matrix-backpropagation paper. Hope it helps.

```python
def matrix_symmetric(x):
    return (x + tf.transpose(x, [0, 2, 1])) / 2

def get_eigen_K(x, square=False):
    """
    Get K = 1 / (sigma_i - sigma_j) for i != j, 0 otherwise

    Parameters
    ----------
    x : tf.Tensor with shape as [..., dim,]

    Returns
    -------
    """
    if square:
        x = tf.square(x)
    res = tf.expand_dims(x, 1) - tf.expand_dims(x, 2)
    res += tf.eye(tf.shape(res)[1])
    res = 1 / res
    res -= tf.eye(tf.shape(res)[1])

    # Keep the results clean
    res = tf.where(tf.is_nan(res), tf.zeros_like(res), res)
    res = tf.where(tf.is_inf(res), tf.zeros_like(res), res)
    return res
```

plus the `@tf.RegisterGradient('Svd')` gradient_svd function quoted above.

this is very useful; we are assuming that we don't use the U matrix when we have decomposed the original matrix A into U s V, since we do not calculate the derivative with respect to U anywhere.
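With the gradient now registered in TensorFlow itself, a nuclear-norm penalty can be differentiated directly through the op. A minimal TF1-style graph-mode sketch (shapes and placeholder usage are illustrative assumptions):

```python
import tensorflow as tf

A = tf.placeholder(tf.float32, shape=[4, 4])
s = tf.svd(A, compute_uv=False)           # singular values only
nuclear_norm = tf.reduce_sum(s)           # ||A||_* = sum of singular values
grad_A, = tf.gradients(nuclear_norm, A)   # works once SvdGrad is registered
```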
gharchive/issue
2016-12-26T14:13:56
2025-04-01T04:36:02.971980
{ "authors": [ "Branden-Zhang", "JaeDukSeo", "LionSR", "Xalos", "albertpumarola", "aselle", "cdiwork", "ddetone", "hicham-eyeem", "jjough", "kcyu2014", "kmyid", "kofd", "kstant0725", "mlhengin", "psycharo", "rmlarsen", "satyam-cyc", "schmiflo", "shariharan99", "smilli", "yaroslavvb" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/issues/6503", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
326144111
Added 64 bit toolchain flag to CMake build instructions

When building tensorflow on windows with CMake, using the 64 bit toolchain (compiler (cl.exe) and linker (link.exe)) is needed. Using the 32 bit toolchain often results in errors such as C1060 "compiler out of heap space" and C1002 "compiler is out of heap space in pass 2". There are several issues reported on this: https://github.com/tensorflow/tensorflow/issues?utf8=✓&q=is%3Aissue+compiler+is+out+of+heap+space

The current README for CMake in tensorflow states that you can fix this by setting up a bunch of environment variables using the visual studio bat script: "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\amd64\vcvarsall.bat". However, I have discovered that this has no effect. If you look in task manager while you are compiling when using this bat script, you can see the task "Microsoft Compiler Driver (32 bit)" running. The reason this has no effect is that CMake selects which compiler and linker are used. It will output this to the console the first time you run configure with cmake. You will then see that CMAKE_CXX_COMPILER is set to "C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe". The x86 part, I assume, means the 32 bit compiler is used, while the amd64 part means you are building a 64 bit application: two very different things.

The fact that some people report that using the bat script actually helps is most likely due to chance. I have observed several times that sometimes this error occurs, while other times it doesn't. I think this is due to multi-threading during compilation.

Newer versions of CMake (>= 3.8) allow you to set the toolset host architecture to 64 bit using the flag -Thost=x64. This is documented here: https://cmake.org/cmake/help/v3.8/generator/Visual Studio 14 2015.html

By adding this flag, I observe that CMAKE_CXX_COMPILER is set to "C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/amd64/cl.exe" instead. Also, while compiling, the task manager now shows the task "Microsoft Compiler Driver", which is the 64 bit version of cl.exe, instead of "Microsoft Compiler Driver (32 bit)". Using this flag, I have not experienced "compiler out of heap space" issues anymore on either Visual Studio 2015 or 2017 on Windows 10. I hope others can test this and, hopefully, verify this solution.

I also updated the minimum cmake version required for windows to 3.8, which supports this host toolset flag.

Looks like we need a cmake upgrade on our test infra. I will see if I can get that done first, then we can move forward with this PR.

Nagging Assignee @case540: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

Nagging Assignee @case540: It has been 29 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

Nagging Assignee @case540: It has been 44 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

Nagging Assignee @case540: It has been 59 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

pinging gunan@ for update about upgrading CMake. @m3bm3b as he has some context.

We received some complaints that upgrading the dependencies to cutting edge versions is causing headaches for users who would like to build from sources. So we would like to avoid this upgrade. Is there a way to achieve a similar result without a cmake upgrade?
In case it helps, one way to achieve a version requirement only under certain conditions is as follows (pseudocode, with the condition and versions to be filled in):

```cmake
cmake_minimum_required(VERSION <lowest acceptable version>)
if(<need-higher-version condition (e.g., is Windows)> AND
   "${CMAKE_MAJOR_VERSION}"/"${CMAKE_MINOR_VERSION}" are too low)
  message(FATAL_ERROR "require version <higher version> if <condition>")
endif()
```

Not sure why a cmake upgrade should be troublesome? At least not on windows; for linux I can understand, as the package manager for Ubuntu 16 only gives you cmake 3.5. But this toolset feature is not needed on linux, hence the if(WIN32). This example might give an idea on how you can do it without the cmake upgrade: https://github.com/facebook/hhvm/blob/master/CMake/VisualStudioToolset.cmake. I have not tested this, so it would have to be verified that it actually does what it claims. Also, the example is only for visual studio 2017, and would have to be expanded for other VS versions.

> Not sure why a cmake upgrade should be troublesome?

If N people are each going to have to spend 5-10 minutes working out that they need to upgrade, looking up how to do it, and then doing it, it's worth the maintainer spending on the order of N*5 minutes seeking to help them by arranging that they don't have to do the upgrade. For TensorFlow, N tends not to be small. Sometimes it's hard to avoid forcing people to upgrade something. But if it's possible to avoid it straightforwardly, I'd recommend it. And failing that for Windows, if it can be helped for other platforms straightforwardly, then I'd recommend that.

Nagging Assignee @case540: It has been 16 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

Nagging Assignee @case540: It has been 31 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

@smistad I agree with @m3bm3b on this one. Upgrade of dependencies can be confusing for people, and with each upgrade we force that on thousands of people. Would you like to prepare a workaround that can do more if a newer version of cmake is available, but still work if the user has an older version of cmake?

I see your point. But is the cmake feature still relevant? The reason I ask is that in the 1.10 release notes it was stated the following: "Starting from TensorFlow 1.11, Windows builds will use Bazel. Therefore, we will drop official support for cmake."

Do you mean we can drop this PR, or do you mean it is OK to be more restrictive on an unsupported config?

The hack from https://github.com/facebook/hhvm/blob/master/CMake/VisualStudioToolset.cmake caused an error in CMake, therefore I propose the following compromise: replace the cmake_minimum_required(3.8) with a warning:

```cmake
if(WIN32 AND ${CMAKE_VERSION} VERSION_LESS "3.8")
  message(WARNING "Your current cmake version is ${CMAKE_VERSION} which does not support setting the toolset architecture to x64. This may cause \"compiler out of heap space\" errors when building. Consider upgrading your cmake to >= 3.8 and using the flag -Thost=x64 when running cmake.")
endif()
```

OK, with the current state of cmake for TF, I am OK with that.

Nagging Assignee @case540: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.
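For reference, putting the flag to use at configure time looks something like this (generator name and paths are examples, not taken from the thread):

```bat
rem hypothetical invocation; -Thost=x64 requires CMake >= 3.8,
rem and the generator must match your Visual Studio version
cmake <path-to-tensorflow>\tensorflow\contrib\cmake -G "Visual Studio 14 2015 Win64" -Thost=x64
```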
gharchive/pull-request
2018-05-24T14:19:38
2025-04-01T04:36:02.988839
{ "authors": [ "case540", "gunan", "m3bm3b", "smistad", "tensorflowbutler" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/19531", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
423959608
Removed references to _ref types in documentation. See #27005. I am not a good reviewer for this change. Please redirect. I'm also not a good reviewer for this. Maybe @alextp or @superbobry
gharchive/pull-request
2019-03-21T21:55:59
2025-04-01T04:36:02.990938
{ "authors": [ "av8ramit", "dynamicwebpaige", "gunan" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/27006", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
426415256
Documentation for tf.bitcast

Added some examples to the tf.bitcast function. #27166

somebody please review

Could you please add Python after the backticks that initiate code samples or inline backticks? (e.g., ```python)

tf.bitcast can be used over tf.cast when you have to assign some unsigned data type. For example, I have modified the code here according to TF 2.0.0 and also added Python code here.

@tomhennigan please review this pull request

Please somebody review the changes

Now I have added that one line

Thanks @mihaimaruseac for guiding me; now the whitespace error is gone. I just have to modify some changes according to @tomhennigan, then I am done.

@tomhennigan I have changed some points which you mentioned.

Changes done, @tomhennigan please review

I have done the changes

How do I initialize these other tests, which are still expected?

@rthadur what about these checks? They are still pending.

@tomhennigan what about the checks? The import/copybara test is failing; what to do?

@tomhennigan the import/copybara error has been pending for more than 20 hours now. @tomhennigan, what to do?

You don't have to do anything on copybara, unless mentioned here. Just have to wait for a googler to act on it on the internal page.

ok, thank you @mihaimaruseac
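As a quick illustration of the distinction being documented (output values assume IEEE-754 float32; the exact snippet in the PR may differ):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0], dtype=tf.float32)
print(tf.bitcast(x, tf.int32))  # [1065353216, 1073741824] -- the raw bit patterns
print(tf.cast(x, tf.int32))     # [1, 2] -- a numeric value conversion
```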
gharchive/pull-request
2019-03-28T10:30:18
2025-04-01T04:36:02.996134
{ "authors": [ "hksonngan", "mihaimaruseac", "shashvatshahi1998" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/27239", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
431256115
Fix negative axis issue with ragged tensor and reduce_sum

This fix tries to address the issue raised in #27497, where tf.reduce_sum with multiple negative axes and a ragged tensor does not produce the correct result. The issue is that during the reduce op, a ragged tensor will reduce one axis at a time. However, for negative axes the sort result is reversed, so the order is different. This fix converts the axes to positive values before the sort, to make sure the order is correct.

This fix fixes #27497.

Signed-off-by: Yong Tang yong.tang.github@outlook.com

@yongtang can you please check build failures
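A minimal sketch of the behavior the fix is meant to guarantee: negative axes on a ragged tensor should agree with their positive counterparts (values below are what the corrected code should print):

```python
import tensorflow as tf

rt = tf.ragged.constant([[[1, 2], [3]], [[4]]])
print(tf.reduce_sum(rt, axis=[-2, -1]))  # [6, 4]
print(tf.reduce_sum(rt, axis=[1, 2]))    # [6, 4] -- must match
```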
gharchive/pull-request
2019-04-10T00:52:11
2025-04-01T04:36:02.998165
{ "authors": [ "rthadur", "yongtang" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/27699", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
165365964
Add exclusive kwarg to scan ops

This adds an exclusive kwarg to the cumsum and cumprod ops. Documentation and tests have been extended to cover this kwarg.

Can one of the admins verify this patch?

Thank you for satisfying my inclusive scan pet peeve. :)

Okay, I've updated the PR. I was trying to avoid line breaks by leaving out the kwargs, but I realize that isn't exactly a great solution :)

Okay, I've added a default for the reverse arg as well.

Jenkins, test this please.

I guess switch to keyword args for _compareAll to satisfy http://ci.tensorflow.org/job/tensorflow-pull-requests-sanity/771//console.

Yeah, I've done that, but started to wonder why this didn't throw an error when running the tests. Turns out my `for a, b, c in combinations([True, False], 3)` trick didn't actually do what I wanted and never executed the loop body :( I then noticed a few more problems with my tests after I corrected that. I'll fix those and update the PR.

Jenkins, test this please.

I've removed the 8D tests, as they were taking a very long time to run (past the timeout on my machine).
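For context, here is what the kwargs added by this PR do in practice (a small sketch against the public API):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3, 4])
print(tf.cumsum(x))                                # [1, 3, 6, 10]  inclusive scan
print(tf.cumsum(x, exclusive=True))                # [0, 1, 3, 6]   exclusive scan
print(tf.cumsum(x, exclusive=True, reverse=True))  # [9, 7, 4, 0]   sum of later elements
```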
gharchive/pull-request
2016-07-13T16:35:04
2025-04-01T04:36:03.002413
{ "authors": [ "girving", "ibab", "tensorflow-jenkins" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/3296", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
673815893
[Intel MKL] Supporting native format in Convolution Fwd

This PR adds support for the native (user) format in the convolution forward op. _MklNative* ops will be used in both graph and eager modes.

This PR needs to be merged after the "DNNL 1.5.1 Upgrade" PR (#42073) is merged.

Thank you for your review. I have addressed the comments.
gharchive/pull-request
2020-08-05T20:08:55
2025-04-01T04:36:03.004016
{ "authors": [ "mahmoud-abuzaina" ], "repo": "tensorflow/tensorflow", "url": "https://github.com/tensorflow/tensorflow/pull/42071", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
356020362
Corrected tensorflow "module" operation name In tensorflow the module operation is called FloorMod and not Mod. This error will cause the converter to throw an error if it finds any module operation in the graph. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1121 This change is  I signed it!
gharchive/pull-request
2018-08-31T15:11:52
2025-04-01T04:36:03.023614
{ "authors": [ "ferrarodav" ], "repo": "tensorflow/tfjs-converter", "url": "https://github.com/tensorflow/tfjs-converter/pull/207", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
166292672
Translated the "## The computation graph" section of get_started on master. It is a mix of free and literal translation. Please review.

Okay, thank you. ^^
gharchive/pull-request
2016-07-19T09:58:12
2025-04-01T04:36:03.031394
{ "authors": [ "dollking", "rickiepark" ], "repo": "tensorflowkorea/tensorflow-kr", "url": "https://github.com/tensorflowkorea/tensorflow-kr/pull/32", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2634370208
Stack op failing for test cases with dims -2 and 3 for 4D inputs

Description

Stack op sanity test cases with dim = -2 and dim = 3 are failing with:

```
RuntimeError: TT_ASSERT @ /proj_sw/user_dev/mramanathan/OCT23_forge/tt-forge-fe/forge/csrc/graph_lib/shape.cpp:140: (i >= 0) && (i < (int)dims_.size())
E info:
E Trying to access element outside of dimensions: 4
```

in the optimised graph stage.

```python
"params",
[
    ([(1, 30, 30, 16), (1, 30, 30, 16)], -2),
    ([(1, 30, 30, 16), (1, 30, 30, 16)], 3),
],
```

The above issue arises from erase_inverse_ops.cpp, more specifically from the commute_through_concat function in commute_utils.cpp. In this line, clone_shape = [1, 30, 30, 1, 16] and commute_shape = [1, 30, 30, 16], and matching_in_commute_shape = 3. The concat dim is calculated here as 3 (op->shape().size() = 5 and the current dimension = -2); since both of these are equal, concat_commute_up is set to true here.

new_dim is calculated in can_commute_through_dim, which further calls can_commute_reshape_through_dim, where the input shape is [1, 30, 30, 16] and the output shape is [1, 30, 30, 1, 16]. Since concat_commute_up is true, the input and output shapes get swapped. After swapping, new_dim is set to 4 and can_commute is set to true after the volumes are checked with these conditions.

This condition evaluates to true since can_commute is true and new_dim is not equal to -1, and when `updated_commute_shape[new_dim] = op->shape()[concat_dim];` is accessed inside this if condition, an out-of-bounds error is thrown, as new_dim = 4 and updated_commute_shape = [1, 30, 30, 16].

To skip the out-of-bounds test case, this condition is added in the can_commute_reshape_through_dim function:

```cpp
if ((volume_above(input_shape_vec, i) == volume_above(output_shape_vec, dim)) and
    (volume_below(input_shape_vec, i) == volume_below(output_shape_vec, dim)) and
    (i < output_shape_vec.size()))
```

Reproduce

```sh
git checkout mramanathan/stack_op_sanity
git submodule update --recursive
cmake --build build -- install_ttforge
pytest forge/test/mlir/test_ops.py::test_stack -vss
```

Observed Behaviour

```
RuntimeError: TT_ASSERT @ /proj_sw/user_dev/mramanathan/OCT23_forge/tt-forge-fe/forge/csrc/graph_lib/shape.cpp:140: (i >= 0) && (i < (int)dims_.size())
E info:
E Trying to access element outside of dimensions: 4
```

in the optimised graph.

What happens when you disable the erase inverse ops pass?

By this PR: Fix stack op sanity failure #606

And this issue seems like we have more bugs with the erase inverse ops pass.

> What happens when you just disable the erase inverse ops pass?

I will check this.
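A small sketch of the dim normalization at play (illustrative helper, not from the codebase): for stack, the output has rank = input_rank + 1, so a negative dim normalizes against rank + 1, which is why the two failing parametrizations are the same axis.

```python
def normalize_stack_dim(dim: int, input_rank: int) -> int:
    # stack inserts a new axis, so the output rank is input_rank + 1
    out_rank = input_rank + 1
    return dim if dim >= 0 else dim + out_rank

assert normalize_stack_dim(-2, 4) == 3  # dim=-2 on 4D inputs...
assert normalize_stack_dim(3, 4) == 3   # ...is the same axis as dim=3
```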
gharchive/issue
2024-11-05T04:15:00
2025-04-01T04:36:03.042644
{ "authors": [ "meenakshiramanathan1", "nvukobratTT" ], "repo": "tenstorrent/tt-forge-fe", "url": "https://github.com/tenstorrent/tt-forge-fe/issues/615", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2696919119
Optimize the web demo for yolov4

Problem description

Have a real-time web demo for yolov4.

What's changed

- Enable trace + 2cq
- Optimize the post-processing

Checklist

- [ ] Post commit CI passes
- [ ] Blackhole Post commit (if applicable)
- [ ] Model regression CI testing passes (if applicable)
- [ ] Device performance regression CI testing passes (if applicable)
- [ ] New/Existing tests provide coverage for changes

Please run post commit and nightly fast dispatch. You also seem to be on quite an old base of main.

@tt-rkim I have rebased to main now, and post commit and nightly fast dispatch are currently running:

https://github.com/tenstorrent/tt-metal/actions/runs/12043538909
https://github.com/tenstorrent/tt-metal/actions/runs/12043531296

Nothing failing yet; will update you if anything fails.

@tt-rkim, I have resolved the CI test issues relevant to yolo-v4 and launched the CI tests again:

- nightly fast dispatch tests
- all post commit tests

Let's see what's happening with the re-run of nightly fast dispatch: https://github.com/tenstorrent/tt-metal/actions/runs/12129527247

It's taking a long time for GS ttnn integration tests - very suspect.

As a reminder, WH_ARCH_YAML is a deprecated env flag for Metal. The plan is to remove this flag from the entire codebase once this ticket is resolved: https://github.com/tenstorrent/tt-metal/issues/11059. Currently blocked by the models team removing all use cases of this flag at the Python testing level.

Thanks for the heads-up @aliuTT. What is the alternative way to check if we are on an N150 or N300, and how do we create/open the device in each case?

All post commit CI passes.
gharchive/pull-request
2024-11-27T03:57:26
2025-04-01T04:36:03.049617
{ "authors": [ "aliuTT", "dvartaniansTT", "tt-rkim" ], "repo": "tenstorrent/tt-metal", "url": "https://github.com/tenstorrent/tt-metal/pull/15478", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2496845748
Update grid analysis overrides Need to support overriding all LayoutAttr params instead of only grid. Include maxShardedConfigs as well.
gharchive/issue
2024-08-30T10:28:14
2025-04-01T04:36:03.050761
{ "authors": [ "odjuricicTT" ], "repo": "tenstorrent/tt-mlir", "url": "https://github.com/tenstorrent/tt-mlir/issues/560", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2493377075
Add sharding support in ttnn backend

Related to #450 as this enables multi-core runs if sharding is feasible. Related to #518 as this may rely on the compiler to generate legal memory layouts in the future.

Added sharding support in the ttnn backend; bootstrapped sharding attrs in the compiler. Currently, since the compiler cannot dynamically generate legal memory layouts, the runtime will infer the memory layout for tensors: if a tensor resides in L1 and its shard shape is divisible by the tile shape, then create a sharded memory config, else use interleaved as before. Added a couple of sharding tests.

Thanks for making this change @jnie-TT, right at the moment when we need it! Minor comments left.

We can run multicore in DRAM interleaved mode as well?

That's correct. In fact, we try to run it on the whole compute_with_storage_grid for unary ops.

With the current default block sharding mode, will we perform any tensor deallocation, or will tensors remain until the end of execution? Is there any open issue to provide support to the compiler for tensor alloc/dealloc?

The tensors are by default stored until the end of execution (end of submit, when tensorPool goes out of scope). I agree that we should perform dynamic alloc/dealloc once the tensors are not needed anymore, or else we could run out of L1 space if we have a chain of ops. I don't know if there's an active effort for supporting this ATM.
gharchive/pull-request
2024-08-29T03:34:55
2025-04-01T04:36:03.057192
{ "authors": [ "jnie-TT", "nobradovictt" ], "repo": "tenstorrent/tt-mlir", "url": "https://github.com/tenstorrent/tt-mlir/pull/541", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
264799851
should have breaking sections on changelog and migration section on docs to upgrade to v0.14, v0.11 This is very critical so that it's easier for everyone to upgrade. Later on, for each pull request, we must make sure to update docs immediately before merging the pull requests. done, closed.
gharchive/issue
2017-10-12T03:05:54
2025-04-01T04:36:03.064453
{ "authors": [ "hoatle" ], "repo": "teracyhq/flask-classful", "url": "https://github.com/teracyhq/flask-classful/issues/79", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
60368038
Support cocoapods

Hi, I've created a podspec for you to just push to the cocoapods spec repo. I've also added a missing gitignore file and fixed a warning about an unused variable.

A little help if you never did it before:

1. Check the licence file. As far as I've noticed on cocoacontrols.com, you've tagged it as MIT, so I've attached such a licence file.
2. Check the podspec file. Validate your name (add email if you want), update the link to your repo (!!! currently set to our fork !!!) and add anything you'd like to change.
3. Update the tag in the podspec, then commit all changes and tag it with the same version as in the podspec.
4. Push everything to your origin.
5. Type `pod spec lint` to make sure everything works.
6. Type `pod trunk push RGCardViewLayout.podspec` to push it to the cocoapods spec repo.

You can search for more info here: http://guides.cocoapods.org/making/specs-and-specs-repo.html

Strict merge is invalid; did you read my comments? You need to at least update the link in the podspec to your repo instead of ours. Also you need to push it to cocoapods later (points 5 & 6).

it's now on cocoapods, just need to add it to your Podfile via `pod 'RGCardViewLayout','1.0'` and import it via `#import <RGCardViewLayout.h>`. This will give you access to the RGCardViewLayout class.

I was procrastinating on doing this, and I appreciate your push and help.
gharchive/pull-request
2015-03-09T16:14:23
2025-04-01T04:36:03.085966
{ "authors": [ "natalia-osa", "terminatorover" ], "repo": "terminatorover/RGCardViewLayout", "url": "https://github.com/terminatorover/RGCardViewLayout/pull/7", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
386738478
Unable to run from command line

Hi, I cannot run the command line as suggested in http://termsuite.github.io/getting-started/#run-terminology-extraction on Ubuntu 18.04.1 LTS.

```
$ java --version
openjdk 10.0.2 2018-07-17
OpenJDK Runtime Environment (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4)
OpenJDK 64-Bit Server VM (build 10.0.2+13-Ubuntu-1ubuntu0.18.04.4, mixed mode)
```

This is what I got:

```
$ java -cp termsuite-core-3.0.10.jar fr.univnantes.termsuite.tools.TerminologyExtractorCLI -t ./treetagger/ -c ./wind-energy/English/txt/ -l en --tsv ./wind-energy-en.tsv
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.google.inject.internal.cglib.core.$ReflectUtils$1 (file:/home/mvillega/Desktop/TERMSUITE_WORKSPACE/termsuite-core-3.0.10.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain)
WARNING: Please consider reporting this to the maintainers of com.google.inject.internal.cglib.core.$ReflectUtils$1
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException
        at java.base/java.lang.Class.getDeclaredFields0(Native Method)
        at java.base/java.lang.Class.privateGetDeclaredFields(Class.java:3014)
        at java.base/java.lang.Class.getDeclaredFields(Class.java:2207)
        at org.apache.uima.fit.internal.ReflectionUtil.getFields(ReflectionUtil.java:60)
        at org.apache.uima.fit.factory.ConfigurationParameterFactory.createConfigurationData(ConfigurationParameterFactory.java:411)
        at org.apache.uima.fit.factory.ExternalResourceFactory.createExternalResourceDescription(ExternalResourceFactory.java:289)
        at org.apache.uima.fit.factory.ExternalResourceFactory.createExternalResourceDescription(ExternalResourceFactory.java:232)
        at fr.univnantes.termsuite.uima.CustomResourceTermSuiteAEFactory.createWordTokenizerAEDesc(CustomResourceTermSuiteAEFactory.java:102)
        at fr.univnantes.termsuite.framework.PreprocessingPipelineBuilder.create(PreprocessingPipelineBuilder.java:128)
        at fr.univnantes.termsuite.api.Preprocessor.asService(Preprocessor.java:330)
        at fr.univnantes.termsuite.api.Preprocessor.asService(Preprocessor.java:284)
        at fr.univnantes.termsuite.api.Preprocessor.toIndexedCorpus(Preprocessor.java:124)
        at fr.univnantes.termsuite.tools.TerminologyExtractorCLI.getIndexedCorpus(TerminologyExtractorCLI.java:192)
        at fr.univnantes.termsuite.tools.TerminologyExtractorCLI.run(TerminologyExtractorCLI.java:136)
        at fr.univnantes.termsuite.tools.CommandLineClient.launch(CommandLineClient.java:287)
        at fr.univnantes.termsuite.tools.TerminologyExtractorCLI.main(TerminologyExtractorCLI.java:203)
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:190)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
```

Thanks!

Hi, try with the following java option: `--add-modules java.xml.bind`. The error you get, java.lang.NoClassDefFoundError: javax/xml/bind/JAXBException, is related to the absence of the JAXB module, removed by Oracle from the JDK starting at version 9...

Cheers.
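Spelling the suggestion out against the original command (this works on JDK 9/10 only; on JDK 11+ the javax.xml.bind classes were removed entirely and must be supplied as a separate dependency on the classpath):

```sh
java --add-modules java.xml.bind \
     -cp termsuite-core-3.0.10.jar \
     fr.univnantes.termsuite.tools.TerminologyExtractorCLI \
     -t ./treetagger/ -c ./wind-energy/English/txt/ -l en \
     --tsv ./wind-energy-en.tsv
```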
gharchive/issue
2018-12-03T09:57:03
2025-04-01T04:36:03.093764
{ "authors": [ "martavillegas", "mhabsaoui" ], "repo": "termsuite/termsuite-core", "url": "https://github.com/termsuite/termsuite-core/issues/108", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
608817905
Is there any way to see the running process?

tern scanned the file for nearly 10 minutes, and it didn't end. It seemed to hang. Is there any way to see the running process?

```
(ternenv) [root@localhost hf]# more tern.log
2020-04-29 14:16:08,212 - DEBUG - main - Starting...
2020-04-29 14:16:08,216 - DEBUG - run - Setting up...
2020-04-29 14:16:08,219 - DEBUG - rootfs - Running command: tar -tf base-latest-20191210.tar
2020-04-29 14:16:08,225 - DEBUG - rootfs - Running command: tar -xf base-latest-20191210.tar -C /root/.tern/temp
2020-04-29 14:16:09,926 - DEBUG - rootfs - Running command: tar -tf /root/.tern/temp/558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f/layer.tar
2020-04-29 14:16:10,494 - DEBUG - rootfs - Running command: tar -xf /root/.tern/temp/558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f/layer.tar -C /root/.tern/temp/558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f/contents
2020-04-29 14:16:29,299 - DEBUG - rootfs - Running command: /run/media/root/97c30c4f-dfbf-41ab-8f27-5230c8fed985/hf/tern-master/ternenv/lib64/python3.6/site-packages/tern/tools/fs_hash.sh /root/.tern/temp/558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f/contents
2020-04-29 14:19:38,917 - DEBUG - common - Reading files in filesystem...

(ternenv) [root@localhost hf]# uname -a
Linux localhost.localdomain 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

(ternenv) [root@localhost temp]# ls
48e3ab07239e906d00f736779268d8832d696d5009f2b1015949fd5dca0ff23d.json  manifest.json  repositories
558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f  mergedir  workdir
```

Can you paste here the results of ls -al in the temp directory?

```
(ternenv) [root@localhost temp]# ls -al
total 12
drwxr-xr-x. 5 root root 227 Apr 30 08:22 .
drwxr-xr-x. 3 root root  35 Apr 30 08:19 ..
-rw-r--r--. 1 root root 891 Dec 10 15:36 48e3ab07239e906d00f736779268d8832d696d5009f2b1015949fd5dca0ff23d.json
drwxr-xr-x. 3 root root 142 Apr 30 08:22 558c11b2b15adc493f1dc7d911f5eec1bf4bc9e2fa4119dc4267b8579763450f
-rw-r--r--. 1 root root 212 Jan  1  1970 manifest.json
drwxr-xr-x. 2 root root   6 Apr 30 08:22 mergedir
-rw-r--r--. 1 root root  99 Jan  1  1970 repositories
drwxr-xr-x. 2 root root   6 Apr 30 08:22 workdir
```

Hello, it took about 27 minutes to run, and the analysis result came out. Is this execution time normal? Are there any command parameters or methods that can help me see a more detailed process during execution?

If you're running tern for the first time on a large image, yes, it can take that long. However, I am not really sure why it is taking 10 minutes at line:

```
2020-04-29 14:19:38,917 - DEBUG - common - Reading files in filesystem...
```

Can you attach tern.log here?

How can I see more logs of the tern scanning process? Are there any parameters or methods to learn more?

What you're seeing is the most amount of logging we have. Anything specific you are looking for?
gharchive/issue
2020-04-29T06:45:17
2025-04-01T04:36:03.173821
{ "authors": [ "00032118", "nishakm" ], "repo": "tern-tools/tern", "url": "https://github.com/tern-tools/tern/issues/673", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }