Dataset columns:
id: string, length 4 to 10
text: string, length 4 to 2.14M
source: string, 2 classes
created: timestamp[s], ranging from 2001-05-16 21:05:09 to 2025-01-01 03:38:30
added: string date, ranging from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
metadata: dict
228829490
network: throw EnvoyException on address parse error. Previously, when an IP address could not be parsed, a nullptr was returned; this caused a segfault for config files with errors in the address fields. This PR changes the address-parsing logic to throw an exception instead. It also moves the IP address parsing functions from Network::Address to Network::Utility. Fixes #952. @ccaraman for review.
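The change is a fail-fast pattern: parsing raises immediately instead of returning a null value that a later dereference crashes on. A minimal sketch of the same pattern in Python (names are illustrative, not Envoy's actual C++ API):

```python
import ipaddress


class AddressParseError(Exception):
    """Raised when a configured address cannot be parsed."""


def parse_ip_address(text):
    # Fail fast: raise a descriptive exception instead of returning None,
    # so a bad config is rejected at load time instead of crashing later.
    try:
        return ipaddress.ip_address(text)
    except ValueError as exc:
        raise AddressParseError(f"malformed IP address: {text!r}") from exc
```

Callers no longer need to null-check every result; they either get a valid address or let the configuration loader surface the error.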
gharchive/pull-request
2017-05-15T20:08:06
2025-04-01T04:34:56.315494
{ "authors": [ "hennna" ], "repo": "lyft/envoy", "url": "https://github.com/lyft/envoy/pull/968", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
453341137
change fileloc logic

Make sure you have checked all steps below.

Jira
- [ ] My PR addresses the following Airflow Jira issues and references them in the PR title. For example, "[AIRFLOW-XXX] My Airflow PR": https://issues.apache.org/jira/browse/AIRFLOW-XXX
  - In case you are fixing a typo in the documentation, you can prepend your commit with [AIRFLOW-XXX]; code changes always need a Jira issue.
  - In case you are proposing a fundamental code change, you need to create an Airflow Improvement Proposal (AIP).
  - In case you are adding a dependency, check if the license complies with the ASF 3rd Party License Policy.

Description
- [ ] Here are some details about my PR, including screenshots of any UI changes:

Tests
- [ ] My PR adds the following unit tests OR does not need testing for this extremely good reason:

Commits
- [ ] My commits all reference Jira issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
  - Subject is separated from body by a blank line
  - Subject is limited to 50 characters (not including the Jira issue reference)
  - Subject does not end with a period
  - Subject uses the imperative mood ("add", not "adding")
  - Body wraps at 72 characters
  - Body explains "what" and "why", not "how"

Documentation
- [ ] In case of new functionality, my PR adds documentation that describes how to use it.
  - All the public functions and classes in the PR contain docstrings that explain what they do.
  - If you implement backwards-incompatible changes, please leave a note in Updating.md so we can assign it to an appropriate release.

Code Quality
- [ ] Passes flake8

Codecov Report: Merging #133 into data-platform-upgrade will decrease coverage by 0.23%. The diff coverage is 100%.

```diff
@@             Coverage Diff              @@
##    data-platform-upgrade   #133  +/-  ##
============================================
- Coverage          69.19%  68.96%   -0.24%
============================================
  Files                142     142
  Lines              11296   11296
============================================
- Hits                7816    7790      -26
- Misses              3480    3506      +26
```

| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| airflow/models.py | 86.11% <100%> (-0.1%) | :arrow_down: |
| airflow/utils/state.py | 86.66% <0%> (-13.34%) | :arrow_down: |
| airflow/executors/__init__.py | 64.51% <0%> (-3.23%) | :arrow_down: |
| airflow/settings.py | 91.13% <0%> (-2.54%) | :arrow_down: |
| airflow/www/app.py | 94.11% <0%> (-2.36%) | :arrow_down: |
| airflow/utils/dag_processing.py | 85.77% <0%> (-0.82%) | :arrow_down: |
| airflow/jobs.py | 73.8% <0%> (-0.58%) | :arrow_down: |
| airflow/configuration.py | 83.91% <0%> (-0.51%) | :arrow_down: |
| airflow/utils/helpers.py | 43.06% <0%> (-0.5%) | :arrow_down: |
| airflow/www/views.py | 68.11% <0%> (-0.35%) | :arrow_down: |
| ... and 1 more | | |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 6e789c1...64ea917.
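The commit-message rules in the checklist are easiest to see with a concrete example; the Jira key and subject matter below are made up for illustration:

```text
[AIRFLOW-1234] Add fileloc fallback for zipped DAGs

Store the absolute path of the DAG file in fileloc so the webserver
can still locate source code for DAGs loaded from zip archives. The
previous relative path broke the code view for any zipped DAG.
```

The subject stays under 50 characters (excluding the Jira key), uses the imperative mood, and has no trailing period; the body wraps at 72 characters and explains what and why.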
gharchive/pull-request
2019-06-07T05:13:13
2025-04-01T04:34:56.336333
{ "authors": [ "codecov-io", "youngyjd" ], "repo": "lyft/incubator-airflow", "url": "https://github.com/lyft/incubator-airflow/pull/133", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
499253884
Relax tqdm dependency. Tried to update some deps in etl and found this dependency is pinned, which caused a failure. I don't think we need to pin this dep; it is only used for the backfill progress bar. 👀 @jinhyukchang @astahlman @ArgentFalcon cc @lyft/dp-tools-viz 👍 👍
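For context, the change amounts to swapping an exact pin for a compatible range in the package requirements; the version numbers below are hypothetical:

```text
# Before: exact pin; conflicts with any sibling package that needs a newer tqdm
tqdm==4.19.1

# After: any reasonably recent release works for a progress bar
tqdm>=4.14
```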
gharchive/pull-request
2019-09-27T06:10:22
2025-04-01T04:34:56.338329
{ "authors": [ "astahlman", "feng-tao", "jinhyukchang" ], "repo": "lyft/incubator-airflow", "url": "https://github.com/lyft/incubator-airflow/pull/154", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
342446383
fix bug in logging username. This PR picks up the upstream changes in https://github.com/apache/incubator-airflow/pull/3438 and https://github.com/apache/incubator-airflow/pull/3508, addressing the bug in logging the username.

Codecov Report: Merging #92 into data-platform-upgrade will increase coverage by 0.01%. The diff coverage is 100%.

```diff
@@             Coverage Diff             @@
##    data-platform-upgrade    #92  +/-  ##
===========================================
+ Coverage          68.88%   68.9%  +0.01%
===========================================
  Files                142     142
  Lines              11200   11200
===========================================
+ Hits                7715    7717      +2
+ Misses              3485    3483      -2
```

| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| airflow/www/utils.py | 80.62% <100%> (+0.77%) | :arrow_up: |
| airflow/www/views.py | 69.25% <100%> (ø) | :arrow_up: |
| airflow/models.py | 86.39% <0%> (+0.04%) | :arrow_up: |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update ffa3e07...655e3f5.

@lyft/dp-tools-viz 👍
gharchive/pull-request
2018-07-18T18:43:01
2025-04-01T04:34:56.347202
{ "authors": [ "ArgentFalcon", "codecov-io", "youngyjd" ], "repo": "lyft/incubator-airflow", "url": "https://github.com/lyft/incubator-airflow/pull/92", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
438963432
update package-lock.json, since our deps may be different. We have a lot of cherry-picks in our history, and they may cause us to have deps that won't be in the upstream-provided package-lock.json. I'm not completely sure whether this is the source of the issue we have here, but running npm i on this branch produced the diff that I am submitting now. Generally speaking, we probably need to regenerate the package-lock.json for every merge into lyft-master just to be safe. (Assuming this ends up being the lone cause of the test failures we were seeing...) Since CI installs the deps from the lock file, it needs to be pregenerated for CI to pass. @betodealmeida @xtinec @khtruong

Codecov Report: Merging #23 into lyft-master will decrease coverage by 8.37%. The diff coverage is n/a.

```diff
@@          Coverage Diff           @@
##    lyft-master     #23    +/-   ##
======================================
- Coverage   72.39%  64.02%   -8.38%
======================================
  Files          81     438     +357
  Lines       11097   21588   +10491
  Branches        0    2394    +2394
======================================
+ Hits         8034   13822    +5788
- Misses       3063    7628    +4565
- Partials        0     138     +138
```

| Impacted Files | Coverage Δ |
| --- | --- |
| superset/assets/src/components/Checkbox.jsx | 100% <0%> (ø) |
| ...ations/deckgl/layers/Polygon/PolygonChartPlugin.js | 0% <0%> (ø) |
| ...ets/src/dashboard/components/dnd/DragDroppable.jsx | 94.59% <0%> (ø) |
| ...c/visualizations/deckgl/layers/Polygon/Polygon.jsx | 0% <0%> (ø) |
| ...ssets/src/visualizations/presets/MapChartPreset.js | 0% <0%> (ø) |
| superset/assets/src/components/EditableTitle.jsx | 81.53% <0%> (ø) |
| superset/assets/src/welcome/TagsTable.jsx | 15.38% <0%> (ø) |
| superset/assets/src/setup/setupPlugins.js | 0% <0%> (ø) |
| ...t/assets/src/components/InfoTooltipWithTrigger.jsx | 41.66% <0%> (ø) |
| .../src/dashboard/components/UndoRedoKeylisteners.jsx | 9.52% <0%> (ø) |
| ... and 347 more | |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update 73f54a5...ff2bb1c.
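A minimal sketch of the regeneration step described above; these are standard git/npm commands, with the branch name taken from the PR:

```sh
# On the fork's main branch, rebuild the lock file so it matches the
# fork's actual dependency tree, then commit it so CI installs the same deps.
git checkout lyft-master
npm install          # rewrites package-lock.json from package.json
git add package-lock.json
git commit -m "Regenerate package-lock.json after cherry-picks"
```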
gharchive/pull-request
2019-04-30T20:29:43
2025-04-01T04:34:56.359695
{ "authors": [ "DiggidyDave", "codecov-io" ], "repo": "lyft/incubator-superset", "url": "https://github.com/lyft/incubator-superset/pull/23", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1931790095
StableDiffusionInpaintPipeline. Great idea!!! I wonder whether the inpainting pipeline can also use the FreeU idea? FreeU does not touch the UNet's encoder input, so in theory this is fine.
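For reference, recent versions of diffusers expose FreeU directly on pipelines, including the inpainting one. A sketch, assuming diffusers >= 0.21; the scale values are the commonly suggested SD 1.x defaults, not values tuned for inpainting, and init_image/mask_image are assumed to be PIL images loaded elsewhere:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# FreeU re-weights backbone/skip features in the UNet decoder only, so the
# encoder path (and with it the masked-image conditioning) stays untouched.
pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4)

result = pipe(prompt="a red sofa", image=init_image, mask_image=mask_image).images[0]
```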
gharchive/issue
2023-10-08T12:37:15
2025-04-01T04:34:56.363196
{ "authors": [ "ZhouXiner", "lyn-rgb" ], "repo": "lyn-rgb/FreeU_Diffusers", "url": "https://github.com/lyn-rgb/FreeU_Diffusers/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
234419153
Unable to run failed tests

```
[09:39:07]: Exit status: 65
+--------------------+---+
|    Test Results        |
+--------------------+---+
| Number of tests    | 7 |
| Number of failures | 1 |
+--------------------+---+
[09:39:08]: failure noted: Tests have failed
[09:39:08]: --------------------------------------------
[09:39:08]: --- Step: setup_fragile_tests_for_rescan ---
[09:39:08]: --------------------------------------------
+----------------------------+------------------------------------------------------------+
|                                Lane Context                                             |
+----------------------------+------------------------------------------------------------+
| PLATFORM_NAME              |                                                            |
| LANE_NAME                  | featuretest                                                |
| SCAN_DERIVED_DATA_PATH     | /Users/myusername/Library/Developer/Xcode/DerivedData/     |
|                            | myAppHorizontal-feecdbysbgyjfzdeythwumcpsamz               |
| SCAN_GENERATED_PLIST_FILES | ["/Users/myusername/Library/Developer/Xcode/DerivedData/   |
|                            | myAppHorizontal-feecdbysbgyjfzdeythwumcpsamz/Logs/Test/    |
|                            | A2F4CF43-D08E-4BBF-A3D3-8B4A49062FEA_TestSummaries.plist"] |
| SCAN_GENERATED_PLIST_FILE  | /Users/myusername/Library/Developer/Xcode/DerivedData/     |
|                            | myAppHorizontal-feecdbysbgyjfzdeythwumcpsamz/Logs/Test/    |
|                            | A2F4CF43-D08E-4BBF-A3D3-8B4A49062FEA_TestSummaries.plist   |
+----------------------------+------------------------------------------------------------+
[09:39:10]: Missing end tag for 'unknown' (got "section")
Line: -1
Position: -1
Last 80 unconsumed characters:
+------+--------------------------------+-------------+
|                 fastlane summary                     |
+------+--------------------------------+-------------+
| Step | Action                         | Time (in s) |
+------+--------------------------------+-------------+
| 💥   | scan                           | 360         |
| 💥   | setup_fragile_tests_for_rescan | 0           |
+------+--------------------------------+-------------+
+------------+--------------+----------------+
|          Plugin updates available          |
+------------+--------------+----------------+
| Plugin     | Your Version | Latest Version |
+------------+--------------+----------------+
| bluepillar | 0.2.0        | 0.3.0          |
+------------+--------------+----------------+
[09:39:10]: To update all plugins, just run
[09:39:10]: $ fastlane update_plugins
[09:39:10]: fastlane finished with errors

Looking for related GitHub issues on fastlane/fastlane...
➡️ Fix for _options
   https://github.com/fastlane/fastlane/pull/80 [closed] 0 💬 04 Feb 2017
🔗 You can ⌘ + double-click on links to open them directly in your browser.
```

```
/Users/myusername/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/rexml/parsers/baseparser.rb:341:in `pull_event': [!] Missing end tag for 'unknown' (got "section") (REXML::ParseException)
Line: -1
Position: -1
Last 80 unconsumed characters:
Line: -1
Position: -1
Last 80 unconsumed characters:
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/rexml/parsers/baseparser.rb:185:in `pull'
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/rexml/parsers/treeparser.rb:23:in `parse'
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/rexml/document.rb:288:in `build'
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/lib/ruby/2.4.0/rexml/document.rb:45:in `initialize'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/gems/fastlane-plugin-setup_fragile_tests_for_rescan-1.1.0/lib/fastlane/plugin/setup_fragile_tests_for_rescan/actions/setup_fragile_tests_for_rescan_action.rb:10:in `new'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/gems/fastlane-plugin-setup_fragile_tests_for_rescan-1.1.0/lib/fastlane/plugin/setup_fragile_tests_for_rescan/actions/setup_fragile_tests_for_rescan_action.rb:10:in `block in run'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/gems/fastlane-plugin-setup_fragile_tests_for_rescan-1.1.0/lib/fastlane/plugin/setup_fragile_tests_for_rescan/actions/setup_fragile_tests_for_rescan_action.rb:10:in `open'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/gems/fastlane-plugin-setup_fragile_tests_for_rescan-1.1.0/lib/fastlane/plugin/setup_fragile_tests_for_rescan/actions/setup_fragile_tests_for_rescan_action.rb:10:in `run'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:252:in `block (2 levels) in execute_action'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/actions/actions_helper.rb:50:in `execute_action'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:230:in `block in execute_action'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:226:in `chdir'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:226:in `execute_action'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:148:in `trigger_action_by_name'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/fast_file.rb:146:in `method_missing'
	from ../fastlane/Fastfile:20:in `rescue in block in parsing_binding'
	from ../fastlane/Fastfile:10:in `block in parsing_binding'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/lane.rb:33:in `call'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:49:in `block in execute'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:45:in `chdir'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/runner.rb:45:in `execute'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/lane_manager.rb:52:in `cruise_lane'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/command_line_handler.rb:30:in `handle'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/commands_generator.rb:104:in `block (2 levels) in run'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/commander-fastlane-4.4.4/lib/commander/command.rb:178:in `call'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/commander-fastlane-4.4.4/lib/commander/command.rb:153:in `run'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/commander-fastlane-4.4.4/lib/commander/runner.rb:476:in `run_active_command'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane_core/lib/fastlane_core/ui/fastlane_runner.rb:39:in `run!'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/commander-fastlane-4.4.4/lib/commander/delegates.rb:15:in `run!'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/commands_generator.rb:303:in `run'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/commands_generator.rb:42:in `start'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/fastlane/lib/fastlane/cli_tools_distributor.rb:66:in `take_off'
	from /Users/myusername/.rvm/gems/ruby-2.4.0@global/gems/fastlane-2.37.0/bin/fastlane:20:in `<top (required)>'
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/bin/fastlane:22:in `load'
	from /Users/myusername/.rvm/rubies/ruby-2.4.0/bin/fastlane:22:in `<main>'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/bin/ruby_executable_hooks:15:in `eval'
	from /Users/myusername/.rvm/gems/ruby-2.4.0/bin/ruby_executable_hooks:15:in `'
```

Looks like it had a problem parsing the XML report file. Can you post it so I can look into the problem?

@mishrav is your problem resolved? Let me know.

Closing the issue after no update.
gharchive/issue
2017-06-08T05:56:39
2025-04-01T04:34:56.382711
{ "authors": [ "lyndsey-ferguson", "mishrav" ], "repo": "lyndsey-ferguson/setup_fragile_tests_for_rescan", "url": "https://github.com/lyndsey-ferguson/setup_fragile_tests_for_rescan/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1947020641
[Bug]: Tray icon not displayed under openSUSE KDE

Solution checklist:
- [X] I have read the FAQ (https://lyswhut.github.io/lx-music-doc/desktop/faq) and did not find a solution
- [X] I have searched the issue list (https://github.com/lyswhut/lx-music-desktop/issues?utf8=✓&q=) and did not find a similar issue

Expected behavior: the tray icon is displayed
Actual behavior: the tray icon is not displayed
LX Music version: 2.5.0
Last working version: 2.5.0
Operating system: Linux (openSUSE)
Additional information: openSUSE KDE
gharchive/issue
2023-10-17T09:53:37
2025-04-01T04:34:56.392592
{ "authors": [ "lianchengwu" ], "repo": "lyswhut/lx-music-desktop", "url": "https://github.com/lyswhut/lx-music-desktop/issues/1611", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2316479744
🛑 TMS Get Tenants is down In 1f1b7c5, TMS Get Tenants (https://tms.dev.cmxapp.cloud/tenants/$AMS_TENANT_ID) was down: HTTP code: 401 Response time: 339 ms Resolved: TMS Get Tenants is back up in b1f3f2e after 1 hour, 40 minutes.
gharchive/issue
2024-05-25T00:35:45
2025-04-01T04:34:56.395442
{ "authors": [ "lyuboslav2406" ], "repo": "lyuboslav2406/status-xrm", "url": "https://github.com/lyuboslav2406/status-xrm/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2570941316
🛑 XRM Homepage is down In 00fa113, XRM Homepage (https://web.dev.cmxapp.cloud/) was down: HTTP code: 403 Response time: 214 ms Resolved: XRM Homepage is back up in 2a95ba5 after 31 minutes.
gharchive/issue
2024-10-07T16:39:26
2025-04-01T04:34:56.398098
{ "authors": [ "lyuboslav2406" ], "repo": "lyuboslav2406/status-xrm", "url": "https://github.com/lyuboslav2406/status-xrm/issues/603", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1897596167
Will RT-DETR support mobile deployment via Paddle Lite? When converting the RT-DETR model to an .nb file with Paddle Lite, it reports that all operators are supported, but the conversion fails with an "unknown type 0" error. Is there any other way to deploy it on mobile? You might want to file an issue in the paddlelite repo for this.
gharchive/issue
2023-09-15T02:29:01
2025-04-01T04:34:56.398926
{ "authors": [ "SongYii", "lyuwenyu" ], "repo": "lyuwenyu/RT-DETR", "url": "https://github.com/lyuwenyu/RT-DETR/issues/64", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2104727965
Read: books read this year. 《育儿手册》 (Parenting Handbook). 《育儿手册》 (Parenting Handbook), on raising a newborn. 《徒步中国》 (Trekking in China). 《test》
gharchive/issue
2024-01-29T06:12:26
2025-04-01T04:34:56.427748
{ "authors": [ "lzkzs" ], "repo": "lzkzs/2024", "url": "https://github.com/lzkzs/2024/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2566280263
Improve Handling of Development Dependencies. Add development dependencies as a dev extra in pyproject.toml, so that they are not installed in a standard installation (pip install gcnn) but only when installed as pip install gcnn[dev]. This avoids installing unnecessary dependencies for users and most use cases. :warning: Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov. Codecov Report: All modified and coverable lines are covered by tests :white_check_mark: :exclamation: Your organization needs to install the Codecov GitHub app to enable full functionality.
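A minimal pyproject.toml sketch of the pattern; the package name comes from the PR, while both dependency lists are purely illustrative:

```toml
[project]
name = "gcnn"
# Runtime dependencies: installed by a plain `pip install gcnn`
dependencies = [
    "numpy",
]

[project.optional-dependencies]
# Development tools: installed only with `pip install gcnn[dev]`
dev = [
    "pytest",
    "flake8",
]
```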
gharchive/pull-request
2024-10-04T12:38:38
2025-04-01T04:34:56.448038
{ "authors": [ "codecov-commenter", "m-kurz" ], "repo": "m-kurz/gcnn", "url": "https://github.com/m-kurz/gcnn/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1848044564
Api abstraction refactoring

- [x] refactor data providers
- [x] refactor & simplify abstract database provider abstractions
- [ ] complete and fix supabase data provider

Although the Supabase DB provider has not yet been completed, the refactoring of the data providers is finished 👉 merging
gharchive/pull-request
2023-08-12T14:30:33
2025-04-01T04:34:56.465574
{ "authors": [ "m0rphed" ], "repo": "m0rphed/stonks-bot", "url": "https://github.com/m0rphed/stonks-bot/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2136828849
Fix "Module not found: Error: Default condition should be last one" Had trouble importing this through Node/Webpack, due to this: https://stackoverflow.com/a/76127619 Thank you for this fix!
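The underlying rule: conditions in a package.json "exports" map are matched in order, and "default" matches everything, so it must come last or bundlers like webpack reject the map. A hypothetical layout after the fix (file paths are made up for illustration):

```json
{
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "default": "./dist/index.cjs"
    }
  }
}
```

Here "types" and "import" are checked first, and "default" acts as the final fallback.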
gharchive/pull-request
2024-02-15T15:30:37
2025-04-01T04:34:56.473037
{ "authors": [ "m31coding", "sippeangelo" ], "repo": "m31coding/fuzzy-search", "url": "https://github.com/m31coding/fuzzy-search/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2479825738
🛑 DNS is down In db9d4cf, DNS (dns.m4d3bug.com) was down: HTTP code: 0 Response time: 0 ms Resolved: DNS is back up in 7475e3c after 7 minutes.
gharchive/issue
2024-08-22T04:47:46
2025-04-01T04:34:56.477047
{ "authors": [ "m4d3bug" ], "repo": "m4d3bug/status", "url": "https://github.com/m4d3bug/status/issues/104", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
335156050
App crashing: unrecognized selector sent to instance

iOS versions: 11.3 and 11.4
react-native-sensitive-info versions: 5.1.0

My code:

```js
const keyNameSpace = {
  sharedPreferencesName: 'xyz',
  keychainService: 'xyz',
};
const migrationDone = await SInfo.getItem('migrationDone', keyNameSpace);
```

Exception logs:

```
RCTFatalException: Exception '-[__NSSingleObjectArrayI localizedDescription]: unrecognized selector sent to instance 0x1c02081a0' was thrown while invoking getItem on target RNSensitiveInfo with params ( migrationDone, { keychainServic: Exception '-[__NSSingleObjectArrayI localizedDescription]: unrecognized sel...
```

@shashikiran797 - can you paste the code of how you do setItem? I am assuming you provide a different configuration when calling it, so there is a misalignment.

After investigating this issue, I can reproduce the error by forcing the app to reach this line: https://github.com/mCodex/react-native-sensitive-info/blob/master/ios/RNSensitiveInfo/RNSensitiveInfo.m#L100. The question would be which error is making the messageFromError method fall back on the default case:

```
default: return error.localizedDescription;
```

After taking a closer look, it seems that in case any of the errors defined in the messageForError method are encountered, the app crashes with RCTFatalException: Exception '-[__NSSingleObjectArrayI localizedDescription]:; instead, the promise should simply be rejected.

@shashikiran797 - this is fixed in version 5.2.0. Before, the module was intentionally crashing the app on iOS for any encountered error with RCTMakeAndLogError.

@denissb So when will it be released to npm? Version is still at 5.1.0. The pre-release is dated back to October '17 and there is a newer version (5.2.1), so the package should be updated on npm, right? @mCodex

@felixus95, it is released, just in beta (npm shows the last release). I tried it and it seems to work quite well.
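The maintainer's question about setItem matters because getItem and setItem must be called with the same options object, otherwise the value is looked up in a different store. A sketch using the library's documented API; the key, value, and namespace strings are illustrative:

```js
import SInfo from 'react-native-sensitive-info';

const keyNameSpace = {
  sharedPreferencesName: 'xyz', // Android shared-preferences file
  keychainService: 'xyz',       // iOS keychain service
};

// Write and read with identical options so both hit the same store.
await SInfo.setItem('migrationDone', 'true', keyNameSpace);
const migrationDone = await SInfo.getItem('migrationDone', keyNameSpace);
```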
gharchive/issue
2018-06-24T06:48:11
2025-04-01T04:34:56.486267
{ "authors": [ "denissb", "fdnhkj", "felixus95", "shashikiran797" ], "repo": "mCodex/react-native-sensitive-info", "url": "https://github.com/mCodex/react-native-sensitive-info/issues/85", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
962211838
Fixes metadata fallback strategy. Fixes #92.

Changes Introduced: I suggest an implementation as described in the issue mentioned above. We use a temporary struct to parse the YAML and then check its fields: if they were not set, we use a default value; otherwise, we use the parsed value. The downside is that for every field in Meta we need a block such as:

```go
if tmp.NameOfTheField != nil {
	m.NameOfTheField = *tmp.NameOfTheField
} else {
	m.NameOfTheField = fallback.NameOfTheField
}
```

The good part is that we can decide the strategy and defaults for each field (for example, using the OS's user name as a default for author, as suggested in #87).

This looks perfect! Thank you so so much! Just testing it out and will merge soon.

> The downside is that for every field in Meta we need a block such as:

I think this is okay, since we need to decide on good defaults for each metadata value anyway. And, we shouldn't have way too many at the moment anyway.

Everything works as expected (just went through most of the slides in examples/*) and did `echo "# Hello" | go run main.go` on your branch and it looks great! Thanks so much, this is awesome!
gharchive/pull-request
2021-08-05T21:30:42
2025-04-01T04:34:56.498955
{ "authors": [ "cuducos", "maaslalani" ], "repo": "maaslalani/slides", "url": "https://github.com/maaslalani/slides/pull/93", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
596861658
Change type for C strings to make usage of format strings containing c style placeholders easier. Fix for issue #622. Thanks for this fix!
gharchive/pull-request
2020-04-08T21:03:37
2025-04-01T04:34:56.504257
{ "authors": [ "schroepf", "tomlokhorst" ], "repo": "mac-cain13/R.swift", "url": "https://github.com/mac-cain13/R.swift/pull/623", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
852642876
Remove CardTransformerProvider class

- Remove CardTransformerProvider class
- Rename (internal) SwipeCardDelegate methods to be more consistent with UIScrollViewDelegate naming conventions
- Add cardDidFinishSwipeAnimation delegate method in favor of using the Notification Center
- Remove CardStackAnimatableOptions and CardAnimatableOptions protocols as they served no purpose

Codecov Report: Merging #139 (48bc7dc) into master (48b8f4a) will decrease coverage by 0.1%. The diff coverage is 85.4%.

```diff
@@     Coverage Diff       @@
##    master   #139  +/-  ##
============================
- Coverage  84.1%  83.9%  -0.2%
============================
  Files        11     10     -1
  Lines       724    715     -9
============================
- Hits        609    600     -9
  Misses      115    115
```

| Impacted Files | Coverage Δ |
| --- | --- |
| ...uffle/Classes/SwipeCard/CardAnimationOptions.swift | 100.0% <ø> (ø) |
| ...ses/SwipeCardStack/CardStackAnimationOptions.swift | 100.0% <ø> (ø) |
| ...fle/Classes/SwipeCardStack/CardStackAnimator.swift | 0.0% <0.0%> (ø) |
| ...Classes/SwipeCardStack/CardStackStateManager.swift | 100.0% <ø> (ø) |
| Sources/Shuffle/Classes/SwipeCard/SwipeCard.swift | 100.0% <100.0%> (ø) |
| ...huffle/Classes/SwipeCardStack/SwipeCardStack.swift | 100.0% <100.0%> (ø) |
gharchive/pull-request
2021-04-07T17:41:10
2025-04-01T04:34:56.513215
{ "authors": [ "codecov-io", "mac-gallagher" ], "repo": "mac-gallagher/Shuffle", "url": "https://github.com/mac-gallagher/Shuffle/pull/139", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
620701496
enable card rasterization by default. There were pretty big improvements to performance by enabling this by default, most noticeably on the larger iPads. For more complex cards with animated components (e.g. a card displaying a looping gif), this property can still be turned off. Also reduced image quality on images in the example project to boost performance. Closes #18.

Codecov Report: Merging #63 into master will increase coverage by 0.05%. The diff coverage is 100.00%.

```diff
@@      Coverage Diff        @@
##    master    #63    +/-  ##
==============================
+ Coverage  82.44%  82.50%  +0.05%
==============================
  Files         13      13
  Lines        621     623      +2
==============================
+ Hits         512     514      +2
  Misses       109     109
```

| Impacted Files | Coverage Δ |
| --- | --- |
| Sources/Shuffle/SwipeCard/SwipeCard.swift | 100.00% <100.00%> (ø) |

Decided not to merge in; see comments on #18.
gharchive/pull-request
2020-05-19T06:00:39
2025-04-01T04:34:56.517965
{ "authors": [ "codecov-commenter", "mac-gallagher" ], "repo": "mac-gallagher/Shuffle", "url": "https://github.com/mac-gallagher/Shuffle/pull/63", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
299312322
New version parameters. Hi Machinegun, I have a similar issue to the one reported in https://github.com/machinegun/SALSA/issues/10. That one was related to an older version of SALSA. Is there a way to apply the same changes you suggested in the new code as well? Thanks in advance.

No, the new version does not need you to specify any parameter like that. You just need to specify the minimum contig length you want to consider for scaffolding.

Thanks a lot. I'd also like to know how the previously settable parameters are now handled by the software.

You can take a look at the preprint, which describes the algorithm for the new version: https://www.biorxiv.org/content/early/2018/02/07/261149

Thank you so much!
gharchive/issue
2018-02-22T11:10:37
2025-04-01T04:34:56.547786
{ "authors": [ "Tocci89", "machinegun" ], "repo": "machinegun/SALSA", "url": "https://github.com/machinegun/SALSA/issues/21", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
299236717
chore(models): Add LabConfig model. While I was doing #742, I noticed that we don't have a model for the lab config, so I added one to get better type inference down the road :).

There are actually a lot more model files that should make their way to the shared folder, I think; do you want to do them all separately? Also, there's sometimes an inconsistency between the definitions of the model in different files; do you guys want to align that as well in the future?
gharchive/pull-request
2018-02-22T06:30:07
2025-04-01T04:34:56.550363
{ "authors": [ "SamVerschueren", "maartentibau" ], "repo": "machinelabs/machinelabs", "url": "https://github.com/machinelabs/machinelabs/pull/743", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1742186920
Render interlinks for See Also sections Currently, quartodoc does not do any custom handling of the See Also section. However, the quartodoc sites for siuba and shiny do custom processing to handle them! siuba uses the numpydoc itself to handle See Also references, so is a good one to grab code from. (Eventually we should move this back into quartodoc). https://github.com/machow/siuba.org/blob/main/_renderer.py#L32 This is done for fully qualified names, and I've opened an issue for interlinking short names: #16
gharchive/issue
2023-06-05T17:05:21
2025-04-01T04:34:56.552075
{ "authors": [ "machow" ], "repo": "machow/plotnine-docs-demo", "url": "https://github.com/machow/plotnine-docs-demo/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
175095692
Feature to get a word from its vector and vice-versa

I'd like to get a word vector from its associated word and vice-versa. Is it possible to do this in the existing version of the code?

Yes, you can. If I am not entirely mistaken, the word ids contained in the Glove class' self.dictionary correspond to the word's vector's index in self.word_vectors. I did this to save every word and its vector in a text file (with the first line being the number of vectors and the dimension of the vectors):

```python
# Sort words by their id (the id is the second element of each tuple);
# sorted() also works on Python 3, where dict.items() returns a view.
word_and_word_id_tuples = sorted(glove_model.dictionary.items(), key=lambda t: t[1])

# Save vectors to output file
with open(args.out_file, 'w+') as f:
    f.write(str(len(word_and_word_id_tuples)) + " " + str(args.dimension) + "\n")
    for word, word_id in word_and_word_id_tuples:
        f.write(word + " " + " ".join([str(value) for value in glove_model.word_vectors[word_id]]) + "\n")
```

Thank you for your response.

One way is to make a dictionary where keys = words the model was trained on and values = glove representation of the words. Suppose model is a fitted Glove() model:

```python
D = {word: model.word_vectors[model.dictionary[word]] for word in model.dictionary.keys()}
```

Example: suppose "music" was in the training corpus and model.no_components = 200. Then D["music"] is a vector of floats of length 200.
gharchive/issue
2016-09-05T15:43:39
2025-04-01T04:34:56.571062
{ "authors": [ "IvanBarrientos", "Vijethbv", "draperunner" ], "repo": "maciejkula/glove-python", "url": "https://github.com/maciejkula/glove-python/issues/51", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
625730691
Improved tests

I know this is quite a big PR, so I'll try my best to summarize all changes here. None of the changes affect the API in any way (see the small bug fix for an exception); they only concern the internal tests.

**Original problem**: our linear Gaussian tests were too simple. We sometimes had bugs in the inference algorithms and yet the tests were still passing.

**New functions**:
- the linearGaussian() simulator is now called standardlinearGaussian()
- there is a new linearGaussian(). It simulates x = theta + shift + noise, where shift is a constant and noise is Gaussian with an arbitrary likelihood covariance matrix.
- functions to compute ground-truth posterior samples for this simulator when the prior is Gaussian or uniform.
- another new simulator, linear_gaussian_different_dims(). It discards some theta dimensions and therefore outputs an x that has lower dimensionality than theta. This allows us to spot bugs that only show up when dim_x != dim_theta.
- a function to compute the ground-truth posterior samples for this simulator and a Gaussian prior.

**Tests**:
- all tests that compute the c2st now use the new linearGaussian(). The prior cov is eye(), the likelihood cov is 0.3*eye(). x_o is 1.0 and the shift is -1.0 (in all dimensions). Hence the posterior is centered (nearly) around 2.0 in each dimension (a bit less than 2.0 because of the prior).
- SRE, SNL, and SNPE each contain a single new test that checks the c2st for linear_gaussian_different_dims().

**Removed tests for speeding up testing**: our test time had hit around 40 minutes (excluding slow tests!), which I think is just too long. This PR pushes the test time down to around 10 minutes by doing the following:
- removed duplicate tests. E.g., because we had separate functions for testing hmc and simulation_batch_size, we would test the "default" setting twice.
- moved some tests together at the expense of modularity. Especially in SNPE we often varied only one parameter at a time (e.g. the simulation_batch_size and the sampling method). I merged some of these tests while making sure that all options are still tested, just not only one option at a time ;)
- decreased thin for all MCMC samplers to thin=3
- wherever only the API is tested, we draw no more than 10 samples from the posterior.
- in user_input_checks_test, we also run some inference. I set max_num_epochs=2 so that training does not take up any time. This is only an API check anyway, so inference quality is not checked.
- single-round SNPE-B is no longer checked in fast tests (only in slow tests, directly in the multi-round scenario).

**Bug fixes**:
- fixed a small bug in the leakage computation: num_remaining needed to be updated earlier than where we updated it before.
- in user_input_checks_test, we had one case where we produced almost 100% leakage due to a bad setup of the inference problem. This led to the RAM being filled during sampling and eventually to all the tests not working. Fixed it by making the prior broader.

Great, thanks! Just some quick input:
- the num_remaining leakage bug is already in master
- I don't understand the shift in the Gaussian linear simulator; this is just like changing the mean, no? Why not move the prior to a different mean equal to prior_mean + shift? And why the noise term in the simulator; isn't the noise coming from sampling from the Gaussian itself?
- we should double-check the convergence of the classification performance of c2st on the new tests: run c2st with 300, 500, and 1000 samples and make sure the performance does not get better with more samples.
- the RAM filling up in sampling with leakage was not because of the problem setup, but because of the missing .detach(). In addition to fixing this memory bug, we should make sure the leakage sampling loop has an anchor, like it used to have with the patience, but with something else.

ok great!
- the shift has the same effect as moving either the prior mean or the observation; I just thought that including the shift would be the most general case.
- the noise in the simulator is exactly the Gaussian itself ;)
- yes, I'll do that

ok. Let's keep this for another PR though. Made an issue #198.
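A minimal numpy sketch of the shifted linear Gaussian simulator discussed above; the function and argument names are illustrative, not sbi's actual API:

```python
import numpy as np

def linear_gaussian(theta, shift=-1.0, likelihood_cov_scale=0.3):
    """Simulate x = theta + shift + noise, with Gaussian noise."""
    theta = np.atleast_2d(theta)
    num_samples, dim = theta.shape
    noise = np.random.multivariate_normal(
        mean=np.zeros(dim),
        cov=likelihood_cov_scale * np.eye(dim),
        size=num_samples,
    )
    return theta + shift + noise

# With x_o = 1.0 and shift = -1.0, the likelihood peaks at theta = x_o - shift = 2.0,
# so under a standard normal prior the posterior sits just below 2.0 per dimension.
```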
gharchive/pull-request
2020-05-27T14:11:23
2025-04-01T04:34:56.584811
{ "authors": [ "michaeldeistler" ], "repo": "mackelab/sbi", "url": "https://github.com/mackelab/sbi/pull/192", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
239930897
Launching a second gvim window from gvim crashes due to python3 I installed python3 via the python website. I installed MacVim via brew install macvim --with-python3 and then launched gvim myfile.py. From gvim I typed :!gvim %<cr> and it opened. This second window crashed after running any python3 command (:py3 print('hi')<cr>) but the first window runs them fine. I worked around this by separating the processes. https://vi.stackexchange.com/questions/12764/osx-cant-run-gvim-from-gvim Let me know if I put this issue in right or not. I'm happy to change it. I'm also happy to work on fixing this if someone wants to help me on it. It works fine with MacVim build 155. @eirnym Tested and working fine for me now too 👍 Thanks
gharchive/issue
2017-07-01T06:25:29
2025-04-01T04:34:56.631685
{ "authors": [ "Benhgift", "eirnym" ], "repo": "macvim-dev/macvim", "url": "https://github.com/macvim-dev/macvim/issues/518", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
393501716
MacVim is lagging behind We're 38 patches behind vim now. @ychin is doing great work, but I hope we aren't returning to the days before @splhack took over (when there were hardly any merge-ins from upstream). The highest priority should be to have vim merged in quickly when they make a change (except of course when they do something that doesn't affect us like make a fix for Windows). Things like tweaks to tab bars and applescript should take a backseat. More major things like making sure the new rendering works well are important, but I don't think they should freeze out vim updates. I'll need to merge it in tomorrow since there are some merge conflicts and I don't have time to resolve it now. The last merge was 9 days ago so this is a fair criticism but I won't commit to any cadence higher than roughly weekly. Also, binary release is currently the primary way MacVim is released (which I plan to make ~monthly), so you should expect that to be the upper bound to how long it takes for a change from Vim master to get merged to MacVim.
gharchive/issue
2018-12-21T15:41:35
2025-04-01T04:34:56.633578
{ "authors": [ "chdiza", "ychin" ], "repo": "macvim-dev/macvim", "url": "https://github.com/macvim-dev/macvim/issues/824", "license": "Vim", "license_type": "permissive", "license_source": "github-api" }
296651425
Reminder for all users in channel. Remind all users in the channel to write their standup, with each user's tag. Modified the standup time addition; now it takes hh:mm. #56

This functionality is implemented in https://github.com/maddevsio/comedian/blob/master/notifier/notifier.go
gharchive/issue
2018-02-13T08:31:56
2025-04-01T04:34:56.636766
{ "authors": [ "anatoliyfedorenko", "malikim" ], "repo": "maddevsio/comedian", "url": "https://github.com/maddevsio/comedian/issues/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1714761839
🛑 qBitTorrent UI is down In 3099658, qBitTorrent UI (https://qbittorrent.$BASE_URL_MADHOGS) was down: HTTP code: 0 Response time: 0 ms Resolved: qBitTorrent UI is back up in fa82afc.
gharchive/issue
2023-05-17T23:05:51
2025-04-01T04:34:56.644833
{ "authors": [ "madhogs" ], "repo": "madhogs/upptime", "url": "https://github.com/madhogs/upptime/issues/596", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1340023035
The Preview panel is layered on top, hiding other windows When the markdown editor is open, showing the editing and preview panels, the preview panel remains in the foreground when another window is opened over the markdown editor. To Reproduce Open the markdown editor, pinned, in a 'New Horizontal Document Group', showing both edit and preview panels. Open the Output window, unpinned, so that it overlays the editing windows. See image. Expected behavior The output window should overlay the editor panels. Screenshots Duplicate of #62
gharchive/issue
2022-08-16T08:50:44
2025-04-01T04:34:56.661480
{ "authors": [ "madskristensen", "ronmurp" ], "repo": "madskristensen/MarkdownEditor2022", "url": "https://github.com/madskristensen/MarkdownEditor2022/issues/69", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
338061186
Too many )'s. When testing extraction of file level1 of Easy Red (a paid game on Steam), the error below appears. There is no log file, but the exe properties show this product version: 2017.3.1.16522547. Perhaps reproducible with the free demo version at https://corvobrok.itch.io/easyred

General:

```
System.ArgumentException: parsing "gerHelmetSSLOD)[_]?[\d]*\.[^.]+$" - Too many )'s.
   at System.Text.RegularExpressions.RegexParser.ScanRegex()
   at System.Text.RegularExpressions.RegexParser.Parse(String re, RegexOptions op)
   at System.Text.RegularExpressions.Regex..ctor(String pattern, RegexOptions options, TimeSpan matchTimeout, Boolean useCache)
   at System.Text.RegularExpressions.Regex..ctor(String pattern)
   at UtinyRipper.DirectoryUtils.GetMaxIndexName(String dirPath, String fileName)
   at UtinyRipper.AssetExporters.ExportCollection.GetUniqueFileName(Object object, String dirPath)
   at UtinyRipper.AssetExporters.AssetExportCollection.Export(ProjectAssetContainer container, String dirPath)
   at UtinyRipper.AssetExporters.ProjectExporter.Export(String path, IEnumerable`1 objects)
   at UtinyRipper.Program.Load(IReadOnlyList`1 args)
```

Theory: unexpected characters are not sanitized? https://xkcd.com/327/

Yep, your theory is right. Fixed in 70e8689ef8d4a860caff3b221145b2810599fb72.

Nice! None of that problem is left anymore. Afterwards it gets stuck on Export: 'Cubemap(Cubemap)' exported, but it might actually be that the game uses one huge mesh for all of its levels or something like that (RAM use keeps growing, so UtinyRipper might actually still be doing its job). Anyway, thanks for fixing!

Level1 contains 300000 objects. Because of such a huge amount, you have to wait a long time until all of them get converted to their text representation. I've done some optimisation earlier, so I am not sure whether it is possible to improve export speed even more.

Yes, I expected it to be the game's/engine's fault. If you really want to improve the experience, you could add an "x/n objects exported" progress indicator or a "now exporting Y (N objects)" message, if that information is available before exporting and if it doesn't slow down the process extremely. But this is just a UX bonus for a border case. I'm certainly not requesting this. :) Again, thanks for handling the original issue.
gharchive/issue
2018-07-03T21:31:34
2025-04-01T04:34:56.667119
{ "authors": [ "iwanPlays", "mafaca" ], "repo": "mafaca/UtinyRipper", "url": "https://github.com/mafaca/UtinyRipper/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
189763873
Cannot include fuse.h

Hey, I'm using Windows and trying to install fuse-bindings. I followed all the instructions and installed Dokany myself. I also tried to set environment variables:

```
DOKAN_INSTALL_DIR = C:\Program Files\Dokan\DokanLibrary
DOKAN_FUSE_INCLUDE = C:\Program Files\Dokan\DokanLibrary\include
```

and I'm still getting this error from npm install:

```
..\fuse-bindings.cc(12): fatal error C1083: Cannot open include file: 'fuse.h': No such file or directory [C:\project\node_modules\fuse-bindings\build\fuse_bindings.vcxproj]
```

I tried to research as much as I can, but I couldn't solve this. Any idea? Thanks!

It seems that I needed to download Dokan_x64.msi and not DokanSetup.exe
gharchive/issue
2016-11-16T16:39:29
2025-04-01T04:34:56.670381
{ "authors": [ "KromDaniel" ], "repo": "mafintosh/fuse-bindings", "url": "https://github.com/mafintosh/fuse-bindings/issues/39", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
317682244
This module is now part of Node.js 10. It seems like the code of the new Stream.pipeline(...) used this repository as its base. Since this is now a core module, should this repository be deprecated?

npm deprecate is a big stick that should be wielded lightly. People are going to be using older versions of Node for quite a while; why would we make them see a deprecation warning all the time?
gharchive/issue
2018-04-25T15:34:41
2025-04-01T04:34:56.672286
{ "authors": [ "ehmicky", "phated" ], "repo": "mafintosh/pump", "url": "https://github.com/mafintosh/pump/issues/40", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1661108612
Can't get credential with cli.js

Hi, thanks for this plugin. If I try

```
/usr/local/lib/node_modules/homebridge-appletv-now-playing $ node .\bin/cli.js
```

there is an issue. Could you help me, please?

Hi @Kohle81, I'm flattered that you arrived at my plugin only half a day after I copied it in there! Just FYI, my plugin was copied from Kurt Warwick's published plugin, "Homebridge Appletv Now Playing". I've copied it so that I can do some additional development, fix the issues I'm having with it, and then publish it as a new variant.

That said, let me try and assist you. Firstly, is your issue with actually launching the cli? Do you get the cli functioning? Or do you get the cli but you can't get the credentials? I assume you're having trouble with launching the cli. Because I only copied the plugin 14 hours ago, I can't help at the moment with a non-functioning cli, as I have issues myself with getting the plugin to load under Homebridge. However, if your goal is to get credentials to pair with your Apple TV, then maybe I can help.

From your Homebridge page, launch a 'Terminal' session. Check & update Python to the latest supported version:

```
python --version
```

As far as I know, it should say Python 3.9.2.

Check & update pip, Python's package manager:

```
pip --version
```

As far as I know, it should say pip 20.3.4.

After checking those two, install pyatv with pip:

```
pip install pyatv
```

When you're happy with your installation of pyatv, try issuing the command:

```
atvremote scan
```

If everything is going ok for you, now try to get the credentials with:

```
atvremote --id <id> --protocol airplay pair
```

All going well, your Apple TV should've just prompted you on-screen with a PIN verification code. Type that in and you'll get a very long character string that is your credentials.

I need to leave it there because that's all I can help with for now. 😁👍🏼

Issue appears closed.
gharchive/issue
2023-04-10T17:36:29
2025-04-01T04:34:56.677082
{ "authors": [ "Kohle81", "mag911" ], "repo": "mag911/HomeBridge-AppleTV_as_an_Accessory", "url": "https://github.com/mag911/HomeBridge-AppleTV_as_an_Accessory/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1752395418
Transfer sol method not working after first session of wallet connection

Describe the bug: the Transfer function on walletBase is not working after the first session when you connect to the Phantom wallet.

To Reproduce:
1. After connecting to the Phantom wallet on Android, try to transfer SOL with the Transfer function. It should work fine and redirect to the Phantom wallet.
2. Now kill the app and relaunch it.
3. As the wallet is still connected from the previous session, try to transfer SOL with the Transfer function. Notice that this time you won't be able to transfer the SOL and you will get a null reference exception.

Expected behavior: in every session, as long as it is connected to the wallet, it should redirect to Phantom on calling the Transfer function on the walletBase from the WEB3 object.

Smartphone:
- Device: Poco X3
- OS: MIUI 13.0.1

Additional context: SDK version is 0.1.4, downloaded from GitHub using the Unity package manager.

Closing as solved. Feel free to reopen in case you are still experiencing the issue.
gharchive/issue
2023-06-12T10:10:44
2025-04-01T04:34:56.911380
{ "authors": [ "Eiquy3", "GabrielePicco" ], "repo": "magicblock-labs/Solana.Unity-SDK", "url": "https://github.com/magicblock-labs/Solana.Unity-SDK/issues/123", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
120831227
[feature/deleteRecord] FC when database is empty

Connected with #34. Steps to reproduce: fill the base with words > delete all of them > wait for a notification > tap on the notification > FC (force close). Notifications come even though the database is empty.

Now it's ok; no FC after tapping on the notification. Tested on Android 4.4.4.
gharchive/issue
2015-12-07T17:55:10
2025-04-01T04:34:56.912814
{ "authors": [ "magicmychal", "michq" ], "repo": "magicmychal/Fiszki", "url": "https://github.com/magicmychal/Fiszki/issues/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1552536754
Navbar Height is too much on Android and iOS

Describe the bug: investigate the bottom nav bar height against the Material specs.

To Reproduce:
1. Run the example app on an Android device
2. Check the navbar height

Expected behavior: the height should match the standard Material guidelines.

Screenshots: Pixel 7; Xiaomi K20 Pro

This is a bug in Flutter's NavigationBar: https://github.com/flutter/flutter/issues/127088

iOS: fixed in release v0.6.2.

Seriously, the bottom value is twice what it should be. It leaves a massive gap above the system nav bar. I want just the height of the nav bar, and if I want more space then I should be using a padding.

```dart
bottomNavigationBar: TabBar(
  tabs: const [
    Tab(icon: Icon(Icons.directions_car), text: 'Car'),
    Tab(icon: Icon(Icons.directions_transit), text: 'Transit'),
    Tab(icon: Icon(Icons.person), text: 'Account'),
  ],
  padding: EdgeInsets.only(
    bottom: MediaQuery.paddingOf(context).bottom,
  ),
  dividerColor: Colors.transparent,
),
```

Hi @karatekid430, I wonder if the issue you are referring to is related to the navbar_router package?
gharchive/issue
2023-01-23T05:20:46
2025-04-01T04:34:56.968582
{ "authors": [ "karatekid430", "maheshmnj" ], "repo": "maheshmnj/navbar_router", "url": "https://github.com/maheshmnj/navbar_router/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
391515543
Fix build on rust 1.31.0

Building on rust 1.31.0 fails with errors like:

```
error: lint private_no_mangle_fns has been removed: no longer an warning, #[no_mangle] functions always exported
```

This PR removes the lint.

Hi @winding-lines, thanks for your contribution. Sorry I didn't notice it before raising PR 92 (I was tackling the Maidsafe crates systematically). Part of your PR is indeed covered by PR 92, but what about updating lazy_static? Was there a reason in particular you wanted this change in?

Thanks for your reply! My app loads both this crate and tantivy (text search). The latest tantivy dependencies bump requires a new lazy_static. Marius

If you want to rebase your PR and only keep the lazy_static update, I'll review it for you and merge it once CI passes :smile:

aside: "My app loads both this crate and tantivy" - can I see?
gharchive/pull-request
2018-12-16T23:11:06
2025-04-01T04:34:56.981582
{ "authors": [ "mitchtbaum", "pierrechevalier83", "winding-lines" ], "repo": "maidsafe/rust_sodium", "url": "https://github.com/maidsafe/rust_sodium/pull/91", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
140811861
add linux-arm to supported platforms Task will download linux-arm but not allow building with it because it's not in the platform list. The build failing is another issue having to do with a pretty old version of electron that it can't download... src/index.js should also get changed. @codeskyblue updated source with change! Sorry, didn't pay attention to source since ./lib was in repo. Build still failing due to zips. Sweet! nice attention for linux-arm support.
gharchive/pull-request
2016-03-14T22:13:27
2025-04-01T04:34:57.026340
{ "authors": [ "codeskyblue", "mainyaa", "technicallyjosh" ], "repo": "mainyaa/gulp-electron", "url": "https://github.com/mainyaa/gulp-electron/pull/33", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1006171626
Cleanup TLS code. Commit Message: Run fix_format CI target and fix the integer comparison issues. Also remove unused parameters. /ok-to-test @ipuustin: thanks for the PR. To confirm -- this is mostly formatting changes, along with fixes for primitive data type comparisons and a change to use Envoy::errorDetails(). That's right. The first patch is auto-generated by running ./ci/run_envoy_docker.sh './ci/do_ci.sh fix_format'. The second patch changes to errorDetails() and the third one fixes the integer data types and unused arguments.
gharchive/pull-request
2021-09-24T07:33:37
2025-04-01T04:34:57.028255
{ "authors": [ "dmitri-d", "ipuustin" ], "repo": "maistra/envoy", "url": "https://github.com/maistra/envoy/pull/109", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
891048290
MAISTRA-2355: Change httpbin port to 8000 From the default 80, which is not allowed by default in OpenShift. Service already uses port 8000, so this change should have minimal impact. /test unit
gharchive/pull-request
2021-05-13T13:37:29
2025-04-01T04:34:57.029595
{ "authors": [ "brian-avery", "jwendell" ], "repo": "maistra/istio", "url": "https://github.com/maistra/istio/pull/340", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1424520288
OSSM-2203: aligns dependencies with istio/release-1.16

Changes in this PR:
- bumps the Go module version to 1.19, as this is the target version of upstream; without it, the project won't build easily, since some deps explicitly require this version
- introduces a default target following the principle of least astonishment (POLA): running make should give reproducible builds (especially when starting from a fresh clone)
- aligns dependencies with the istio/istio@release-1.16 branch
- updates the vendor/ folder

I used https://gist.github.com/bartoszmajsak/9359068bc8cead002a2fcf1ff947ad09 to automate the majority of the work.

PR opened: https://github.com/openshift/release/pull/33496

Side note: I have a handy template for Makefiles I can apply to this one... It needs some love :)

Can we get rid of all the stuff related to vendor? We really don't need to vendor deps in this repo. It isn't even built in brew.

@jwendell done in 8f77795. I added a few bits to .gitignore.
gharchive/pull-request
2022-10-26T18:29:07
2025-04-01T04:34:57.033967
{ "authors": [ "bartoszmajsak" ], "repo": "maistra/xns-informer", "url": "https://github.com/maistra/xns-informer/pull/29", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
800550852
Revert "MIP Set (38,39,40,41) + SP + Amendments + newMIPs" Reverts makerdao/mips#169 @Davidutro Please confirm the revert and merge the PR as we wanted to split out the amendments as a separate PR before merging.
gharchive/pull-request
2021-02-03T17:38:02
2025-04-01T04:34:57.049268
{ "authors": [ "CPSTL" ], "repo": "makerdao/mips", "url": "https://github.com/makerdao/mips/pull/173", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1461999589
MIP41 Amendments to smooth out interim facilitator processes. Attached to: https://github.com/makerdao/mips/pull/712 Patrick and I went over this line by line to address concerns. vote passed - https://vote.makerdao.com/polling/QmSYNed5#vote-breakdown
gharchive/pull-request
2022-11-23T15:51:30
2025-04-01T04:34:57.050846
{ "authors": [ "LongForWisdom", "patrick-j-govalpha" ], "repo": "makerdao/mips", "url": "https://github.com/makerdao/mips/pull/711", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1918056518
Sponsored issue: Adding paddingLeft on MakesMathEasy Icon Add paddingLeft to the MakesMathEasy icon on the navbar so it will look much cleaner. Priority Support @wajeeha-mushtaq is using Mintycode to fund this issue. If you would like to accept a bounty for solving this issue, join Mintycode. Thank you in advance for helping. Hello @wajeeha-mushtaq , As a GSSOC'24 contributor, I would like to contribute to your project. Would you please assign me under GSSOC24?
gharchive/issue
2023-09-28T18:26:49
2025-04-01T04:34:57.063648
{ "authors": [ "Khanishsuresh", "wajeeha-mushtaq" ], "repo": "makesmatheasy/makesmatheasy", "url": "https://github.com/makesmatheasy/makesmatheasy/issues/5451", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1440724709
v2.0.4
- Update core to WebUI v2.0.4
- Update example
- Update screenshot
- Update readme
I don't know how to build the example using the local module instead of the one from the online repo. Perhaps we should add a development example file; we should also add a new section in the readme on how to build from source. Thanks for your PR. I don't know how to build the example using the local module. You can build using the local module. Change import malisipi.vwebui as webui to import vwebui as webui, and make sure you have a directory named vwebui that includes the files from the root of the repo. Then run v <code file>.v to compile. we should add a new section in readme for how to build from source. You're right. I will add the section asap.
gharchive/pull-request
2022-11-08T18:47:16
2025-04-01T04:34:57.083285
{ "authors": [ "AlbertShown", "malisipi" ], "repo": "malisipi/vwebui", "url": "https://github.com/malisipi/vwebui/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1943859660
[Fix] Keyboard interrupt for listen_loop On Windows 11, it seems "Ctrl+C" doesn't stop the process completely. [Reproduction steps]
1. pip install whisper_mic
2. Run "whisper_mic --loop"
3. Wait for the initialization
4. Press "Ctrl+C"
"Aborted!" shows up in the command line, but the command line doesn't start to accept new commands. (ChatGPT says "Aborted!" is related to the operating system or the C runtime library, btw.) fixed in latest PR
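A minimal sketch of the shape such a fix usually takes: catch KeyboardInterrupt inside the loop and release the audio resources before exiting, so the shell gets its prompt back. The mic object and its methods here are placeholders, not the actual whisper_mic API.

```python
import sys

def listen_loop(mic):
    try:
        while True:
            text = mic.listen()  # hypothetical blocking call that records and transcribes
            print(text, flush=True)
    except KeyboardInterrupt:
        mic.stop()   # assumed cleanup hook: stop the stream, join worker threads
        sys.exit(0)  # return control to the terminal instead of hanging
```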
gharchive/issue
2023-10-15T11:57:42
2025-04-01T04:34:57.085512
{ "authors": [ "mallorbc", "szriru" ], "repo": "mallorbc/whisper_mic", "url": "https://github.com/mallorbc/whisper_mic/issues/53", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1072644531
question: tests and test files? I'm interested in adding tests (using pytest, unless you have other preferences) that demonstrate functionality of dnfile. is pytest ok with you? And ok if I place tests under tests/test_*.py? where would you like test data, like .NET modules used in the tests? For reference, in capa, we have a separate repository, capa-testfiles, that we use to hold all the files used during testing, which we reference as a submodule under tests/data/. This makes it possible to checkout in CI via --recurse-submodules but also easy to checkout the source code without pulling down MBs of test data. Of course, this introduces a bit more configuration and maintenance of two repos vs. one. What would you like to do for dnfile? pytest and tests/test_*.py works for me, that's what I usually do. Hadn't thought about test data yet. Separate repo makes sense since for something this complex there could be a lot of test data. I'll create one now.
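For reference, a minimal pytest sketch of what such a test could look like, with test data resolved relative to the test file (so a tests/data/ submodule checkout would just work). The fixture filename and the pe.net attribute are assumptions based on dnfile's README, not verified API.

```python
# tests/test_dnfile.py
import pathlib

import dnfile
import pytest

DATA = pathlib.Path(__file__).parent / "data"  # e.g. a testfiles submodule

@pytest.mark.parametrize("name", ["hello-world.exe"])
def test_parse_dotnet_module(name):
    pe = dnfile.dnPE(str(DATA / name))
    # a .NET module should expose the CLR metadata directory
    assert pe.net is not None
```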
gharchive/issue
2021-12-06T21:39:57
2025-04-01T04:34:57.096160
{ "authors": [ "malwarefrank", "williballenthin" ], "repo": "malwarefrank/dnfile", "url": "https://github.com/malwarefrank/dnfile/issues/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2568170130
Do we have titles and filenames swapped for the needs docs? @tracykteal Please double check. Thanks! Thank you! I had both files with the same 'by category' title and introduction. I've now updated the 'by project' page to have the correct title and introduction.
gharchive/issue
2024-10-05T15:12:04
2025-04-01T04:34:57.132310
{ "authors": [ "tracykteal", "willingc" ], "repo": "managing-os-projects/os-sponsor-needs", "url": "https://github.com/managing-os-projects/os-sponsor-needs/issues/3", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1082302364
Allow to have a different default console We are currently opening console applications using cmd:

$executableCmd = Join-Path ${Env:WinDir} "system32\cmd.exe"

I think it would be nice to use an environment variable to allow changing the default console. Same for the default browser. 🤔
> Same for the default browser. 🤔
Regarding my comment in https://github.com/mandiant/VM-Packages/pull/15#issuecomment-998445610, I meant to say we could have additional configuration packages to allow users to further configure Chrome. For instance, if they wanted the default browser to be Chrome then they could include that package. I think for Win10 you have to manually set your default browser anyway? I realized the other tricky part is handling paths with spaces correctly. We have this down well for cmd.exe; we'd need to figure out the proper way to handle that for powershell.exe and cmder.exe. Then we should probably create a function in vm.common.psm1 with a big switch statement to handle the bulk of the logic of:

$executableCmd = Join-Path ${Env:WinDir} "system32\cmd.exe"
$executableDir = Join-Path ${Env:UserProfile} "Desktop"
$executableArgs = "/K `"cd `"$executableDir`" && `"$executablePath`" --help`""
Install-ChocolateyShortcut -shortcutFilePath $shortcut -targetPath $executableCmd -Arguments $executableArgs -WorkingDirectory $executableDir -IconLocation $executablePath

so the users can simply pass in the parameters and not have to worry about handling paths with spaces for each console type. Closing in favor of https://github.com/mandiant/VM-Packages/issues/971
gharchive/issue
2021-12-16T15:05:10
2025-04-01T04:34:57.155541
{ "authors": [ "Ana06", "MalwareMechanic" ], "repo": "mandiant/VM-Packages", "url": "https://github.com/mandiant/VM-Packages/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
279666785
xml openioc_to_yara parse error Hello! I have the following issue with some xml formats I am trying to parse with the openioc_to_yara.py. First, I have the following xml: <?xml version='1.0' encoding='UTF-8'?> <!-- TITLE: 0b879284-0c37-4bfa-9dd8-34505a9c5175.ioc VERSION: 1.0 DESCRIPTION: OpenIOC file LICENSE: Copyright 2015 FireEye Corporation. Licensed under the Apache 2.0 license. FireEye licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <ioc xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns="http://schemas.mandiant.com/2010/ioc" id="0b879284-0c37-4bfa-9dd8-34505a9c5175" last-modified="2015-01-27T21:18:07Z"> <short_description>PYTHON SHELLCODE (DOWNLOADER)</short_description> <description>The shellcode launcher is a simple launcher which recieves an encoded shellcode buffer from its C2 server, allocates memory for it and then executes the shellcode. The launcher is written in python and packaged with PyInstaller. You can read more about this downloader at https://www.fireeye.com/blog/threat-research/2015/02/behind_the_syrianco.htmlhttps://www.fireeye.com/blog/threat-research/2015/02/behind_the_syrianco.html </description> <keywords/> <authored_by>FireEye</authored_by> <authored_date>2015-01-27T19:56:21Z</authored_date> <links> <link rel="category">Downloader</link> <link rel="license">Apache 2.0</link> </links> <definition> <Indicator id="b5c921a1-56c5-45ab-9537-72581eb73e0e" operator="OR"> <IndicatorItem id="a89dc9cb-15a5-42e6-9e49-7a7cc9ae1bf5" condition="is"> <Context document="FileItem" search="FileItem/Md5sum" type="mir"/> <Content type="md5">64a17f5177157bb8c4199d38c46ec93b</Content> </IndicatorItem> <IndicatorItem id="452d566a-78fb-48cc-bb9e-47ae7234a1dd" condition="is"> <Context document="FileItem" search="FileItem/FileName" type="mir"/> <Content type="string">Facebook-Account.exe</Content> </IndicatorItem> <IndicatorItem id="ece0648f-532e-42ec-8f67-095aafb53ca2" condition="contains"> <Context document="PortItem" search="PortItem/remoteIP" type="mir"/> <Content type="IP">80.241.223.128</Content> </IndicatorItem> </Indicator> </definition> </ioc> and when I try to parse it I get an error saying it is not openioc but ioc. I tried to confront it using the openioc_10_to_11. The resulted file accomplished parsing it successfully but the output file was empty. The upgraded file is the next one: <?xml version='1.0' encoding='UTF-8'?> <OpenIOC xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://openioc.org/schemas/OpenIOC_1.1" id="0b879284-0c37-4bfa-9dd8-34505a9c5175" last-modified="2015-01-27T21:18:07" published-date="0001-01-01T00:00:00"> <metadata> <short_description>PYTHON SHELLCODE (DOWNLOADER)</short_description> <description>The shellcode launcher is a simple launcher which recieves an encoded shellcode buffer from its C2 server, allocates memory for it and then executes the shellcode. The launcher is written in python and packaged with PyInstaller. 
You can read more about this downloader at https://www.fireeye.com/blog/threat-research/2015/02/behind_the_syrianco.htmlhttps://www.fireeye.com/blog/threat-research/2015/02/behind_the_syrianco.html </description> <keywords/> <authored_by>FireEye</authored_by> <authored_date>2015-01-27T19:56:21</authored_date> <links> <link rel="category" href="Downloader"/> <link rel="license" href="Apache 2.0"/> </links> </metadata> <criteria> <Indicator id="eead8521-fbea-4ddb-8aad-05b09dc08468" operator="OR"> <IndicatorItem id="a89dc9cb-15a5-42e6-9e49-7a7cc9ae1bf5" condition="is" preserve-case="false" negate="false"> <Context document="FileItem" search="FileItem/Md5sum" type="mir"/> <Content type="md5">64a17f5177157bb8c4199d38c46ec93b</Content> </IndicatorItem> <IndicatorItem id="452d566a-78fb-48cc-bb9e-47ae7234a1dd" condition="is" preserve-case="false" negate="false"> <Context document="FileItem" search="FileItem/FileName" type="mir"/> <Content type="string">Facebook-Account.exe</Content> </IndicatorItem> <IndicatorItem id="ece0648f-532e-42ec-8f67-095aafb53ca2" condition="contains" preserve-case="false" negate="false"> <Context document="PortItem" search="PortItem/remoteIP" type="mir"/> <Content type="IP">80.241.223.128</Content> </IndicatorItem> </Indicator> </criteria> <parameters/> </OpenIOC> So, comparing the example ioc and mine, I saw a difference in the document, search and type tags. When I changed them from <Context document="FileItem" search="FileItem/Md5sum" type="mir"/> to <Context document="Yara" search="Yara/HexString" type="yara" /> and then ran openioc_to_yara, I finally got the yara rule! But this is a lot of effort... so my question is: is there an error in the format of the input? Is there something wrong with the parser? I would like to get the yara rule instantly without doing all this work, especially when I have a ton of .ioc files. Hi @DimChris0, your initial input was not correct. If you look at any of the IOCs found in https://github.com/mandiant/ioc_writer/tree/master/examples/openioc_to_yara/example_iocs, you will notice the XML structure expected by the script. Where did you get the input XML you tried first? @LoveMutt I found some of them, for example, here: https://github.com/fireeye/iocs. Given that you're telling me this format is wrong for this parser, is there any chance you know how I can extract the info there into a yara rule? I might try to extend your parser but this is gonna be time consuming. Any ideas on how I can approach this, or whether these tags (e.g. <Context document="FileItem" search="FileItem/Md5sum" type="mir"/>) have a special meaning for the yara format? Thanks in advance! @DimChris0, yara does not care about those XML nodes, but the parser does. I suggest you use the existing openioc_to_yara.py as a basis to create something similar and make some very minor changes to read whatever input XML format you would like to use
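For what it's worth, the manual tag swap described above can be scripted. A rough lxml sketch, hedged: the rewrite mirrors exactly what was done by hand (and inherits the same caveat that an MD5 hash is not literally a hex string to scan for); nothing here is ioc_writer API.

```python
import lxml.etree as etree

NS = {"ioc": "http://openioc.org/schemas/OpenIOC_1.1"}

def rewrite_contexts(path_in, path_out):
    """Rewrite MIR-style Context nodes into the Yara form openioc_to_yara expects."""
    tree = etree.parse(path_in)
    for ctx in tree.findall(".//ioc:Context", NS):
        if ctx.get("type") == "mir":
            ctx.set("document", "Yara")
            ctx.set("search", "Yara/HexString")
            ctx.set("type", "yara")
    tree.write(path_out, xml_declaration=True, encoding="UTF-8")
```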
gharchive/issue
2017-12-06T08:30:57
2025-04-01T04:34:57.164094
{ "authors": [ "DimChris0", "LoveMutt" ], "repo": "mandiant/ioc_writer", "url": "https://github.com/mandiant/ioc_writer/issues/10", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2583798312
Alignment fixed Issue Resolved: #297 Description: Node modules deleted. Checklist:
[ ] I have tested the changes locally
[ ] Documentation has been updated (if necessary)
[ ] Changes are backward-compatible
please dont send node modules
gharchive/pull-request
2024-10-13T08:53:32
2025-04-01T04:34:57.209381
{ "authors": [ "khwa04", "manikumarreddyu" ], "repo": "manikumarreddyu/AgroTech-AI", "url": "https://github.com/manikumarreddyu/AgroTech-AI/pull/410", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2415091740
🛑 Mesa Freeworld is down In 79e5f8a, Mesa Freeworld (https://nonfree.eu) was down: HTTP code: 0 Response time: 0 ms Resolved: Mesa Freeworld is back up in 000f1c9 after 12 minutes.
gharchive/issue
2024-07-18T02:47:35
2025-04-01T04:34:57.216672
{ "authors": [ "boredland" ], "repo": "manjaro-contrib/upptime", "url": "https://github.com/manjaro-contrib/upptime/issues/3397", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2483868023
rio_* crates are not actively maintained - maybe instead use oxrdf The maintainer of rio_api, rio_turtle and rio_xml no longer actively maintains those packages and has shifted to instead use components from his own Oxigraph project - e.g. oxrdf. Please consider switching from the rio_* crates to the ox* crates. Yes, this should be done once the sophia project starts using the ox* crates. See https://github.com/pchampin/sophia_rs/issues/162
gharchive/issue
2024-08-23T20:46:55
2025-04-01T04:34:57.229734
{ "authors": [ "damooo", "jonassmedegaard" ], "repo": "manomayam/manas", "url": "https://github.com/manomayam/manas/issues/70", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
625721778
No compatible devices found Hey. I'm excited about this project. For now, when I run annemone single 255 0 255 returns No compatible devices found. My AnnePro 2 is connected by USB cable. My environment: Hello! Thank you for bringing this to my attention, I haven't tested annemone on Linux yet. Could you please run the following commands and let me know what the output is? This will output a list of USB devices connected to your computer, feel free to remove entries that aren't the keyboard. npm install -g node-hid hid-showdevices devices: [ { vendorId: 1241, productId: 32777, path: '/dev/hidraw6', serialNumber: 'SN0000000001', manufacturer: 'OBINLB', product: 'USB-HID Keyboard', release: 256, interface: 0 }, { vendorId: 1241, productId: 32777, path: '/dev/hidraw7', serialNumber: 'SN0000000001', manufacturer: 'OBINLB', product: 'USB-HID Keyboard', release: 256, interface: 1 }, { vendorId: 1241, productId: 32777, path: '/dev/hidraw8', serialNumber: 'SN0000000001', manufacturer: 'OBINLB', product: 'USB-HID Keyboard', release: 256, interface: 2 }, { vendorId: 1241, productId: 32777, path: '/dev/hidraw9', serialNumber: 'SN0000000001', manufacturer: 'OBINLB', product: 'USB-HID Keyboard', release: 256, interface: 3 }, # ... Interesting, the product ID is different from my Anne Pro 2. I've updated annemone to detect this variant of the keyboard, please run npm update -g annemone and if the error still occurs, reopen this ticket. Now, it works like a charm. Thank you @manualmanul.
gharchive/issue
2020-05-27T14:00:04
2025-04-01T04:34:57.297835
{ "authors": [ "GabeConsalter", "manualmanul" ], "repo": "manualmanul/Annemone", "url": "https://github.com/manualmanul/Annemone/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
818837155
Low responsiveness when moving tokens + slow movement of groups of tokens I notice that there's often considerable lag between starting to click and drag a token and when it finally starts to move along with the cursor. The ShowDragDistance module performs more smoothly right now, while providing similar base functionality. To be fair, the lag is most noticeable when using lots of modules like FXmaster, etc. In addition, when moving a group of tokens, they move one-by-one to the new location. This takes very, very long - especially when there's lag as mentioned above. It would be nice if there was a setting to turn this behavior off. I'm using Foundry v0.7.9 and the dnd5e system v1.2.4. My map sizes are fairly large and often feature lots of tokens, because I'm running a megadungeon. Drag Ruler is designed to move all the tokens simultaneously. However, the tokens aren't moved as a group; instead each token takes care of the movement for itself. However, if a token needs a long time to start its movement, that would indeed cause them to move one-by-one. So it's likely that addressing the performance issue will also fix the issue with tokens not moving simultaneously. Regarding the performance issue: in all the worlds I have, Drag Ruler performs smoothly. However, I'm lacking worlds with huge maps and many tokens that I could use for testing / reproducing this issue. Can you send me the world that's causing your issues, so I can take a look at it? The easiest for me to work with would be if you could share your whole Foundry Data folder (only the world-folder with a list of modules you use in that world would do as well though). You can privately send me the data using this link: https://wolke.ccn.li/s/jbyw8CB7d6eCysB I've sent you a .zip file of a test world I made based on my world. It's called test-world-drag-ruler.zip. I would've sent you the entire world, but a lot of the assets I use in it are stored outside of the world folder, so it would either be a lot of work to sort out or become a gigantic upload... So I chose to compromise. From what I can tell, the performance issue persists in that test world, even if there's only a single scene. I've also included this list of modules I usually have active: module_list.txt. However, even with all modules deactivated save Drag Ruler, the issue is still there. (Apologies for the barrage of random files in your Nextcloud - I didn't notice that the folder with the world had become unzipped after downloading it from my own server. ^^;) Thanks for providing this test world to me. I can reproduce the problem in that world and will see what I can do about it. Don't worry about the random files that got uploaded. Everything went into a dedicated folder, so it was easy to discard everything that wasn't needed :)
To be fair, the scene I sent you is one of the largest maps in the campaign (though they’re all pretty big), so I don’t think there’ll be as much slowdown from Foundry in the future. Having said that, it’d be a great timesaver if you could somehow add a setting that makes multiple tokens move at the same time. :) I'll treat this issue as a request to ensure that all tokens will move simultaneously, even if the map is laggy, since that's the only issue that I was able to attribute to Drag Ruler. If you encounter other performance issues other than this that can be attributed to Drag Ruler feel free to open another issue. I gave this another shot and was able to improve the situation a lot in 26917200. This will be part of Drag Ruler v1.6.0.
gharchive/issue
2021-03-01T12:52:29
2025-04-01T04:34:57.323181
{ "authors": [ "Stendarpaval", "manuelVo" ], "repo": "manuelVo/foundryvtt-drag-ruler", "url": "https://github.com/manuelVo/foundryvtt-drag-ruler/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
366616769
Simplified Shape File Error When Uploading as Tileset to Mapbox Account First I uploaded a zipped shape file (37 MB) to Mapbox as a Tileset, but I have to zoom in a lot to see the shapes in Power BI Desktop. So I used mapshaper to simplify it and exported it as a GeoJSON file with a smaller size, but when I tried to upload it to Mapbox as a Tileset, it threw an error like the attached screenshot: "bounds west value must be between -360 and 360" Same error, but I was trying to upload a .geojson file. All my coordinates are longitudes around -60; is it possible that something in the projection info is sending the points away past the 360 degrees? I have no idea... Currently very frustrated by this, as we don't want to zoom in a lot just to see the shapes show up, and the display is not what we want... Since simplifying through mapshaper doesn't look like it works, I also tried to reduce the precision of the coordinates in KML format, but unfortunately, after I uploaded the KML file as a tileset, nothing shows up in Power BI Desktop no matter how much I zoom in or out.... I checked everywhere to make sure everything is up to requirements, like field name, url, tileset name, property name.... Couldn't figure out why it just doesn't work. @TiffanyFHA Solved it. I was forgetting to translate the coordinates from metric to lat/lng for a few points, so that instead of -60 those were around 70000. It worked right after I corrected that. @TiffanyFHA https://github.com/TiffanyFHA is me :) Sorry that I don't quite understand what you mean by "translate the coordinates from metric to lat lng...": do you mean that I should change those -60 to 70000 in the ESRI shape file? If yes, how can I change it since that's a zipped shape file? I only know how to edit the KML file, as I can only open it in Notepad... I'm saying that you should make completely sure that all lat/lng coordinate values in your file are between +-360.
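A quick way to pre-check a file for this failure mode before uploading, sketched in pure-stdlib Python. The ±360 limit mirrors Mapbox's error message, and the filename is illustrative; treat it as a rough sanity check, not official tooling.

```python
import json

def out_of_range(path, limit=360):
    """Collect coordinate pairs whose values exceed +-limit (a sign of a projected CRS)."""
    bad = []
    def walk(coords):
        if not coords:
            return
        if isinstance(coords[0], (int, float)):
            if abs(coords[0]) > limit or abs(coords[1]) > limit:
                bad.append(coords)
        else:
            for c in coords:
                walk(c)
    with open(path) as f:
        for feat in json.load(f)["features"]:
            walk(feat["geometry"].get("coordinates", []))
    return bad

print(out_of_range("my-data.geojson"))  # filename is illustrative
```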
gharchive/issue
2018-10-04T04:21:48
2025-04-01T04:34:57.489203
{ "authors": [ "TiffanyFHA", "matuteiglesias" ], "repo": "mapbox/mapbox-studio-classic", "url": "https://github.com/mapbox/mapbox-studio-classic/issues/1587", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
156129779
Empty tile I have some geojson feature collections (all points) which I convert to mbtiles using tippecanoe. When I try to preview the mbtiles in MapTiler, I can't see anything, just an empty, gray map. I also tried this tile server (https://github.com/klokantech/tileserver-php), and I get a black, empty map. What am I doing wrong? I would like the points to show up as dots, something similar to this: https://www.mapbox.com/blog/twitter-map-every-tweet/ Changing the type to circle instead of line in the layer json objects fixed my issue. It appears the default example code the tile server generates automatically sets the type to line even for geojson Points. I'm closing this because it doesn't sound like there is a Tippecanoe bug. Please reopen with more details if you disagree.
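A small sketch of applying that fix programmatically, in case the style file is regenerated often. The file name and the layer id are made up; adapt them to whatever the tile server actually emits.

```python
import json

with open("style.json") as f:          # path assumed
    style = json.load(f)

for layer in style.get("layers", []):
    if layer.get("id") == "mytiles":   # hypothetical auto-generated layer id
        layer["type"] = "circle"       # was "line", which is wrong for Point features

with open("style.json", "w") as f:
    json.dump(style, f, indent=2)
```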
gharchive/issue
2016-05-22T01:24:36
2025-04-01T04:34:57.502704
{ "authors": [ "ericfischer", "stackTom" ], "repo": "mapbox/tippecanoe", "url": "https://github.com/mapbox/tippecanoe/issues/250", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
259054088
How to let Spark access GPU Data Frame (GDF) directly? Is there any way for Spark to access GDF like accessing Parquet files, rather than like accessing an external SQL database via JDBC? Not sure if I understand it correctly, but the former approach seems much faster, because it will treat MapD as a distributed file system rather than a SQL database. When there are millions/billions of rows, it avoids huge data serialization overhead. Not sure what you mean, do no operations in MapD and just use it to read columns? That wouldn't benefit in any way from having GPUs. The benefit I can think of is that the huge ecosystem behind Spark becomes available to MapD. Also, treating MapD like Parquet files actually delegates SQL functionality to Spark SQL. For example, MapD does not support table join. Through Spark SQL, it does. Having Spark in the picture won't lose MapD the advantage of having GPUs. See https://databricks.com/blog/2016/10/27/gpu-acceleration-in-databricks.html The main reason we're building this from scratch is the traditional one: query execution and data must be in the same address space to achieve speed. It's not clear to me which operations can actually run on GPU in Spark and I'd rather not speculate. But, as a rule of thumb, a very deep amount of integration is needed to actually get good results. That amount of integration would be the moral equivalent of fusing Spark and MapD into a single project, which isn't reasonable. Also, MapD does support table join. It's a matter of it not being complete / friendly enough, but we're working hard on fixing it. We even support arbitrary loop joins if the watchdog flag is disabled, but the main goal is to focus on queries which can actually scale / execute interactively and avoid baseline algorithms which would send the server spinning for many minutes.
> That amount of integration would be the moral equivalent of fusing Spark and MapD into a single project, which isn't reasonable.
It seems to me that MapD core looks like a GPU version of a fusion of the Apache Phoenix and Kudu projects. If I understand it correctly, Spark can access the data in Kudu's columnar store (https://kudu.apache.org/docs/developing.html#_kudu_integration_with_spark) more efficiently than it accesses Phoenix's SQL columnar database via JDBC. Function-wise, the part of Kudu's source code that handles columnar storage looks similar to that of MapD, so abstracting that part of MapD and adding a Spark adapter interface on top of it, like what Kudu has done, should get good results, I guess. One fact hard to ignore is that more and more enterprises move toward Hadoop/Spark not just for batch analytics but also for real-time analytics. I'm fairly confident MapD will be able to "attach" to alternative storage in the not too distant future. Yes, I've looked into Kudu, prototyped a few things and it's definitely feasible to use it as additional storage. The exact timeline is not clear, but that integration point makes a lot of sense. I'm not arguing against it, quite the opposite; it'll happen. I've looked into Kudu / Arrow / Parquet, prototyped a few things and it's definitely feasible to use alternative storage (without converting to our own storage by importing it). My point is the opposite. Letting others (especially Spark) attach MapD as an alternative storage seems like less effort and more benefit :) Generally MapD acts as an accelerator for other stores of record.
With MapD several hundred times faster than Spark (http://tech.marksblogg.com/benchmarks.html), and with GPU memory relatively more expensive than CPU RAM or storage, I don't see how it would make sense to put MapD "behind" Spark. Certainly the opposite makes sense to us and is a common use case.
> With MapD several hundred times faster than Spark (http://tech.marksblogg.com/benchmarks.html),
Definitely SPARK+Parquet can't compete with MapD on SQL query speed with warm data already in GPU memory. However, the big number of developers as well as the fast-growing ecosystem surrounding Spark/Hadoop means putting MapD behind Spark makes a certain amount of sense (not just via JDBC... "Why not JDBC?")
> with GPU memory relatively more expensive than CPU RAM or storage,
That concern is true for most "developers" or those interested audiences who can't afford many GPUs to warm up their entire big data sets but may have a few hundred GB (not TB) of CPU memory to cache portions of the data. MapD and SPARK actually fall back to the same ground: both need to load "cold" data into CPU memory once in a while depending on query patterns. Consider most applications with many concurrent queries hitting different ranges of the data sets. In these commonly seen use cases, the "more expensive" GPU seems to help little. For example, on my VM with 170M rows, MapD and Spark perform equally well (~30 sec) on the first execution of the query "select count(*) from (select max(trip_time_in_secs) from trips where trip_time_in_secs > 10 group by trip_time_in_secs)". A summary of the comparison follows:

| database | 1st query | 2nd+ queries |
| --- | --- | --- |
| MapD | 35 secs | 0.05 sec |
| SPARK/Parquet | 30 secs | 30 secs |
| Clickhouse | 4 secs | 0.4 sec |

Because living or competing (or both) with SPARK is more a matter of belief than of technology, I am closing this issue. I think this will soon become a hot topic again (it still was and is). There is the announced RAPIDS framework with cuDF, which the Databricks guys are picking up and integrating into Spark. I myself work on Java bindings (they support mostly Python at the moment) for the whole RAPIDS framework (it's not really big... at the moment). From the other direction, there already exists a Spark-accelerated in-memory database (SQL-like), SnappyData. Once RAPIDS integrates with Spark, it will be only a matter of time until this feature (GPU acceleration) gets propagated into SnappyData and voila, we have another GPU-accelerated db which out of the box supports RAM and VRAM shared space (although VRAM is preferred). Do you plan some work towards Spark integration at the moment, or is that not the goal for you? Hi @archenroot - In the future, please feel free to start a different thread (as you have on our Community forum) instead of posting to closed threads. In general, our philosophy has been one of being a complementary analytics solution with other open-source technologies. But in the end, we're not necessarily chasing any one technology, as we are our own product offering. We are already part of the overall RAPIDS eco-system (MapD was one of the founding members of the GPU Open Analytics Initiative), and so any work that's being done there benefits any tool that can work with the GPU data frame provided by cuDF. We already provide the ability to get the results of a query as a GPU dataframe, and provide the ability to load data into OmniSci via Arrow. So depending on whatever gets built in Spark, we may or may not be able to accept the output from those processes right out of the box. We'll just have to see when they have something that works.
@randyzwitch - sure, you are right (I am a dead thread troll :-) ) Thx for the comprehensive answer. Also: OmniSci via Arrow - nice, I didn't know about this (actually still discovering the full mapd potential). I think cuDF is the main building block. Thanks again, and let's see what the future brings :-)
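The "results as a GPU dataframe" hook Randy mentions looks roughly like this from Python. Connection parameters are illustrative defaults, and select_ipc_gpu is the pymapd call I believe exposes the Arrow/GDF path, so double-check against the pymapd docs before relying on it.

```python
import pymapd

con = pymapd.connect(user="admin", password="HyperInteractive",
                     host="localhost", dbname="mapd")
# Returns a cudf DataFrame living in GPU memory (no CPU serialization round-trip)
gdf = con.select_ipc_gpu("SELECT trip_time_in_secs FROM trips LIMIT 1000")
print(gdf.head())
```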
gharchive/issue
2017-09-20T06:31:03
2025-04-01T04:34:57.526729
{ "authors": [ "archenroot", "asuhan", "fleapapa", "randyzwitch", "tmostak" ], "repo": "mapd/mapd-core", "url": "https://github.com/mapd/mapd-core/issues/79", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
339678398
Update ROADMAP.md Not sure exactly what should be in the Geo section, but definitely needs an update since 4.0 is out. Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it. Closing, as this was handled through a different update. Thanks @mflaxman10 !
gharchive/pull-request
2018-07-10T03:17:15
2025-04-01T04:34:57.530050
{ "authors": [ "CLAassistant", "mflaxman10", "randyzwitch" ], "repo": "mapd/mapd-core", "url": "https://github.com/mapd/mapd-core/pull/227", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
351422686
how to use a single relu or bn I want to transform a net with InPlace-ABN, but there is a single relu without bn at the start of the net. After using the single relu there and InPlace-ABN everywhere else, I got the error "RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation". Using a single bn without relu got the same error. How can I use a single relu or bn in an InPlace-ABN net? I tried to transform an Xception net by replacing the bn+relu pairs with InPlace-ABN while keeping the relu or bn layers that come alone, and then I got the error. The source is:

class Block(nn.Module):
    def __init__(self, inplanes, planes, reps, stride=1, dilation=1, start_with_relu=True, grow_first=True):
        super(Block, self).__init__()
        if planes != inplanes or stride != 1:
            self.skip = nn.Conv2d(inplanes, planes, 1, stride=stride, bias=False)
            self.skipbn = nn.BatchNorm2d(planes)
        else:
            self.skip = None
        self.relu = nn.ReLU(inplace=True)
        rep = []
        filters = inplanes
        if grow_first:
            rep.append(self.relu)
            rep.append(SeparableConv2d_same(inplanes, planes, 3, stride=1, dilation=dilation))
            rep.append(nn.BatchNorm2d(planes))
            filters = planes
        for i in range(reps - 1):
            rep.append(self.relu)
            rep.append(SeparableConv2d_same(filters, filters, 3, stride=1, dilation=dilation))
            rep.append(nn.BatchNorm2d(filters))
        if not grow_first:
            rep.append(self.relu)
            rep.append(SeparableConv2d_same(inplanes, planes, 3, stride=1, dilation=dilation))
            rep.append(nn.BatchNorm2d(planes))
        if not start_with_relu:
            rep = rep[1:]
        if stride != 1:
            rep.append(SeparableConv2d_same(planes, planes, 3, stride=stride))
        self.rep = nn.Sequential(*rep)

and after the transform it is:

class Block(nn.Module):
    def __init__(self, inplanes, planes, reps, stride=1, dilation=1, start_with_relu=True, grow_first=True):
        super(Block, self).__init__()
        if planes != inplanes or stride != 1:
            self.downsample = nn.Sequential(
                con1x1(inplanes, planes, stride=stride),
                norm_act(inplanes, slope=1))
        else:
            self.downsample = None
        block = []
        self.relu = nn.ReLU(inplace=True)
        if start_with_relu:
            block.append(self.relu)
        filters = inplanes
        if grow_first:
            block.append(SeparableConv2d_same(inplanes, planes, 3, stride=1, dilation=dilation))
            block.append(norm_act(planes))
            filters = planes
        for i in range(reps - 1):
            block.append(SeparableConv2d_same(filters, filters, 3, stride=1, dilation=dilation))
            block.append(norm_act(filters))
        if not grow_first:
            block.append(SeparableConv2d_same(filters, planes, 3, stride=1, dilation=dilation))
            block.append(norm_act(planes))
        block[-1].slope = 1
        if stride != 1:
            block.append(SeparableConv2d_same(planes, planes, 3, stride=stride))
        self.block = nn.Sequential(*block)

I used your resnext with a norm_act added between mod2 and mod3 and got the same error too. @JamesKasperYu Since mod3 starts with an inplace operation, you cannot have an inplace operation before that. If you use an activated batch norm, then use ReLU with inplace=False or clone the input before feeding it to mod3. Great, it works by cloning the input (using ReLU with inplace=False didn't work). Thank you @rotabulo, I'm closing the issue. I tried using a single relu just now; it also works. #6 is very helpful.
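A self-contained toy of the fix rotabulo describes, hedged as a sketch: the in-place stage below is just a tensor mul_, standing in for an InPlaceABN block, and the point is only where inplace=False and clone() go.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        # a lone activation must NOT be in-place when an in-place block follows
        self.pre_relu = nn.ReLU(inplace=False)
        self.weight = nn.Parameter(torch.randn(8, 8))

    def forward(self, x):
        x = self.pre_relu(x)
        out = x.clone()   # clone() shields the activations autograd saved for ReLU
        out.mul_(2.0)     # stand-in for the in-place normalization stage
        return out @ self.weight

net = Net()
net(torch.randn(4, 8, requires_grad=True)).sum().backward()  # runs without RuntimeError
```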
gharchive/issue
2018-08-17T01:09:33
2025-04-01T04:34:57.550340
{ "authors": [ "JamesKasperYu", "rotabulo" ], "repo": "mapillary/inplace_abn", "url": "https://github.com/mapillary/inplace_abn/issues/40", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2616875740
Geocoder input is left uncollapsed after selecting an item When auto-collapsing is used and the user clicks on a result, the box is left open. It should be auto-closed, as the user has made a definitive decision and the action is complete. I have created a workaround, but it is unsatisfactory because it force-clears the input:

// Auto-close on select; see: https://maplibre.org/maplibre-gl-geocoder/classes/default.html#on
geocoder.on('result', function () {
    document.querySelector('.maplibregl-ctrl-geocoder--button').click(); // Click to remove the search value
    document.querySelector('.maplibregl-ctrl-geocoder--input').blur();   // Move away from the search box
});

I tried using the _collapse method, which seems to be advertised as public despite the underscore, but this had no effect; I see its internal implementation ignores the call if there is focus. "Advertised as public" is probably incorrect documentation resulting from the migration to TypeScript; _ methods are private. Can you link to a jsbin showcasing this issue? Reproduce case:
1. Go to this Codepen example I found: https://codepen.io/tsamaya/pen/KKxGwLj
2. In the JavaScript, add collapsed: true to the options parameter of new MaplibreGeocoder in the example given
3. The map will load below, with the control collapsed by default
4. Hover over the control
5. Click in the geocoder input box
6. Type a location, e.g. Paris, and press return
7. [The map will go to the first location and place a marker, which is a very odd behaviour, but that's a separate problem]
8. Click on the drop-down that appears, to select a desired item
9. The map moves to that location
10. The box is left open, even though the user has made a selection. The user has to erase the contents and then blur away in order to get rid of it, which is not intuitive.
The last step should not be necessary: selection of an item from the drop-down should auto-close the control.
gharchive/issue
2024-10-27T21:22:50
2025-04-01T04:34:57.560089
{ "authors": [ "HarelM", "mvl22" ], "repo": "maplibre/maplibre-gl-geocoder", "url": "https://github.com/maplibre/maplibre-gl-geocoder/issues/183", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
1214328121
Enable WebWorker bundling Blocked by https://github.com/parcel-bundler/parcel/issues/8004 Solved by switching to esbuild! :tada:
gharchive/issue
2022-04-25T10:55:52
2025-04-01T04:34:57.573728
{ "authors": [ "maxammann" ], "repo": "maplibre/maplibre-rs", "url": "https://github.com/maplibre/maplibre-rs/issues/39", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2382218931
WASM decoder Could we use a decoder written in C++ or Rust and deploy it as WebAssembly? Could this give performance benefits? Yes, this is one of my goals: to write a Rust decoder and compile it to WebAssembly. Only with WASM can we use SIMD instructions in the browser and take full advantage of the encodings of the format.
gharchive/issue
2024-06-30T11:49:52
2025-04-01T04:34:57.574643
{ "authors": [ "mactrem", "wipfli" ], "repo": "maplibre/maplibre-tile-spec", "url": "https://github.com/maplibre/maplibre-tile-spec/issues/225", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1384652229
maplibre-rs monthly Preview: Hm strange it does not show up for me. Should it be at https://maplibre.org/news ? Ah I saw now #82
gharchive/pull-request
2022-09-24T12:09:11
2025-04-01T04:34:57.576230
{ "authors": [ "maxammann", "wipfli" ], "repo": "maplibre/maplibre.github.io", "url": "https://github.com/maplibre/maplibre.github.io/pull/81", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
2376505220
Fix deleted parent filter Both the /taskCluster endpoint and the /tasks/box/:left/:bottom/:right/:top endpoint should return the same tasks (the format of the data should be different, but the tasks themselves should be the same and the amount of tasks should be the same). Issue: The two endpoints, when passed the same filtering parameters, would pass back different results. Solution: Fixed the filtering conditions of these two endpoints: PUT /taskCluster PUT /tasks/box/:left/:bottom/:right/:top More details on the solution: Remove a manually added filter on enabled challenges from the taskCluster endpoint and use the built in search parameter instead. Fix the deleted challenges and projects filters to filter for challenges and projects that have a "false" deleted status. Remove manually added filter on deleted projects and challenges. related: https://github.com/maproulette/maproulette3/issues/2211 /taskCluster endpoint WHERE statement in sql query with the manually added "deleted" related filters: WHERE (tasks.location && ST_MakeEnvelope (-111.91609382363669, 40.714313359738256, -111.86871528359762, 40.748394608673685, 4326)) AND (tasks.status IN (0,3,6)) AND ((c.status IN (3,4,0,-1) OR c.status IS NULL)) AND (NOT c.requires_local) AND (c.enabled) AND (c.is_archived = false) AND c.deleted = false AND p.deleted = false` /tasks/box endpoint WHERE statement in sql query where filters on deleted project and challenges should be present: WHERE (p.enabled) AND (tasks.location && ST_MakeEnvelope (-111.91609382363669, 40.714313359738256, -111.86871528359762, 40.748394608673685, 4326)) AND (tasks.status IN (0,3,6)) AND ((c.status IN (3,4,0,-1) OR c.status IS NULL)) AND (NOT c.requires_local) AND (c.enabled) AND (c.is_archived = false) ORDER BY RANDOM() DESC LIMIT 1001 OFFSET 0; /taskCluster endpoint WHERE statement in sql query after change: WHERE (c.deleted = false AND p.deleted = false) AND (tasks.location && ST_MakeEnvelope (-111.90092327097236, 40.72337196386074, -111.87723400095282, 40.740412450956256, 4326)) AND (tasks.status IN (0,3,6)) AND ((c.status IN (3,4,0,-1) OR c.status IS NULL)) AND (NOT c.requires_local) AND (c.enabled) AND (c.is_archived = false) /tasks/box endpoint WHERE statement in sql query after change: WHERE (c.deleted = false AND p.deleted = false) AND (tasks.location && ST_MakeEnvelope (-111.90092327097236, 40.72337196386074, -111.87723400095282, 40.740412450956256, 4326)) AND (tasks.status IN (0,3,6)) AND ((c.status IN (3,4,0,-1) OR c.status IS NULL)) AND (NOT c.requires_local) AND (c.enabled) AND (c.is_archived = false) ORDER BY RANDOM() DESC LIMIT 1001 OFFSET 0; The reason why p.enabled is missing from the new PUT /tasks/box/:left/:bottom/:right/:top endpoint is because i removed the manually added condition that added it to that specific endpoint. 
The filter that is supposed to be used is blocked by a condition:

if (projectSearch) {
    filterList = this.filterProjects(params) :: this.filterProjectEnabled(params) :: filterList
}

The projectSearch value is determined by this function:

/**
 * Filters by p.display_name with a like %projectSearch%
 * @param params with inverting on 'ps'
 */
def filterProjectSearch(params: SearchParameters): FilterGroup = {
  FilterGroup(
    List(
      FilterParameter.conditional(
        Project.FIELD_DISPLAY_NAME,
        s"'${SQLUtils.search(params.projectSearch.getOrElse(""))}'",
        Operator.ILIKE,
        params.invertFields.getOrElse(List()).contains("ps"),
        true,
        params.projectSearch != None,
        Some("p")
      )
    )
  )
}

At the moment this change is necessary for the endpoints' outputs to match, but further investigation of why that condition is needed is warranted. Removing the if (params.projectEnabled.getOrElse(false)) block might result in some edge case, but it's fine to merge
gharchive/pull-request
2024-06-27T00:58:03
2025-04-01T04:34:57.584117
{ "authors": [ "CollinBeczak", "ljdelight" ], "repo": "maproulette/maproulette-backend", "url": "https://github.com/maproulette/maproulette-backend/pull/1135", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
208800209
cannot load template with template url in marker infoWindow Hello everyone, I am trying to inject a template instead of a String into the marker window, but even after compiling the html template, the marker content is still treated as a string. Please help me. Thank you.

my directive:

.directive('loadWindowTemplate', [function () {
    return {
        templateUrl: 'home/window.html'
    };
}])

my controller code:

var templateUrl = $compile('<div load-window-template></div>')($scope);
map.addMarker({
    position: {lat: geoLocationObj.lat, lng: geoLocationObj.lng},
    title: templateUrl[0].innerHTML,   // here I am getting a string instead of the html page...
    snippet: templateUrl[0].innerHTML,
    animation: plugin.google.maps.Animation.BOUNCE
}, function(currentMarker) {
    marker = currentMarker;
    marker.showInfoWindow();
    marker.on(plugin.google.maps.event.INFO_CLICK, function() {
        alert("Hello world!");
    });
});

The plugin v1 only allows you to use a string or a base64-encoded image. The plugin v2 allows you to use HTML as well. https://github.com/mapsplugin/v2.0-demo/issues/5#issuecomment-257238253
gharchive/issue
2017-02-20T07:09:53
2025-04-01T04:34:57.594256
{ "authors": [ "Ashish121", "wf9a5m75" ], "repo": "mapsplugin/cordova-plugin-googlemaps", "url": "https://github.com/mapsplugin/cordova-plugin-googlemaps/issues/1314", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
294861850
New markers added with MarkerCluster.addMarkers don't show until the map is moved I'm submitting a ... (check one with "x")
[ ] question
[x] any problem or bug report
[ ] feature request
If you choose 'problem or bug report', please select OS: (check one with "x")
[x] Android
[ ] iOS
cordova information: (run $> cordova plugin list)
com.googlemaps.ios 2.5.0 "Google Maps SDK for iOS"
cordova-plugin-googlemaps 2.2.2 "cordova-plugin-googlemaps"
cordova-plugin-whitelist 1.3.3 "Whitelist"
Current behavior: When I add markers inside a markerCluster with its "addMarkers" method, I can't see the new markers until I move the map.
Expected behavior: I should see the new markers when added, without moving the map.
Screen capture or video record: mapsTest.zip
Related code, data or error log (please format your code or data): https://github.com/danieleLewis/cordova-map-test-cluster
Not a bug. The marker cluster redraws when the map is moved. So, if I add a marker to the map it displays, but if I add a marker inside a cluster I need to move the map to see it. How can this be normal? On iOS I don't need to move anything for the new markers to display. Is there a way to trigger the redraw of MarkerCluster from js? If you look at the source code, you will notice the answer (that's why it's open source) https://github.com/mapsplugin/cordova-plugin-googlemaps/blob/master/www/MarkerCluster.js Can you just tell me how to trigger the redraw? I don't have the time to read 1200 lines of js. I tried with map.panBy(1,1) but it doesn't always work. Try cordova-plugin-googlemaps@2.2.3 The 2.2.3 update fixed this. Now it redraws on map move, thank you! You are welcome.
gharchive/issue
2018-02-06T18:17:26
2025-04-01T04:34:57.601435
{ "authors": [ "danieleLewis", "wf9a5m75" ], "repo": "mapsplugin/cordova-plugin-googlemaps", "url": "https://github.com/mapsplugin/cordova-plugin-googlemaps/issues/2053", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
143383709
Don't merge transit stations zoom 15+ From discussion in #506: once we're zoomed in on the map, it's no longer useful to "merge" stations in the same relation. Two examples in London where the rail station is winning over the subway station (the subway stations are totally missing from tiles): Euston: http://vector.mapzen.com/osm/all/15/16371/10893.topojson?api_key=vector-tiles-HqUVidw Waterloo: http://vector.dev.mapzen.com/osm/all/15/16373/10896.topojson?api_key=vector-tiles-HqUVidw From the other issue: Examples: https://www.openstreetmap.org/edit?node=3638795617#map=19/51.50284/-0.11280 https://www.openstreetmap.org/edit?node=3638795618#map=19/51.50299/-0.11396 Sure, let's give zoom 15 a try (to stop dedup'ing). There's certainly room to place the multiple icons by then, and it looks weird / broken to not show them when everyone else does (labeling is labeling, shrug). Looks like the same problem is happening at Euston? Linked up to this failed PR: https://github.com/mapzen/vector-datasource/pull/636 Looks like Matt's function considers a few more OSM features than we currently export in tiles. p.public_transport IN ('stop', 'stop_position', 'tram_stop')) You'll need to add those into the POIs calculation in order for the expected number of features around London Waterloo station to show up at zoom 15. (If they are also polygons, consider adding them to the transit layer as well.) Progress in branch olga/637-transit-stations-merge. To import all the required test data, since I have a custom-built osm2pgsql, I need to modify test-data-update-osm.sh like: /Users/olga/Documents/osm2pgsql/build/osm2pgsql -s -C 1024 -S osm2pgsql.style --hstore-all -d $db -a -H localhost update.osc Punting to v1.1
gharchive/issue
2016-03-24T23:07:13
2025-04-01T04:34:57.632518
{ "authors": [ "nvkelso", "okavvada" ], "repo": "mapzen/vector-datasource", "url": "https://github.com/mapzen/vector-datasource/issues/637", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
316140652
Duration: Changes to skinning engine from Kodi Krypton to Leia Hi, since there is a change so that duration can use time format strings in Leia: $INFO[ListItem.Duration(hh),, h] $INFO[ListItem.Duration(mm),, min] or $INFO[ListItem.Duration(h),, h] $INFO[ListItem.Duration(m),, min] or $INFO[ListItem.Duration] So that just works correctly in Krypton, but gets issues in Leia. Also, it's not needed in Leia anymore, because you can use Kodi labels to separate duration types (hh:mm) or (mins). A possible approach in Leia:

def get_duration(duration):
    '''transform duration time in minutes to hours:minutes'''
    if not duration:
        return {}
    if isinstance(duration, (unicode, str)):
        duration = duration.replace("min", "").replace(" ", "").replace(".", "")
    try:
        total_minutes = int(duration)
        if total_minutes < 60:
            hours = 0
        else:
            hours = total_minutes // 60  # integer division
        minutes = total_minutes - (hours * 60)
        formatted_time = "%s:%s" % (hours, str(minutes).zfill(2))
    except Exception as exc:
        log_exception(__name__, exc)
        return {}
    return {
        "Duration": formatted_time,
        "Duration.Hours": hours,
        "Duration.Minutes": minutes,
        "Runtime": total_minutes,
        "RuntimeExtended": "%s %s" % (total_minutes, xbmc.getLocalizedString(12391)),
        "DurationAndRuntime": "%s (%s min.)" % (formatted_time, total_minutes),
        "DurationAndRuntimeExtended": "%s (%s %s)" % (formatted_time, total_minutes, xbmc.getLocalizedString(12391))
    }

Or $INFO[ListItem.Duration(mins),, min]
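As a standalone illustration of the hh:mm math the helper performs (no Kodi imports needed here; the value is made up):

```python
total_minutes = 95
hours, minutes = divmod(total_minutes, 60)
print("%s:%s" % (hours, str(minutes).zfill(2)))  # -> 1:35
```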
gharchive/issue
2018-04-20T06:14:51
2025-04-01T04:34:57.695785
{ "authors": [ "marduklev" ], "repo": "marcelveldt/script.module.metadatautils", "url": "https://github.com/marcelveldt/script.module.metadatautils/issues/30", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2591563516
🛑 Grafana is down In d95e19f, Grafana (https://grafana.bojko.eu) was down: HTTP code: 523 Response time: 2684 ms Resolved: Grafana is back up in 9fe63b1 after 50 minutes.
gharchive/issue
2024-10-16T11:24:07
2025-04-01T04:34:57.698216
{ "authors": [ "marcinbojko" ], "repo": "marcinbojko/upptime", "url": "https://github.com/marcinbojko/upptime/issues/880", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2281842998
Cache header handler Description: Set cache headers when traversing the cache module. Acceptance criteria: Implementation details: Nice to have: https://datatracker.ietf.org/doc/rfc9211/
gharchive/issue
2024-05-06T21:50:02
2025-04-01T04:34:57.708178
{ "authors": [ "marco-svitol" ], "repo": "marco-svitol/quaestio-be", "url": "https://github.com/marco-svitol/quaestio-be/issues/138", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1665687528
🛑 Video Call is down In 3d67d61, Video Call (https://jitsi01.diversolatam.com/test) was down: HTTP code: 0 Response time: 0 ms Resolved: Video Call is back up in b53f346.
gharchive/issue
2023-04-13T04:30:06
2025-04-01T04:34:57.710424
{ "authors": [ "marcoadasilvaa" ], "repo": "marcoadasilvaa/health", "url": "https://github.com/marcoadasilvaa/health/issues/1960", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1021820157
build(angular): migrate to Angular 12 fix #2 Pull Request Test Coverage Report for Build 1324454887 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 100.0% Totals Change from base Build 1324390579: 0.0% Covered Lines: 4 Relevant Lines: 4 💛 - Coveralls
gharchive/pull-request
2021-10-09T22:17:38
2025-04-01T04:34:57.714780
{ "authors": [ "coveralls", "marcobuschini" ], "repo": "marcobuschini/angular-application-dev-ops-starter", "url": "https://github.com/marcobuschini/angular-application-dev-ops-starter/pull/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1405175906
Add type declarations for methods I've been missing type declarations for the methods when using this wonderful component in my TypeScript project. Using // @ts-ignore has been the solution so far, as described here: https://github.com/marcocesarato/react-native-big-list/issues/119 This PR adds type declarations for the methods. @marcocesarato Thanks for approving. Any idea when it will be published to npm?
gharchive/pull-request
2022-10-11T20:16:19
2025-04-01T04:34:57.716433
{ "authors": [ "larsmunkholm" ], "repo": "marcocesarato/react-native-big-list", "url": "https://github.com/marcocesarato/react-native-big-list/pull/263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1863628243
The ad is not shown the first few times In my page, I added a Rewarded ad. In the constructor of the page I added:

public ProfilePage(ProfilePageViewModel model)
{
    InitializeComponent();
#if ANDROID || IOS
    CrossMauiMTAdmob.Current.TagForChildDirectedTreatment = MTTagForChildDirectedTreatment.TagForChildDirectedTreatmentUnspecified;
    CrossMauiMTAdmob.Current.TagForUnderAgeOfConsent = MTTagForUnderAgeOfConsent.TagForUnderAgeOfConsentUnspecified;
    CrossMauiMTAdmob.Current.MaxAdContentRating = MTMaxAdContentRating.MaxAdContentRatingG;
#endif
}

then on Clicked I added this code:

private async void buttonEarn_Clicked(object sender, EventArgs e)
{
#if IOS || ANDROID
    CrossMauiMTAdmob.Current.LoadRewarded(DeviceInfo.Current.Platform == DevicePlatform.Android ? Constants.AndroidReward1 : Constants.iOSReward1);
    CrossMauiMTAdmob.Current.ShowRewarded();
#else
    await DisplayAlert(AppResources.AdvNewPointsTitle, AppResources.AdvNoPlatformSupported, AppResources.OK);
#endif
}

The result is that I have to click a few times before the app shows the rewarded ad. Here's a video: https://github.com/marcojak/MauiMTAdmob/assets/9497415/06d36c44-f76b-4865-95f6-9c378f775273 After the first time, to click again on the button, I have to restart the app. Do you still have the same issue on the latest version? No, I think it is fixed. Thank you.
gharchive/issue
2023-08-23T16:08:21
2025-04-01T04:34:57.718864
{ "authors": [ "erossini", "marcojak" ], "repo": "marcojak/MauiMTAdmob", "url": "https://github.com/marcojak/MauiMTAdmob/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1500532708
Using the onclick function to display node properties of a network
Hello @marcomusy, I'm exporting a network graph to an HTML file and I would like to display the values of the graph nodes using the onclick method. For instance, the time-series values of node 1 are stored in a dict: data['node1'] = [0, 1, 2, 3, 4, 6]. I could use the export function to export the network in x3d format:

import networkx as nx
from vedo import *

G = nx.gnm_random_graph(n=5, m=10)
nxpos = nx.spring_layout(G, dim=3, seed=1)
nxpts = [nxpos[pt] for pt in sorted(nxpos)]
nx_lines = [(nxpts[i], nxpts[j]) for i, j in G.edges()]
pts = Points(nxpts, r=10).lighting('off')
edg_w = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
edg = []
for n in range(0, 10):
    line = Line(nx_lines[n]).lw(edg_w[n])
    edg.append(line)
plt = Plotter(N=1, size=(320, 240))
show(pts, *edg, axes=True, bg='w', title='plot')
plt.export('network.x3d', binary=False)

I had a look at the onclick method, but I am not sure how to implement it for my use case and display the time-series values corresponding to each node in a plot window. For example, if I select node 1 and node 2, I would like to see two curves displayed in the plot window (sample). Could you please help me with this? Thanks a lot for your time and kind attention.
Hello @marcomusy, thanks a lot! This works great in a Python script. You mean you want this to end up in an HTML web server? That is not possible (in vedo)! Yes, I would like to do the same in an HTML web server. I tried to look into Cytoscape for this task, but it mostly supports only 2D networks. I'm not sure if you can do it by modifying the X3D file. Thanks for the tip, I will surely check how this can be done. Could you please suggest other packages that I could check for doing this in an HTML web server?
Hello @marcomusy, we tried k3d. Unfortunately, it doesn't support click callback functionality for points and lines at this moment. Callbacks are supported for:
k3d.marching_cubes
k3d.mesh
k3d.surface
k3d.texture
k3d.voxels
k3d.sparse_voxels
k3d.voxels_group
I would like to know if I can export the network that we create in vedo as a mesh object and use it in k3d.
Hi @marcomusy, this is a kind reminder.
Hi @DeepaMahm, sorry for the late reply. If you aim at serving this through the web and the whole thing is just 2D, you should probably go for tools like plotly or altair...
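For the desktop (non-web) case, a minimal sketch of a click callback in vedo could look like the following. This is an illustration only: the event attribute names follow recent vedo releases and may differ in older versions, and the data dict of per-node time series is a hypothetical stand-in for the real values.

import numpy as np
from vedo import Points, Plotter
from vedo.pyplot import plot

# Hypothetical per-node time series, keyed by point index
data = {0: [0, 1, 2, 3, 4, 6], 1: [1, 0, 2, 1, 3, 2]}

pts = Points(np.random.rand(5, 3), r=12)
plt = Plotter()

def on_click(event):
    # Ignore clicks that did not land on the point cloud
    if event.actor is not pts:
        return
    # Index of the node nearest to the picked 3D position
    idx = pts.closest_point(event.picked3d, return_point_id=True)
    series = data.get(idx)
    if series is not None:
        fig = plot(list(range(len(series))), series, title=f"node {idx}")
        plt.add(fig)   # overlay the curve in the render window
        plt.render()

plt.add_callback("mouse click", on_click)
plt.show(pts, axes=True)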
gharchive/issue
2022-12-16T16:41:12
2025-04-01T04:34:57.725439
{ "authors": [ "DeepaMahm", "marcomusy" ], "repo": "marcomusy/vedo", "url": "https://github.com/marcomusy/vedo/issues/763", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1958945228
Implement Articles page
Here's a rough mockup of how such a page might look. I'd like to work on this issue. Awesome, thanks @SmithWebDev! We currently have an almost empty articles.yml file for the content at: https://github.com/marcoroth/hotwire.io/blob/4a71f10e6b67fa9c10c64a43013854eae2de7e6b/app/content/data/articles.yml#L1 Feel free to add some articles so you can test it properly. We probably also want to add more columns for published date, author, and so on.
gharchive/issue
2023-10-24T10:22:18
2025-04-01T04:34:57.727741
{ "authors": [ "SmithWebDev", "marcoroth" ], "repo": "marcoroth/hotwire.io", "url": "https://github.com/marcoroth/hotwire.io/issues/53", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
169363206
Different/configurable distance metrics
Background
We've been doing some work with LIME in relatively high-dimension problems (1000+ features), and have found that the current distance metric doesn't tend to perform well in that context [related paper]. For very high dimensions, the euclidean distance is inflated to a large value, which, given the kernel you are using:

sqrt(exp(-d^2 / k^2))

can only be compensated for by using very large kernel widths. A simple example is the classification problem defined by using a default sklearn RandomForestClassifier on a dataset created with:

sklearn.datasets.make_classification(n_classes=2, n_features=1000, n_samples=1000)

In this case, a kernel width of 3 leads to the first record (the sample being interpreted itself) having a weight of 1, while all other samples are weighted at near 0, which of course leads to a bad interpretation. Using very large kernel widths increases the average score (~0.3 with a width of 10e9), but in general, I think this just isn't the best metric for higher-dimension problems. I've tried some other metrics, and without tuning the kernel width they perform much better than the euclidean norm does with a well-tuned kernel width. Just for reference, using the default kernel width of 3 and changing only the distance metric on the above problem led to a range of scores between 0 and 1, with cityblock and L1 distance performing best of those I tried, at the cost of a small slowdown. The scores per sample within each run had low variance, but the scores per metric differed quite a bit, which indicates to me that this is an important tuning parameter. Chebyshev distance performed better than euclidean but not the best, and was slightly faster than the current euclidean code.
Changes
This PR exposes an option to the user to specify a metric from this pretty comprehensive list: sklearn pairwise distances, defaulting to euclidean for tabular and cosine for text, which is the same as the current measures. So for existing users, this changes nothing, but optionally, for cases like I described above, you'll be able to use different metrics. I've also included a benchmarks folder which has a script for each of the explainers that loops through a sample dataset and records scores and times; it may be useful for other similar optimizations.
This is a problem that I had already identified and was on my 'todo' stack. The problem with euclidean distance for tabular is not only that it is bad for high dimensions, but that it is really hard to come up with a sensible default value (as you noted yourself). The benchmark folder is also a good idea. Thank you very much, I will merge this soon. Cool, happy to help. Perhaps it would be useful to have a contributing.md file with some of this kind of 'todo' item for existing or other potential contributors? Yes, that's a great idea. When I have a bit of time, I'll try to write a file with known problems and ideas. If you have any intuition or suggestions for a reasonable distance/kernel pair for tabular data, let me know! Thanks again for contributing. I'll put together a PR with some experimental results, but I think just scaling the distance by the number of columns before squaring it makes a single default kernel width work well with a wider range of datasets. I've only tested it with the Euclidean distance so far, but I would expect pretty similar improvements or no change for the other metrics.
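A minimal sketch of how the option introduced here can be used: lime's explain_instance accepts a distance_metric keyword (any metric name understood by sklearn's pairwise distances). The dataset and model below are toy stand-ins matching the PR's benchmark setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_classes=2, n_features=1000, n_samples=1000, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(X.shape[1])],
    discretize_continuous=False,
)

# 'cityblock' (L1) performed best in the high-dimensional benchmark above
exp = explainer.explain_instance(
    X[0],
    clf.predict_proba,
    num_features=10,
    distance_metric="cityblock",
)
print(exp.as_list())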
gharchive/pull-request
2016-08-04T12:32:14
2025-04-01T04:34:57.744092
{ "authors": [ "marcotcr", "wdm0006" ], "repo": "marcotcr/lime", "url": "https://github.com/marcotcr/lime/pull/13", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
993594703
Can't find Qt Designer UI file
Looking for the original Qt Designer .ui file before it was converted to gui.py. Here it is! You can now access it from the repo => https://github.com/marcpinet/batch-downloader-nyaa.si/tree/main/pyqt-ui Please note that this file only contains the GUI with the widgets put in the right place. Method connections, etc. are only in the .py.
gharchive/issue
2021-09-10T21:06:01
2025-04-01T04:34:57.746375
{ "authors": [ "marcpinet", "xAkai97" ], "repo": "marcpinet/batch-downloader-nyaa.si", "url": "https://github.com/marcpinet/batch-downloader-nyaa.si/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
972143538
How to use Data Links
Each GanttTask component seems to have data link capability. However, I can't find out how to invoke the link. Please add the link to the tooltips. I just discovered that Data Links support seems to be broken in Grafana 8. Fixed in v0.7.4.
gharchive/issue
2021-08-16T22:21:19
2025-04-01T04:34:57.755009
{ "authors": [ "marcusolsson", "n-arakawa" ], "repo": "marcusolsson/grafana-gantt-panel", "url": "https://github.com/marcusolsson/grafana-gantt-panel/issues/49", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1868085275
A way to unload a model?
Hi there, I'm using ctransformers with the Oobabooga text-generation-webui and I can load models fine. All the other loaders let me load, unload, and reload models at will. When I load a model with ctransformers it works fine, but using the unload button does nothing, and if I try to load another model it loads it on top of the model already in VRAM, usually causing a CUDA out-of-memory failure. I looked through your code a bit and didn't find any interface for requesting an unload of a model. Is this something that can be implemented? Or if it already exists, can you tell me how to make it happen? Many thanks. Remo
Hi, https://github.com/oobabooga/text-generation-webui/pull/3711 should fix it.
Hey, is there a way to unload the model via the ctransformers library itself? I'm not using any other library on top of ctransformers, or a web GUI as such.
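The library does not appear to expose a dedicated unload call (an assumption based on this thread). A common Python workaround is to drop every reference to the model and force a garbage-collection pass so the wrapper's finalizer can run; whether the backing RAM/VRAM is actually freed depends on the library's finalizers.

import gc
from ctransformers import AutoModelForCausalLM

# Example model id taken from the ctransformers README
llm = AutoModelForCausalLM.from_pretrained("marella/gpt-2-ggml")
print(llm("Hello"))

# "Unload": release the only reference and collect garbage
del llm
gc.collect()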
gharchive/issue
2023-08-26T12:44:45
2025-04-01T04:34:57.766168
{ "authors": [ "Remowylliams", "marella", "yashpundir" ], "repo": "marella/ctransformers", "url": "https://github.com/marella/ctransformers/issues/111", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2099585527
iOS build issue
Platform: iOS. Cloned main at commit 3850942410b9e9d4fcbfc3ca2a8ad601198ec0f4 and got the error:
repo/node_modules/react-native-skottie/cpp/JsiSkSkottie.h:11:10 'modules/skottie/include/Skottie.h' file not found
Hey, can you please retest with the latest version? I think it should be fixed. Thanks
Update: upon using quotes instead of angle brackets, it builds successfully. But upon running the application and using Skottie, I get a non-std C++ exception.
Question: why do you install from the git repo instead of installing the package? The package artifact on npm actually contains files that are not present in the repo. I had initially installed from the git repo because the changes had not been released and I was too eager to give it a try. Later, I installed from the repo. Now, upon clearing my Xcode cache and rebuilding, all errors have vanished. I really appreciate your quick responses and fixes!
gharchive/issue
2024-01-25T05:14:10
2025-04-01T04:34:57.771469
{ "authors": [ "hannojg", "relaxxpls" ], "repo": "margelo/react-native-skottie", "url": "https://github.com/margelo/react-native-skottie/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1000712390
Fixes GH workflow's name
Problem
The GitHub Actions workflow, test.yml, doesn't have an appropriate name.
Solution
This PR changes the name from 'Hello Reason' to 'Deku Test Workflow'.
gharchive/pull-request
2021-09-20T08:41:47
2025-04-01T04:34:57.795949
{ "authors": [ "EduardoRFS", "callistonianembrace" ], "repo": "marigold-dev/sidechain", "url": "https://github.com/marigold-dev/sidechain/pull/193", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1945658228
max width and max height utilities
Call .max_width(pixels) or .max_height(pixels) on any HTML object to constrain its max width or height, e.g., mo.hstack([...]).max_height(height=400).
Closing in favor of a more general style escape hatch, since it can be difficult to anticipate desired behavior (e.g., with overflow or other things). Will open a PR for that later.
gharchive/pull-request
2023-10-16T16:36:57
2025-04-01T04:34:57.799093
{ "authors": [ "akshayka" ], "repo": "marimo-team/marimo", "url": "https://github.com/marimo-team/marimo/pull/218", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
384024487
Housekeeping
Setting up ESLint + basic CLI 🎉 Try it using:
$ npx vladimyr/eurojackpot#housekeeping
Btw, you don't need cheerio for this. There is an API: https://www.lottoland.com/api/drawings/euroJackpot 🦊
Thanks for the suggestions, especially for the API. Will definitely implement some of them 🐐
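For illustration (in Python, although the project itself is Node-based), fetching the drawings endpoint mentioned above could look like the sketch below. The response schema isn't documented in this thread, so the code only prints whatever top-level keys the API returns rather than assuming any field names:

import requests

resp = requests.get("https://www.lottoland.com/api/drawings/euroJackpot", timeout=10)
resp.raise_for_status()
data = resp.json()

# Inspect the top-level keys instead of assuming a schema
print(sorted(data.keys()))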
gharchive/pull-request
2018-11-24T21:43:54
2025-04-01T04:34:57.800838
{ "authors": [ "marinko-peso", "vladimyr" ], "repo": "marinko-peso/eurojackpot", "url": "https://github.com/marinko-peso/eurojackpot/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2453401115
Hello, can you enable a zIndex setting? The notification seems to have a lower z-index than the daisyUI modal window, so it is not displayed. It would be nice if I could set it separately. Hi there, in the new version you can now pass the index as a parameter. Thanks :)
gharchive/issue
2024-08-07T12:47:00
2025-04-01T04:34:57.801889
{ "authors": [ "mariojgt", "nebula0225" ], "repo": "mariojgt/wind-notify", "url": "https://github.com/mariojgt/wind-notify/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
137171457
Task Scheduler and -AllowEscapedDotsAndSlashes
Hi, I encountered the following error: if I run my script directly it works fine, but if I start it with the Windows Task Scheduler, the Get-RabbitMQQueue cmdlet throws the error "A parameter cannot be found that matches parameter name 'AllowEscapedDotsAndSlashes'". I fixed it by removing the parameter from GetItemsFromRabbitMQApis.ps1, but maybe you can have a look at it. The URL I give to the cmdlet has no characters which need to be escaped, so I don't need the functionality, but maybe others do. If you need any more details, I'll be happy to provide them. Kind regards, secana
I can confirm I get the same error with any command making use of the Invoke-RestMethod proxy. It looks like it does not load the proxy under the Task Scheduler (even when doing ipmo -force -noClobber). This is an issue when doing integration testing with Test-Kitchen, as it uses the Task Scheduler to run the Pester tests. Investigating...
It looks like a Scheduled Task does not have a caller's session to load the ScriptsToProcess into, so when the module calls Invoke-RestMethod it only knows the core cmdlet (no override in the parent scope). One fix seems to be to move those functions out of ScriptsToProcess and dot-source them in the PSM1, or maybe add them similarly to the NestedModule ones. You'd have to do so for both 'PreventUnEscapeDotsAndSlashesOnUri.ps1' and 'Invoke_RestMethodProxy.ps1', and remove them from the ScriptsToProcess field in the PSD1.
Hi, I am using this piece of code. I still get the error while running, but it works through the ISE.

# Encode process monitor report file full path
if ($restResponse.outputFileName -ne $null) {
    $dmReportFileName = $restResponse.outputFileName.Substring($restResponse.outputFileName.LastIndexOf("/") + 1)
    $encodeDMReportFilePath = [System.Net.WebUtility]::UrlEncode($restResponse.outputFileName.replace('/','\\'))
    # Download output file
    $restResponse = Invoke-RestMethod -Uri ($fileURL + $encodeDMReportFilePath + "/contents") -Method Get -Headers $headers -OutFile ($Root_Path + "\Outbox\" + $dmReportFileName) -ContentType "application/octet-stream"
}

Any help?
gharchive/issue
2016-02-29T07:14:43
2025-04-01T04:34:57.809195
{ "authors": [ "gaelcolas", "rickytp", "secana" ], "repo": "mariuszwojcik/RabbitMQTools", "url": "https://github.com/mariuszwojcik/RabbitMQTools/issues/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
232976535
gvm2 messes with my prompt

wes at Wes-Outreach-Macbook in ~ ⌚ 11:55:59 ‹ruby-2.3.3›
$ source "$HOME/.gvm/scripts/gvm"
[magenta]wes at [yellow]Wes-Outreach-Macbook in [green]~ ⌚ [red]11:56:01 [red]‹ruby-2.3.3›
$

Hrmm. That's odd, because GVM2 does not actually do anything to change the prompt, or at least shouldn't. I'll have to take a look.
I just tried this on the most recent release (as I was also having an installation problem, see #3). This is still a problem. This is my oh-my-zsh theme:

# vim:ft=zsh ts=2 sw=2 sts=2
ruby_version() {
  echo `rbenv version | sed -e 's/ .*//'`
}

rbenv_version() {
  rbenv version 2>/dev/null | awk '{print $1}'
}

ruby_prompt_info() {
  if [ -e ~/.rvm/bin/rvm-prompt ]; then
    echo "%{$fg_bold[red]%}‹$(rvm_current)›%{$reset_color%}"
  elif which rbenv &> /dev/null; then
    echo "%{$fg_bold[red]%}$(rbenv_version)%{$reset_color%}"
  fi
}

PROMPT='
%{$fg[magenta]%}%n%{$reset_color%} at %{$fg[yellow]%}%m%{$reset_color%} in %{$fg_bold[green]%}${PWD/#$HOME/~}%{$reset_color%}$(git_prompt_info) ⌚ %{$fg_bold[red]%}%*%{$reset_color%} $(ruby_prompt_info)
$ '

# Must use Powerline font, for \uE0A0 to render.
ZSH_THEME_GIT_PROMPT_PREFIX=" on %{$fg[magenta]%}\uE0A0 "
ZSH_THEME_GIT_PROMPT_SUFFIX="%{$reset_color%}"
ZSH_THEME_GIT_PROMPT_DIRTY="%{$fg[red]%}!"
ZSH_THEME_GIT_PROMPT_UNTRACKED="%{$fg[green]%}?"
ZSH_THEME_GIT_PROMPT_CLEAN=""

I think maybe the load order of scripts is affecting variables or something? I'm still not sure; I've done some searching for fg and reset_color, but I don't see anything like that. It seems that $fg and/or $reset_color stop working when the gvm source line is uncommented in my .zshrc profile. I've looked through the various scripts that get sourced, starting with the gvm script entrypoint. Nothing stands out.

[[ -s "$HOME/.gvm/scripts/gvm" ]] && source "$HOME/.gvm/scripts/gvm"

Hi @ghostsquad. I haven't looked further into this prompt problem yet. I wanted to get the code base into better shape. Thanks for the oh-my-zsh theme snippet. That will be helpful. Prompt problems have been identified, as reported in #3 (Installation fails in zsh). While the exact issue reported here has not been duplicated, I'm wondering if it's the same problem and the different theme just displays it in a different way. It's going to take some time to refactor the code to address this problem: it stems from array incompatibility between bash and zsh. I'm scheduling a fix for 0.10.8.
I recently realized this array problem in some other scripts of mine. Would it be easier to write this code in Go and distribute a static binary instead of using shell scripts? For GVM3 I'm considering a rewrite in Go exactly because it would provide better portability. Support for zsh was not on my radar when I started the rewrite for GVM2. The current plan is to finish GVM2 with zsh compatibility, and then sometime in 2018 I will start the journey towards GVM3.
I am facing this problem as well. Is there any update on this? Is there any workaround that we can apply locally?
@markeissler how about this issue now :-)
I moved on to use https://asdf-vm.com/#/ which supports Go among many other things. asdf is a very nice tool to manage versions for many languages. Thanks @arminbalalaie @markeissler @fd
I found the reason why gvm2 messes with the zsh prompt: it runs setopt KSH_ARRAYS in the scripts/function/_shell_compat file:

[[ -n $ZSH_VERSION ]] && setopt KSH_ARRAYS

When I comment out this line, my oh-my-zsh prompt renders normally, so I think the problem is there. Hope this helps other gvm2 users. I have fixed this zsh compatibility issue in my repository => https://github.com/keonjeo/gvm.
gharchive/issue
2017-06-01T18:56:39
2025-04-01T04:34:57.831203
{ "authors": [ "arminbalalaie", "ghostsquad", "keonjeo", "markeissler", "wesm-outreach" ], "repo": "markeissler/gvm2", "url": "https://github.com/markeissler/gvm2/issues/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
186106145
Fixed entry for editor context menu. For ticket #37.
gharchive/pull-request
2016-10-29T23:03:51
2025-04-01T04:34:57.832531
{ "authors": [ "Chris2011" ], "repo": "markiewb/nb-git-open-in-external-repoviewer", "url": "https://github.com/markiewb/nb-git-open-in-external-repoviewer/pull/38", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
57730558
compile from another hard disk
Hello, I just found that compiling LESS files on a hard disk different from the one WinLess is installed on does not work. Do you have a solution? Thank you. The same here, do you have a solution for this?
gharchive/issue
2015-02-15T13:37:23
2025-04-01T04:34:57.833789
{ "authors": [ "luger95", "robertroth" ], "repo": "marklagendijk/WinLess", "url": "https://github.com/marklagendijk/WinLess/issues/151", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1181233405
DatabaseClient.eval() always returns true when the result is a boolean
When evaluating a piece of JavaScript on the server using DatabaseClient.eval(), the client seems to always return true when the expression results in a single boolean, even when the result is false. To reproduce:

require('marklogic')
  .createDatabaseClient({
    host: 'localhost',
    port: '8000',
    database: 'neo-elec-content',
    user: 'admin',
    password: 'admin',
    authType: 'DIGEST'
  })
  .eval(`false;`)
  .result()
  .then(res => console.dir(res));

returns the following:

[ { format: 'text', datatype: 'boolean', value: true } ]

We would of course expect value to be false. If you change the expression in the eval() to, say, '42;', then you get the correct result:

[ { format: 'text', datatype: 'integer', value: 42 } ]

I use the marklogic package version 2.9.0 (installed last week with npm i marklogic).
Looks like this is where that response object is generated for boolean: https://github.com/marklogic/node-client-api/blob/master/lib/server-exec.js#L75 and Boolean(data.content) would produce the correct boolean value if content is false:

let data = { "content": false };
Boolean(data.content);

So, the issue is further upstream in how data.content is produced from the results, and that looks like it is done in the marshal() function: https://github.com/marklogic/node-client-api/blob/ee49df1fa05bb29908f931b2f13bce69470c2567/lib/mlutil.js#L215
Looks like it may be the marshal() function https://github.com/marklogic/node-client-api/blob/ee49df1fa05bb29908f931b2f13bce69470c2567/lib/mlutil.js#L215 where boolean falls through to the return String(data) at the end. I think it needs an extra if statement to handle boolean:

else if (typeof data == "boolean") {
  return data;
}

If that is the case, when making a fix for boolean, we should ensure that we have full coverage for all primitive types (and figure out if/when we will be upgrading the MarkLogic V8 engine so we can handle bigint). This test to verify boolean only verifies true, which is why the bug was not discovered. It would be helpful to have tests for both true and false: https://github.com/marklogic/node-client-api/blob/master/test-basic/server-exec.js#L78
The fix for this issue will be available in the upcoming release. Closing this since the fix is now available in version 3.5.0. Thanks!
gharchive/issue
2022-03-25T20:19:06
2025-04-01T04:34:57.847829
{ "authors": [ "anu3990", "fgeorges", "hansenmc" ], "repo": "marklogic/node-client-api", "url": "https://github.com/marklogic/node-client-api/issues/669", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1530130568
Add Pesticide Chrome Extension
Item: Pesticide for Chrome
Link: (chrome://extensions/?id=bakpbgckdnepkmkeaiomhmfcnejndkbi)
Short Description: This is a Chrome extension that allows web developers to see elements, such as divs, by giving the website a skeleton-like look. It also provides hover functionality, so users can get an idea of a particular element's properties. This extension is very helpful, as it eliminates the need to constantly check the Chrome inspection tool.
I have added the extension link and included it in the README file. Please review the code and let me know if you have any suggestions or if there are any issues that need to be addressed before merging this pull request. Thank you!
Extension Link
Free Resource: Yes, this Chrome extension is free!
@lokeshvasnik The link is broken.
gharchive/pull-request
2023-01-12T05:26:09
2025-04-01T04:34:57.854214
{ "authors": [ "lokeshvasnik", "markodenic" ], "repo": "markodenic/web-development-resources", "url": "https://github.com/markodenic/web-development-resources/pull/422", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
85723117
Builds and badges
You will need to enable the repo on coveralls.io for code coverage reports. Uncheck "Leave comment?" on the config page to avoid those annoying coverage report comments and just get updates through the status API. I ran phpcbf to auto-fix CS errors, but there are errors which need to be fixed manually. I am too lazy to do that right now :stuck_out_tongue:. Can move the phpcs build to allowed_failures if you want. I can fix the phpcs errors separately.
gharchive/pull-request
2015-06-06T08:30:32
2025-04-01T04:34:57.975113
{ "authors": [ "ADmad", "markstory" ], "repo": "markstory/asset_compress", "url": "https://github.com/markstory/asset_compress/pull/270", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
158316120
NullPointerException: Attempt to invoke interface method 'java.lang.String com.github.moduth.blockcanary.IBlockCanaryContext.getLogPath()'

06-02 18:07:50.253 E/AndroidRuntime(27493): FATAL EXCEPTION: pool-7-thread-1
06-02 18:07:50.253 E/AndroidRuntime(27493): Process: com.youku.phone, PID: 27493
06-02 18:07:50.253 E/AndroidRuntime(27493): java.lang.NullPointerException: Attempt to invoke interface method 'java.lang.String com.github.moduth.blockcanary.IBlockCanaryContext.getLogPath()' on a null object reference
06-02 18:07:50.253 E/AndroidRuntime(27493): at com.github.moduth.blockcanary.log.BlockCanaryInternals.getPath(BlockCanaryInternals.java:30)
06-02 18:07:50.253 E/AndroidRuntime(27493): at com.github.moduth.blockcanary.log.BlockCanaryInternals.detectedLeakDirectory(BlockCanaryInternals.java:38)
06-02 18:07:50.253 E/AndroidRuntime(27493): at com.github.moduth.blockcanary.log.BlockCanaryInternals.getLogFiles(BlockCanaryInternals.java:46)
06-02 18:07:50.253 E/AndroidRuntime(27493): at com.github.moduth.blockcanary.ui.DisplayBlockActivity$LoadBlocks.run(DisplayBlockActivity.java:395)
06-02 18:07:50.253 E/AndroidRuntime(27493): at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
06-02 18:07:50.253 E/AndroidRuntime(27493): at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
06-02 18:07:50.253 E/AndroidRuntime(27493): at java.lang.Thread.run(Thread.java:818)

You probably opened the Blocks screen directly before launching your own app; at that point BlockCanaryContext is null, so of course it crashes.
gharchive/issue
2016-06-03T08:26:55
2025-04-01T04:34:57.994530
{ "authors": [ "liuxin85611", "markzhai" ], "repo": "markzhai/AndroidPerformanceMonitor", "url": "https://github.com/markzhai/AndroidPerformanceMonitor/issues/52", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1786238938
Step 19
Specify whether it should be developed on the host machine or in the Docker container. The host machine needs the packages installed; the Docker container might not be able to render images. As it isn't possible to get an R terminal in Docker to generate charts interactively with X11, we will go for the RStudio approach. This has been fixed in the GH protocol doc but not on protocols.io. Fixed on protocols.io.
gharchive/issue
2023-07-03T13:56:43
2025-04-01T04:34:57.996660
{ "authors": [ "markziemann" ], "repo": "markziemann/enrichment_recipe", "url": "https://github.com/markziemann/enrichment_recipe/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
215946081
Support HMR for reducers
When I try to enable HMR on my project, the console log said:

<Provider> does not support changing `store` on the fly. It is most likely that you see this error because you updated to Redux 2.x and React Redux 2.x which no longer hot reload reducers automatically. See https://github.com/reactjs/react-redux/releases/tag/v2.0.0 for the migration instructions.

We need to apply this simple code to support HMR. Guide here: https://github.com/reactjs/react-redux/releases/tag/v2.0.0
This won't work. As explained in the redux documentation:

import { createStore } from 'redux';
import rootReducer from '../reducers/index';

export default function configureStore(initialState) {
  const store = createStore(rootReducer, initialState);

  if (module.hot) {
    // Enable Webpack hot module replacement for reducers
    module.hot.accept('../reducers', () => {
      const nextRootReducer = require('../reducers/index');
      store.replaceReducer(nextRootReducer);
    });
  }

  return store;
}

Note the two lines:

module.hot.accept('../reducers', () => {
  const nextRootReducer = require('../reducers/index');

My understanding is that we need to know the module path for hot reloading to work.
@fzaninotto this is still relevant. React-admin doesn't impose the use of webpack, nor does it know where the reducers will end up in the final bundle. So my understanding is that HMR support should be added in userland. But maybe I misunderstood how that works?
@fzaninotto hot module reload can be enabled for React components, redux reducers, and redux sagas. For React components we can do it in userland by wrapping the Admin component; however, this means the redux store gets remounted when the component is hot updated, which breaks react-redux (they do not support this). I tried this; unfortunately, on hot updates the Admin panel breaks. The correct way to support hot module reload is to follow the steps in the documentation of react-hot-loader, react-redux, and redux-saga to make sure the top-level component, reducer, and saga can accept hot updates. Doing this for react-admin is only possible through forking and not in userland. @djhi showed how to support it for react-redux. If module.hot is undefined because the environment is not webpack or does not support hot module reload, nothing happens. This means that if the environment does not have hot module reload turned on, nothing happens. Even if not running in webpack, alternatives to it (like parcel) can take advantage of the hot module reload support, so it will work there too; and if they don't support it, they won't break. The only extra library needed for this functionality is the react-hot-loader wrapper that you are supposed to use with your top-level component (but below <Provider>) to make it accept hot updates.
Thanks for the explanation. I'm reopening the enhancement request. A PR to implement the solution is welcome. Shouldn't this enhancement request have been reopened?
gharchive/issue
2017-03-22T03:56:31
2025-04-01T04:34:58.002817
{ "authors": [ "djhi", "fazo96", "fzaninotto", "kimkha", "mabhub" ], "repo": "marmelab/admin-on-rest", "url": "https://github.com/marmelab/admin-on-rest/issues/495", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
224036836
feat: add ReferenceArrayField
Take 2, supersedes #536. Fixes #429.
Changes to be committed:
modified: docs/Fields.md
modified: example/app.js
modified: example/posts.js
modified: src/actions/referenceActions.js
new file: src/mui/field/ReferenceArrayField.js
new file: src/mui/field/ReferenceArrayField.spec.js
modified: src/mui/field/index.js
modified: src/reducer/references/oneToMany.js
modified: src/sideEffect/saga/referenceFetch.js
CRUD_GET_ONE_REFERENCE = many actions taking a single id, debounced to a single GET_MANY
CRUD_GET_MANY_REFERENCE = GET_MANY_REFERENCE
CRUD_DEBOUNCED_GET_MANY = many actions taking an id array, debounced to a single GET_MANY
Overloading of "Reference" is causing the issue here and masked the true purpose of debouncing actions in referenceActions.js.
TODOS (maybe in 1.0)
[ ] rename CRUD_GET_ONE_REFERENCE to CRUD_DEBOUNCED_GET_ONE
[ ] rename referenceActions.js to debouncedActions.js
Exactly what I need! I'm looking forward to a (fast) integration. I wonder what the status of this PR is? I really need to use this in my code. Let me know if there is anything I can help with. Thanks! Same here, what is missing now? Any update? I won't be adding the TODOS in this PR. I fixed a regression in <ReferenceField>, renamed the debounce action to accumulate, and made some adjustments to the documentation. Overall, great PR, thanks a lot for your contribution!
gharchive/pull-request
2017-04-25T06:41:57
2025-04-01T04:34:58.008855
{ "authors": [ "fzaninotto", "langhard", "leesei", "nonotest", "sherryxiao1988" ], "repo": "marmelab/admin-on-rest", "url": "https://github.com/marmelab/admin-on-rest/pull/596", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
298853888
Is it possible to use [mouse+keyboard] instead of the [Logitech G27]?
Thanks to you, I can follow everything in europilot except the [Logitech G27] joystick part. Despite this problem, I still think this repository will be a good challenge for me. As mentioned in General Architecture on README.md, "If you have a different joystick, modify joystick.py to your needs." Is it possible to use [mouse+keyboard] instead of the [Logitech G27]? It's hard for me to buy the [Logitech G27] joystick because it is expensive. I guess you gained experience with this joystick while implementing the support here. Can you give me any guidance, or a link worth reading, about [mouse+keyboard] alternatives? If I find an alternative and implement it, I will open a PR on this repository.
26 days pending? It's a bit late now, but if you're still interested, https://github.com/Sentdex/pygta5/ is a place to start. It plays with the idea of using the keyboard + OpenCV and the InceptionV3 model from Google. You can find an overview on https://psyber.io/ or videos about the project on https://youtube.com/sentdex
Sorry for the late reply. We don't have time to develop keyboard support right now. However, implementing it would be quite straightforward. Pull requests are always welcome.
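For anyone attempting a keyboard-based replacement for joystick.py, a minimal capture sketch using the pynput library could be a starting point. This is purely illustrative and not part of europilot: the key-to-axis mapping below is hypothetical, and a real adapter would still have to emit samples in whatever format europilot's joystick.py expects.

from pynput import keyboard

# Hypothetical mapping of arrow keys to coarse steering/throttle values
state = {"steer": 0.0, "throttle": 0.0}

def on_press(key):
    if key == keyboard.Key.left:
        state["steer"] = -1.0
    elif key == keyboard.Key.right:
        state["steer"] = 1.0
    elif key == keyboard.Key.up:
        state["throttle"] = 1.0

def on_release(key):
    if key in (keyboard.Key.left, keyboard.Key.right):
        state["steer"] = 0.0
    elif key == keyboard.Key.up:
        state["throttle"] = 0.0
    if key == keyboard.Key.esc:
        return False  # stop the listener

with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
    listener.join()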
gharchive/issue
2018-02-21T06:15:53
2025-04-01T04:34:58.021684
{ "authors": [ "bratlachs", "daftshady", "venadHD", "wooheaven" ], "repo": "marsauto/europilot", "url": "https://github.com/marsauto/europilot/issues/14", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }