Dataset columns (name: type, range):
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
2586373830
🛑 CICD is down In 75e51f9, CICD (https://cicd.jenshamann.solutions/userContent/readme.txt) was down: HTTP code: 0 Response time: 0 ms Resolved: CICD is back up in 60ce84b after 9 minutes.
gharchive/issue
2024-10-14T15:22:49
2025-04-01T04:34:27.597295
{ "authors": [ "hamannjens" ], "repo": "hamannjens/upptime", "url": "https://github.com/hamannjens/upptime/issues/238", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1857932530
⚠️ ClamAV has degraded performance In fb8cd1b, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 88 ms Resolved: ClamAV performance has improved in 1e1889d after 534 days, 22 hours, 21 minutes.
gharchive/issue
2023-08-20T02:50:37
2025-04-01T04:34:27.602789
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/10144", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1565380113
⚠️ ClamAV has degraded performance In b04b83e, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 224 ms Resolved: ClamAV performance has improved in fd24e78.
gharchive/issue
2023-02-01T03:49:36
2025-04-01T04:34:27.605343
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/5299", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1657961724
⚠️ ClamAV has degraded performance In 1ded14a, ClamAV (https://spamassassin.apache.org/) experienced degraded performance: HTTP code: 200 Response time: 66 ms Resolved: ClamAV performance has improved in 6480b98.
gharchive/issue
2023-04-06T19:54:57
2025-04-01T04:34:27.607603
{ "authors": [ "hamboneZA" ], "repo": "hamboneZA/caffeine", "url": "https://github.com/hamboneZA/caffeine/issues/6681", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
160946397
Inserting non utf-8 binaries Hi there. Thanks for the great library. Looking at the source there appears to be support and tests for binaries but I'm getting an error and not sure what I'm doing wrong. I'm new to Elixir and really appreciate any guidance. I'm trying to insert a binary like this: db("my_db") |> table("my_table") |> insert(%{test: <<245>>}) |> MyProject.Database.run And get the following error: (Poison.EncodeError) unable to encode value: <<245>> Running the same procedure but replacing <<245>> with <<1>> succeeds, presumably because <<1>> is a valid string. Thanks in advance for any suggestions. Wrap it in https://hexdocs.pm/rethinkdb/RethinkDB.Query.html#binary/1 Thanks for the quick response! I'm not sure if I'm wrapping the correct thing but I still seem to get the same error message. When you have a moment could you please check out the code below and let me know if this is what you mean? binary_data = RethinkDB.Query.binary(<<245>>) db("my_db") |> table("my_table") |> insert(%{test: binary_data}) |> MyProject.Database.run Thanks again. Yeah. That's the correct way to do it. This is a bug. I'll take a look and try to sort it out. I just tried to reproduce and couldn't, so I'm happy to hear you figured it out. Thanks!
gharchive/issue
2016-06-17T18:18:35
2025-04-01T04:34:27.622952
{ "authors": [ "brucepom", "hamiltop" ], "repo": "hamiltop/rethinkdb-elixir", "url": "https://github.com/hamiltop/rethinkdb-elixir/issues/105", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1791893465
Sign in: Continue with Google I use 'Continue with Google' to sign-in - any thoughts on how to accommodate this? Good question! I'll dig into this and see what is needed to automate the login flow. Added an answer to this question in the readme via commit 2ac5b93.
gharchive/issue
2023-07-06T16:44:36
2025-04-01T04:34:27.641026
{ "authors": [ "elstevega", "hammem" ], "repo": "hammem/monarchmoney", "url": "https://github.com/hammem/monarchmoney/issues/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2185292982
Updated to more modern f-strings where applicable Just some small changes while I was getting to know the project. Some really interesting potential here, from someone who was starting to lose faith in selenium! Haha—after I created this project I discovered Playwright and immediately regretted using Selenium. Though I think it would have been a bit tougher to keep everything object-oriented with Playwright. Anyway, thanks for contributing. I'll take a look now. Looks good to me. Thanks for doing this. As you can probably surmise, Copilot wrote a lot of this code, and it seems like Copilot's habits are a bit dated...
gharchive/pull-request
2024-03-14T03:14:43
2025-04-01T04:34:27.666554
{ "authors": [ "handrew", "marksmayo" ], "repo": "handrew/browserpilot", "url": "https://github.com/handrew/browserpilot/pull/11", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
946597228
patch calculate_size_of_first which is creating issues with embedded sets Should solve #199 and #175 The issue stems from ctre::word_chars being a set*; when processed as part of the regex, the current method massively undercounts the needed capacity. Thanks! ❤️ I just realized this likely solves the sporadic issues where I couldn't compile expressions for #158.
gharchive/pull-request
2021-07-16T21:07:43
2025-04-01T04:34:27.674451
{ "authors": [ "Andersama", "hanickadot" ], "repo": "hanickadot/compile-time-regular-expressions", "url": "https://github.com/hanickadot/compile-time-regular-expressions/pull/204", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2339573746
Request DataSet Hello, I am a beginner in machine learning, currently learning and trying to reproduce your project. Could you please provide the dataset used in this project? Thank you very much for your assistance. Best regards! Hello, you can generate the data by running the following command: python data_preparation.py # You need to download the ImageNet dataset by yourself. You can download from Baidu Netdisk (code: mhfp) and put all files in the data directory. thank you very much
gharchive/issue
2024-06-07T04:41:08
2025-04-01T04:34:27.684334
{ "authors": [ "coderlty", "haowang-cqu" ], "repo": "haowang-cqu/TransTroj", "url": "https://github.com/haowang-cqu/TransTroj/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1477363083
Absent score field in POSSIBLE_MATCH MDM links Score field is missing in MDM links when the same golden resource is a possible match and a possible duplicate at the same time. To Reproduce Steps to reproduce the behavior: Setup mdm Create three resources as shown in attachment Send query links request Observe that POSSIBLE_MATCH links don't have a score value Expected behavior POSSIBLE_MATCH links should have a score value Environment (please complete the following information): HAPI FHIR Version v6.2.2 payloads generate possible match.txt Fixed by: https://github.com/hapifhir/hapi-fhir/pull/4329
gharchive/issue
2022-12-05T20:20:14
2025-04-01T04:34:27.689107
{ "authors": [ "jmarchionatto" ], "repo": "hapifhir/hapi-fhir", "url": "https://github.com/hapifhir/hapi-fhir/issues/4327", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
570784869
Poor find file performance with icons Problems summary When running <leader>+f+f (find file) on a folder with a fair amount of files (in my case 38.000) I experience a big delay (9s) until the floating window appears. Problem is that dev icons are collected upfront for all files. The delay is also several seconds on less files - e.g. 9.000 which I often have in the environment I work. These are the first items of the neovim profiling output: FUNCTIONS SORTED ON TOTAL TIME count total (s) self (s) function 1 9.530465 0.004144 Fzf_dev() 1 9.193987 0.010084 <SNR>87_files() 1 8.990296 1.613361 <SNR>87_prepend_icon() 38440 7.376935 7.018629 WebDevIconsGetFileTypeSymbol() 38460 0.358549 <SNR>48_DevIconsGetArtifactFix() 1 0.331276 0.226557 fzf#run() 1 0.097137 0.000507 which_key#start() 2 0.080172 0.000066 which_key#window#show() 38440 0.079961 <SNR>38_enc_to_cp() 2 0.063552 0.001302 which_key#wait_for_input() FUNCTIONS SORTED ON SELF TIME count total (s) self (s) function 38440 7.376935 7.018629 WebDevIconsGetFileTypeSymbol() 1 8.990296 1.613361 <SNR>87_prepend_icon() 38460 0.358549 <SNR>48_DevIconsGetArtifactFix() 1 0.331276 0.226557 fzf#run() 38440 0.079961 <SNR>38_enc_to_cp() 2 0.023327 0.023313 <SNR>44_request() 2 0.013351 0.013009 <SNR>78_getchar() 1 0.035184 0.012326 which_key#map#parse() So this takes more than 9 seconds for me while running rg --files on the xfce4-terminal takes just 0.5s (or fzf, it's equally fast). Environment Information OS: Manjaro neovim :version output: NVIM v0.4.3 Build type: Release LuaJIT 2.0.5 Compilation: /usr/bin/cc -march=x86-64 -mtune=generic -O2 -pipe -fno-plt -O2 -DNDEBUG -DMIN_LOG_LEVEL=3 -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototyp es -std=gnu99 -Wshadow -Wconversion -Wmissing-prototypes -Wimplicit-fallthrough -Wvla -fstack-protector-strong -fdiagnostics-color=always -DINCLUDE_GENERATED_DECLARATI ONS -D_GNU_SOURCE -DNVIM_MSGPACK_HAS_FLOAT32 -DNVIM_UNIBI_HAS_VAR_FROM -I/build/neovim/src/build/config -I/build/neovim/src/neovim-0.4.3/src -I/usr/include -I/build/ne ovim/src/build/src/nvim/auto -I/build/neovim/src/build/include Compiled by builduser Features: +acl +iconv +tui :checkhealth or :CheckHealth result(neovim only): health#coc#check ======================================================================== - OK: Environment check passed - OK: Javascript bundle found - OK: Service started health#nvim#check ======================================================================== ## Configuration - OK: no issues found ## Performance - OK: Build type: Release ## Remote Plugins - OK: Up to date ## terminal - INFO: key_backspace (kbs) terminfo entry: key_backspace=^H - INFO: key_dc (kdch1) terminfo entry: key_dc=\E[3~ - INFO: $VTE_VERSION='5803' - INFO: $COLORTERM='truecolor' health#provider#check ======================================================================== ## Clipboard (optional) - OK: Clipboard tool found: xclip ## Python 2 provider (optional) - INFO: `g:python_host_prog` is not set. Searching for python2 in the environment. - INFO: Multiple python2 executables found. Set `g:python_host_prog` to avoid surprises. 
- INFO: Executable: /usr/sbin/python2 - INFO: Other python executable: /usr/bin/python2 - INFO: Other python executable: /sbin/python2 - INFO: Other python executable: /bin/python2 - INFO: Python version: 2.7.17 - INFO: pynvim version: 0.4.0 (outdated; from /usr/lib/python2.7/site-packages/neovim) - WARNING: Latest pynvim is NOT installed: 0.4.1 ## Python 3 provider (optional) - INFO: `g:python3_host_prog` is not set. Searching for python3 in the environment. - INFO: Multiple python3 executables found. Set `g:python3_host_prog` to avoid surprises. - INFO: Executable: /usr/sbin/python3 - INFO: Other python executable: /usr/bin/python3 - INFO: Other python executable: /sbin/python3 - INFO: Other python executable: /bin/python3 - INFO: Python version: 3.8.1 - INFO: pynvim version: 0.4.1 - OK: Latest pynvim is installed. ## Ruby provider (optional) - WARNING: `ruby` and `gem` must be in $PATH. - ADVICE: - Install Ruby and verify that `ruby` and `gem` commands work. ## Node.js provider (optional) - INFO: Node.js: v13.7.0 - INFO: Neovim node.js host: /usr/lib/node_modules/neovim/bin/cli.js - WARNING: Package "neovim" is out-of-date. Installed: 4.5.0, latest: 4.8.0 - ADVICE: - Run in shell: npm install -g neovim - Run in shell (if you use yarn): yarn global add neovim How to reproduce the problem from neovim startup (Required!) cd into a directory with a lot files type <leader>+f+f (space+f+f) observe the delay until the floating window appears Screenshot (if possible) profile.log Please install bat The message tells [bat error]: './Riot/SingletonLock: No and on the left side I see 'No such file or directory', so it looks like this is a condition in which fzf finds the file but it gets deleted in the time span of the command run. If bat wouldn't be installed, I'd get zsh:1: command not found: bat. I think you don't see the issue because you're running the command on 441 files which is much less. Just did a couple of sketchy tests on my current machine: file count delay 400 instant 2600 1s 10800 2s 31000 5s 44000 7s 75000 13s So it comes down to the project size. When I run the following on a command line marco@tux01 $ time rg --files | wc 43142 45758 5760746 rg --files 0.62s user 0.43s system 492% cpu 0.213 total wc 0.08s user 0.00s system 40% cpu 0.213 total it shows that searching for 43000 files takes just 0.2 seconds. Did you set rg ignore file? closed. Here is a patch that comments out the icon fetching code. diff --git layers/+completion/fzf/config.vim layers/+completion/fzf/config.vim index 304df5d..6f9b7cb 100644 --- layers/+completion/fzf/config.vim +++ layers/+completion/fzf/config.vim @@ -74,8 +74,8 @@ let l:fzf_files_options = ' -m --bind ctrl-d:preview-page-down,ctrl-u:preview-pa let result = [] for candidate in a:candidates let filename = fnamemodify(candidate, ':p:t') - let icon = WebDevIconsGetFileTypeSymbol(filename, isdirectory(filename)) - call add(result, printf("%s %s", icon, candidate)) + " let icon = WebDevIconsGetFileTypeSymbol(filename, isdirectory(filename)) + call add(result, printf("%s", candidate)) endfor return result This reduces the <leader>+f+f load time of 42k files by a factor of 10 which is completely acceptable for me. I totally see that this issue might only affect big projects. 
I see the following solutions: introduce a configuration parameter to deactivate the icon loading for the screen adapt the implementation to stop loading icons if the list of files is bigger than x elements (maybe also configurable because it depends on the machine performance) I'd appreciate the second solution. If I run <leader>+f+f on my home directory (400k files) accidentally I commonly have to hard kill the nvim process because I can't wait for minutes. Do you see room for improvement? the latest version of thinkvim , used the fzf-preview.vim instead of my hack function. this setting of fzf-preview.vim is disabled on thinkvim Use vim-devicons let g:fzf_preview_use_dev_icons = 0 Regarding judging the number of files by opening the icon and getting the returned number of rg can do this, I will study if there is time to implement it. @ubmarco This is a comparison of speed, fzf and coclist files, you can see that coclist is faster. But the display effect is not as comfortable as fzf. Thanks for that feedback. The new fzf-preview.vim solution has a great performance. Also activating the dev icons works (if needed on project basis). However I get an error on every interaction (ESC / select) with the preview window.: -- TERMINAL -- Close "term://.//7785:cat '/tmp/nvim0PUsUl/3'|'/home/marco/.cache/vim/dein/repos/github.com/junegunn/fzf/bin/fzf' --multi --reverse --ansi --bind=ctrl-d:preview-page-down,ctrl-u:preview-page-up,?:toggle-preview --expect=ctrl-v,ctrl-x,ctrl-q,ctrl-t --prompt="DirectoryFiles> " --preview='[[ "$(file --mime {})" =~ binary ]] && echo "{} is a binary file" || head -100 {-1}' --no-height > /tmp/nvim0PUsUl/2;#FZF"? [Y]es, (N)o, (C)ancel: @ubmarco maybe some error on fzf. i am not fzf pro.please ask this question on fzf. because i cant reproduce it .works fine for me.
gharchive/issue
2020-02-25T19:34:08
2025-04-01T04:34:27.845019
{ "authors": [ "taigacute", "ubmarco" ], "repo": "hardcoreplayers/ThinkVim", "url": "https://github.com/hardcoreplayers/ThinkVim/issues/93", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
235192131
temperatureMode CUSTOM doesn't respond correctly Wondering if you can help. If I do this in the payload Alexa responds settings undefined to 22 "temperatureMode":{ "value":"OFF" } I also cannot get "AUTO" to respond correctly either. HEAT and COOL respond as expected OFF appears to be a new mode that wasn't present when I wrote this section of the node. I'll see if I can fix it, but it may be some time Thanks what about AUTO? Show the whole message you are sending to the response node Tried things like this. Tried with value AUTO and no friendly name which didn't work correctly. If you do custom it is completly ignored. If you do auto, the word auto is replaced with the room name. So alexa goes can't set room name room name to 18. Trying 18 msg.extra = { "targetTemperature": { "value": 18 }, "applianceResponseTimestamp": new Date().toISOString(), "temperatureMode": { "value": "CUSTOM", "friendlyName": "FAN" } }; msg.payload = true; return msg; The following works for me msg.extra = { "targetTemperature": { "value": 22.20 }, "applianceResponseTimestamp": new Date().toISOString(), "temperatureMode": { "value": "CUSTOM", "friendlyName": "Space Heater" } }; msg.payload = true; It's ignored for me. So are you saying Alexa responds with "My Room Space heater set to 22"? With HEAT I get "My Room HEAT set to 22"? With AUTO I get "My Room set to 22"? On a positive note. The OFF command now works. She replies "My Room is OFF" I'm saying that for a device called "thermostat" if I ask "Alexa what is the thermostat set to" I get back "The space heater is set to 22.2" Strange I can't get it to work. It says it's not responding Figured it out. Some commands don't allow the use of friendly name. Eg GetTargetTemperatureRequest does but GetTemperatureReadingRequest doesn't Ahh, yes. It only really makes sense when asking what the device is set to, not what the current temperature is. I just call the skill my room. But want it to reply with the name not saying my room. Not a big deal. The Alexa names can be confusing for what they do. Thanks. Hope you can add home entertainment soon
gharchive/issue
2017-06-12T10:44:38
2025-04-01T04:34:27.851945
{ "authors": [ "NovaGL", "hardillb" ], "repo": "hardillb/node-red-alexa-home-skill-web", "url": "https://github.com/hardillb/node-red-alexa-home-skill-web/issues/16", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
311371119
docker swarm Does this image work in docker swarm mode? Yes, some people use it with docker swarm or kubernetes. But no official support. I'm also interested if someone have more informations on that. For the time being I chose to implement my mail server with iredmail. As long as the server isn't under heavy load and does little more, it should work most of the time GRIN. What do you want to know about docker swarm ? As rancher support, I'm open merge any contribution to add documentation or make swarm support better. But I don't have enough time to maintain my self all variants (Vanilla, Rancher, Swarm, K8s...etc). So any maintainers for Swarm and K8s configurations are welcome. I'm also interested if someone have more informations on that. ping @outbackdingo. Personally, I want to use the Rancher solution. I think that in a production environment it is essential to have a backup server in case of error or system update with reboot of the main server but unfortunately I do not have enough knowledge in docker swarm to participate. @measwel why iredmail, its terrible, would have been better off with mailu ..... either way they all do basically the same thing, however support for things like kubernetes, swarm, rancher do somewhat lack here in mailserver ... mailu actually runs on rancher and kubernetes. that being said there is nothing that stops or prevents someone from running mailserver in a stack environment other then someone properly putting together the right compose config for it. of all the stacks well ranchert is the easiest to deploy on as it uses standard docker-compose files, k8s and mesos however use different yaml syntax. I guess the best bet honestly would be to poll the community using mailserver and see who has actually successfully deployed mailserver on what stacks and then go from there. kubernetes, swarm, rancher do somewhat lack here in mailserver hardware/mailserver works with Rancher : https://github.com/hardware/mailserver-rancher There is nothing that stops or prevents someone from running hardware/mailserver in a stack environment other then someone properly putting together the right compose config for it. Ok, good to know :) As previously said, any contributors are welcome to add some documentation for Swarm and K8s environments. There is nothing that stops or prevents someone from running hardware/mailserver in a stack environment other then someone properly putting together the right compose config for it. I second this. Currently working on deploying this on my personal Kubernetes cluster. I'll keep note of what I do and try to document the process for others afterwards. By the way, the fact that this container does mount the docker socket is very refreshing. Thanks for all the great work here. I really appreciate it. @nsmith5 Your contribution is welcome.
gharchive/issue
2018-04-04T20:03:00
2025-04-01T04:34:27.859440
{ "authors": [ "bjrnio", "hardware", "measwel", "nsmith5", "outbackdingo", "tsedgwick1" ], "repo": "hardware/mailserver", "url": "https://github.com/hardware/mailserver/issues/231", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1842427206
Question about the 'retry' argument Hi, According to the usage section in the README, there is a retry argument like this: --retry [count] Max time to retry the download fail However, the count parameter is not accepted, since the code in iwara-dl.sh reads: --retry ) export IWARA_RETRY="TRUE"; shift; ;; and in the function iwara-dl-retry-dl() in iwaralib.sh: if [[ "$IWARA_RETRY" != "FALSE" ]] ; then It seems that the [count] parameter is not supported yet, so maybe it's better to update the help info about [retry]. Best regards @amamiya-yuuko-1225 Thanks The arguments are not consistent because of the new iwara website and I don't have much time to work on this. The code is pretty messy now. Feel free to send a PR to make it better
gharchive/issue
2023-08-09T04:15:15
2025-04-01T04:34:27.867845
{ "authors": [ "amamiya-yuuko-1225", "hare1039" ], "repo": "hare1039/iwara-dl", "url": "https://github.com/hare1039/iwara-dl/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2204331596
[Feature Request] - Supplies Ordering Not sure if there is already a way to track this. I could not find anything. However, I was thinking of something like a planner but for supply ordering. Use Case I plan a service/repair/upgrade first, since I do not always have the parts or tools needed for the upcoming work. Then I research and order/pick up the parts. Then, once they arrive, I log them into the supplies and re-create a planner activity with the new supplies. This process is missing the ability to track the purchase/research stage of the supplies. Ideal Flow I would love to be able to create a plan for a service/repair/upgrade and have a toggle to create a related supply order record which I can edit and add the list of supplies needed to complete the planned activity. The ability to select there whether the supplies should be added at the garage or car level would be ideal. The tracking for the supply order record would be similar to the planner but with the following columns: Draft, Ordered, Done. When the Supply Order is marked as Done, the supplies are added to the garage supplies/car supplies list. I think this is similar to what I'm looking for also. I'd love to be able to track purchases (Supplier, Invoice, Price, Tax, Shipping, etc.) and then have that feed into supplies. Came here to suggest a feature like this. I'm currently planning a number of maintenance and upgrade tasks for a couple of my cars but I haven't ordered all the parts and consumables yet. I'd like to be able to put all my plans in with the requisite supplies noted, then generate a "parts needed" view of what I have to order.
gharchive/issue
2024-03-24T12:41:19
2025-04-01T04:34:27.884306
{ "authors": [ "joshtbernstein", "rswafford", "varunpan" ], "repo": "hargata/lubelog", "url": "https://github.com/hargata/lubelog/issues/428", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
585264631
add queue to faucet, validate faucet use
Procedure
- Implement a queue to add pending transactions to, instead of sending transactions as soon as the /fund endpoint is called.
- Use setInterval to send a pending transaction to the chain every ~15 seconds.
- Implement checks (calls from same ip/user-agent, same ip within time limit) and reject pushing to the queue if checks fail.
Additions
- Add a mutex while checking the queue for addresses from the same ip + user agent.
- Check that the account balance has increased before adding the account to the map of already funded accounts (prevents blocking an address even when the transaction fails).
gharchive/pull-request
2020-03-20T18:50:00
2025-04-01T04:34:27.888661
{ "authors": [ "Cem-Harmony" ], "repo": "harmony-one/HRC", "url": "https://github.com/harmony-one/HRC/pull/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1771459091
[DOC-3283] Add replica pod deletion Add replica pod deletion For reviewers, previews are available: Graceful delegate shutdown Delete replica pods What Type of PR is This? [ ] Issue [ ] Feature [x] Maintenance/Chore If tied to an Issue, list the Issue(s) here: Issue(s) House Keeping Some items to keep track of. Screen shots of changes are optional but would help the maintainers review quicker. [x] Tested Locally [ ] Optional Screen Shoot. Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app
gharchive/pull-request
2023-06-23T13:22:56
2025-04-01T04:34:27.909719
{ "authors": [ "bot-gitexp-user", "brian-f-harness" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/2233", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1815585943
Nandeesh kb Harness Developer Pull Request Thanks for helping us make the Developer Hub better. The PR will be looked at by the maintainers. What Type of PR is This? [ ] Issue [ ] Feature [ ] Maintenance/Chore [x] Tested Locally [ ] Optional Screen Shoot. Build is failing; need to debug the issue.
gharchive/pull-request
2023-07-21T10:41:37
2025-04-01T04:34:27.912192
{ "authors": [ "NandeeshHarness" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/2571", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2333701737
[CCM-18035]: Added troubleshooting for when the secrets expire for a CCM SMP customer Thanks for contributing to the Harness Developer Hub! Our code owners will review your submission. Description Please describe your changes: Jira/GitHub Issue numbers (if any): 18035 Preview links/images (Internal contributors only): PR lifecycle We aim to merge PRs within one week or less, but delays happen sometimes. If your PR is open longer than two weeks without any human activity, please tag a code owner in a comment. PRs must meet these requirements to be merged: [ ] Successful preview build. [ ] Code owner review. [ ] No merge conflicts. [ ] Release notes/new features docs: Feature/version released to at least one prod environment. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it. Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: Usually we get a nice netlify link for the draft, can we see that? Please check the Execution Link of the Pipeline for the Website Draft URL. This is located in the Preview Step behind the Harness VPN and also is available in #hdh_alerts. E.g Website Draft URL: https://unique-id--harness-developer.netlify.app. Current Draft URL is: https://666745b7416cf120e435e94d--harness-developer.netlify.app
gharchive/pull-request
2024-06-04T14:33:13
2025-04-01T04:34:27.919344
{ "authors": [ "CLAassistant", "bot-gitexp-user", "joeyouss", "tuffacton" ], "repo": "harness/developer-hub", "url": "https://github.com/harness/developer-hub/pull/6928", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1006366662
Missing fclose in position_index.c Hi, there is a fclose(infile); statement missing in position_index.c. The infile is opened but not closed at the end of the routine. This can cause problems when running the grib extraction for multiple parameters or long periods, since there is an increasing number of open files that are never closed. Best regards, Florian You are right. Thanks for finding and reporting this. I'll fix it asap. Fixed. Thanks.
gharchive/issue
2021-09-24T11:29:25
2025-04-01T04:34:28.008942
{ "authors": [ "FlorianW-ZAMG", "adeckmyn" ], "repo": "harphub/Rgrib2", "url": "https://github.com/harphub/Rgrib2/issues/5", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1321724723
task: Update to rust image 1.62-1.0 What Updates the docker image used for building and testing to 1.62-1.0: https://github.com/harrison-ai/dataeng-tooling-rust/blob/main/CHANGELOG.md#162-10 Why We want to stay up to date on all of our dependencies where possible. 👍🏻 , but also heads-up, I just made a 1.62-1.1 release of the image that bumps the patch version of rust. Righto, I'll bump to that release here too 👍
gharchive/pull-request
2022-07-29T03:51:43
2025-04-01T04:34:28.010963
{ "authors": [ "timleslie" ], "repo": "harrison-ai/cobalt-aws", "url": "https://github.com/harrison-ai/cobalt-aws/pull/116", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2576303390
Support specifying a folder / file extension pattern to apply highlighting I love the new file level comment feature to enable syntax highlighting across multiple yaml properties. However, it requires modifying every file you want highlighted. It would be nice to have an alternative way to do this that does not require editing each file. Instead, have the extension read from a workspace setting. Something along the lines of... { "yaml-embedded-languages.someSettingName": [ { "path": "folder\\subfolder\\**\\*.pa.yaml", "language": "powerfx" } ] } Thanks. Something like this should be possible, but with the way the extension currently works, it may require the VS code window to be reloaded when switching to a yaml file which doesn't match the pattern for a particular language. That may not be too bad though. I should see if there is a way to update the syntax without requiring a reload.
gharchive/issue
2024-10-09T15:44:45
2025-04-01T04:34:28.013749
{ "authors": [ "devkeydet", "harrydowning" ], "repo": "harrydowning/vscode-yaml-embedded-languages", "url": "https://github.com/harrydowning/vscode-yaml-embedded-languages/issues/57", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
272247828
The module is not installable with zef zef install Text::Markdown::Discount ===> Searching for: Text::Markdown::Discount No candidates found matching identity: Text::Markdown::Discount It doesn't work for me anymore either. I'll have a look around why that's happening, thanks for letting me know.
gharchive/issue
2017-11-08T15:50:31
2025-04-01T04:34:28.025889
{ "authors": [ "AlexDaniel", "hartenfels" ], "repo": "hartenfels/Text-Markdown-Discount", "url": "https://github.com/hartenfels/Text-Markdown-Discount/issues/6", "license": "artistic-2.0", "license_type": "permissive", "license_source": "bigquery" }
1897673203
fix rancher-integration link Summary PR Checklist ~- Is this a multi-tenancy feature/bug? - [ ] Yes, the relevant RBAC changes are at:~ ~- Do we need to backport changes to the old Rancher UI, such as RKE1? - [ ] Yes, the relevant PR is at:~ ~- Are backend engineers aware of UI changes? - [ ] Yes, the backend owner is:~ Fixes # Occurred changes and/or fixed issues Technical notes summary Areas or cases that should be tested Areas which could experience regressions Screenshot/Video @mergify backport release-harvester-v1.2 @mergify backport release-harvester-v1.1
gharchive/pull-request
2023-09-15T04:08:01
2025-04-01T04:34:28.044852
{ "authors": [ "WuJun2016" ], "repo": "harvester/dashboard", "url": "https://github.com/harvester/dashboard/pull/920", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2157056799
Fix grafana in all scenarios Problem: This is a manual backport of https://github.com/harvester/harvester-installer/pull/665, refer it. Solution: Add processing for all cases. Related Issue: https://github.com/harvester/harvester/issues/5236 Test plan: Refer: https://github.com/harvester/harvester-installer/pull/665 gentle ping @w13915984028 , I've tried applied this pr and build with havester v1.2 branch with latest commit https://github.com/harvester/harvester/commit/a822a3a42ba324f11f2b95f4d0d91486eec89233 with 3 nodes while I found the Cluster Metrics still not work. Not sure what am I missing. this is the chrome console output And I also found that promethus-ranch-monitoring-prometheus is stuck in Init status, not sure if this is related The promethues pod was not started successfully, stucking at init, k9s showed the volume was not mounted. Use kubectl get pvc -n cattle-monitoring-system to find the volume name of pvc prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0; then kubectl get volume -n longhorn-system pvc-name -oyaml to get the detailed LH information. The promethues pod was not started successfully, stucking at init, k9s showed the volume was not mounted. Use kubectl get pvc -n cattle-monitoring-system to find the volume name of pvc prometheus-rancher-monitoring-prometheus-db-prometheus-rancher-monitoring-prometheus-0; then kubectl get volume -n longhorn-system pvc-name -oyaml to get the detailed LH information. Huge thanks ! I discovered the volume wass detached due to insufficient disk space. I've also confirmed that this pr is good to go, the metrics are now visible and Grafana login page no longer 404.
gharchive/pull-request
2024-02-27T16:30:12
2025-04-01T04:34:28.052788
{ "authors": [ "brandboat", "w13915984028" ], "repo": "harvester/harvester-installer", "url": "https://github.com/harvester/harvester-installer/pull/666", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1186023728
[FEATURE] Add air-gapped installation tests Is your feature request related to a problem? Please describe. Our vagrant CI tests run with the Internet, but sometimes this can't catch air-gapped issues. Describe the solution you'd like It would be nice to add an air-gapped test case. Describe alternatives you've considered Additional context The CI didn't catch failure in this: https://github.com/harvester/harvester-installer/pull/257 Update: Currently working through some elements about provisioning an airgapped rancher single node via vagrant/ansible/libvirt, via a temporary repository here: https://github.com/irishgordo/single-node-rancher-airgapped-vagrant-ansible-proof-of-concept . Very much still a work in progress, just currently scaffolding out elements. Would eventually like to integrate it in with the /harvester/ipxe-examples/vagrant-pxe-harvester once it's more defined. Learning lots of great things about Ansible, Vagrant, LibVirt and more Update: Started the integration process with the ipxe-examples/vagrant-pxe-harvester repo, still currently a work in progress, but the content is here: https://github.com/irishgordo/ipxe-examples/tree/feat/2096-air-gapped-rancher-single-node-proof-of-concept/vagrant-pxe-harvester A tad slow, but it's almost there: Latest run relative timings just on the 'rancher-box' that's airgapped that get's provisioned: Started: 16:52 Finished: 18:29 @bk201 thanks for a bit more insight surrounding this! I think this breaks down into a few separate pieces: I'm thinking we'd want to modify the CI Groovy so that it can support the additional ipxe-examples/vagrant-pxe-harvester repository edits. I started to scaffold what perhaps the CI Pipeline might need to morph to (this is still a work-in-progress and has just been for testing/validation): https://github.com/irishgordo/harvester-installer/blob/feat/2096-temporary-edits-testing/temporary_jenkins_example.groovy within that, there are some temporary edits simply for testing, since this is running on a single node with Jenkins 2.303.3, with some additional plugins of: workspace-cleanup-plugin, pipeline-utility-steps, pipeline, job-dsl - that I believe would be the only ones needed to add to the production Jenkins instance introducing a 'test-stage' that can output test artifacts / test results for the important tests for air-gapped harvester & air-gapped rancher, which hasn't been implemented yet the pipeline, for testing has just been manually started for builds Also making the necessary modifications to the necessary changes to ipxe-examples/vagrant-pxe-harvester, currently there's an open PR, working through a few issues: https://github.com/harvester/ipxe-examples/pull/45 And also making edits to the run_vagrant_install_test.yaml within the harvester/harvester-installer repo, I've also started to make some temporary-testing-edits here; https://github.com/irishgordo/harvester-installer/blob/feat/2096-temporary-edits-testing/ci/run_vagrant_install_test.yml Would it be acceptable to have the CI run normally how it does now, then for a specific build with a PR_ID, being able to "re-run" that build with parameters that would re-provision the environment either in?: harvester offline harvester airgapped & rancher airgapped Do you feel there would be some benefit to on the PR to harvester-installer, allowing someone to write a comment like: "/run-test-harvester-offline" or "/run-test-harvester-airgapped-with-rancher-airgapped" and have that start the corresponding tests checking out that PR's code, building the 
assets, then provisioning / running-tests in those environments? With all those parts, do you believe that would be covering everything needed to improve the CI from harvester-installer? @irishgordo Thanks for the nice write-up and anlaysis. I think this breaks down into a few separate pieces: If it's too complicated, we can improve gradually. Feel free close this issue with some enhancements first and use follow up issues later. No need to be 100% perfect at the beginning. If including all tests in a PR is too heavy, then the way to trigger tests with comments like /run-test-harvester-airgapped-with-rancher-airgapped is great. We can trigger those tests only when needed (for example, a PR that bumps dependency versions or release head). @bk201 Based around some outside discussion, the scope may need to be adjusted. I think the biggest goal will be to try to get in #45 as it's had the most attention and others to be reviewing. Items That Can Get Funneled In With Scope So far we currently have with Pull-Request #45 On Harvester/ipxe-examples: supporting edits to support bringing up a Rancher Node alongside Harvester in: the same subnet airgapping through the offline: true in the settings.yml caveats: the airgapping still relies on: harvester private network, that subnet, utilizing the dhcp server & squid proxy - from a harvester node, outside requests (outbound traffic) via curl/ping do seem to fail which is good when offline: true is enabled, but it isn't a fully-airgapped solution the vagrant ci pipeline would continue to run as is, but the vagrant installation groovy test would stay as is, but allow for future iterations to bring in the potential of a non-proxied airgap environment and the rancher node that exists through the modifications to the ipxe-examples vagrant setup once #45 is able to be merged it does at least offer a way to standup rancher alongside harvester locally on a laptop/machine Items That Would Need More Work: Vagrant edits: support producing an environment that is not airgapped just by the http proxy allow for different subnets for an airgapped environment: airgapped subnet for harvester airgapped subnet for rancher additional hardening, improve it overall to follow more traditional Ansible project structure: utilizing grouping common tasks together in blocks with given rescue and always hooks, providing a more solid way to address occasional provisioning issues due to network timeouts and other elements converting any time we have a curl command over to use the ansible.builtin.uri command for much better handling of errors, retry logic, and the ability to validate certain status codes within the block, and parse out JSON automatically Tests repo work: ensure the dockerized tests repo preforms as expected - currently #372 in harvester/tests hasn't had a lot of iterations or input yet: what we would want is to validate that dockerizing the test suite with all python test dependencies & makefile does allow for our harvester/tests to be run without issues Jenkins edits: having an environment for others to test Jenkins edits that isn't a production box (so far that isn't really automated per say, we might explore creating a dev.ci.harvesterhci.io on other sets of bare metal servers accessible as to not interupt production ci/cd) the biggest: possibly look at shifting over to a Jenkinfile instead of a pure groovy.j2, there have been inital testing edits with a pipeline in pure groovy as there have been some tests here > as right now there is no "automated" way to roll out 
pipeline changes or changes to jenkins in general concerning plugins - the CI/CD should be able to support that without the need of someone logging into the GUI and making edits or shelling into a server and scp'ing a file over find a way to implement managing Jenkins plugins dynamically with source control so pipeline can read new plugin version bumps or when additional plugins get added possibly figuring out a way to leverage something similar to this, currently the Ansible used to provision the server just called the jenkins install-plugin shown here (note: also those plugins aren't pinned at a given version, which is something that should be modified as if we had PR builds install the plugin list every time, if they weren't pinned to a version it would always grab the latest which could be a problem) but re-provisioning the bare-metal Jenkins instances from scratch every time the PR runs is probably not the best method plugin additions: we will probably continue to want to add additional plugins, at given versions of the plugins to help better support the pipeline's lifecycle needs having an automated way as mentioned earlier that they can be added in would be great, perhaps validated only by certain GitHub users: providing automated rollback if a plugin install fails rebuild-ability, the ability to re-run the rebuild with different combinations of items like: re-running builds with different versions of rancher re-running the builds with different versions of the harvester/tests repo to support the possiblity for a new feature-set we'd have accompanying tests in harvester/tests to support it that we may want to validate work with the CI before getting brought into main on harvester/tests -> as this also allows the Dockerfile for the Tests Repo to be built off the given branch / repo allowing rebuild access only to certain GitHub users, that way outside PRs won't immediately be allowed to be triggered by any GitHub user, we would lock down the list to team members only for running rebuilds that allow for various combinations: either through PR comments on the open PR manually going onto the GUI for Jenkins and running the rebuilds there adding additional testing stages like this one, the dependency on that would be ensuring the dockerized tests repo was working possibly-> examine dockerizing jenkins at ci.harvesterhci.io/ - as it's not currently dockerized, dockerizing Jenkins would allow us more control over the version of Jenkins and better managing security updates (by increasing jenkins to latest version) for ci.harvesterhci Validating disaster recovery: If the servers running CI/CD content: jenkins docker vagrant jenkins pipelines etc That way if the environment were to go down, ensuring that we have a way to re-provision everything, back to the state that it was exactly (IE: keeping versions on plugins, versions for jenkins, slack credentials, etc) Look at levering HashiCorp Vault Plugin as a Secret Source for Jenkins Configuration As Code Plugin, providing a way to store credentials for Jenkins Suggested Actions create additional issues sets in harvester/tests prefacing them as: [EPIC] Jenkins Work For CI/CD To Support Testing, Rebuilding, Automated Pipeline Deployments & Plugin Deployment Edits [EPIC] Provide Fully-Non-Proxied Airgap Provisioning With Vagrant For Harvester & Rancher [EPIC] Provide Disaster Recovery Solution For Jenkins CI @irishgordo as long as harvester-vagrant-installation-test works, feel free to improve out-of-scope topics in the future. 
@irishgordo Can you share the progress of this item? Besides this one: https://github.com/harvester/ipxe-examples/pull/59, do we have other tickets to track? Thank you! @bk201 sorry for the late reply :sweat_smile: - this slipped through the cracks. These two WIP PRs need some additional edits: https://github.com/harvester/tests/pull/686 https://github.com/harvester/harvester-installer/pull/400 Then additionally I'm curious as to what would be documentation? In the ipxe-examples airgap readme - we do highlight what the feature set is. Those other two PRs will be just Ansible scripting & Groovy scripting (for the Jenkins pipeline) Covered by https://github.com/harvester/tests/issues/967
gharchive/issue
2022-03-30T07:41:14
2025-04-01T04:34:28.088132
{ "authors": [ "bk201", "irishgordo" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/2096", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1478318455
[FEATURE] Natively Install Rancher alongside Harvester Is your feature request related to a problem? Please describe. Rancher and Harvester are awesome products, and their integration is tight and works well. Currently, Rancher needs to be installed separately from Harvester. Describe the solution you'd like It would be convenient if Rancher could be installed directly on a Harvester cluster's Kubernetes deployment, even if it were disabled by default and behind a config/toggle switch. Describe alternatives you've considered Install Rancher in a virtual machine on top of Harvester (this works, and is what I am currently doing) Install Rancher directly on the K8s cluster Harvester is running on, e.g. via a Helm deployment I am unsure of the implications of directly modifying the K8s cluster Harvester is deployed on, i.e. if this has undesirable side effects Use the builtin Rancher deployment that manages Harvester itself on RKE2 Similar concerns to above Additional context I think pairing Harvester and Rancher natively would provide an incredibly powerful integration between container and VM workloads, while still maintaining separation of concerns. I am not sure of any technical issues directly integrating Rancher in Harvester, and I'm sure this has been considered before. I just discovered this native UI to install Rancher (among other Helm charts in the catalog) as well. Again, not sure if this is supported/has undesirable side effects. Oops, I just found https://github.com/harvester/harvester/issues/2679 which is extremely similar to this. Closing as duplicate.
gharchive/issue
2022-12-06T06:12:55
2025-04-01T04:34:28.094661
{ "authors": [ "coopbri" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/issues/3243", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1629071538
doc: add bare-metal container and mcm support[CI SKIP] IMPORTANT: Please do not create a Pull Request without creating an issue first. Problem: Add HEP doc of Bare-metal Cluster Container and Multi-cluster Management Support Related Issue: Test plan: Update and please take a re-review, thanks.
gharchive/pull-request
2023-03-17T10:29:58
2025-04-01T04:34:28.096657
{ "authors": [ "guangbochen" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/pull/3673", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2113924381
feat: bump longhorn to v1.6.0 Solution: Bump LH to v1.6.0. Related Issue: https://github.com/harvester/harvester/issues/4970 Test plan: Case 1: New cluster Make sure users can operate creating/migrating VM, creating and restoring VMBackup/VMSnapshot. Case 2: Upgrade cluster Create a harvester v1.2.1 cluster. Create a guest cluster. Make sure the harvester can bump to this branch. Thanks, everyone. @khushboo-rancher @lanfon72 do we have an automated sanity test for the Longhorn version upgrade? If not, consider having it in the future, so we can leverage it to verify what members tested here to make everything more productive.
gharchive/pull-request
2024-02-02T02:52:27
2025-04-01T04:34:28.099756
{ "authors": [ "FrankYang0529", "innobead" ], "repo": "harvester/harvester", "url": "https://github.com/harvester/harvester/pull/5087", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1479442482
Support input devices Signed-off-by: futuretea Hang.Yu@suse.com Related issues https://github.com/harvester/harvester/issues/3119 Dependent PR https://github.com/harvester/harvester/pull/3247 Converted to Draft until PR https://github.com/harvester/harvester/pull/3247 is merged
gharchive/pull-request
2022-12-06T15:49:27
2025-04-01T04:34:28.102274
{ "authors": [ "futuretea" ], "repo": "harvester/terraform-provider-harvester", "url": "https://github.com/harvester/terraform-provider-harvester/pull/63", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
344674181
A question on custom dataset class Thanks for sharing your project. I just have a small question from reading your code. In class SiameseNetworkDataset(Dataset):

    def __getitem__(self, index):
        img0_tuple = random.choice(self.imageFolderDataset.imgs)
        # we need to make sure approx 50% of images are in the same class
        should_get_same_class = random.randint(0, 1)
        if should_get_same_class:
            while True:
                # keep looping till the same class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1] == img1_tuple[1]:
                    break
        else:
            while True:
                # keep looping till a different class image is found
                img1_tuple = random.choice(self.imageFolderDataset.imgs)
                if img0_tuple[1] != img1_tuple[1]:
                    break
        img1_tuple = random.choice(self.imageFolderDataset.imgs)

When should_get_same_class equals False, the code keeps looping until it finds an img1_tuple with a different label than img0_tuple, but then why do you use img1_tuple = random.choice(self.imageFolderDataset.imgs) again in the last row? Thank you very much. This is a bug. Thanks for pointing it out. If you would like to fix it, I will accept the pull request
gharchive/issue
2018-07-26T02:42:46
2025-04-01T04:34:28.114254
{ "authors": [ "Jetyau", "harveyslash" ], "repo": "harveyslash/Facial-Similarity-with-Siamese-Networks-in-Pytorch", "url": "https://github.com/harveyslash/Facial-Similarity-with-Siamese-Networks-in-Pytorch/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
312802129
Hello, I have a issues when I run multi robots exploration. Hello, I have a issues when run multi robots exploration. The merged map is not a properly position. And When launch "mutliple_simulated_largeMap.launch", there is always show an error about " not transfer from "word" to "robot1/map" ". Try this fork: https://github.com/hasauino/m-explore/tree/master/map_merge This fork is an older version of the map_merge package. The older version used to work fine. Yes, this work for me. Thank you! But I have another question as follow image show, there is not appear many RRT tree lines as the video show. What can do for it. Thanks. @Gaoee @hasauino How do you run the multi robot rrt_exploration? I am using ROS melodic. These are my steps: roslaunch rrt_exploration_tutorials multiple_simulated_house.launch roslaunch rrt_exploration_tutorials map_merge.launch roslaunch rrt_exploration three_robots.launch When I runroslaunch rrt_exploration three_robots.launch I get the following error: [ERROR] [1677001900.306743, 2623.350000]: bad callback: <function globalMap at 0x7feec434ccd0> Traceback (most recent call last): File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback cb(msg) File "/home/sgari/iral_research/sim_rrt_ws/src/rrt_exploration/scripts/filter.py", line 42, in globalMap [litraIndx])-namespace_init_count ValueError: invalid literal for int() with base 10: '_' [ERROR] [1677001900.308289, 2623.420000]: bad callback: <function globalMap at 0x7feec434ccd0> Traceback (most recent call last): File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback cb(msg) File "/home/-/iral_research/sim_rrt_ws/src/rrt_exploration/scripts/filter.py", line 42, in globalMap [litraIndx])-namespace_init_count ValueError: invalid literal for int() with base 10: '_'
gharchive/issue
2018-04-10T07:24:51
2025-04-01T04:34:28.118719
{ "authors": [ "Gaoee", "hasauino", "sandilyasg" ], "repo": "hasauino/rrt_exploration", "url": "https://github.com/hasauino/rrt_exploration/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1393451826
.tmp created Hi, I'm trying to use this "injector" but it creates a .tmp and that makes it really detectable. I'm not sure if it is possible to make it stop creating the .tmp, or if there is any way to edit it myself so it doesn't create it; I tried Visual Studio but it won't read the .exe. hi! please read the original description of this technique at: https://www.elastic.co/blog/process-ghosting-a-new-executable-image-tampering-attack This step is necessary for this technique: The created .tmp file is in the delete-pending state, which prevents the file from being opened by the external processes, including anti-malware scanners. oh ok, is there any alternative to process_ghosting that doesn't require the .tmp? yes, Process Doppelgänging (https://github.com/hasherezade/process_doppelganging) is very similar, but instead of the delete-pending file it uses a file within a transaction.
gharchive/issue
2022-10-01T15:54:43
2025-04-01T04:34:28.138011
{ "authors": [ "Jaimebuu", "hasherezade" ], "repo": "hasherezade/process_ghosting", "url": "https://github.com/hasherezade/process_ghosting/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2528854268
bug: block node emits error when sending a response if the producer has closed the connection When running a test where ghz sends a large block and then disconnects, an error is thrown to the log like:

    2024-09-16 09:38:36 Exception in thread "Thread-5" java.lang.RuntimeException: java.lang.IllegalStateException: Stream is already completed, no further calls are allowed
    2024-09-16 09:38:36 at com.lmax.disruptor/com.lmax.disruptor.FatalExceptionHandler.handleEventException(FatalExceptionHandler.java:35)
    2024-09-16 09:38:36 at com.lmax.disruptor/com.lmax.disruptor.BatchEventProcessor.handleEventException(BatchEventProcessor.java:254)
    2024-09-16 09:38:36 at com.lmax.disruptor/com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:193)
    2024-09-16 09:38:36 at com.lmax.disruptor/com.lmax.disruptor.BatchEventProcessor.run(BatchEventProcessor.java:122)
    2024-09-16 09:38:36 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
    2024-09-16 09:38:36 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
    2024-09-16 09:38:36 at java.base/java.lang.Thread.run(Thread.java:1583)
    2024-09-16 09:38:36 Caused by: java.lang.IllegalStateException: Stream is already completed, no further calls are allowed
    2024-09-16 09:38:36 at com.google.common@33.0.0-jre/com.google.common.base.Preconditions.checkState(Preconditions.java:512)
    2024-09-16 09:38:36 at io.grpc.stub@1.65.1/io.grpc.stub.ServerCalls$ServerCallStreamObserverImpl.onNext(ServerCalls.java:375)
    2024-09-16 09:38:36 at com.hedera.block.server/com.hedera.block.server.producer.ProducerBlockItemObserver.onEvent(ProducerBlockItemObserver.java:207)
    2024-09-16 09:38:36 at com.hedera.block.server/com.hedera.block.server.producer.ProducerBlockItemObserver.onEvent(ProducerBlockItemObserver.java:53)
    2024-09-16 09:38:36 at com.lmax.disruptor/com.lmax.disruptor.BatchEventProcessor.processEvents(BatchEventProcessor.java:167)
    2024-09-16 09:38:36 ... 4 more

I suspect this is because the ProducerBlockItemObserver attempts to send a block response but the producer (ghz) has already closed the connection. The grpc io interface for interacting with the underlying observer does not provide any status methods, but we might be able to cast to the implementation class under the hood and check isReady() before invoking a response when the connection is already closed. It would be good to get rid of these errors since they pollute the logs and will become noisy in production. @mattp-swirldslabs to confirm if this is invalid now as we are not using grpc.io anymore. Helidon does not expose methods to check socket availability. We may have to live with this for now. There's some activity on this ticket so perhaps they will provide the ability to check availability before sending data.
gharchive/issue
2024-09-16T15:56:22
2025-04-01T04:34:28.141849
{ "authors": [ "a-saksena", "mattp-swirldslabs" ], "repo": "hashgraph/hedera-block-node", "url": "https://github.com/hashgraph/hedera-block-node/issues/179", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1870669618
Demo application Problem Adding demo application using steps from https://docs.hedera.com/hedera/support-and-community/contributing-guide Demo application name: HederaToolsFX Developer/Maintainer name and GitHub username: shagus60 Link to the demo application GitHub repository: https://github.com/shagus60/HederaToolsFX Solution n/a Alternatives No response 👋 @shagus60 - thanks for your submission! A demo application does not exactly fit within this documentation repo... but we do have somewhere where this would indeed fit well: https://github.com/orgs/hedera-dev/repositories Try to either transfer your repo to that GitHub organisation, or create a new copy of it - your preference! LMK here in this thread if you have any questions, or run into permissions issues from GitHub, and I'll do my best to assist! Hi @shagus60, thanks for this demo application submission! The team will need to be able to run the application as part of the review process. Please update the README with explicit step-by-step instructions on project setup/configuration, installation, and how to run. Here is one example demo application README (https://github.com/hashgraph/hedera-hcs-chat-js); you can find the rest on the docs Demo Application page for additional examples (https://docs.hedera.com/hedera/tutorials/demo-applications). Setting Up and Running the Project: 1. Download JDK 17 or a newer version. 2. Download Apache NetBeans IDE 18. 3. Clone the source code repository. 4. Open the project using NetBeans. 5. Execute the project. Upon successful execution, you should be presented with the specified screen (see attached image: how_to_run_pr.PNG).
gharchive/issue
2023-08-29T00:01:18
2025-04-01T04:34:28.152145
{ "authors": [ "bguiz", "shagus60", "theekrystallee" ], "repo": "hashgraph/hedera-docs", "url": "https://github.com/hashgraph/hedera-docs/issues/442", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1057920475
Update messages to support auto account creation Problem Hedera Services would like to support a new way to create accounts using an alias (public key). Solution Make the following changes in our protobufs: Add an alias public key to the existing AccountID. Any account should have a unique alias or accountNum set.

    message AccountID {
        int64 shardNum = 1;
        int64 realmNum = 2;
        oneof account {
            int64 accountNum = 3;
            Key alias = 4;
        }
    }

The account info should have information about the alias: bytes alias; Alternatives No response PRs are merged
gharchive/issue
2021-11-18T22:46:24
2025-04-01T04:34:28.154878
{ "authors": [ "Neeharika-Sompalli" ], "repo": "hashgraph/hedera-protobufs", "url": "https://github.com/hashgraph/hedera-protobufs/issues/118", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
249722105
Upgrade/wallaby Upgrade wallaby to version 18.1. HEADLESS CHROME! thanks :heart: :yellow_heart: :green_heart: :blue_heart: :purple_heart:
gharchive/pull-request
2017-08-11T19:07:45
2025-04-01T04:34:28.609453
{ "authors": [ "hashrocketeer" ], "repo": "hashrocket/tilex", "url": "https://github.com/hashrocket/tilex/pull/140", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1549724834
Add support for both Light and Dark themes for OSWH Is your feature request related to a problem? Please describe. I would like to request that a toggle button be added with which the user can switch between light and dark mode. Describe the solution you'd like None Describe alternatives you've considered None Additional context Please mention this issue while raising the PR. Hey @hasnainmakada-99, I would like to work on this. Great Saksham, I'm assigning it to you ☺️
gharchive/issue
2023-01-19T18:30:13
2025-04-01T04:34:28.710807
{ "authors": [ "hasnainmakada-99", "yung-coder" ], "repo": "hasnainmakada-99/Open-Source-With-Hasnain", "url": "https://github.com/hasnainmakada-99/Open-Source-With-Hasnain/issues/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1146241396
Git remote branch clone repo locally typeorm script updated
gharchive/pull-request
2022-02-21T21:28:58
2025-04-01T04:34:28.711572
{ "authors": [ "irajAamir" ], "repo": "hassanazharkhan/express-wtith-typescript-boilerplate", "url": "https://github.com/hassanazharkhan/express-wtith-typescript-boilerplate/pull/661", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
327542803
Login page asking for Username/Password I installed this add-on in hass.io following the instructions and embedded it into Home Assistant, but when I click on it or select the "open web ui" button, it shows me a page where I have to put in a username and password to log in. Can someone help guide me on what to do next, or tell me if I did something wrong and wasn't supposed to see a login page? @codenamej6, this is not a login page, but a register page (the button says "Register", not login). This is what I see, which looks like a login page to me:
gharchive/issue
2018-05-30T01:40:28
2025-04-01T04:34:28.725224
{ "authors": [ "codenamej6", "frenck" ], "repo": "hassio-addons/addon-sonweb", "url": "https://github.com/hassio-addons/addon-sonweb/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1345105641
VS Code server addon does not start on fresh install Problem/Motivation Fresh installation on ProxMox with tteck script; after initial login, install the VS Code Server addon. Expected behavior Start VS Code from the sidebar and edit the config. Actual behavior Message telling the addon is not running, please start it. Error log:

    Add-on version: 5.2.2
    You are running the latest version of this add-on.
    System: Home Assistant OS 8.4 (amd64 / qemux86-64)
    Home Assistant Core: 2022.8.6
    Home Assistant Supervisor: 2022.08.3
    [21:06:36] INFO: Starting the code server...
    [2022-08-19T19:09:04.990Z] info Wrote default config file to ~/.config/code-server/config.yaml
    [2022-08-19T19:09:20.758Z] error timed out
    [21:09:23] WARNING: code-server crashed, halting add-on

Steps to reproduce Install a Home Assistant OS VM with standard settings on ProxMox 7 and, after login, install the VS Code server. Proposed changes (If you have a proposed change, workaround or fix, describe the rationale behind it) Looks like I have a wonky installation, so first solve that.
gharchive/issue
2022-08-20T09:28:21
2025-04-01T04:34:28.728467
{ "authors": [ "daballiemo" ], "repo": "hassio-addons/addon-vscode-remote", "url": "https://github.com/hassio-addons/addon-vscode-remote/issues/117", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
737907086
BE: Game Engine V2 to_json method: include all the necessary information relayed to the frontend. assign method: assign one user to one role so that it can be relayed to the frontend. Close this issue by pull request https://github.com/hatchways/team-fruit-loops/pull/31
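Purely as an illustration of the shape of those two methods (a Python sketch with hypothetical names; the project's actual backend stack and fields may differ):

    class GameEngine:
        def __init__(self):
            self.players = {}   # player name -> assigned role
            self.state = "lobby"

        def assign(self, player, role):
            """Assign one user to one role so the mapping can be relayed to the frontend."""
            self.players[player] = role

        def to_json(self):
            """Return everything the frontend needs to render the current game state."""
            return {"state": self.state, "players": self.players}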
gharchive/issue
2020-11-06T16:56:06
2025-04-01T04:34:28.751659
{ "authors": [ "mpmpx", "sina-jamshidi" ], "repo": "hatchways/team-fruit-loops", "url": "https://github.com/hatchways/team-fruit-loops/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1346881814
Make vaults ownable individually Adds a HATVault owner role. This is WIP to address #290
gharchive/pull-request
2022-08-22T19:28:56
2025-04-01T04:34:28.755885
{ "authors": [ "ben-kaufman", "jellegerbrandy" ], "repo": "hats-finance/hats-contracts", "url": "https://github.com/hats-finance/hats-contracts/pull/295", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
166113335
HWKAPM-449 Include basic zipkin server to intercept span data and convert for APM use @pavolloffay This work is considered experimental at the moment. I went through it, it looks ok; in the next steps we should definitely provide more tests. @pavolloffay Thanks Pavol! Definitely agree, more tests required.
gharchive/pull-request
2016-07-18T14:54:45
2025-04-01T04:34:28.831759
{ "authors": [ "objectiser", "pavolloffay" ], "repo": "hawkular/hawkular-apm", "url": "https://github.com/hawkular/hawkular-apm/pull/480", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
255454513
Enables high priority for processes when benchmarking Haven't actually tested on Linux yet, but it will definitely pass 20 to the function, which the docs say is correct. I don't have good answers for the failing checks... looks like PyPy isn't installed, and flake8 doesn't know how to count to two... Oh, the pypy job is broken. It should be removed, or fixed. Isn't 20 the lowest priority? Isn't 20 the lowest priority? Oh right. Didn't actually see the comment before I made the fix - I was closing browser windows and re-read the docs :) I removed PyPy from Travis CI: commit 4802fe5add2d26372a91bea0142baf98a25065d1, so if you rebase/merge with master, tests should pass. Looks good to me in principle but I am not sure how Victor feels about depending on psutil. For Unix/BSD, os.nice() can be used to not depend on psutil. For Windows, well, it's an optional dependency, it's fine. If too many users complain, we might provide a ctypes fallback later? @zooba: Please only modify the behaviour on Windows. I prefer to discuss using nice() on Unix later. By the way, it will help you to get a green Travis CI :-) I'm not sure about using os.nice(). For example, it fails on Travis CI :-) Thank you @zooba for this nice enhancement. nice() can't increase priority unless the process is run with root privileges.
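For context, a minimal sketch of the kind of cross-platform priority bump being discussed (not the actual perf patch): psutil's HIGH_PRIORITY_CLASS on Windows, and a plain os.nice() call elsewhere, which does fail without privileges, as noted above.

    import os
    import sys

    def raise_priority():
        """Best-effort priority bump for benchmarking; never raises on failure."""
        try:
            if sys.platform == "win32":
                import psutil  # optional dependency, as debated in the thread
                psutil.Process().nice(psutil.HIGH_PRIORITY_CLASS)
            else:
                # On Unix, lower niceness means higher priority; needs privileges.
                os.nice(-5)
        except Exception:  # PermissionError, psutil.AccessDenied, missing psutil, ...
            pass  # run at normal priority if we are not allowed to change it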
gharchive/pull-request
2017-09-06T01:32:42
2025-04-01T04:34:28.860280
{ "authors": [ "giampaolo", "haypo", "methane", "serhiy-storchaka", "zooba" ], "repo": "haypo/perf", "url": "https://github.com/haypo/perf/pull/33", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
45464857
How to auto scroll it with animation? FYI: with the method viewPagerInstance.setCurrentIndicator(index), it scrolls with no animation. Can anyone help me achieve it? Thanks a lot! won't do
gharchive/issue
2014-10-10T09:57:31
2025-04-01T04:34:28.861517
{ "authors": [ "ArronXY", "hayribakici" ], "repo": "hayribakici/infiniteviewpager", "url": "https://github.com/hayribakici/infiniteviewpager/issues/8", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
366676440
Thread termination passed argument memory leak fix Fix for leaking memory on thread close. The void pointer argument passed to the thread should be freed before thread exits. The leak was detected using valgrind. fixes https://github.com/hazelcast/hazelcast-cpp-client/issues/455 Linux test FAILed. Windows test PASSed. Windows test FAILed. Linux test PASSed. Windows test FAILed. Linux test FAILed. Windows test FAILed. Linux test FAILed. Linux test FAILed. Windows test FAILed. Windows test PASSed. Linux test PASSed.
gharchive/pull-request
2018-10-04T08:28:32
2025-04-01T04:34:28.865596
{ "authors": [ "devOpsHazelcast", "ihsandemir" ], "repo": "hazelcast/hazelcast-cpp-client", "url": "https://github.com/hazelcast/hazelcast-cpp-client/pull/456", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
103866826
ocamlbrew requires m4 ocamlbrew fails with the following error in /tmp/ocamlbrew.RANDOMID: =-=- ocamlfind.1.5.5 troubleshooting =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= => Could not build ocamlfind. The most common reason for that is a missing 'm4' system package. Best would be to check at the beginning of the script if m4 is present. Good point. If ocamlfind support is going to stick around directly in ocamlbrew it's worth adding a check for prerequisites.
gharchive/issue
2015-08-29T17:52:31
2025-04-01T04:34:28.893307
{ "authors": [ "hbbio", "hcarty" ], "repo": "hcarty/ocamlbrew", "url": "https://github.com/hcarty/ocamlbrew/issues/28", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1230524166
Big-M Tightness IntervalArithmetic won't necessarily give you the tightest Big-M value for nonlinear GDP. Need to throw an error when Big-M is applied to a nonlinear constraint and no M value was provided. A possible feature would be to build and solve an NLP to get the maximum big-M value for the nonlinear constraint.
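For reference, a sketch of the quantity being discussed, in generic notation (assuming a disjunct constraint g(x) <= 0 over bounded variables and a binary indicator y):

    M^{*} = \max_{x^{L} \le x \le x^{U}} g(x), \qquad g(x) \le M^{*}\,(1 - y)

Interval arithmetic only bounds this maximum from above, which is why solving the NLP can give a tighter (smaller) M.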
gharchive/issue
2022-05-10T02:56:16
2025-04-01T04:34:28.901420
{ "authors": [ "hdavid16" ], "repo": "hdavid16/DisjunctiveProgramming.jl", "url": "https://github.com/hdavid16/DisjunctiveProgramming.jl/issues/42", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2149505523
feat(rec): add new crate tracing-rec This is the initial (incomplete) implementation of the tracing-rec crate. This crate provides a tracing-subscriber layer which will record (hence "rec") traces it receives. The idea is to record data as close as possible to what the layer receives and then serialize it into a format that can later be read by tracing-replay and replayed (hence "replay") into a tracing dispatcher. The initial implementation records calls to all the methods on Layer which are needed to reproduce the same sequence elsewhere. They are serialized into JSON (because it allows for easy visual debugging) and written to stdout. Two examples, events and spans have been included which record events and spans respectively so that all the methods on Layer are invoked between the two of them. I pushed the wrong thing.
gharchive/pull-request
2024-02-22T16:54:20
2025-04-01T04:34:28.904396
{ "authors": [ "hds" ], "repo": "hds/tracing-rec-replay", "url": "https://github.com/hds/tracing-rec-replay/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2758779950
🛑 stagin-env is down In 0829bc1, stagin-env (https://staging.lessonpal.com) was down: HTTP code: 0 Response time: 0 ms Resolved: stagin-env is back up in dec1adc after 25 minutes.
gharchive/issue
2024-12-25T12:32:00
2025-04-01T04:34:28.907097
{ "authors": [ "heIsThePirate" ], "repo": "heIsThePirate/upptime-lessonpal", "url": "https://github.com/heIsThePirate/upptime-lessonpal/issues/1430", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1109884521
Fix method redefined warnings Environment Ruby ruby 3.1.0p0 (2021-12-25 revision fb4df44d16) [arm64-darwin21.2.0] Rails 7.0.1 Devise 4.8.1 Current behavior Hello. :wave: I've been seeing the following in my logs when running specs:

    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:116: warning: method redefined; discarding old authenticate_user!
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:116: warning: previous definition of authenticate_user! was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:121: warning: method redefined; discarding old user_signed_in?
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:121: warning: previous definition of user_signed_in? was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:125: warning: method redefined; discarding old current_user
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:125: warning: previous definition of current_user was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:129: warning: method redefined; discarding old user_session
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:129: warning: previous definition of user_session was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: method redefined; discarding old current_user
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: previous definition of current_user was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: method redefined; discarding old user_signed_in?
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: previous definition of user_signed_in? was here
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: method redefined; discarding old user_session
    /Users/bkuhlmann/.cache/frum/versions/3.1.0/lib/ruby/gems/3.1.0/gems/devise-4.8.1/lib/devise/controllers/helpers.rb:136: warning: previous definition of user_session was here

Expected behavior I would expect those warnings to not be there, but it seems like methods are being constantly redefined? Steps to Recreate You should be able to easily recreate this issue by launching your test suite with warnings enabled. For instance, in RSpec this looks like the following:

    RSpec.configure do |config|
      config.warnings = true
    end

@bkuhlmann are you still experiencing those? I tried on an app here and didn't see them running some controller tests, but I did see a few others from devise related to instance variables not initialized that I can look at. Hey Carlos, thanks. I'm afraid I'm no longer doing client work with Devise so I can't easily confirm this anymore. Back when I was working with this version of Devise, the behavior was constant.
I would assume you'd see this in Devise's own tests with RSpec warnings enabled? Maybe if you set $VERBOSE = true before running your test suite, you'd be able to easily recreate it within Devise itself? Alright, appreciate the feedback, and sorry about taking just over 1 year to reply back 😅 I will try with a few other variations of verbose / warnings; so far all I've got are those I mentioned above, and enabling warnings here gives me a few method-redefined warnings from warden too, but that's about it. Thanks! Yeah, thanks for getting back. :bow: Maybe this is already fixed in the current version and the issue is resolved (that'd be nice). Based on what you are seeing, it seems safe to close this for now so I'll do that. I can always reopen if I bump into this again. :wink:
gharchive/issue
2022-01-21T00:07:45
2025-04-01T04:34:28.925277
{ "authors": [ "bkuhlmann", "carlosantoniodasilva" ], "repo": "heartcombo/devise", "url": "https://github.com/heartcombo/devise/issues/5459", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
241910536
The effect of Difficult button This pull contains: Difficult checkbox to label a difficult or occluded object (saved in shape)

    <object>
        <name>cat</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        **<difficult>1</difficult>**
        <bndbox>
            <xmin>141</xmin>
            <ymin>459</ymin>
            <xmax>343</xmax>
            <ymax>719</ymax>
        </bndbox>
    </object>

I don't know the meaning and the effect. Can anyone help me? From my point of view, the 'difficult' tag is designed for object detection to exclude the samples labeled as difficult, so if you select the difficult attribute for an object, it will not be considered in training of the model? @tzutalin @srafay No, I don't believe that's correct. @315386775 Here's an explanation of the difficult attribute: "The difficult field being set to 1 indicates that the object has been annotated as "difficult", for example an object which is clearly visible but difficult to recognize without substantial use of context." - Source @vdalv alright thanks! @vdalv nice explanation, thanks.
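For anyone wondering what downstream code does with the flag: a small sketch of how a Pascal-VOC-style training or evaluation script might parse it and optionally skip difficult boxes. This is illustrative only, not labelImg's own code; whether difficult objects are used is entirely up to the consuming pipeline.

    import xml.etree.ElementTree as ET

    def load_boxes(xml_path, keep_difficult=False):
        """Return [(name, difficult, (xmin, ymin, xmax, ymax))] from a VOC annotation file."""
        boxes = []
        root = ET.parse(xml_path).getroot()
        for obj in root.iter("object"):
            difficult = int(obj.findtext("difficult", default="0"))
            if difficult and not keep_difficult:
                continue  # many eval protocols ignore difficult objects rather than count them as errors
            bb = obj.find("bndbox")
            coords = tuple(int(bb.findtext(tag)) for tag in ("xmin", "ymin", "xmax", "ymax"))
            boxes.append((obj.findtext("name"), difficult, coords))
        return boxes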
gharchive/issue
2017-07-11T03:01:21
2025-04-01T04:34:28.928959
{ "authors": [ "315386775", "karenkao", "srafay", "tzutalin", "vdalv" ], "repo": "heartexlabs/labelImg", "url": "https://github.com/heartexlabs/labelImg/issues/113", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1020330943
Window messages aren't sorted, resulting in errors when a window is only active for a single frame When using screenshot tools like ShareX, there's weird window stuff happening: a temporary screenshot window appears for a single frame and is removed again immediately. This seems to cause issues with the UwcManager and the window messages – sometimes the "added" messages appear after the "removed" messages, and sometimes there is no "removed" message at all. Here's a screenshot with the events being logged - the first number is the frame index: To reproduce:

1. In UwcWindowList.cs, add "listItem.Enable();" as the last line in OnWindowAdded to add every found window automatically.
2. Download and install https://getsharex.com/
3. Make a screenshot with ShareX while Uwc is running.
4. Note you'll get a broken white window (no texture).

Thank you for the report. This bug was not actually caused by the sort order, but by the management of list items. Specifically, in this sample, it was a simple bug that the window object in Unity did not disappear when the actual window was removed. But as you pointed out, there was a potential bug related to the sort order, so I've fixed it as well. Could you please check it?
gharchive/issue
2021-10-07T18:20:23
2025-04-01T04:34:28.943782
{ "authors": [ "hecomi", "hybridherbst" ], "repo": "hecomi/uWindowCapture", "url": "https://github.com/hecomi/uWindowCapture/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
767848419
Add an option to print the default sources I am using hblock to download lists one at a time. I currently have a list of hblock's built-in sources defined. It would be nice if I could call hblock with an option to get the built-in sources so I always get the correct sources. Output that is URLs separated by newlines would be perfect. Hi, You could try with: grep "https://raw.githubusercontent.com/hectorm/hmirror/master/data/" "$(which hblock)" That will work, thanks!
gharchive/issue
2020-12-15T17:12:11
2025-04-01T04:34:28.945300
{ "authors": [ "KeyofBlueS", "fhriley" ], "repo": "hectorm/hblock", "url": "https://github.com/hectorm/hblock/issues/64", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
96269187
Force bind with dsa account or anonymous before a search operation I force a bind with the DSA account / anonymous account before a search operation because I think that the authenticate operation assigns the user bind.

    if ( $username && $password ){
        if ( @ldap_bind($this->_ch, $dn, $password) ){
            return true;
        }
    }

Sorry for getting back to you that late. Can you explain what issue you encountered and then tried to solve? The LDAP::search()-method should not be used without a prior bind. Therefore a bind should already have been done and there is no need to bind again. Thanks for your explanation! My ACL blocked my WordPress authentication. I think that the ldapsearch was attempted anonymously and not with the DSA account. Thanks @kadogo for bringing that to my attention. I solved it in a more general manner by calling the bind right after a successful authentication.
gharchive/pull-request
2015-07-21T09:21:02
2025-04-01T04:34:28.976563
{ "authors": [ "heiglandreas", "kadogo" ], "repo": "heiglandreas/authLdap", "url": "https://github.com/heiglandreas/authLdap/pull/64", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1568807028
Data not loaded after HA restart After a HA restart, the integration does not fetch (or fail to fetch ?) the data. v1.2.1-beta1 Seen in the logs: 2023-02-02 22:15:55.366 ERROR (RTE Tempo API Worker) [custom_components.rtetempo.api_worker] JSON parsing error on a HTTP 200 request: Expecting value: line 1 column 1 (char 0) I have to manually reload the extension, or wait, after a (long) time, it refreshes This is the error message of the fix we talked about on #3 . This means this fix is indeed what was needed to avoid crashing. So in a way, this is good news. Now about the error itself, if the RTE API is having a hard time I can not do anything about it :( After an error like this, the extension will retry querying the API 10 minutes later. You are unlucky because you got a RTE API error at startup, if Home Assistant would have been already running the previous data would have been use to keep the sensors with their value as long as possible. I am still curious of what the API returns in its 200 request which is not the expected JSON... Understood! Thanks for the explanation! Could this 10min delay cause some issues if during that time prices are not available ? I.e. during a switch from hp/hc or from a white to red day for example. I've juste tested 2 consecutive restarts and there is no error. Like you said, I was just unlucky ;-) Could this 10min delay cause some issues if during that time prices are not available ? I.e. during a switch from hp/hc or from a white to red day for example. Definitely. If the RTE API is down when you start your Home Assistant, you won't have what you need for your template sensor. But this is an edge case, the most likely scenario is the API going down while your Home Assistant is already running: previous retreived data will still be in memory and sensors will still be able to use it. But if the RTE API failure lasts more than 24h to 48h cached data will not cover the next day becoming the actual day and without this particular day value, even the "actual day" color will switch to unknown. A quick example: we are the 2nd and you got the next day 3rd color around 6am. Data is kept within memory by the integration. By 1 pm, RTE API goes down. This is not an issue because the integration already got what it needs and will only retry to fetch new data the 2nd around 6am. If by that time the RTE API is still down, the next day color will become unavailable because the cached data does not contains any info about the 3rd, But you do have the color for the 2nd, so current day color will change as expected. But if the 3rd at 6am the RTE API is still down, you will have undefined "current day" color. This means that once the integration does have data, it is fairly resilient to incident on the RTE API... but for a day or two max :) thanks :-) I didn't know there was a little cache ;-) Just a quick note... I am not sure this if fixed correctly after a startup. Since yesterday I go the error 3 times already, 2 times after a HA restart after a config change and 1 time after a core update. It's like the integration is failing to do the call after startup, and consequently has to wait 10 min. I was wondering if there could be a way otherwise at startup to have an exponential backoff, like we try, it fails, wait for 1min, it fails, wait 2 times more, it fails, wait 2 times more, etc until a max of 10min. 
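A minimal sketch of the exponential backoff being suggested above (illustrative only, not the integration's actual api_worker code): retry delays of 1, 2, 4, 8 minutes, capped at 10 minutes.

    import time

    def fetch_with_backoff(fetch, base_delay=60, max_delay=600):
        """Call fetch() until it succeeds, doubling the wait between attempts (cap: max_delay seconds)."""
        delay = base_delay
        while True:
            try:
                return fetch()
            except Exception:  # e.g. the HTTP-200-but-not-JSON case described in this thread
                time.sleep(delay)
                delay = min(delay * 2, max_delay)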
I checked my logs and I did not have a single one :( I expanded the log on the error in v1.2.1 to better understand what is happening with the RTE API: https://github.com/hekmon/rtetempo/commit/b01fc7c4e373d93ad9d0bec44b4b5ea053178452 But since I did not encountered the error again (yet ?) I still do not have a clue. Could you share yours when it happen again using v1.2.1 ? I was wondering if there could be a way otherwise at startup to have an exponential backoff, like we try, it fails, wait for 1min, it fails, wait 2 times more, it fails, wait 2 times more, etc until a max of 10min. I have to go abroad for work for a certain time so I won't be able to implement this just yet. But I will think about it. Also, as a precision, it is more than 10min actually: The first request failed at 9h24 because the data returned is in XML format: Cette erreur provient d'une intégration personnalisée Logger: custom_components.rtetempo.api_worker Source: custom_components/rtetempo/api_worker.py:94 Integration: RTE Tempo (documentation, issues) First occurred: 09:24:17 (1 occurrences) Last logged: 09:24:17 JSON parsing error on a HTTP 200 request (Expecting value: line 1 column 1 (char 0)): <Tempos><Tempo><DateHeureCreation>2023-02-03</DateHeureCreation><DateApplication>2023-02-04</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-02-02</DateHeureCreation><DateApplication>2023-02-03</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-02-01</DateHeureCreation><DateApplication>2023-02-02</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-31</DateHeureCreation><DateApplication>2023-02-01</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-30</DateHeureCreation><DateApplication>2023-01-31</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-29</DateHeureCreation><DateApplication>2023-01-30</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-28</DateHeureCreation><DateApplication>2023-01-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-27</DateHeureCreation><DateApplication>2023-01-28</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-26</DateHeureCreation><DateApplication>2023-01-27</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-25</DateHeureCreation><DateApplication>2023-01-26</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-24</DateHeureCreation><DateApplication>2023-01-25</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-23</DateHeureCreation><DateApplication>2023-01-24</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-22</DateHeureCreation><DateApplication>2023-01-23</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-21</DateHeureCreation><DateApplication>2023-01-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-20</DateHeureCreation><DateApplication>2023-01-21</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-19</DateHeureCreation><DateApplication>2023-01-20</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-18</DateHeureCreation><DateApplication>2023-01-19</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-17</DateHeureCreation><DateApplication>2023-01-18</DateApplic
ation><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-16</DateHeureCreation><DateApplication>2023-01-17</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-15</DateHeureCreation><DateApplication>2023-01-16</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-14</DateHeureCreation><DateApplication>2023-01-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-13</DateHeureCreation><DateApplication>2023-01-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-12</DateHeureCreation><DateApplication>2023-01-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-11</DateHeureCreation><DateApplication>2023-01-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-10</DateHeureCreation><DateApplication>2023-01-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-09</DateHeureCreation><DateApplication>2023-01-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-08</DateHeureCreation><DateApplication>2023-01-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-07</DateHeureCreation><DateApplication>2023-01-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-06</DateHeureCreation><DateApplication>2023-01-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-05</DateHeureCreation><DateApplication>2023-01-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-04</DateHeureCreation><DateApplication>2023-01-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-03</DateHeureCreation><DateApplication>2023-01-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-02</DateHeureCreation><DateApplication>2023-01-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2023-01-01</DateHeureCreation><DateApplication>2023-01-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-31</DateHeureCreation><DateApplication>2023-01-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-30</DateHeureCreation><DateApplication>2022-12-31</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-29</DateHeureCreation><DateApplication>2022-12-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-28</DateHeureCreation><DateApplication>2022-12-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-27</DateHeureCreation><DateApplication>2022-12-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-26</DateHeureCreation><DateApplication>2022-12-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-25</DateHeureCreation><DateApplication>2022-12-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-24</DateHeureCreation><DateApplication>2022-12-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-23</DateHeureCreation><DateApplication>2022-12-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-22</DateHeureCreation><DateApplication>2022-12-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-21</DateHeureCreation><DateApplication>2022-12-22
</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-20</DateHeureCreation><DateApplication>2022-12-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-19</DateHeureCreation><DateApplication>2022-12-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-18</DateHeureCreation><DateApplication>2022-12-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-17</DateHeureCreation><DateApplication>2022-12-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-16</DateHeureCreation><DateApplication>2022-12-17</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-15</DateHeureCreation><DateApplication>2022-12-16</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-14</DateHeureCreation><DateApplication>2022-12-15</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-13</DateHeureCreation><DateApplication>2022-12-14</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-12</DateHeureCreation><DateApplication>2022-12-13</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-11</DateHeureCreation><DateApplication>2022-12-12</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-10</DateHeureCreation><DateApplication>2022-12-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-09</DateHeureCreation><DateApplication>2022-12-10</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-08</DateHeureCreation><DateApplication>2022-12-09</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-07</DateHeureCreation><DateApplication>2022-12-08</DateApplication><Couleur>ROUGE</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-06</DateHeureCreation><DateApplication>2022-12-07</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-05</DateHeureCreation><DateApplication>2022-12-06</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-04</DateHeureCreation><DateApplication>2022-12-05</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-03</DateHeureCreation><DateApplication>2022-12-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-02</DateHeureCreation><DateApplication>2022-12-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-12-01</DateHeureCreation><DateApplication>2022-12-02</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-30</DateHeureCreation><DateApplication>2022-12-01</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-29</DateHeureCreation><DateApplication>2022-11-30</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-28</DateHeureCreation><DateApplication>2022-11-29</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-27</DateHeureCreation><DateApplication>2022-11-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-26</DateHeureCreation><DateApplication>2022-11-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-25</DateHeureCreation><DateApplication>2022-11-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-24</DateHeureCreation><D
ateApplication>2022-11-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-23</DateHeureCreation><DateApplication>2022-11-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-22</DateHeureCreation><DateApplication>2022-11-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-21</DateHeureCreation><DateApplication>2022-11-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-20</DateHeureCreation><DateApplication>2022-11-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-19</DateHeureCreation><DateApplication>2022-11-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-18</DateHeureCreation><DateApplication>2022-11-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-17</DateHeureCreation><DateApplication>2022-11-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-16</DateHeureCreation><DateApplication>2022-11-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-15</DateHeureCreation><DateApplication>2022-11-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-14</DateHeureCreation><DateApplication>2022-11-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-13</DateHeureCreation><DateApplication>2022-11-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-12</DateHeureCreation><DateApplication>2022-11-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-11</DateHeureCreation><DateApplication>2022-11-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-10</DateHeureCreation><DateApplication>2022-11-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-09</DateHeureCreation><DateApplication>2022-11-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-08</DateHeureCreation><DateApplication>2022-11-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-07</DateHeureCreation><DateApplication>2022-11-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-06</DateHeureCreation><DateApplication>2022-11-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-05</DateHeureCreation><DateApplication>2022-11-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-04</DateHeureCreation><DateApplication>2022-11-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-03</DateHeureCreation><DateApplication>2022-11-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-02</DateHeureCreation><DateApplication>2022-11-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-11-01</DateHeureCreation><DateApplication>2022-11-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-31</DateHeureCreation><DateApplication>2022-11-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-30</DateHeureCreation><DateApplication>2022-10-31</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-29</DateHeureCreation><DateApplication>2022-10-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-28</DateHeureCr
eation><DateApplication>2022-10-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-27</DateHeureCreation><DateApplication>2022-10-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-26</DateHeureCreation><DateApplication>2022-10-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-25</DateHeureCreation><DateApplication>2022-10-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-24</DateHeureCreation><DateApplication>2022-10-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-23</DateHeureCreation><DateApplication>2022-10-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-22</DateHeureCreation><DateApplication>2022-10-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-21</DateHeureCreation><DateApplication>2022-10-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-20</DateHeureCreation><DateApplication>2022-10-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-19</DateHeureCreation><DateApplication>2022-10-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-18</DateHeureCreation><DateApplication>2022-10-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-17</DateHeureCreation><DateApplication>2022-10-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-16</DateHeureCreation><DateApplication>2022-10-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-15</DateHeureCreation><DateApplication>2022-10-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-14</DateHeureCreation><DateApplication>2022-10-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-13</DateHeureCreation><DateApplication>2022-10-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-12</DateHeureCreation><DateApplication>2022-10-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-11</DateHeureCreation><DateApplication>2022-10-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-10</DateHeureCreation><DateApplication>2022-10-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-09</DateHeureCreation><DateApplication>2022-10-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-08</DateHeureCreation><DateApplication>2022-10-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-07</DateHeureCreation><DateApplication>2022-10-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-06</DateHeureCreation><DateApplication>2022-10-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-05</DateHeureCreation><DateApplication>2022-10-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-04</DateHeureCreation><DateApplication>2022-10-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-03</DateHeureCreation><DateApplication>2022-10-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-02</DateHeureCreation><DateApplication>2022-10-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-10-01</Da
teHeureCreation><DateApplication>2022-10-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-30</DateHeureCreation><DateApplication>2022-10-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-29</DateHeureCreation><DateApplication>2022-09-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-28</DateHeureCreation><DateApplication>2022-09-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-27</DateHeureCreation><DateApplication>2022-09-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-26</DateHeureCreation><DateApplication>2022-09-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-25</DateHeureCreation><DateApplication>2022-09-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-24</DateHeureCreation><DateApplication>2022-09-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-23</DateHeureCreation><DateApplication>2022-09-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-22</DateHeureCreation><DateApplication>2022-09-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-21</DateHeureCreation><DateApplication>2022-09-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-20</DateHeureCreation><DateApplication>2022-09-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-19</DateHeureCreation><DateApplication>2022-09-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-18</DateHeureCreation><DateApplication>2022-09-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-17</DateHeureCreation><DateApplication>2022-09-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-16</DateHeureCreation><DateApplication>2022-09-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-15</DateHeureCreation><DateApplication>2022-09-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-14</DateHeureCreation><DateApplication>2022-09-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-13</DateHeureCreation><DateApplication>2022-09-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-12</DateHeureCreation><DateApplication>2022-09-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-11</DateHeureCreation><DateApplication>2022-09-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-10</DateHeureCreation><DateApplication>2022-09-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-09</DateHeureCreation><DateApplication>2022-09-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-08</DateHeureCreation><DateApplication>2022-09-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-07</DateHeureCreation><DateApplication>2022-09-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-06</DateHeureCreation><DateApplication>2022-09-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-05</DateHeureCreation><DateApplication>2022-09-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-
09-04</DateHeureCreation><DateApplication>2022-09-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-03</DateHeureCreation><DateApplication>2022-09-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-02</DateHeureCreation><DateApplication>2022-09-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-09-01</DateHeureCreation><DateApplication>2022-09-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-31</DateHeureCreation><DateApplication>2022-09-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-30</DateHeureCreation><DateApplication>2022-08-31</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-29</DateHeureCreation><DateApplication>2022-08-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-28</DateHeureCreation><DateApplication>2022-08-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-27</DateHeureCreation><DateApplication>2022-08-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-26</DateHeureCreation><DateApplication>2022-08-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-25</DateHeureCreation><DateApplication>2022-08-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-24</DateHeureCreation><DateApplication>2022-08-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-23</DateHeureCreation><DateApplication>2022-08-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-22</DateHeureCreation><DateApplication>2022-08-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-21</DateHeureCreation><DateApplication>2022-08-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-20</DateHeureCreation><DateApplication>2022-08-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-19</DateHeureCreation><DateApplication>2022-08-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-18</DateHeureCreation><DateApplication>2022-08-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-17</DateHeureCreation><DateApplication>2022-08-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-16</DateHeureCreation><DateApplication>2022-08-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-15</DateHeureCreation><DateApplication>2022-08-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-14</DateHeureCreation><DateApplication>2022-08-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-13</DateHeureCreation><DateApplication>2022-08-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-12</DateHeureCreation><DateApplication>2022-08-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-11</DateHeureCreation><DateApplication>2022-08-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-10</DateHeureCreation><DateApplication>2022-08-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-09</DateHeureCreation><DateApplication>2022-08-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreat
ion>2022-08-08</DateHeureCreation><DateApplication>2022-08-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-07</DateHeureCreation><DateApplication>2022-08-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-06</DateHeureCreation><DateApplication>2022-08-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-05</DateHeureCreation><DateApplication>2022-08-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-04</DateHeureCreation><DateApplication>2022-08-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-03</DateHeureCreation><DateApplication>2022-08-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-02</DateHeureCreation><DateApplication>2022-08-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-08-01</DateHeureCreation><DateApplication>2022-08-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-31</DateHeureCreation><DateApplication>2022-08-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-30</DateHeureCreation><DateApplication>2022-07-31</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-29</DateHeureCreation><DateApplication>2022-07-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-28</DateHeureCreation><DateApplication>2022-07-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-27</DateHeureCreation><DateApplication>2022-07-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-26</DateHeureCreation><DateApplication>2022-07-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-25</DateHeureCreation><DateApplication>2022-07-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-24</DateHeureCreation><DateApplication>2022-07-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-23</DateHeureCreation><DateApplication>2022-07-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-22</DateHeureCreation><DateApplication>2022-07-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-21</DateHeureCreation><DateApplication>2022-07-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-20</DateHeureCreation><DateApplication>2022-07-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-19</DateHeureCreation><DateApplication>2022-07-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-18</DateHeureCreation><DateApplication>2022-07-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-17</DateHeureCreation><DateApplication>2022-07-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-16</DateHeureCreation><DateApplication>2022-07-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-15</DateHeureCreation><DateApplication>2022-07-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-14</DateHeureCreation><DateApplication>2022-07-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-13</DateHeureCreation><DateApplication>2022-07-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateH
eureCreation>2022-07-12</DateHeureCreation><DateApplication>2022-07-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-11</DateHeureCreation><DateApplication>2022-07-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-10</DateHeureCreation><DateApplication>2022-07-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-09</DateHeureCreation><DateApplication>2022-07-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-08</DateHeureCreation><DateApplication>2022-07-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-07</DateHeureCreation><DateApplication>2022-07-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-06</DateHeureCreation><DateApplication>2022-07-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-05</DateHeureCreation><DateApplication>2022-07-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-04</DateHeureCreation><DateApplication>2022-07-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-03</DateHeureCreation><DateApplication>2022-07-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-02</DateHeureCreation><DateApplication>2022-07-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-07-01</DateHeureCreation><DateApplication>2022-07-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-30</DateHeureCreation><DateApplication>2022-07-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-29</DateHeureCreation><DateApplication>2022-06-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-28</DateHeureCreation><DateApplication>2022-06-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-27</DateHeureCreation><DateApplication>2022-06-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-26</DateHeureCreation><DateApplication>2022-06-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-25</DateHeureCreation><DateApplication>2022-06-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-24</DateHeureCreation><DateApplication>2022-06-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-23</DateHeureCreation><DateApplication>2022-06-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-22</DateHeureCreation><DateApplication>2022-06-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-21</DateHeureCreation><DateApplication>2022-06-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-20</DateHeureCreation><DateApplication>2022-06-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-19</DateHeureCreation><DateApplication>2022-06-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-18</DateHeureCreation><DateApplication>2022-06-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-17</DateHeureCreation><DateApplication>2022-06-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-16</DateHeureCreation><DateApplication>2022-06-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tem
po><DateHeureCreation>2022-06-15</DateHeureCreation><DateApplication>2022-06-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-14</DateHeureCreation><DateApplication>2022-06-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-13</DateHeureCreation><DateApplication>2022-06-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-12</DateHeureCreation><DateApplication>2022-06-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-11</DateHeureCreation><DateApplication>2022-06-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-10</DateHeureCreation><DateApplication>2022-06-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-09</DateHeureCreation><DateApplication>2022-06-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-08</DateHeureCreation><DateApplication>2022-06-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-07</DateHeureCreation><DateApplication>2022-06-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-06</DateHeureCreation><DateApplication>2022-06-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-05</DateHeureCreation><DateApplication>2022-06-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-04</DateHeureCreation><DateApplication>2022-06-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-03</DateHeureCreation><DateApplication>2022-06-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-02</DateHeureCreation><DateApplication>2022-06-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-06-01</DateHeureCreation><DateApplication>2022-06-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-31</DateHeureCreation><DateApplication>2022-06-01</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-30</DateHeureCreation><DateApplication>2022-05-31</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-29</DateHeureCreation><DateApplication>2022-05-30</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-28</DateHeureCreation><DateApplication>2022-05-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-27</DateHeureCreation><DateApplication>2022-05-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-26</DateHeureCreation><DateApplication>2022-05-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-25</DateHeureCreation><DateApplication>2022-05-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-24</DateHeureCreation><DateApplication>2022-05-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-23</DateHeureCreation><DateApplication>2022-05-24</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-22</DateHeureCreation><DateApplication>2022-05-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-21</DateHeureCreation><DateApplication>2022-05-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-20</DateHeureCreation><DateApplication>2022-05-21</DateApplication><Couleur>BLEU</Couleur
></Tempo><Tempo><DateHeureCreation>2022-05-19</DateHeureCreation><DateApplication>2022-05-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-18</DateHeureCreation><DateApplication>2022-05-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-17</DateHeureCreation><DateApplication>2022-05-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-16</DateHeureCreation><DateApplication>2022-05-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-15</DateHeureCreation><DateApplication>2022-05-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-14</DateHeureCreation><DateApplication>2022-05-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-13</DateHeureCreation><DateApplication>2022-05-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-12</DateHeureCreation><DateApplication>2022-05-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-11</DateHeureCreation><DateApplication>2022-05-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-10</DateHeureCreation><DateApplication>2022-05-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-09</DateHeureCreation><DateApplication>2022-05-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-08</DateHeureCreation><DateApplication>2022-05-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-07</DateHeureCreation><DateApplication>2022-05-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-06</DateHeureCreation><DateApplication>2022-05-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-05</DateHeureCreation><DateApplication>2022-05-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-04</DateHeureCreation><DateApplication>2022-05-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-03</DateHeureCreation><DateApplication>2022-05-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-02</DateHeureCreation><DateApplication>2022-05-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-05-01</DateHeureCreation><DateApplication>2022-05-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-30</DateHeureCreation><DateApplication>2022-05-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-29</DateHeureCreation><DateApplication>2022-04-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-28</DateHeureCreation><DateApplication>2022-04-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-27</DateHeureCreation><DateApplication>2022-04-28</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-26</DateHeureCreation><DateApplication>2022-04-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-25</DateHeureCreation><DateApplication>2022-04-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-24</DateHeureCreation><DateApplication>2022-04-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-23</DateHeureCreation><DateApplication>2022-04-24</DateApplication><Couleur>BLEU
</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-22</DateHeureCreation><DateApplication>2022-04-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-21</DateHeureCreation><DateApplication>2022-04-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-20</DateHeureCreation><DateApplication>2022-04-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-19</DateHeureCreation><DateApplication>2022-04-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-18</DateHeureCreation><DateApplication>2022-04-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-17</DateHeureCreation><DateApplication>2022-04-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-16</DateHeureCreation><DateApplication>2022-04-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-15</DateHeureCreation><DateApplication>2022-04-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-14</DateHeureCreation><DateApplication>2022-04-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-13</DateHeureCreation><DateApplication>2022-04-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-12</DateHeureCreation><DateApplication>2022-04-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-11</DateHeureCreation><DateApplication>2022-04-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-10</DateHeureCreation><DateApplication>2022-04-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-09</DateHeureCreation><DateApplication>2022-04-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-08</DateHeureCreation><DateApplication>2022-04-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-07</DateHeureCreation><DateApplication>2022-04-08</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-06</DateHeureCreation><DateApplication>2022-04-07</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-05</DateHeureCreation><DateApplication>2022-04-06</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-04</DateHeureCreation><DateApplication>2022-04-05</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-03</DateHeureCreation><DateApplication>2022-04-04</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-02</DateHeureCreation><DateApplication>2022-04-03</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-04-01</DateHeureCreation><DateApplication>2022-04-02</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-31</DateHeureCreation><DateApplication>2022-04-01</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-30</DateHeureCreation><DateApplication>2022-03-31</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-29</DateHeureCreation><DateApplication>2022-03-30</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-28</DateHeureCreation><DateApplication>2022-03-29</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-27</DateHeureCreation><DateApplication>2022-03-28</DateApplication><
Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-26</DateHeureCreation><DateApplication>2022-03-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-25</DateHeureCreation><DateApplication>2022-03-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-24</DateHeureCreation><DateApplication>2022-03-25</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-23</DateHeureCreation><DateApplication>2022-03-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-22</DateHeureCreation><DateApplication>2022-03-23</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-21</DateHeureCreation><DateApplication>2022-03-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-20</DateHeureCreation><DateApplication>2022-03-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-19</DateHeureCreation><DateApplication>2022-03-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-18</DateHeureCreation><DateApplication>2022-03-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-17</DateHeureCreation><DateApplication>2022-03-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-16</DateHeureCreation><DateApplication>2022-03-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-15</DateHeureCreation><DateApplication>2022-03-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-14</DateHeureCreation><DateApplication>2022-03-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-13</DateHeureCreation><DateApplication>2022-03-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-12</DateHeureCreation><DateApplication>2022-03-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-11</DateHeureCreation><DateApplication>2022-03-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-10</DateHeureCreation><DateApplication>2022-03-11</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-09</DateHeureCreation><DateApplication>2022-03-10</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-08</DateHeureCreation><DateApplication>2022-03-09</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-07</DateHeureCreation><DateApplication>2022-03-08</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-06</DateHeureCreation><DateApplication>2022-03-07</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-05</DateHeureCreation><DateApplication>2022-03-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-04</DateHeureCreation><DateApplication>2022-03-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-03</DateHeureCreation><DateApplication>2022-03-04</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-02</DateHeureCreation><DateApplication>2022-03-03</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-03-01</DateHeureCreation><DateApplication>2022-03-02</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-28</DateHeureCreation><DateApplication>2022-03-01</Date
Application><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-27</DateHeureCreation><DateApplication>2022-02-28</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-26</DateHeureCreation><DateApplication>2022-02-27</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-25</DateHeureCreation><DateApplication>2022-02-26</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-24</DateHeureCreation><DateApplication>2022-02-25</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-23</DateHeureCreation><DateApplication>2022-02-24</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-22</DateHeureCreation><DateApplication>2022-02-23</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-21</DateHeureCreation><DateApplication>2022-02-22</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-20</DateHeureCreation><DateApplication>2022-02-21</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-19</DateHeureCreation><DateApplication>2022-02-20</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-18</DateHeureCreation><DateApplication>2022-02-19</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-17</DateHeureCreation><DateApplication>2022-02-18</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-16</DateHeureCreation><DateApplication>2022-02-17</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-15</DateHeureCreation><DateApplication>2022-02-16</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-14</DateHeureCreation><DateApplication>2022-02-15</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-13</DateHeureCreation><DateApplication>2022-02-14</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-12</DateHeureCreation><DateApplication>2022-02-13</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-11</DateHeureCreation><DateApplication>2022-02-12</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-10</DateHeureCreation><DateApplication>2022-02-11</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-09</DateHeureCreation><DateApplication>2022-02-10</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-08</DateHeureCreation><DateApplication>2022-02-09</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-07</DateHeureCreation><DateApplication>2022-02-08</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-06</DateHeureCreation><DateApplication>2022-02-07</DateApplication><Couleur>BLANC</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-05</DateHeureCreation><DateApplication>2022-02-06</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-04</DateHeureCreation><DateApplication>2022-02-05</DateApplication><Couleur>BLEU</Couleur></Tempo><Tempo><DateHeureCreation>2022-02-03</DateHeureCreation><DateApplication>2022-02-04</DateApplication><Couleur>BLEU</Couleur></Tempo></Tempos> Could it be that the request is not properly adding the right accept header or query param ? Interesting! 
The API documentation states that JSON is the default format, so you should not have to specify a header to get JSON back. My bet is that one of the RTE API servers is misconfigured and responds with XML by default. Depending on where their load balancer forwards your request, you hit the wrong one. I will try to force the JSON Accept header in the request to see if it also forces the remote server to send the correct payload format. Fix released in https://github.com/hekmon/rtetempo/releases/tag/v1.2.2 Thanks again!
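For anyone else hitting this before upgrading, here is a minimal sketch of the workaround idea: send an explicit JSON Accept header so a misconfigured backend cannot fall back to XML. This is illustrative Python using the requests library, not the project's actual Go client, and the URL and token values below are placeholders rather than the real RTE endpoint or credentials:

```python
import requests

# Placeholder endpoint and token -- substitute the real RTE Tempo API URL and
# OAuth token; the point is only to show forcing the JSON Accept header.
TEMPO_URL = "https://example.invalid/open_api/tempo_like_supply_contract/v1/tempo_like_calendars"
TOKEN = "YOUR_OAUTH_TOKEN"

resp = requests.get(
    TEMPO_URL,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        # Explicitly ask for JSON so a misconfigured server cannot answer in XML.
        "Accept": "application/json",
    },
    timeout=10,
)
resp.raise_for_status()

content_type = resp.headers.get("Content-Type", "")
if "json" not in content_type:
    # Surface the problem instead of failing later in the JSON decoder.
    raise RuntimeError(f"Expected JSON but got {content_type!r}")

data = resp.json()
```

If I read the fix correctly, v1.2.2 applies the same idea inside the library itself.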
gharchive/issue
2023-02-02T21:27:01
2025-04-01T04:34:29.030691
{ "authors": [ "hekmon", "mathieucarbou" ], "repo": "hekmon/rtetempo", "url": "https://github.com/hekmon/rtetempo/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1655737755
Update to Jandex 3
Environment Details
Helidon Version: 4.x
Helidon SE or Helidon MP
JDK version:
OS:
Docker version (if applicable):
Problem Description
We might want to adopt the latest Jandex release. The Jandex 3 library (as of this writing, 3.0.5 is current) is now available. Further, there is a new release of the Jandex plug-in as well, also with a different groupID. The effect of the library change on Helidon would be in these general areas:
It has some incompatible API changes compared to Jandex 2. Some of our code and some code in 3rd-party libraries we currently depend on use APIs removed in Jandex 3.
Also the group ID changed from org.jboss to io.smallrye (although the package name remains org.jboss.jandex).
See https://smallrye.io/blog/jandex-3-0-0/ for a summary of the changes.
Several areas in Helidon would be affected by the Jandex library change.
[x] Update archetypes to generate pom.xml files depending on Jandex 3
[x] Update example pom.xml files for Jandex 3 groupID change
[x] Update integration pom.xml files for Jandex 3 groupID change
[x] Update helidon/microprofile pom.xml files for Jandex 3 groupID change
[x] Update microprofile/lra/jax-rs to avoid usage of APIs removed in Jandex 3
[x] Update tests pom.xml for Jandex 3 groupID change
[x] Update Jakarta persistence API to 3.1.0 (IIRC, because of the Hibernate update)
Tracked by a separate issue: #7448
We have worked around the Hibernate 6.1.7 dependency on Jandex 2 by excluding the dependency. Therefore I am closing this epic as complete. I have opened #7448 to track the Hibernate upgrade separately.
gharchive/issue
2023-04-05T14:28:06
2025-04-01T04:34:29.047548
{ "authors": [ "barchetta", "tjquinno" ], "repo": "helidon-io/helidon", "url": "https://github.com/helidon-io/helidon/issues/6561", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1363707800
added some Cate and rewrote makefile
I know most of you people have no idea what Cate is, basically it's a painless build system! While Cate is not really fast, it saves hours of Makefile pain! Cate only builds C/C++ though.
The C++ NBT Catefile is

Library nbt(dynamic);
nbt.compiler = "clang++";
nbt.flags = "-std=c++17 -Ofast -fasm -fms-extensions";
nbt.out = "build/libheliumcppnbt.so";
nbt.files = recursive("src/cpp/nbt/*.cpp");
nbt.includes = {"src/cpp/nbt/"};
nbt.object_folder = "obj/";
nbt.build();

Which is much easier to comprehend than

CPP_NBT_FILES := $(wildcard $(CPP_SRC)/nbt/*.cpp) $(C_NBT_FILES)
CPP_NBT_OBJECTS := $(patsubst $(CPP_NBT_FILES)/%.c, $(CPP_NBT_OBJ_DIR)/%.o, $(CPP_NBT_FILES))

cpp_nbt : $(CPP_NBT_OBJECTS)
	@$(C_COMPILER) -shared $^ -o $(BUILD)/libheliumcppnbt.so

$(CPP_NBT_OBJ_DIR)/%.o : $(CPP_NBT_FILES)
	@$(CPP_COMPILER) -I$(C_NBT_FILES) $(C_FLAGS) -c $< -o $@

Cate also creates directories automatically, saving the poor soul who has to deal with Makefile a bit of sanity.
Also, fixed your makefile
Hope this could help you! -yogurt (ae/aem)
make is dead 🦀
gharchive/pull-request
2022-09-06T19:13:38
2025-04-01T04:34:29.066900
{ "authors": [ "ExaInsanity", "TheMilkies" ], "repo": "helium-toolchain/helium-serialization", "url": "https://github.com/helium-toolchain/helium-serialization/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1153716977
[Removal]: Please help me remove from the rejection list, thank you, hard work! Hotspot Name Suave Berry Koala Hotspot b58 Address 11UjvWDpGHk9NZsuhJV4ReEg9H81mKC6cCzNqxfNP44eXoUpbiK Discord Handle No response Hotspot Manufacturer Bobcat Removal Reason My equipment was blacklisted for no reason. This happened when I moved and reinstalled it. The installation was carried out in accordance with the official requirements. Modifications No Extra forwarders No Extra antennas No Additional Information No response We have taken into account this removal request in the next iteration of the denylist. It may take some time before it's processed. This is an automated comment due to the volume of addition and removal requests. A future tool is in the works to provide more insight into the analysis approach made by the Helium team and community members that maintain this list.
gharchive/issue
2022-02-28T06:44:09
2025-04-01T04:34:29.070340
{ "authors": [ "Eclosionstars", "abhay" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/1732", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1164858044
[Removal]: Generous Metal Tiger [Previous #1503, previous #75] Hotspot Name Generous Metal Tiger Hotspot b58 Address 1121Y5ZAAc7bPacoQTtv33sHJTuSTyVJ8TVbbbGaJCdiGXwgbm14 Discord Handle LaRisposta4#3182 Hotspot Manufacturer Bobcat Removal Reason Again re-added without any reason for the third time...please check my evidence Modifications No Extra forwarders No Extra antennas No Additional Information I've just checked the new pull request that will be merged in the next few hours and my hs is re added again!!! Unbelievable guys. I think there is an issue because you have reviewed my case and removed from denylist two times!!!!!! https://github.com/helium/denylist/issues/75 https://github.com/helium/denylist/issues/1503 Please check your automatism!!!! I'm wasting time and money!!!! below some further information that may and hope helps to be deleted from that list: the location of the hotspot indicated is very accurated; the elevation from the ground also, maybe 1 or 2 meters of difference but i can't measure that in other way; the antenna declared is right, it the default antenna of bobcat miner; i had to relocated the hotspot in november after bought it from another user (19th November 2021 as log file can confirm) but after that i didn't do any changes or something; i believe in this project and absolutely i don't gaming with my hotspot or cheating; Please check The log and The stats/info since The miner was bought by me and not from The previous owner. Please let me know if you need further information and what i have to do in order to fix this. so sorry if i did mistakes. hope you understand. thanks again best regards Attached all the transaction relative of my account that give the evidence that I am not cheating!!!! helium-13AZjSc3JAsT38yyzkx6BkcK6DmC7pNjdrZsa3cBcKxfLQDirkz-all-raw.3.csv For what it's worth... I had a look at your hotspot and it doesn't look suspicious at all. For what it's worth... I had a look at your hotspot and it doesn't look suspicious at all. Thanks for your feedback, means a lot...i'm trying with all I have to give evidence that I am legit...this is the third time that my hs is included in that list :( i am really sad and desperate about that...
gharchive/issue
2022-03-10T07:35:27
2025-04-01T04:34:29.077316
{ "authors": [ "LaRisposta4", "malder2508" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/2259", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1118787409
[Addition]: Hotspot b58 Addresses 11pSymViAm83aWAcR92gXm2tdenBsjux2Zjo93DrYuxWchjSZDv Discord Handle Jack K Reason(s) Hotspot does not witness anything outside of it's bubble despite other legit-apperaing hotspots in it's area. There are dozens/hundreds more in this area. Thank you for your submission. This has been reviewed and may be considered for a future change to the denylist. If any change will be made, it'll likely be in the next 1-2 weeks assuming (1) it is proposed to be added, (2) denylist signers agree on the change, and (3) the release is published to all Hotspots. Please do not open additional issues regarding these Hotspots.
gharchive/issue
2022-01-30T23:51:31
2025-04-01T04:34:29.079211
{ "authors": [ "HowardScottWarshaw", "abhay" ], "repo": "helium/denylist", "url": "https://github.com/helium/denylist/issues/90", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
630549169
LIGNE_DEVIEE has no expectedArrivalTime

{
  "points": [
    {
      "passingTimes": [
        {
          "destination": { "fr": "GARE DU MIDI", "nl": "ZUIDSTATION" },
          "expectedArrivalTime": "2020-06-03T07:34:00+02:00",
          "lineId": "50"
        },
        {
          "destination": { "fr": "FOREST (BERVOETS)", "nl": "VORST (BERVOETS)" },
          "lineId": "54",
          "message": { "fr": "LIGNE D\xc3\x89VI\xc3\x89E", "nl": "LIJN OMGELEID" }
        }
      ]
    }
  ]
}

fixed in 1.3.0. Now if no expectedArrivalTime is given in the JSON, there will be no expectedArrivalTime/arriving_in.min/sec attribute in the passage object
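For downstream users, a minimal sketch of how a consumer of this data can cope with the optional field once the fix is in place. This is illustrative Python working directly on the raw JSON shown above, not a description of the pystibmivb API itself; the helper name and the fallback text are my own assumptions:

```python
from datetime import datetime

def describe_passing_time(passing_time: dict) -> str:
    # expectedArrivalTime can be missing when a line is diverted (LIGNE DEVIEE),
    # so never assume the key exists.
    arrival = passing_time.get("expectedArrivalTime")
    if arrival is not None:
        return datetime.fromisoformat(arrival).strftime("%H:%M")
    message = passing_time.get("message", {})
    return message.get("fr") or "no arrival time available"
```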
gharchive/issue
2020-06-04T06:32:46
2025-04-01T04:34:29.093148
{ "authors": [ "helldog136" ], "repo": "helldog136/pystibmivb", "url": "https://github.com/helldog136/pystibmivb/issues/8", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
185188285
Add support for custom_fields editor and required fields
Currently the python SDK does not support 'custom_fields': '[{"name":"newline", "value":"$20,000", "editor":"Client", "required":true}]', i.e. the editor and required fields for custom_fields. Please add this support. The python SDK will work for templates and custom fields, but not in the case where you want to assign an editor or make a field required.
Workaround:

import requests

apikey = 'YOUR_API_KEY'  # assumption: the original snippet did not define apikey, so it is set here
buildTheRequest = 'https://' + apikey + ':@api.hellosign.com/v3/signature_request/create_embedded_with_template'
data = {
    'client_id': 'YOUR_CLIENT_ID',
    'template_id': 'YOUR_TEMPLATE_ID',
    'subject': 'ticket214090',
    'message': 'ticket214090',
    'signers[Client][name]': 'George',
    'signers[Client][email_address]': 'YOUR_EMAIL',
    'custom_fields': '[{"name":"newline", "value":"$20,000", "editor":"Client", "required":true}]',
    'test_mode': '1'
}
print(buildTheRequest)
r = requests.post(buildTheRequest, data)
print(r.text)

In order to understand this SDK issue one must be able to create a template and set up custom fields. Custom fields are text boxes that have the value 'ME WHEN SENDING', and then in the code, when you call the signature request, you should be able to pass the value to the custom field.
To understand this issue first go to the documentation link here: https://app.hellosign.com/api/reference#send_with_template and search that page for the parameter "custom_fields". Notice that there are four things that can be set by custom fields and that it is documented as being passed as a JSON array. If you look at the curl example it is working more as designed:
-F 'custom_fields=[{"name":"Cost", "value":"$20,000", "editor":"Client", "required":true}]'
name: the name, or "Field Label," of the custom field (the field's API ID can be used here as well)
value: the value of the custom field
editor: the RoleName allowed to edit the custom field (optional, but required if 'required' is defined)
required: a boolean describing if this field is required (default: false)
Notice that currently the PYTHON SDK works like this with a custom field:
custom_fields=[{ 'Cost': '$20,000' }]
which is basically saying, pass the value of $20,000 to the custom field named Cost.
DESIRED RESULTS
Should be able to pass this instead:
[{"name":"Cost", "value":"$20,000", "editor":"Client", "required":true}]
Also, if there are multiple custom fields, this should be supported:
[{"name":"Cost", "value":"$20,000", "editor":"Client", "required":true}, {"name":"fName", "value":"Sally", "editor":"Client", "required":true}]
Ref:
New PR created with code change and unit test completed: PR #69
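If it helps anyone hitting this before the SDK change lands, here is a small variation on the workaround above that builds the custom_fields payload with json.dumps instead of a hand-written string and shows multiple fields. Illustrative only; the field names, client/template IDs and credentials are placeholders, and the endpoint is the one from the workaround above:

```python
import json
import requests

api_key = "YOUR_API_KEY"  # placeholder
url = "https://api.hellosign.com/v3/signature_request/create_embedded_with_template"

custom_fields = [
    {"name": "Cost", "value": "$20,000", "editor": "Client", "required": True},
    {"name": "fName", "value": "Sally", "editor": "Client", "required": True},
]

data = {
    "client_id": "YOUR_CLIENT_ID",
    "template_id": "YOUR_TEMPLATE_ID",
    "signers[Client][name]": "George",
    "signers[Client][email_address]": "you@example.com",
    # Serialize to the JSON array format the API documents for custom_fields.
    "custom_fields": json.dumps(custom_fields),
    "test_mode": "1",
}

r = requests.post(url, data=data, auth=(api_key, ""))
print(r.status_code, r.text)
```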
gharchive/issue
2016-10-25T17:53:00
2025-04-01T04:34:29.104175
{ "authors": [ "alexmac05", "oconnor-sn" ], "repo": "hellosign/hellosign-python-sdk", "url": "https://github.com/hellosign/hellosign-python-sdk/issues/21", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
391738032
[stable/jaeger-operator] bump operator to 1.8.2 and add recommended label
What this PR does / why we need it:
add recommended label
bump jaeger operator to 1.8.2
Checklist
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
/test pull-charts-e2e
/lgtm
gharchive/pull-request
2018-12-17T14:31:32
2025-04-01T04:34:29.108370
{ "authors": [ "cpanato", "davidkarlsen" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/10064", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
519930126
[stable/prometheus-operator] Introduce crds directory for compatibility with Helm v3
What this PR does / why we need it:
This adds a crds directory which has all 5 CRDs
As files from the crds directory need to be plain YAMLs, keeps the app: prometheus-operator label and removes all the templating
With this change .Values.prometheusOperator.createCustomResource will be ignored by Helm v3 as it handles the CRDs differently. Which means, if the CRDs are already present in the cluster then it won't touch them at all
Helm v3 ignores the CustomResourceDefinition objects which are present in the templates directory. (At least I wasn't able to find any CRD YAML when I ran helm install ... --debug)
Update the README.md
Update the chart version
Special notes for your reviewer:
This PR does not do the complete migration of this chart to v2 apiVersion of Chart.yaml. This adds changes which ensure that the current chart is installable with Helm v3 as well.
Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
[X] DCO signed
[X] Chart Version bumped
[X] Variables are documented in the README.md
[X] Title of the PR starts with chart name (e.g. [stable/mychartname])
/ok-to-test
I wonder if we can just load the files from the directory and apply them using a list. Not sure if that would break the CRD install hooks checking for CRDs being established in Helm 2 or break upgrading the chart. Would you be interested in checking this out?
Yeah, that's an interesting idea. Thanks, let me try that out.
I deleted the crd-*.yaml files from templates/prometheus-operator and created a crds.yaml in the same directory. Also added the crd-install hook to files from the crds directory. Helm v3 does not take any special action on the YAML if we have the crd-install annotation.
With List approach: (creating the List object out of files from the crds directory at top level)

# templates/prometheus-operator/crds.yaml
apiVersion: v1
kind: List
items:
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
  - {{ $.Files.Get $path | nindent 2 }}
{{- end }}

Helm v2: fails to install as well as upgrade an existing chart. Basically it does not consider the crd-install hook from the items: section of the List object.
Helm v3: fails with the error that the CRDs already exist. Reason is same as above. Does not consider the hook which prevents it from ignoring those objects.
With separate document approach: (creating one file with all the CRD objects separated by ---)

# templates/prometheus-operator/crds.yaml
{{- range $path, $bytes := .Files.Glob "crds/*.yaml" }}
{{ $.Files.Get $path }}
---
{{- end }}

Helm v2: new installation works, CRDs are installed first. Upgrade from previous installation works as well.
Helm v3: install as well as upgrade works just fine. Objects from crds.yaml are ignored as they have the crd-install hook in them.
@vsliouniaev what do you think, should we go with the second approach?
@bhavin192 thanks for checking this one out. I think that option #2 sounds like it should be the best option.
@vsliouniaev I made the changes according to the second option, please take a look whenever you are around :)
/lgtm
Looks like it's working!
gharchive/pull-request
2019-11-08T10:18:18
2025-04-01T04:34:29.118458
{ "authors": [ "bhavin192", "vsliouniaev" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/18721", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
525072061
[stable/airflow] fix typo in readme for session affinity
Is this a new chart
No
What this PR does / why we need it:
Fixes a typo in the readme
Which issue this PR fixes
None
Special notes for your reviewer:
Checklist
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
[x] Title of the PR starts with chart name (e.g. [stable/mychartname])
/assign
/ok-to-test
/lgtm
gharchive/pull-request
2019-11-19T15:12:29
2025-04-01T04:34:29.122077
{ "authors": [ "davidkarlsen", "stijndehaes" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/18996", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
586242989
[stable/minecraft] One comment line: you can put a URL to download an FTB package
[stable/minecraft] Modify one comment line in stable/minecraft/values.yaml. You can put a URL to download an FTB package
Signed-off-by: Dimitri Leurs dimitri.leurs@gmail.com
Is this a new chart
No.
What this PR does / why we need it:
This pull request adds one line of additional information (only one comment line) into stable/minecraft/values.yaml that will hopefully save developers' time
Special notes for your reviewer:
Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
[x] DCO signed
[x] Chart Version bumped
[x] Variables are documented in the README.md
[x] Title of the PR starts with chart name (e.g. [stable/mychartname])
/ok-to-test
@dleurs can you please sign the DCO for the pull request checks to be happy?
/lgtm
gharchive/pull-request
2020-03-23T14:16:07
2025-04-01T04:34:29.126431
{ "authors": [ "billimek", "dleurs" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/21576", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
370233423
[stable/suitecrm] Update Notes, unify volumes and stop using sessionAffinity
Signed-off-by: tompizmor tompizmor@gmail.com
What this PR does / why we need it:
The upgrade will not work correctly if we do not set the db root password as well
Unifies the Apache and SuiteCRM data under the same volume using subPath.
Stop using sessionAffinity as the issue it wanted to solve has already been fixed in the image itself. EKS is not able to provide a loadBalancer if the service has sessionAffinity configured.
Checklist [Place an '[x]' (no spaces) in all applicable fields. Please remove unrelated fields.]
[X] DCO signed
[X] Chart Version bumped
[X] Variables are documented in the README.md
/ok-to-test
/lgtm
/ok-to-test
/lgtm
gharchive/pull-request
2018-10-15T16:02:39
2025-04-01T04:34:29.130648
{ "authors": [ "carrodher", "juan131", "tompizmor" ], "repo": "helm/charts", "url": "https://github.com/helm/charts/pull/8474", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
120844018
helm uninstall warns about everything when a Namespace is involved In testing the deis v2 chart, I see lots of warnings on helm uninstall. I think this is because the Namespace was deleted first, which effectively deletes all the other objects, so they're already gone by the time helm tries to delete them. This is benign, but annoying. Maybe helm should reorder its delete strategy? $ helm uninstall deis Service/deis-builder Service/deis-database Service/deis-etcd-discovery Service/deis-etcd-1 Service/deis-minio Service/deis-registry Service/deis-workflow ReplicationController/deis-builder ReplicationController/deis-database ReplicationController/deis-etcd-discovery ReplicationController/deis-etcd-1 ReplicationController/deis-minio ReplicationController/deis-registry ReplicationController/deis-router ReplicationController/deis-workflow Secret/deis-etcd-discovery-token Secret/minio-admin Secret/minio-ssl Secret/minio-user ServiceAccount/deis Namespace/deis Uninstall the listed objects? (y/N) y ---> Running `kubectl delete` ... [WARN] Could not delete Service deis-builder (Skipping): Error from server: services "deis-builder" not found : exit status 1 [WARN] Could not delete Service deis-database (Skipping): Error from server: services "deis-database" not found : exit status 1 [WARN] Could not delete Service deis-etcd-discovery (Skipping): Error from server: services "deis-etcd-discovery" not found : exit status 1 [WARN] Could not delete Service deis-etcd-1 (Skipping): Error from server: services "deis-etcd-1" not found : exit status 1 [WARN] Could not delete Service deis-minio (Skipping): Error from server: services "deis-minio" not found : exit status 1 [WARN] Could not delete Service deis-registry (Skipping): Error from server: services "deis-registry" not found : exit status 1 [WARN] Could not delete Service deis-workflow (Skipping): Error from server: services "deis-workflow" not found : exit status 1 [WARN] Could not delete ReplicationController deis-builder (Skipping): Error from server: replicationControllers "deis-builder" not found : exit status 1 [WARN] Could not delete ReplicationController deis-database (Skipping): Error from server: replicationControllers "deis-database" not found : exit status 1 [WARN] Could not delete ReplicationController deis-etcd-discovery (Skipping): Error from server: replicationControllers "deis-etcd-discovery" not found : exit status 1 [WARN] Could not delete ReplicationController deis-etcd-1 (Skipping): Error from server: replicationControllers "deis-etcd-1" not found : exit status 1 [WARN] Could not delete ReplicationController deis-minio (Skipping): Error from server: replicationControllers "deis-minio" not found : exit status 1 [WARN] Could not delete ReplicationController deis-registry (Skipping): Error from server: replicationControllers "deis-registry" not found : exit status 1 [WARN] Could not delete ReplicationController deis-router (Skipping): Error from server: replicationControllers "deis-router" not found : exit status 1 [WARN] Could not delete ReplicationController deis-workflow (Skipping): Error from server: replicationControllers "deis-workflow" not found : exit status 1 [WARN] Could not delete Secret deis-etcd-discovery-token (Skipping): Error from server: secrets "deis-etcd-discovery-token" not found : exit status 1 [WARN] Could not delete Secret minio-admin (Skipping): Error from server: secrets "minio-admin" not found : exit status 1 [WARN] Could not delete Secret minio-ssl (Skipping): Error from server: secrets "minio-ssl" not found 
: exit status 1 [WARN] Could not delete Secret minio-user (Skipping): Error from server: secrets "minio-user" not found : exit status 1 [WARN] Could not delete ServiceAccount deis (Skipping): Error from server: serviceaccounts "deis" not found : exit status 1 ---> Done same warnings here. maybe a change to UninstallOrder would do the trick? https://github.com/helm/helm/blob/master/action/uninstall.go#L115 Namespace is deleted last. https://github.com/helm/helm/blob/master/action/install.go#L26 But we might be hitting a race condition. hmm, I got this error for the first time after a helm uninstall deis: ENG000656:micro-kube aaronschlesinger$ helm install deis/deis ---> Running `kubectl create -f` ... ---> Data: { "apiVersion": "v1", "kind": "Namespace", "metadata": { "annotations": { "chart.helm.sh/description": "For testing only!", "chart.helm.sh/file": "/Users/aaronschlesinger/.helm/workspace/charts/deis/manifests/deis-namespace.yaml", "chart.helm.sh/name": "deis", "chart.helm.sh/version": "2.0.0-pre-alpha" }, "labels": { "heritage": "deis" }, "name": "deis" } } Error from server: error when creating "STDIN": namespaces "deis" already exists [ERROR] Failed to upload manifests: exit status 1 This has been fixed--helmc deletes the namespace last, and also requires a --namespace | -n flag with uninstall.
gharchive/issue
2015-12-07T19:05:16
2025-04-01T04:34:29.136026
{ "authors": [ "arschles", "mboersma", "technosophos" ], "repo": "helm/helm", "url": "https://github.com/helm/helm/issues/320", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
525210912
v3: Prettify Updated time in helm list|history Fixes https://github.com/helm/helm/issues/6972. We used time.ANSIC formatter in v2. This PR makes v3 use same formatter as default for helm ls commands. Current behavior ❯ helm ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION hello-world-1574188666 default 1 2019-11-19 19:37:46.993599 +0100 CET deployed hello-world-0.1.0 1.16.0 I believe the UPDATED time is in the Kubernetes cluster TZ Changed behavior ❯ helm ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION hello-world-1574188666 default 1 Tue Nov 19 19:37:46 2019 deployed hello-world-0.1.0 1.16.0 This is prettified time in user local TZ Note This is a preliminary PR to discuss on the issue. I think the ANSIC formatter is user friendly than the longer formatter. Accuracy up to seconds would be good enough. If we think TZ is needed, we could provide additional flag --date-utc. Thoughts? I insist that timezone information must be present when showing time. It is very important. Or else the user is just confused or might assume wrong timezone or be ignorant of the timezone factor which can cause issues I don't want the user to even have a doubt about the timezone. So even if the time is in their timezone, I say show it to them Interesting, I wouldn't have expected that @karuppiah7890, but I suppose it's because I only have experience working with co-located teams where timezone rarely enters into the conversation. Additionally, in my case, when timezone does come up, it's not too complicated because we tend to simply add/ subtract whole hours instead of dealing with half hours which I presume makes the situation even more complex for you. Would having a switch + an env variable HELM_DATE_UTC improve the helm list usability enough for you? For a long time I've had TILLER_NAMESPACE as a config value for working with helm and would have hated having to type the --tiller-namespace switch out every time I just wanted to take a glance at my deployments, for instance. Great work moving this issue forward, btw @jan25! Just a FYI - helm v2 used no TZ to show for helm ls, and I suppose it used tiller local time. Also its prettified in same format as this PR. I think we can keep this format and clarify in helm ls help saying UPDATED time is in users TZ. To view time in UTC, we could provide --time-utc flag as i mentioned. I actually agree and see the fact that lots of software work this way - showing users the time in their timezone, and without TZ info, that is no T or Zs and +0530 +0100 etc. For example - I have seen it in slack, grafana too. But when I first used grafana, I got this doubt - "Hmm. Wait. Am I looking at the right time? Is it my timezone ? Or UTC? Or the grafana server time zone? or what?" . So, I've got these points from my side: Helm deployment time accuracy at the level of split second is not needed. Just seconds or even minutes level accuracy is good enough. If people want more accuracy about the deployment, I think checking the k8s resource - for example pod creation time and other timestamps would help, better compared to the Helm release time. As Helm release time is just more regarding Helm. Better accuracy can be obtained only at k8s resource level timestamp data, which may be different from the Helm release time Showing time in a pretty manner is good to the eye. With helm 3's current output, my brain would do some processing (is it just me? idk) to read it something like Tue Nov 19 19:37:46 2019. 
I feel that directly giving that kind of output is good enough. But it's still subjective, since "pretty", "better UX" are all subjective. Only data talks better, for which we need user's feedback. Considering we are doing all this in kubernetes environment, I checked the pod yaml data in kubernetes, for some inspiration. In all the date time yaml fields, it shows timezone information, and when kubectl cli is used, it shows relative time, for example pod started running 4 hours ago. I would take that as inspiration and say either show Helm release time in relative time, or show timezone information along with date time as shown in kubernetes resource yaml fields. For example creationTimestamp is one such field Also, I'm not going to look too much into Helm v2. Helm v3 is a breaking release, and is the way forward, so new decisions can come in if they seem good. Also, the fact that you say I suppose it used tiller local time shows the confusion a user has. Even if it was user's time, you wouldn't know that. In this case, since you are working on a PR, you would know about the Helm 3 behavior. I agree docs can solve the problem, but intuitiveness is also a good thing. People can't be checking the doc for each and every simple thing. If I just show them the timezone information along with the date time, then that's awesome. Or we can see other ways to show time. Like how I mentioned above - taking inspiration from kubectl @karuppiah7890 Thanks for your insights! I don't have strong opinion toward either case - keeping TZ or no TZ. So i can include TZ for now in the default output. Let me push that fix when i get a chance. 2019-11-19 19:37:46 +0100 CET this'll be the format. would you mind looking into the failing tests? I'm surprised the tests didn't fail for 'new v3 formatter' introduced before this PR: 2019-11-19 19:37:46.993599 +0100 CET I noticed the history command has --output json|yaml support. This PR won't change the time fields in those outputs. We could create a separate issue for those? Additionally we must support those output formats for list command @hickeyma could you review this when you get a chance? Updated as suggested. Because time.go is thought of as wrapper around stdlib Time i put func Format in a different file. But semantically both ways are the same though, so moved it to helm/time.go I noticed the history command has --output json|yaml support. This PR won't change the time fields in those outputs. The time format used for the JSON AND YAML time output is ISO_8601: $ helm history mydemo -o yaml - app_version: 1.16.0 chart: mychart-0.1.0 description: Install complete revision: 1 status: deployed updated: "2020-03-03T17:52:40.769361223Z" $ helm history mydemo -o json [{"revision":1,"updated":"2020-03-03T17:52:40.769361223Z","status":"deployed","chart":"mychart-0.1.0","app_version":"1.16.0","description":"Install complete"}] I think that this is a good format if these outputs are to be processed programatically afterwards. It would be good to get the opinions of other @helm/helm-core-maintainers on this also. @jan25 I am going to hold back on merging as I would like to get other @helm/helm-core-maintainers to take a look again. I will update the Helmsman community if/when it goes into a release. I am aiming for 3.2.0 which means it can be tested during the RC phase. @technosophos @mattfarina I would appreciate some 👀 before merging. I think you're missing the point of this issue. 
Helm never intended to change the formatting of it's output, it drifted as an unforeseen situation caused by the removal of tiller. I agree, when that happened, it should have been behind a switch, but again, that was a defect that no one detected until the originator of this issue spoke up and the investigation began. @TheNotary The problem is that this is a breaking change. There are tools that execute commands and parse the output. More than we are aware of. Changing the format, even if there is a "better" method, will be a breaking change to those tools. Helm follows Semantic Versioning which means we'd have to increment the major version for a breaking change like this. I realize that Kubernetes and some other tools may not follow Semantic Versioning. If I'm being honest, the breaking changes between minor versions of Kubernetes have caused problems for me. Helm aims to follow Semantic Versioning unless a security vulnerability requires a break. This is the kind of change that can happen behind a flag. Ok, you two are absolutely right, my mistake. helm list is an application programming interface, it's not a human interface. When people write software that consumes the outputs of CLIs, they do so under the assumption that this output will be reliable across all platforms and minor/patch versions of the tool. People relying on this code would probably not have the ability to avoid upgrades to their version of Helm either and would always be using the latest version released. Millions of hypothetical people would be adversely impacted if this issue were to be resolved and they would have no hope of recovering from the incident. We should probably schedule a couple more reviewers on this one, I feel as though 6 months isn't enough time for everyone to prepare their input. I'm wondering if maybe we should put this behind 2 feature flags, one being "--UNLOCK-DANGEROUS-FEATURESand another for the actual feature, like--USE-HUMAN-READABLE-LIST-OUTPUT-THAT-WILL-BREAK-CICD-AS-WE-KNOW-IT-WHY-ARE-YOU-DOING-THIS-WHYYYYYYYYYYYYYYYYYY`. Please be more respectful of other's opinions. I can understand your frustration, but there are more constructive ways to discuss topics. Both @technosophos and @mattfarina have a point. This pull request introduces a breaking change to the CLI output. Our semver contract quite clearly states we cannot introduce changes to the CLI output that would introduce a breaking change. For posterity: Command line commands, flags, and arguments MUST be backward compatible Based on previous comments in this thread, this would be considered a breaking change, and must be placed behind a feature flag. Something similar to the --output flag would be a good place to start. Perhaps --time-format, as then we can introduce other timestamp formats as more use cases evolve. I hope you guys don't take what I said personally, I think everyone's doing what that think is best and while you may have interpretations of this command output as different than mine, I think you're probably great people. I will leave it to @TheNotary to decide whether a switch is good, or whether this PR should be closed. Please remove me from this list ;) On April 9, 2020 7:56:06 PM CDT, Matt Butcher notifications@github.com wrote: I will leave it to @TheNotary to decide whether a switch is good, or whether this PR should be closed. -- You are receiving this because you commented. 
Reply to this email directly or view it on GitHub: https://github.com/helm/helm/pull/7024#issuecomment-611822607 -- Sent from my Android device with K-9 Mail. Please excuse my brevity. @IRobL This is a GitHub PR. To be removed you need to remove yourself. On the PR page there is a button to unsubscribe from notifications. @TheNotary When Ken Thompson wrote the Unix philosophy one of the things he noted was Expect the output of every program to become the input to another, as yet unknown, program. This happens a lot in CLI tools. For example, the output of Helm is used by tools. We have used it for that in the CI process for the charts repo. Helmsman uses it. There are two examples of this part of the Unix philosophy at work. The reason we don't make breaking changes easily is because we consider others and the impact on them. I found it hypocritical that the person demanding respect used the downvote emoji. I guess I'm not the only sarcastic one here, lol. @mattfarina I concede to the decision you've made there's no need to GH lawyer this to me any further; the PR must be reworked and then the maintainer's list should probably be pinged again if new commits come it so that it can be further checked off. Throughout this PR we have discussed/raised the issue of backward compatibility. I appreciate @jan25 patience as the PR has taken time because of this and trying to finding the best solution. One of the key requirement and a great attribute of Helm is to respect compatibility in Helm minor releases. It is something I have heard since joining the project. That is why when I approved this based on technical and usability grounds, I requested further reviews from maintainers as I was not 100%. Something lingered in my mind. I know we seemed to have addressed it in https://github.com/helm/helm/pull/7024#issuecomment-580822136 but still doubt remained. I am glad to get the feedback from @technosophos @mattfarina. 👍 @jan25 I hope this is ok with you and that you might be willing to put it behind a flag. Throughout this PR we have discussed/raised the issue of backward compatibility. I appreciate @jan25 patience as the PR has taken time because of this and trying to finding the best solution. One of the key requirement and a great attribute of Helm is to respect compatibility in Helm minor releases. It is something I have heard since joining the project. That is why when I approved this based on technical and usability grounds, I requested further reviews from maintainers as I was not 100%. Something lingered in my mind. I know we seemed to have addressed it in https://github.com/helm/helm/pull/7024#issuecomment-580822136 but still doubt remained. I am glad to get the feedback from @technosophos @mattfarina. 👍 @jan25 I hope this is ok with you and that you might be willing to put it behind a flag. Thanks. I'll investigate and add this behind a feature soon @jan25 Did you get a chance to look at this again? Just asking as we are thinking of cutting a 3.3.0 release soon. @hickeyma Yes, I have changes for list command which i pushed now. Could you review? If it looks good i'll go ahead and update the tests and history command too. Sorry, i almost forgot about pushing my changes to branch. @jan25 Its looks good to go. Just 2 things: Maybe change flag from pretty to tz-format and description as "time zone format" Can you sign the dco when committing: https://helm.sh/blog/helm-dco/index.html I added --format-time flag in list command. 
And, I undid the changes to the history command as it needs rework, because time is confusingly formatted across the different output formats. E.g. yaml and json use different time formatting from the table output. Probably a good idea to fix history in a different issue/PR. Thanks @jan25 for the effort and time you have put into this PR. Here is the output from the manual testing that I did. Before the PR: $ helm ls NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql default 1 2020-06-18 14:17:46.125134977 +0000 UTC deployed mysql-1.6.4 5.7.30 pr-7613-1 default 1 2020-06-18 11:15:32.298743117 +0000 UTC deployed pr-7613-0.1.0 1.16.0 tstocichrt default 1 2020-06-18 14:05:16.515958406 +0000 UTC deployed tstocichrt-0.1.0 1.16.0 With the PR: $ helm ls --format-time NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION mysql default 1 2020-06-18 14:17:46 +0000 UTC deployed mysql-1.6.4 5.7.30 pr-7613-1 default 1 2020-06-18 11:15:32 +0000 UTC deployed pr-7613-0.1.0 1.16.0 tstocichrt default 1 2020-06-18 14:05:16 +0000 UTC deployed tstocichrt-0.1.0 1.16.0 I appreciate that the PR shortens the time format by removing the milliseconds, but at this stage I am unsure of the overall gain of adding the flag. I think the bigger issue is that there are different formats for date-time in the output of Helm commands. I have raised #8352 to tackle this in Helm 4. /cc @helm/helm-core-maintainers What is your opinion on the PR? Should it go into Helm 3, or should we handle the format in Helm 4? I think it's fine for Helm 3 with a flag, but I'm wondering if other formats will be requested. Instead of --format-time as a boolean, how about a string used for the format, like --format-time "2006-01-02 15:04:05 -0700 MST"? The usual convention is to accept a format flag that takes control characters as input for each unit of measurement, %h for hours, %m for millis, etc. For example: --format-time "%y-%m-%d %h:%m:%s %z". 3.3.0-rc.1 is being cut tomorrow, so we're removing this from the milestone. Hi @jan25. It looks like there's consensus among the helm maintainers on next steps here for this PR. There are a few comments left in the review that need to be addressed or resolved. Would you mind going through those comments and addressing those, then let us know when this is ready for another review? Thanks again. Sorry for not getting back on this. I pushed the required changes. Now it is good to review; let me know if something is missing. The issue Josh called out has been fixed. Going to merge this for the 3.4 release. Also, Martin's comments were addressed and we discussed as Taylor had requested. Should be good to go.
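To make the flag discussion above concrete, here is a minimal Go sketch of the idea being proposed: a user-supplied time layout for the UPDATED column. This is not Helm's implementation; the flag name and the default layout are assumptions made for the sketch.

package main

import (
    "flag"
    "fmt"
    "time"
)

func main() {
    // Hypothetical flag: a Go reference layout chosen by the caller, e.g.
    //   --time-format "2006-01-02 15:04:05 -0700 MST"
    layout := flag.String("time-format", "2006-01-02 15:04:05.999999999 -0700 MST", "layout used to render release timestamps")
    flag.Parse()

    // A release timestamp similar to the ones in the output above.
    updated := time.Date(2020, 6, 18, 14, 17, 46, 125134977, time.UTC)

    // With the default layout the full-precision value is printed, so existing
    // consumers of the output are unaffected; passing a shorter layout opts
    // into a human-friendly form.
    fmt.Println(updated.Format(*layout))
}

With no flag this prints the familiar long form; with the shorter layout quoted in the thread it drops the sub-second digits, which is exactly the behaviour the maintainers wanted to keep opt-in.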
gharchive/pull-request
2019-11-19T19:18:38
2025-04-01T04:34:29.171354
{ "authors": [ "IRobL", "TheNotary", "bacongobbler", "hickeyma", "jan25", "jdolitsky", "karuppiah7890", "mattfarina", "technosophos" ], "repo": "helm/helm", "url": "https://github.com/helm/helm/pull/7024", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
797415996
Add typescript support #17 + tsd test Thanks! Thanks!
gharchive/pull-request
2021-01-30T13:16:40
2025-04-01T04:34:29.183944
{ "authors": [ "StarpTech", "erfanium" ], "repo": "hemerajs/fastify-graceful-shutdown", "url": "https://github.com/hemerajs/fastify-graceful-shutdown/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
169510522
Error: Module version mismatch on "npm run dev" Trying to run a clone of felony on my Macbook. No logged issues with the webpack bundle or server but this message keeps appearing after the developer console appears with the app. Node version: 6.2.2 NPM version: 3.9.5 Logs after running command: ` felony@0.10.3 dev /Users/Daniel/Desktop/Github/Forks/felony concurrently --kill-others "npm run hot-server" "npm run start-hot" [0] [0] > felony@0.10.3 hot-server /Users/Daniel/Desktop/Github/Forks/felony [0] > node -r babel-register server.js [0] [1] [1] > felony@0.10.3 start-hot /Users/Daniel/Desktop/Github/Forks/felony [1] > cross-env HOT=1 NODE_ENV=development electron -r babel-register ./main.development [1] [0] Listening at http://localhost:3000 [0] webpack: wait until bundle finished: /dist/bundle.js [1] 2016-08-05 09:15:51.297 Electron Helper[76876:155148] Couldn't set selectedTextBackgroundColor from default () [0] webpack built 7290703c24af0fb635fb in 3772ms [0] Hash: 7290703c24af0fb635fb [0] Version: webpack 1.13.1 [0] Time: 3772ms [0] Asset Size Chunks Chunk Names [0] bundle.js 1.05 MB 0 [emitted] main [0] chunk {0} bundle.js (main) 462 kB [rendered] [0] [0] multi main 52 bytes {0} [built] [0] [1] external "babel-polyfill" 42 bytes {0} [not cacheable] [0] [2] (webpack)-hot-middleware/client.js?path=http://localhost:3000/__webpack_hmr 4.49 kB {0} [built] [0] [3] (webpack)/buildin/module.js 251 bytes {0} [built] [0] [4] external "querystring" 42 bytes {0} [not cacheable] [0] [5] external "strip-ansi" 42 bytes {0} [not cacheable] [0] [6] (webpack)-hot-middleware/client-overlay.js 1.74 kB {0} [built] [0] [7] external "ansi-html" 42 bytes {0} [not cacheable] [0] [8] external "html-entities" 42 bytes {0} [not cacheable] [0] [9] (webpack)-hot-middleware/process-update.js 3.88 kB {0} [built] [0] [10] ./app/index.js 886 bytes {0} [built] [0] [11] external "react" 42 bytes {0} [not cacheable] [0] [12] external "react-dom" 42 bytes {0} [not cacheable] [0] [13] external "react-redux" 42 bytes {0} [not cacheable] [0] [14] external "redux" 42 bytes {0} [not cacheable] [0] [15] external "redux-thunk" 42 bytes {0} [not cacheable] [0] [16] ./app/reducers/index.js 590 bytes {0} [built] [0] [17] ./app/reducers/uiReducer.js 1.82 kB {0} [built] [0] [18] ./app/constants/UIConstants.js 645 bytes {0} [built] [0] [19] external "immutable" 42 bytes {0} [not cacheable] [0] [20] ./app/reducers/keychainReducer.js 2.24 kB {0} [built] [0] [21] ./app/constants/KeychainConstants.js 180 bytes {0} [built] [0] [22] ./app/components/Felony.js 4.94 kB {0} [built] [0] [23] ./~/normalize.css/normalize.css 867 bytes {0} [built] [0] [24] ./~/css-loader!./~/normalize.css/normalize.css 8.29 kB {0} [built] [0] [25] ./~/css-loader/lib/css-base.js 1.51 kB {0} [built] [0] [26] ./~/style-loader/addStyles.js 7.15 kB {0} [built] [0] [27] external "reactcss" 42 bytes {0} [not cacheable] [0] [28] ./app/assets/fonts/work-sans/WorkSans.css 952 bytes {0} [built] [0] [29] ./~/css-loader!./app/assets/fonts/work-sans/WorkSans.css 644 bytes {0} [built] [0] [30] ./app/assets/fonts/work-sans/WorkSans-Light.woff2 66 kB {0} [built] [0] [31] ./app/assets/fonts/work-sans/WorkSans-Regular.woff2 64 kB {0} [built] [0] [32] ./app/assets/fonts/work-sans/WorkSans-Medium.woff2 68.6 kB {0} [built] [0] [33] ./app/assets/fonts/work-sans/WorkSans-Bold.woff2 69.7 kB {0} [built] [0] [34] ./app/assets/styles/felony.css 934 bytes {0} [built] [0] [35] ./~/css-loader!./app/assets/styles/felony.css 267 bytes {0} [built] [0] [36] ./app/assets/styles/spinner.css 937 
bytes {0} [built] [0] [37] ./~/css-loader!./app/assets/styles/spinner.css 974 bytes {0} [built] [0] [38] ./app/assets/styles/variables/colors.js 334 bytes {0} [built] [0] [39] ./app/containers/HeaderContainer.js 1.47 kB {0} [built] [0] [40] ./app/actions/index.js 669 bytes {0} [built] [0] [41] ./app/actions/KeychainActions.js 3.29 kB {0} [built] [0] [42] ./app/config/database.js 814 bytes {0} [built] [0] [43] external "lowdb" 42 bytes {0} [not cacheable] [0] [44] external "lowdb/file-async" 42 bytes {0} [not cacheable] [0] [45] external "path" 42 bytes {0} [not cacheable] [0] [46] external "os" 42 bytes {0} [not cacheable] [0] [47] external "underscore-db" 42 bytes {0} [not cacheable] [0] [48] ./app/actions/UIActions.js 1.74 kB {0} [built] [0] [49] ./app/components/header/Header.js 4.54 kB {0} [built] [0] [50] ./app/assets/styles/variables/utils.js 213 bytes {0} [built] [0] [51] ./app/components/alias/Alias.js 2.8 kB {0} [built] [0] [52] ./app/components/common/index.js 1.08 kB {0} [built] [0] [53] ./app/components/common/Avatar.js 2.63 kB {0} [built] [0] [54] ./app/components/common/Button.js 2.85 kB {0} [built] [0] [55] ./app/components/common/Icon.js 2.61 kB {0} [built] [0] [56] ./app/utils/icons.js 3.14 kB {0} [built] [0] [57] ./app/components/common/Overlay.js 3.08 kB {0} [built] [0] [58] ./app/components/common/User.js 4.16 kB {0} [built] [0] [59] ./app/utils/pgp.js 13.2 kB {0} [built] [0] [60] external "keytar" 42 bytes {0} [not cacheable] [0] [61] external "openpgp" 42 bytes {0} [not cacheable] [0] [62] ./app/components/header/HeaderKeyStatus.js 4.53 kB {0} [built] [0] [63] ./app/components/header/HeaderKeyCopy.js 3.7 kB {0} [built] [0] [64] external "react-copy-to-clipboard" 42 bytes {0} [not cacheable] [0] [65] ./app/components/header/HeaderKeyStatusSpinner.js 2.69 kB {0} [built] [0] [66] ./app/components/header/HeaderKeyStatusTooltip.js 4.56 kB {0} [built] [0] [67] external "dynamics.js" 42 bytes {0} [not cacheable] [0] [68] external "lodash" 42 bytes {0} [not cacheable] [0] [69] ./app/containers/FloatingButtonContainer.js 834 bytes {0} [built] [0] [70] ./app/components/floating-button/FloatingButton.js 5.85 kB {0} [ [0] built] [0] [71] ./app/components/floating-button/FloatingButtonItem.js 5.29 kB {0} [built] [0] [72] ./app/components/floating-button/FloatingButtonItemLabel.js 3.47 kB {0} [built] [0] [73] ./app/containers/ComposerContainer.js 1.87 kB {0} [built] [0] [74] ./app/components/composer/Composer.js 4.36 kB {0} [built] [0] [75] ./app/components/composer/ComposerForm.js 13.4 kB {0} [built] [0] [76] ./app/components/composer/ComposerFormSubmit.js 3.88 kB {0} [built] [0] [77] ./app/components/composer/ComposerAliasForm.js 11.3 kB {0} [built] [0] [78] ./app/components/composer/ComposerAliasFormInput.js 4.15 kB {0} [built] [0] [79] ./app/components/composer/ComposerAliasSuccess.js 4.73 kB {0} [built] [0] [80] ./app/containers/OutputContainer.js 1.27 kB {0} [built] [0] [81] ./app/components/output/Output.js 7.15 kB {0} [built] [0] [82] ./app/containers/KeychainContainer.js 1.62 kB {0} [built] [0] [83] ./app/components/keychain/Keychain.js 3.25 kB {0} [built] [0] [84] ./app/components/keychain/KeychainList.js 3.26 kB {0} [built] [0] [85] ./app/components/keychain/KeychainListItem.js 4.24 kB {0} [built] [0] [86] ./app/components/keychain/KeychainComposerOpener.js 4.46 kB {0} [built] [0] webpack: bundle is now VALID.` Error stack: ELECTRON_ASAR.js:167Uncaught Error: Module version mismatch. Expected 49, got 48. 
module.(anonymous function) @ ELECTRON_ASAR.js:167 Module._extensions..node @ module.js:568 module.(anonymous function) @ ELECTRON_ASAR.js:167 Module.load @ module.js:458 tryModuleLoad @ module.js:417 Module._load @ module.js:409 Module.require @ module.js:468 require @ internal/module.js:20 (anonymous function) @ keytar.js:4 (anonymous function) @ keytar.js:58 Module._compile @ module.js:541 Module._extensions..js @ module.js:550 Module.load @ module.js:458 tryModuleLoad @ module.js:417 Module._load @ module.js:409 Module.require @ module.js:468 require @ internal/module.js:20 (anonymous function) @ external "keytar"?c4e9:1 (anonymous function) @ bundle.js:950 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ pgp.js?b688:6 (anonymous function) @ bundle.js:944 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ Alias.js?5486:8 (anonymous function) @ bundle.js:896 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ Header.js?fa34:9 (anonymous function) @ bundle.js:884 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ HeaderContainer.js?e915:3 (anonymous function) @ bundle.js:824 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ Felony.js?48cf:12 (anonymous function) @ bundle.js:722 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ index.js?2018:9 (anonymous function) @ bundle.js:650 webpack_require @ bundle.js:557 fn @ bundle.js:88 (anonymous function) @ bundle.js:589 webpack_require @ bundle.js:557 (anonymous function) @ bundle.js:580 (anonymous function) @ bundle.js:583 Rebuild electron and your problem will be solved.
gharchive/issue
2016-08-05T01:32:46
2025-04-01T04:34:29.265246
{ "authors": [ "Arthelon", "frankcash" ], "repo": "henryboldi/felony", "url": "https://github.com/henryboldi/felony/issues/74", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
164677795
Added listener to enter to submit form, and validation to check that all fields are valid Fixes first part of this issue: #47 Wow, excellent PR! 💓
gharchive/pull-request
2016-07-09T17:51:47
2025-04-01T04:34:29.266904
{ "authors": [ "MartinIngesen", "henryboldi" ], "repo": "henryboldi/felony", "url": "https://github.com/henryboldi/felony/pull/58", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
469087729
[BUG] the createTabs method creates twice the number of tabs Which platform(s) does your issue occur on? [*] iOS [*] Android What type of device? [*] Emulator [*] Device √ Component nativescript has 5.4.2 version and is up to date. √ Component tns-core-modules has 5.4.3 version and is up to date. √ Component tns-android has 5.4.0 version and is up to date. When I'm using the createTabs method to create tabs programmatically it creates twice the number of tabs. Although I'm calling the createTabs method only once. onBottomNavigationLoaded(args) { if (!this.bottomMenu) { this.bottomMenu = args.object; const tabs = [ new BottomNavigationTab("First", "ic_home"), new BottomNavigationTab("Second", "ic_view_list") ]; this.bottomMenu.createTabs(tabs); } } It looks like the tabs are added first by calling the createTabs method and then it's called one more time by the tabsProperty.setNative after you set the tabs property. createTabs(tabs: BottomNavigationTab[]) { if (!this.tabs) { this.tabs = tabs; } const toUITabBarItem = (tab: BottomNavigationTab, tag: number) => UITabBarItem.alloc().initWithTitleImageTag(tab.title, fromResource(tab.icon).ios, tag); const bottomNavigationTabs: UITabBarItem[] = tabs.map(toUITabBarItem); this.nativeView.items = bottomNavigationTabs; this.nativeView.selectedItem = bottomNavigationTabs[this.selectedTabIndex]; } [tabsProperty.setNative](value: BottomNavigationTab[]) { this.createTabs(value); } @henrychavez I created a pull request #66 with the changes that I think it should fix this problem. I think you are calling the createTabs() function and also setting the tabs property in the component that's what is causing the issue, but I think the create tabs should clear the existing tabs and replaced with the last ones, and create another method like addTab() to add new tabs programmatically. can you please send me your HTML to see if you are setting the tabs property there too? <GridLayout height="100%" rows="*, auto" width="100%"> <page-router-outlet class="page page-content" left="0" tkMainContent top="0"></page-router-outlet> <BottomNavigation (tabSelected)="onBottomNavigationTabSelected($event)" [visibility]="menuVisible ? 'visible' : 'collapsed'" activeColor="#a1aa27" backgroundColor="white" (loaded)="onBottomNavigationLoaded($event)" id="bottomNavigation" inactiveColor="black" keyLineColor="black" row="1" row="1" titleVisibility="selected"> </BottomNavigation> </GridLayout> but I think the create tabs should clear the existing tabs and replaced with the last ones, and create another method like addTab() to add new tabs programmatically. By the way, for the Android implementation, I think it needs some improvements on bottom-navigation.android.ts AHBottomNavigation has a method addItems which you are not using it. Because you are using the addItem for each tab there are a lot of instructions which are executed for each tab instead executing only once after all the tabs were added. addItem method from AHBottomNavigation executes createItems() after each call. Please correct me if I'm wrong. To conclude this issue: On the Android platform, it's easily detectable because you are inserting tabs into the menu On iOS, you are replacing the array of items, so it's harder to notice the issue (but I'm sure that you are replacing them twice for no reason) The route of the cause: when you set the tabs if (!this.tabs) { this.tabs = tabs; } the createTabs method it's called one more time. 
but I think createTabs should clear the existing tabs and replace them with the new ones, and there should be another method like addTab() to add new tabs programmatically. As you said, it would be nice to be able to replace all the tabs when the createTabs method is called. It's useful when you want to change your menu after you have already created it. Do you accept PRs? Hi @Pandishpan, yes I accept PRs, but right now I'm working on a major update to remove AHBottomNavigation and use BottomNavigationView with support for AndroidX. I'll take your comments about this issue and include them in the next release, which should be ready this weekend. The reason for doing this is that the plugin will now be part of the nativescript-material components that will be included in the official NativeScript docs, as mentioned today in the launch of NS6. 😄 Fixed in version 2.0.0
gharchive/issue
2019-07-17T09:33:20
2025-04-01T04:34:29.277759
{ "authors": [ "Pandishpan", "henrychavez" ], "repo": "henrychavez/nativescript-bottom-navigation", "url": "https://github.com/henrychavez/nativescript-bottom-navigation/issues/65", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
299870554
[Broken Link] In Conformance Testing Docs "To customize the set of tests that will be run as part of the report, the following environmental variables can be set in the plugin-specific YAML config:" The plugin-specific YAML config link returns a 404 for https://github.com/heptio/sonobuoy/blob/master/plugins.d/e2e.tmpl I'm guessing it's supposed to point to https://github.com/heptio/sonobuoy/blob/master/examples/plugins.d/e2e.yaml, right? @christopherhein this is being completely refactored in v0.11.0, so the master branch right now is a WIP. Thanks @timothysc Fixed
gharchive/issue
2018-02-23T22:28:55
2025-04-01T04:34:29.288721
{ "authors": [ "christopherhein", "timothysc" ], "repo": "heptio/sonobuoy", "url": "https://github.com/heptio/sonobuoy/issues/276", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
308195228
Use config defaults with a custom config Start with sensible defaults instead of zero values. If a config is passed in it will merge into the default config instead of a zero-value config. Fixes #390 Signed-off-by: Chuck Ha chuck@heptio.com What this PR does / why we need it: This uses the default config (not the zero value config) as the base config. If no config is supplied, we end up with default values, if a config is supplied we end up with the default values overwritten by any supplied config fields. Which issue(s) this PR fixes Fixes #390 Special notes for your reviewer: We already have mergo in our dependency list! Release note: release-note Here are the logs: $ cat cfg.json |jq { "plugins": [ { "name": "systemd-logs" } ] } test salazar:sonobuoy cha$ go run main.go run --config cfg.json Running plugins: systemd-logs INFO[0027] created object name=heptio-sonobuoy namespace= resource=namespaces INFO[0028] created object name=sonobuoy-serviceaccount namespace=heptio-sonobuoy resource=serviceaccounts INFO[0029] created object name=sonobuoy-serviceaccount-heptio-sonobuoy namespace= resource=clusterrolebindings INFO[0030] created object name=sonobuoy-serviceaccount namespace= resource=clusterroles INFO[0031] created object name=sonobuoy-config-cm namespace=heptio-sonobuoy resource=configmaps INFO[0032] created object name=sonobuoy-plugins-cm namespace=heptio-sonobuoy resource=configmaps INFO[0033] created object name=sonobuoy namespace=heptio-sonobuoy resource=pods INFO[0033] created object name=sonobuoy-master namespace=heptio-sonobuoy resource=services test salazar:sonobuoy cha$ go run main.go logs -f namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-m5bhv" container="sonobuoy-worker" time="2018-03-23T22:27:20Z" level=info msg="Waiting for waitfile" waitfile=/tmp/results/done time="2018-03-23T22:27:21Z" level=info msg="Detected done file, transmitting result file" resultFile=/tmp/results/systemd_logs namespace="heptio-sonobuoy" pod="sonobuoy" container="kube-sonobuoy" time="2018-03-23T22:27:18Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)" time="2018-03-23T22:27:18Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)" time="2018-03-23T22:27:18Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist" time="2018-03-23T22:27:18Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)" time="2018-03-23T22:27:18Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist" time="2018-03-23T22:27:18Z" level=info msg="Filtering namespaces based on the following regex:.*|heptio-sonobuoy" time="2018-03-23T22:27:18Z" level=info msg="Namespace default Matched=true" time="2018-03-23T22:27:18Z" level=info msg="Namespace heptio-sonobuoy Matched=true" time="2018-03-23T22:27:18Z" level=info msg="Namespace kube-public Matched=true" time="2018-03-23T22:27:18Z" level=info msg="Namespace kube-system Matched=true" time="2018-03-23T22:27:18Z" level=info msg="Starting server Expected Results: [{ip-10-0-18-149.us-west-2.compute.internal systemd_logs} {ip-10-0-20-112.us-west-2.compute.internal systemd_logs} {ip-10-0-22-78.us-west-2.compute.internal systemd_logs}]" time="2018-03-23T22:27:18Z" level=info msg="starting aggregation server" address=0.0.0.0 port=8080 time="2018-03-23T22:27:18Z" level=info msg="Running plugin" plugin=systemd-logs time="2018-03-23T22:27:21Z" level=info msg="received aggregator request" client_cert=systemd-logs node=ip-10-0-20-112.us-west-2.compute.internal plugin_name=systemd_logs 
time="2018-03-23T22:27:21Z" level=info msg="received aggregator request" client_cert=systemd-logs node=ip-10-0-22-78.us-west-2.compute.internal plugin_name=systemd_logs time="2018-03-23T22:27:21Z" level=info msg="received aggregator request" client_cert=systemd-logs node=ip-10-0-18-149.us-west-2.compute.internal plugin_name=systemd_logs time="2018-03-23T22:27:21Z" level=info msg="Running non-ns query" time="2018-03-23T22:27:21Z" level=error msg="error querying CustomResourceDefinitions: the server could not find the requested resource" time="2018-03-23T22:27:21Z" level=info msg="Collecting Node Configuration and Health..." time="2018-03-23T22:27:21Z" level=info msg="Creating host results for ip-10-0-18-149.us-west-2.compute.internal under /tmp/sonobuoy/5e2de22a-87b1-40bf-97e0-66def9e9a045/hosts/ip-10-0-18-149.us-west-2.compute.internal\n" time="2018-03-23T22:27:21Z" level=info msg="Creating host results for ip-10-0-20-112.us-west-2.compute.internal under /tmp/sonobuoy/5e2de22a-87b1-40bf-97e0-66def9e9a045/hosts/ip-10-0-20-112.us-west-2.compute.internal\n" time="2018-03-23T22:27:21Z" level=info msg="Creating host results for ip-10-0-22-78.us-west-2.compute.internal under /tmp/sonobuoy/5e2de22a-87b1-40bf-97e0-66def9e9a045/hosts/ip-10-0-22-78.us-west-2.compute.internal\n" time="2018-03-23T22:27:21Z" level=error msg="error querying ThirdPartyResources: don't know how to handle non-namespaced resource ThirdPartyResources" time="2018-03-23T22:27:21Z" level=info msg="Running ns query (default)" time="2018-03-23T22:27:22Z" level=info msg="Collecting Pod Logs..." time="2018-03-23T22:27:22Z" level=error msg="error querying PodPresets: the server could not find the requested resource (get podpresets.settings.k8s.io)" time="2018-03-23T22:27:24Z" level=info msg="Running ns query (heptio-sonobuoy)" time="2018-03-23T22:27:25Z" level=info msg="Collecting Pod Logs..." time="2018-03-23T22:27:27Z" level=error msg="error querying PodPresets: the server could not find the requested resource (get podpresets.settings.k8s.io)" time="2018-03-23T22:27:28Z" level=info msg="Running ns query (kube-public)" time="2018-03-23T22:27:29Z" level=info msg="Collecting Pod Logs..." time="2018-03-23T22:27:29Z" level=error msg="error querying PodPresets: the server could not find the requested resource (get podpresets.settings.k8s.io)" time="2018-03-23T22:27:31Z" level=info msg="Running ns query (kube-system)" time="2018-03-23T22:27:32Z" level=info msg="Collecting Pod Logs..." 
namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-fphx5" container="sonobuoy-worker" time="2018-03-23T22:27:20Z" level=info msg="Waiting for waitfile" waitfile=/tmp/results/done time="2018-03-23T22:27:21Z" level=info msg="Detected done file, transmitting result file" resultFile=/tmp/results/systemd_logs namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-qj96m" container="sonobuoy-worker" time="2018-03-23T22:27:20Z" level=info msg="Waiting for waitfile" waitfile=/tmp/results/done time="2018-03-23T22:27:21Z" level=info msg="Detected done file, transmitting result file" resultFile=/tmp/results/systemd_logs namespace="heptio-sonobuoy" pod="sonobuoy" container="kube-sonobuoy" time="2018-03-23T22:28:38Z" level=error msg="error querying PodPresets: the server could not find the requested resource (get podpresets.settings.k8s.io)" namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-m5bhv" container="sonobuoy-worker" rpc error: code = Unknown desc = Error: No such container: df929664274ef4115efe5262f1f00f525e20e2c90a2427545ff976424558dc15namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-fphx5" container="sonobuoy-worker" rpc error: code = Unknown desc = Error: No such container: 5d7418834e264150ee71639b9176491ca830e6068abfd32d9a0ea62cadba1e69namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-qj96m" container="sonobuoy-worker" rpc error: code = Unknown desc = Error: No such container: 4880d63b7cc77659db5eddc67e6b750fd47f6a303b1b7408ae41fd80907d8a0bnamespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-m5bhv" container="sonobuoy-systemd-logs-config" rpc error: code = Unknown desc = Error: No such container: 31985d8a29e9a0386b78a4f883bce603545c754c880aa403bf2e0035169eab23namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-fphx5" container="sonobuoy-systemd-logs-config" rpc error: code = Unknown desc = Error: No such container: 171b1c531535817f69e0651037748f37ba75ac68b06a7c082f90631105ea3e17namespace="heptio-sonobuoy" pod="sonobuoy-systemd-logs-daemon-set-35cab9120c4e46c3-qj96m" container="sonobuoy-systemd-logs-config" rpc error: code = Unknown desc = Error: No such container: ea3fc98ce01dd87d2f3f64db95d3943120f39d621f4dff2d284484806b839175namespace="heptio-sonobuoy" pod="sonobuoy" container="kube-sonobuoy" time="2018-03-23T22:28:43Z" level=info msg="Results available at /tmp/sonobuoy/201803232227_sonobuoy_5e2de22a-87b1-40bf-97e0-66def9e9a045.tar.gz" time="2018-03-23T22:28:43Z" level=info msg="no-exit was specified, sonobuoy is now blocking" also of note, retrieve works and there is data despite the errors in the logs. I'll file an issue to look into it. Adding tests, will push up another pr in a few. yay jetlag -_- (I closed this because I had discovered a bug while writing tests and didn't want an accidental merge 🙃)
gharchive/pull-request
2018-03-23T22:21:25
2025-04-01T04:34:29.296322
{ "authors": [ "chuckha" ], "repo": "heptio/sonobuoy", "url": "https://github.com/heptio/sonobuoy/pull/392", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
268261795
http sink receiving invalid sequence unparsable by rfc5424 I have a fairly simple http sink receiver side implementation below. Very frequently it's failing to parse the incoming sequence using rfc5424.Message.ReadFrom method. The error from ReadFrom method reads: strconv.Atoi: parsing "\n807": invalid syntax All of the source code: package main import ( "fmt" "io" "log" "net/http" "github.com/crewjam/rfc5424" ) func handler(w http.ResponseWriter, r *http.Request) { log.Printf("Method: %s", r.Method) if r.Body != nil { defer r.Body.Close() } m := new(rfc5424.Message) for { n, err := m.ReadFrom(r.Body) if err == io.EOF { break } else if err != nil { log.Fatalf("READERROR: %+v", err) } log.Printf("n=%d, host=%s, app=%s msg=%q", n, m.Hostname, m.AppName, string(m.Message)) } } func main() { log.Println("starting server!") http.HandleFunc("/", handler) log.Fatal(http.ListenAndServe(":8080", nil)) } It's fairly easy to replicate. I simply use this project's manifests to redeploy: kubectl delete -f ./yaml; kubectl apply -f ./yaml and it almost immediately triggers the error on the first message: 2017/10/25 03:22:31 starting server! 2017/10/25 03:22:45 Method: POST 2017/10/25 03:22:45 n=789, host=, app=default-scheduler msg="{\"verb\":\"ADDED\",\"event\":{\"metadata\":{\"name\":\"httpsink-395313959-234dm.14f0b1777bf8e722\",\"namespace\":\"default\",\"selfLink\":\"/api/v1/namespaces/default/events/httpsink-395313959-234dm.14f0b1777bf8e722\",\"uid\":\"fdb530dc-b931-11e7-96cb-2e6a5c70d9f0\",\"resourceVersion\":\"14958\",\"creationTimestamp\":\"2017-10-25T03:10:01Z\"},\"involvedObject\":{\"kind\":\"Pod\",\"namespace\":\"default\",\"name\":\"httpsink-395313959-234dm\",\"uid\":\"fdac2d9e-b931-11e7-96cb-2e6a5c70d9f0\",\"apiVersion\":\"v1\",\"resourceVersion\":\"14952\"},\"reason\":\"Scheduled\",\"message\":\"Successfully assigned httpsink-395313959-234dm to minikube\",\"source\":{\"component\":\"default-scheduler\"},\"firstTimestamp\":\"2017-10-25T03:10:01Z\",\"lastTimestamp\":\"2017-10-25T03:10:01Z\",\"count\":1,\"type\":\"Normal\"}}" 2017/10/25 03:22:45 READERROR: strconv.Atoi: parsing "\n807": invalid syntax Any ideas why this might be happening? It sometimes gets another sequence too: strconv.Atoi: parsing "\n788": invalid syntax I attached a simple implementation that reproes at #35. Once you run the container, expose it as Service and configure the httpSinkUrl, you can run kubectl run nginx --image=nginx and see it crash. It just occurred to me that (not sure why it took so long, I guess I thought \n is a form of \x... notation) this is because the parser is not expecting \n but the messages int the POST request is line-separated. (i.e. \n788 is new line, and the next message is length 788).
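Given the conclusion that the frames in the POST body are separated by a newline, one possible adjustment to the receiver above is to wrap the body in a bufio.Reader and skip the separator before each parse. This is only a sketch of that idea, assuming the sender really does place a bare '\n' (or '\r\n') between the octet-counted frames; it is not a tested fix.

package main

import (
    "bufio"
    "io"
    "log"
    "net/http"

    "github.com/crewjam/rfc5424"
)

func handler(w http.ResponseWriter, r *http.Request) {
    if r.Body != nil {
        defer r.Body.Close()
    }
    br := bufio.NewReader(r.Body)
    for {
        // Drop any line separator the sender puts between frames.
        if b, err := br.Peek(1); err == nil && (b[0] == '\n' || b[0] == '\r') {
            br.ReadByte()
            continue
        } else if err == io.EOF {
            break
        }
        m := new(rfc5424.Message)
        if _, err := m.ReadFrom(br); err == io.EOF {
            break
        } else if err != nil {
            log.Printf("read error: %v", err)
            break
        }
        log.Printf("host=%s app=%s msg=%q", m.Hostname, m.AppName, string(m.Message))
    }
}

func main() {
    log.Println("starting server!")
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}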
gharchive/issue
2017-10-25T04:17:48
2025-04-01T04:34:29.301447
{ "authors": [ "ahmetb" ], "repo": "heptiolabs/eventrouter", "url": "https://github.com/heptiolabs/eventrouter/issues/33", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
266200238
Add GCMaxPauseCheck for restarting an app experiencing long GC pauses. You can use this to aggressively restart an application (with safe draining) if the GC pause time starts to exceed some threshold. LGTM
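For context, a hedged usage sketch of the new check: this assumes it keeps a GCMaxPauseCheck(threshold time.Duration) signature and plugs into the handler like the library's other checks; treat the exact names as assumptions rather than the definitive API.

package main

import (
    "net/http"
    "time"

    "github.com/heptiolabs/healthcheck"
)

func main() {
    health := healthcheck.NewHandler()

    // Fail liveness once recent GC pauses exceed 10ms, so the orchestrator
    // restarts the process (with safe draining) instead of letting it limp along.
    health.AddLivenessCheck("gc-max-pause", healthcheck.GCMaxPauseCheck(10*time.Millisecond))

    http.ListenAndServe("0.0.0.0:8086", health)
}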
gharchive/pull-request
2017-10-17T16:45:55
2025-04-01T04:34:29.302531
{ "authors": [ "bryanl", "mattmoyer" ], "repo": "heptiolabs/healthcheck", "url": "https://github.com/heptiolabs/healthcheck/pull/11", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
957954982
Give 3D buildings that have 0m height data a random height between 2 values Hey awesome team, I'm trying to attribute buildings that have a 0m value for height with a height value of a random number between two values. I appreciate your help! hi @messiosmarios , thanks for the issue. You can replace buildings with no height with a single value, but unfortunately not a random value (see https://github.com/heremaps/harp.gl/blob/master/@here/harp-datasource-protocol/StyleExpressions.md for the list of available expressions, but we could definitely add this). In your style file, you need to go to the extruded-polygon technique and change the height to something like: "height": ["case", ["<=", ["get", "height"], 0], 1000, ["get", "height”]] (I set it to 1000 just so you can see it works, but you can change it to whatever value you want in meters). I.e. here is where the height value is defined: https://github.com/heremaps/harp.gl/blob/c579784fb7f75fafaf1e46440b6ce5ff10c2ee8d/%40here/harp-map-theme/resources/berlin_tilezen_base.json#L2254 you would need to change this to what I mentioned above. Does that help? Hey @nzjony thank you for that! It works perfectly. Just a small note, the last height word in the command uses the wrong quotation mark ("height”) -> "height"
gharchive/issue
2021-08-02T09:36:51
2025-04-01T04:34:29.312053
{ "authors": [ "messiosmarios", "nzjony" ], "repo": "heremaps/harp.gl", "url": "https://github.com/heremaps/harp.gl/issues/2254", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
321569883
Bundler: Apply heuristic to find VCS info to meta-data from rubygems It seems the response from the rubygems.org API sometimes contains different data from the gemspec file, even for the same version. One case where this occurs is "spork". Installing version 0.9.2 and running "gem specification spork" produces the URL to a GitHub repository that no longer exists, whereas the data from rubygems.org contains the correct URL. I'm not sure we actually want to apply heuristics here. For other package managers, we also do not try to deduce the VCS URL from the homepage URL, because the analyzer is supposed to reflect reality, and if reality is broken, it's broken (and we fix it via curations). @mnonnenmacher, what's your opinion? @sschuberth Good point. That's also why I was not too happy with having this function in the first place and having it only in one package manager. But if we do it for local gems, we should also do it for data from the API. As far as I understand, the gemspec has no "official" attribute for a VCS URL. There is a field for arbitrary metadata and the docs recommend putting a VCS URL there: http://guides.rubygems.org/specification-reference/#metadata I think if we apply heuristics it should only go to the vcsProcessed field, as this already contains deduced data. But if we guess the VCS URL from the homepage URL we should probably do this in a more generic way, as I have also seen NPM packages which have a homepage URL that points to GitHub but no VCS information set. Alright, I think we won't merge this as-is. The options are: 1. Remove the heuristic completely (this would probably result in the need for many Ruby curations). 2. Generalize the heuristic and apply it to many/all package managers. My vote goes to 1. If it turns out that too many Ruby packages need curations we can implement option 2 later. Closing this in favor of the above-mentioned option 2. Option 2 was implemented as part of https://github.com/heremaps/oss-review-toolkit/pull/548.
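For illustration only, the "generalize the heuristic" option boils down to mapping a homepage on a known code host to a VCS URL. ORT itself is written in Kotlin; the Go snippet below is not ORT code, just a sketch of the kind of mapping being discussed, with the host list and resulting URL shape as assumptions.

package main

import (
    "fmt"
    "net/url"
    "strings"
)

// vcsURLFromHomepage guesses a clone URL when the homepage points at a known
// code-hosting service, and reports whether a guess was possible.
func vcsURLFromHomepage(homepage string) (string, bool) {
    u, err := url.Parse(homepage)
    if err != nil {
        return "", false
    }
    host := strings.ToLower(u.Host)
    if host != "github.com" && host != "gitlab.com" && host != "bitbucket.org" {
        return "", false
    }
    // Keep only the "owner/repo" part of the path.
    parts := strings.Split(strings.Trim(u.Path, "/"), "/")
    if len(parts) < 2 {
        return "", false
    }
    return fmt.Sprintf("https://%s/%s/%s.git", host, parts[0], parts[1]), true
}

func main() {
    if vcs, ok := vcsURLFromHomepage("https://github.com/sporkrb/spork"); ok {
        fmt.Println(vcs) // https://github.com/sporkrb/spork.git
    }
}

As discussed in the thread, such a guess would belong in the processed VCS data rather than in the raw analyzer output.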
gharchive/pull-request
2018-05-09T13:27:11
2025-04-01T04:34:29.319131
{ "authors": [ "haikoschol", "mnonnenmacher", "sschuberth" ], "repo": "heremaps/oss-review-toolkit", "url": "https://github.com/heremaps/oss-review-toolkit/pull/537", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2668460637
SA VoNR docker won't start after setting up AMF ports I'm trying to use the sa-vonr-deploy branch with an external gNodeB (ORS68 from Rapidspace). Following the tutorial, I uncommented the lines below about ports on AMF and UPF: Afterwards, the docker launch ends with a parsing error: test@lab5G-open5GS:~/docker_open5gs$ sudo docker-compose -f sa-vonr-deploy.yaml up ERROR: yaml.parser.ParserError: while parsing a block mapping in "./sa-vonr-deploy.yaml", line 197, column 5 expected , but found '' in "./sa-vonr-deploy.yaml", line 220, column 6 If I skip this part, the dockers will start, but it results in the AMF aborting all SCTP INIT attempts from the gNodeB: Could someone give me a clue about how to get it to work? Thanks in advance. Can you post here the sa-vonr-deploy.yaml you modified? Also, can you let me know the docker-ce and docker compose version you are running? Hi @herlesupreeth. The requested docker version: cogisys@lab5G-open5GS-fork:~/docker_open5gs$ docker --version Docker version 24.0.7, build 24.0.7-0ubuntu2~22.04.1 cogisys@lab5G-open5GS-fork:~/docker_open5gs$ docker compose version Docker Compose version v2.3.3 And the sa-vonr-deploy.yaml in attachment. sa-vonr-deploy.yaml.txt The indentation of the "ports" section under amf and upf is wrong. Please align it (yaml files are very sensitive to spaces). Thanks for the information, I didn't know it was space-sensitive. I tried again with a copy/paste from the repo, and I had the same issue. However, I was able to make it work: there is an extra space between the sharp character and "ports" in the sa-vonr-deploy.yaml from the repo. I removed it and it starts. Thanks again for your help.
gharchive/issue
2024-11-18T12:59:16
2025-04-01T04:34:29.333392
{ "authors": [ "Stromduster", "herlesupreeth" ], "repo": "herlesupreeth/docker_open5gs", "url": "https://github.com/herlesupreeth/docker_open5gs/issues/390", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
207991704
Add a server subcommand which acts as a client The only way right now to interact with the server is via one of our official client libraries or to generate a client impl from the server protocol. This is great for game development with an engine like Unity or Unreal, but it makes demos and administrative tasks tricky because another language/toolchain/environment is required. It'd be great to be able to run Nakama with the necessary connection parameters as a client subcommand which could be used to perform basic tasks. For example: nakama client register-user --device-id "some unique ID" We'll need to establish how much of the surface area of the game server we'd want/need to support within the subcommand, but it'd be useful to operate on the server and complement the inspection options available in the embedded dashboard. This might be handled better as a separate command line tool like nakamactl or similar. It depends whether we want to package the functionality with the main server. Different examples have made different choices: cockroachdb has a subcommand called cockroach sql which gives a REPL as a client connection to the server. postgres has a separate tool called psql. hbase has a subcommand called hbase shell which gives a REPL as a client connection, similar to cockroachdb. This is my 2c: I would like to keep this command line tool in the same nakama binary: given that we already have the doctor and migrate subcommands, I would like to keep the consistency; there is less management overhead, install scripts etc. when you have one binary; and the build system stays much the same. As a side note, I think we'd need a REPL-like environment as we have an active WS connection to the server (as opposed to disconnecting and reconnecting). I don't think this issue is needed now. We have gRPC/HTTP access, multiple client libraries, Swagger API definitions, and documentation to help learn the API and experiment.
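Purely as a sketch of the proposed invocation (none of this is Nakama code, and the address and action names are placeholders), a client subcommand could be wired up with the standard library along these lines:

package main

import (
    "flag"
    "fmt"
    "os"
)

func main() {
    if len(os.Args) < 3 || os.Args[1] != "client" {
        fmt.Fprintln(os.Stderr, "usage: nakama client <action> [flags]")
        os.Exit(2)
    }

    switch os.Args[2] {
    case "register-user":
        fs := flag.NewFlagSet("register-user", flag.ExitOnError)
        deviceID := fs.String("device-id", "", "unique device identifier")
        host := fs.String("host", "127.0.0.1:7350", "server address (placeholder default)")
        fs.Parse(os.Args[3:])
        // A real implementation would open a client connection here and
        // register the device ID against *host.
        fmt.Printf("would register device %q against %s\n", *deviceID, *host)
    default:
        fmt.Fprintf(os.Stderr, "unknown client action %q\n", os.Args[2])
        os.Exit(2)
    }
}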
gharchive/issue
2017-02-16T02:07:18
2025-04-01T04:34:29.343644
{ "authors": [ "mofirouz", "novabyte" ], "repo": "heroiclabs/nakama", "url": "https://github.com/heroiclabs/nakama/issues/27", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
201992856
Remove instructions to set KAFKA_TOPIC config var This config var isn’t actually used anymore :+1:
gharchive/pull-request
2017-01-19T22:21:45
2025-04-01T04:34:29.357039
{ "authors": [ "halorgium", "thody" ], "repo": "heroku/heroku-kafka-demo-java", "url": "https://github.com/heroku/heroku-kafka-demo-java/pull/14", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
76715539
Error when running heroku fork on Linux: Error: Cannot find module 'co' It sounds like this might be due to an unfinished update on one of our 4 build machines? Perhaps related to GH-1567 00:18:10.907 module.js:339 00:18:10.907 throw err; 00:18:10.907 ^ 00:18:10.908 Error: Cannot find module 'co' 00:18:10.908 at Function.Module._resolveFilename (module.js:337:15) 00:18:10.908 at Function.Module._load (module.js:287:25) 00:18:10.908 at Module.require (module.js:366:17) 00:18:10.908 at require (module.js:385:17) 00:18:10.909 at Object.<anonymous> (/home/vagrant/.heroku/node_modules/heroku-fork/commands/fork.js:3:18) 00:18:10.909 at Module._compile (module.js:431:26) 00:18:10.909 at Object.Module._extensions..js (module.js:449:10) 00:18:10.909 at Module.load (module.js:356:32) 00:18:10.909 at Function.Module._load (module.js:311:12) 00:18:10.910 at Module.require (module.js:366:17) On Mac, I can run heroku update to wait for everything to be updated, right? On Linux though, it looks like running heroku update will not block and wait for the CLI and plugins to update. Is there a way for me to block and wait for everything to be updated? That way, I could do it at the beginning of our build, to decrease the chance of our build machine discovering that it's only partially updated midway through. Or perhaps your fixes in GH-1567 will make that request moot? Running it again didn't help (unlike GH-1567, where that just made it go away), so I have logged in and done this: $ heroku --version heroku-toolbelt/3.36.8 (x86_64-linux) ruby/2.2.2 heroku-cli/4.19.19-6dfff81 (amd64-linux) go1.4.2 ERROR: json: cannot unmarshal array into Go value of type map[string]gode.Package npm ERR! max depth reached: co@^4.5.4, required by heroku-fork@3.0.3 npm ERR! max depth reached: heroku-cli-util@^3.6.0, required by heroku-fork@3.0.3 npm ERR! max depth reached: lodash@^3.2.0, required by heroku-fork@3.0.3 === Installed Plugins heroku-pipeline More info to come as I learn more about this... Other linux machines we have are fine though: heroku version heroku-toolbelt/3.36.7 (x86_64-linux) ruby/2.2.2 heroku-cli/4.19.19-6dfff81 (amd64-linux) go1.4.2 === Installed Plugins heroku-apps@0.1.4 heroku-fork@3.0.3 heroku-pipeline So, this might just be an error on this particular machine. Removed and reinstalled Heroku Toolbelt, and now fork works again: $ heroku version heroku-toolbelt/3.36.8 (x86_64-linux) ruby/2.2.2 heroku-cli/4.19.19-6dfff81 (amd64-linux) go1.4.2 === Installed Plugins heroku-fork@3.0.3 That's the oddest thing. Feel free to close out, since it's a transient error. I wanted to leave it open just in case you or others happen to run into this. I was able to identify a few bugs that I resolved in 4.19.20. One includes the locking around an individual plugin, so hopefully that will make the Error: cannot find module bug a thing of the past. However, maybe not until everything goes through one more round of updates. There was also a bug with the update command that it was using the autoupdate file to check if it needed to update. That's the bug you saw, that calling heroku update didn't do an update, it just checked the autoupdate file and saw it didn't need to update.
gharchive/issue
2015-05-15T12:38:29
2025-04-01T04:34:29.361894
{ "authors": [ "Taytay", "dickeyxxx" ], "repo": "heroku/heroku", "url": "https://github.com/heroku/heroku/issues/1571", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
3576803
Git error output when destroying an app from a non-git directory

If you are in a non-git directory and destroy an app, it will output:

fatal: Not a git repository (or any of the parent directories): .git

This should be avoided (or at least hidden).

Oh hey, what was the command line used?

It also seems to come up when running specs (see below), but I haven't seen it on heroku destroy: for some reason. Below the spec output are two cases of create/destroy apps without git dirs.

rake spec without a git dir

~/github/steakknife/heroku ᐅ rake spec
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
/Volumes/Users/barry/.rvm/rubies/ruby-1.9.3-p125/bin/ruby -S rspec ./spec/heroku/auth_spec.rb ./spec/heroku/client_spec.rb ./spec/heroku/command/addons_spec.rb ./spec/heroku/command/apps_spec.rb ./spec/heroku/command/auth_spec.rb ./spec/heroku/command/base_spec.rb ./spec/heroku/command/config_spec.rb ./spec/heroku/command/db_spec.rb ./spec/heroku/command/domains_spec.rb ./spec/heroku/command/drains_spec.rb ./spec/heroku/command/help_spec.rb ./spec/heroku/command/keys_spec.rb ./spec/heroku/command/logs_spec.rb ./spec/heroku/command/maintenance_spec.rb ./spec/heroku/command/pg_spec.rb ./spec/heroku/command/pgbackups_spec.rb ./spec/heroku/command/plugins_spec.rb ./spec/heroku/command/ps_spec.rb ./spec/heroku/command/releases_spec.rb ./spec/heroku/command/run_spec.rb ./spec/heroku/command/sharing_spec.rb ./spec/heroku/command/ssl_spec.rb ./spec/heroku/command/stack_spec.rb ./spec/heroku/command/version_spec.rb ./spec/heroku/command_spec.rb ./spec/heroku/helpers_spec.rb ./spec/heroku/heroku-postgresql_client_spec.rb ./spec/heroku/heroku-shared-postgresql_client_spec.rb ./spec/heroku/pg_resolver_spec.rb ./spec/heroku/pgbackupsclient_spec.rb ./spec/heroku/plugin_spec.rb
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: Not a git repository (or any parent up to mount parent /Volumes)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
.................................F.........................................................................................................................................................................................................................................................................................

Failures:

  1) Heroku::Client read_logs new style can read new style logs
     Failure/Error: @client.read_logs("myapp") do |logs|
     WebMock::NetConnectNotAllowedError: Real HTTP connections are disabled. Unregistered request: GET http://api.heroku.com:443/logplex_url with headers {'Accept'=>'*/*', 'User-Agent'=>'Ruby'}
     You can stub this request with the following snippet:
     stub_request(:get, "http://api.heroku.com:443/logplex_url").
       with(:headers => {'Accept'=>'*/*', 'User-Agent'=>'Ruby'}).
       to_return(:status => 200, :body => "", :headers => {})
     registered request stubs:
     stub_request(:get, "https://api.heroku.com/logplex_url")
     stub_request(:get, "https://api.heroku.com/apps/myapp/logs?logplex=true")
     ============================================================
     # ./lib/heroku/client.rb:414:in `block in read_logs'
     # ./lib/heroku/client.rb:413:in `read_logs'
     # ./spec/heroku/client_spec.rb:140:in `block (4 levels) in <top (required)>'

Finished in 4.74 seconds
331 examples, 1 failure

Failed examples:

rspec ./spec/heroku/client_spec.rb:139 # Heroku::Client read_logs new style can read new style logs

Randomized with seed 30734

Coverage report generated for RSpec to /Volumes/Users/barry/github/steakknife/heroku/coverage. 2267 / 3113 LOC (72.82%) covered.
rake aborted!
/Volumes/Users/barry/.rvm/rubies/ruby-1.9.3-p125/bin/ruby -S rspec ./spec/heroku/auth_spec.rb ./spec/heroku/client_spec.rb ./spec/heroku/command/addons_spec.rb ./spec/heroku/command/apps_spec.rb ./spec/heroku/command/auth_spec.rb ./spec/heroku/command/base_spec.rb ./spec/heroku/command/config_spec.rb ./spec/heroku/command/db_spec.rb ./spec/heroku/command/domains_spec.rb ./spec/heroku/command/drains_spec.rb ./spec/heroku/command/help_spec.rb ./spec/heroku/command/keys_spec.rb ./spec/heroku/command/logs_spec.rb ./spec/heroku/command/maintenance_spec.rb ./spec/heroku/command/pg_spec.rb ./spec/heroku/command/pgbackups_spec.rb ./spec/heroku/command/plugins_spec.rb ./spec/heroku/command/ps_spec.rb ./spec/heroku/command/releases_spec.rb ./spec/heroku/command/run_spec.rb ./spec/heroku/command/sharing_spec.rb ./spec/heroku/command/ssl_spec.rb ./spec/heroku/command/stack_spec.rb ./spec/heroku/command/version_spec.rb ./spec/heroku/command_spec.rb ./spec/heroku/helpers_spec.rb ./spec/heroku/heroku-postgresql_client_spec.rb ./spec/heroku/heroku-shared-postgresql_client_spec.rb ./spec/heroku/pg_resolver_spec.rb ./spec/heroku/pgbackupsclient_spec.rb ./spec/heroku/plugin_spec.rb failed

Tasks: TOP => spec
(See full trace by running task with --trace)
~/github/steakknife/heroku ᐅ

Create and destroy app, no git dir

~/heroku/dasfadsf88888 ᐅ heroku apps:create dasfadsf88888
Creating dasfadsf88888... done, stack is bamboo-mri-1.9.2
http://dasfadsf88888.heroku.com/ | git@heroku.com:dasfadsf88888.git
~/heroku/dasfadsf88888 ᐅ ls -la
total 0
drwxr-xr-x 2 barry admin 68 Mar 9 04:09 .
drwxr-xr-x 4 barry admin 136 Mar 9 04:09 ..
~/heroku/dasfadsf88888 ᐅ cd ..
~/heroku ᐅ ls -la
total 0
drwxr-xr-x 4 barry admin 136 Mar 9 04:09 .
drwxr-xr-x 127 barry admin 4318 Mar 9 04:12 ..
drwxr-xr-x 2 barry admin 68 Mar 9 04:09 dasfadsf88888
drwxr-xr-x 2 barry admin 68 Mar 9 02:22 test
~/heroku ᐅ heroku apps:destroy --confirm dasfadsf88888
/Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/gems/heroku-2.20.0/lib/heroku/command/apps.rb:191:in `destroy': uninitialized constant Heroku::Command::Apps::CommandFailed (NameError)
  from /Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/gems/heroku-2.20.0/lib/heroku/command.rb:129:in `run'
  from /Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/gems/heroku-2.20.0/lib/heroku/cli.rb:9:in `start'
  from /Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/gems/heroku-2.20.0/bin/heroku:15:in `<top (required)>'
  from /Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/bin/heroku:19:in `load'
  from /Volumes/Users/barry/.rvm/gems/ruby-1.9.3-p125@utils/bin/heroku:19:in `<main>'
~/heroku ᐅ heroku apps:destroy dasfadsf88888
! WARNING: Potentially Destructive Action
! This command will affect the app: dasfadsf88888
! To proceed, type "dasfadsf88888" or re-run this command with --confirm dasfadsf88888
> dasfadsf88888
Destroying dasfadsf88888 (including all add-ons)... done
~/heroku ᐅ

Create repo, create app, remove repo, destroy app

~/heroku ᐅ mkdir blah7777
~/heroku ᐅ cd blah7777
~/heroku/blah7777 ᐅ git init
Initialized empty Git repository in /Volumes/Users/barry/heroku/blah7777/.git/
~/heroku/blah7777 (master ✔) ᐅ heroku apps:create blah7777
Creating blah7777... done, stack is bamboo-mri-1.9.2
http://blah7777.heroku.com/ | git@heroku.com:blah7777.git
Git remote heroku added
~/heroku/blah7777 (master ✔) ᐅ rm -rf .git
~/heroku/blah7777 ᐅ heroku apps:destroy blah7777
! WARNING: Potentially Destructive Action
! This command will affect the app: blah7777
! To proceed, type "blah7777" or re-run this command with --confirm blah7777
> blah7777
Destroying blah7777 (including all add-ons)... done
~/heroku/blah7777 ᐅ
gharchive/issue
2012-03-09T02:11:35
2025-04-01T04:34:29.366621
{ "authors": [ "geemus", "steakknife" ], "repo": "heroku/heroku", "url": "https://github.com/heroku/heroku/issues/252", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
532057028
HOME should not be /workspace

In order to resolve #42 and make some of the shimmed buildpacks work, we've set $HOME to /workspace in heroku/pack:18 for the heroku user. But in the future we want to avoid this so that it matches heroku/pack:18-build. We cannot (and should not) set $HOME to /workspace in heroku/pack:18-build because it can cause secrets, local caches in dot dirs, etc. to end up in the launch image. Fixing this will probably require changes in the shimmed buildpacks.

@jkutner Is this ok to close out? It seems HOME is now /app: https://github.com/heroku/pack-images/blob/d724bd94e8dc9d6cb2d76b49b5469cdee0347f29/Dockerfile.run#L15

It looks like this was resolved by #43 / #54, feel free to reopen if there is work left to do :-)
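For anyone double-checking the current behaviour, $HOME inside the published images can be inspected directly from the shell. The one-liners below are an illustrative sketch, assuming the images are still pullable under the heroku/pack:18 and heroku/pack:18-build tags mentioned above; they are not part of the project's own tooling.

# Hypothetical check: print what HOME resolves to in the run and build images.
docker run --rm --entrypoint sh heroku/pack:18 -c 'echo "run image: HOME=$HOME"'
docker run --rm --entrypoint sh heroku/pack:18-build -c 'echo "build image: HOME=$HOME"'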
gharchive/issue
2019-12-03T15:15:17
2025-04-01T04:34:29.371556
{ "authors": [ "edmorley", "jkutner" ], "repo": "heroku/pack-images", "url": "https://github.com/heroku/pack-images/issues/45", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1561632394
🛑 Software Center - Test 1 is down
In d4dcc89, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Software Center - Test 1 is back up in dbeef8b.
gharchive/issue
2023-01-30T02:31:00
2025-04-01T04:34:29.380795
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/11532", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1180120009
🛑 Auth-Bridge - Test 1 is down
In 4cb0252, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 484d376.
gharchive/issue
2022-03-24T22:31:16
2025-04-01T04:34:29.382953
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/1732", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1784893031
🛑 Auth-Bridge - Test 1 is down
In 70a005f, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in a5a96b9.
gharchive/issue
2023-07-02T21:32:48
2025-04-01T04:34:29.385064
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/17780", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1803525834
🛑 Auth-Bridge - Test 1 is down
In e699208, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 401aa8d.
gharchive/issue
2023-07-13T18:07:38
2025-04-01T04:34:29.387208
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/18256", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1805832480
🛑 Auth-Bridge - Test 1 is down
In 0e5586d, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 924e142.
gharchive/issue
2023-07-15T02:44:42
2025-04-01T04:34:29.389541
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/18309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1812879153
🛑 Auth-Bridge - Test 1 is down
In ba09de8, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in a14bd31.
gharchive/issue
2023-07-19T23:28:48
2025-04-01T04:34:29.391762
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/18537", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1185700893
🛑 Auth-Bridge - Test 1 is down
In 4cb5297, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 574316f.
gharchive/issue
2022-03-30T01:35:22
2025-04-01T04:34:29.394171
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/1928", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1847708012
🛑 Auth-Bridge - Test 1 is down
In 38f6878, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 36ee1e9.
gharchive/issue
2023-08-12T03:27:39
2025-04-01T04:34:29.396299
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/19699", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1190902766
🛑 Software Center - Test 1 is down
In f0c88c2, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Software Center - Test 1 is back up in a9e5893.
gharchive/issue
2022-04-03T11:30:02
2025-04-01T04:34:29.398436
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/2105", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1899278833
🛑 Auth-Bridge - Test 1 is down
In 07249a3, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in 6a0afad after 10 minutes.
gharchive/issue
2023-09-16T03:41:50
2025-04-01T04:34:29.400803
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/21533", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1905434573
🛑 Auth-Bridge - Test 1 is down
In 8de13e6, Auth-Bridge - Test 1 ($AUTH_BRIDGE_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Auth-Bridge - Test 1 is back up in c69032a after 10 minutes.
gharchive/issue
2023-09-20T17:26:36
2025-04-01T04:34:29.402954
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/21766", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1196811042
🛑 Software Center - Test 1 is down
In 67ce5d8, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Software Center - Test 1 is back up in fd04807.
gharchive/issue
2022-04-08T04:21:04
2025-04-01T04:34:29.405060
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/2304", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1996876056
🛑 Software Center - Test 1 is down
In 0c3653c, Software Center - Test 1 ($SOFTWARECENTER_TEST_1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Software Center - Test 1 is back up in bc0d374 after 1 hour, 15 minutes.
gharchive/issue
2023-11-16T13:38:01
2025-04-01T04:34:29.407179
{ "authors": [ "herrphon" ], "repo": "herrphon/upptime", "url": "https://github.com/herrphon/upptime/issues/24632", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }