Dataset columns:
- id: string (length 4 to 10)
- text: string (length 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] (date range 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (range 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
105469749
Add README.md for udp service. See the graphite README.md for reference. Fixed by https://github.com/influxdb/influxdb/pull/4379. @beckettsean -- once this closes, the documents can point to the README added as part of #4379. New README available for UDP. https://github.com/influxdb/influxdb.com/issues/396 created for linking to the README, thanks, @otoolep!
gharchive/issue
2015-09-08T21:10:32
2025-04-01T06:39:04.851841
{ "authors": [ "beckettsean", "corylanou", "otoolep" ], "repo": "influxdb/influxdb", "url": "https://github.com/influxdb/influxdb/issues/4041", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1992015341
🛑 alternative site is down In 5cbd4bd, alternative site (https://almgro7al3nzy.com) was down: HTTP code: 0 Response time: 0 ms Resolved: alternative site is back up in 831fe4b after 30 minutes.
gharchive/issue
2023-11-14T05:24:46
2025-04-01T06:39:04.854326
{ "authors": [ "info-devf5r" ], "repo": "info-devf5r/VPN", "url": "https://github.com/info-devf5r/VPN/issues/519", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
320323032
Added usage of options This commit is dependent on the following PR: https://github.com/infobloxopen/atlas-app-toolkit/pull/22 PR is not relevant anymore
gharchive/pull-request
2018-05-04T15:39:18
2025-04-01T06:39:04.855711
{ "authors": [ "Evgeniy-L", "vitalykarpenko" ], "repo": "infobloxopen/atlas-contacts-app", "url": "https://github.com/infobloxopen/atlas-contacts-app/pull/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1071603645
Move all RFCs into same directory Keeping our rfcs/adrs in separate directories causes me confusion, and I imagine would do so for others too. Moreover, I think it's not useful. When we initially separated them out, it was to protect users from encountering design documents that wouldn't be useful to them. Now we can do this by just omitting the docs we wish to hide from the listing in SUMMARY.md. [x] Ran make fmt-fix (or had formatting run automatically on all files edited) @Kukovec good call! I fixed the one link that would have been broken. Thanks :D
gharchive/pull-request
2021-12-06T00:28:00
2025-04-01T06:39:04.915691
{ "authors": [ "shonfeder" ], "repo": "informalsystems/apalache", "url": "https://github.com/informalsystems/apalache/pull/1148", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2129324052
🛑 Meyer Lab Web is down In fee0a37, Meyer Lab Web (www.meyerlab.com.py) was down: HTTP code: 0 Response time: 0 ms Resolved: Meyer Lab Web is back up in 939aa13 after 10 minutes.
gharchive/issue
2024-02-12T02:28:41
2025-04-01T06:39:04.918219
{ "authors": [ "informaticaMeyerlab" ], "repo": "informaticaMeyerlab/statusPageMeyerLab", "url": "https://github.com/informaticaMeyerlab/statusPageMeyerLab/issues/160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
158991621
File upload The data entry app needs to support file uploads. We will need to introduce an annotation to identify when a url column should be regarded as an asset. Assets should be uploaded to Hatrac and then the asset url column should be updated with the hatrac url of the file.

What about introducing a new column type "hatrac_uri"? For this column type the app: asks users to select a file to be uploaded; calculates the hatrac md5 checksum, so it won't try to upload the same content over again; sets the appropriate content-type if possible; calculates the filesize if possible; calculates the sha256 checksum. All of these should be implemented in a module so they can be reused.

I think the UI would need more logic and more guidance than just recognizing a column type... there is substantial overlap with iobox concepts here: some annotation with a mustache pattern to produce the desired object name from row metadata and/or checksum information? Some annotation to collect the local filename (if feasible w/ HTML5?) as additional row metadata. Some annotation to guide initial object ACL config and/or ACL setting UX.

From our conversation yesterday, a flow could be: 1. collect form data and file-related information (size, checksum, etc); 2. mint identifier (i.e., accession number); 3. post object (hatrac); 4. post entity (ermrest). Step 2 requires a new accession service or feature, probably on ERMrest and probably configured by the same custom trigger logic we use for generating accession numbers.

Also, if we intend to capture details about multiple assets per entity (row of a table) then we will need to modify the annotation, because it currently assumes that there will be a single asset annotation on a table. https://github.com/informatics-isi-edu/ermrest/blob/master/user-doc/annotation.md#2016-asset Could be a matter of changing the definition to allow a list [ ... ] of the same payload objects as currently specified. Thus the top level modes for the payload would be: null, {}, { ...properties... }, or [ { ...properties... }, ... ].

Should we make EZID one option for that accession generation service?
{"api_version":"1.0","publisher":{"api_key":"05dde50f1d1a384dd78767c55493e4bb","name":"GitHub"},"entity":{"external_key":"github/informatics-isi-edu/chaise","title":"informatics-isi-edu/chaise","subtitle":"GitHub repository","main_image_url":"https://cloud.githubusercontent.com/assets/143418/17495839/a5054eac-5d88-11e6-95fc-7290892c7bb5.png","avatar_image_url":"https://cloud.githubusercontent.com/assets/143418/15842166/7c72db34-2c0b-11e6-9aed-b52498112777.png","action":{"name":"Open in GitHub","url":"https://github.com/informatics-isi-edu/chaise"}},"updates":{"snippets":[{"icon":"PERSON","message":"@robes in #426: From our conversation yesterday, a flow could be:\r\n\r\n1. collect form data and file-related information (size, checksum, etc)\r\n2. mint identifier (i.e., accession number)\r\n3. post object (hatrac)\r\n4. post entity (ermrest)\r\n\r\nStep 2, requires a new accession service or feature, probably on ERMrest and probably configured by the same custom trigger logic we use for generating accession numbers."}],"action":{"name":"View Issue","url":"https://github.com/informatics-isi-edu/chaise/issues/426#issuecomment-259225603"}}} There are quite a few libraries out there for handling file upload in modern browsers. We should investigate if there are benefits to leveraging one of these rather than rolling our own. Here's one in particular that I was looking into. It is an Angular library that is pretty mature and popular: https://github.com/danialfarid/ng-file-upload Of course, there are plenty of others out there too. @chiragsanghvi could you look at hongsuda's last comment above and update her checklist? it looks like you may have completed these but I'd like to confirm. thanks @robes I have updated the issue. great. thanks. so are we skipping the sha256 for now? Yes, we are skipping sha256 for now. okay, so as I understand it then, the last things to do are work on tests for the automated (travis) and manually-invoked modes. The code is merged in Master and has some happy flow testcases for Chaise that run locally but not on Travis. There are testcases in ermrestjs that test the upload part.
gharchive/issue
2016-06-07T18:26:09
2025-04-01T06:39:04.937577
{ "authors": [ "carlkesselman", "chiragsanghvi", "hongsudt", "karlcz", "mikedarcy", "robes" ], "repo": "informatics-isi-edu/chaise", "url": "https://github.com/informatics-isi-edu/chaise/issues/426", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
301619144
TOC header and Main link on the top This PR addresses #1449. Added a Main link at the top of the list and renamed the header to Contents. Test case is in presentation.spec.js. @jrchudy approved this pull request. Please merge and update people on this improvement.
gharchive/pull-request
2018-03-02T01:15:58
2025-04-01T06:39:04.942040
{ "authors": [ "amitjha21", "hongsudt" ], "repo": "informatics-isi-edu/chaise", "url": "https://github.com/informatics-isi-edu/chaise/pull/1450", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
302942183
Add CI configuration for the Runner. The setup instructions are here: https://github.com/infra-ci-book/ketchup-vagrant-ansible/blob/master/tests/README.md https://github.com/infra-ci-book/gitlab-vagrant-ansible/commit/69a982c29efe6e68edbd167b75bcdfb286aeeefc
gharchive/issue
2018-03-07T02:29:15
2025-04-01T06:39:04.950195
{ "authors": [ "irixjp" ], "repo": "infra-ci-book/gitlab-vagrant-ansible", "url": "https://github.com/infra-ci-book/gitlab-vagrant-ansible/issues/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2328965927
aws ebs unit price is not correct Partial result of infracost breakdown:

aws_instance.server[0]
├─ Instance usage (Linux/UNIX, on-demand, r6i.large)   730 hours   $89.03
├─ EC2 detailed monitoring                             7 metrics   $1.93
├─ root_block_device
│  └─ Storage (general purpose SSD, gp3)               64 GB       $4.69
└─ ebs_block_device[0]
   └─ Storage (general purpose SSD, gp3)               2,048 GB    $150.24

Actually the monthly price of 2048 GB of EBS gp3 is more than 800 USD.

@KaimingWan thanks for creating the issue. In us-east-1, EBS gp3 is $0.08/GB-month x 2048 = $163.84. Which region are you using? Can you please share the Terraform HCL code snippet that generated the above cost estimate so we can reproduce it and investigate?

@alikhajeh1 Thank you for your response. The $800 USD figure is inaccurate. I recalculated the price for the cn-northwest-1 region. The unit price is ¥0.5312 per GB-month of provisioned storage, and based on the exchange rate, this price should be reasonable.

@KaimingWan thanks for confirming, so I'll close this issue. Just FYI I tested it as follows using your numbers (0.5312 x 2048 = 1.1K CNY), which matches given that we use exchange rates to convert USD to CNY.

# main.tf
provider "aws" {
  region                      = "cn-northwest-1"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  access_key                  = "mock_access_key"
  secret_key                  = "mock_secret_key"
}

resource "aws_instance" "my_web_app" {
  ami           = "ami-005e54dee72cc1d00"
  instance_type = "m3.xlarge"

  root_block_device {
    volume_size = 2048
    volume_type = "gp3"
  }
}

$ INFRACOST_CURRENCY=CNY infracost breakdown --path .
...
└─ root_block_device
   └─ Storage (general purpose SSD, gp3)   2,048 GB   1,087.90 CNY
gharchive/issue
2024-06-01T06:54:58
2025-04-01T06:39:04.958421
{ "authors": [ "KaimingWan", "alikhajeh1" ], "repo": "infracost/infracost", "url": "https://github.com/infracost/infracost/issues/3090", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1166205994
fix: panic error if policy body is invalid Solves an issue where a valid policy file type is provided in --policy-path but the contents are malformed. @aliscott makes sense to me
gharchive/pull-request
2022-03-11T09:43:19
2025-04-01T06:39:04.960165
{ "authors": [ "hugorut" ], "repo": "infracost/infracost", "url": "https://github.com/infracost/infracost/pull/1453", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1860607233
🛑 Academusoft is down In eff7e91, Academusoft ($SECRET_ACADEMUSOFT) was down: HTTP code: 503 Response time: 1236 ms Resolved: Academusoft is back up in f8a01e0 after 560 days, 16 hours, 14 minutes.
gharchive/issue
2023-08-22T05:39:16
2025-04-01T06:39:04.962563
{ "authors": [ "infraestructuraidt" ], "repo": "infraestructuraidt/status", "url": "https://github.com/infraestructuraidt/status/issues/1768", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1730420978
Small improvements to events package for nats publishing/subscription Adds small helpers for setting up your subscriber/publisher configs with NATS. Adds missing definitions for event message event types. Adjusts PublishEventMessage to publish to a similar subject name as we expect with PublishChangeMessage. Adjusts NATS to use DurableCalculator instead of QueuePrefix per NATS/watermill recommendations. This will now use a small function to calculate the name of the consumer instead of just using the queue/topic name. This was causing issues with overlapping consumer names for multiple instances/apps that were listening on the same subject. This now essentially takes the queue name that is provided by the application (--event-subscriber-queuegroup) and then concatenates it with a hex encoded string of the full topic name (prefix+calculated topic) to give us a consumer name that is easy to calculate. Using the DurablePrefix gives us the added benefit that when a member of the queue group drops or rejoins, it will be able to pick back up where the last consume left off vs potentially starting fresh if all members of that group had previously dropped. Could you add the Close() to the publisher and subscriber?

// Close will close the subscriber
func (s *Subscriber) Close() error {
	return s.subscriber.Close()
}
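For illustration, the consumer-name calculation described above might look roughly like this (a minimal sketch; the function and argument names are assumptions for illustration, not the actual infratographer/x API):

package events

import "encoding/hex"

// durableName sketches the DurableCalculator idea: combine the
// application-provided queue group with a hex encoding of the full
// subject, so groups on different subjects never share a consumer name.
func durableName(queueGroup, prefix, topic string) string {
	fullSubject := prefix + "." + topic
	return queueGroup + "-" + hex.EncodeToString([]byte(fullSubject))
}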
gharchive/pull-request
2023-05-29T09:31:26
2025-04-01T06:39:04.966965
{ "authors": [ "rizzza", "tylerauerbeck" ], "repo": "infratographer/x", "url": "https://github.com/infratographer/x/pull/87", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
962964308
Adjustments for upstream functional CI (#214) Add Jenkins CI and fix smoketest test Since this is a cherry pick of the work from 1.2 I think the same comments will be applicable here. I won't bother spending the time to call the same items out though. Was this merged/proposed to main as well or are we just focusing on testing the stable-1.x branches for now? test
gharchive/pull-request
2021-08-06T18:41:05
2025-04-01T06:39:04.968750
{ "authors": [ "leifmadsen", "pleimer" ], "repo": "infrawatch/service-telemetry-operator", "url": "https://github.com/infrawatch/service-telemetry-operator/pull/242", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1428496127
Increment list number at line break The list number is automatically incremented when the enter key is used to create a new line but is not incremented when the o key is used to create a new line. I often use ESC + o key to create a new line in vim, so it would be easier to type if the number is incremented. I don't know whether it should be a default behavior or not because people may want to just insert a new line without inserting a list bullet/number. But you can replicate the enter key behavior by making a custom command like so:

inkdrop.onEditorLoad(() => {
  const editor = inkdrop.getActiveEditor();
  const inputField = editor.cm.getInputField();
  const { commands } = inkdrop;
  commands.add(inputField, {
    "custom:insert-below-with-newline": () => {
      commands.dispatch(inputField, "vim:activate-insert-mode");
      commands.dispatch(inputField, "editor:go-line-end");
      commands.dispatch(inputField, "editor:new-line");
    },
  });
});

And bind it in keymap.cson:

'.CodeMirror.vim-mode.normal-mode:not(.key-buffering):not(.visual-mode) textarea':
  'o': 'custom:insert-below-with-newline'
gharchive/issue
2022-10-29T23:06:53
2025-04-01T06:39:04.994149
{ "authors": [ "craftzdog", "seachicken" ], "repo": "inkdropapp/inkdrop-vim", "url": "https://github.com/inkdropapp/inkdrop-vim/issues/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
433454083
Added alt to footer social icons #26 I fixed problem with pulling the request to the wrong branch. Great work @nexus0212! Thanks for the contribution. I'll merge it into staging right now.
gharchive/pull-request
2019-04-15T20:01:19
2025-04-01T06:39:05.004779
{ "authors": [ "nexus0212", "tonynguyen111997" ], "repo": "inland-empire-software-development/landing", "url": "https://github.com/inland-empire-software-development/landing/pull/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1454848557
Can we set Marker Cluster options? Hi there, Is it possible to pass an object with these options? I don't see it in the component API. Regards, Hi @HusamIbrahim, thanks for the response. I found a way to do it with ref and computed. One last question: is it possible to use InfoWindow inside CustomMarker? It's not working for me. Regards. No, this isn't supported unfortunately.
gharchive/issue
2022-11-18T10:25:00
2025-04-01T06:39:05.015057
{ "authors": [ "HusamIbrahim", "francoromanol" ], "repo": "inocan-group/vue3-google-map", "url": "https://github.com/inocan-group/vue3-google-map/issues/112", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
466054573
I train under Ubuntu; only four minigames can run? Dear Inorry, Thanks for your sharing, I can learn a lot. Now I have trained four minigames and they are consistent with your results. But the other three minigames cannot run; the error is id 1/id 17 unknown. I use a GTX 1080 Ti and Ubuntu 16.04. I wonder if it has something to do with it? It's definitely not related to your GPU or OS. Did you install pysc2 from the master branch? Yes, I installed pysc2 from the master branch. When I run the program using the map FindAndDefeatZerglings: ValueError: Unknown ability_id: 1, type: cmd_quick. Likely a bug. When I run the program using the maps CollectMineralsAndGas and BuildMarines: ValueError: Unknown ability_id: 17, type: cmd_quick. Likely a bug. What SC2 version do you use? SC2 version is 3.16.1, maybe that is the reason? Yes, Reaver requires SC2 version 4.1+. Thank you very much!
gharchive/issue
2019-07-10T01:41:16
2025-04-01T06:39:05.023404
{ "authors": [ "inoryy", "songwaimai" ], "repo": "inoryy/reaver", "url": "https://github.com/inoryy/reaver/issues/33", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2196619403
Not able to Submit Archive with Swift SDK SPM Is this a regression? Yes Description To replicate: Use Xcode 15.3, create new project (iOS or MultiPlatform). Set deployment target to anything, 15, 16, or 17 Add Swift SDK using SPM. Archive project (should build) Submit to AppStoreConnect: Observe Error returned: Please provide the exception or error you saw 'Invalid Bundle. The bundle ____/Frameworks/ApolloLibrary.framework does not support the minimum OS Version specified in the Info.plist (ID: $someID) Please provide the environment you discovered this bug in Xcode 15.3 iOS project and MultiPlatform project (iOS/Mac), with deployment target of iOS 15/16/17 This happens with Swift SDK v4.0.1 and v5.0.0 Anything else? I'm not able to ship an app with the Swift SDK to TestFlight. I was able to do this with earlier versions of the SDK but it's been about a year since I've tried. Found this, maybe it's an Xcode 15.3 thing: https://forums.developer.apple.com/forums/thread/748177 I had same issue with Xcode 15.2 More complaints about 15.3: https://www.reddit.com/r/swift/comments/1bd7kxj/swift_ios_cannot_upload_to_testflight/ @coveloper this is fixed right or you still have problems? This issue has been addressed and is now considered solved. If you have further questions or related concerns, please open a new issue.
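One way to check what a bundled framework actually declares (a generic macOS inspection step, not from the original thread; the path and version shown are hypothetical) is to read its Info.plist from the unzipped archive:

$ plutil -p Payload/MyApp.app/Frameworks/ApolloLibrary.framework/Info.plist | grep MinimumOSVersion
  "MinimumOSVersion" => "13.0"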
gharchive/issue
2024-03-20T03:57:04
2025-04-01T06:39:05.034472
{ "authors": [ "coveloper", "essbante-io", "goncalo-frade-iohk" ], "repo": "input-output-hk/atala-prism-wallet-sdk-swift", "url": "https://github.com/input-output-hk/atala-prism-wallet-sdk-swift/issues/127", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1407423246
test new command query tx-mempool depends on: https://github.com/input-output-hk/cardano-node/issues/4459 The query tx-mempool looks like a useful command that should be added to clusterlib and used from there. The query tx-mempool looks like a useful command that should be added to clusterlib and used from there. updated
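A rough sketch of what such a clusterlib wrapper could look like (the method name and internals are assumptions for illustration, not the final clusterlib API):

import json

def query_tx_mempool(self, query: str = "info") -> dict:
    """Sketch: run `cardano-cli query tx-mempool <query>` and parse the JSON output."""
    out = self.cli(["query", "tx-mempool", query, *self.magic_args]).stdout
    return json.loads(out)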
gharchive/pull-request
2022-10-13T09:05:16
2025-04-01T06:39:05.036396
{ "authors": [ "mkoura", "saratomaz" ], "repo": "input-output-hk/cardano-node-tests", "url": "https://github.com/input-output-hk/cardano-node-tests/pull/1433", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
642061607
[BUG] - VRFKeyBadNonceOVERLAY on epoch cutover External

Summary It has been observed running a node on the private Guild Operators network that blocks minted right after epoch cutover (under 10s after cutover, exact time not confirmed but at least up to 6s) get rejected, trace: TraceForgedInvalidBlock, kind: VRFKeyBadNonceOVERLAY.

Steps to reproduce Have a block producing node with enough stake to regularly make blocks right after epoch cutover.

Expected behavior Blocks successfully made and adopted.

System info (please complete the following information): OS: Ubuntu Version: 20.04 Node version: cardano-node 1.13.0 - linux-x86_64 - ghc-8.6 git rev 9f49515afc8ca3473cc3f1e1104765e0e357d888

Screenshots and attachments Full JSON trace of VRFKeyBadNonceOVERLAY:

{
  "at": "2020-06-19T09:00:02.13Z",
  "env": "1.13.0:c8ec2",
  "ns": [ "cardano.node.Forge" ],
  "data": {
    "kind": "TraceForgedInvalidBlock",
    "reason": {
      "kind": "ValidationError",
      "error": {
        "kind": "HeaderProtocolError",
        "error": {
          "kind": "PredicateFailure",
          "failures": [
            {
              "kind": "VRFKeyBadNonceOVERLAY",
              "blockNonce": "CertifiedVRF {certifiedNatural = 257496154028758090498779421544410045289, certifiedProof = CertSimpleVRF {certU = Point 2262937906362959516303148039147879 4848089730085357766205839523249491, certC = 54060608959167033178960862473799130198, certS = 4126235464891015280355460803085182}}",
              "currentSlot": "229501",
              "previousHashAsNonce": "Nonce 24ec739bd17a04e65b8931cc8d6a2c6e5f0eaf64f561e344ebee79c92f630a50",
              "seedNonce": "Nonce 6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d"
            }
          ]
        }
      }
    },
    "slot": 229501
  },
  "app": [],
  "msg": "",
  "pid": "3477",
  "loc": null,
  "host": "cnode-ah",
  "sev": "Error",
  "thread": "46"
}

Image from my block collector, implemented in the guild operators cntools script, to get a feeling for timestamps. Epoch cutover for the guild network is exactly on the hour, with 30min epochs. It shows 5 consecutive epochs for the first blocks made by my block producing node. The Hash field contains a base64 encoded representation of a JSON trace like the one above.

Additional context Genesis for Guild network: https://github.com/cardano-community/guild-operators/blob/master/files/ptn0/files/genesis.json We have checked whether it was attempting to make a block when there was an overlay schedule with higher priority (due to d=0.5) but this was not the case; it was not on the BFT schedule. I have only ever seen the issue right after epoch cutover. The correlation with the epoch cutover suggests there may be an issue with e.g. the amount of time that is taken for rewards calculation.

hi @nc6 is this still an issue? Has this issue been solved? Closing this. If this is still relevant please re-open.
gharchive/issue
2020-06-19T15:45:18
2025-04-01T06:39:05.044230
{ "authors": [ "Jimbo4350", "Scitz0", "kevinhammond", "mrbrinker", "vix-io" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/issues/1310", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
648180704
[BUG] - DecoderErrorDeserialiseFailure "Address" when querying stake-address-info Internal/External Internal if an IOHK staff member.

Summary I am getting a DecoderErrorDeserialiseFailure "Address" error while trying to interrogate a stake address using the cardano-cli shelley query stake-address-info command.

Steps to reproduce Steps to reproduce the behavior: create a payment and a stake address (+ key pairs); delegate the stake address to one stake pool; query the stake address using: cardano-cli shelley query stake-address-info --address STAKE_ADDRESS --testnet-magic 42; see error.

Expected behavior No error; stake address info is displayed.

System info (please complete the following information): OS: Ubuntu 18.04.4 LTS node version ~# cardano-cli version cardano-cli 1.14.0 - linux-x86_64 - ghc-8.6 git rev 9cfe6fc4236ca38cfa7245ffd70feb161badd79a

Screenshots and attachments

# CARDANO_NODE_SOCKET_PATH=example/node-bft1/node.sock \
>   cardano-cli shelley query stake-address-info \
>   --address e058bcb673c7acb99076482cb4617fcd650639ab7870cd7de0353ddb9d \
>   --testnet-magic 42
option --address: DecoderErrorDeserialiseFailure "Address" (DeserialiseFailure 0 "expected list len")
Usage: cardano-cli shelley query stake-address-info --address ADDRESS (--mainnet | --testnet-magic NATURAL) [--out-file FILE]
  Get the current delegations and reward accounts filtered by stake address.

Are you sure it's not a payment address? My stake addresses are 4 characters longer. The address is created with cardano-cli shelley stake-address build. This is still not working in 1.15.0 - different error:

CARDANO_NODE_SOCKET_PATH=example/node-bft1/node.sock \
  cardano-cli shelley address info \
  --address e03a251e8f0ed7b010912f4ca24fd0e079af4aa5bccbdc8526acd448c0
option --address: Failed reading: invalid address
Usage: cardano-cli shelley query stake-address-info --address ADDRESS (--mainnet | --testnet-magic NATURAL) [--out-file FILE]
  Get the current delegations and reward accounts filtered by stake address.

# cardano-cli --version
cardano-cli 1.15.0 - linux-x86_64 - ghc-8.6 git rev 97b3e95c67940608f5acda929cf861e8ebfeddd1
gharchive/issue
2020-06-30T13:35:30
2025-04-01T06:39:05.050382
{ "authors": [ "Proxiweb", "dorin100" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/issues/1370", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
651840360
Yet more conversions to the new api And remove more parts of the old api. In particular, remove the file read/write and serialisation functions, so we can be sure that we cannot accidentally mix up the new and old formats. bors r+
gharchive/pull-request
2020-07-06T21:50:10
2025-04-01T06:39:05.052075
{ "authors": [ "dcoutts", "intricate" ], "repo": "input-output-hk/cardano-node", "url": "https://github.com/input-output-hk/cardano-node/pull/1398", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
247441034
[CSLD-119] Enable swagger & client doc autobuild by travis My only purpose is to update https://cardanodocs.com/technical/wallet/api/ and this PR seems to be the easiest way to do that. Afaik haddock is not generated because it took too long. This will need to wait on https://github.com/input-output-hk/cardano-sl/pull/1192 I omitted haddock and retained the executables-based documentation builders, which I hope will be very fast. Okay, I've just found out that the travis job limit is already fully depleted, so the chances that this will pass are slim.
gharchive/pull-request
2017-08-02T16:04:20
2025-04-01T06:39:05.054476
{ "authors": [ "Martoon-00", "domenkozar" ], "repo": "input-output-hk/cardano-sl", "url": "https://github.com/input-output-hk/cardano-sl/pull/1258", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1103994100
Let's try Continuous Integration! Recently, @bwbush, @paluh and I decided to do a trial of a continuous integration workflow, where we contribute commits directly to a mainline branch and regularly integrate work with one another. This is a placeholder PR to track differences from main. Rules of this branch:

- Push your work frequently (at least once a day, but preferably more regularly)
- Pull (integrate) regularly and resolve any merge conflicts
- Commit all work directly to this branch
- Close JIRA tickets as they become "fixed" on this branch. Tag commits with a fixes SCP-xxx message when a commit addresses a JIRA ticket.
- The absolute highest priority is maintaining a "Green" build status. If you break the build, fix it immediately!
- Do pair programming if you need code review.

@jhbertra, should we put this PR into "draft" status and leave it there until we're ready to review and push to main? @jhbertra, @paluh, this marlowe-run-development branch lets me restore a wallet and add contacts (which I see in local storage), but it doesn't display the contacts or let me use them in contracts. Issue fixed
gharchive/pull-request
2022-01-14T18:34:13
2025-04-01T06:39:05.058324
{ "authors": [ "bwbush", "jhbertra" ], "repo": "input-output-hk/marlowe-cardano", "url": "https://github.com/input-output-hk/marlowe-cardano/pull/66", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
550705430
contracts: Add Ledger.Constraints The core of this change is the Ledger.Constraints module, explained below. The rest is changing the on-chain and off-chain code to use the constraints.

Ledger.Constraints.TxConstraints Constraints on pending transactions are expressed in the TxConstraints i o type. It is very similar to the old PendingTx, except that it has two type parameters for the inputs and outputs. The constructors defined in Ledger.Constraints.TxConstraints are also pretty much the same that existed for PendingTx.

On-chain vs off-chain There are three incarnations of TxConstraints:

Off-chain (untyped): type LedgerTxConstraints = TxConstraints [LTx.TxIn] [LTx.TxOut], pretty much identical to PendingTx. There are two reasons we can't use the same type in on-chain code. First, the TxIn transaction inputs contain validators in full (un-hashed), which we don't want to have in the on-chain code. This will probably go away when we update the emulator types to take the validator out of the transaction body. Second, TxIn and TxOut only contain Data and hashes. They don't say anything about the types of data and redeemer values. This makes sense from the ledger's point of view, but when we're inside a validator script we know the types of our own data and redeemer values, and it would be nice to be able to write constraints using those types directly, without having to convert from Data first. (That was the motivation for the typed tx library as well.)

On-chain: Here we have type PendingTxConstraints a = TxConstraints () [OnChainUtxo a]. a is the type of the validator's data value. This unlocks a new constructor

mustPayToOwnAddress :: forall i a. (Monoid i, IsData a) => Value -> a -> TxConstraints i [OnChainUtxo a]

which we can use to say something about the "continuing outputs" of the validator. (Note that the idea of "own address" also doesn't exist in off-chain code because it depends on the specific input that is being validated.)

Off-chain (typed): type TypedTxConstraints (ins :: [Type]) (outs :: [Type]) is a typed interface to TxConstraints. It's very similar to what we have in Ledger.Typed.Tx.

To convert between the three representations we have the following functions:

toLedgerTx :: LedgerTxConstraints -> Tx
fromLedgerTx :: Tx -> LedgerTxConstraints
toLedgerConstraints :: PlutusTx.IsData a => Address -> PendingTxConstraints a -> LedgerTxConstraints
toTypedTxConstraints :: IsData (DataType inn) => ScriptInstance inn -> PendingTxConstraints (DataType inn) -> Maybe (Either (TypedTxConstraints '[] '[]) (TypedTxConstraints '[] '[inn])) -- (used for the state machine)
toUntypedLedgerConstraints :: forall inn out. TypedTxConstraints inn out -> LedgerTxConstraints

The WriteTx effect has been changed to take a LedgerTxConstraints value. As a result, if we write our validator as a function that produces PendingTxConstraints a, then we can use the same function to generate the constraints that are eventually turned into a transaction in off-chain code.

Semantics The semantics of the constraints are given by Ledger.Constraints.OnChain.checkPendingTx:

checkPendingTx :: (IsData a) => PendingTxConstraints a -> PendingTx -> Bool

We can view checkPendingTx as a relation between constraints and the pending transactions that satisfy the constraints. Also, checkPendingTx relates the partial orders of PendingTxConstraints a and PendingTx (defined by the partial orders of their constituent parts) in a way that looks almost like a galois connection (or is that trivial?
definitely needs some more thought).

Note about ordering: The constructors for PendingTxConstraints cannot express any information about the ordering of the transaction outputs, and the actual order of the outputs of a PendingTx does not affect the result of checkPendingTx. So we treat both inputs and outputs as if they were (multi-)sets. IMO the fact that outputs are ordered is an artifact of the ledger implementation, and relying on them to be in a particular order just makes the scripts more brittle and less composable. PendingTxConstraints and checkPendingTx can be used in any validator.

State machine The state machine type has been changed to

data StateMachine s i = StateMachine
    { -- | The transition function of the state machine. 'Nothing' indicates an invalid transition from the current state.
      smTransition :: s -> i -> Value -> Maybe (PendingTxConstraints s),
      -- | The condition checking function. Checks whether a given state transition is allowed given the 'PendingTx'.
      smCheck :: s -> i -> PendingTx -> Bool
    }

Note the transition function now takes an extra argument (the value locked by the output we're validating), and its result type is Maybe (PendingTxConstraints s). The Value of the output is included because it is as much a part of the contract's state as is the data value, and the result type combines the new state of the machine with constraints about the transaction that makes the transition. smCheck is provided as an escape hatch in case the constraints aren't enough. None of our current examples need smCheck. The state machine client (in Language.Plutus.Contract.StateMachine) uses the TypedTxConstraints type to ensure that the machine has exactly one input and one or zero outputs.

Other notable changes PendingTxOut has been replaced by TxOut (they were identical). The validators increased in size. (At least some of that is explained by additional trace strings. We should be able to get rid of those for the release version of the contracts.)

@michaelpj This is ready for another review (unless you want to wait for it to be rebased - I think the biggest change that's missing from this is replacing PubKey with PubKeyHash).

Constraints TxConstraints is now a list of constraints, with typed inputs and outputs for a ScriptType instance (the reason why it has two type parameters instead of one is that the type families of the ScriptType class caused problems with the plugin in places like data OutputConstraint = OutputConstraint { ocData :: DataType a, ... }; so the i is usually RedeemerType a and the o is DataType a for some ScriptType a). The Ledger.Constraints.OffChain module now has a function mkTx that does constraints resolution. It uses the ScriptLookups type to store all the witnesses. As a consequence, a lot of the client code needed to be changed to populate the ScriptLookups with the right values. This can be a bit tedious if you use mkTx directly, but the state machine client lib takes care of it for you, which is quite nice. I think most, if not all, of the examples could now be written as state machines anyway.
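As a rough illustration of the transition-function shape described above, here is a hypothetical counter machine written against the signatures quoted earlier (not code from this PR, and note that per the follow-up below the return type later changes to NewState):

-- Hypothetical state machine whose state is a counter. On each Step input
-- the machine requires the transaction to pay the locked value back to the
-- machine's own address with the counter incremented by one.
data CounterInput = Step

counterMachine :: StateMachine Integer CounterInput
counterMachine = StateMachine
    { smTransition = \counter Step lockedValue ->
        -- the continuing output must hold the same value and carry the
        -- incremented counter as its data value
        Just (mustPayToOwnAddress lockedValue (counter + 1))
    , smCheck = \_ _ _ -> True -- no checks beyond the constraints
    }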
State machine The transition function now returns a value of NewState a, which has a field for the new state (so we don't need to fish it out of the constraints). isFinal is also back. With the new approach in this PR we can actually implement the "one token per state machine instance" approach generically for any state machine (by modifying the transition function), but I'll leave that for future work :)

Typed transactions mkTx also typechecks the script outputs that are spent by the transaction, so the typed transaction interface is now the default. It is still possible to do untyped stuff with the constraints, by using the MustSpendScriptOutput and MustPayToOtherScript constructors. We need those for the handover from one contract to the next (for example, the setup code of the futures contract creates the two tokens and then locks them in an Escrow contract, so that they're only released when both participants have paid their initial deposits).

I added a field for the forwarding monetary policy to ScriptInstance. That way the MPS and its hash can be included in the ScriptLookups automatically. If we had to define the forwarding MPS in every contract that uses it then we'd have to remember to add it to ScriptLookups every time. We can still define other monetary policies of course.

I added a type UtxoMap = Map TxOutRef TxOutTx for the unspent outputs at some address. (In many places where we were using AddressMap in the client code (not the emulator) we were only interested in a single address anyway.)
gharchive/pull-request
2020-01-16T10:12:53
2025-04-01T06:39:05.076164
{ "authors": [ "j-mueller" ], "repo": "input-output-hk/plutus", "url": "https://github.com/input-output-hk/plutus/pull/1796", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1028096216
[Builtin] Define 'caseList' in terms of 'chooseList' As per @kwxm's request. /benchmark plutus-benchmark:validation
gharchive/pull-request
2021-10-16T16:57:57
2025-04-01T06:39:05.077849
{ "authors": [ "effectfully" ], "repo": "input-output-hk/plutus", "url": "https://github.com/input-output-hk/plutus/pull/4119", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1723273108
Implement Custom YOLO v5 Module I am trying to create an environment that uses a simple custom YOLOv5 object detection model; the model is working in a standalone DeepStream environment. I have used the traffic meter sample as the baseline; however, I still have some issues implementing this example. The main issue is that the .wts file fails to load when loading the module. Do you have any advice? @maoztamir, hi! Are you facing any issues with loading the weights when running the "traffic_meter" example? Please provide an error log. If possible, attach the complete module log. Closing due to no activity.
gharchive/issue
2023-05-24T06:35:25
2025-04-01T06:39:05.093300
{ "authors": [ "bwsw", "dorgun", "maoztamir" ], "repo": "insight-platform/Savant", "url": "https://github.com/insight-platform/Savant/issues/218", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
334798541
Added the ability to override the target from the default of _blank I'd like to target _self and couldn't see how to do this. Related to #365 +1
gharchive/pull-request
2018-06-22T08:54:41
2025-04-01T06:39:05.102721
{ "authors": [ "davidjamartin", "tomaszzmuda" ], "repo": "insites/cookieconsent", "url": "https://github.com/insites/cookieconsent/pull/396", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1433499015
pkg/gadgets/trace: remove cgo This PR removes dependencies on cgo for all trace gadgets. Are we missing something to merge this? (Please notice that there are some fixup commits to be squashed before merging)
gharchive/pull-request
2022-11-02T17:08:38
2025-04-01T06:39:05.112224
{ "authors": [ "flyth", "mauriciovasquezbernal" ], "repo": "inspektor-gadget/inspektor-gadget", "url": "https://github.com/inspektor-gadget/inspektor-gadget/pull/1093", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
142922676
A.2.5 This issue has been extracted from the issue list on: https://ies-svn.jrc.ec.europa.eu/issues/2685 Comment Why is the note included here? It seems more like a comment received earlier on. Proposed Change Remove the note. Proposed Resolution Remove note. Had I really missed so many earlier this week? Anyway, agreed.
gharchive/issue
2016-03-23T10:33:37
2025-04-01T06:39:05.119034
{ "authors": [ "PeterParslow", "jensscheerlinck" ], "repo": "inspire-eu-validation/ats-discovery-service", "url": "https://github.com/inspire-eu-validation/ats-discovery-service/issues/20", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
262750550
Uncaught ReferenceError: module is not defined Hi! I am building an app using https://github.com/auduno/clmtrackr, which requires jsfeat ("jsfeat": "git+https://github.com/inspirit/jsfeat"). When I try to build and launch the app in the browser I get this error. Any ideas? Thanks! Same here! On the dev server it works, but when I use the build, I get the same error. I have the same problem... Have you found a fix @kishmiryan-karlen?
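For context (an assumption about the usual cause, not a confirmed fix from this thread): this error typically appears when a CommonJS build of a library is loaded directly in the browser, where module is undefined. Letting the bundler resolve the package usually avoids it, roughly:

// Import through the bundler (webpack/browserify) instead of loading the
// raw CommonJS file with a <script> tag, where `module` does not exist.
import jsfeat from 'jsfeat';

console.log(jsfeat.U8_t); // a jsfeat data-type constant; logs a number if the import worked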
gharchive/issue
2017-10-04T11:16:17
2025-04-01T06:39:05.141733
{ "authors": [ "Omi0", "kishmiryan-karlen", "mrasoahaingo" ], "repo": "inspirit/jsfeat", "url": "https://github.com/inspirit/jsfeat/issues/76", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
770400108
add adapt mode node for k8s - [closed] In GitLab by @luotian-github on May 21, 2018, 04:05 Merges updatek8s18 -> updatek8s18 add adapt mode node for k8s In GitLab by @wknet123 on Nov 13, 2019, 10:57 closed
gharchive/issue
2020-12-17T21:55:31
2025-04-01T06:39:05.143161
{ "authors": [ "inspuradmin" ], "repo": "inspursoft/board", "url": "https://github.com/inspursoft/board/issues/1309", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
769556748
creating an image takes more than 20 minutes, then the create image page is lost In GitLab by @sokril on Dec 8, 2017, 17:32 If you are reporting a problem, please make sure the following information is provided: Version of board (last git commit number), docker engine and docker-compose. Config files of board; you can get them by packaging "board.cfg" and the files in the same directory, including subdirectories. Log files; you can get them by packaging /var/log/board/. Other necessary information. In GitLab by @yhua123 on Nov 8, 2019, 07:43 issue 1: the token is not re-signed with a new token, so it expires 30 min after the token is first obtained. issue 2: the web socket does not re-sign the token. In GitLab by @wknet123 on Nov 8, 2019, 07:43 Fixed with PR #776. @MrLi please implement a re-sign token action every 25 min in the Websocket callback. In GitLab by @sokril on Nov 8, 2019, 08:16 closed In GitLab by @sokril on Nov 8, 2019, 08:16 assigned to @liyanq528
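A minimal sketch of the re-sign-every-25-minutes idea described above (function names and structure are illustrative assumptions, not the actual Board codebase API):

// Hypothetical: re-sign the token inside the WebSocket callback every
// 25 minutes so it never reaches the 30-minute expiry.
const RESIGN_INTERVAL_MS = 25 * 60 * 1000;
let lastResign = Date.now();

socket.onmessage = (event) => {
  if (Date.now() - lastResign >= RESIGN_INTERVAL_MS) {
    refreshToken();            // ask the backend for a newly signed token
    lastResign = Date.now();
  }
  handleMessage(event.data);   // normal message handling
};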
gharchive/issue
2020-12-17T05:59:48
2025-04-01T06:39:05.146780
{ "authors": [ "inspuradmin" ], "repo": "inspursoft/board", "url": "https://github.com/inspursoft/board/issues/750", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
769593574
Suggest: The list of projects is sorted by creation time; the most recently created project is shown first [REPLACEMENT ISSUE] The original issue Id: 964 Title: "Suggest: The list of projects is sorted by creation time, the latest creation time project is shown in front" could not be created. This is a dummy issue, replacing the original one. It contains everything but the original issue description. In case the gitlab repository still exists, visit the following link to show the original issue: TODO In GitLab by @luotian-github on Nov 8, 2019, 07:46 should reverse the sequence in the backend by time; should change tables: user, image, project, service In GitLab by @sokril on Nov 8, 2019, 07:46 key: string; order_field: "NAME" or "CREATE_TIME"; int order_asc: 0 or 1 (0: desc, 1: asc) In GitLab by @liyanq528 on Nov 8, 2019, 07:46 need to update clarity to v0.11.9 In GitLab by @liyanq528 on Nov 8, 2019, 07:46 fixed by pull request #1110 In GitLab by @cuiaq123 on Nov 8, 2019, 07:46 fixed In GitLab by @sokril on Nov 8, 2019, 08:18 closed In GitLab by @sokril on Nov 8, 2019, 08:18 assigned to @cuiaq123 In GitLab by @liyanq528 on Nov 11, 2019, 08:47 mentioned in merge request !451 In GitLab by @liyanq528 on Nov 12, 2019, 02:18 mentioned in merge request !838 In GitLab by @liyanq528 on Nov 13, 2019, 08:44 mentioned in merge request !1050
gharchive/issue
2020-12-17T06:42:32
2025-04-01T06:39:05.153399
{ "authors": [ "inspuradmin" ], "repo": "inspursoft/board", "url": "https://github.com/inspursoft/board/issues/966", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
331004655
NextMaxID for GetMediaComments returns json [Bug] Hey, the NextMaxID feature seems to be working fine with GetFollowerRequest, but it seems to have a bug in GetMediaComments, in which NextMaxID is returned as JSON. This will lead to a "java.net.URISyntaxException: Illegal character in query" exception, which is caused by JSON characters ("{}" and the space character). The full exception description:

Caused by: java.net.URISyntaxException: Illegal character in query at index 106: https://i.instagram.com/api/v1/media/1797253869312684639_1296464116/comments/?ig_sig_key_version=4&max_id={"server_cursor": "AQCAT4EDcxAlIIAqDFR4sVdmOOARc1b_XKz4fdhaGEJ9bqDMU1iVKcC2EZQVK7kOUTH-ctDa7zcihPB5T1Xs1bmrhZUbniUkW39U4irtjaOOPA", "is_server_cursor_inverse": false}

well i hope i didn't give away private keys in that exception 😂 anyways here it is thanks. @jessrix it seems that you are sending more than it asks for in the max_id. You should be sending just the server_cursor. Can you post your code for me to take a look if there's something wrong in the library or in your implementation? Here is the original code that generated the exception:

String nextMaxId = null;
while (true) {
    InstagramGetMediaCommentsResult comments = instagram.sendRequest(
            new InstagramGetMediaCommentsRequest(lastpost.getId(), nextMaxId));
    for (InstagramComment c : comments.getComments()) {
        // some code here
    }
    nextMaxId = comments.getNext_max_id();
    if (nextMaxId == null) {
        break;
    }
}

The problem here was that nextMaxId is not the server_cursor itself, but a JSON object containing it, so I did this:

JSONParser jsonParser = new JSONParser();
JSONObject jo = (JSONObject) jsonParser.parse(comments.getNext_max_id());
nextMaxId = (String) jo.get("server_cursor");

And it perfectly worked! The problem is that I found "InstagramGetMediaCommentsResult" returning JSON as nextMaxId a little bit weird, because all the other payloads return only the server cursor and not the whole JSON. Thanks again. Addressed in rewrite or no longer relevant.
gharchive/issue
2018-06-10T21:37:04
2025-04-01T06:39:05.166013
{ "authors": [ "brunocvcunha", "jessrix", "jvogit" ], "repo": "instagram4j/instagram4j", "url": "https://github.com/instagram4j/instagram4j/issues/184", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
463709264
Get link from instagram story Is there a way to get a link from an instagram story ? I'm referring to the "See More" section which redirects the user to a link that was shared along with the story. I am looking for a way to get that link. Addressed in rewrite or no longer relevant.
gharchive/issue
2019-07-03T11:54:42
2025-04-01T06:39:05.167788
{ "authors": [ "Damo1", "jvogit" ], "repo": "instagram4j/instagram4j", "url": "https://github.com/instagram4j/instagram4j/issues/341", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1868470804
Potential Malicious Detection for instaloader.exe Issue Description I came across a concerning situation while working with the instaloader.exe file. Upon scanning the file using VirusTotal, the results indicated the presence of several potential malware detections from different antivirus engines. This is quite alarming and needs to be addressed to ensure the safety and integrity of the file.

Detected Malware:
- Bkav Pro: W32.AIDetectMalware.64
- Cynet: Malicious (score: 100)
- McAfee-GW-Edition: BehavesLike.Win64.Backdoor.wc
- SecureAge: Malicious
- Trellix (FireEye): Generic.mg.93b12db8d80368eb
- Acronis (Static ML): Undetected (an unusual detection)

Additional Information I will attach an image of the VirusTotal scan results for reference. Your expertise and assistance in resolving this issue are greatly appreciated. Let's work together to ensure the safety of the instaloader.exe file and maintain the integrity of this project. As already described in #1432 and #1002 this is a false positive. The complete source is available in this repo; you can build it on your own if you want. i think i will stick with a virtual machine until it gets fixed
gharchive/issue
2023-08-27T12:02:47
2025-04-01T06:39:05.171199
{ "authors": [ "Showmaster3000", "crossdefalt" ], "repo": "instaloader/instaloader", "url": "https://github.com/instaloader/instaloader/issues/2060", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2370483360
Broaden OAuth user pool beyond just ILab org members Only allowing instructlab org members to OAuth won't scale. There is a cost to org members along with it being too restrictive. E.g. a workshop shouldn't have to add everyone into the org. The obvious answer would be to open it to anyone. This would require accelerating the plan for a daily limit on chats. If that is too much to get done before v1 we could limit chat based on ilab org membership. Update: @jjasghar has setup a nice app to create invitations to a new org. This will allow us to add anyone to the public org for OAuth and avoid the costs associated with the upstream ilab org. This also alleviates the manual problem of adding folks. Links: https://instructlab-public-inviter.1iyb6ex4la0n.us-east.codeengine.appdomain.cloud/ https://github.com/instructlab-public TODOS: [ ] Add the new repo to src/app/api/auth/[...nextauth]/route.ts as an ENV along with checking the upstream ilab org. [ ] While in the code, make the organization optional by checking if the .env exists. Devs should be able to start the repo and test OAuth in their own app without having to be a member of either org for test/dev. [ ] Demo this on Wednesday's UI community call and a triager call. @nerdalert I think we need to make some decisions here about the production release. cc: @lhawthorn @jjasghar @Gregory-Pereira @aevo98765 @instructlab/ui-maintainers
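A rough sketch of how the optional org check in route.ts could look (the env var names and helper shape here are hypothetical, for illustration only, not the actual UI codebase):

// Hypothetical sketch for src/app/api/auth/[...nextauth]/route.ts:
// orgs are read from optional env vars so local devs without membership
// in either org can still test OAuth.
const requiredOrgs = [
  process.env.ILAB_ORG,    // upstream instructlab org (optional)
  process.env.PUBLIC_ORG,  // new public inviter org (optional)
].filter((org): org is string => Boolean(org));

async function isAllowed(userOrgs: string[]): Promise<boolean> {
  // No orgs configured (e.g. local dev): allow everyone.
  if (requiredOrgs.length === 0) return true;
  return requiredOrgs.some((org) => userOrgs.includes(org));
}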
gharchive/issue
2024-06-24T15:03:43
2025-04-01T06:39:05.185104
{ "authors": [ "nerdalert", "vishnoianil" ], "repo": "instructlab/ui", "url": "https://github.com/instructlab/ui/issues/27", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2328326542
Sharvils/tf labels Description This PR adds labels to TF services for support matrix docs. Changes Made Adds labels in all services in the file tensorflow/docker-compose.yaml [x] The code follows the project's coding standards. [x] No Intel Internal IP is present within the changes. [x] The documentation has been updated to reflect any changes in functionality. Validation [x] I have tested any changes in container groups locally with test_runner.py with all existing tests passing, and I have added new tests where applicable. Friendly reminder @sharvil10 to not expose your idsid in this repository. :)
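For illustration, labels in a docker-compose service typically take this shape (the service name, label keys, and values below are assumptions, not the exact ones from this PR):

# Hypothetical excerpt of tensorflow/docker-compose.yaml
services:
  tf-base:
    labels:
      org.opencontainers.image.title: "intel-optimized-tensorflow"
      org.opencontainers.image.version: "2.15.0"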
gharchive/pull-request
2024-05-31T17:54:01
2025-04-01T06:39:05.290153
{ "authors": [ "sharvil10", "tylertitsworth" ], "repo": "intel/ai-containers", "url": "https://github.com/intel/ai-containers/pull/73", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
639030935
Nav Controller and Fragment Issue I am facing the below issue in my android app when I switch to a new activity. I checked all the onClick listeners and all seem okay... please help.

Process: com.example.myshop, PID: 8116 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.example.myshop/com.example.myshop.HomePage}: java.lang.IllegalStateException: Activity com.example.myshop.HomePage@cd35735 does not have a NavController set on 2131230904
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2666)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2727)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1478)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6121)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:889)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:779)
Caused by: java.lang.IllegalStateException: Activity com.example.myshop.HomePage@cd35735 does not have a NavController set on 2131230904
at androidx.navigation.Navigation.findNavController(Navigation.java:61)
at com.example.myshop.HomePage.onCreate(HomePage.java:60)
at android.app.Activity.performCreate(Activity.java:6723)
at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1119)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2619)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2727)
at android.app.ActivityThread.-wrap12(ActivityThread.java)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1478)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:154)
at android.app.ActivityThread.main(ActivityThread.java:6121)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:889)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:779)
Android Manifest:

<uses-permission android:name="android.permission.INTERNET" />
<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">
    <activity
        android:name=".HomePage"
        android:label="@string/title_activity_home_page"
        android:theme="@style/AppTheme.NoActionBar"></activity>
    <activity android:name=".LoginActivity" />
    <activity android:name=".HomeFragment" />
    <activity
        android:name=".MainActivity"
        android:theme="@style/SplashTheme">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.LAUNCHER" />
        </intent-filter>
    </activity>
</application>

Signup

package com.example.myshop;

import android.content.Intent;
import android.os.Bundle;
import android.text.Editable;
import android.text.TextUtils;
import android.text.TextWatcher;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.EditText;
import android.widget.FrameLayout;
import android.widget.TextView;
import android.widget.Toast;
import androidx.annotation.NonNull;
import androidx.annotation.Nullable;
import androidx.fragment.app.Fragment;
import androidx.fragment.app.FragmentTransaction;
import com.google.android.gms.tasks.OnCompleteListener;
import com.google.android.gms.tasks.Task;
import com.google.firebase.auth.AuthResult;
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.firestore.DocumentReference;
import com.google.firebase.firestore.FirebaseFirestore;
import java.util.HashMap;
import java.util.Map;

/** A simple {@link Fragment} subclass. */
public class SignupFragment extends Fragment {

    public SignupFragment() {
        // Required empty public constructor
    }

    private TextView alreadyHaveAnAccount;
    private FrameLayout parentFrameLayout;
    private EditText email;
    private EditText fullName;
    private EditText password;
    private EditText confirmPassword;
    private Button skipBtn;
    private Button signUpBtn;
    private FirebaseAuth firebaseAuth;
    private FirebaseFirestore firebaseFirestore;
    private String emailPattern = "[a-zA-Z0-9._-]+@[a-z]+.[a-z]+";

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        // Inflate the layout for this fragment
        View view = inflater.inflate(R.layout.fragment_signup, container, false);
        alreadyHaveAnAccount = view.findViewById(R.id.SignupSignin);
        parentFrameLayout = getActivity().findViewById(R.id.RegisterFrame);
        email = view.findViewById(R.id.SignupEmail);
        fullName = view.findViewById(R.id.SignupFullname);
        password = view.findViewById(R.id.SignupPassword);
        confirmPassword = view.findViewById(R.id.SignupConfirm);
        skipBtn = view.findViewById(R.id.Skip2);
        signUpBtn = view.findViewById(R.id.Signup);
        firebaseAuth = FirebaseAuth.getInstance();
        firebaseFirestore = FirebaseFirestore.getInstance();
        return view;
    }

    @Override
    public void onViewCreated(@NonNull View view, @Nullable Bundle savedInstanceState) {
        super.onViewCreated(view, savedInstanceState);
        alreadyHaveAnAccount.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                setFragment(new SigninFragment());
            }
        });
        skipBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent homeIntent = new Intent(getActivity(), HomePage.class);
                startActivity(homeIntent);
                getActivity().finish();
            }
        });
        email.addTextChangedListener(new TextWatcher() {
            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                checkInputs();
            }

            @Override
            public void afterTextChanged(Editable s) {
            }
        });
        fullName.addTextChangedListener(new TextWatcher() {
            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                checkInputs();
            }

            @Override
            public void afterTextChanged(Editable s) {
            }
        });
        password.addTextChangedListener(new TextWatcher() {
            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                checkInputs();
            }

            @Override
            public void afterTextChanged(Editable s) {
            }
        });
        confirmPassword.addTextChangedListener(new TextWatcher() {
            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                checkInputs();
            }

            @Override
            public void afterTextChanged(Editable s) {
            }
        });
        signUpBtn.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                checkEmailAndPassword();
            }
        });
    }

    private void setFragment(Fragment fragment) {
        FragmentTransaction fragmentTransaction = getActivity().getSupportFragmentManager().beginTransaction();
        fragmentTransaction.replace(parentFrameLayout.getId(), fragment);
        fragmentTransaction.commit();
    }

    private void checkInputs() {
        if (!TextUtils.isEmpty(email.getText())) {
            if (!TextUtils.isEmpty(fullName.getText())) {
                if (!TextUtils.isEmpty(password.getText()) && password.length() >= 8) {
                    if (!TextUtils.isEmpty(confirmPassword.getText())) {
                        signUpBtn.setEnabled(true);
                    } else {
                        signUpBtn.setEnabled(false);
                    }
                } else {
                    signUpBtn.setEnabled(false);
                }
            } else {
                signUpBtn.setEnabled(false);
            }
        } else {
            signUpBtn.setEnabled(false);
        }
    }

    private void checkEmailAndPassword() {
        if (email.getText().toString().matches(emailPattern)) {
            if (password.getText().toString().equals(confirmPassword.getText().toString())) {
                signUpBtn.setEnabled(false);
                firebaseAuth.createUserWithEmailAndPassword(email.getText().toString(), password.getText().toString())
                        .addOnCompleteListener(new OnCompleteListener<AuthResult>() {
                            @Override
                            public void onComplete(@NonNull Task<AuthResult> task) {
                                if (task.isSuccessful()) {
                                    Map<Object, String> userdata = new HashMap<>();
                                    userdata.put("fullname", fullName.getText().toString());
                                    firebaseFirestore.collection("USERS")
                                            .add(userdata)
                                            .addOnCompleteListener(new OnCompleteListener<DocumentReference>() {
                                                @Override
                                                public void onComplete(@NonNull Task<DocumentReference> task) {
                                                    if (task.isSuccessful()) {
                                                        Intent homeIntent = new Intent(getActivity(), HomePage.class);
                                                        startActivity(homeIntent);
                                                        getActivity().finish();
                                                    } else {
                                                        signUpBtn.setEnabled(true);
                                                        String error = task.getException().getMessage();
                                                        Toast.makeText(getActivity(), error, Toast.LENGTH_SHORT).show();
                                                    }
                                                }
                                            });
                                } else {
                                    signUpBtn.setEnabled(true);
                                    String error = task.getException().getMessage();
                                    Toast.makeText(getActivity(), error, Toast.LENGTH_SHORT).show();
                                }
                            }
                        });
            } else {
                confirmPassword.setError("Password doesn't matched");
            }
        } else {
            email.setError("Invalid Email");
        }
    }
}

Home Page

package com.example.myshop;

import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;
import android.widget.FrameLayout;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.widget.Toolbar;
import androidx.core.view.GravityCompat;
import androidx.drawerlayout.widget.DrawerLayout;
import androidx.fragment.app.Fragment;
import androidx.fragment.app.FragmentTransaction;
import androidx.navigation.Navigation;
import androidx.navigation.ui.AppBarConfiguration;
import androidx.navigation.ui.NavigationUI;
import com.google.android.material.navigation.NavigationView;

public class HomePage extends AppCompatActivity {

    private AppBarConfiguration mAppBarConfiguration;
    private FrameLayout frameLayout;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home_page);
        Toolbar toolbar = findViewById(R.id.toolbar);
        setSupportActionBar(toolbar);
        getSupportActionBar().setDisplayShowTitleEnabled(false);
        /*
        FloatingActionButton fab = (FloatingActionButton) findViewById(R.id.fab);
        fab.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Snackbar.make(view, "Replace with your own action", Snackbar.LENGTH_LONG)
                        .setAction("Action", null).show();
            }
        });
        */
        DrawerLayout drawer = findViewById(R.id.drawer_layout);
        NavigationView navigationView = findViewById(R.id.nav_view);
        navigationView.getMenu().getItem(0).setChecked(true);
        frameLayout = findViewById(R.id.homeframelayout);
        setFragment(new HomeFragment());
        // Passing each menu ID as a set of Ids because each
        // menu should be considered as top level destinations.
        mAppBarConfiguration = new AppBarConfiguration.Builder(
                R.id.nav_home, R.id.nav_gallery, R.id.nav_slideshow)
                .setDrawerLayout(drawer)
                .build();
        androidx.navigation.NavController navController = Navigation.findNavController(this, R.id.nav_view);
        NavigationUI.setupActionBarWithNavController(this, navController, mAppBarConfiguration);
        NavigationUI.setupWithNavController(navigationView, navController);
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        // Inflate the menu; this adds items to the action bar if it is present.
        getMenuInflater().inflate(R.menu.home_page, menu);
        return true;
    }

    @Override
    public boolean onOptionsItemSelected(MenuItem item) {
        int id = item.getItemId();
        if (id == R.id.searchicon) {
            return true;
        } else if (id == R.id.notificationicon) {
            return true;
        } else if (id == R.id.carticon) {
            return true;
        }
        return super.onOptionsItemSelected(item);
    }

    public boolean onNavigationItemSelected(MenuItem item) {
        int id = item.getItemId();
        if (id == R.id.myshop) {
        } else if (id == R.id.myorder) {
        } else if (id == R.id.mycart) {
        } else if (id == R.id.myreward) {
        } else if (id == R.id.mywishlist) {
        } else if (id == R.id.myaccount) {
        } else if (id == R.id.signout) {
        }
        DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
        drawer.closeDrawer(GravityCompat.START);
        return true;
    }

    @Override
    public boolean onSupportNavigateUp() {
        androidx.navigation.NavController navController = Navigation.findNavController(this, R.id.nav_view);
        return NavigationUI.navigateUp(navController, mAppBarConfiguration)
                || super.onSupportNavigateUp();
    }

    private void setFragment(Fragment fragment) {
        FragmentTransaction fragmentTransaction = getSupportFragmentManager().beginTransaction();
        fragmentTransaction.replace(frameLayout.getId(), fragment);
        fragmentTransaction.commit();
    }
}
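For readers hitting the same crash: the stack trace points at Navigation.findNavController(this, R.id.nav_view) in HomePage.onCreate (HomePage.java:60). R.id.nav_view is the NavigationView, which never hosts a NavController; findNavController must be given the view of the NavHostFragment. A minimal sketch of the usual fix follows; R.id.nav_host_fragment is an assumed id and must match whatever activity_home_page.xml actually declares for its NavHostFragment.

// Sketch only: "R.id.nav_host_fragment" is a hypothetical id for the
// NavHostFragment in activity_home_page.xml, not taken from the reporter's code.
androidx.navigation.NavController navController =
        Navigation.findNavController(this, R.id.nav_host_fragment);
NavigationUI.setupActionBarWithNavController(this, navController, mAppBarConfiguration);
// The NavigationView is only wired *to* the controller, not used to look it up:
NavigationUI.setupWithNavController(navigationView, navController);

The same lookup in onSupportNavigateUp would need the identical change, since it also passes R.id.nav_view.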
gharchive/issue
2020-06-15T17:51:20
2025-04-01T06:39:05.393340
{ "authors": [ "Abhi83020", "wayne-ma" ], "repo": "intel/haxm", "url": "https://github.com/intel/haxm/issues/303", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2720502905
Revert windows-related changes for void *params[] in third_party/nvidia/backend/driver.py So it's more like what other backends have. If this was a change from an old pull request (in Triton), I guess we can roll it back. @gshimansky please take a look. @gshimansky thanks for review!
gharchive/pull-request
2024-12-05T14:01:30
2025-04-01T06:39:05.395694
{ "authors": [ "anmyachev" ], "repo": "intel/intel-xpu-backend-for-triton", "url": "https://github.com/intel/intel-xpu-backend-for-triton/pull/2939", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2322569163
[BesTLA] First-token inference optimization Type of Change Higher performance for comp_int8 group=-1 kernels Improved vector mul and add. Reduce template combinations, and speed up the compilation process Performance change llama2-7B, int4, group=-1, sym, comp_int8: Alderlake 16 cores PR: model_print_timings: load time = 3765.44 ms model_print_timings: sample time = 2.79 ms / 16 runs ( 0.17 ms per token) model_print_timings: prompt eval time = 3745.27 ms / 1024 tokens ( 3.66 ms per token) model_print_timings: eval time = 1086.39 ms / 15 runs ( 72.43 ms per token) model_print_timings: total time = 4861.68 ms ========== eval time log of each prediction ========== prediction 0, time: 3745.27ms prediction 1, time: 72.77ms prediction 2, time: 72.35ms prediction 3, time: 72.35ms prediction 4, time: 72.31ms Main: model_print_timings: load time = 7380.84 ms model_print_timings: sample time = 3.81 ms / 16 runs ( 0.24 ms per token) model_print_timings: prompt eval time = 5029.46 ms / 1024 tokens ( 4.91 ms per token) model_print_timings: eval time = 1086.52 ms / 15 runs ( 72.43 ms per token) model_print_timings: total time = 8486.67 ms ========== eval time log of each prediction ========== prediction 0, time: 5029.46ms prediction 1, time: 72.87ms prediction 2, time: 72.38ms prediction 3, time: 72.17ms prediction 4, time: 72.36ms sapphire rapids 56 cores: PR: model_print_timings: load time = 385.17 ms model_print_timings: sample time = 8.08 ms / 16 runs ( 0.50 ms per token) model_print_timings: prompt eval time = 382.27 ms / 1023 tokens ( 0.37 ms per token) model_print_timings: eval time = 338.05 ms / 15 runs ( 22.54 ms per token) model_print_timings: total time = 735.46 ms ========== eval time log of each prediction ========== prediction 0, time: 382.27ms prediction 1, time: 23.75ms prediction 2, time: 22.91ms prediction 3, time: 22.69ms prediction 4, time: 22.63ms model_print_timings: load time = 740.27 ms model_print_timings: sample time = 8.00 ms / 16 runs ( 0.50 ms per token) model_print_timings: prompt eval time = 734.54 ms / 2024 tokens ( 0.36 ms per token) model_print_timings: eval time = 385.63 ms / 15 runs ( 25.71 ms per token) model_print_timings: total time = 1137.81 ms ========== eval time log of each prediction ========== prediction 0, time: 734.54ms prediction 1, time: 27.92ms prediction 2, time: 26.25ms prediction 3, time: 25.76ms prediction 4, time: 25.60ms Main: model_print_timings: sample time = 8.59 ms / 16 runs ( 0.54 ms per token) model_print_timings: prompt eval time = 407.88 ms / 1023 tokens ( 0.40 ms per token) model_print_timings: eval time = 348.30 ms / 15 runs ( 23.22 ms per token) model_print_timings: total time = 771.64 ms ========== eval time log of each prediction ========== prediction 0, time: 407.88ms prediction 1, time: 24.32ms prediction 2, time: 23.61ms prediction 3, time: 23.46ms prediction 4, time: 23.34ms model_print_timings: load time = 865.02 ms model_print_timings: sample time = 8.57 ms / 16 runs ( 0.54 ms per token) model_print_timings: prompt eval time = 859.30 ms / 2024 tokens ( 0.42 ms per token) model_print_timings: eval time = 386.94 ms / 15 runs ( 25.80 ms per token) model_print_timings: total time = 1264.64 ms ========== eval time log of each prediction ========== prediction 0, time: 859.30ms prediction 1, time: 27.38ms prediction 2, time: 27.00ms prediction 3, time: 26.05ms prediction 4, time: 25.91ms Mistral-7B, int4, group=-1, sym, comp_int8: Cascade lake 20 cores PR: model_print_timings: load time = 2572.34 ms model_print_timings: sample time = 
9.15 ms / 16 runs ( 0.57 ms per token)
model_print_timings: prompt eval time = 2571.85 ms / 1008 tokens ( 2.55 ms per token)
model_print_timings: eval time = 711.59 ms / 15 runs ( 47.44 ms per token)
model_print_timings: total time = 3298.07 ms
========== eval time log of each prediction ==========
prediction 0, time: 2571.85ms
prediction 1, time: 48.90ms
prediction 2, time: 47.35ms
prediction 3, time: 47.37ms
prediction 4, time: 47.35ms
Main:
model_print_timings: load time = 2933.11 ms
model_print_timings: sample time = 9.31 ms / 16 runs ( 0.58 ms per token)
model_print_timings: prompt eval time = 2932.60 ms / 1008 tokens ( 2.91 ms per token)
model_print_timings: eval time = 708.29 ms / 15 runs ( 47.22 ms per token)
model_print_timings: total time = 3655.52 ms
========== eval time log of each prediction ==========
prediction 0, time: 2932.60ms
prediction 1, time: 48.08ms
prediction 2, time: 47.02ms
prediction 3, time: 46.99ms
prediction 4, time: 47.08ms

There are also add and mul in custom::epilogue. Should we keep just one add and mul?

> There are also add and mul in custom::epilogue. Should we keep just one add and mul?

custom::epilogue uses the reference code; those paths should call kernel::wrapper::Add and Mul.
The Windows CI server can't connect to the proxy server, so the Windows build is verified on a local Windows machine.
gharchive/pull-request
2024-05-29T07:24:03
2025-04-01T06:39:05.423113
{ "authors": [ "luoyu-intel", "yuchengliu1" ], "repo": "intel/neural-speed", "url": "https://github.com/intel/neural-speed/pull/271", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
833841757
Use kroki to create svg
Using only mermaid will cause large white boxes around the diagram. Using Kroki to pre-create a mermaid image helps to avoid this.
Jenkins please retry a build
gharchive/pull-request
2021-03-17T14:35:44
2025-04-01T06:39:05.424185
{ "authors": [ "itrushkin", "maradionov" ], "repo": "intel/openfl", "url": "https://github.com/intel/openfl/pull/36", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
390805489
question: Cannot create pmem-csi volume
When I tried to create a pmem-csi PVC, it seems it does not have the capacity to provide it. My k8s cluster is running with 1 master and 3 workers; only 1 worker has the pmem-csi label. This is the error I get on PVC creation:
failed to provision volume with StorageClass "pmem-csi-sc": rpc error: code = Unavailable desc = No node found with 8589934592 capacaity
This is the relevant log from the pod running on the node with persistent memory enabled:
. . .
I1213 17:46:31.993469 1 glog.go:79] total used: 0
I1213 17:46:33.070068 1 glog.go:79] CheckVG: Bus: ndbus0
I1213 17:46:33.070457 1 glog.go:79] Region: region1
I1213 17:46:33.070465 1 glog.go:79] NsMode: fsdax
I1213 17:46:33.071043 1 glog.go:79] No active namespaces in region region1
I1213 17:46:33.071058 1 glog.go:79] NsMode: sector
I1213 17:46:33.071121 1 glog.go:79] No active namespaces in region region1
I1213 17:46:33.071128 1 glog.go:79] Region: region0
I1213 17:46:33.071133 1 glog.go:79] NsMode: fsdax
I1213 17:46:33.071724 1 glog.go:79] No active namespaces in region region0
I1213 17:46:33.071732 1 glog.go:79] NsMode: sector
I1213 17:46:33.071793 1 glog.go:79] No active namespaces in region region0
I1213 17:46:37.382378 1 main.go:138] Version: v0.4.1-0-gb3ef1f69a
Here's the command used to start my VM:
qemu-system-x86_64 -drive if=virtio,file=/data/cow_image.qcow2,format=qcow2 \
 -drive file=/data/seed.iso,if=virtio,format=raw -nodefaults \
 -device virtio-balloon-pci,id=balloon0 -realtime mlock=off -msg timestamp=on \
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
 -serial stdio -enable-kvm -machine accel=kvm,usb=off -vga qxl \
 -display none -m 4G,slots=4,maxmem=32G -smp 4 -machine pc,accel=kvm,nvdimm=on \
 -device virtio-net-pci,netdev=net0,mac=02:42:ac:11:00:09 -netdev tap,id=net0,vhost=on,fd=3 \
 -object memory-backend-file,id=mem1,share,mem-path=/tmp/f27nvdimm0,size=4G -device nvdimm,memdev=mem1,id=nv1,label-size=2M \
 -object memory-backend-file,id=mem2,share,mem-path=/tmp/f27nvdimm1,size=4G -device nvdimm,memdev=mem2,id=nv2,label-size=2M -vnc unix:/data/vnc
I ran this in order to enable the NVDIMM regions:
ndctl disable-region all
ndctl init-labels all
ndctl enable-region all
Is there something missing to run? I saw some documentation about the usage of https://github.com/intel/ipmctl; is it only for the cases when I have the Optane HW?

> When I tried to create a pmem-csi PVC, it seems it does not have the capacity to provide it. [...]
> Here's the command used to start my VM: [...]

This is showing that your qemu is configured with two 4 GB nvdimms, so with this configuration you can use at most a ~4 GB volume. Can you please try creating a PVC with less than 4 GB?

> Is there something missing to run? I saw some documentation about the usage of https://github.com/intel/ipmctl; is it only for the cases when I have the Optane HW?

Yes, the ipmctl tool is used to initialize physical DIMMs.
I think before that, this scenario is blocked by the fact that the default namespace size is 32 GB, which means the driver fails to create even the first namespace. In the emulated case it makes sense to use a smaller-than-default namespace size via the option added to the pmem-csi driver, for example -namespacesize 2, so that in the yaml file this config becomes, for example:
image: REGISTRY:5000/pmem-ns-init:canary
args: [ "-v=5", "-namespacesize=2" ]
The 32 GB default size was selected with real HW in mind, where a device will be 128 G or more. What the good/optimal value is remains a bit open; there are also some thoughts in issue #58. In short, your emulated NVDIMM size and the value given to -namespacesize should be balanced such that at least one, better a few, namespaces can be created inside one NVDIMM. Note also there is some overhead, so with NVDIMM=8G and namespacesize=4G you will get one namespace, not two, and with such round numbers the unused overhead is close to 4G.
Another hint: use ndctl on the host to list the actual namespace state; it's a bit cleaner to examine it there compared to the pmem-csi driver logs.
@avalluri and @okartau thanks for the help, I created a volume of 1Gb. As far as I can see, my regions seem to have enough room to create a 1G volume.
ndctl list -R [ { "dev":"region1", "size":4292870144, "available_size":4292870144, "type":"pmem", "numa_node":0, "iset_id":10248195696374871, "persistence_domain":"unknown" }, { "dev":"region0", "size":4292870144, "available_size":4292870144, "type":"pmem", "numa_node":0, "iset_id":10248187106440278, "persistence_domain":"unknown" } ] I'm still getting this error: I1213 23:22:06.862312 1 glog.go:79] Enabling volume access mode: SINGLE_NODE_WRITER I1213 23:22:06.862842 1 glog.go:79] NewPmemDeviceManagerLVM: Bus: ndbus0 I1213 23:22:06.863214 1 glog.go:79] NewPmemDeviceManagerLVM: Region: region1 I1213 23:22:06.863230 1 glog.go:58] Executing: vgs ndbus0region1fsdax I1213 23:22:06.918079 1 glog.go:58] Output: Volume group "ndbus0region1fsdax" not found Cannot process volume group ndbus0region1fsdax I1213 23:22:06.918145 1 glog.go:79] NewPmemDeviceManagerLVM: VG ndbus0region1fsdax non-existent, skip I1213 23:22:06.918202 1 glog.go:58] Executing: vgs ndbus0region1sector I1213 23:22:06.959001 1 glog.go:58] Output: Volume group "ndbus0region1sector" not found Cannot process volume group ndbus0region1sector I1213 23:22:06.959053 1 glog.go:79] NewPmemDeviceManagerLVM: VG ndbus0region1sector non-existent, skip I1213 23:22:06.959101 1 glog.go:79] NewPmemDeviceManagerLVM: Region: region0 I1213 23:22:06.959130 1 glog.go:58] Executing: vgs ndbus0region0fsdax I1213 23:22:07.031182 1 glog.go:58] Output: Volume group "ndbus0region0fsdax" not found Cannot process volume group ndbus0region0fsdax I1213 23:22:07.031259 1 glog.go:79] NewPmemDeviceManagerLVM: VG ndbus0region0fsdax non-existent, skip I1213 23:22:07.031298 1 glog.go:58] Executing: vgs ndbus0region0sector I1213 23:22:07.099042 1 glog.go:58] Output: Volume group "ndbus0region0sector" not found Cannot process volume group ndbus0region0sector I1213 23:22:07.099150 1 glog.go:79] NewPmemDeviceManagerLVM: VG ndbus0region0sector non-existent, skip I1213 23:22:07.099255 1 glog.go:79] Enabling node service capability: STAGE_UNSTAGE_VOLUME I1213 23:22:07.099602 1 glog.go:79] Enabling controller service capability: PUBLISH_UNPUBLISH_VOLUME I1213 23:22:07.101561 1 glog.go:79] Listening for connections on address: &net.UnixAddr{Name:"/csi/csi.sock", Net:"unix"} I1213 23:22:07.101953 1 glog.go:58] Connecting to 10.97.108.156:10000 Connecting to Registry at : tcp://10.97.108.156:10000 And my VMs setup is: qemu-system-x86_64 -drive if=virtio,file=/data/cow_image.qcow2,format=qcow2 -drive file=/data/seed.iso,if=virtio,format=raw -nodefaults -device virtio-balloon-pci,id=balloon0 -realtime mlock=off -msg timestamp=on -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -serial stdio -enable-kvm -machine accel=kvm,usb=off -vga qxl -display none -m 8G,slots=4,maxmem=16G -smp 4 -machine pc,accel=kvm,nvdimm=on -device virtio-net-pci,netdev=net0,mac=02:42:ac:11:00:09 -netdev tap,id=net0,vhost=on,fd=3 -object memory-backend-file,id=mem1,share,mem-path=/tmp/f27nvdimm0,size=4G -device nvdimm,memdev=mem1,id=nv1,label-size=2M -object memory-backend-file,id=mem2,share,mem-path=/tmp/f27nvdimm1,size=4G -device nvdimm,memdev=mem2,id=nv2,label-size=2M -vnc unix:/data/vnc @okartau I'm actually running it in K8s, I'm following this part of the main README https://github.com/intel/pmem-CSI#run-as-kubernetes-deployment. As far I understand pmem-ns-init and pmem-vgm are running on init containers in pmem-csi containers that are running on each container. 
hm, a re-read of the start of the thread shows you are attempting a PVC, so it means you have a Kubernetes context and my guidance about steps like run_driver is off-topic. But the question remains: what's your deployment scheme, and has pmem-vgm been run? In the Kubernetes case we have a deployment example documented in the README, using manifests in deploy/; please check the recently updated content there suitable for k8s v1.12. The pmem-csi.yaml has entries taking care of starting all 3 stages. In the k8s logs you can examine the stage logs separately; the pmem-vgm log should show creation (or re-use) of LVM volume groups. It is also helpful to re-check using vgs on the host side; you may need to use vgs --foreign, as the vg ops were running inside a container.
Yes, I'm using the k8s v1.12 yml file from deploy. Here's the full log of the pod/pmem-csi-xxx:
kubectl logs pod/pmem-csi-86f7h --all-containers
http://pastebin.intel.com:8080/sijetejose
For some reason, this error keeps appearing:
Executing: vgs ndbus0region0fsdax
I1213 23:54:25.562723 1 glog.go:58] Output: Volume group "ndbus0region0fsdax" not found
I'm not sure if that may be the issue. Also, I ran the vgs command and it shows as follows:
root@k8s-ubuntu-pmem-worker1:/home/ubuntu# vgs --foreign
root@k8s-ubuntu-pmem-worker1:/home/ubuntu#
I1214 07:14:02.434865 1 glog.go:79] Configured namespacesize; 2 GB I1214 07:14:02.435347 1 glog.go:79] Create fsdax-namespaces in region1, allowed 100 %: total : 4292870144 avail : 4292870144 can use : 4292870144 I1214 07:14:02.435882 1 glog.go:79] total used: 0 I1214 07:14:02.435903 1 glog.go:79] 1 fsdax-namespaces of size 2147483648 possible in region region1 I1214 07:14:02.435910 1 glog.go:79] Creating namespace 0 I1214 07:14:02.439541 1 glog.go:79] setting namespace sector size ...: 0 I1214 07:14:02.442212 1 glog.go:79] setting pfn libndctl: ndctl_pfn_enable: pfn1.0: failed to enable W1214 07:14:02.443641 1 glog.go:84] Failed to create namespace:pfn: failed to enable I1214 07:14:02.443672 1 glog.go:79] Create sector-namespaces in region1, allowed 0 %: total : 4292870144 avail : 4292870144 can use : 0 I1214 07:14:02.443909 1 glog.go:79] total used: 0 I1214 07:14:02.443936 1 glog.go:79] Create fsdax-namespaces in region0, allowed 100 %: total : 4292870144 avail : 4292870144 can use : 4292870144 I1214 07:14:02.444540 1 glog.go:79] total used: 0 I1214 07:14:02.444558 1 glog.go:79] 1 fsdax-namespaces of size 2147483648 possible in region region0 I1214 07:14:02.444563 1 glog.go:79] Creating namespace 0 I1214 07:14:02.447361 1 glog.go:79] setting namespace sector size ...: 0 I1214 07:14:02.448955 1 glog.go:79] setting pfn libndctl: ndctl_pfn_enable: pfn0.0: failed to enable W1214 07:14:02.450249 1 glog.go:84] Failed to create namespace:pfn: failed to enable I1214 07:14:02.450274 1 glog.go:79] Create sector-namespaces in region0, allowed 0 %: total : 4292870144 avail : 4292870144 can use : 0 I1214 07:14:02.450429 1 glog.go:79] total used: 0 I1214 07:14:02.747019 1 glog.go:79] CheckVG: Bus: ndbus0 I1214 07:14:02.747613 1 glog.go:79] Region: region1 I1214 07:14:02.747624 1 glog.go:79] NsMode: fsdax I1214 07:14:02.748471 1 glog.go:79] No active namespaces in region region1 I1214 07:14:02.748491 1 glog.go:79] NsMode: sector I1214 07:14:02.748572 1 glog.go:79] No active namespaces in region region1 I1214 07:14:02.748581 1 glog.go:79] Region: region0 I1214 07:14:02.748587 1 glog.go:79] NsMode: fsdax I1214 07:14:02.749262 1 glog.go:79] No active namespaces in region region0 I1214 07:14:02.749271 1 glog.go:79] NsMode: sector I1214 07:14:02.749361 1 glog.go:79] No active namespaces in region region0 I1214 07:14:09.589779 1 main.go:138] Version: v0.4.1-0-gb3ef1f69 I1214 07:14:09.589934 1 main.go:145] Attempting to open a gRPC connection with: "/csi/csi.sock" I1214 07:14:09.589967 1 connection.go:68] Connecting to /csi/csi.sock I1214 07:14:09.591170 1 connection.go:95] Still trying, connection is CONNECTING I1214 07:14:09.591985 1 connection.go:95] Still trying, connection is TRANSIENT_FAILURE I1214 07:14:10.591444 1 connection.go:95] Still trying, connection is CONNECTING I1214 07:14:10.591504 1 connection.go:92] Connected I1214 07:14:10.591513 1 main.go:153] Calling CSI driver to discover driver name. I1214 07:14:10.591529 1 connection.go:136] GRPC call: /csi.v0.Identity/GetPluginInfo I1214 07:14:10.591538 1 connection.go:137] GRPC request: I1214 07:14:10.592031 1 connection.go:139] GRPC response: name:"pmem-csi" vendor_version:"0.0.1" I1214 07:14:10.592074 1 connection.go:140] GRPC error: <nil> I1214 07:14:10.592085 1 main.go:161] CSI driver name: "pmem-csi" I1214 07:14:10.592092 1 main.go:165] Loading kubeconfig. I1214 07:14:10.592295 1 node_register.go:55] Calling CSI driver to discover node ID. 
I1214 07:14:10.592311 1 connection.go:136] GRPC call: /csi.v0.Node/NodeGetId I1214 07:14:10.592317 1 connection.go:137] GRPC request: I1214 07:14:10.592706 1 connection.go:139] GRPC response: node_id:"k8s-ubuntu-pmem-worker1" I1214 07:14:10.592731 1 connection.go:140] GRPC error: <nil> I1214 07:14:10.592752 1 node_register.go:63] CSI driver node ID: "k8s-ubuntu-pmem-worker1" I1214 07:14:10.592803 1 node_register.go:86] Starting Registration Server at: /registration/pmem-csi-reg.sock I1214 07:14:10.592883 1 node_register.go:93] Registration Server started at: /registration/pmem-csi-reg.sock I1214 07:14:10.597913 1 main.go:111] Received GetInfo call: &InfoRequest{} I1214 07:14:10.605026 1 main.go:121] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} When I try to create the pvc I'm still getting: Warning ProvisioningFailed 14s (x2 over 14s) pmem-csi_pmem-csi-controller-0_8d858b22-ff70-11e8-838d-0a580af40303 failed to provision volume with StorageClass "pmem-csi-sc": rpc error: code = Unavailable desc = No node found with 1073741824 capacaity Now we hit next issues related to emulated NVDIMM. Failed to create namespace:pfn: failed to enable We have seen series of those. Emulated NVDIMM in general works better if you only enable one device. Having 2 devices was my own idea to get emulated env closer to real HW, and that got documented in READMEs, but it actually causes some extra trouble. You can make success chances higher by removing 2nd emulated device, and use one larger. I think I need to update README stating the same. There is BTW recently added code in test/ which automates qemu VMs creation and k8s cluster setup., using ClearLinux targets. In such a system NVDIMM (one device) emulation has been quite stable. I created another PR #109 to stop advertising dual NVDIMM in emulation, which seems to create more trouble than gain @okartau, quick question, do you know if it has been tested in other distros different than ClearLinux? I can tell where we (dev team of 2 so far been active with that) have been running pmem-csi: VMs with emulated NVDIMM (as described in READMEs): Ubuntu 18.04.1 VMs created by scripts in test/: Clearlinux, various versions Phys host with real Pmem devices: Clearlinux Phys host with real Pmem devices: Fedora 28 As we had not systematic testing set up, this has not been automated, but it has been verified on those via manual run. As most(all?) functioning happens in containers (if not running stand-alone binaries), it should not matter too much what is the distro, if it runs Docker
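To make the size arithmetic from this thread concrete, here is a hedged console sketch of how one could verify it directly on the host with ndctl; the region/size values are the ones reported above, and whether create-namespace succeeds depends on the same metadata overhead okartau describes (roughly 4 MiB per namespace):

# Sanity check on the host, using the sizes reported in this thread:
$ ndctl list -R          # each region: available_size 4292870144 (~4 GiB)
# A 4 GiB namespace (4294967296 B) cannot fit in 4292870144 B of free space,
# but a 2 GiB one (2147483648 B + ~4 MiB metadata) can, exactly once:
$ ndctl create-namespace --region=region0 --mode=fsdax --size=2G
$ ndctl list -N          # confirm the namespace actually got created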
gharchive/issue
2018-12-13T18:32:38
2025-04-01T06:39:05.447107
{ "authors": [ "avalluri", "obedmr", "okartau" ], "repo": "intel/pmem-CSI", "url": "https://github.com/intel/pmem-CSI/issues/107", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2021686424
QPL filter for varied input (> 1 byte, string, float, etc)
Hi, I had a question regarding filtering in QPL. The example code for scan shows how to filter single-byte data (i.e., each element in the source vector is uint8_t).

qpl_job *job = reinterpret_cast<qpl_job *>(job_buffer.get());
job->next_in_ptr = source.data();
job->available_in = static_cast<uint32_t>(source.size());
job->next_out_ptr = destination.data();
job->available_out = static_cast<uint32_t>(destination.size());
job->op = qpl_op_scan_range;
job->src1_bit_width = input_vector_width;
job->num_input_elements = static_cast<uint32_t>(source.size());
job->out_bit_width = qpl_ow_32; // set output bit width
job->param_low = lower_boundary;
job->param_high = upper_boundary;

For instance, job->next_in_ptr expects a pointer to a vector of type uint8_t (range 0 to 255 in decimal), so using a vector of type uint32_t for source gives a compilation error. On setting the input vector width to 32 bits and casting the input/output pointers to type uint8_t, the code compiles but the QPL filter operation returns an error with status code 232. Is it possible to run filtering on multibyte data (e.g. uint32_t) and on data like date/time, strings, floats, etc.? It would be great if you could share a simple example.

> On setting the input vector width to 32 bits and casting the input/output pointers to type uint8_t, the code compiles but the QPL filter operation returns an error with status code 232.

Hi @raunaks13, this seems like a correct approach to me. The input/output buffers are simply raw buffers of data of type uint8_t *, but you can use a different bit width to specify the element size. Could you make sure you also updated the number of input elements appropriately when you changed the input bit width? If this still doesn't work, please provide a reproducer for us to work with.
Closing as no response from the user; please feel free to re-open if needed.
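For reference, a minimal C++ sketch of what the maintainer describes: the input stays a raw byte buffer, while src1_bit_width and num_input_elements describe it as 32-bit elements. Note that available_in is a byte count while num_input_elements is an element count; mixing those up is the most likely cause of an error status after changing the bit width. This sketch uses only the job fields shown above plus the standard qpl_get_job_size / qpl_init_job / qpl_execute_job entry points; the boundary values and buffer sizing are illustrative, not from the reporter's code.

#include <cstdint>
#include <vector>
#include "qpl/qpl.h"

// Sketch: scan a vector of uint32_t values for elements in [40, 1000].
int main() {
    std::vector<uint32_t> source = {10, 500, 42, 99999, 7};
    std::vector<uint8_t>  destination(source.size() * sizeof(uint32_t), 0);

    uint32_t job_size = 0;
    qpl_get_job_size(qpl_path_software, &job_size);
    std::vector<uint8_t> job_buffer(job_size);
    qpl_job *job = reinterpret_cast<qpl_job *>(job_buffer.data());
    qpl_init_job(qpl_path_software, job);

    job->op                 = qpl_op_scan_range;
    job->next_in_ptr        = reinterpret_cast<uint8_t *>(source.data()); // raw bytes in
    job->available_in       = static_cast<uint32_t>(source.size() * sizeof(uint32_t)); // bytes
    job->next_out_ptr       = destination.data();
    job->available_out      = static_cast<uint32_t>(destination.size());
    job->src1_bit_width     = 32;                                   // 32-bit elements
    job->num_input_elements = static_cast<uint32_t>(source.size()); // elements, not bytes
    job->out_bit_width      = qpl_ow_32;
    job->param_low          = 40;
    job->param_high         = 1000;

    qpl_status status = qpl_execute_job(job);
    return status == QPL_STS_OK ? 0 : static_cast<int>(status);
}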
gharchive/issue
2023-12-01T22:45:01
2025-04-01T06:39:05.452427
{ "authors": [ "mzhukova", "raunaks13" ], "repo": "intel/qpl", "url": "https://github.com/intel/qpl/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1588258813
fix: correct emoji parsing and component -> modal About This pull request is about: Adding the ModalMixin to the ComponentContext Similarly to #1279 , emoji conversion does not work for emoji in buttons Add the modal_callback function in all init files Checklist [x] The pre-commit code linter has been run over all edited files to ensure the code is linted. [x] I've ensured the change(s) work on 3.8.6 and higher. [ ] I have added the versionadded, versionchanged and deprecated to any new or changed user-facing function I committed. Pull-Request specification I've made this pull request: (check all that apply) [ ] For the documentation [x] To add a new feature [ ] As a general enhancement [ ] As a refactor of the library/the library's code [x] To fix an existing bug [ ] To resolve #ISSUENUMBER This is: [ ] A breaking change I apologize I didn't catch this in my testing Renamed PR. I try to avoid "fix: fix" or "docs: document" styled commits
gharchive/pull-request
2023-02-16T19:39:27
2025-04-01T06:39:05.522760
{ "authors": [ "Wolfhound905", "i0bs", "mAxYoLo01" ], "repo": "interactions-py/interactions.py", "url": "https://github.com/interactions-py/interactions.py/pull/1283", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
157476656
Showcase Implementation Different showcases to get new users started more quickly. These showcases could for example show catalog used as design handover, used over multiple phases or as a lab notebook. You could also link to actual projects using Catalog. The more, the better. In progress, but as a separate effort 😄
gharchive/issue
2016-05-30T10:34:29
2025-04-01T06:39:05.525997
{ "authors": [ "bebraw", "bldng", "herrstucki" ], "repo": "interactivethings/catalog", "url": "https://github.com/interactivethings/catalog/issues/117", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
402461065
Custom text for headings
This adds the ability to set custom text to be used for headings, either for all headings or per heading when using the array of objects in the headings array. Also fixes issue #454
Is this project abandoned? :(
gharchive/pull-request
2019-01-23T22:36:59
2025-04-01T06:39:05.527011
{ "authors": [ "equinusocio", "matthewroach" ], "repo": "interactivethings/catalog", "url": "https://github.com/interactivethings/catalog/pull/474", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
312863064
Styling/coding standards
Which coding standards/PEP guides are you planning to adhere to? As this is a Python library aimed at Python users (presumably), the PEPs would be a good place to start.
Also, when implementing the classes there are a couple of options regarding the OOP interface as I see it:

1. Literal translations, so getPos player becomes player.getPos(), or player.get_pos().
2. The @property decorator to wrap the calls to the DLL for the user, so that getPos player becomes player.pos, and player setPos [a,b,c] becomes player.pos = [a,b,c].
3. Both of the above, having one approach call the other, for example. This illustrates both approaches, and is therefore less pythonic ("one way to do something").

class RV_Object(object):
    def __init__(self):
        # todo
        pass

    @property
    def pos(self) -> tuple:
        pass

    @pos.setter
    def pos(self, pos: tuple) -> None:
        pass

    def get_pos(self) -> tuple:
        pass  # todo

    def set_pos(self, pos: tuple) -> None:
        pass  # todo

Using PEP8 would be a good start ;). I'd suggest doing as "everyone else does" and increasing the line size limit to 120 chars (with 1080p and even 4k monitors nowadays).
I'm sitting on the fence regarding camelCase vs snake_case notation. I'm used to snake but I also see the benefit of sticking to the original SQF function names. Deep inside I'd prefer to stick with the original naming (less confusion), but I haven't done enough Arma modding (nor even used Intercept much) to speak authoritatively about this matter.
I have some boilerplate code here: https://github.com/Tirpitz93/intercept-python/tree/python_classes.
gharchive/issue
2018-04-10T10:27:12
2025-04-01T06:39:05.531614
{ "authors": [ "Tirpitz93", "overfl0" ], "repo": "intercept/intercept-python", "url": "https://github.com/intercept/intercept-python/issues/3", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1517284256
APY does not update/show wrong value (Borrow Markets & My Borrow Positions)
Under My Borrow Positions as well as Borrow/Lend Markets, the APY of the token does not update to the correct value when loans or deposits change.
To Reproduce
Steps to reproduce the behavior:
Go to Lending
Click on 'KINT' (or any other currency) under 'Borrow Markets' and borrow amount X
Expected behavior
Show the correct APY according to this sheet: https://docs.google.com/spreadsheets/d/1R2ZzhDmAHW8_N64Mifoo82m2pMDLLY9mX57fvgiV87w/edit#gid=308038421 and the following parameters: https://docs.google.com/spreadsheets/d/1U_iFh8Bx4pRNOkjYFEL0jwOlb1KqKgnjAJXkFCCx1M0/edit
Formulas (for reference): https://www.notion.so/interlay/InterLend-Economics-a86ec57da6c1462abed18895a8329ce9#9aa5436994ad41bd91a116723252da1c
Screenshots
Desktop (please complete the following information):
Linux (Fedora 35)
Firefox
Hello @philippb90, if you don't mind, I would like to work on this, but I don't have access to the full information. Even on the testnet I had to bypass the conditional check to view the lending page. What do I need to do to be able to work on this? I can provide my email if you mean to grant me access.
Hi @bolajahmad - thanks for your interest in contributing, much appreciated :) one of our devs has picked this task up, but I've updated the README file with all the information you'll need to run the UI locally and connect to testnet.
Closing this as it was a parachain configuration issue rather than a UI one
gharchive/issue
2023-01-03T11:50:50
2025-04-01T06:39:05.538981
{ "authors": [ "bolajahmad", "daniel-savu", "philippb90", "tomjeatt" ], "repo": "interlay/interbtc-ui", "url": "https://github.com/interlay/interbtc-ui/issues/789", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
453247605
Retroactively credit incoming payments if notification failed In the current implementation, the plugin must be online in order to credit an incoming Lightning payment. If the plugin is offline but the LND node is online, the sender may have sent a settlement, but it wouldn't be credited. There should be a protocol or mechanism to retroactively credit settlements to get peers' balances back in sync if the receiver goes offline, then comes back online, or for some reason they fail to get the notification. (The plugin is currently pretty stateless, so this would require persisting which account each invoice is linked to, rather than just keeping that in memory. Then, the plugin would need to iterate through previously paid invoices to ensure each was credited--while ensuring no invoice is credited more than once). Alternatively: senders could include their account identifier in the memo/description of each payment, so the receiver knows which to credit it to (or none at all). while ensuring no invoice is credited more than once Persist the monotonically increasing settle_index and replay notifications from the last one credited to ensure no invoice is credited more than once. And, instead of persisting the invoices sent to each peer to determine which account each settled invoice should be credited to, require the counterparty to include their accountId in the memo field of the payment.
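A rough TypeScript sketch of the settle_index idea described above, assuming LND's SubscribeInvoices replay semantics (passing a settle_index replays invoices settled after that index before streaming new ones). The lnd, store, and creditAccount interfaces are assumed stand-ins, not the plugin's actual API:

// Sketch: replay settled invoices missed while the plugin was offline.
// `lnd`, `store`, and `creditAccount` are hypothetical interfaces.
async function replayMissedSettlements(
  lnd: any,
  store: { get(k: string): Promise<string | undefined>; put(k: string, v: string): Promise<void> },
  creditAccount: (accountId: string, amount: string) => Promise<void>
) {
  const lastSettleIndex = Number(await store.get('lastSettleIndex')) || 0

  // LND replays invoices settled after the given settle_index, then streams new ones.
  const stream = lnd.subscribeInvoices({ settle_index: lastSettleIndex })
  stream.on('data', async (invoice: any) => {
    if (!invoice.settled) return
    // Convention proposed in this issue: the sender puts their accountId in the memo.
    const accountId = invoice.memo
    if (accountId) {
      await creditAccount(accountId, invoice.amt_paid_sat)
    }
    // Persist the monotonically increasing index so no invoice is credited twice.
    await store.put('lastSettleIndex', String(invoice.settle_index))
  })
}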
gharchive/issue
2019-06-06T21:32:24
2025-04-01T06:39:05.541875
{ "authors": [ "kincaidoneil" ], "repo": "interledgerjs/ilp-plugin-lightning", "url": "https://github.com/interledgerjs/ilp-plugin-lightning/issues/35", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
180719345
move to a higher-half kernel
@ketsuban has an example here: https://github.com/Ketsuban/finisterre
I think this would be a good idea before going to usermode, so cc https://github.com/intermezzOS/kernel/issues/82
If anyone is interested in tackling this, please give it a try!
Here's a link to the current commit to save me having to keep the master branch pristine in the interim. The main differences are in boot.asm and layout.ld, and should be fairly easy to port over to intermezzOS. As written, any identity-mapped virtual addresses (like the VGA buffer at 0xB8000) will continue to work, because there's no code removing the original identity mapping. If you want to remove it and follow the resulting page faults to find such addresses, the removal should go somewhere in start64, before the jump to kmain.
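For what it's worth, a hedged NASM sketch of the removal Ketsuban describes; p4_table is an assumed name for the boot-time PML4 (intermezzOS's actual label may differ), and this assumes the identity map lived in PML4 entry 0:

; Sketch: drop the identity mapping once we're executing from the higher half.
; Assumes `p4_table` is the boot PML4 and entry 0 held the low identity map.
start64:
    mov qword [p4_table], 0   ; clear PML4 entry 0 (the identity mapping)
    mov rax, cr3              ; reload CR3 to flush the now-stale TLB entries
    mov cr3, rax
    ; ... existing setup, then jump to kmain as before.
    ; Any leftover low-address access (e.g. a raw 0xB8000 VGA pointer) now
    ; page-faults, which is exactly how you find the stragglers.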
gharchive/issue
2016-10-03T19:03:07
2025-04-01T06:39:05.544884
{ "authors": [ "Ketsuban", "steveklabnik" ], "repo": "intermezzOS/kernel", "url": "https://github.com/intermezzOS/kernel/issues/96", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
223237891
safen up brozzler.thread_raise() to avoid interrupting rethinkdb transactions and such
There are >80 lines of threading-specific stuff in __init__.py across a couple of functions... time to split into a utility file?
I could... for now __init__.py is my utility file; it still hasn't crossed my mental threshold for own-file-ness.
Using a with block
Refactored to use a with block!
Extending the Thread class would put all this code in a single coherent place. I decided not to do this because it would be more brittle. For example, if some project is using brozzler as a library, and they want to call some brozzler code that accepts exceptions, they have to make sure it runs in a SpecialThread. They wouldn't be able to run such code at all from the main thread (unless they monkey patch it, yuck). Or on the flip side, if someone wants to use the thread_raise functionality outside of brozzler, perhaps in the main thread, or in threads created deep in some library code, there is no barrier to doing so.
Incrementing this seems like an anti-pattern. What if two PRs both increment it at the same time? (about incrementing the version number)
Yeah, I normally do that only on commits to master.
Oops, not exactly sure how to pull it out of this pull request now. Would definitely split the implementation and use of this new feature into separate commits in the future.
Oh? Well, I did that for you for this refactoring.
After discussing in person, I think I understand the use case of this code a lot better. The pattern of sending exceptions to expedite shutdowns during specific "slow" code paths still feels a little hack-y to me. We discussed using multi-processing and signals as a way to pre-empt larger library code (like youtube-dl). For browser interactions, it might also be possible to signal the browser process directly (bypassing the owner thread) to expedite shutdown. One of the signs to me that multiprocessing would be acceptable is that "the amount of work is large/long", so the overhead of a fork is negligible. Not sure either of those is really cleaner than this approach though.
Overall, this PR feels like an improvement to me, and particularly makes the code more understandable to future readers. "LGTM"!
Thanks for looking it over @bnewbold!
gharchive/pull-request
2017-04-21T00:12:59
2025-04-01T06:39:05.558640
{ "authors": [ "bnewbold", "nlevitt" ], "repo": "internetarchive/brozzler", "url": "https://github.com/internetarchive/brozzler/pull/36", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
185646138
Change package raven/raven to sentry/sentry (https://github.com/getsentry/sentry-php) thank you!
gharchive/pull-request
2016-10-27T11:48:00
2025-04-01T06:39:05.579582
{ "authors": [ "1allen", "rroyik" ], "repo": "intersvyaz/yii-sentry", "url": "https://github.com/intersvyaz/yii-sentry/pull/1", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
465136569
Agenda does not work as expected
Do you want to request a feature or report a bug? I think it is a bug.
What's the current behavior? The current behavior is that an event for today won't be shown in the agenda. I checked the code of the Agenda component (Agenda.js) and I saw that the library removes events which should not be shown in the agenda view. That's totally ok for now! The code looks like this:

export function inRange(e, start, end, accessors) {
  let eStart = dates.startOf(accessors.start(e), 'day')
  let eEnd = accessors.end(e)

  let startsBeforeEnd = dates.lte(eStart, end, 'day')
  // when the event is zero duration we need to handle a bit differently
  let endsAfterStart = !dates.eq(eStart, eEnd, 'minutes')
    ? dates.gt(eEnd, start, 'minutes')
    : dates.gte(eEnd, start, 'minutes')

  return startsBeforeEnd && endsAfterStart
}

Please take a look at the eEnd variable. Why is this different from the eStart variable? I thought the code should look like this: dates.endOf(accessors.end(e), 'day')?
As you can see in the following image, my event starts at 5:30am and ends at 10am. If I continue the debugger to the end, you can see that endsAfterStart returns false. But that's not true! The event definitely ends after the start date. The end and start properties from the arguments always have the same value for the agenda view, because this value is the current time.
Hopefully someone can help me out if I'm doing something wrong :x
Is there anyone who can help?
I'm seeing a similar issue. I have an all-day event for today, and it doesn't show on the Agenda view unless I hit the back button. The default Agenda view which includes today should show today's all-day event.
I am also having this issue, can someone please help?
same problem! Please reopen this issue!
same problem!
@rahrang @paulhan221 Did you find a solution for this?
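For readers landing here, the reporter's proposed change as a concrete patch. This simply mirrors the snippet above with the suggested end-of-day normalization applied; whether end-of-day is the semantics the maintainers actually want is exactly the open question in this thread:

// Proposed fix from this issue: normalize the event end the same way as the start.
export function inRange(e, start, end, accessors) {
  let eStart = dates.startOf(accessors.start(e), 'day')
  let eEnd = dates.endOf(accessors.end(e), 'day')   // was: accessors.end(e)

  let startsBeforeEnd = dates.lte(eStart, end, 'day')
  let endsAfterStart = !dates.eq(eStart, eEnd, 'minutes')
    ? dates.gt(eEnd, start, 'minutes')
    : dates.gte(eEnd, start, 'minutes')

  return startsBeforeEnd && endsAfterStart
}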
gharchive/issue
2019-07-08T08:57:54
2025-04-01T06:39:05.591093
{ "authors": [ "davidhan527", "ffjanhoeck", "paulhan221", "rahrang" ], "repo": "intljusticemission/react-big-calendar", "url": "https://github.com/intljusticemission/react-big-calendar/issues/1382", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2549934
Root keys for entity representations Here is initial support for specifying root keys for entity representations. You can do this: module API::Entities class User < Grape::Entity root 'users', 'user' end end module API class Users < Grape::API # this will render { "users": [ {"id":"1"}, {"id":"2"} ] } get '/users' do @users = User.all present @users, :with => API::Entities::User end # this will render { "user": {"id":"1"} } get '/users/:id' do @user = User.find(params[:id]) present @user, :with => API::Entities::User end end end I could write an auto mode that would allow you to just specify 'root' with no arguments in the Entity class, but I'd have to add a dependency on something like linguistics or activesupport so I could pluralize the class name properly. What do you think? Not sure if this is a permanent keeper in terms of syntax, but good enough to start some discussion on the ol' frontier. FWIW, I don't think the root key should be part of the Entity itself. This leads to having to work around things like children having root keys. It ought to be an option to present. @stouset I think I agree, see my other pull request
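A Ruby sketch of @stouset's alternative, with the root supplied at presentation time rather than baked into the entity, so nested/child entities are unaffected. Treat the root: option as hypothetical here; this thread predates whatever API was eventually merged:

module API
  class Users < Grape::API
    get '/users' do
      # root passed per call: renders { "users": [ ... ] }
      present User.all, with: API::Entities::User, root: 'users'
    end

    get '/users/:id' do
      # renders { "user": { ... } }
      present User.find(params[:id]), with: API::Entities::User, root: 'user'
    end
  end
end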
gharchive/issue
2011-12-14T10:47:10
2025-04-01T06:39:05.594201
{ "authors": [ "evansj", "mbleigh", "stouset" ], "repo": "intridea/grape", "url": "https://github.com/intridea/grape/issues/101", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
708813296
[Documentation] Animated representation of state machine execution Enhance the documentation to have a demo of the execution of the state machine as an animated gif. Example: https://github.com/intuit/RBHC/blob/master/Clustering.gif I'll be working on this today at vGHC! Taking this on.
gharchive/issue
2020-09-25T10:03:22
2025-04-01T06:39:05.600487
{ "authors": [ "Neunis", "namitad", "yangjabigail" ], "repo": "intuit/Trapheus", "url": "https://github.com/intuit/Trapheus/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
305285362
parameterized basic auth fails Background: def global_variables = read('global_variables.json') url get global_variables.baseURL def userName = get global_variables.userName def userPassword = get global_variables.userPassword print userName print userPassword header Authorization = call read('basic-auth.js') { username: userName, password: userPassword } failed features: demo.appname.GET_permission_level_read: [com.intuit.karate.exception.KarateException: java.lang.NullPointerException at com.intuit.karate.StepDefs.method(StepDefs.java:364) at ?.When method get(demo/appname/GET_permission_level_read.feature:22) Solution: Background: def global_variables = read('global_variables.json') url get global_variables.baseURL def userName = get global_variables.userName def userPassword = get global_variables.userPassword print userName print userPassword header Authorization = call read('basic-auth.js') { username: '#(userName)', password: '#(userPassword)' }
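Why the first form fails: inside the JSON argument to call, a bare name like userName is not evaluated as a variable, so Karate's embedded-expression syntax '#(userName)' is needed to substitute it. The basic-auth.js referenced above isn't shown in this issue; a minimal version consistent with Karate's Java interop could look like the following sketch (not necessarily the project's exact file):

function fn(creds) {
  var temp = creds.username + ':' + creds.password;
  var Base64 = Java.type('java.util.Base64');
  var encoded = Base64.getEncoder().encodeToString(temp.getBytes());
  return 'Basic ' + encoded;
}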
gharchive/issue
2018-03-14T18:48:44
2025-04-01T06:39:05.605106
{ "authors": [ "SreeCharanShroff" ], "repo": "intuit/karate", "url": "https://github.com/intuit/karate/issues/347", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
347850734
ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException Hi, I'm behind a corporate proxy and I get the following error when I run a my runner class as junit test i get the following issue 14:36:06.199 [main] ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect, http call failed after 21410 milliseconds for URL: https://careferencedatacpt-map.azurewebsites.net/api/v1/marketavailableprice/clients 14:36:06.206 [main] ERROR com.intuit.karate - http request failed: org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect 14:36:27.235 [main] ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect, http call failed after 20999 milliseconds for URL: https://careferencedatacpt-map.azurewebsites.net/api/v1/marketavailableprice/clients 14:36:27.235 [main] ERROR com.intuit.karate - http request failed: org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect 14:36:48.271 [main] ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect, http call failed after 21000 milliseconds for URL: https://careferencedatacpt-map.azurewebsites.net/api/v1/marketavailableprice/clients 14:36:48.272 [main] ERROR com.intuit.karate - http request failed: org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect 14:37:09.296 [main] ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect, http call failed after 21002 milliseconds for URL: https://careferencedatacpt-map.azurewebsites.net/api/v1/marketavailableprice/clients 14:37:09.296 [main] ERROR com.intuit.karate - http request failed: org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect 14:37:30.684 [main] ERROR com.intuit.karate - org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect, http call failed after 21362 milliseconds for URL: https://careferencedatacpt-map.azurewebsites.net/api/v1/marketavailableprice/clients 14:37:30.685 [main] ERROR com.intuit.karate - http request failed: org.apache.http.conn.HttpHostConnectException: Connect to careferencedatacpt-map.azurewebsites.net:443 [careferencedatacpt-map.azurewebsites.net/40.68.40.55] failed: Connection timed out: connect Failed scenarios: net/apmoller/crb/map/data/api/map_client_api.feature:30 # Scenario: Testing the client api service 
net/apmoller/crb/map/data/api/map_client_api.feature:36 # Scenario: Testing the exact response of a GET endpoint net/apmoller/crb/map/data/api/map_client_api.feature:43 # Scenario: Testing that GET response contains specific field - name net/apmoller/crb/map/data/api/map_client_api.feature:50 # Scenario: Testing that GET response contains specific field - id net/apmoller/crb/map/data/api/map_client_api.feature:57 # Scenario: Testing that GET response contains specific field - scvCode 5 Scenarios (5 failed) 24 Steps (5 failed, 9 skipped, 10 passed) 1m47.523s Karate version: 0.8.0.1 html report: (paste into browser to view) file:/C:/Workspace/MAP/map-data-api-service/integrationtest/target/surefire-reports/TEST-net.apmoller.crb.map.data.api.map_client_api.html Please find my pom.xml attached. closing as cannot replicate. why don't you try reading the doc on using a proxy server: https://github.com/intuit/karate#configure if still stuck - please follow the instructions here: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue
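The configure doc linked above covers the corporate-proxy case directly; the relevant lines look like this, with host, port, and credentials as placeholders:

# in the Background, or globally via karate-config.js using karate.configure(...)
* configure proxy = 'http://my.corporate.proxy:8080'
# or, when the proxy requires credentials:
* configure proxy = { uri: 'http://my.corporate.proxy:8080', username: 'john', password: 'secret' }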
gharchive/issue
2018-08-06T09:40:00
2025-04-01T06:39:05.624933
{ "authors": [ "ptrthomas", "yamucd" ], "repo": "intuit/karate", "url": "https://github.com/intuit/karate/issues/487", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1273350195
🛑 CodePush is down In e8e4e0f, CodePush (https://codepush.appcenter.ms/v0.1/public/codepush/update_check?deployment_key=Fzqu71x4FXhOaKcAiwpqXkFQddjgYl2dIUW2K&app_version=4.4.1) was down: HTTP code: 0 Response time: 0 ms Resolved: CodePush is back up in 4f2cb17.
gharchive/issue
2022-06-16T10:10:46
2025-04-01T06:39:05.627794
{ "authors": [ "potados99" ], "repo": "inu-appcenter/cafeteria-status", "url": "https://github.com/inu-appcenter/cafeteria-status/issues/305", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1556267709
🛑 INU API is down In be7f08c, INU API (https://api.inuappcenter.kr) was down: HTTP code: 0 Response time: 0 ms Resolved: INU API is back up in 215cbc6.
gharchive/issue
2023-01-25T08:50:45
2025-04-01T06:39:05.630175
{ "authors": [ "potados99" ], "repo": "inu-appcenter/cafeteria-status", "url": "https://github.com/inu-appcenter/cafeteria-status/issues/446", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
973682593
Investigate logpdf discrepancy across ilmm and open source naive ilmm Currently, logpdf does not match to the same atol as all other functionality between these two models, which should be equivalent. This can be recreated and noticed by examining the atol provided for testing in test/illm.jl. Given the oilmm and ilmm are approximate up to atol=1e-9, this might need to be looked into via the naive ilmm implementation in KernelFunctions. Did we resolve this the other day @thomasgudjonwright, or is it still a problem? Closed in #19
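One way to localize this kind of discrepancy is to sweep tolerances and report the worst-case difference between the two implementations. A rough sketch in Python rather than Julia, with logpdf_a and logpdf_b as stand-ins for the two models being compared:
```python
import numpy as np

def compare_logpdfs(logpdf_a, logpdf_b, inputs):
    """Report the largest absolute difference between two implementations."""
    worst = max(abs(logpdf_a(x) - logpdf_b(x)) for x in inputs)
    for atol in (1e-12, 1e-9, 1e-6, 1e-3):
        print(f"atol={atol:g}: {'OK' if worst <= atol else 'FAIL'}")
    return worst
```
Running this over a grid of inputs shows whether the mismatch is a uniform small offset (suggesting a constant term) or grows with the input (suggesting a structural difference).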
gharchive/issue
2021-08-18T13:42:58
2025-04-01T06:39:05.635456
{ "authors": [ "thomasgudjonwright", "willtebbutt" ], "repo": "invenia/LinearMixingModels.jl", "url": "https://github.com/invenia/LinearMixingModels.jl/issues/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
155517204
global: support for legacy invenio password hashes NEW Adds the ability to migrate legacy Invenio users to Invenio 3 automatically. The old encryption scheme has been integrated into the flask-security extension and a migrate_hash function can be used to update the password by re-encrypting it (using the current hash scheme) and storing it in the datastore. (closes #122) Adds a new test utility, namely create_legacy_user. Increases test coverage. Signed-off-by: Orestis Melkonian melkon.or@gmail.com Co-Authored-By: Lars Holm Nielsen lars.holm.nielsen@cern.ch
Coverage increased (+0.1%) to 99.572% when pulling b6242c5115058b6dd8aa9d69b886ffa4a49d7d52 on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
Looks like a good start 👍 A high level test case where a user has an Invenio legacy hash, then log the user in via /login and verify that 1) the user is authenticated, and 2) the password hash has been upgraded. E.g. I would expect that the password hash would need to be in the format $algoid$salt_ie_email$hash_value in order for passlib to know which algo to use.
OK, will look into it.
Coverage increased (+0.07%) to 99.537% when pulling 8b8665771b1cac0abe7a7a68cacb967bd63d25dc on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
Coverage decreased (-0.4%) to 99.078% when pulling ae6090ded8c0b411388017e958f9bec6aa227493 on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
Coverage increased (+0.08%) to 99.539% when pulling 028e03fa487ef45e3f3706e7ab23787799303f3e on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
Coverage decreased (-0.1%) to 99.335% when pulling 06a0ea8c810ab5730f4963e5b0759d3ba9b979e4 on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
LGTM
Coverage increased (+0.09%) to 99.553% when pulling c3743ead2b9248a5ab6004803182b6d54204427f on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
I'll merge as soon as Travis is green
Coverage increased (+0.09%) to 99.553% when pulling 4750dd65e84a7eee803cb775ba1515ac99eb1531 on omelkonian:feat-user-hash-migration into 6b5847119381f92e13a87097d8efd713ae2c18ed on inveniosoftware:master.
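For readers, the migrate-on-login behavior described here maps onto passlib's deprecated-scheme support. The sketch below shows only the generic pattern, not the invenio-accounts code itself; the real PR registers Invenio's custom email-salted legacy scheme as a passlib handler, and "hex_md5" is just a stand-in for it:
```python
from passlib.context import CryptContext

# Modern scheme first; the legacy scheme is marked deprecated so that
# passlib re-hashes the password on successful verification.
pwd_context = CryptContext(
    schemes=["pbkdf2_sha512", "hex_md5"],  # "hex_md5" stands in for the legacy scheme
    deprecated=["hex_md5"],
)

def login(user, password):
    # verify_and_update returns (is_valid, new_hash_or_None)
    valid, new_hash = pwd_context.verify_and_update(password, user.password_hash)
    if valid and new_hash is not None:
        user.password_hash = new_hash  # persist the upgraded hash in the datastore
    return valid
```
The $algoid$salt$hash format mentioned above is exactly what lets passlib's context pick the right handler for each stored hash.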
gharchive/pull-request
2016-05-18T14:33:57
2025-04-01T06:39:05.652543
{ "authors": [ "coveralls", "lnielsen", "omelkonian" ], "repo": "inveniosoftware/invenio-accounts", "url": "https://github.com/inveniosoftware/invenio-accounts/pull/123", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
248625654
docs: fix readthedocs build Fixes the readthedocs.org documentation build. (addresses #146) Signed-off-by: Leonardo Rossi leonardo.r@cern.ch @lnielsen I have no idea how I can test it.
gharchive/pull-request
2017-08-08T07:34:20
2025-04-01T06:39:05.658935
{ "authors": [ "hachreak" ], "repo": "inveniosoftware/invenio-oauthclient", "url": "https://github.com/inveniosoftware/invenio-oauthclient/pull/147", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
50181587
BibEdit: correct uploading of controlfields
When you try to upload a record having a controlfield such as:
<controlfield tag="008">this is some text after spaces</controlfield>
After upload you will get this:
<datafield tag="008" ind1=" " ind2=" "> </datafield>
BibUpload should treat 001-009 as controlfields (with special treatment of 001 and 005).
P.S. See also #230. CC @Kennethhole
So this is a bug related to BibEdit rather than BibUpload?
Actually I am able to reproduce it. There may be mishandling of controlfields in the JavaScript part.
TIND's workaround is to replace all whitespace before the treatment, hence closing this issue as per the legacy code base freeze.
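The fix boils down to the MARC rule the report states: tags 001-009 are controlfields (no indicators, no subfields) and everything else is a datafield. A minimal illustration with the Python standard library; this is a sketch of the rule, not the BibUpload code itself:
```python
import xml.etree.ElementTree as ET

def make_field(tag, value, ind1=" ", ind2=" "):
    """Emit a controlfield for tags 001-009, a datafield otherwise."""
    if tag.isdigit() and 1 <= int(tag) <= 9:
        el = ET.Element("controlfield", tag=tag)
        el.text = value  # leading/trailing spaces must be preserved verbatim
    else:
        el = ET.Element("datafield", tag=tag, ind1=ind1, ind2=ind2)
    return el

print(ET.tostring(make_field("008", "this is some text after spaces")).decode())
```
The bug described above is what happens when the branch condition is missed and an 008 field falls through to the datafield case, losing its text.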
gharchive/issue
2014-11-26T14:08:08
2025-04-01T06:39:05.661283
{ "authors": [ "kaplun", "tiborsimko" ], "repo": "inveniosoftware/invenio", "url": "https://github.com/inveniosoftware/invenio/issues/2582", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
62692094
RFC Code of Conduct
Our community is growing, and as part of improving our community documentation #1799, we thought of adopting a Code of Conduct. This PR proposes a text for discussion.
Design guidelines:
should be short, sweet, no-nonsense
should promote good collaborative development principles
Proposal:
endorse literally the Python Code of Conduct
recommend literally the egoless programming principles (TPOCP)
For the full text, please see the file docs/community/code-of-conduct.rst that comes with this PR. Please comment on the text, upvote, downvote, or otherwise express your opinions.
we thought of adopting a Code of Conduct.
What are the goals of / reasons for such a document?
What are the goals of / reasons for such a document?
We are currently nearly 30 developers and overall (according to GitHub) 89 historical contributors. Although many of us are based at CERN, some are elsewhere, e.g. in Germany, Greece, or the US... Sometimes it's not easy to solve disputes even when we are in the same room; imagine doing so in a mid-sized, distributed community. Such a document could be a sort of mantra that everyone could keep available (e.g. printed on a wall) and check during hard times (but also the good ones), and it could help foster a community spirit in this project, if it was really followed by everyone.
Are there any points that diverge from common sense or basic rules of human interaction? Also, is this initiative addressing a specific problem?
Are there any points that diverge from common sense or basic rules of human interaction?
I don't know. Do you find it easy to endorse all of the points? Then you are a superhero :-) E.g. I for one personally have some difficulties with at least point 9: Don't be the guy coding in the dark office emerging only to buy cola.
Also, is this initiative addressing a specific problem?
As far as I know, it stems simply from the fact that the moment has come to have such a thing. Invenio is maturing, we have 2.0, we are rewriting/redesigning it for the good, and trying to follow mainstream Open source conventions, such as having an internal code of conduct for the community.
Do you find it easy to endorse all of the points?
No, I fail from time to time, just as everyone else. But I don't see them being written in a GitHub repository file to help me in any way, just because I already know all of them (just because it is common sense, as mentioned). The question is: is it just going to be a text that is floating around, or is it going to be enforced in any way? If so, isn't that just adding complexity to the development process?
it stems simply from the fact that the moment has come to have such a thing
I still fail to see the need here. If there is no particular issue being solved, we are benevolently adding more clutter to the communication process. And this is a thing that should go exactly the other way around.
just because I already know all of them (just because it is common sense, as mentioned).
I don't think these are necessarily common sense. The fact that you know all of them shows only that you in particular already have experience in Open source. But e.g. the egoless coding principles go against much of common sense; they are very likely not instinctively shared by high-school bullies, people with larger-than-life egos, etc. People are very different in terms of experiences, backgrounds and culture...
@lovasko @kaplun you both have interesting points in this conversation.
the code is here to be discussed; its content might be a subject of change (e.g. the Generous point 4); we can't really force people to follow it but we can ask them to acknowledge its existence by e.g. adding a Signed-off-by signature to their commits (just an example!).
@tiborsimko I'm fine with your proposal, I just would not like to see it longer.
While I have nothing against a code of conduct per se, I am not sure this is really necessary either. So +0 for adding one in general. If a code of conduct is adopted though, I think this one is too long, and the 3 first points more or less encompass all 10 others. As another example, have a look at https://engineering.twitter.com/opensource/code-of-conduct
One thing that I would like to be clarified is whether and how Invenio is open for others to take responsibilities. I really believe that for a transition towards community-driven development to be effective (assuming this is where Invenio wants to go?), the project needs to make its contributors feel involved and responsible. In my opinion, an open source project is more than a project with an open source license. An open source project is a project where its contributors feel like they can become a part of it, like they can take decisions and take responsibilities. For instance, with regards to Invenio, I believe more contributors should have ownership rights of the repository, and more contributors should feel like they are equal. More contributors should be allowed to make mistakes and learn from them, even if founders think otherwise. In this way (but there may be others), I feel like contributors would stay more motivated and more contributors would keep contributing in the long term. In summary, such a statement towards openness is not clear for me with respect to the proposed code of conduct, nor do I know if this is something everyone agrees on. I think Sam's teacher is right on: "We don't want that the children are simply doing what they want, rather that they want to do what they are doing." Just my 2 cents :)
what are the goals of / reasons for such a document?
In addition to good reasons already highlighted by @kaplun, there were a few cases of tension in the past, as probably in any larger group. Since we are writing up the Community documentation section, it is a good occasion to formalise an informal Code of Conduct document, reminding people to respect each other's karate. In my experience not many of our newcomers have heard of "egoless programming" principles. If you feel they are only transcribing "common sense", then so much the better.
we can ask them to acknowledge its existence by e.g. adding a Signed-off-by signature to their commits (just an example!).
It would seem like overkill to use Signed-off-by with every single branch that people submit to mean that they acknowledge the general Code of Conduct. It should be enough to get acquainted with the Code of Conduct and acknowledge it once, like during the boot camp, as it won't be changing often. The individual branch signatures would be better reserved for something specific that can vary for each concrete branch, such as QA, though let's see in #2876.
One thing that I would like to be clarified is whether and how Invenio is open for others to take responsibilities.
This will come into the same Community documentation section as part of #1799 efforts. In the meantime, the general principles have been written up elsewhere, see e.g. triage docs.
In a one-line summary, people can take on responsibilities by meritocracy: start contributing to the ecosystem, commenting on issues, reviewing people's branches, helping out with bug fixes and tests, not only in "your modules" but in general, and see your project responsibilities widen. The above URL highlights principles on how we triage issues and prioritise milestones; similar will follow on how we merge, etc.
@jirikuncar happened to be in my office a few days ago and kindly reminded me that I should try to be more active in maintaining my PRs. I think this raises an important point: responsibility. We all have different positions here - some with low expectations - but being an open source project we should at least try to find a personal point in what we do and motivate ourselves around it. During a brief search I stumbled upon ACM's Code of Ethics and I found paragraphs 1.3 and 2.6 to relate to my idea. The document is definitely broader in range, but I think it is a good read even if we don't end up integrating it. Now, not only do I think we should have this code of conduct, we should also hang it on our walls to remind ourselves how we should balance community, pleasure and work. We should also not fear to reference this code and improve upon it. Hurting someone's feelings temporarily for the good of the entire community is a useful and valuable lesson for all.
+1 Just some few comments on points mentioned above:
Need: We are a growing community and the code of conduct is a reminder to us all of the basic rules of interaction. Not only person to person, but also just as important: service to service (Inspire vs Zenodo vs JUSER, TIND vs CERN, etc.). While perhaps obvious in theory, in practice I think it's far from so. I've personally experienced many tensions where it would have been good to have a CoC as a reminder.
Code of Conduct (CoC) vs Egoless programming: I think it would be useful to make it into two sections, so that the CoC is very short, and then stick egoless programming under a section called something like "Good Behavioural Practices".
Enforcement: It might be useful to have a part saying that violations of the CoC can be reported to any member of the triaging team or similar.
Python vs custom: I agree with @tiborsimko that a literal adoption of an existing CoC is preferable. A bit like with using other Python packages: let's not reinvent the wheel. Lots of thought has gone into the right wording of the existing CoCs.
Open Source vs Open Collaboration: I fully agree with @glouppe and @kaplun that we need what I would call an "open collaboration" where you want to contribute. The keyword to an open collaboration I think is frictionless. Technically frictionless to get Invenio up and running, and socially frictionless to contribute and take part. We have been addressing the former the past years, and the CoC is just the first step in addressing the latter. I believe that clear playing rules are needed in an open collaboration - they're needed for newcomers to know how they can eventually become integrators and take decisions, and they're needed for long-standing contributors to give up control to the community.
@tiborsimko we made a LaTeX version of the document for a nice print version
@lovasko rst2latex ;)
@tiborsimko can we merge it as is? There were no major objections and more suggested documents might come in upcoming days.
-- Following documents should be proposed and discussed in future RFCs:
[ ] "Good Behavioural Practices"
[ ] Pull-Request Integration Checklist
[ ] Release Checklist
[ ] "How to become triager/integrator"
Following documents should be proposed and discussed in future RFCs
This is indeed coming as part of #1799, as I mentioned above. (By simply centralising current practices into one place; any possible RFCs could be future amendments on top of that.)
"Good Behavioural Practices"
This sounds a lot like a "Code of Conduct". I'd simply leave them together; people should have enough bandwidth to read 13 short points in one single place...
@tiborsimko docs/community/code-of-conduct.rst:: WARNING: document isn't included in any toctree
@jirikuncar Yes, the inclusion of the CoC in the overall docs is in my other branch that introduces the "Community" docs structure. (That's by design: the CoC body was best discussed separately, while for the overall docs structure we have #1799 already.) I'll issue another PR for the Community docs afterwards.
@tiborsimko we can't merge it because the tests will fail to build docs in strict mode (https://travis-ci.org/inveniosoftware/invenio/jobs/57621067).
OK, I run sphinx without -W locally. I'll amend this.
gharchive/pull-request
2015-03-18T13:19:28
2025-04-01T06:39:05.688783
{ "authors": [ "dset0x", "glouppe", "jirikuncar", "kaplun", "lnielsen", "lovasko", "tiborsimko" ], "repo": "inveniosoftware/invenio", "url": "https://github.com/inveniosoftware/invenio/pull/2900", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1524720342
[FR] Add API endpoint to activate plugins This PR: integrate general plugin patterns into the plugin patterns BREAKING moves all plugin management endpoints to /api/plugins instead of /api/plugin adds an endpoint to activate a plugin Fixes #4182 Is the breaking change ("plugins" vs "plugin") necessary here? This will cause at the least the app to break. Also please bump the API version in api_version.py @SchrodingersGat I think it has strong benefits as the current naming schema prevents some slugs from being used for plugin URLs and the number of 'reserved' slugs will only grow with time. A clear separation between /api/plugin for plugin-provided URLs and /api/plugins for meta fields also makes sense to prevent accidentally overriding core functions. @matmair please increment the API version so external services (e.g. app) can determine which endpoint to call :) @SchrodingersGat API version is bumped!
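Because this PR moves the management endpoints from /api/plugin to /api/plugins and bumps the API version, an external client (such as the app) can branch on the reported version before choosing a path. A rough sketch of that idea in Python; the version field name, base URL and version threshold here are assumptions for illustration only, not InvenTree's actual contract:
```python
import requests

BASE = "https://inventree.example.com"  # hypothetical server URL

# Hypothetical: assume the root API endpoint reports an integer API version.
info = requests.get(f"{BASE}/api/", timeout=10).json()
api_version = info.get("apiVersion", 0)

# Hypothetical cutoff: versions from this PR onward expose /api/plugins/.
plugins_path = "/api/plugins/" if api_version >= 100 else "/api/plugin/"
plugins = requests.get(BASE + plugins_path, timeout=10).json()
```
This is exactly why the API version bump matters: without it, old clients have no reliable way to tell which endpoint layout the server uses.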
gharchive/pull-request
2023-01-08T23:05:53
2025-04-01T06:39:05.695461
{ "authors": [ "SchrodingersGat", "matmair" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/pull/4186", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2487983416
[PUI] Make actions recognisable
Action Dropdowns are not very recognisable right now. Therefore, I propose adding indicators similar to the screenshots below to buttons with more than 1 action. I am not sure how to style that; the solution below is not the final goal - maybe a single indicator could be enough, for example.
Current
Proposed
Fixes https://github.com/invenhost/InvenTree/issues/98
Maybe a "down caret" indicator would be clearer here? It seems a bit more "traditional". But I agree with the concept in general
I like the suggestion from @SchrodingersGat
What do we think about this:
That's pretty clean IMO
I think no, it's really hard to tell which "down caret" belongs to which action; maybe an outline would help to add some separation, but I'm not sure what that would look like.
It looks atrocious with an outline
That's true. Maybe it's just me having problems seeing the separation. What about a mantine indicator with the caret attached directly to the button https://mantine.dev/core/indicator/
I have tried another thing: light
I find vertical dots also match the flow better:
And the same thing in dark mode with dots instead of chevrons
Light with chevrons is my pick :)
Ok then this is ready for merge - light with chevron is the current state of the PR. And maybe we add the chevron to the vertical dot menu too?
I would not add chevrons to the dot menu, I think that 3 dots are universally understood as "more" - that is why I suggested usage of them in the first place.
I would not add chevrons to the dot menu, I think that 3 dots are universally understood as "more" - that is why I suggested usage of them in the first place.
In this case, should we add a special dropdown menu just for the "more" actions - I see that you have added a lot of instances of noindicator - this could be refactored into a special dropdown with the dots icon and no indicator?
Sure, I can look into that. @SchrodingersGat I have added a special dropdown to address this
@SchrodingersGat this is ready for review and merge
Nice work, thanks for this!
gharchive/pull-request
2024-08-26T23:50:23
2025-04-01T06:39:05.704993
{ "authors": [ "SchrodingersGat", "matmair", "wolflu05" ], "repo": "inventree/InvenTree", "url": "https://github.com/inventree/InvenTree/pull/8005", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2162858109
custom CustomizationId Added a way to set the customization id of the invoice. This is to replace pull request #4 with a sensible way of changing the customization id. Thanks @mcmihai ! Let me know when you think a new release will be useful for your use case. I guess that you still need to revise the hardcoding of currency in the invoice template. I just uploaded version 0.2 to pypi with your PR included: https://pypi.org/project/en16931/0.2/
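For reference, usage after the 0.2 release might look like the sketch below. This is an assumption-heavy illustration: the Invoice import follows the package's README, but the exact attribute or method name introduced by this PR should be checked against the en16931 0.2 docs:
```python
from en16931 import Invoice  # import path as shown in the package README

invoice = Invoice()
# Hypothetical attribute name; verify against the API added in this PR.
# The value is a real EN 16931 customization identifier.
invoice.customization_id = "urn:cen.eu:en16931:2017"
```
The point of the PR is simply that this identifier is no longer hardcoded, so invoices targeting a national CIUS can declare the right customization.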
gharchive/pull-request
2024-03-01T08:15:08
2025-04-01T06:39:05.769174
{ "authors": [ "jtorrents", "mcmihai" ], "repo": "invinet/python-en16931", "url": "https://github.com/invinet/python-en16931/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2086348852
Support for signature and header verification
Adds a "Verify" method to the envelope that makes it easier to check an optional list of keys against the signatures used to sign the document.
head.Header now has a "Contains" method that facilitates testing if the current header includes all the fields from the provided header.
The Header "contains" method is a bit bespoke; I was hoping for something more generic, but this felt like the best way for the time being to handle comparisons. I'm surprised we didn't have this earlier though 😆 🤦
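The described "Contains" semantics, restated as a hedged sketch in Python (the gobl implementation is Go and may differ in details such as how nested fields are compared):
```python
def header_contains(header: dict, required: dict) -> bool:
    """True if `header` includes every field of `required` with equal values."""
    for key, want in required.items():
        if key not in header:
            return False
        have = header[key]
        if isinstance(want, dict) and isinstance(have, dict):
            # Recurse so nested structures only need to be a superset too.
            if not header_contains(have, want):
                return False
        elif have != want:
            return False
    return True
```
In other words, `header_contains(current, provided)` passes when the current header is a superset of the provided one, which is what the PR's test helper needs.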
gharchive/pull-request
2024-01-17T14:45:49
2025-04-01T06:39:05.787245
{ "authors": [ "samlown" ], "repo": "invopop/gobl", "url": "https://github.com/invopop/gobl/pull/232", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1556505593
🛑 Bitwage API Docs is down In f90d9fc, Bitwage API Docs (https://developer.bitwage.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Bitwage API Docs is back up in 3e76ef0.
gharchive/issue
2023-01-25T11:42:52
2025-04-01T06:39:05.789628
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/136", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2350568266
🛑 Bitwage API (Sandbox) is down In da87475, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 305 ms Resolved: Bitwage API (Sandbox) is back up in 7935adc after 55 minutes.
gharchive/issue
2024-06-13T08:49:37
2025-04-01T06:39:05.791901
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/1967", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2421049743
🛑 Bitwage API (Sandbox) is down In 69cb608, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 147 ms Resolved: Bitwage API (Sandbox) is back up in fe9ccc4 after 13 minutes.
gharchive/issue
2024-07-20T20:42:38
2025-04-01T06:39:05.794554
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/3224", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2501631723
🛑 Bitwage API (Sandbox) is down In 57350d6, Bitwage API (Sandbox) (https://api.sandbox.bitwage.com) was down: HTTP code: 403 Response time: 265 ms Resolved: Bitwage API (Sandbox) is back up in 647f2b8 after 1 hour, 6 minutes.
gharchive/issue
2024-09-02T22:14:42
2025-04-01T06:39:05.797036
{ "authors": [ "joelinzy" ], "repo": "inwage/status_page", "url": "https://github.com/inwage/status_page/issues/4642", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
194848941
Redesign mesh API
The "proprietary" RMB format should be replaced with the SMF equivalent. https://github.com/io7m/smf
This needs more consideration and should be moved to beyond 0.2.0.
The mesh API should be generalized as with SMF: a callback API that accepts data for named attributes, where each attribute has a COLLADA-like semantic tag that indicates whether data is intended as position data, normal data, UV coordinates, etc. Another module should map the SMF API to this r2-specific API for easy model loading. SMF needs to be extended with an easy way to load meshes onto the GPU via jcanephora.
https://github.com/io7m/smfj-jcanephora now exists.
smfj-jcanephora has been obsolete since smf 0.4.0, but this mesh work is being held up by https://github.com/io7m/smf/issues/41.
Now that SMF exists, it's not exactly clear what degree of mesh support the r2 package needs. In some as-yet-uncommitted work, the tangent generation code is now in its own package, an SMF filter command has been implemented that can generate tangents for an SMF mesh, and the obj, arrayobject, and binary packages have been removed. What exactly is left?
An interface is needed that specifies what attributes should be loaded for a mesh and returns an array/index buffer pair. The interface should not return an array object because this then precludes loading the mesh on an OpenGL context other than the rendering context. Need to keep in mind that both synchronous and asynchronous loading is desired.
What about mapped and unmapped I/O? With unmapped I/O, the allocation of the array and index buffers on the GPU happens after the mesh data has been read from disk and packed into a byte buffer. With mapped I/O, the allocation of the array and index buffers happens first and the mesh data is packed into them during loading of the mesh data.
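To make the mapped vs. unmapped distinction concrete, here is a language-neutral sketch in Python, where allocate, upload and map_buffer stand in for the jcanephora GPU calls (they are placeholders, not real APIs):
```python
import os

def load_unmapped(path, allocate, upload):
    # Read and pack the mesh bytes on the CPU first...
    data = open(path, "rb").read()
    # ...then allocate the GPU buffer and copy it in as one step.
    buf = allocate(len(data))
    upload(buf, data)
    return buf

def load_mapped(path, allocate, map_buffer):
    # Allocate first; the buffer size must be known up front.
    buf = allocate(os.path.getsize(path))
    view = map_buffer(buf)  # assumed to return a writable memoryview
    # Stream the file straight into the mapped memory, no staging copy.
    with open(path, "rb") as f:
        f.readinto(view)
    return buf
```
The trade-off the issue raises follows directly: mapping avoids the intermediate CPU-side byte buffer, but forces allocation (and hence size knowledge) before any data is read.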
gharchive/issue
2016-12-11T18:58:50
2025-04-01T06:39:05.824983
{ "authors": [ "io7m" ], "repo": "io7m/r2", "url": "https://github.com/io7m/r2/issues/71", "license": "isc", "license_type": "permissive", "license_source": "bigquery" }
1383513653
widget becomes active with mouseover instead of clicking it
This issue is the outcome of this thread at the ioBroker forum: https://forum.iobroker.net/topic/58219/widget-wird-aktiviert-beim-überfahren-mit-mauszeiger?_=1663921986890
As soon as I mouse over, the widget becomes active. Tested with Chrome/Firefox.
Example widget:
[{"tpl":"tplHqButton","data":{"g_fixed":true,"g_visibility":false,"g_css_font_text":true,"g_css_background":false,"g_css_shadow_padding":false,"g_css_border":true,"g_gestures":false,"g_signals":false,"g_last_change":false,"visibility-cond":"==","visibility-val":1,"visibility-groups-action":"hide","oid":"hm-rpc.0.MEQ0675904.1.LEVEL","min":"0","max":"","iconName":"/icons-mfd-png/control_centr_arrow_down.png","btIconWidth":"75","offsetAuto":true,"leftOffset":"35","topOffset":"0","timeAsInterval":"true","infoLeftFontSize":"12","infoFontRightSize":"12","infoLeftPaddingLeft":"15","infoLeftPaddingRight":"50","infoRightPaddingRight":"15","signals-cond-0":"==","signals-val-0":true,"signals-icon-0":"/vis/signals/lowbattery.png","signals-icon-size-0":0,"signals-blink-0":false,"signals-horz-0":0,"signals-vert-0":0,"signals-hide-edit-0":false,"signals-cond-1":"==","signals-val-1":true,"signals-icon-1":"/vis/signals/lowbattery.png","signals-icon-size-1":0,"signals-blink-1":false,"signals-horz-1":0,"signals-vert-1":0,"signals-hide-edit-1":false,"signals-cond-2":"==","signals-val-2":true,"signals-icon-2":"/vis/signals/lowbattery.png","signals-icon-size-2":0,"signals-blink-2":false,"signals-horz-2":0,"signals-vert-2":0,"signals-hide-edit-2":false,"lc-type":"last-change","lc-is-interval":true,"lc-is-moment":false,"lc-format":"","lc-position-vert":"top","lc-position-horz":"right","lc-offset-vert":0,"lc-offset-horz":0,"lc-font-size":"12px","lc-font-family":"","lc-font-style":"","lc-bkg-color":"","lc-color":"","lc-border-width":"0","lc-border-style":"","lc-border-color":"","lc-border-radius":10,"lc-zindex":0,"pushButton":true,"oid-working":"hm-rpc.0.MEQ0675904.1.WORKING","descriptionLeft":"Rollladen Arbeitszimmer Vorne.LEVEL","descriptionLeftDisabled":true,"usejQueryStyle":true,"iconOn":"","changeEffect":"","testActive":false,"name":"Rolllade BAD runter"},"style":{"left":"5px","top":"202px","width":"70px","height":"114px","border-width":"1px","border-style":"solid","border-color":"#FFFFFF","border-radius":"2px","z-index":"5"},"widgetSet":"hqwidgets"}]
VIS 1.4.15, HQ Widgets installed: 1.3.0, all other adapters and JS Controller on latest versions.
Please let me know if anything else is needed.
Closed. No feedback at all.
gharchive/issue
2022-09-23T09:12:52
2025-04-01T06:39:05.857715
{ "authors": [ "wendy2702" ], "repo": "ioBroker/ioBroker.vis-hqwidgets", "url": "https://github.com/ioBroker/ioBroker.vis-hqwidgets/issues/49", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
303101388
Bokeh renderer fixes
Fixes two bugs introduced in the JupyterLab refactor:
Fixes png export by correctly handling it in _figure_data (where it should be handled)
Fixes wrong logic in the bokeh message handler, which incorrectly skipped messages when Bokeh.protocol was defined.
Thanks for the message handler fix, and the PNG changes seem fine to me. Merging.
gharchive/pull-request
2018-03-07T13:38:21
2025-04-01T06:39:05.867739
{ "authors": [ "jlstevens", "philippjfr" ], "repo": "ioam/holoviews", "url": "https://github.com/ioam/holoviews/pull/2418", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
389770408
Make matplotlib's BarPlot consistent with other backends
Fixes a bug in the matplotlib BarPlot where Bars are not correctly aligned with ticks.
Broken:
Fixed:
I couldn't put up with the horrendous API that the matplotlib BarPlot provided, so for now I've simply locked it down so that the API matches plotly and bokeh plots. I think the notebook diffs say it all; it's gone from:
grouped = bars.relabel('Grouped').opts(category_index=5, color_by=['stack'], stack_index=1)
stacked = bars.relabel('Stacked').opts(stack_index=5)
grouped + stacked
to:
stacked = bars.relabel('Stacked')
grouped = bars.relabel('Grouped')
grouped + stacked.opts(stacked=True)
I maintain that the matplotlib BarPlot is the worst bit of code I've ever written.
I am very happy to see the API made consistent. Personally, things like category_index=5 never made much sense to me, so I like the new version a lot more. There's bound to be bugs, but fixing the API is important enough that I'm not going to hesitate merging this now. I suspect these bugs will be the motivation for cleaning up the internal code for good.
gharchive/pull-request
2018-12-11T13:42:22
2025-04-01T06:39:05.871310
{ "authors": [ "jlstevens", "philippjfr" ], "repo": "ioam/holoviews", "url": "https://github.com/ioam/holoviews/pull/3275", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
443046403
lastupdated zeit ist falsch Irgendwie wird der Wert meiner Sensoren "lastupdated" immer in UTC (Universal Time) erzeugt. Die eingestellte Local-Time wäre hier der bessere Wert. Im Phosconapp ist die Uhrzeit die richtige (local-time). Andere Trigger von anderen Adaptern laufen auch mit der richtigen Zeit (local-time). Der Wert wird als UTC string von deConz geliefert. Das wurde im Forum schon diskutiert. Genau das ist es. Warum ist es so?! Und wo wurde das diskutiert, gerne mit Linkangabe. Warum das so ist, keine Ahnung. Hier der Link: https://forum.iobroker.net/post/254354 #91340d00915640b0b66763528e0af4e7b430ee27 #91340d0 ich befürchte da passt nach wie vor etwas nicht beim schreiben der Datenpunkte: Zu sehen ist, dass die Zeitwerte für UTC und z.Z. GMT+2 identisch sind, statt die 2 Stunden versetzt. Anbei eine Zeile aus dem deconz-Log im Debug-Mode, hier passt es: 2020-10-11 10:33:09.085 - debug: deconz.0 (24145) Websocket message: {"attr":{"lastannounced":null,"lastseen":"2020-10-11T08:33Z","manufacturername":"dresden elektronik","modelid":"ConBee","name":"Configuration tool 10","swversion":"0x260b0500","type":"Configuration tool","uniqueid":"00:01:23:34:45:56:67:78-01"},"e":"changed","id":"10","r":"lights","t":"event","uniqueid":"00:01:23:34:45:56:67:78-01"} Warum ist es in den Datenpunkten gem. Screenshot falsch? Es ist schon richtig wie es ist, es soll keinen Zeitversatz mehr geben. Das war der Wunsch von den meisten Benutzern. Dein log Auszug passt nicht zum Screenshot, da ist kein lastupdated dein. Mein Problem ist, dass Deconz Localtime liefert während die anderen Adapter wie z.B. HUE die Zeit für "LastUpdated" in UTC ausgeben - was in der IT auch so üblich ist, Wenn man die Zeitstempel von Deconz mit denen anderer Devices abgleichen will, funktioniert das nicht ohne weiteres manuelles "übersetzten" der Deconz-Devices. Das ist mir schon klar, aber viele Benutzer sind damit nicht Klar gekommen und haben das Ständig als Fehler gemeldet.
gharchive/issue
2019-05-11T23:36:44
2025-04-01T06:39:05.878832
{ "authors": [ "DNC74", "Jey-Cee", "thurbo" ], "repo": "iobroker-community-adapters/ioBroker.deconz", "url": "https://github.com/iobroker-community-adapters/ioBroker.deconz/issues/63", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
527519787
one_hour Hallo, es soll über die API abfragbar sein, also genau der Watt Wert der in der aktuellen Stunde verbraucht wurde. Also 7 bis 8 Uhr 212 Watt, 8 bis 9 Uhr 1231 Watt etc... Hier die Info von Disconvergy "ja, kann man. Wenn es genau Stunden sein sollen, dann bitte als Resolution “ONE_HOUR” nehmen und den API Endpunkt “/readings” verwenden." Kannst du das als Datenpunkt mit einbauen? cu Deta ah ok danke ich verstehe was er meint schaue es mir an Wäre cool, teste es gerne. Schon mal geschaut? Hallo, wäre mal schön ein kurzes Feedback von dir zu bekommen. also man kann eine bestimmte Zeitspanne abfrage die frage ist was moechte man einbauen und wie. eine Tabelle wo man Zeiten eingibt, ne feste zeit ? https://api.discovergy.com/docs/ no response
gharchive/issue
2019-11-23T06:48:52
2025-04-01T06:39:05.882573
{ "authors": [ "DutchmanNL", "detafun" ], "repo": "iobroker-community-adapters/ioBroker.discovergy", "url": "https://github.com/iobroker-community-adapters/ioBroker.discovergy/issues/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2474740798
Please consider fixing issues detected by repository checker
Notification from ioBroker Check and Service Bot
Dear adapter developer, I'm the ioBroker Check and Service Bot. I'm an automated tool processing routine tasks for the ioBroker infrastructure. I have recently checked the repository for your adapter harmony for common errors and appropriate suggestions to keep this adapter up to date. Please see the result of the check below.
ioBroker.harmony
ERRORS:
[ ] :heavy_exclamation_mark: [E204] Version "1.2.0" listed at common.news at io-package.json does not exist at NPM. Please remove from news section.
WARNINGS:
[ ] :eyes: [W113] Adapter should support compact mode
[ ] :eyes: [W162] js-controller 5.0.0 listed as dependency but 5.0.19 is recommended. Please consider updating dependency at io-package.json.
[ ] :eyes: [W173] Potential sensitive data "password" not listed at "protectedNative" in io-package.json
[ ] :eyes: [W174] Potential sensitive data "password" not listed at "encryptedNative" in io-package.json
[ ] :eyes: [W184] "common.main" is deprecated and ignored. Please remove from io-package.json. Use "main" at package.json instead.
[ ] :eyes: [W184] "common.materialize" is deprecated for admin >= 5 at io-package.json. Please use property "adminUI".
[ ] :eyes: [W505] setTimeout found in "harmony.js", but no clearTimeout detected
[ ] :eyes: [W853] .npmignore found - consider using package.json object "files" instead.
SUGGESTIONS:
[ ] :pushpin: [S522] Please consider migrating to admin 5 UI (jsonConfig).
Please review the issues reported and consider fixing them as soon as appropriate. Errors reported by the repository checker should be fixed as soon as possible. Some of them require a new release to be considered as fixed. Please note that errors reported by the checker might be considered a blocking point for future updates at the stable repository.
Warnings reported by the repository checker should be reviewed. While some warnings can be ignored due to good reasons or a dedicated decision of the developer, most warnings should be fixed as soon as appropriate.
Suggestions reported by the repository checker should be reviewed. Suggestions can be ignored due to a decision of the developer, but they are reported as a hint to use a configuration which might become required in the future or at least is used by most adapters. Suggestions are always optional to follow.
You may start a new check at any time by adding the following comment to this issue: @iobroker-bot recheck
Please note that I (and the server at GitHub) always have plenty of work to do. So it may take up to 30 minutes until you see a reaction. I will drop a comment here as soon as I start processing. Feel free to contact me (@iobroker-bot) if you have any questions or feel that an issue is incorrectly flagged. And THANKS A LOT for maintaining this adapter from me and all users. Let's work together for the best user experience.
your ioBroker Check and Service Bot
Note: This issue replaces issue #190
@mcm1957 for evidence
This issue has been replaced by new issue #192. This issue can be closed.
your ioBroker Check and Service Bot
gharchive/issue
2024-08-20T05:20:08
2025-04-01T06:39:05.896886
{ "authors": [ "ioBroker-Bot" ], "repo": "iobroker-community-adapters/ioBroker.harmony", "url": "https://github.com/iobroker-community-adapters/ioBroker.harmony/issues/191", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1086908277
Allow web-api-only operation
Adapter does not work by default when no bridge is configured; fixes #169.
Tested with Nuki 3.0 Pro without having a bridge. Works as expected. Maybe someone still using the bridge could give it a try to see if everything still works as expected. Thanks.
Tested with Nuki 3.0 Pro and Bridge
@jhubig & @StrathCole could you please check if my update (which resolves the issues in the automatic tests) works in your live environments properly too?
Hello Thiemo, thanks. Works perfectly for me. Just as a reminder, my setup, if needed:
Nuki 3.0 Pro used without the bridge
Node 14.18.1
NPM 6.14.15
Thanks again.
gharchive/pull-request
2021-12-22T15:05:48
2025-04-01T06:39:05.899880
{ "authors": [ "StrathCole", "jhubig", "theimo1221" ], "repo": "iobroker-community-adapters/ioBroker.nuki-extended", "url": "https://github.com/iobroker-community-adapters/ioBroker.nuki-extended/pull/179", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1639120766
Return saltstack_highstate_error every time
Currently we only return saltstack_highstate_error if an error exists. With this setup, we will never clear highstate errors. The fix is to return 0 if there is no error and return 1 whenever an error occurs.
It looks like this might be something you can check for with an integration test? Ensure there won't be a regression
Added, thanks for the reminder!
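The exporter itself is written in Go, but the always-emit pattern the fix describes is the same in any Prometheus client. A hedged sketch using the Python client library for illustration:
```python
from prometheus_client import Gauge

highstate_error = Gauge(
    "saltstack_highstate_error",
    "1 if the last highstate run reported an error, else 0",
)

def record_highstate_result(has_error: bool) -> None:
    # Always set the gauge, so a previous error is cleared on success
    # instead of the stale value lingering forever.
    highstate_error.set(1 if has_error else 0)
```
Emitting 0 on success is what lets alerting rules like `saltstack_highstate_error == 1` resolve automatically.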
gharchive/pull-request
2023-03-24T10:24:56
2025-04-01T06:39:05.944143
{ "authors": [ "Xcalizorz" ], "repo": "ioki-mobility/salt_exporter", "url": "https://github.com/ioki-mobility/salt_exporter/pull/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
170032892
Update flat file parameters, dictionary_close implementation, and .ino tests These changes reflect recent flat file implementation updates. LGTM LGTM
gharchive/pull-request
2016-08-08T21:43:50
2025-04-01T06:39:05.945206
{ "authors": [ "Stickerpants", "danaack", "wpenson" ], "repo": "iondbproject/iondb", "url": "https://github.com/iondbproject/iondb/pull/71", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
378514953
I cannot get full (click) $event on ios - ionic4
I also posted this on Stack Overflow: https://stackoverflow.com/questions/53199458/i-cannot-get-full-click-event-on-ios-ionic4
As for my config, this happens with ionic start --type=angular... in any case:
Ionic:
ionic (Ionic CLI) : 4.3.1 (/usr/local/lib/node_modules/ionic)
Ionic Framework : @ionic/angular 4.0.0-beta.15
@angular-devkit/build-angular : 0.8.7
@angular-devkit/schematics : 0.8.7
@angular/cli : 6.2.7
@ionic/angular-toolkit : 1.1.0
Cordova:
cordova (Cordova CLI) : 8.0.0
Cordova Platforms : ios 4.5.5
Cordova Plugins : cordova-plugin-ionic-keyboard 2.1.3, cordova-plugin-ionic-webview 2.2.1, (and 4 other plugins)
System:
Android SDK Tools : 26.1.1 (/Users/emiliomaciel/Library/Android/sdk)
NodeJS : v10.8.0 (/usr/local/bin/node)
npm : 6.4.1
OS : macOS High Sierra
Xcode : Xcode 10.1 Build version 10B61
I am using Ionic 4 "@ionic/angular": "^4.0.0-beta.15", and Cordova. I am doing a simple button:
<ion-button (click)="myFunc($event)"> My button
and its handler:
myFunc($event) { console.log($event); }
On the computer, running ionic serve, I can see the whole event, access parent nodes, etc. (the main purpose of this is styling objects with the renderer later). BUT... on iOS, the event is only composed of {"isTrusted":true}
The same thing happens if you change the ion-button for a div, or a span, for example.
I am trying to understand why. I run this with ionic cordova prepare ios and then build/run it in Xcode (latest 10.1 (10B61)).
Any ideas? If you need more info... It happens if you just build an Ionic starter project:
ionic start testIOS blank --type=angular
ionic cordova platform add ios
ionic cordova prepare ios
Run in Xcode
Only added the ion-button in the home page, and the click event. Works in ionic serve, not on iOS.
Cheers! Thanks! Emilio
Thanks for your issue! Could you please check if this is still an issue in beta.17? That would be very nice :)
gharchive/issue
2018-11-07T23:33:15
2025-04-01T06:39:06.037715
{ "authors": [ "diver2", "paulstelzer" ], "repo": "ionic-team/ionic", "url": "https://github.com/ionic-team/ionic/issues/16263", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1090045250
Use stream json rpc
This enables us to avoid mucking with low-level stuff and get support for $/cancelRequest automatically.
Still needs work, especially the type Client, which is not reimplemented.
Please ignore the remove-paket commit; that is an optional part.
Works with CSharpLanguageServer, where the main wish was to enable cancellation support, which this PR does.
@baronfel this is a continuation (alternative implementation) of #6 where I converted Server to use the StreamJsonRpc lib like you suggested. This is somewhat functional (apart from reporting errors via LspResult<result>) and probably misc stuff. Does this look ok to you?
@baronfel this is up for review.. some very questionable :) changes were made, and it is very much up for discussion:
this has been tested with csharp-ls for a couple of months by me (and other ppl presumably, nothing bad reported AFAIK)
I have added API so I can build a server w/o a LspServer instance -- that allows me to do some more decoration on functions when setting up the server, see: https://github.com/razzmatazz/csharp-language-server/blob/master/src/CSharpLanguageServer/Server.fs#L1077-L1098
I did not touch module Client (did not convert it to StreamJsonRpc) -- not sure how to test it, so it still uses the "old" internal JsonRpc module -- which should eventually be removed
sorry it took so long, got my new pc delivered, then it broke for weeks, then I was tracking an emacs bug related to dotnet (https://lists.gnu.org/archive/html/emacs-devel/2022-02/msg00009.html) -- then life happened :)
also it is somewhat a shame paket does not support paket.local for .net core projects :( -- really hard to test things when you have to start your own nuget server to check your project integration before publishing. I'm complaining because csproj/sln does not have this problem (even if it has other issues). This is why I was pushing https://github.com/ionide/LanguageServerProtocol/pull/8
sorry for bugging @baronfel but does this need more work from me before getting it merged?
ping @baronfel
Hey! Sorry this took so long for me to get back to - it's great! Really excited to have this in :)
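For context, the $/cancelRequest message that StreamJsonRpc now handles automatically is just an LSP notification carrying the id of the in-flight request. A minimal Python sketch of its wire shape, with framing per the LSP base protocol:
```python
import json

def cancel_notification(request_id):
    """Build an LSP $/cancelRequest notification with base-protocol framing."""
    body = json.dumps({
        "jsonrpc": "2.0",
        "method": "$/cancelRequest",
        "params": {"id": request_id},
    })
    payload = body.encode("utf-8")
    # Content-Length counts the UTF-8 bytes of the JSON body.
    return b"Content-Length: %d\r\n\r\n%s" % (len(payload), payload)

print(cancel_notification(42))
```
Hand-rolling dispatch for this (and propagating cancellation into handlers) is exactly the low-level plumbing the PR delegates to StreamJsonRpc.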
gharchive/pull-request
2021-12-28T17:53:07
2025-04-01T06:39:06.077594
{ "authors": [ "baronfel", "razzmatazz" ], "repo": "ionide/LanguageServerProtocol", "url": "https://github.com/ionide/LanguageServerProtocol/pull/10", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
107496322
Failed to activate the ionide-fsi package
[Enter steps to reproduce below:]
Atom Version: 1.0.15
System: Mac OS X 10.10.5
Thrown From: ionide-fsi package, v1.0.3
Stack Trace
Failed to activate the ionide-fsi package
At Invalid left-hand side in assignment
ReferenceError: Invalid left-hand side in assignment
at FsiService__activate$ (/Users/stefanweber/.atom/packages/ionide-fsi/lib/fsi.js:163:31)
at Fsi__activate$ (/Users/stefanweber/.atom/packages/ionide-fsi/lib/fsi.js:321:5)
at /Users/stefanweber/.atom/packages/ionide-fsi/lib/fsi.js:758:14
at Object.module.exports.IonideFSI.activate (/Users/stefanweber/.atom/packages/ionide-fsi/lib/fsi.js:769:26)
at Package.module.exports.Package.activateNow (/Applications/Atom.app/Contents/Resources/app.asar/src/package.js:245:19)
at /Applications/Atom.app/Contents/Resources/app.asar/src/package.js:226:30
at Package.module.exports.Package.measure (/Applications/Atom.app/Contents/Resources/app.asar/src/package.js:169:15)
at Package.module.exports.Package.activate (/Applications/Atom.app/Contents/Resources/app.asar/src/package.js:218:14)
at PackageManager.module.exports.PackageManager.activatePackage (/Applications/Atom.app/Contents/Resources/app.asar/src/package-manager.js:486:21)
at /Applications/Atom.app/Contents/Resources/app.asar/src/package-manager.js:469:29
Commands
Config
{ "core": { "disabledPackages": [ "browser-plus" ] }, "ionide-fsi": { "FsiPath": "fsharpi" } }
Installed Packages
# User
atom-jshint, v2.0.0
atom-yeoman, v0.3.13
file-icons, v1.6.9
grunt-runner, v0.11.0
ionide-fake, v1.0.3
ionide-fsharp, v1.1.1
ionide-fsi, v1.0.3
ionide-installer, v1.2.1
ionide-paket, v2.0.7
ionide-yeoman, v1.0.1
language-dust-ng, v0.1.1
linter, v1.6.0
linter-js-standard, v3.2.0
minimap, v4.13.3
terminal-panel, v1.14.1
# Dev
No dev packages
Should be fixed in 1.0.4
I get the same exact error with OS: Windows 8.1 64bit, Atom 1.0.15, ionide-fsi 1.0.4
This is a bit of a different issue ;) Does the file the plugin is trying to spawn exist? Do you have F# 4.0 installed? If you have an earlier version of F# installed (or some non-standard installation) there is a place in the ionide-fsi plugin settings to set the path to F# Interactive.
Does the file the plugin is trying to spawn exist?: Yes, it does exist.
Do you have F# 4.0 installed?: Yes I do (I'm actively coding in F# 4.0 with VS).
If you have an earlier version of F# installed (or some non-standard installation) there is a place in the ionide-fsi plugin settings to set the path to F# Interactive.: Yeah, I tried to overwrite it with the full FSI path "c:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\fsi.exe" but no luck. FSI version is 14.0.23020.0. Let me know if you need more details.
I would try providing a case-sensitive path to FSI (so Fsi.exe instead of fsi.exe) (however I doubt it will change anything)
Tried with previous FSI versions: did not work
Tried providing a case-sensitive path to FSI: did not work
I've also tried to manually set the FsiPath setting in config.cson and restart Atom, no luck. Here is the content of config.cson:
"*": "exception-reporting": userId: "024cc103-5b7d-a736-965f-c949354c83e4" welcome: showOnStartup: false core: {} editor: invisibles: {} "ionide-fsi": FsiPath: "C:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\Fsi.exe" linter: {} "ionide-fsharp": {}
Well, the only thing I can suggest is a complete reinstallation of Atom
Try using forward slashes in the path to fsi. :+1:
I already tried with \ and /. No luck. Sorry, I'll try to do more testing later today and keep you posted.
Thanks for your prompt response.
Bug is fixed for me, thanks!
I'm getting the same issue as @juanjoarana in my enterprise environment (Windows 7 x64) when I hit ALT+Enter: 'C:/Program Files (x86)/Microsoft SDKs/F#/4.0/Framework/v4.0/Fsi.exe' could not be spawned. Is it installed and on your path? If so please open an issue on the package spawning the process. I copied ionide-fsi-1.0.6 from GH releases into my packages folder and ran npm install. I do have Fsi.exe in the specified folder.
Same error for me on OSX: "'mono' could not be spawned. Is it installed and on your path? If so please open an issue on the package spawning the process." mono & fsharpi are in /usr/bin
I got the same issue as @juanjoarana; all things are installed with nearly exactly the same system config as his. I have the German version of Windows 7. Btw, this seems related: https://github.com/ionide/ionide-paket/issues/20
I have the same problem. 'C:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\Fsi.exe' could not be spawned. Is it installed and on your path? If so please open an issue on the package spawning the process. I have Windows 10, German version. Fsi.exe is in the FsiPath. Maybe a problem with the German Windows version. Atom has problems with e.g. the German keyboard anyway.
Maybe try using double slashes \\ and escape spaces with slashes \ .
No, still not working.
I tried to create a symlink with: mklink /j C:\Fsi.exe "C:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\Fsi.exe" and used "C:\Fsi.exe" in the FsiPath.. still not working. :( Frustrated.
Any solution?
Up to now... not for me.
On my system with Visual Studio 2013 and F# Power Tools 2.3.0, the path to Fsi.exe had moved. Instead of C:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\Fsi.exe, I found it in C:\Program Files (x86)\Microsoft F#\v4.0\Fsi.exe. However, changing the path in ionide-fsi settings didn't work for me: I'm still getting "'C:\Program Files (x86)\Microsoft F#\v4.0\Fsi.exe' could not be spawned. Is it installed and on your path? If so please open an issue on the package spawning the process." So no solution yet, but perhaps the default path should be changed?
SOLVED! For me, at least. Windows Defender was blocking Fsi.exe from running, for reasons yet unknown. I turned Windows Defender off and tried to run some F# code with Alt-Enter, and it worked. Then I turned Windows Defender back on, and ionide-fsi continued to work. For anyone else still experiencing this issue, please check whether your anti-virus has a real-time virus protection feature that might be interfering with running Fsi.exe.
Update to amend my earlier comment. The Fsi.exe that I found in C:\Program Files (x86)\Microsoft F#\v4.0\Fsi.exe was F# 2.0, and I just didn't notice that fact. I don't have F# 4.0 on my system, so I had to change C:\Program Files (x86)\Microsoft SDKs\F#\4.0\Framework\v4.0\Fsi.exe to C:\Program Files (x86)\Microsoft SDKs\F#\3.1\Framework\v4.0\Fsi.exe and then it ran for me. But turning off Windows Defender before launching Fsi.exe the first time was key; afterwards, I could turn Windows Defender back on and it continued to work.
Update: It looks like Windows Defender had nothing to do with it. I've found a reproducible way to cause this error and then make it go away:
1. Create a new .fsx file. Mine is in C:\Users\Robin\Desktop\trythis.fsx and contains just one line, 3 + 4.
2. Launch Atom. Close all panes, all tabs, and remove all project folders.
3. Open trythis.fsx.
It opens up in a tab and a new "project folder" is created called trythis.fsx. (Instead of a project folder called Desktop, which is what I would have half-expected.)
4. Highlight the 3 + 4 line and press Alt-Enter. Get the "Fsi.exe could not be spawned" error. The Chrome developer console pops up.
5. Close the Chrome developer console. Close the error message. Close the F# Interactive window.
6. Remove the "project folder" that got created in step 3.
7. Highlight the 3 + 4 line and press Alt-Enter. This time, it almost works: the F# Interactive window appears, but doesn't yet run the highlighted code.
8. Highlight the 3 + 4 line and press Alt-Enter again. Result in the F# Interactive window: val it : int = 7
Also, if a real folder is added to the Project Folders pane, the F# Interactive window continues to work. The one constant that I've seen cause failure is if the project folder isn't really a folder, but is a file (like trythis.fsx in my case). I can't add trythis.fsx to the project folder manually -- but if I do "open file" while there's no project folder, and open the trythis.fsx file, it gets added to the project folders pane and then I can't run F# Interactive. Delete the incorrect project folder and now I can run F# Interactive again.
Anyone else experiencing this issue: try removing all project folders, closing the failed F# Interactive windows (otherwise you'll get the "This socket is closed" error), and then running your F# script file with no project folders. Does that make it work for you?
Should be fixed in 1.0.10
gharchive/issue
2015-09-21T12:01:55
2025-04-01T06:39:06.102903
{ "authors": [ "DavidTheBugWriter", "Krzysztof-Cieslak", "juanjoarana", "kurtschelfthout", "leobm", "ptrelford", "rmunn", "s-trooper", "stweb", "susquehanna" ], "repo": "ionide/ionide-fsi", "url": "https://github.com/ionide/ionide-fsi/issues/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
159231177
DOS line endings in automated build.sh
I am unsure if this is the right place to report the problem, but I just noticed that using "new project" creates a build.sh that contains DOS line endings (^M, which can be seen using "cat -v build.sh"), making it unable to build on Ubuntu. "fromdos build.sh" solves the problem easily, but it would be better if it was fixed at the source. Nestor
P.S. I am new to F# and struggling to understand FAKE, but so far Ionide for VS Code has been a pleasure to use :)
This should be fixed in 2.0.0
I encountered this issue with the current version 2.9.4. build.sh was created with \r\n line endings, breaking the build step (from the command line or with ionide-vscode-fake) on OSX.
I just hit the issue with the current version. Does anyone know where the culprit build.sh is? I would be happy to upload a fixed version :)
@nestordemeure @Krzysztof-Cieslak says it is over at https://github.com/fsharp-editing/Forge/tree/templates
Thank you, I just proposed changes for the two build.sh files that were in that repository.
Thanks, both PRs merged. You should get updated templates by running F#: Refresh Project Templates from Code's command palette.
It now builds without problems :) Is there a way to add a regression test for that problem? (It artificially raises the bar of entry for non-Windows users)
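For anyone who hits a template with CRLF endings before the fixed templates land, the fromdos step mentioned above is trivial to reproduce with the Python standard library:
```python
from pathlib import Path

path = Path("build.sh")
data = path.read_bytes()
# Convert DOS (CRLF) endings to Unix (LF); same effect as `fromdos build.sh`.
path.write_bytes(data.replace(b"\r\n", b"\n"))
```
A regression test could run exactly this check in reverse: fail if `b"\r\n"` is found in a freshly generated build.sh.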
gharchive/issue
2016-06-08T17:52:52
2025-04-01T06:39:06.110638
{ "authors": [ "Krzysztof-Cieslak", "Systemcluster", "nestordemeure", "smoothdeveloper" ], "repo": "ionide/ionide-vscode-fsharp", "url": "https://github.com/ionide/ionide-vscode-fsharp/issues/71", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2723844641
Unable to build XCFramework after adding library
Added this library to one of our KMP modules and it seems to do exactly what we want, thanks! Unfortunately it seems to break our iOS XCFramework build. Here's the command we're running:
./gradlew :my:module:assembleMyFrameworkKotlinDebugXCFramework
It fails with this message:
> Task :my:module:assembleDebugIosSimulatorFatFrameworkForMyFrameworkKotlinXCFramework FAILED
fatal error: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/lipo: /Users/me/workspace/repo/my/module/build/bin/iosX64/debugFramework/MyFrameworkKotlin.framework/MyFrameworkKotlin and /Users/me/workspace/repo/my/module/build/bin/iosSimulatorArm64/debugFramework/MyFrameworkKotlin.framework/MyFrameworkKotlin have the same architectures (x86_64) and can't be in the same fat output file
Any ideas? I added the library as a commonMain dep. Our xcf config looks like:
val xcf = XCFramework("MyFrameworkKotlin")
val iosTargets = if (isCi) {
    listOf(iosSimulatorArm64())
} else {
    listOf(iosX64(), iosArm64(), iosSimulatorArm64())
}
iosTargets.forEach {
    it.binaries.framework {
        export(project(":some:other:module"))
        export(libs.ktor.client.darwin)
        baseName = "MyFrameworkKotlin"
        binaryOption("bundleId", "com.company.MyFrameworkKotlin")
        isStatic = true
        xcf.add(this)
    }
}
Hi @edenman, The lipo error seems to signal that the same framework is added twice with the same architecture, but why that is happening for you after you add this library is beyond me. I'm using libsodium in my projects without any problems, although I am not using XCFramework, but building the framework by running the embedAndSignAppleFrameworkForXcode KMP gradle task as a run script phase in the build phases in Xcode; the framework search paths are also set up. I'm not sure if the problem is in your build setup, or if I am not supporting XCFrameworks correctly. In any case I'm limited on the time I can spend on this, so in the short term I guess it's up to you to figure out what is going on.
gharchive/issue
2024-12-06T20:18:12
2025-04-01T06:39:06.113725
{ "authors": [ "edenman", "ionspin" ], "repo": "ionspin/kotlin-multiplatform-libsodium", "url": "https://github.com/ionspin/kotlin-multiplatform-libsodium/issues/54", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1141794317
Interactive visualization of cloud-hosted ocean sonar data

Project Description: Ocean sonar systems, such as echosounders, are the workhorse for studying life in the ocean. They provide continuous observations of fish and zooplankton by transmitting sounds and analyzing the echoes bounced off these animals, just like how medical ultrasound images the interior of the human body. In recent years these systems have been widely deployed on ships, autonomous vehicles, or moorings, bringing in a lot of data that allow scientists to study changes in the marine ecosystem. This project aims to create the capability to interactively visualize large, cloud-based ocean sonar data to accelerate data exploration and discovery. Developments in this project will go hand-in-hand with the ongoing development of the echopype package that handles the standardization, pre-processing, and organization of these data.

Expected Outcomes: The GSoC contributor will work with mentors to develop a new package echoshader that provides core ocean sonar data visualization functionalities based on the HoloViz suite of tools, test configurations for using echoshader widgets in Panel dashboards, and create Jupyter notebooks to demo the use of the combination of tools.

Skills required:
- Python
- Interest in working with oceanographic, acoustic and geospatial data

Bonus skills:
- Cloud computing
- Visualization

Project Size: 175 or 350 h

Difficulty: Moderate

Mentor(s): Wu-Jung Lee (@leewujung), Emilio Mayorga (@emiliom), Valentina Staneva (@valentina-s), Landung "Don" Setiawan (@lsetiawan), Brandon Reyes (@b-reyes)

Hi, I am Dwip Dalal, a sophomore at IIT Gandhinagar. I have been working with machine learning and data science for two years now. My passion for it has motivated me to take part in GSoC. After going through numerous organizations, I found IOOS very interesting; I am especially interested in using my data science and data visualization skills in geospatial analysis. Therefore, I would love to spend my summer working with and contributing to the IOOS organization. I would love to work on the "Interactive visualization of cloud-hosted ocean sonar data" idea.

A brief description about me: In the last two years, I have interned at three companies, and currently I am working with DRDO (Defence Research and Development Organisation of India), making drone swarm AI. I am also working on analyzing fossils using data science with Professor Pankaj Khaana at IIT GN. I have done all this data science and data visualization work in Python, so I am very familiar with the language. I have also done a couple of courses on cloud computing out of sheer curiosity. To know more about me, please check out: https://www.linkedin.com/in/dwip-dalal-a7a440190

Respected Mentor, I would be very grateful if you could please assign me some starter tasks. Looking forward to hearing from you soon. Thank you.

Hey @dwipddalal, nice meeting you! It's great to know that you're interested in this project. Do you want to do an introduction in the project repo echoshader? You will see that we are in the process of populating resources and materials for the project, but we definitely welcome you to start asking questions and brainstorming ideas anytime!

Yes, I would love to introduce myself in the echoshader repo. Could you please suggest in which specific section of the repo I should introduce myself? Should it be in the issues tab with the gsoc 2022 label?

@dwipddalal: we think the best way is for you to ask a question and/or put in some ideas to brainstorm with us (the mentors). You can also make a profile on GitHub that everyone can see, and in your application you will be able to introduce yourself more fully there! :)

Sorry Sir for the late response. What I understood until now: we have to build functionalities for visualization of ocean sonar data using the HoloViz toolkits. Could you please let me know whether my understanding of this project is correct? I forgot to give my GitHub profile: https://github.com/dwipddalal

@dwipddalal: What I meant was that you should ask questions or post ideas for discussion in the echoshader repo linked above.

Okay Sir.

Good evening. I have gone through all the issues and the current thread. Could you guide us to some good starter tasks, and perhaps some guidelines as to what has to be included in the proposal for GSoC?

@harshil15999: Welcome! 😃 Please go ahead to the echoshader repo: https://github.com/OSOceanAcoustics/echoshader and find the relevant links in the README: For more information about joining the project as GSoC contributor, check out the IOOS GSoC Contributor Guidance and the Echoshader GSoC Contributor's Guide. And start a discussion with us!

Hello, my name is Bitan Biswas and I am a Master's student in Computer Science at the University of Calcutta and a Research Assistant at Cardiff University. I have been working in the field of data science for three years, and my urge to learn more has got me to take part in Summer of Code. The work by IOOS seems extremely interesting and aligns with my interests in acoustic sound event detection and hyperspectral image visualization. I would be highly obliged if I am allowed to contribute to the echoshader repository and get involved in the project "Interactive visualization of cloud-hosted ocean sonar data". In the last two years, I have interned at six companies, and have been working closely in the fields of data analytics, computer vision, and NLP. I used the Bokeh package a couple of times, which seems relevant to the HoloViz tools. I have fluency in Python and other object-oriented programming languages, and a preliminary understanding of cloud computing too. My LinkedIn is: https://www.linkedin.com/in/bitan-biswas-2544641b0/ and I would like to introduce myself in the echoshader repo and engage with new ideas.

Closing all past GSoC issues. Please open a new issue if you want to participate in GSoC23.
gharchive/issue
2022-02-17T19:53:07
2025-04-01T06:39:06.135470
{ "authors": [ "bitanb1999", "dwipddalal", "harshil15999", "leewujung", "ocefpaf" ], "repo": "ioos/gsoc", "url": "https://github.com/ioos/gsoc/issues/16", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
714302552
ProgramToFindOutQuotientAndRemainder

I'm trying this for the first time, so I'm not sure if I made the pull request correctly or not, but I have finished the problem and checked it. Thanks!

Sorry, I didn't know this... I'll try to get it right next time.
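For reference, the computation behind the task title is just integer division with remainder. A minimal sketch, in Python for illustration even though the tasks.c repo collects C solutions:

```python
def quotient_and_remainder(dividend: int, divisor: int) -> tuple[int, int]:
    """Return (quotient, remainder) for two integers; divisor must be non-zero."""
    return divmod(dividend, divisor)

print(quotient_and_remainder(17, 5))  # (3, 2)
```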
gharchive/pull-request
2020-10-04T12:23:04
2025-04-01T06:39:06.159773
{ "authors": [ "Parth0248" ], "repo": "iot-lab-kiit/tasks.c", "url": "https://github.com/iot-lab-kiit/tasks.c/pull/106", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
871031508
Problem with a PIN profile

Hi, since this morning I have a problem with my wallet. My PIN doesn't work. In the end I had to create a second profile and recover my wallet with my IOTAs. But I'm worried, because the PIN of that first profile worked well this morning and now it is not working. So I would like to delete that profile, because I think it is a direct access to my coins, and if someone has changed my PIN he could now send my coins to another account. Sorry for my English, it's not my native language.

I have the same problem, my PIN is not working anymore.

@Mcjevo In the current version that is an issue with the language and currency when using other languages. Please try changing the language back to English and restart Firefly, and see if you can log in to your first profile. Then please continue to use Firefly in English until a new version is released. And about "if someone has changed my PIN now he can send my coins to another account": this is not possible. You need the Stronghold password to send funds. So your funds are safe.

I changed the language and restarted the wallet app and I have the same problem: my PIN doesn't work and I end up with the message "3 incorrect attempts" and I have to wait. I suppose that the password cannot be changed from the settings of this profile. By the way, another user wrote to me that they have the same problem with the PIN today, but here I can't see the reply. Sorry, it's the first time that I write on GitHub. Anyway, thank you very much for your reply, t00k, and sorry again for my English.

Okay, I helped a user earlier today and the language was his issue. No need to say sorry for your English. Also, there is no risk to your funds with the first profile. If you have your 24 phrases and the Stronghold backed up you can re-install the Firefly wallet and set up a new profile with those 24 phrases. As long as you have your 24 phrases or the Stronghold you can always access your IOTA. Are you using Windows?

Yes, I'm using Windows 10. Actually, instead of re-installing the app I opened a new profile and selected to recover the wallet with the 24 phrases, and I could see that everything is okay. Although I would like to delete that first profile to avoid risk, to delete that profile I believe I have to access it first with the PIN :).

Yes, but the only way to remove the first profile is to get access to it first, which you will probably not be able to. What you can do is to browse into C:\Users\Username\AppData\Roaming\Firefly and remove that Firefly folder. This will remove Firefly from your computer and let you re-install it. Then you just set up with the same 24 words again (as you did with your second profile) and you will see the funds again, and only have one profile.

Well, thank you very much, t00k. Your solution has worked well. I was able to delete the profile and everything is fine. Thanks a lot.

Happy to hear it. Can you please mark this issue as solved here on GitHub? Have a great night!
gharchive/issue
2021-04-29T13:43:42
2025-04-01T06:39:06.167187
{ "authors": [ "Mcjevo", "etiiiR", "t00k" ], "repo": "iotaledger/firefly", "url": "https://github.com/iotaledger/firefly/issues/1069", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
667673565
Fix order of the purple set

The RFC describes an ordering algorithm based on depth-first search, where you travel down the trunk of a milestone first, then down the branch. I tried the example, and I think maybe O and N were the wrong way round. So you would:

1. Apply I
2. Go back to O
3. Apply K
4. Apply O
5. Go back to S
6. Apply N
7. Apply S
8. Apply V

Hmm, you don't go back to O after applying I, you go back to K? You always want to go trunk first then branch, so from V it would be (skipping the upper part with the trunk) V -> S -> N -> K -> I.

Where would O go?

You access O through S, and O is the branch of S, so N (as trunk of S) has to come first.

Like in my changes? ~~D, G, J, L, M, R, I, K, N, O, S, V~~ {D, G, J, L, M, R, I, K, O, N, S, V}

No, not like in your change. In your change O comes first. Why does O come first when it's a branch and we say the trunk has to come first?
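For context, here is a minimal sketch of the trunk-first ordering being discussed: a plain post-order DFS that recurses into the trunk before the branch. The graph below is hypothetical, wired up only from the edges the thread states (V's trunk is S, S's trunk is N, S's branch is O, N's trunk is K, K's trunk is I); where O actually attaches is exactly the point of disagreement, so treat these edges as an assumption, not the RFC's real DAG:

```python
# Hypothetical fragment of the DAG, reconstructed from the thread:
# each node maps to (trunk, branch); None marks edges outside this fragment.
graph = {
    "V": ("S", None),
    "S": ("N", "O"),
    "N": ("K", None),
    "K": ("I", None),
    "I": (None, None),
    "O": (None, None),
}

def apply_order(node, graph, seen=None, order=None):
    """Post-order DFS: trunk subtree first, then branch, then the node itself."""
    if seen is None:
        seen, order = set(), []
    if node is None or node in seen:
        return order
    seen.add(node)
    trunk, branch = graph[node]
    apply_order(trunk, graph, seen, order)
    apply_order(branch, graph, seen, order)
    order.append(node)
    return order

print(apply_order("V", graph))  # ['I', 'K', 'N', 'O', 'S', 'V']
```

With O attached as S's branch, N comes before O, which is the reviewer's reading; attaching O lower down (e.g. as a branch of K) would flip the two, which is the author's reading.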
gharchive/pull-request
2020-07-29T08:58:41
2025-04-01T06:39:06.172417
{ "authors": [ "JakeSCahill", "thibault-martinez" ], "repo": "iotaledger/protocol-rfcs", "url": "https://github.com/iotaledger/protocol-rfcs/pull/21", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2322788385
🛑 Backup KMS activation server 01 is down

In 0e6e80c, Backup KMS activation server 01 (s11.ikms.eu.org) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: Backup KMS activation server 01 is back up in ec6435c after 18 minutes.
gharchive/issue
2024-05-29T09:08:59
2025-04-01T06:39:06.178225
{ "authors": [ "iougemini" ], "repo": "iougemini/ikms-uptime", "url": "https://github.com/iougemini/ikms-uptime/issues/1781", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2292459739
🛑 Backup KMS activation server 01 is down

In 0f03413, Backup KMS activation server 01 (s11.ikms.eu.org) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: Backup KMS activation server 01 is back up in b42395e after 9 minutes.
gharchive/issue
2024-05-13T10:44:56
2025-04-01T06:39:06.180906
{ "authors": [ "iougemini" ], "repo": "iougemini/ikms-uptime", "url": "https://github.com/iougemini/ikms-uptime/issues/300", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2303826997
🛑 Backup KMS activation server 01 is down

In 84f7c28, Backup KMS activation server 01 (s11.ikms.eu.org) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: Backup KMS activation server 01 is back up in 34f1c5a after 9 minutes.
gharchive/issue
2024-05-18T03:33:36
2025-04-01T06:39:06.183309
{ "authors": [ "iougemini" ], "repo": "iougemini/ikms-uptime", "url": "https://github.com/iougemini/ikms-uptime/issues/737", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2305006288
🛑 Backup KMS activation server 01 is down

In c5f9215, Backup KMS activation server 01 (s11.ikms.eu.org) was down:
- HTTP code: 0
- Response time: 0 ms

Resolved: Backup KMS activation server 01 is back up in 02ab1a2 after 10 minutes.
gharchive/issue
2024-05-20T03:41:00
2025-04-01T06:39:06.185682
{ "authors": [ "iougemini" ], "repo": "iougemini/ikms-uptime", "url": "https://github.com/iougemini/ikms-uptime/issues/931", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
949913762
Rendering a different menu during runtime

I changed the provided example to this one. However, the menu does not get updated. When clicking on a StandardItem it panics with the following error:

```
thread '<unnamed>' panicked at 'index out of bounds: the len is 2 but the index is 3', /home/projects/ksni/src/service.rs:702:32
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```

Is it possible to update the menu during runtime?

Fixed! Thank you for the report!
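For illustration, a minimal language-agnostic sketch of the failure mode behind that panic, with made-up menu contents: an activation event carries an index recorded against the old menu, and arrives after the menu has been rebuilt with fewer items:

```python
menu = ["File", "Edit", "View", "Help"]   # user clicks item 3 ("Help")...
menu = ["File", "Quit"]                   # ...but the menu is rebuilt first

clicked_index = 3
if clicked_index < len(menu):             # the bounds check the panic suggests was missing
    print("activated", menu[clicked_index])
else:
    print("stale event for index", clicked_index, "- menu now has", len(menu), "items")
```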
gharchive/issue
2021-07-21T16:50:55
2025-04-01T06:39:06.193696
{ "authors": [ "emirror-de", "iovxw" ], "repo": "iovxw/ksni", "url": "https://github.com/iovxw/ksni/issues/9", "license": "Unlicense", "license_type": "permissive", "license_source": "github-api" }
2380415432
Add Cargo Handling Equipment Operational Performance

Still some open questions, thus a draft PR.

Replaced with https://github.com/ioxio-dataspace/ioxio-io-definitions/pull/11
gharchive/pull-request
2024-06-28T13:15:22
2025-04-01T06:39:06.194766
{ "authors": [ "joakimnordling", "lietu" ], "repo": "ioxio-dataspace/ioxio-io-definitions", "url": "https://github.com/ioxio-dataspace/ioxio-io-definitions/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
813329622
Write utility structs IgnoredAny and Opaque

IgnoredAny, similar to serde's https://docs.serde.rs/serde/de/struct.IgnoredAny.html: a thing that just reads and skips a value.

Opaque, similar to https://docs.serde.rs/serde_json/value/struct.RawValue.html or https://docs.rs/actyxos_sdk/0.5.0/actyxos_sdk/struct.Opaque.html: reads a value and stores it in a raw array.

This is needed mostly for Dag-CBOR for now, but in the long term it might be useful to also have the same type work for the other codecs.

Not sure it is a problem, but e.g. IgnoredAny can only implement Decode. And for Opaque/RawValue it would probably be best to parametrize it with the codec, so nobody accidentally reads a JSON RawValue and then writes it to a CBOR stream... :-)

If it helps, here are my experiments to write a RawValue:

```rust
pub struct OpaqueDagCbor(Box<[u8]>);

/// Parses the CBOR item at the current position and returns the length.
///
/// Fails if there is no valid cbor item at the current position, or if the item.
fn next_cbor_len<R: io::Read + io::Seek>(r: &mut R) -> anyhow::Result<usize> {
    let p0 = r.seek(io::SeekFrom::Current(0))?;
    // just scrape the references to traverse the cbor, for now
    let mut cids = Vec::new();
    let e1 = <Ipld as References<DagCborCodec>>::references(DagCborCodec, r, &mut cids);
    let p1 = r.seek(io::SeekFrom::Current(0))?;
    // seek to the old position even if references failed
    r.seek(io::SeekFrom::Start(p0))?;
    // bail if traversal produced an error
    e1?;
    let len = usize::try_from(p1 - p0)?;
    Ok(len)
}

impl codec::Encode<DagCborCodec> for OpaqueDagCbor {
    fn encode<W: io::Write>(&self, _: DagCborCodec, w: &mut W) -> anyhow::Result<()> {
        Ok(w.write_all(&self.0)?)
    }
}

impl codec::Decode<DagCborCodec> for OpaqueDagCbor {
    fn decode<R: io::Read + io::Seek>(_: DagCborCodec, r: &mut R) -> anyhow::Result<Self> {
        let len = next_cbor_len(r)?;
        let mut buf: Box<[u8]> = vec![0u8; len].into();
        r.read_exact(buf.as_mut())?;
        Ok(Self(buf))
    }
}

impl TryReadCbor for OpaqueDagCbor {
    fn try_read_cbor<R: io::Read + io::Seek>(r: &mut R, _major: u8) -> anyhow::Result<Option<Self>> {
        let len = next_cbor_len(r)?;
        let mut buf: Box<[u8]> = vec![0u8; len].into();
        r.read_exact(buf.as_mut())?;
        Ok(Some(Self(buf)))
    }
}
```

Implemented in https://github.com/ipfs-rust/libipld/pull/98
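Distilled from the decode implementation above, the raw-capture trick is: remember the cursor position, walk the item once to measure its length, rewind, then read exactly that many bytes. A minimal sketch of the pattern, with the hypothetical measure_len callable standing in for the CBOR traversal:

```python
def read_raw_item(stream, measure_len):
    """Capture the next encoded item as raw bytes without decoding it."""
    start = stream.tell()
    length = measure_len(stream)  # walks the item; may leave the cursor anywhere
    stream.seek(start)            # rewind to the item's start
    return stream.read(length)    # the opaque bytes, re-emittable verbatim
```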
gharchive/issue
2021-02-22T09:45:55
2025-04-01T06:39:06.199797
{ "authors": [ "rklaehn" ], "repo": "ipfs-rust/libipld", "url": "https://github.com/ipfs-rust/libipld/issues/95", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }