Dataset schema: id (string, length 4–10), text (string, length 4–2.14M), source (string, 2 classes), created (timestamp[s], ranging 2001-05-16 21:05:09 to 2025-01-01 03:38:30), added (string date, ranging 2025-04-01 04:05:38 to 2025-04-01 07:14:06), metadata (dict).
360702248
Update README to use current version The currently released version is 4.0.1, so I just updated the version in the installation instructions. Pull Request Test Coverage Report for Build 244 0 of 0 changed or added relevant lines in 0 files are covered. No unchanged relevant lines lost coverage. Overall coverage remained the same at 89.423% Totals Change from base Build 236: 0.0% Covered Lines: 186 Relevant Lines: 208 💛 - Coveralls Fixed in 5.0.
gharchive/pull-request
2018-09-17T01:51:33
2025-04-01T06:38:22.850436
{ "authors": [ "corroded", "coveralls", "devinus" ], "repo": "devinus/poison", "url": "https://github.com/devinus/poison/pull/178", "license": "0BSD", "license_type": "permissive", "license_source": "github-api" }
1362907135
zu1k's docker image got removed The docker image is gone. I see your app is running fine. Would you help restore it, or provide the exported .tar file of that image? How can I help with that? Found the release here. Should I/you close this issue, or is there anything to be done? It's alive again now.
gharchive/issue
2022-09-06T08:32:39
2025-04-01T06:38:22.870318
{ "authors": [ "James4Ever0", "JounQin" ], "repo": "devockr/deeplx", "url": "https://github.com/devockr/deeplx/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
459904092
Enable h2-console for debug purposes I have tricked the security config to make the h2-console work. Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it. @maciejmalecki could you please sign the CLA in order to contribute to devonfw?
gharchive/pull-request
2019-06-24T13:51:09
2025-04-01T06:38:22.872630
{ "authors": [ "CLAassistant", "hohwille", "maciejmalecki" ], "repo": "devonfw-forge/keywi", "url": "https://github.com/devonfw-forge/keywi/pull/19", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
951853867
Update 정상동기.md Please create a PR each time you complete a challenge mission. Team name: Please leave your team name and your GitHub ID. Team name: 정상동기 GitHub ID: justinyoo Completed challenges: Select the completed challenges and provide the relevant information. [x] Azure Static Web Apps challenge completed [ ] GitHub Actions challenge completed [ ] Social media proof-shot challenge completed [ ] Azure Static Web Apps repository submission challenge completed [ ] Azure Static Web Apps URL submission challenge completed [ ] Blog review URL submission challenge completed /aswasignoff /aswasignoff
gharchive/pull-request
2021-07-23T19:47:12
2025-04-01T06:38:22.923058
{ "authors": [ "justinyoo" ], "repo": "devrel-kr/HackaLearn", "url": "https://github.com/devrel-kr/HackaLearn/pull/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1509939780
Bug: Paradise Lost is breaking some leaf textures from other mods due to Christmas ornaments. What happened? A bug happened! To replicate: Install Paradise Lost & BYG (or any mod that adds leaves). Some leaves will not have textures. Mod Version Beta 1.6.9 1.18.2 Fabric API Version 0.67.0 Relevant log output https://gist.github.com/FluffyBumblebees/3862e4b0fff36bc84c82630e7c72ff42 Other mods BYG Additional Information No response It seems that I was able to replicate this bug by installing Sodium with BYG and Paradise Lost; when I remove Sodium, both Paradise Lost and BYG leaves get their textures back. For me: BYG + Paradise Lost (no Sodium) = load BYG + Sodium = load Paradise Lost + Sodium = does not load BYG + Paradise Lost + Sodium = does not load Mod Version Release 1.4.7 1.18.2 Fabric API Version 0.67.0 Ah, it's a Sodium incompat. Kinda annoying that there isn't a config to at least turn this off until there's a fix. I am currently using these 3 mods with my friend on a modded server; from his point of view the BYG and Paradise Lost leaves do load, but chests from Paradise Lost do not load. We actually don't have Christmas textures for the chests; that's why they're untextured. I believe the leaf problems are fixed in the 1.19.x versions however, but I will make a note of both. 1.19.2 Modpack MedievalMC users have reported this exact issue as well. Most notably, leaf textures from BYG are missing and are replaced with the purple/black texture. This is fixed in my PR.
gharchive/issue
2022-12-24T04:45:22
2025-04-01T06:38:22.929422
{ "authors": [ "FluffyBumblebees", "Frontear", "MBatt1", "aeaver" ], "repo": "devs-immortal/Paradise-Lost", "url": "https://github.com/devs-immortal/Paradise-Lost/issues/748", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2521668429
Fix the issue where custom containers cannot be killed. Fixes https://github.com/devsapp/fc3/issues/15 It would be best to sign the DCO. Signed.
gharchive/pull-request
2024-09-12T08:09:47
2025-04-01T06:38:22.931492
{ "authors": [ "ghostoy" ], "repo": "devsapp/fc3", "url": "https://github.com/devsapp/fc3/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2600452483
chore: restructure into a monorepo using pnpm Description Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change. Fixes # (issue) Type of change [ ] Bug fix (non-breaking change which fixes an issue) [x] New feature (non-breaking change which adds functionality) [x] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] This change requires a documentation update How Has This Been Tested? Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration [ ] Test A [ ] Test B Checklist: [ ] The title of the PR states what changed and the related issues number (used for the release note). [ ] Does this PR require documentation updates? [ ] I've updated documentation as required by this PR. [ ] I have performed a self-review of my own code [ ] I have commented my code, particularly in hard-to-understand areas Remove linter.py, sentry.sh Comment out custom.d.ts, vite-env
gharchive/pull-request
2024-10-20T13:20:10
2025-04-01T06:38:22.944183
{ "authors": [ "Elessar1802", "eshankvaish" ], "repo": "devtron-labs/dashboard", "url": "https://github.com/devtron-labs/dashboard/pull/2140", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
204813617
ERROR in BusyModule is not an NgModule I get this error when I import BusyModule. Or does this library only work on Angular 2.0.0? "dependencies": { "@angular/common": "^2.3.1", "@angular/compiler": "^2.3.1", "@angular/core": "^2.3.1", "@angular/forms": "^2.3.1", "@angular/http": "^2.3.1", "@angular/platform-browser": "^2.3.1", "@angular/platform-browser-dynamic": "^2.3.1", "@angular/router": "^3.3.1", "angular2-busy": "^1.0.2", "angularfire2": "^2.0.0-beta.7", "bootstrap": "^3.3.7", "core-js": "^2.4.1", "firebase": "^3.6.8", "ng2-bootstrap": "^1.3.2", "rxjs": "^5.0.1", "ts-helpers": "^1.1.1", "zone.js": "^0.7.2" }, "devDependencies": { "@angular/compiler-cli": "^2.3.1", "@types/jasmine": "2.5.38", "@types/node": "^6.0.42", "angular-cli": "1.0.0-beta.26", "codelyzer": "~2.0.0-beta.1", "jasmine-core": "2.5.2", "jasmine-spec-reporter": "2.5.0", "karma": "1.2.0", "karma-chrome-launcher": "^2.0.0", "karma-cli": "^1.0.1", "karma-jasmine": "^1.0.2", "karma-remap-istanbul": "^0.2.1", "protractor": "~4.0.13", "ts-node": "1.2.1", "tslint": "^4.3.0", "typescript": "~2.0.3" } same issue same issue same issue I have the same problem. same issue Same same Has anyone fixed this bug? Same error for me with: "@angular/compiler-cli": "^2.3.1", "angular-cli": "1.0.0-beta.24", "typescript": "~2.0.3" same Any idea how to fix this? same issue!! same issue. any idea how to fix? same! any idea? the same issue. any idea when it will be fixed? Same error for me with: "@angular/compiler-cli": "^2.4.8", "angular-cli": "1.0.0-beta.32.3", Same issues "@angular/compiler-cli": "^2.3.1", "angular-cli": "1.0.0-beta.28.3", Same issue here after upgrading to @angular/cli@latest - is there a fix in the works? @angular/cli: 1.0.0-beta.32.3 node: 7.3.0 os: win32 x64 @angular/cli: 1.0.0-beta.32.3 @angular/common: 2.4.8 @angular/compiler: 2.4.8 @angular/compiler-cli: 2.4.8 @angular/core: 2.4.8 @angular/forms: 2.4.8 @angular/http: 2.4.8 @angular/platform-browser: 2.4.8 @angular/platform-browser-dynamic: 2.4.8 @angular/router: 3.4.8 +1 A possible workaround, until it is patched, is to comment out imports: [ ... //,BusyModule ...] Then ng build -w or whatever you prefer. Once it is built, remove the comments. It just throws a fit during the build but it still works fine. +1 same issue yep! same issue... same here Same here Changing the order in app.module.ts from @NgModule({ imports: [ ... BusyModule, ], declarations: [ ... ], ... } to @NgModule({ declarations: [ ... ], imports: [ ... BusyModule, ], ... } worked for me same here :( @devyumao, you can take a look at this issue: https://github.com/angular/angular-cli/issues/3426#issuecomment-269673735 maybe it will be able to help. Do you plan to keep the library updated or is it abandoned currently? I have fixed this error. Please use this until the official module is fixed. https://github.com/dinusuresh/angular2-busy Thank you @dinusuresh! The solution you provided works on my side too! 👍 Why don't you consider issuing a pull request to https://github.com/devyumao/angular2-busy? I bet this would help @devyumao deal with this issue faster. Yay! Works great! Thanks, Dinesh! Thanks a lot @dinusuresh 👍 your fix also worked for me. Thanks @dinusuresh it works for me too!!! I updated the "ts-metadata-helper" and "angular2-dynamic-component" packages in my project too... Glad I could help 😄 @superKalo I have created a pull request, but considering the author is not active ATM I do not know if and when it will be accepted. Can anyone please explain to me how to solve this problem? I didn't understand what I have to change :) @M4NC1O Sure. If you are looking to just get it working, then in your package.json file make sure the angular2-busy line is like this: "angular2-busy": "https://github.com/dinusuresh/angular2-busy.git" It works! Thank you! Thank you! It works well.. thanks When I build --aot I lose the spinner and the label "Loading..". Does anyone have the same issue? @dinusuresh: started getting error "DynamicComponentModule is not an NgModule".. @Niraj-Sharma Can you please post your package.json? It is working well on my end; the only warning is the rxjs version 5.0.2 which is used by angular2-dynamic-component @M4NC1O I will look into it and let you know. @Niraj-Sharma https://github.com/devyumao/angular2-busy/issues/39 Now the module has been fixed. Thanks, @dinusuresh! Your solution is great. 👍 Also thanks to @superKalo. The issue you mentioned is very helpful. Sorry for the late reply. The library will be kept updated :) @Niraj-Sharma Now the module does not rely on angular2-dynamic-component any more. Please update to the latest version. @devyumao Thank you Hello, I followed the solution by dinusuresh, but it's not working. Please provide a solution. I also wanted to ask whether it will create a separate instance for different components if we add it in different components. Currently, I am using the loader with observables, but the problem is the loader gets activated for any request to the server, just because of the service we are using. Please provide a solution. @dinusuresh
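To make the module-ordering workaround from this thread easier to reuse, here is a minimal sketch; it is an illustration rather than the library's documented setup, and AppComponent stands in for whatever root component your project actually declares:

```typescript
// app.module.ts, a sketch of the workaround reported in this thread.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { BusyModule } from 'angular2-busy';
import { AppComponent } from './app.component';

@NgModule({
  // Several commenters reported that listing `declarations` before
  // `imports` avoided the "BusyModule is not an NgModule" build error.
  declarations: [AppComponent],
  imports: [BrowserModule, BusyModule],
  bootstrap: [AppComponent],
})
export class AppModule {}
```

If the ordering alone does not help, the other fix from the thread is to point the angular2-busy entry in package.json at the patched fork ("angular2-busy": "https://github.com/dinusuresh/angular2-busy.git") until the upstream module is repaired.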
gharchive/issue
2017-02-02T08:50:30
2025-04-01T06:38:22.972444
{ "authors": [ "GuiFSimoes", "M4NC1O", "Niraj-Sharma", "PatilPritam", "SerhiiTsybulskyi", "SimoneMSR", "Trenrod", "arianul", "bashoogzaad", "bgaillard", "calvingferrando18", "chintharr", "danilocubo", "devyumao", "dinusuresh", "emidel", "flexkiran", "gepisolo", "hambardzumyan-mane", "justicewebtech", "kthomas80", "marciomsm", "michalzfania", "nolafs", "oslanier", "qcnguyen", "rshatf", "slesarevns", "smainz", "stanciupaul", "superKalo", "tim-hoffmann", "tmwnni", "trueflywood" ], "repo": "devyumao/angular2-busy", "url": "https://github.com/devyumao/angular2-busy/issues/33", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
300583569
Dexie is sometimes hard to debug because of missing context in error event object I'm using window.addEventListener('unhandledrejection', callback) for global error handling of Dexie.js. My problem is that I often find myself in a situation where it's hard to debug my code because the error event object does not give enough meaningful information about where the error occurred in my code. Here is a very distilled example of a type of error which I had a hard time debugging: window.addEventListener('unhandledrejection', function(e) { // The output of the error object does not give any meaningful information about where the error occurred console.log(e); }); var db = new Dexie("TestDuplicateKey"); db.version(1).stores({ test: 'id' }); db.open().then(function() { return db.test.add({ id: 1 }); }).then(function() { return db.test.add({ id: 1 }); }); Fiddle: https://jsfiddle.net/1ksfk1hv/ Here is all information I get from the error event object in the console output: message: "Key already exists in the object store." name: "ConstraintError" stack: "Error at getErrorWithStack (https://unpkg.com/dexie@2.0.1/dist/dexie.js:322:12) at new DexieError (https://unpkg.com/dexie@2.0.1/dist/dexie.js:451:19) at mapError (https://unpkg.com/dexie@2.0.1/dist/dexie.js:481:14) at handleRejection (https://unpkg.com/dexie@2.0.1/dist/dexie.js:965:14) at IDBRequest. (https://unpkg.com/dexie@2.0.1/dist/dexie.js:4220:9) at IDBRequest. (https://unpkg.com/dexie@2.0.1/dist/dexie.js:1178:23)" I'm not sure if I should do something differently, if this is a limitation I have to live with, or if this is something that could be improved in Dexie.js. Yes, you can set Dexie.debug to true to enable long call stacks: Dexie.debug = true; I updated the fiddle accordingly and after running it, the log will tell you the exact line where the error occurred, see screenshot: Note: Dexie.debug defaults to true when served from localhost. http://dexie.org/docs/Dexie/Dexie.debug Thank you - this helps a lot! I'm running my tests in a Jasmine suite and therefore Dexie.debug has been set to false by default.
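Putting the two pieces of this thread together, here is a minimal sketch that combines Dexie.debug with a global rejection handler; the table name and error handling are illustrative, not a required setup:

```typescript
// Enable long, async-aware stack traces before opening the database.
// (Dexie.debug defaults to true only when served from localhost.)
import Dexie from 'dexie';

Dexie.debug = true;

window.addEventListener('unhandledrejection', (event) => {
  // With Dexie.debug enabled, event.reason.stack should now point at
  // the application line that triggered the failing operation.
  console.error(event.reason);
});

const db = new Dexie('TestDuplicateKey');
db.version(1).stores({ test: 'id' });

// Adding the same primary key twice produces the ConstraintError
// discussed above, now with a usable call stack.
db.open()
  .then(() => db.table('test').add({ id: 1 }))
  .then(() => db.table('test').add({ id: 1 }));
```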
gharchive/issue
2018-02-27T10:59:19
2025-04-01T06:38:23.008840
{ "authors": [ "dasboe", "dfahlander" ], "repo": "dfahlander/Dexie.js", "url": "https://github.com/dfahlander/Dexie.js/issues/669", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
59360487
parameters information (wcbench) Hi Daniel, I'd like to find out more about the "cbench_max, cbench_min, cbench_avg, one_min_load, five_min_load, fifteen_min_load" parameters. What exactly are these parameters? How are they calculated? Thanks :) With these parameters: I don't understand these values: and the average responses/sec (817.41): are these the answers that the "switches" receive from the "controller"? What exactly are these parameters? They are described in detail in the WCBench Results section of the README. How are they calculated? This array contains most of the commands used to gather system stats, including all of the *_load ones you asked about. The CBench min/max/avg values are parsed from CBench's result output. Here's the relevant code. Are these the answers that the "switches" receive from the "controller"? I don't think I understand what you mean by this - can you restate it more clearly? You can find general information about system load from many places, including this wiki article. Daniel, excuse me, correct me if I'm wrong; "CBench is a somewhat classic SDN controller benchmark tool. It blasts a controller with OpenFlow packet-in messages and counts the rate of flow mod messages returned." Is the "cbench_avg" parameter calculated from these returned messages? Is the "cbench_avg" parameter calculated from these returned messages? Yes, it's the average packet_ins/flow_mods per second. My slides from LinuxCon provide a pretty good overview of CBench's algorithm. Perfect ;) Thank you!!
gharchive/issue
2015-02-28T21:56:48
2025-04-01T06:38:23.015746
{ "authors": [ "dfarrell07", "stefanoperone" ], "repo": "dfarrell07/wcbench", "url": "https://github.com/dfarrell07/wcbench/issues/59", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
105075444
MEF exports with IODataEndpoint or IODataEndpointData must support version information Currently, MEF endpoints that are loaded via the IODataEndpoint interface do not specify their version. When using these endpoints more intensively it will become difficult to manage or detect their versions. Therefore it is suggested to either implement a static property on IODataEndpointData which specifies the version of the assembly/plugin, or to add a method/property to IODataEndpoint that returns the version. The disadvantage of having the version information supplied in IODataEndpointData is that it is static, but it is then very obvious (within the data annotation) which version the plugin is supposed to have. If the version information was supplied via IODataEndpoint then it could be read from the assembly or somewhere else (which could lead to a more consistent version scheme and could be detectable from the outside via file information). Both of these suggested changes are breaking changes! @rufer7 what do you think of this? The downside is that we have IODataEndpoint inside the utilities package, so everyone upgrading this package for other reasons will run into that breaking change. I know that breaking changes are not the way to go, but this is very early in the dev process of that component, so I think we should go for it. @dfch Providing the version in IODataEndpoint seems to be a good approach. Another option could be reading the assembly version itself. The disadvantage is that with this approach the version gets increased with every build. Exposing the version by a method or field seems to be more flexible. I would suggest taking the approach you suggested. Breaking changes in an early development state should not be the reason for not doing it.
gharchive/issue
2015-09-06T06:43:57
2025-04-01T06:38:23.020079
{ "authors": [ "dfch", "rufer7" ], "repo": "dfch/biz.dfch.CS.System.Utilities", "url": "https://github.com/dfch/biz.dfch.CS.System.Utilities/issues/9", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1195292529
not saving correctly This is the part of my script that exports the data to Excel: foreach($tedserver in $tedservers) { $kbtenables | Where-Object {$_.'dns name' -match $tedserver} | Export-Excel -Path 'D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx' -WorksheetName Ted-KB -Append -AutoSize $othertenables | Where-Object {$_.'dns name' -match $tedserver} | Export-Excel -Path 'D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx' -WorksheetName Ted-Other -Append -AutoSize } I have the same thing about 6 times, one for each tech. When I run one loop everything works, but doing 2 or more I get this error message: MethodInvocationException: C:\Users\my user\Documents\PowerShell\Modules\ImportExcel\7.4.1\Public\Export-Excel.ps1:679:20 Line | 679 | else { $pkg.Save() } | ~~~~~~~~~~~ | Exception calling "Save" with "0" argument(s): "Error saving file D:\Powershell Scripts\Data\WindowsUpdates\Tenable.xlsx" What I end up with is a spreadsheet with 2 sheets, which is from the last loop, instead of all sheets for all techs. Thank you I went back to this the next day, closed the code and reran the script, and everything works Sometimes it only works every other Thursday ;-)
gharchive/issue
2022-04-06T23:13:43
2025-04-01T06:38:23.045501
{ "authors": [ "computechrr", "dfinke" ], "repo": "dfinke/ImportExcel", "url": "https://github.com/dfinke/ImportExcel/issues/1155", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1057403239
Draft: Galactic-Devel (Help wanted) Hi there, I started porting this package to ROS2 Galactic a while ago. It has been a lot of effort so far and there is still heaps to do. I wanted to get further in the development before opening a PR, but because of the lack of time I am guessing it would be easiest to (hopefully) all work together on this one to move it along, and finally be able to use ROS2 for all the MiR platforms. It is still pretty rudimentary, but the simulation and driver work together with our MiR100 platform, and it is great to be able to use the ROS2 package once the driver connection is up and running. Overview: mir_description: ported mir_driver: supports the laser_scans, cmd_vel, odom and tf so far. Since I didn't need the other topics, I have commented them out. (ROS1-to-ROS2 msgs require some dict filtering, but that shouldn't be a big effort to implement for most topics.) This was the biggest chunk to be solved; I ended up porting rospy_msg_converter to ROS2 (rclpy_msg_converter) as well. Will link the PR here shortly. mir_gazebo: simulation is up and running mir_navigation: mapping + navigation ported using slam_toolbox and nav2 (which is great), but there is still a lot of parameter tuning to be done (mostly waiting on the 1.0.8 release right now). Some issues to point out: Separate laserscans: The laserscans are the most important part in mapping, localization and navigation, and we have two separate scans (front and back). In comparison to ROS1, the slam and nav nodes require the scan to be merged and not only be passed alternately to the same topic. So I needed to merge those (using another package I ported): from laserscan to pointcloud, then merge the clouds, and then convert back to a single laserscan. At that time I could not find a better solution; if somebody else can point out anything, I'd be happy to review that. Known bug: Gazebo-simulated laserscans drift away from time to time (before merging into a pointcloud). This seems to be an issue in the urdf configuration, but I haven't had the time to track it down. https://github.com/relffok/mir_robot/issues/1 Missing namespace support: While porting I failed to carry the namespace through everywhere (I also felt like there is an ongoing discussion about namespaces in lots of ROS2 packages), but this will also be added some time soon https://github.com/relffok/mir_robot/issues/2 To sum up, it is usable for a few tasks, but there are still lots of things to do and bugs to remove. Help wanted and appreciated! Thanks a lot for all this work! I'll review it as soon as possible (I'm pretty swamped with work right now). Just a quick comment on one of your points: Separate laserscans: The laserscans are the most important part in mapping, localization and navigation, and we have two separate scans (front and back). In comparison to ROS1, the slam and nav nodes require the scan to be merged and not only be passed alternately to the same topic. So I needed to merge those (using another package I ported): from laserscan to pointcloud, then merge the clouds, and then convert back to a single laserscan. At that time I could not find a better solution; if somebody else can point out anything, I'd be happy to review that. This is really unfortunate. The problem with merging the laserscans like that is that you lose the information about which frame a particular point was recorded from. At least in mapping and navigation there's some ray tracing going on between the sensor frame and the point (to determine free space), so that'll probably be a problem. Perhaps for pure localization it's going to work. I remember having to do the same hack because gmapping in ROS1 also doesn't support multiple laser scanners, but it didn't produce good results, so I switched to hector_mapping instead (which supports multiple laser scanners). Thanks a lot for all this work! Thank you. I'm looking forward to getting this one fully ported! This is really unfortunate. The problem with merging the laserscans like that is that you lose the information about which frame a particular point was recorded from. At least in mapping and navigation there's some ray tracing going on between the sensor frame and the point (to determine free space), so that'll probably be a problem. Perhaps for pure localization it's going to work. I used it on MiR for mapping and navigation, and also SLAM with what I called a virtual_laser_link (which I placed in the middle of the robot platform), and I didn't run into any issues. But maybe I am overlooking something here and you'll find something explicit to show me once you've tested it. @ros-pull-request-builder retest this please That doesn't seem to have worked. Closing and reopening to trigger retesting.
gharchive/pull-request
2021-11-18T14:11:35
2025-04-01T06:38:23.054768
{ "authors": [ "mintar", "relffok" ], "repo": "dfki-ric/mir_robot", "url": "https://github.com/dfki-ric/mir_robot/pull/96", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
109582530
Migrate the FromRequest things into a sub-package with some more flexible features The original ParseFromRequest method was just a helper I inserted based on how I was using this library. It is useful as an example, but far more specific than I'm comfortable with for this library. It's also hard to change its behavior without introducing risk to users of the library. I'd like to migrate, for version 3.0, all the request parsing behavior into a sub-package. In doing so, we should also modify it to be flexible, but have well-defined behavior. Adding functionality should not introduce unexpected behavior for existing users. Now's a good time to talk about a desired set of functionality. I'll go through the PRs and Issues and see what people have asked for in the past as a jumping-off point. If anyone has any other thoughts, please post here.
gharchive/issue
2015-10-02T22:29:01
2025-04-01T06:38:23.401773
{ "authors": [ "dgrijalva" ], "repo": "dgrijalva/jwt-go", "url": "https://github.com/dgrijalva/jwt-go/issues/90", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1121552977
touch.rb:54: Add support for search over multiple branches (.locations) The puzzle 567-859269e5 from #567 has to be resolved: https://github.com/dgroup/lazylead/blob/cab193ac3d5bedcf8f8c6ad487e898fc03f5d864/lib/lazylead/task/svn/touch.rb#L54-L54 The puzzle was created by rultor on 02-Feb-22. role: DEV. If you have any technical questions, don't ask me, submit new tickets instead. The task will be "done" when the problem is fixed and the text of the puzzle is removed from the source code. Here is more about PDD and about me. The puzzle 567-859269e5 has disappeared from the source code, that's why I closed this issue.
gharchive/issue
2022-02-02T06:15:03
2025-04-01T06:38:23.404645
{ "authors": [ "0pdd" ], "repo": "dgroup/lazylead", "url": "https://github.com/dgroup/lazylead/issues/574", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2289612737
Start DaprPlacementContainer automatically when DaprContainer is started Currently, DaprPlacementContainer and DaprContainer should be started manually. This commit adds DaprPlacementContainer as a dependency when DaprContainer is started. /cc @ThomasVitale This is great! Thanks a lot @eddumelendez!
gharchive/pull-request
2024-05-10T12:09:55
2025-04-01T06:38:23.525196
{ "authors": [ "ThomasVitale", "eddumelendez" ], "repo": "diagridio/testcontainers-dapr", "url": "https://github.com/diagridio/testcontainers-dapr/pull/45", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
343889493
How to handle list/carousel selection events ? I have made a carousel following this example, link How can I handle the selection event for this ? Hi, Your link does not work for me. I got it to work for me on a project i did earlier using actions_intent_OPTION as an EVENT TRIGGER on a followup intent to the intent containing the carousel. To extract the value in a parameter use #actions_intent_option.OPTION @karanvs Your link is broken, can you post the code you have so far and be more specific? List selection events are Action on Google specific events and are not supported by this library. If you are building on Dialogflow with only Actions on Google in mind please use the Actions on Google client library . If you are building for Actions on Google and other platforms please see the Dialogflow fulfillment & Actions on Google client library sample.
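For readers landing here, a minimal sketch of the event-trigger approach described above, using the Actions on Google Node.js client library (v2-style API); the intent name 'handle_option' is a hypothetical Dialogflow intent wired to the actions_intent_OPTION event, and the wiring is illustrative rather than the library's official sample:

```typescript
import { dialogflow } from 'actions-on-google';

const app = dialogflow();

// 'handle_option' is assumed to be a follow-up intent whose event
// trigger is actions_intent_OPTION, as suggested in the thread.
app.intent('handle_option', (conv) => {
  // The OPTION argument carries the key of the tapped list/carousel item.
  const option = conv.arguments.get('OPTION');
  conv.ask(`You selected: ${option}`);
});

// `app` is then exported as the webhook handler (for example behind
// an Express route or a Cloud Function); that wiring is omitted here.
```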
gharchive/issue
2018-07-24T05:27:55
2025-04-01T06:38:23.528645
{ "authors": [ "karanvs", "matthewayne", "sarahdwyer", "syedatifakhtar" ], "repo": "dialogflow/dialogflow-fulfillment-nodejs", "url": "https://github.com/dialogflow/dialogflow-fulfillment-nodejs/issues/105", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
325802810
SSL error when trying to run example Environment details OS: Windows 10 Node.js version: 10.1.0 npm version: 5.0.3 dialogflow version: 0.4.0 Steps to reproduce Follow the QuickStart steps on https://github.com/dialogflow/dialogflow-nodejs-client-v2 Create a file using the code on https://github.com/dialogflow/dialogflow-nodejs-client-v2#using-the-client-library Change the project ID as intended. Run node filename.js After struggling with the grpc installation, I followed the steps for Windows users here. Now I have the following error when trying to run the code, as well as with any sample in the /samples directory: Auth error:Error: write EPROTO 21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\ssl\record\ssl3_record.c:252: ERROR: { Error: 14 UNAVAILABLE: Getting metadata from plugin failed with error: write EPROTO 21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\ssl\record\ssl3_record.c:252: at Object.exports.createStatusError (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\common.js:87:15) at Object.onReceiveStatus (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:1214:28) at InterceptingListener._callNext (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:590:42) at InterceptingListener.onReceiveStatus (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:640:8) at callback (C:\Users\foo\Desktop\dialogflow\samples\node_modules\grpc\src\client_interceptors.js:867:24) code: 14, metadata: Metadata { _internal_repr: {} }, details: 'Getting metadata from plugin failed with error: write EPROTO 21584:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:openssl\\ssl\\record\\ssl3_record.c:252:\n' } I searched for problems with SSL relating to both grpc and Google Cloud projects, but didn't find any clue on what to do. For me it seems to be related to the installation issues involving openssl and Windows as cited on the grpc installation page, so I'm considering testing the same steps on some Linux platform and seeing if the same problem occurs. Could anyone help? Update: Running on an Ubuntu VM with Node.js: 8.11.2 npm: 5.6.0 I got a similar error, here tried to run detect.js from /samples: ubuntu@ubuntu-VirtualBox:~/dialogflow-nodejs-client-v2/samples$ node detect.js text -q "hi" Sending query "hi" E0523 16:58:36.419127299 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. E0523 16:58:36.774246703 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. E0523 16:58:37.124691019 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. E0523 16:58:37.471887273 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. E0523 16:58:38.195687058 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. E0523 16:58:38.542390862 10093 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number. Also I'm pretty sure the problem isn't with my credentials, because they work perfectly fine with the Python SDK. I found out that the problem was my proxy: when I tried to use it on a network without the proxy, I had forgotten to deactivate the proxy configuration temporarily. This is not an issue with the proxy from what I can tell. This issue appears to be with the node.js library itself. The TLS handshake needs to be initiated with an HTTP CONNECT message to the proxy so the destination server (Google) can receive the TLS handshake initiation instead of your proxy. Otherwise you get a 400 or other error from your proxy server, which is invalid TLS protocol with respect to your client, resulting in the error. If the node.js grpc implementation really did send the CONNECT message to the proxy, then this all should work. Greetings! We're tracking proxy support over in #20.
gharchive/issue
2018-05-23T17:27:11
2025-04-01T06:38:23.536376
{ "authors": [ "JustinBeckwith", "mephicide", "pbragaalves" ], "repo": "dialogflow/dialogflow-nodejs-client-v2", "url": "https://github.com/dialogflow/dialogflow-nodejs-client-v2/issues/89", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1268229895
Fix rethrowing errors 🎁 reject was not defined here 😄. I could use return Promise.reject(e) but opted for the simpler throw.
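To illustrate why the two options in that comment are interchangeable, here is a hedged TypeScript analogue; inside an async context a throw and a returned rejected promise propagate to the caller the same way (all names here are illustrative):

```typescript
// Both variants surface the same rejection to the caller.
async function rethrowVariantA(task: () => Promise<void>): Promise<void> {
  try {
    await task();
  } catch (e) {
    throw e; // simpler: rethrow directly
  }
}

async function rethrowVariantB(task: () => Promise<void>): Promise<void> {
  try {
    await task();
  } catch (e) {
    return Promise.reject(e); // equivalent, but noisier
  }
}
```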
gharchive/pull-request
2022-06-11T10:29:29
2025-04-01T06:38:23.538874
{ "authors": [ "wokalski" ], "repo": "dialohq/inline-test-ppx", "url": "https://github.com/dialohq/inline-test-ppx/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1823189480
feat(avatar): extract initials from full name Feat (Avatar): Extract initials from full name :hammer_and_wrench: Type Of Change [ ] Fix [x] Feature [ ] Refactoring [ ] Documentation :book: Description Refactored Avatar to remove slot-based usage in favor of prop-based usage Added iconSize property to be able to resize the avatar's icon Removed slot-related documentation Migrated the avatar's usage in components that were using DtAvatar Changed imported images to a public path to improve Percy visual test stability with images Included changes from #1097 and #1098 :bulb: Context We were having a lot of issues maintaining avatars, as the component was initially created slot-based to make it more customizable. :pencil: Checklist [x] I have reviewed my changes [x] I have added tests [x] I have added all relevant documentation [ ] I have validated components with a screen reader [ ] I have validated components keyboard navigation [ ] I have considered the performance impact of my change [ ] I have checked that my change did not significantly increase bundle size [ ] I am exporting any new components or constants in the index.js in the component directory [ ] I am exporting any new components or constants in the index.js in the root :crystal_ball: Next Steps Migrate usages of DtAvatar, DtRecipeFeedItemRow, DtRecipeContactRow and DtRecipeContactInfo components on product. :camera: Screenshots / GIFs Seems like visual tests are running even though the PR is not approved yet, I'll take a look into that.
gharchive/pull-request
2023-07-26T21:18:47
2025-04-01T06:38:23.546150
{ "authors": [ "juliodialpad" ], "repo": "dialpad/dialtone-vue", "url": "https://github.com/dialpad/dialtone-vue/pull/1102", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1817308060
Separate static checks from pytest I want to separate the static code checks (black, mypy, ruff) so that they don't run as part of pytest. Instead they should run either as part of pre-commit, or as their own standalone tox environments. We only need to run these checks once, not for every Python version as we do with pytest, and besides the dependencies needed to run these tests as part of pytest are breaking compatibility with Python 3.6. From what I've figured out so far, I think this should be done together with, or after, #20. Anyway I'm working on both.
gharchive/issue
2023-07-23T21:51:22
2025-04-01T06:38:23.572584
{ "authors": [ "diazona" ], "repo": "diazona/setuptools-pyproject-migration", "url": "https://github.com/diazona/setuptools-pyproject-migration/issues/39", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
212606758
Can't logout npm logout results in: 13:48:16 Fennec@VERGIL testpublish: npm logout npm ERR! Darwin 15.6.0 npm ERR! argv "/usr/local/bin/node" "/usr/local/bin/npm" "logout" npm ERR! node v6.9.1 npm ERR! npm v3.10.2 npm ERR! code E404 npm ERR! 404 Not found : -/user/token/191c8da2-07c5-43de-ac98-c8704cd915cd npm ERR! Please include the following file with any support request: npm ERR! /Users/Fennec/testpublish/npm-debug.log Is this API just not supported? Not at the moment; feel free to submit a PR though, it should be pretty simple
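For anyone picking up that PR, here is a hedged sketch of what the missing endpoint might look like. The npm client's logout sends a DELETE to /-/user/token/<token>, so the server only needs to accept that route; the Express-style handler below is purely illustrative, and npm-register's actual framework and token storage may differ:

```typescript
// Hypothetical route: acknowledge npm's logout token revocation.
import express from 'express';

const app = express();

app.delete('/-/user/token/:token', (req, res) => {
  // A real implementation would remove req.params.token from whatever
  // store issued it; here we only acknowledge the request so that
  // `npm logout` stops failing with a 404.
  res.status(200).json({ ok: true });
});

app.listen(3000);
```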
gharchive/issue
2017-03-08T00:59:07
2025-04-01T06:38:23.580754
{ "authors": [ "Roam-Cooper", "dickeyxxx" ], "repo": "dickeyxxx/npm-register", "url": "https://github.com/dickeyxxx/npm-register/issues/69", "license": "isc", "license_type": "permissive", "license_source": "bigquery" }
2330400434
Suffix v1/chat/completions added to custom server endpoint Plugin Version 1.0.0-231 Actual Behaviour I use a custom endpoint in the server settings https://ete-openai-experiments.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2024-02-15-preview and this worked fine until the latest version of the plugin. Now it fails with the message 404 Not Found from POST https://ete-openai-experiments.openai.azure.com/openai/deployments/gpt-4/chat/completions/v1/chat/completions Please notice the v1/chat/completions suffix that seems to be added. Expected Behaviour If I add a URL in the server settings configuration it gets applied as it is Azure OpenAI services have got a separate configuration in the latest version of the plugin. Please go to Settings | Tools | ChatGPT Integration | Azure OpenAI and provide your endpoint configuration there: API Key, API Endpoint and Deployment Name. And if you are not using previous endpoints for GPT-4 and GPT-3.5-Turbo you may also disable them to not show them in a tool window. I hope this helps. Great - thanks 👍 That resolved my issues. Thanks for this nice plugin.
gharchive/issue
2024-06-03T07:37:15
2025-04-01T06:38:23.593478
{ "authors": [ "didalgolab", "pgebert" ], "repo": "didalgolab/chatgpt-intellij-plugin", "url": "https://github.com/didalgolab/chatgpt-intellij-plugin/issues/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1385565108
Why is there no weak-network simulation on iOS and web? I'd like to roll out DoKit across the whole team, but I don't know why only the Android side has weak-network simulation. Is it missing on iOS and web because of technical issues? Both Android and iOS support weak-network simulation. If you can't find it, try upgrading to the latest version.
gharchive/issue
2022-09-26T07:23:58
2025-04-01T06:38:23.594904
{ "authors": [ "RealOnlyone", "protectedMan" ], "repo": "didi/DoKit", "url": "https://github.com/didi/DoKit/issues/1079", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
357609588
Feat locale Please make sure these boxes are checked before submitting your PR, thank you! [x] Make sure you follow DiDi's contributing guide. [x] Make sure you are merging your commits to dev branch. [x] Add some descriptions and reference related issues for your PR. Codecov Report Merging #302 into dev will decrease coverage by 0.04%. The diff coverage is 91.52%. @@ Coverage Diff @@ ## dev #302 +/- ## ========================================== - Coverage 92.89% 92.84% -0.05% ========================================== Files 131 134 +3 Lines 2801 2853 +52 Branches 418 427 +9 ========================================== + Hits 2602 2649 +47 - Misses 105 110 +5 Partials 94 94 Impacted Files Coverage Δ src/components/picker/picker.vue 82.45% <ø> (ø) :arrow_up: src/components/cascade-picker/cascade-picker.vue 81.81% <ø> (ø) :arrow_up: src/components/date-picker/date-picker.vue 98.73% <ø> (ø) :arrow_up: src/modules/locale/index.js 100% <100%> (ø) src/modules/time-picker/index.js 100% <100%> (ø) :arrow_up: src/components/time-picker/time-picker.vue 89.41% <100%> (+0.52%) :arrow_up: src/modules/date-picker/index.js 100% <100%> (ø) :arrow_up: src/components/action-sheet/action-sheet.vue 90% <100%> (+1.11%) :arrow_up: src/modules/select/index.js 100% <100%> (ø) :arrow_up: src/components/dialog/dialog.vue 96.66% <100%> (+0.11%) :arrow_up: ... and 13 more Continue to review full report at Codecov. Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data Last update f0d64b8...d9498e8. Hey @theniceangel, Something went wrong with the build. TravisCI finished with status errored, which means the build failed because of something unrelated to the tests, such as a problem with a dependency or the build process itself. View build log TravisBuddy Request Identifier: 0c3648f0-cd1d-11e8-9706-8d7bf71fb7b5
gharchive/pull-request
2018-09-06T11:18:37
2025-04-01T06:38:23.611190
{ "authors": [ "TravisBuddy", "codecov-io", "theniceangel" ], "repo": "didi/cube-ui", "url": "https://github.com/didi/cube-ui/pull/302", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
848717858
[language] remove unused diem-types dependency from disassembler This removes the (unused) diem-types dependency from crate disassembler. /land @bmwill :exclamation: Unable to run the provided command on a closed PR /land @vgao1996 :exclamation: Unable to run the provided command on a closed PR Unable to run the provided command on a closed PR @bmwill this doesn't look quite right Let me try to close and then reopen this /land Looks like it works. Hmm... maybe bors didn't add this to In Review in the first place? :broken_heart: Test Failed - ci-test-success /land
gharchive/pull-request
2021-04-01T19:08:23
2025-04-01T06:38:23.623557
{ "authors": [ "bmwill", "bors-libra", "vgao1996" ], "repo": "diem/diem", "url": "https://github.com/diem/diem/pull/8114", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
443408258
afd-blocker does not block afd.de This really doesn't work at all [Funny] Weak Fixed in #21
gharchive/issue
2019-05-13T13:47:12
2025-04-01T06:38:23.625288
{ "authors": [ "KennyTV", "RaisingAgent", "dieparteidiepartei", "fzakfeld" ], "repo": "dieparteidiepartei/afd-blocker-plugin", "url": "https://github.com/dieparteidiepartei/afd-blocker-plugin/issues/8", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
241437938
Support for order by random() Setup Versions Rust: 1.18 Diesel: 0.14.0 Database: SQLite Operating System Windows Feature Flags diesel: sqlite diesel_codegen: sqlite Problem Description There is no way to randomly sort the results of a select statement. What are you trying to accomplish? Returning a random subset of rows from a table. (i.e. SELECT * FROM my_table ORDER BY RANDOM() LIMIT 20) I know it's possible to work around this with my_table.order(sql::<types::Bool>("RANDOM()")).load(connection) but this seems like a useful feature to support in a safe manner. Checklist [x] I have already looked over the issue tracker for similar issues. You can use sql_function! or no_arg_sql_function! for this. Generally we want to avoid exporting every possible function in SQL from Diesel, since it's trivial to declare the ones that you want. Thanks! It took me too long to figure this out, so this is the code required: no_arg_sql_function!(RANDOM, (), "Represents the sql RANDOM() function"); // Usage, using the post schema from the getting started guide. let results = posts .order(RANDOM) .limit(5) .load::<Post>(&*connection) .expect("unable to load posts"); Which will generate the following query: SELECT * ORDER BY RANDOM() @agersant How did you solve this problem? (The version I am using now is 1.4.2) no_arg_sql_function!( random, sql_types::Integer, "Represents the SQL RANDOM() function" ); let results = table .limit(10) .order(random) .load(connection); I used this code in version 1.4.2, but it still reports an error. pub fn query_by_random_order_by_id_desc(conn: &MysqlConnection, category_id_data: i32, limit_num: i64) -> Result<Vec<Self>, Error> { no_arg_sql_function!(random, sql_types::Integer,"Represents the SQL RANDOM() function"); albums::table .order(random) .filter(category_id.eq(category_id_data)) .filter(status.eq(1)) .limit(limit_num) .load::<Self>(conn) } @tingfeng-key Our issue tracker is used to track bugs and feature requests. For asking questions please use our gitter channel
gharchive/issue
2017-07-08T09:11:43
2025-04-01T06:38:23.633065
{ "authors": [ "Thomasdezeeuw", "agersant", "sgrif", "tingfeng-key", "weiznich" ], "repo": "diesel-rs/diesel", "url": "https://github.com/diesel-rs/diesel/issues/1007", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
557964729
Create spdx:algorithm #52 "Range: spdx:checksumAlgorithm_sha1 Cardinality: 1..1 This property identifies the algorithm used to produce the subject Checksum. Currently, SHA-1 is the only supported algorithm. It is anticipated that other algorithms will be supported at a later time." Applies to version 1.1
gharchive/issue
2020-01-31T07:35:37
2025-04-01T06:38:23.675137
{ "authors": [ "Elianne", "oystein-asnes" ], "repo": "difi/dcat-ap-no", "url": "https://github.com/difi/dcat-ap-no/issues/120", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2450126212
feat: add "all organisations" to global menu When the option "all organisations" in the party dropdown (situated to the left of the filters) is chosen, this is not reflected in the global menu (cf. picture). It is also not possible to choose "all organisations" from the global menu. Missing design? Put on ice until further detailing. Will be removed from PartyDropdown.
gharchive/issue
2024-08-06T07:12:33
2025-04-01T06:38:23.681151
{ "authors": [ "seanes" ], "repo": "digdir/dialogporten-frontend", "url": "https://github.com/digdir/dialogporten-frontend/issues/921", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2726991797
Question: Only sending specific errors to Sentry Hello, We have a module that collects and deals with lots of personal information (including payment data). For our staging site, all data is obfuscated or test data, so there's no issue in sending every error to Sentry. However, for the production site, we would only want to send specific errors (either like an includedExceptions array or the ability to only manually throw an error when we want e.g. MyCustomSentryError. This is because we cannot guarantee that personal data isn't contained within the error until we go through a scrubbing process and it's a large code base. Is something like this possible with this plugin? I note that the craft-sentry plugin seems to support this when writing module code, as it explicitly needs the Sentry error to be thrown. Any ideas welcome! Thanks. Hello @lukew-cogapp, I believe you can achieve the desired result with the categories parameter. This parameter is empty by default, which means all exception categories are sent to Sentry. If you specify classes in this parameter, only the mentioned exception categories will be sent to Sentry. https://github.com/diginov/craft-sentry-logger?tab=readme-ov-file#categories Since this plugin is a direct extension of a Yii Log Target, you can use all the base parameters. https://www.yiiframework.com/doc/api/2.0/yii-log-target
gharchive/issue
2024-12-09T13:01:15
2025-04-01T06:38:23.698092
{ "authors": [ "lukew-cogapp", "martinleveille" ], "repo": "diginov/craft-sentry-logger", "url": "https://github.com/diginov/craft-sentry-logger/issues/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1057360894
Use IIIF terminology For the training it is useful if things are referred to as canvas and manifest etc. This might be at odds with making the tool universally accessible, especially to those that are less familiar with IIIF, but for the training it's useful if the terms mentioned in the tool are the ones we've already taught them from the specifications. In v1 this was configurable, so some users could see terms appropriate for making an exhibit, and others would see the IIIF terms. I think this is related to i18n but not quite the same - a French training course would use localised text but not translate "Canvas", for example. So formal model terms maybe should have separate config. Would it not just be easier to ensure that all terms and titles can be controlled by the i18n process - with the default being the standard English IIIF terms - if a use case requires simplified or domain-specific terms then a single config file can be created to control it. This provides consistency of programming and administration. TODO - decide what this looks like in https://github.com/digirati-co-uk/iiif-manifest-editor/wiki/Configuration The UI strings can be included in Language Maps; there should be a special flag to indicate where a string is a label for an actual property value, e.g., requiredStatement, so they can be left intact if needed. Different Apps can have their own overriding strings. E.g., a bespoke slide show editor. There is a new sub-package of the Manifest Editor that specifically tries to match the IIIF specification with a static definition that can be read in applications. https://codesandbox.io/s/iiif-meta-27v3q9?file=/index.html Contains things like: Required/recommended properties per resource Valid rights statements + descriptions of each Link + summary of each property (from IIIF specification) This can be used to show contextual information inline and relate it back to the specification. Labels will be in the i18n configuration as previously mentioned.
gharchive/issue
2021-11-18T13:30:51
2025-04-01T06:38:23.703255
{ "authors": [ "glenrobson", "jpadfield", "stephenwf", "tomcrane" ], "repo": "digirati-co-uk/iiif-manifest-editor", "url": "https://github.com/digirati-co-uk/iiif-manifest-editor/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2073394286
Update the game over message Update the game over message so people know how to play again! done!
gharchive/pull-request
2024-01-10T00:45:09
2025-04-01T06:38:23.704238
{ "authors": [ "digiserf01" ], "repo": "digiserf01/srp", "url": "https://github.com/digiserf01/srp/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
621069343
Add Google Analytics tag @bame-da asked me to do this thanks @anthonylusardi-da for the help! Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
gharchive/pull-request
2020-05-19T15:22:25
2025-04-01T06:38:23.706498
{ "authors": [ "andreolf-da", "digitalasset-cla" ], "repo": "digital-asset/daml-cheat-sheet", "url": "https://github.com/digital-asset/daml-cheat-sheet/pull/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
132651152
Edit package.json config Hi. Can you change the main file in package.json from gruntfile.js to src/jquery.maskedinput.js? It's needed to build a bundle using webpack. :+1: It would be useful if we could build a bundle using webpack. Please fix this. Anyone hoping to use this with a module bundler is out of luck. In Browserify, you can just require the file directly until there's a fix. require('jquery.maskedinput/src/jquery.maskedinput.js');
gharchive/issue
2016-02-10T09:49:21
2025-04-01T06:38:23.750955
{ "authors": [ "Forshortmrmeth", "jonscottclark", "maximal" ], "repo": "digitalBush/jquery.maskedinput", "url": "https://github.com/digitalBush/jquery.maskedinput/issues/351", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1120351458
Updated spacing to match designs There were a couple of spacing issues that have been fixed to match the original designs. Adobe XD Designs - https://xd.adobe.com/view/6f1463ae-cefd-4b5e-8641-474ab7880353-3a47 Before After Which screens? While not matching the designs, I'm not sure the implemented screens looked bad... Brandon @bmuramatsu There were a couple of small padding issues that Brandon Findlay had pointed out to me. These were quick fixes that took no longer than 5 min. 👍
gharchive/pull-request
2022-02-01T08:07:22
2025-04-01T06:38:23.780453
{ "authors": [ "bmuramatsu", "sethduffin" ], "repo": "digitalcredentials/learner-credential-wallet", "url": "https://github.com/digitalcredentials/learner-credential-wallet/pull/120", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2334979460
feat: Required logos added in the supported by section Pull Request Details Description Required logos added in the supported by section Fixes #989 solved Type of PR [x] Bug fix [x] Feature enhancement [ ] Documentation update [ ] Refactoring [ ] Other (specify): _______________ Summary [Summarize the changes made in this PR.] Screenshots (if applicable) Additional Notes [Include any additional information or context that might be helpful for reviewers.] Checklist [x] I have read and followed the Pull Requests and Issues guidelines. [x] The code has been properly linted and formatted using npm run lint:fix and npm run format:fix. [x] I have tested the changes thoroughly before submitting this pull request. [x] I have provided relevant issue numbers, snapshots, and videos after making the changes. [x] I have not borrowed code without disclosing it, if applicable. [x] This pull request is not a Work In Progress (WIP), and only completed and tested changes are included. [x] I have tested these changes locally. [x] My code follows the project's style guidelines. [x] I have updated the documentation accordingly. [x] This PR has a corresponding issue in the issue tracker. Summary by CodeRabbit New Features Added new partner logos to the Home section with direct links to Netlify, Google Cloud, and Holopin websites. @pranshugupta54 LGTM 💥
gharchive/pull-request
2024-06-05T06:04:37
2025-04-01T06:38:23.802328
{ "authors": [ "RamakrushnaBiswal" ], "repo": "digitomize/digitomize", "url": "https://github.com/digitomize/digitomize/pull/1011", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
283309407
Merge commcare hq deploy Sorry for the long diff here. To review this PR: consider your thoughts about merging commcare-hq-deploy into commcarehq-ansible/fab, then skip down all the way to the bottom and review the commits starting with https://github.com/dimagi/commcarehq-ansible/pull/1179/commits/09cb79714574556f51dc2c0782c7ebcd4dd48293. See whether you think https://github.com/dimagi/commcare-hq/pull/18959 is an okay bridge for easing the change. When this is merged, people will be able to log into the control machine and (assuming they've already run the 2 steps to set it up) can run update-code and then (and every time they log in thereafter) they can run all our fab commands like we always have (without changing directories or anything). Locally the change is going to be a little less automated, but it's basically:

    workon ansible
    cd commcarehq-ansible
    git pull
    pip install -r fab/requirements.txt
    ./control/check_install.sh
    cd fab

and then you can run the fab command. Thereafter, to run fab commands you must workon ansible and choose one of three options:

1. enter the fab directory: cd commcarehq-ansible/fab, then fab production deploy
2. use the -f option on fab: fab -f ~/.commcare-cloud/repo/fab/fabfile.py production deploy
3. put an alias in your bash profile: echo "alias fab=fab -f ~/.commcare-cloud/fab/fabfile.py" >> ~/.bash_profile # forever after, from any directory: fab production deploy

I prefer the third one, which is what's done on control machines, and I suggest that in https://github.com/dimagi/commcare-hq/pull/18959/files 👍 from me. I think this makes a lot of sense. I've been thinking about making it a submodule for a while, but I think this is better. This also breaks ground on getting the services deploy out of HQ deploy. I could also see us breaking the environments.yml file up and putting it with the other env vars files. Yes! Exactly. Glad to hear we're on the same page. Those were also two of the top things on my wish-list related to this change. It just occurred to me that we could possibly do this in two steps, where we merge this one first, and then merge the commcare-hq counterpart of this a week or two later. That way we can make sure fab is working in commcarehq-ansible for a number of people before pulling out the rug. The only thing we'd have to do is to remember to make all changes in the interim to the current commcare-hq-deploy submodule and then I could make sure they get more or less continuously merged into this repo. Not sure it's worth the effort, but just wanted to throw that out there and highlight that merging this would initially be a very quiet change (but would mean committing to a louder change within a couple weeks). Your rollout plan seems good. I doubt much will change in the next few days. Just tested this on icds-new and it worked fine: (ansible) skelly@kafka0:~/commcarehq-ansible/fab$ fab icds-new restart_services @dannyroberts FYI there are some commits in commcare-hq-deploy that need to be merged in here. Looked at https://github.com/dimagi/commcare-hq-deploy/commits/master (latest commit on Dec. 21) and it's all been merged in; must have done that last week after you wrote this but before I read it.
gharchive/pull-request
2017-12-19T17:24:12
2025-04-01T06:38:23.846191
{ "authors": [ "dannyroberts", "snopoke" ], "repo": "dimagi/commcarehq-ansible", "url": "https://github.com/dimagi/commcarehq-ansible/pull/1179", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2089491171
OTD stopped working Looks like I'm not the only one. About 7pm PST, it stopped loading the columns. I see you're on it. Thanks.
gharchive/issue
2024-01-19T03:28:32
2025-04-01T06:38:23.872968
{ "authors": [ "boilinabag" ], "repo": "dimdenGD/OldTweetDeck", "url": "https://github.com/dimdenGD/OldTweetDeck/issues/160", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
57756414
API routes fail with Debugbar enabled I've just installed barryvdh/laravel-debugbar and attempting to load a page with an API named route now fails with a 'Route not defined' error. I'm filing the bug here and with the debugbar (https://github.com/barryvdh/laravel-debugbar/issues/290) since I'm not sure which project is doing something funky. Sample code? Works fine here mate. maybe try to load api first. 'Dingo\Api\Provider\ApiServiceProvider', 'Barryvdh\Debugbar\ServiceProvider', I've created a minimal test case laravel install that shows the issue https://github.com/EspadaV8/dingo-debugbar/tree/develop All good I'll give this a run when I can. Cheers. I'm no longer supporting Laravel 4.x. Try this in either Lumen or Laravel 5 and report back if there's still issues. Cheers mate.
gharchive/issue
2015-02-16T01:30:21
2025-04-01T06:38:23.909243
{ "authors": [ "EspadaV8", "devmark", "jasonlewis" ], "repo": "dingo/api", "url": "https://github.com/dingo/api/issues/363", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1469642769
Limn(of:) an URL causes an EXC_BAD_ACCESS In Limn+Objc.swift line 10. Test: Limn(of: URL(string: "http://foo.com")).dump() Not sure what's so evil about reading the _urlString ivar, but it doesn't work. Nice, thank you!
gharchive/issue
2022-11-30T13:31:14
2025-04-01T06:38:23.943590
{ "authors": [ "julasamer" ], "repo": "diogopribeiro/Limn", "url": "https://github.com/diogopribeiro/Limn/issues/12", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
743166815
[log] merge-only option Problem, approach See https://github.com/dirk-thomas/vcstool/issues/174 Example operation and output:

    # git checkout 0.1.27
    # vcs-log --merge-only
    === . (git) ===
    commit d82cb6ec3be31533dc90979e083072c6440c68d3
    Merge: dcee1a0 c174cd8
    Author: Dirk Thomas <dirk-thomas@users.noreply.github.com>
    Merge pull request #61 from dirk-thomas/use_pytest

    commit 68a2451d33c4f4de6f148de57439f08f0330d898
    Merge: f7f008a 93eb8c6
    Author: Dirk Thomas <dirk-thomas@users.noreply.github.com>
    Merge pull request #58 from dirk-thomas/support_nested_repos

    commit f7f008a1279dce9b213c9cfc9e72a2f1fbf73c33
    Merge: 3d98c90 3d8d011
    Author: Dirk Thomas <dirk-thomas@users.noreply.github.com>
    Merge pull request #59 from dirk-thomas/flake8

Open question This PR at the time of writing just adds a simple wrapper around the underlying vcs tool, without any new advantage. The design with a custom verb to delegate commands to each vcs tool seems to be the right approach. I can agree to close this PR if a simple change like what's in this PR is undesirable. What I envisioned originally was to take advantage of vcstool's unique functionality and run a command against multiple local repos. Something like this (obviously the argument oneline doesn't yet exist, so the command failed):

    # vcs-log --merge-only --oneline
    usage: vcs log [-h] [-l N] [--limit-tag TAG | --limit-untagged] [--merge-only] [--verbose] [--debug] [-s] [-n] [-w N] [--repos] [paths [paths ...]]
    vcs log: error: unrecognized arguments: --oneline

Then I found the custom verb lets you easily achieve what I wanted:

    # vcs-custom --git --args log --oneline --merges -n 10
    ...
    === ./ros_tutorials (git) ===
    f40abd5 Merge pull request #31 from JavaJeremy/ThetaBugfix
    626a3e5 Merge pull request #35 from jproft/kinetic-devel
    f21c4d2 Merge pull request #29 from ros/fix_compiler_warnings_jade
    d6d11f3 Merge pull request #27 from gusmonod/patch-1
    36715e2 Merge pull request #23 from adamheins/jade-devel
    9a1f606 Merge pull request #22 from ros/jade-devel-add-turtle
    :
    === ./vcstool (git) ===
    d82cb6e Merge pull request #61 from dirk-thomas/use_pytest
    68a2451 Merge pull request #58 from dirk-thomas/support_nested_repos
    f7f008a Merge pull request #59 from dirk-thomas/flake8
    3d98c90 Merge pull request #55 from dirk-thomas/style
    3a687d2 Merge pull request #54 from dirk-thomas/convert_version_to_string
    :

Thanks for the patch and apologies for the late merge.
gharchive/pull-request
2020-11-15T02:53:56
2025-04-01T06:38:24.000433
{ "authors": [ "130s", "dirk-thomas" ], "repo": "dirk-thomas/vcstool", "url": "https://github.com/dirk-thomas/vcstool/pull/194", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2367677255
screen sharing DOES NOT WORK in WAYLAND! This is somewhat an old issue for me, but it has not been fixed yet!! It's been some time since modern Linux systems moved from X to the Wayland protocol, but I'm still not able to share my screen in Wayland! People have to use the Firefox/Chrome web app version of Discord so they can share their screen, show teammates/friends something, and then switch back to the desktop version! :) That's a pretty terrible experience for users... This repo is not for tracking client issues, and you're in the wrong place.
gharchive/issue
2024-06-22T07:56:48
2025-04-01T06:38:24.079239
{ "authors": [ "ArMot", "junetried" ], "repo": "discordlinux/feedback", "url": "https://github.com/discordlinux/feedback/issues/64", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
382309917
.NET Core Support Possible? I know you stated that you will not update the library, so I am not asking whether you will do it, I am asking how hard it would be. If it is easy enough, I should be able to do it myself and possibly fork this one, but if it is hard or near impossible without a complete rewrite, I will not attempt it, because I would most likely fail tbh. Thanks. Currently, MonoGame relies on WinForms implementations of .NET Framework or Mono for desktop GUI rendering. .NET Core 2.1 (latest as of this post) does not support WinForms, making it impossible to support it. However, .NET Core 3.0 will introduce WinForms support for the Windows platform. If MonoGame decides to support it, I'd be more than happy to move this code base over to .NET Core. Main benefits would be a faster runtime and the vastly improved .csproj format to simplify the code. Wait, if I understand correctly, you're saying that MonoGame itself cannot currently run on .NET Core? Okay, not to argue with you, but I am running a .NET Core MonoGame game on Linux right next to me, and MonoGame download links list Linux as well link It's cool to see that there's a working third-party effort for .NET Core support! The Linux download link on the official page is the Mono version of MonoGame. It's hard to tell what problems may arise during the port. I'm mostly worried about the content pipeline, as Penumbra needs to compile custom shaders for each supported platform. If it is any help, when I attempted to build Penumbra in .NET Core it compiled successfully; I don't know if that included the shaders or not though. Well, keep us updated, whether you plan to port or not! Personally, I will wait for official support in MonoGame 3.8 before dabbling in it. The whole .NET Core story around MonoGame is still unclear for me. Well what. Did you bloody know. If you install the NuGet package Penumbra.DesktopGL it works out of the box. Just like that, nothing needed. Your thing does work with .NET Core. You're a bloody genius. After some testing, I have concluded that it works reliably, had 0 problems. Edit: Tested on HelloPenumbra, saw the shadow thing rotating, the whole window was slightly blue, I suppose that's what it is supposed to do Edit2: Well someone is definitely a genius, don't actually know if specifically you. How do you run MonoGame on .NET Core? Install MonoGame like you normally would Create a new project Add the Penumbra.DesktopGL NuGet package Should work now Penumbra.DesktopGL is not working with a MonoGame UWP Core project: System.TypeLoadException: 'Could not load type 'MonoGame.Framework.GameFrameworkViewSource`1' from assembly 'MonoGame.Framework, Version=3.8.0.1641, Culture=neutral, PublicKeyToken=null'.' Closing as supported already for a while.
gharchive/issue
2018-11-19T17:03:58
2025-04-01T06:38:24.093463
{ "authors": [ "discosultan", "meowxiik", "nanodesu88", "richard-hajek", "romanov" ], "repo": "discosultan/penumbra", "url": "https://github.com/discosultan/penumbra/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2989599
Minimum required PHP version README.rst mentions PHP 5.3 as the minimum required version but from a quick code analysis with this neat little tool, it's actually 5.2 and just for json_decode(). For all the rest you only need PHP 5.1. Could you please confirm this and, if correct, change the docs accordingly? Thanks! :) 5.3 might have actually been a guess. Will try to review and confirm. Neat tool :) Thanks for looking into it :) I forgot to mention this but since you provide an alternative json implementation, the minimum required version is effectively 5.1. I also went through the code manually to confirm the tool's result and I think it's right, I don't see anything that requires PHP over 5.1 (or 5.3 for json_decode()).
gharchive/issue
2012-01-27T02:21:23
2025-04-01T06:38:24.115486
{ "authors": [ "borfast", "dcramer" ], "repo": "disqus/disqus-php", "url": "https://github.com/disqus/disqus-php/issues/14", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1701806114
update to go1.19.9 Added back minor versions in these, so that we have a somewhat more reproducible state in the repository when tagging releases. Codecov Report Patch and project coverage have no change. Comparison is base (8900e90) 56.76% compared to head (b320abd) 56.76%. :mega: This organization is not using Codecov’s GitHub App Integration. We recommend you install it so Codecov can continue to function properly for your repositories. Learn more Additional details and impacted files @@ Coverage Diff @@ ## main #3905 +/- ## ======================================= Coverage 56.76% 56.76% ======================================= Files 106 106 Lines 10681 10681 ======================================= Hits 6063 6063 Misses 3944 3944 Partials 674 674 :umbrella: View full report in Codecov by Sentry. :loudspeaker: Do you have feedback about the report comment? Let us know in this issue. We'll probably need an updated golangci-lint; > [stage-2 1/1] RUN --mount=type=bind,target=. --mount=type=cache,target=/root/.cache --mount=from=golangci-lint,source=/usr/bin/golangci-lint,target=/usr/bin/golangci-lint golangci-lint run: #17 10.70 panic: load embedded ruleguard rules: rules/rules.go:13: can't load fmt #17 10.70 #17 10.70 goroutine 1 [running]: #17 10.70 github.com/go-critic/go-critic/checkers.init.22() #17 10.70 github.com/go-critic/go-critic@v0.6.2/checkers/embedded_rules.go:46 +0x4b4 temporarily rebased on top of https://github.com/distribution/distribution/pull/3906 to see if things are looking good after that (I'll rebase this one once the other PR is merged to remove the golangci-lint update commits)
gharchive/pull-request
2023-05-09T10:25:47
2025-04-01T06:38:24.142184
{ "authors": [ "codecov-commenter", "thaJeztah" ], "repo": "distribution/distribution", "url": "https://github.com/distribution/distribution/pull/3905", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
412947596
Coderef using keyref also prints temporary file name Expected Behavior I am using a codeblock with coderef to a C# file pointed to by a keyref. Only the file content is listed in the codeblock. The following snapshot shows output using a direct href path to the same file. Actual Behavior After the codeblock, the file name of the temporary file is printed. Environment DITA-OT version: 3.2.1 out-of-the-box, no external plugins Operating system and version: Windows 10 How did you run DITA-OT? calling dita.bat Transformation type: PDF, using FOP My bookmap project used: coderef-test.zip Reproduced in 3.3.4 with both HTML and PDF. In HTML output, the last line of the code block includes the full path name of the snippet: }snippets/csharp-regions-simple.cs Same in DITA-OT version 3.4.1 Do you have any plans to fix this? Unable to reproduce with 3.5. @robander, how about you?
gharchive/issue
2019-02-21T14:19:26
2025-04-01T06:38:24.155165
{ "authors": [ "jelovirt", "masofcon", "qvrijt", "robander" ], "repo": "dita-ot/dita-ot", "url": "https://github.com/dita-ot/dita-ot/issues/3232", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
157930304
Add support for configuring tm scope in PDF #1245 Add mode to control whether trademark symbol is created. :+1:
gharchive/pull-request
2016-06-01T15:03:14
2025-04-01T06:38:24.156107
{ "authors": [ "jelovirt", "robander" ], "repo": "dita-ot/dita-ot", "url": "https://github.com/dita-ot/dita-ot/pull/2404", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
727014893
Graph Dfs Application Title - Count the number of islands in a grid what will change - Code will be added Type of Issue - Application of DFS Please add/delete options that are not relevant. [x] Adding New Code [x] Improving Code [x] Improving Documentation [x] Bug Fix Programming Language Please add/delete options that are not relevant. [] Python [x] C++ [] Java [] C [] Go [] Other language Self Check Ask for issue assignment before making Pull Request. Add your file in the proper folder Clean Code and Documentation for better readability Add Title and Description of the program in the file :star2: Star it :fork_and_knife:Fork it :handshake: Contribute to it! Happy Coding, please assign me @div-bargali @jai2dev for this task, this is an interview question and a nice application of DFS Can I solve this issue? I would also like to add some variations in this question, please assign me this so that I can add more relevant material here.
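Since this issue is about the algorithm itself, here is a minimal sketch of the idea in Python (the issue targets C++, but the traversal is identical): treat the grid as a graph where each land cell connects to its 4 neighbours, and run a DFS from every unvisited land cell, counting how many fresh DFS starts are needed. The sample grid and function name are illustrative, not from this repo.

```python
def count_islands(grid):
    """Count connected components of 1s in a 0/1 grid via DFS."""
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]

    def dfs(r, c):
        # Stop at grid edges, water cells, and already-visited land.
        if r < 0 or r >= rows or c < 0 or c >= cols:
            return
        if grid[r][c] == 0 or seen[r][c]:
            return
        seen[r][c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            dfs(r + dr, c + dc)

    islands = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                islands += 1  # each fresh DFS start is a new island
                dfs(r, c)
    return islands

print(count_islands([[1, 1, 0], [0, 0, 1], [1, 0, 1]]))  # -> 3
```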
gharchive/issue
2020-10-22T03:30:39
2025-04-01T06:38:24.161685
{ "authors": [ "heri2468", "palakdavda22", "tyagi619" ], "repo": "div-bargali/Data-Structures-and-Algorithms", "url": "https://github.com/div-bargali/Data-Structures-and-Algorithms/issues/622", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
110474340
Documentation for 3.2 features... This should include apphooks_reload in core, content creation wizards, etc. (see: https://github.com/divio/django-cms/pull/4563) Also, replace any references of "click" et al, in the docs to something more touch-friendly. Finally, review this closed PR for text changes: https://github.com/divio/django-cms/pull/4551 done by @evildmp
gharchive/issue
2015-10-08T15:19:10
2025-04-01T06:38:24.163644
{ "authors": [ "FinalAngel", "mkoistinen" ], "repo": "divio/django-cms", "url": "https://github.com/divio/django-cms/issues/4566", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
166442452
Make note of need for migrations in plugin tutorial Just a suggestion from a user working to develop a new CMS plugin that uses a model -- it may seem intuitive to some django programmers but you might want to add a line in the http://docs.django-cms.org/en/release-3.3.x/how_to/custom_plugins.html#storing-configuration tutorial noting that you should run manage.py makemigrations and manage.py migrate following your model creation to properly set up the configuration fields - it tripped me up a little before I realized what was the issue when I went to remove or save the newly created plugin. Thanks, fixed in https://github.com/divio/django-cms/pull/5566.
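For readers landing here, a hedged illustration of the step being suggested for the tutorial (the model and field names are made up; the CMSPlugin base class is the one the django-cms plugin tutorial itself uses):

```python
from django.db import models
from cms.models.pluginmodel import CMSPlugin

class HelloPluginModel(CMSPlugin):  # hypothetical plugin configuration model
    guest_name = models.CharField(max_length=50, default="Guest")

# After defining the model, generate and apply the migrations, otherwise
# saving or removing the plugin fails as described above:
#   python manage.py makemigrations
#   python manage.py migrate
```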
gharchive/issue
2016-07-19T21:40:52
2025-04-01T06:38:24.166295
{ "authors": [ "evildmp", "pdbethke" ], "repo": "divio/django-cms", "url": "https://github.com/divio/django-cms/issues/5555", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
146393382
GitSavvy doesn't work after updating After installing GitGutter my Sublime can't use GitSavvy anymore. It was not found in my command palette. So, I tried to remove GitGutter and reinstall GitSavvy but it still doesn't work. Please help me as soon as possible, GitSavvy is a part of my life now. Same issue here, GitSavvy is totally absent from the command palette since the last update :disappointed: There is some major bug now, I am working on fixing it now. @bank32 @rmnbrd, I just pushed out an update that will disable the offending code as a temporary work-around. Syntax highlighting will be disabled in inline-diff views, but we'll address that regression separately. If you see this in the next few minutes, it would be super helpful if you could pull down master and confirm that things are working for you. Sorry for the bad update! fixed it. Thanks a lot. @divmain I have another question. For now I fixed it by a less simple method, but I want to know when we can install the latest version via Sublime Package Control? @bank32 it should be going out now. You may need to either 1) restart Sublime, or 2) run Package Control: Upgrade Package in the command palette. There is a delay between when I push a new tag in my Git repo and when packagecontrol.io picks up the changes and starts pushing it out to clients. It's working like a charm now. Thank you guys for your great reactivity! You rock! :+1:
gharchive/issue
2016-04-06T18:13:26
2025-04-01T06:38:24.174883
{ "authors": [ "bank32", "divmain", "rmnbrd", "stoivo" ], "repo": "divmain/GitSavvy", "url": "https://github.com/divmain/GitSavvy/issues/384", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
249345408
Is it possible to revert from gitsavvy? I didn't find a way.. I'd guess it could be done from the graph window. I don't believe we support it, and I don't use it personally. However this is a nice feature request, to add a git: revert command, should be easy to implement if you want to give it a go and submit a Pull Request we'll be happy to assist. Meanwhile, as a workaround I can suggest using custom commands.
gharchive/issue
2017-08-10T13:25:39
2025-04-01T06:38:24.176815
{ "authors": [ "asfaltboy", "hanoii" ], "repo": "divmain/GitSavvy", "url": "https://github.com/divmain/GitSavvy/issues/732", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2395674640
🛑 ArtChart is down In 24fef4f, ArtChart (https://artchart.net/en) was down: HTTP code: 504 Response time: 15646 ms Resolved: ArtChart is back up in 01a2984 after 7 minutes.
gharchive/issue
2024-07-08T13:37:48
2025-04-01T06:38:24.179251
{ "authors": [ "divtiply" ], "repo": "divtiply/artupptime", "url": "https://github.com/divtiply/artupptime/issues/521", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1484865337
Cyrillic-based keyboards not selectable on Mac On Mac 12.5.1, I have had Erzya (myv) and Moksha (mdf) keyboards as well as several Latin-based keyboards: Apurinã (apu), Lushootseed (lut), Tenino (tqn), Võro (vro). After restarting my Mac today, 2022-12-08, I have been unable to select a Cyrillic-based keyboard (a) from the keyboard, (b) by manually attempting to select the language from the drop-down menu. Upon adding a Mac Russian keyboard, both Erzya and Moksha mysteriously disappeared. The same issue affects the keyboards: Mansi (mns), Kildin Saami, Khanty (kca)

I can confirm the behaviour reported: Today I installed the Mansi (mns) Cyrillic-based keyboard, with the same result: I installed the Mansi keyboard, and after restart the Divvun installer claims it to be installed, as expected. The keyboard, however, is nowhere to be found in the System menu for adding keyboards. DI lists the keyboard as installed ("No updates"), but to no avail. Latin-based keyboards (e.g. fao, rmn) work well; the problem seems to be the Cyrillic-based ones. The issue is somewhat acute, since we work on mns this week, but it is of course a problem also generally speaking.

Further confirmed by me using Mansi as a test case. @zoomix could you have someone look at this? Details: macOS 13.3.1 (a) bundle properly installed as /Library/Keyboard\ Layouts/no.uit.giella.keyboards.mns.keyboardlayout.mns.bundle using Divvun Manager not listed or visible at all in System Preferences (System Preferences > Keyboards > Input sources)

@dylanhand @SteffenErn — sorry, I forgot one step: you need to switch to the nightly channel in the app settings. After that the All repositories view will show Mansi near the bottom (it is written in Cyrillic, and the Cyrillic entries are towards the bottom - the actual packages are written in Latin, so should be no problem finding it).

We found the issue. This occurs when the .keylayout file contains a self-closing tag, such as: <actions /> The .keylayout file can be found in /Library/Keyboard\ Layouts/<your language>.bundle/Contents/Resources The problem doesn't occur if this tag is either omitted or re-written as: <actions></actions> Our current plan to fix is to fork https://github.com/bbqsrc/xmlem and add an option to disable self-closing tags.

> Our current plan to fix is to fork https://github.com/bbqsrc/xmlem and add an option to disable self-closing tags.

Feel free to make a pull request and I can publish the fixes on cargo.

So, what next? I understand the pull request is now done. From my perspective I would then like to have a working Mansi keyboard on PC (Mac and Windows); my coworkers are typing in hex codes. Does the code in keyboard-xxx (here: keyboard-mns) need any changes, and if so, what changes?

@bbqsrc the option to disable self-closing tags ended up causing other issues. MacOS being persnickety 😄 The fix was instead to remove the <actions /> tag altogether if it contained no children when generating MacOS layouts. @Trondtr changes to keyboard-xxx repos should not be required for this fix to work on MacOS. I'm not sure about Windows though - are you having issues there too? If so please create an issue with details. Next step is to merge this and then have kbdgen re-generate MacOS layouts so they're available in the nightly channel. Hopefully the merge will trigger that. Still learning the tool chain. Will merge (if no objections) and investigate how to deploy the layouts to nightly after lunch 😄 Please do merge.
@dylanhand rebuilds of the keyboards do not happen automatically yet (cross-repo build deps have been planned, but not yet implemented). I will trigger new builds of the most critical keyboards as soon as the fix is merged and kbdgen has been rebuilt. @snomos thanks for the info. Just merged, so feel free to trigger new builds of the keyboards.

I was able to download and install the Mns (Mansi) keyboard on my Mac M2 13.0.1. It seems to work fine in the command line, so I am happy. The keyboards for mhr (Meadow & Eastern Mari), mrj (Hill Mari aka Western Mari), myv (Erzya), mdf (Moksha), kpv (Komi-Zyrian), yrk (Nenets) did not show up as possible keyboards to install.

@rueter all the mentioned keyboards have now been rebuilt to fix the issue. They also have a new version number. They should be available in Divvun Manager as updates. For keyboards with a Sámi flag as menu item icon, it has been replaced with a best-effort alternative. Feel free to suggest other flags or icons 😊

@snomos The Cyrillic keyboards presently working are: myv, mdf, kpv. The mrj (Hill Mari aka Western Mari) keyboard is only partial. It contains none of the extras required for mrj, so we will have to work on that. The mhr (Eastern & Meadow Mari) keyboard does not appear on my Ventura as selectable, even after downloading it. The udm keyboard has a Saami flag and, in fact, Latin-letter content. More work for us ;)

Thanks for the feedback, I will look into the various points. Further discussion should take place in new issues specific to the relevant keyboards, if needed. This issue is now fixed, thanks for reporting it 😊
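For anyone hitting this on older kbdgen output before the fix landed, here is a rough stdlib-only Python workaround in the spirit of the thread (the function name and usage are hypothetical, and a proper XML-aware rewrite would be safer than a regex):

```python
import re
from pathlib import Path

def expand_empty_actions(keylayout_path):
    """Rewrite a self-closing <actions /> tag as <actions></actions>."""
    path = Path(keylayout_path)
    text = path.read_text(encoding="utf-8")
    # macOS rejects the layout when the empty <actions /> tag is self-closing.
    text = re.sub(r"<actions\s*/>", "<actions></actions>", text)
    path.write_text(text, encoding="utf-8")
```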
gharchive/issue
2022-12-08T15:20:03
2025-04-01T06:38:24.209629
{ "authors": [ "Trondtr", "bbqsrc", "dylanhand", "rueter", "snomos" ], "repo": "divvun/kbdgen", "url": "https://github.com/divvun/kbdgen/issues/4", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1101986425
🛑 Dixneuf19 website is down In 2b5a889, Dixneuf19 website (https://www.dixneuf19.me) was down: HTTP code: 504 Response time: 15459 ms Resolved: Dixneuf19 website is back up in ed769af.
gharchive/issue
2022-01-13T16:10:18
2025-04-01T06:38:24.213838
{ "authors": [ "dixneuf19" ], "repo": "dixneuf19/upptime", "url": "https://github.com/dixneuf19/upptime/issues/490", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2403363257
🛑 Dixneuf19 website is down In 1532138, Dixneuf19 website (https://www.dixneuf19.me) was down: HTTP code: 0 Response time: 0 ms Resolved: Dixneuf19 website is back up in 2bb701b after 8 minutes.
gharchive/issue
2024-07-11T14:42:03
2025-04-01T06:38:24.216236
{ "authors": [ "dixneuf19" ], "repo": "dixneuf19/upptime", "url": "https://github.com/dixneuf19/upptime/issues/716", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
758711340
Update About button (about.html) to show the user profile Can I work on this @diyajaiswal11 A similar issue has been raised in #13. Go through it.
gharchive/issue
2020-12-07T17:42:22
2025-04-01T06:38:24.217320
{ "authors": [ "Halix267", "diyajaiswal11" ], "repo": "diyajaiswal11/Bloggitt", "url": "https://github.com/diyajaiswal11/Bloggitt/issues/44", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1991421067
Plugin does not appear on session start. Only appears after serialization-interval time. Describe the bug After updating to Zellij 0.39.1, the status line is not initially displayed after a session is started. After some time, the statusbar displays as normal. To Reproduce Steps to reproduce the behavior: Update to Zellij 0.39.1 Start a zellij session using this plugin It appears that the amount of time it takes for the status bar to appear is the same as Zellij's session serialization frequency. Starting a session with just zellij, it will take 60 seconds for the status line to appear. 60 seconds is the new default session serialization frequency in 0.39.1. It was previously 1 second. Running zellij with zellij --serialization-interval 5, the statusline will appear after 5 seconds. With zellij --serialization-interval 20, it will appear after 20 seconds. Expected behavior Plugin should load immediately as it did on Zellij 0.39.0. Screenshots If applicable, add screenshots to help explain your problem. Desktop (please complete the following information): OS: 6.5.6-arch2-1 Zellij version: v0.39.1 Version: v0.9.0 Layout How does the layout look like? Please copy it into a code block. layout { pane split_direction="vertical" { pane } pane size=1 borderless=true { plugin location="file:/home/mike/.config/zellij/plugins/zjstatus.wasm" { format_left "{mode}#[fg=#1a1c23,bg=#4fa6ed,bold]{session}#[fg=#4fa6ed,bg=#1a1c23]{tabs}" format_right "#[fg=#1a1c23,bg=#4fa6ed,bold]{datetime}" format_space "#[bg=#1a1c23]" border_enabled "false" hide_frame_for_single_pane "true" tab_normal "#[fg=#000000,bg=#4C4C59] {index}  {name} #[fg=#4C4C59,bg=#1a1c23]" tab_normal_fullscreen "#[fg=#000000,bg=#4C4C59] {index}  {name} Z #[fg=#4C4C59,bg=#1a1c23]" tab_normal_sync "#[fg=#000000,bg=#4C4C59] {index}  {name} S #[fg=#4C4C59,bg=#1a1c23]" tab_active "#[fg=#1a1c23,bg=#ffffff,bold] {index}  {name} #[fg=#ffffff,bg=#1a1c23]" tab_active_fullscreen "#[fg=#1a1c23,bg=#ffffff,bold] {index}  {name} Z #[fg=#ffffff,bg=#1a1c23]" tab_active_sync "#[fg=#1a1c23,bg=#ffffff,bold] {index}  {name} S #[fg=#ffffff,bg=#1a1c23]" datetime "#[fg=#1a1c23,bg=#4fa6ed,bold] {format} " datetime_format "%A, %Y%m%d %H%M" datetime_timezone "America/Los_Angeles" mode_normal "#[fg=#1a1c23,bg=#4fa6ed,bold] NORMAL " mode_locked "#[fg=#1a1c23,bg=#e55561,bold] LOCKED " mode_resize "#[fg=#1a1c23,bg=#e2b86b,bold] RESIZE " mode_pane "#[fg=#1a1c23,bg=#e2b86b,bold] PANE " mode_tab "#[fg=#1a1c23,bg=#e2b86b,bold] TAB " mode_scroll "#[fg=#1a1c23,bg=#e2b86b,bold] SCROLL " mode_enter_search "#[fg=#1a1c23,bg=#e2b86b,bold] ENTER SEARCH " mode_search "#[fg=#1a1c23,bg=#e2b86b,bold] SEARCH " mode_rename_tab "#[fg=#1a1c23,bg=#e2b86b,bold] RENAME TAB " mode_rename_pane "#[fg=#1a1c23,bg=#e2b86b,bold] RENAME PANE " mode_session "#[fg=#1a1c23,bg=#e2b86b,bold] SESSION " mode_move "#[fg=#1a1c23,bg=#e2b86b,bold] MOVE " mode_prompt "#[fg=#1a1c23,bg=#e2b86b,bold] PROMPT " mode_tmux "#[fg=#1a1c23,bg=#8ebd6b,bold] TMUX " } } } Awesome thanks for the quick fix Hey, the fix is available with the new release (0.9.1). Seems to work on my side. Hope this resolves the issue.
gharchive/issue
2023-11-13T20:24:22
2025-04-01T06:38:24.255134
{ "authors": [ "dj95", "mike-lloyd03" ], "repo": "dj95/zjstatus", "url": "https://github.com/dj95/zjstatus/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1778937372
Add more customizable blocks in import.html Problem Fields detail and definition cannot easily be overridden in import.html unless you override the whole import_form block. Solution This PR adds sub-blocks in the import_form block to allow overriding form parts more easily. Thanks, feel free to add your name to AUTHORS Done, thank you! Thanks - we had an issue with an upstream lib which caused the build to fail. I've fixed that. I'd be grateful if you could merge the main branch and re-push your change. Done.
gharchive/pull-request
2023-06-28T13:21:44
2025-04-01T06:38:24.359164
{ "authors": [ "christophehenry", "matthewhegarty" ], "repo": "django-import-export/django-import-export", "url": "https://github.com/django-import-export/django-import-export/pull/1598", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
2447406535
Get more info on tasks Hello, Is there a way to know the cluster / machine used by a task? I guess I could add it to the result for the successful ones, but how would it be done for failures / queued ones (maybe the most important ones)? Thanks. Couldn't you get this through the Q_CLUSTER_NAME environment variable? Yes, actually the cluster name can be found that way. For the machine I guess I'll have to save it as a variable (as a few clusters can have the same name across different machines).
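Not an official django-q2 feature, but a sketch of the workaround discussed above using only the standard library: record the cluster name (via the Q_CLUSTER_NAME environment variable) and the hostname inside the task's own return value. The task name and payload are hypothetical, and as noted this only covers successful tasks:

```python
import os
import socket

def my_task():  # hypothetical task function
    return {
        "cluster": os.environ.get("Q_CLUSTER_NAME", "unknown"),
        "host": socket.gethostname(),  # disambiguates clusters sharing a name
        "result": 42,  # placeholder for the task's real payload
    }
```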
gharchive/issue
2024-08-05T01:16:23
2025-04-01T06:38:24.371782
{ "authors": [ "GDay", "ThomasDeudon" ], "repo": "django-q2/django-q2", "url": "https://github.com/django-q2/django-q2/issues/204", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
862469732
Removed unnecessary line in OrderBy.as_sql(). The first line of as_sql (https://github.com/django/django/blob/ed0cc52dc3b0dfebba8a38c12b6157a007309900/django/db/models/expressions.py#L1213) is a duplicate of this line that's being removed, meaning that in this function template is already defined in the same manner, making this line a noop. @davidjb Thanks :+1: Good catch :dart:
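To make the no-op concrete, here is a hedged Python illustration of the pattern (simplified, not the actual Django source): once the first statement has assigned template this way, repeating the same assignment changes nothing.

```python
class OrderBy:
    template = "%(expression)s %(ordering)s"

    def as_sql(self, template=None):
        template = template or self.template  # first line already sets this
        template = template or self.template  # duplicate -- a no-op, safe to delete
        return template
```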
gharchive/pull-request
2021-04-20T06:38:52
2025-04-01T06:38:24.379166
{ "authors": [ "davidjb", "felixxm" ], "repo": "django/django", "url": "https://github.com/django/django/pull/14286", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
963245405
Fixed #29026 -- Added --scriptable option to makemigrations. ticket-29026 desires to separate logging from a simple list of filepaths created in makemigrations. Today, there is logging to both stdout and stderr (for errors), but no real "program output": the filenames are bolded and indented to flow between other log messages. This PR creates a new option --scriptable that will 1. divert all current logging to stderr and 2. log only the filepaths of created migration files to stdout, one per line (without leading spaces and styling) --noinput mode is still necessary to suppress input completely, otherwise interactive prompts go to stderr when --scriptable is used EDIT: moved ticket-29470 solution to #14805 All of this logging can be silenced with --verbosity 0: no changes in this respect. The thing with structured is that text lines aren't that structured. To me that sounds more like json. Very reasonable--I'll think about a name for this option. It seems like the user-facing option should be more about whether logging should go to stderr. Then the non-logging ("output") lines would continue to go to stdout as is. My thought process was this: merely switching the output to stderr is achievable today: you simply subclass the command and set self.stdout to whatever you want, sys.stderr, or whatever. I'm not sure we would add complexity if that were the only thing we were achieving. I think the reason for this PR is that we don't have any "non-logging/output" lines today, since this is not really nonlogging (leading spaces, hyphen, colors): \x1b[1m/var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_klnxitjw/tmpwiefzufk/tmp82ptgwlr/migrations/0001_initial.py\x1b[0m\n My thoughts would be to add an initial commit that adds a log() method and makes all stderr / stdout output go through that. So I'm a little hesitant to do to this, because then we're making makemigrations more special than the other commands. This ticket is asking to make makemigrations more special by producing different kinds of output, so I guess we're already starting down that path. Also, in the case that logging is going to stderr, I think you'd still want to log the output lines using log() as a message, so that info would be available both in the diagnostic stderr logs, as well as the output. Agreed, I made sure this was the case. Any thoughts on the ticket-29470 stuff? If we have design questions about ticket-29026 I could ask the fellows to re-triage 29470 and move it to a separate PR to keep it moving. btw, thanks for the review, @cjerdonek! And it looks like I need to liberalize one of the tests to pass on Windows. Maybe --separatelogs? Everything but the last line going to stderr, last line to stdout: Migrations for 'migrations': /var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_e_o35bow/tmpso0994dn/tmp7ogni0kc/migrations/0001_initial.py - Create model ModelWithCustomBase - Create model SillyModel - Create model UnmigratedModel /var/folders/6q/jljmh5xs27v8_7557q3rgtfw0000gn/T/django_e_o35bow/tmpso0994dn/tmp7ogni0kc/migrations/0001_initial.py you simply subclass the command This didn't feel right when I wrote it, and indeed--it's easier than that, there's call_command(stdout=), see: https://docs.djangoproject.com/en/3.2/ref/django-admin/#output-redirection The drift of my earlier comment was that I would be hesitant to rework the documented API we have for that. I see the issue as more defining wanting separate logs, period. 
To respond to one point now: My thoughts would be to add an initial commit that adds a log() method and makes all stderr / stdout output go through that. So I'm a little hesitant to do to this, because then we're making makemigrations more special than the other commands. I think making log() a method that is passed a message is more natural and has advantages over making it an attribute with a write() method. For example, it would give people a way to use a Python logger instead of writing to a stream. This is the approach taken in this commit for ticket #14150. Also, the word "log" is more commonly used in Python as the verb / method name rather than the stream (see e.g. Logger.log() in Python's logging module). It would also make the calling sites simpler / cleaner. Finally, if this pattern is found useful, it could be moved to BaseCommand so makemigrations.py wouldn't be so special. Also, the word "log" is more commonly used in Python as the verb / method name rather than the stream (see e.g. Logger.log() in Python's logging module). I have to admit, this did occur to me when I was writing it, and if you also noticed it, lots of folks will. :-O For example, it would give people a way to use a Python logger instead of writing to a stream. This is the approach taken in this commit for ticket #14150. That's a good reason. Finally, if this pattern is found useful, it could be moved to BaseCommand so makemigrations.py wouldn't be so special. I guess I wasn't looking at it that way. 👍🏻 Thanks for the quick feedback, and I'll have a look at implementing a log() method. Maybe --separatelogs? Maybe --scriptable or --scriptmode? The option seems like a higher-level mode as it does two things: it changes logging to go to stderr, and it adds additional info to stdout. Maybe --scriptable or --scriptmode? At a glance, I would be a bit worried it implies setting --noinput can be skipped when using in a script. At a glance, I would be a bit worried it implies setting --noinput can be skipped when using in a script. I think there's an argument the mode should imply --noinput. The reason is that, when programmatically consuming the stdout (e.g. piping it to a file), you wouldn't see the prompt anyways. (It would show up to the user as a hang.) This is because the command's questioner uses Python's input() built-in, which writes to stdout. So it seems incompatible with consuming stdout for programmatic use. Maybe you could write a prompt to stderr, but that seems non-standard. Either way, I think you'd want to document in the help whether --noinput is implied, which should eliminate any worries. Reading and thinking more about Python's input() and how --noinput behaves, I think the scriptable mode option we're discussing in this PR shouldn't default to --noinput, and when it's used, the questioner should write its prompt to stderr instead of stdout (and not use log()). There are a couple reasons for the latter. First, there is a very old Python ticket to change input() to use stderr, so it wouldn't actually be non-standard to use stderr like I suggested in my comment above. Secondly, if the mode didn't use stderr, then there would be no way to use scriptable mode while capturing the output stream (one of its intended use cases) when answers are required that differ from the default answers when --noinput is used. Lastly, I said "not use log()" because if someone, say, changed log() to log to a file, you would still want the prompt to go to stderr so the user could provide interactive feedback. 
Also, on this question: Any thoughts on the ticket-29470 stuff? After reading and thinking about it, I think it should be re-opened and handled separately. I can go ahead and add a comment to the ticket. Hello @francoisfreitag -- I noticed you have a WIP branch for ticket-21429 implementing logging for each of the management commands. As you can see above, Chris is making the good case that to move this PR forward, I should do something similar for makemigrations. Are you still interested in contributing your work? If #13853 is the only blocker, we can see about getting it into the review queue. Let me know if I can be helpful in any respect. 👍🏻 @jacobtylerwalls By the way, regarding the log() method I was suggesting you add above, the collectstatic management command is another class that already has a method like that: https://github.com/django/django/blob/8208381ba6a3d1613bb746617062ccf1a6a28591/django/contrib/staticfiles/management/commands/collectstatic.py#L207-L212 It will be of help here independent of that ticket. Hi @cjerdonek, Thanks for the ping! Fixing ticket-21429 is pretty time consuming and personal life has been busy (and will be for at least a few months). My employer is not interested in sponsoring the work for now, so it’s all on my personal time. Basically, my todo list includes: rebasing on main (last rebase was probably 9 months ago) a readthrough, making sure: assertNoLogs and assertLogRecords are used where possible reviewing uses of io, StringIO, stdout and stderr in test code code polish (e.g. grouping context managers, preferring single quotes, f-strings, etc) consider introducing flake8-logging-format installing my branch on an existing project shows unexpected line returns if I don’t change the existing project config. Needs investigation. Completing the documentation Testing against all DBs. I tried to use exact assertions for the logging output as much as possible, but different DB engines or configuration may cause messages to change slightly. #13853 is just a very early step, introducing tools I would like to use for the logging PR. I’m afraid there are weeks of work remaining before my branch can be put up for review, and I am not able to commit to a timeline. Thanks for the update. I certainly wouldn't ask you to commit to a timeline! I was just merely curious if I should try to conform to any likely pending changes. I think I have enough to go on for now. Good luck with everything, and be well. --Jacob Thanks for the design guidance @cjerdonek , that was very helpful. I haven't squashed/reordered commits yet, and I am still thinking about https://github.com/django/django/pull/14751#discussion_r684664382, but I wanted to ask if you thought these changes were looking right-track to you. I'd be grateful if you had time for a re-review. I didn't as of yet tackle a refactor of how verbosity is tracked. There's enough going on (and I feel like this PR is already two or so.) Speaking of which, thanks for commenting on ticket-29470. The first one I'd recommend is a refactoring PR that just adds a log() method, and in particular doesn't make any reference to scriptable, etc. The next step PR can be discussed after that. Done! 
:tada: https://github.com/django/django/pull/14936 Summary: Adds --scriptable flag to makemigrations, causing: log() method to divert output to self.stderr (agreement on this here: "whether log() uses stderr or stdout could be controlled centrally in that method") Interactive questioner to divert prompts to self.stderr (agreement here) an additional line of "clean output" (no styling or indentation) written to self.stdout (original case from ticket) That's all the diff does, it's just bloated because the interactive questioner was using print() statements, which had to be rewritten. I can move that to another commit (or PR?) if folks like. Could also be good to rebase after #15212. @jacobtylerwalls Thanks :+1: I added writing a path of generated migration file to the --merge option (with tests) and pushed small edits. @felixxm Thanks for the updates and the additional test!
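A rough sketch of the log() plus --scriptable shape the thread converges on (simplified; the real implementation landed via the linked PRs): route all diagnostics through one method so scriptable mode can send them to stderr while keeping stdout machine-readable.

```python
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def add_arguments(self, parser):
        parser.add_argument("--scriptable", action="store_true")

    def handle(self, *args, **options):
        self.scriptable = options["scriptable"]
        self.log("Migrations for 'app':")  # diagnostic output
        self.stdout.write("app/migrations/0001_initial.py")  # machine-readable path

    def log(self, msg):
        # In scriptable mode diagnostics go to stderr so stdout stays clean.
        writer = self.stderr if self.scriptable else self.stdout
        writer.write(msg)
```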
gharchive/pull-request
2021-08-07T14:53:57
2025-04-01T06:38:24.408679
{ "authors": [ "cjerdonek", "felixxm", "francoisfreitag", "jacobtylerwalls" ], "repo": "django/django", "url": "https://github.com/django/django/pull/14751", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1669313237
Added username in AbstractBaseUser as an index I changed this from this Q: Why I did this A: This field is used to log users in and to look users up most of the time; as the number of users increases, applications get slow at finding users by username. So I thought to add an index here. Every time I have to create a custom user field for this, and I think most developers do too. I think this will help beginners too. db_index is unnecessary as this field is already marked as unique.
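The point of the rejection, in a minimal sketch: in Django, unique=True already creates a unique index at the database level, so db_index=True on the same field adds nothing.

```python
from django.db import models

class User(models.Model):  # illustrative model, not Django's AbstractBaseUser
    # unique=True implies a unique database index; adding db_index=True
    # here would be redundant, which is why the PR was declined.
    username = models.CharField(max_length=150, unique=True)
```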
gharchive/pull-request
2023-04-15T11:10:21
2025-04-01T06:38:24.411981
{ "authors": [ "ankushagar99", "felixxm" ], "repo": "django/django", "url": "https://github.com/django/django/pull/16768", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
2553318907
Update docs.yml Trac ticket number "N/A" Branch description We're trying to check some alignment between docs in the django repository and in the django project website. The goal for using the long format of the attributes is to increase clarity and make it easier for those who will interact with this action in the future. The removal of the 'q' parameter is intentional and should help us read more information in the command execution logs. Checklist [x] This PR targets the main branch. [ ] The commit message is written in past tense, mentions the ticket number, and ends with a period. [ ] I have checked the "Has patch" ticket flag in the Trac system. [ ] I have added or updated relevant tests. [ ] I have added or updated relevant docs, including release notes if applicable. [ ] I have attached screenshots in both light and dark modes for any UI changes.

@pauloxnet could you say more about why that change helps? It looks like you've changed the parameter names from the short-hands to long-hands, which I assume is to make it clearer what the command does? Two other questions: * See the -q option is now gone, not sure if you missed it or intentionally removed it? * We also have sphinx-build in use in the Makefile – if we did this change from short-hand to long-hand, should it also be done there?

The goal for using the long format of the attributes is to increase clarity and make it easier for those who will interact with this action in the future. The removal of the 'q' parameter is intentional and should help us read more information in the command execution logs. This improvement is preparatory to the alignment work in the generation of the documentation that we want to implement between the Django repository and that of its website. See: https://github.com/django/djangoproject.com/issues/1634 I would leave the changes of the Makefile to a future PR, because in any case it is already different from the command executed here, regardless of the long or short format of the options, which by the way do not change the behavior.

The Ubuntu version is outdated: the django website runs in the Python 3.12 docker container based on Debian 12, and the server runs Ubuntu 24.04.

Thank you @pauloxnet for this PR! Though, in all honesty, it's hard to justify the extra entry in git history for this change; the longer argument forms don't seem to justify the cost.

> Thank you @pauloxnet for this PR! Though, in all honesty, it's hard to justify the extra entry in git history for this change; the longer argument forms don't seem to justify the cost.

Personally, I think that using the long form of arguments helps other people a lot to understand what those arguments do without having to consult the Sphinx guide, so in my opinion this commit had every right to be part of the Git history of the repository, just like commits that fix typos do.

> Personally, I think that using the long form of arguments helps other people a lot to understand what those arguments do without having to consult the Sphinx guide, so in my opinion this commit had every right to be part of the Git history of the repository, just like commits that fix typos do.

I understand your point, but docs.yml is a configuration file that's primarily intended for use by the build system, not for regular contributors.
Most contributors will not need to read or understand this file in depth. While clarity in configuration files is important, the level of detail you're suggesting might not be necessary for this context. Unlike documentation or code, where clarity directly impacts the user experience or maintainability, changes like this don't provide significant value in terms of readability for most contributors. That said, I do appreciate your attention to detail!
gharchive/pull-request
2024-09-27T16:29:25
2025-04-01T06:38:24.421201
{ "authors": [ "nessita", "pauloxnet" ], "repo": "django/django", "url": "https://github.com/django/django/pull/18630", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1517771453
Integrate testing (playwright) + documentation Blocking: https://github.com/dawnwages/wagtail-indymeet/issues/3 This issue must be done first. This may be a good place to ask Ed Rivas or Zan Anderle for their opinion on Django + Playwright, or Andrew Knight or Debbie O'Brien. It doesn't have to be Playwright: https://youtu.be/_tAhD-OCuN8 https://github.com/RachellCalhoun/blog-posts
gharchive/issue
2023-01-03T18:32:43
2025-04-01T06:38:24.423949
{ "authors": [ "dawnwages" ], "repo": "djangonaut-space/wagtail-indymeet", "url": "https://github.com/djangonaut-space/wagtail-indymeet/issues/35", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2240707243
Fix "IntelliJ extension fails on nil theme values" Fixes #221. Thanks for the PR @rads!
gharchive/pull-request
2024-04-12T18:38:22
2025-04-01T06:38:24.430471
{ "authors": [ "djblue", "rads" ], "repo": "djblue/portal", "url": "https://github.com/djblue/portal/pull/222", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
845078844
Implement non-retriable errors ManageConnection::should_retry can now be implemented to prevent bb8 from attempting to retry connection attempts. This serves two purposes: The caller of pool.get will now receive the underlying exception if a "catastrophic" error is hit. Previously, the caller would get a TimedOut with no cause. The error can be surfaced quickly, rather than going through several connection attempts. This is particularly useful for interactive use of bb8, such as allowing a user to directly input a connection to a program such as a CLI. If the connection is incorrect, it is desirable to fail fast with a useful error message. The tests will fail in the postgres/redis libraries because I haven't propagated the changes to make Error implement Clone. I thought we should discuss that first. It seems like there are two paths forward: If you get a catastrophic error, the Sink does not get a copy of the actual error. Instead, it could get a CatastrophicError or something else. This seems reasonable to me. Make Error: Clone as I've shown. This is not backwards compatible and is a fairly annoying bound to have. Yes, I think your understanding of (1) is correct, and I do think it might solve your specific problem. I had forgotten about this exact feature, or I would have mentioned it sooner as potentially helpful to your problem. The initial spawning of connections can be executed asynchronously while returning the first error directly. Note, to make that work you'll need to configure min_idle to something other than 0 (the default). Given the lack of follow-up, going to close this for now. Feel free to reopen if you think it's still relevant. My bad for losing track of this. It didn't solve my problem actually. I ended up just making an initial health check call myself, outside of bb8. I passed the client into bb8 so the initial (http) connection goes into the pool.
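For context, a rough sketch of how the hook proposed in this PR could look from a user's side. The method name should_retry comes from the PR description; the error enum, connection type, and exact trait shape are illustrative assumptions, not a released bb8 API:

```rust
use async_trait::async_trait;

#[derive(Debug)]
enum MyError {
    AuthFailed,   // catastrophic: retrying cannot help
    Io(String),   // transient: worth retrying
}

struct MyConn;
struct MyManager;

#[async_trait]
impl bb8::ManageConnection for MyManager {
    type Connection = MyConn;
    type Error = MyError;

    async fn connect(&self) -> Result<MyConn, MyError> {
        Ok(MyConn) // real code would dial the server here
    }

    async fn is_valid(&self, _conn: &mut MyConn) -> Result<(), MyError> {
        Ok(()) // real code would ping the server here
    }

    fn has_broken(&self, _conn: &mut MyConn) -> bool {
        false
    }

    // The proposed hook: returning false makes pool.get() fail fast with
    // the underlying error instead of retrying until a generic TimedOut.
    fn should_retry(&self, err: &MyError) -> bool {
        !matches!(err, MyError::AuthFailed)
    }
}
```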
gharchive/pull-request
2021-03-30T19:02:42
2025-04-01T06:38:24.437471
{ "authors": [ "djc", "marcbowes" ], "repo": "djc/bb8", "url": "https://github.com/djc/bb8/pull/104", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
388970732
Fix travis nightly rustfmt Let's see if this helps. On review, we weren't using rustfmt in the nightly build anyway for obvious reasons.
gharchive/pull-request
2018-12-08T23:48:55
2025-04-01T06:38:24.438476
{ "authors": [ "Ralith" ], "repo": "djc/quinn", "url": "https://github.com/djc/quinn/pull/113", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
735516546
Reserved word and uniqueness validation

First of all, I want to say thank you for this gem!!! We found an issue where the column name is equal to a reserved word and we have to add an index to make it unique.

```ruby
RSpec.describe DatabaseConsistency::Checkers::MissingUniqueIndexChecker do
  subject(:checker) { described_class.new(model, attribute, validator) }

  let(:model) { klass }
  let(:attribute) { :from }
  let(:validator) { klass.validators.first }

  context 'with postgresql database' do
    include_context 'postgresql database context'

    context 'when uniqueness validation has case sensitive option turned off' do
      context 'when the column name is equal to a reserved word' do
        let(:klass) { define_class { |klass| klass.validates :from, uniqueness: { case_sensitive: false } } }

        before do
          define_database_with_entity do |table|
            table.string :from
            table.index "lower('from')", unique: true
          end
        end

        specify do
          expect(checker.report).to have_attributes(
            checker_name: 'MissingUniqueIndexChecker',
            table_or_model_name: klass.name,
            column_or_attribute_name: 'lower(from)',
            status: :ok,
            message: nil
          )
        end
      end
    end
  end
end
```

It looks like the problem occurs during the following comparison:

```ruby
extract_index_columns(index.columns).sort == sorted_index_columns
```

which equals

```ruby
["lower('from'::text)"] == ["lower(from)"]
```

Hey @chubchenko, thank you for pointing this out! And for using the gem, of course! I'm not sure how smart we should be here, but I guess it would be fine to compare assuming that the value can be cast to any type. So, I'll try to fix it ASAP 👍

The fix is here: https://github.com/djezzzl/database_consistency/pull/73. I'll release it probably today (with some other improvements).

Hey, I just released 0.8.9, please try it out and let me know if it didn't help. Feel free to reopen the issue if needed. P.S. Sorry it took me so long to release.
gharchive/issue
2020-11-03T17:41:37
2025-04-01T06:38:24.445616
{ "authors": [ "chubchenko", "djezzzl" ], "repo": "djezzzl/database_consistency", "url": "https://github.com/djezzzl/database_consistency/issues/72", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2055271026
🛑 bin is down In b1ed902, bin (https://bin.djsnipa1.repl.co) was down: HTTP code: 0 Response time: 0 ms Resolved: bin is back up in 1104976 after 46 minutes.
gharchive/issue
2023-12-25T01:17:47
2025-04-01T06:38:24.461729
{ "authors": [ "djsnipa1" ], "repo": "djsnipa1/upptime", "url": "https://github.com/djsnipa1/upptime/issues/1346", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1285028806
🛑 Writeguard is down In 4a3b654, Writeguard (https://www.writeguard.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Writeguard is back up in 6381a0a.
gharchive/issue
2022-06-26T20:55:17
2025-04-01T06:38:24.464061
{ "authors": [ "djsnipa1" ], "repo": "djsnipa1/upptime", "url": "https://github.com/djsnipa1/upptime/issues/547", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1167419826
🛑 Writeguard is down In c888ede, Writeguard (https://www.writeguard.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Writeguard is back up in d435fc9.
gharchive/issue
2022-03-12T22:44:02
2025-04-01T06:38:24.466646
{ "authors": [ "djsnipa1" ], "repo": "djsnipa1/upptime", "url": "https://github.com/djsnipa1/upptime/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1303029794
🛑 Writeguard is down In 80118aa, Writeguard (https://www.writeguard.com) was down: HTTP code: 0 Response time: 0 ms Resolved: Writeguard is back up in bdbc20a.
gharchive/issue
2022-07-13T07:36:50
2025-04-01T06:38:24.468980
{ "authors": [ "djsnipa1" ], "repo": "djsnipa1/upptime", "url": "https://github.com/djsnipa1/upptime/issues/814", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
61874605
Merge Apps

visitBCN
https://github.com/maurovc/visitBCN
https://itunes.apple.com/us/app/visitbcn/id904676442?l=es&ls=1&mt=8

a menjar
https://github.com/maurovc/aMenjar
https://itunes.apple.com/us/app/a-menjar!/id816473131?l=es&ls=1&mt=8

Color Blur
https://github.com/maurovc/ColorBlur
https://itunes.apple.com/us/app/id928863510

iGrades
https://github.com/maurovc/iGrades
https://itunes.apple.com/us/app/id816987574

Peggsite
https://github.com/jenduf/GenericSocialApp
https://itunes.apple.com/us/app/peggsite/id938445951?mt=8

It would be really helpful if you would create a pull request instead of suggesting new ones as an issue.

It's ok, all contributions are welcome :-)
gharchive/issue
2015-03-15T17:52:11
2025-04-01T06:38:24.484134
{ "authors": [ "Bautistax", "anthonymonori", "dkhamsing" ], "repo": "dkhamsing/open-source-ios-apps", "url": "https://github.com/dkhamsing/open-source-ios-apps/issues/16", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
392893996
Add REST API ENDPOINTS for Slack slash command

First of all, I built the Slack slash command and the healthcheck API :smile:

Yes, that's right. What remains to be done:

1. A request comes in from Slack // x-form-urlencoded 🙄
2. Verify that the request really came from Slack (this part is a liiittle complicated 👻)
3. Use the time parser from the time module you're currently writing to obtain the date and convert it (Korean time to US time, and vice versa)
4. On error, return an error message; otherwise return the converted time. And the response has to be JSON 🙄

Do all that, and we're done 😆 Once we get this far we could add more features (real-time conversion instead of a command, etc.), but since the hardest step will already be done by then, the rest won't be very difficult, so for now the command is the goal. (A sketch of this flow follows below.)

And I'd appreciate it if you merged this 😆

Merged. Took me a while to find the merge button -ㅂ-);;
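To make the four steps above concrete, here is a rough sketch in Python/Flask. The actual timebot code may use a different language and framework; the parse_and_convert helper and the environment variable name are assumptions for illustration only:

```python
import hashlib
import hmac
import os
import time

from flask import Flask, jsonify, request

app = Flask(__name__)
SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]  # assumed config

def parse_and_convert(text: str) -> str:
    # Stub for the time module's parser described above; the real
    # implementation would convert between Korean and US times.
    if not text:
        raise ValueError("no time given")
    return text  # placeholder

def verify_slack_request(req) -> bool:
    # Slack signs "v0:<timestamp>:<raw body>" with the app's signing secret.
    ts = req.headers.get("X-Slack-Request-Timestamp", "0")
    if abs(time.time() - int(ts)) > 60 * 5:
        return False  # stale request, possible replay
    base = f"v0:{ts}:{req.get_data(as_text=True)}".encode()
    digest = hmac.new(SLACK_SIGNING_SECRET.encode(), base, hashlib.sha256)
    expected = "v0=" + digest.hexdigest()
    return hmac.compare_digest(expected, req.headers.get("X-Slack-Signature", ""))

@app.route("/slack/time", methods=["POST"])  # x-www-form-urlencoded payload
def convert_time():
    if not verify_slack_request(request):
        return "invalid signature", 401
    text = request.form.get("text", "")
    try:
        converted = parse_and_convert(text)
    except ValueError as exc:
        return jsonify({"response_type": "ephemeral", "text": f"error: {exc}"})
    return jsonify({"response_type": "in_channel", "text": converted})
```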
gharchive/pull-request
2018-12-20T05:18:14
2025-04-01T06:38:24.491456
{ "authors": [ "kkweon", "nicewook" ], "repo": "dl4b/timebot", "url": "https://github.com/dl4b/timebot/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
157345052
Areas of D usage

As the first five minutes matter, I thought it would be nice if we had an overview of the areas in which D is used. It differs from the great Overview page in that, for every area, it tries to inform about:

- applications written in D
- key reasons/selling points

That being said, this document is a draft and your feedback & review is highly appreciated. If I left out an area or you miss a great application - please let me know! Also note that I propose to put this on dlang.org, because in contrast to the wiki we can go through the very helpful peer-review stage and have the official branding.

Can someone have a look at the content? I believe providing such an overview easily accessible from the front page is part of our mission to make D more mainstream!

Agreed that this is important. I wouldn't put "Bare metal" at the top of the list, though. It's not D's strongest point currently. Also, "Compilers" are not really bare metal applications.

> Agreed that this is important. I wouldn't put "Bare metal" at the top of the list, though.

How would you structure it?

> It's not D's strongest point currently. Also, "Compilers" are not really bare metal applications.

Renamed the category to System programming, fixed all the links and went over the text again.

I'd say by "popularity" of the usage area. For example "Numerical computing / Data science" seems to be a hot topic, judging from the forum activity. On the other hand, "Game development" is sexy. But we probably shouldn't make a science out of it.

> I'd say by "popularity" of the usage area. For example "Numerical computing / Data science" seems to be a hot topic, judging from the forum activity. On the other hand, "Game development" is sexy

I put academia on purpose at the bottom to avoid the impression it's an "academic language". Maybe we can create a new grouping? We could split academia into research and teaching, but then teaching would be alone. Does someone else have a better idea?

This one is still broken, need to use LINK2 as below... Rebased & fixed.

The spacing on GPU programming looks wonky due to it being justified. Also, I wouldn't necessarily group Numerical Computing, GPU Computing, and Data Science into Academia. Maybe split them into their own category. I might also mention how chaining range operations is like dplyr or something like that in the data science section. This should be linked from somewhere. Documentation→Articles? Resources? I see that overview.html is apparently only linked from the Learn tour. That's not exactly great.

> The spacing on GPU programming looks wonky due to it being justified. Also, I wouldn't necessarily group Numerical Computing, GPU Computing, and Data Science into Academia. Maybe split them into their own category.

It's now named cutting-edge research.

> I might also mention how chaining range operations is like dplyr or something like that in the data science section.

On it - will come soon. Thanks for the idea.

> This should be linked from somewhere.

I now added overview and this article to the bottom of dlang.org - I think it fits, because the heading is "Why D?". However the wording isn't the best yet.

> I see that overview.html is apparently only linked from the Learn tour. That's not exactly great.

I know (I made many PRs to the tour) - we are highly "underlinked"!

> Resources?

We have to be careful: the current number of menu items is already too high. Can we somehow create a new group?

> Articles

Could be an idea, but probably not the first place I would search for an overview.

Documentation.
We should merge http://dlang.org/comparison.html (linked from documentation) with http://dlang.org/overview.html

I had another look through the text and fixed a couple of points. Btw, this is how I propose to list the overview on the front page: [screenshot omitted] How should we proceed? More nitpicks?

Hey, I went through the text again (minor revisions), and I added fancy icons - they are not optimal yet. The usual CSS hacks (responsive, nice alignment) are missing, however I think you could already check whether some icons don't fit their overview sections. See all.

This looks really cool but I think it should be approved by W/A. Normally this would probably belong on the wiki, but I really like how it's laid out.

Some weird wrapping here, maybe just use a constant margin for the whole section: [screenshot omitted]

The icon currently used for "Industry" seems more associated with academia, I think.

Last week I tried to make the icons more responsive. It's not perfect, but should look a lot better than before. Moreover I made two nitpick passes over the text - can someone take 5-10 minutes to give the text a final nitpick round, s.t. we can ship an initial version? :)

I think using this for the industry icon would make more sense. Why is the industry icon on the other side of the page? I would remove the word "perfect" from the industry section. Your aim as a programmer is never to make perfect code, as you know that's impossible.

> I think using this for the industry icon would make more sense.

It isn't part of FontAwesome 4.2 (see also #1371) - I guess whichever icon we decide on there, I will just use the same here.

> Why is the industry icon on the other side of the page?

The idea was to indent the subcategories, so I put the category icons on the right side. Is this too weird? Better ideas?

"few scientific programmers care about" sounds negative. "few scientific programs need to worry about" sounds better IMO. Also I would mention parallelism somewhere in that section. "Last but least" -> "Last but not least". I would link to the orgs using D page somewhere in the industry section.

> Is this too weird? Better ideas?

Yeah, it's kind of jarring. This effect is already achieved via the typography of the section headers.

> Yeah, it's kind of jarring. This effect is already achieved via the typography of the section headers.

Alrighty :)

> Also I would mention parallelism somewhere in that section.

I added: "allows to easily parallelize your algorithm and pipelines,"

> I would link to the orgs using D page somewhere in the industry section

I added: "D has been used in many, diverse domains. A short selection will be presented below. For a full overview you can browse the list of reported $(LINK2 $(ROOT_DIR)orgs-using-d.html, organizations which use the D Language)."

Thanks @JackStouffer :) Do we need more eyes or is this ready for the first version?

I guess we're ready. Should we take away the funny but aged D mascot?

Please squash commits.

> I guess we're ready. Should we take away the funny but aged D mascot?

There is a related discussion for the DLang Tour: https://github.com/stonemaster/dlang-tour/pull/248

> Please squash commits

With pleasure :)

Engage!
gharchive/pull-request
2016-05-28T15:58:03
2025-04-01T06:38:24.524892
{ "authors": [ "CyberShadow", "JackStouffer", "aG0aep6G", "andralex", "jmh530", "schuetzm", "wilzbach" ], "repo": "dlang/dlang.org", "url": "https://github.com/dlang/dlang.org/pull/1314", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
2709133707
utf8 string not read/written to windows console

sum.proxy reported this on 2014-06-25T13:48:23Z. Transferred from https://issues.dlang.org/show_bug.cgi?id=12990

CC List: aldacron, bugzilla (@WalterBright), dlang-bugzilla (@CyberShadow)

Description:

```d
import std.stdio;

void main() {
    string s = stdin.readln();
    write(s);
}
```

The code above should write a unicode (specifically cyrillic) string to output to a windows console (with cp set to 65001), but the string comes out empty. The same code works correctly when run through the windows debugger windbg.exe, so hopefully it will be an easy fix.

sum.proxy commented on 2014-06-27T15:27:51Z: I still see no output in the regular console (no exception indication either). However, when I run it with windbg.exe it throws some exception (can't tell which one exactly, couldn't figure out how to load debug symbols). Appears like a write problem to me..

dfj1esp02 commented on 2014-06-27T18:56:55Z: Then try write(cast(ubyte[])s);

dfj1esp02 commented on 2014-06-27T15:02:18Z:

```d
import std.stdio, std.utf;

void main() {
    string s = stdin.readln();
    validate(s);
    write(s);
}
```

Check if validation passes.

dfj1esp02 commented on 2014-06-27T14:59:07Z: This bug is probably better to split. It either read an invalid utf-8 string, or couldn't write a valid utf-8 string.

sum.proxy commented on 2014-06-28T07:12:22Z: This time it returned an empty array ([]). Thanks.

sum.proxy commented on 2014-10-25T10:41:22Z: I tried the new version of the compiler with the issue you referred to, but alas - no luck. Please see https://issues.dlang.org/show_bug.cgi?id=1448#c12 SetConsoleCP(65001) and SetConsoleOutputCP(65001) didn't help either. Thanks.

sum.proxy commented on 2014-08-15T11:34:25Z: Sorry, any feedback on this one?

dlang-bugzilla (@CyberShadow) commented on 2014-10-25T02:10:16Z: Try calling SetConsoleCP(65001) and SetConsoleOutputCP(65001).

dfj1esp02 commented on 2014-07-07T09:00:29Z: An empty array means no input rather than no output. Did it wait for the input? Do you compile it for the console or GUI subsystem? Does `echo 000 | yourprogram.exe` work?

sum.proxy commented on 2014-07-07T09:38:18Z: Yes, it does wait for the input, but the output is empty. It's a console application and sending the input through a pipe seems to work correctly.

dlang-bugzilla (@CyberShadow) commented on 2014-10-25T13:51:34Z: Indeed. Happens with both the DMC and MSVC runtime.

sum.proxy commented on 2014-07-03T08:02:07Z: I also tried it on a 32-bit windows system and the behavior is the same - no output.

sum.proxy commented on 2014-10-25T20:25:45Z: From what I know this program will work incorrectly for any non-ascii unicode input, which I have confirmed through simple tests. scanf and strlen rely on '\0' to indicate string termination, but I don't think this goes well with unicode strings. I believe the right way to do something similar (without buffer length) is this:

```c
#include <stdio.h>
#include <fcntl.h>
#include <io.h>

int main( void )
{
    wchar_t buf[1024];
    _setmode( _fileno( stdin ), _O_U16TEXT );
    _setmode( _fileno( stdout ), _O_U16TEXT );
    wscanf( L"%ls", buf );
    wprintf( L"%s", buf );
}
```

For further info please refer to http://www.siao2.com/2008/03/18/8306597.aspx and http://msdn.microsoft.com/en-us/library/tw4k6df8%28v=vs.120%29.aspx HTH, Thanks.

sum.proxy commented on 2014-10-25T14:35:30Z: Do you find it necessary to report the issue elsewhere, or will the guys in charge of https://issues.dlang.org/show_bug.cgi?id=1448 do it?

dlang-bugzilla (@CyberShadow) commented on 2014-10-25T13:53:32Z: "scanf" misbehaves in the same way.
Not a D bug, I think.

sum.proxy commented on 2014-10-28T12:28:55Z: Or perhaps "the right" way would be to stick to UTF-16, since it's the default for Unicode on Windows.

dlang-bugzilla (@CyberShadow) commented on 2014-10-25T14:42:32Z: Report it where? To Microsoft? Figuring out why scanf is failing would probably be the next step to resolving this.

sum.proxy commented on 2014-10-28T11:32:14Z: I believe the problem is that the default internal representation of Unicode in Windows is UTF-16, which implies that some sort of conversion would be necessary here. I haven't found a way to do it right yet.

dlang-bugzilla (@CyberShadow) commented on 2014-10-26T00:35:23Z: (In reply to Sum Proxy from comment #18)

> scanf and strlen rely on '\0' to indicate string termination, but I don't think this goes well with unicode strings.

Not true. At least, not true with UTF-8, which is what we set the CP to.

> I believe the right way to do something similar (without buffer length) is this:

I would not say that's the "right" way. That's the way to read wchar_t text, but we need UTF-8 text.

sum.proxy commented on 2014-10-25T14:50:12Z: Are you referring to C's scanf? Is it consistently reproducible in a small chunk of C code?

dlang-bugzilla (@CyberShadow) commented on 2014-10-25T15:01:25Z: Yep:

```c
/////////// test.c ///////////
void main()
{
    char buf[1024];
    SetConsoleCP(65001);
    SetConsoleOutputCP(65001);
    scanf("%s", buf);
    printf("%d", strlen(buf));
}
//////////////////////////////
```

bugzilla (@WalterBright) commented on 2023-06-22T08:10:47Z: (In reply to Sum Proxy from comment #22)

> SetConsoleCP(1200);

1200: utf-16, Unicode UTF-16, little endian byte order (BMP of ISO 10646); available only to managed applications. https://learn.microsoft.com/en-us/windows/win32/intl/code-page-identifiers

sum.proxy commented on 2014-10-28T12:53:37Z: This actually works on my system:

```d
///////////// test.d //////////////
import std.stdio;
import std.c.windows.windows;

extern(Windows) BOOL SetConsoleCP( UINT );

void main() {
    SetConsoleCP(1200);
    string s = stdin.readln();
    write(s);
}
///////////////////////////////////
```
gharchive/issue
2014-06-25T13:48:23
2025-04-01T06:38:24.553159
{ "authors": [ "dlangBugzillaToGithub" ], "repo": "dlang/phobos", "url": "https://github.com/dlang/phobos/issues/9403", "license": "BSL-1.0", "license_type": "permissive", "license_source": "github-api" }
505980105
Port Radar Visualization to Open Distro This issue is an idea for the Open Distro Hack Day at All Things Open on October 14. The idea is to port this visualization to Open Distro and submit it as an official plugin for Open Distro for Kibana. related to https://github.com/chaoss/grimoirelab/issues/219
gharchive/issue
2019-10-11T17:50:35
2025-04-01T06:38:24.570268
{ "authors": [ "GeorgLink", "valeriocos" ], "repo": "dlumbrer/kbn_radar", "url": "https://github.com/dlumbrer/kbn_radar/issues/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
369895933
Wait for Cloud-Init To Complete

Version Reports:

```
Distro version of host: cat /etc/*release | grep PRETTY_NAME
PRETTY_NAME="Ubuntu 18.04.1 LTS"

Terraform Version Report: terraform --version
Terraform v0.11.8
+ provider.libvirt (unversioned)
+ provider.template v1.0.0

Libvirt version: virsh --version
4.0.0

terraform-provider-libvirt plugin version (git-hash): 0.5.0 (Downloaded binary from releases)
```

Description of Issue/Question: I've got cloud-init set up to install an Ansible playbook on the VM. Is there any way to wait until the cloud-init script has completed before outputting values?

Additional Infos: SELinux is disabled, ufw is inactive. Nothing special about my config.

Hi! Check this out: https://github.com/hashicorp/packer/issues/2639. Adding this would make the codebase rely on unstable third-party changes, and would impact the codebase a lot. You can solve this in your workflow, e.g. with a remote-exec and looping.

@kjenney are you satisfied with the answer, more or less? To me we can close this, because in a realistic world you can use remote-exec and wait for it, or build another workflow around the core. On the terraform-libvirt side I see this as really far from a future implementation (it is not what I would prioritize as high, imho). Feel free to ask for any additional info.

Closing; for additional questions feel free to join the gitter chat! Thx
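A minimal sketch of the remote-exec-and-wait workaround mentioned above, written in newer HCL syntax (Terraform 0.12+) with placeholder connection details; `cloud-init status --wait` blocks until cloud-init finishes, so anything depending on this resource runs afterwards:

```hcl
resource "null_resource" "wait_for_cloud_init" {
  provisioner "remote-exec" {
    # Blocks until cloud-init (and whatever it installs) has completed.
    inline = ["cloud-init status --wait"]

    connection {
      type        = "ssh"
      host        = libvirt_domain.vm.network_interface[0].addresses[0]  # placeholder
      user        = "ubuntu"                                             # placeholder
      private_key = file("~/.ssh/id_rsa")                                # placeholder
    }
  }
}

# Outputs that reference null_resource.wait_for_cloud_init.id are only
# produced after the provisioner has finished.
```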
gharchive/issue
2018-10-14T12:05:32
2025-04-01T06:38:24.592969
{ "authors": [ "MalloZup", "kjenney" ], "repo": "dmacvicar/terraform-provider-libvirt", "url": "https://github.com/dmacvicar/terraform-provider-libvirt/issues/449", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1863472564
Add gpt-3.5-turbo-16k model to encoding mapping support Fixes #19 Thanks a lot for your contribution!
gharchive/pull-request
2023-08-23T14:37:30
2025-04-01T06:38:24.621234
{ "authors": [ "anthonypuppo", "dmitry-brazhenko" ], "repo": "dmitry-brazhenko/SharpToken", "url": "https://github.com/dmitry-brazhenko/SharpToken/pull/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
116060117
build error: threadediter.h:326:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’

After executing make in the terminal, the following error occurs:

```
g++ -std=c++0x -c -DMSHADOW_FORCE_STREAM -Wall -O3 -I./mshadow/ -I/usr/local/opt/openblas/include -I./dmlc-core/include -fPIC -Iinclude -msse3 -funroll-loops -Wno-unused-parameter -Wno-unknown-pragmas -DMSHADOW_USE_CUDA=0 -DMSHADOW_USE_CBLAS=1 -DMSHADOW_USE_MKL=0 -DMSHADOW_RABIT_PS=0 -DMSHADOW_DIST_PS=0 -DMXNET_USE_OPENCV=1 `pkg-config --cflags opencv` -fopenmp -c src/io/io.cc -o build/io/io.o
In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8:
./dmlc-core/include/dmlc/threadediter.h: In lambda function:
./dmlc-core/include/dmlc/threadediter.h:326:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’
    consumer_cond_.notify_all();
./dmlc-core/include/dmlc/threadediter.h:326:37: note: candidate is:
In file included from ./dmlc-core/include/dmlc/threadediter.h:15:0, from src/io/./iter_prefetcher.h:13, from src/io/io.cc:8:
/home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: void std::condition_variable::notify_all() <near match>
    notify_all() noexcept;
/home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: no known conversion for implicit ‘this’ parameter from ‘const std::condition_variable*’ to ‘std::condition_variable*’
In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8:
./dmlc-core/include/dmlc/threadediter.h:333:37: error: no matching function for call to ‘std::condition_variable::notify_all() const’
    consumer_cond_.notify_all();
./dmlc-core/include/dmlc/threadediter.h:333:37: note: candidate is:
In file included from ./dmlc-core/include/dmlc/threadediter.h:15:0, from src/io/./iter_prefetcher.h:13, from src/io/io.cc:8:
/home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: void std::condition_variable::notify_all() <near match>
    notify_all() noexcept;
/home/rescape/lib/include/c++/4.8.0/condition_variable:83:5: note: no known conversion for implicit ‘this’ parameter from ‘const std::condition_variable*’ to ‘std::condition_variable*’
In file included from src/io/./iter_prefetcher.h:13:0, from src/io/io.cc:8:
./dmlc-core/include/dmlc/threadediter.h:352:45: error: no matching function for call to ‘std::condition_variable::notify_all() const’
    if (notify) consumer_cond_.notify_all();
```

Any suggestions on how to solve this? Thanks!

Perhaps my gcc version is the cause? My gcc version is gcc 4.8.0 (GCC).

A similar issue was reported in a previous project with GCC 4.8.0: https://github.com/dmlc/cxxnet/issues/221 Could you please try to upgrade it? At least 4.8.4 is good to go.

Yes, I upgraded my gcc version and this message disappeared; perhaps the minimum gcc version required to build mxnet should be updated on the doc site.

Happy to hear that. Thanks for the advice.
gharchive/issue
2015-11-10T08:23:52
2025-04-01T06:38:24.629683
{ "authors": [ "unityunreal", "winstywang" ], "repo": "dmlc/mxnet", "url": "https://github.com/dmlc/mxnet/issues/530", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
211970648
[WIP] Proposal: Extend imdecode to uint16

I am working with image data that does not fit well into uint8. Given that I am working with 100k-1M images, I need a data iterator that is fast, such as the ImageIter. I have been looking into extending the ImageIter to use other types besides uint8 for reading data. It appears that after the initial read, there is no problem with the container type. I've tried writing a new iter in python (using python reads), and they are still slow. So I've turned my attention to extending mxnet's imdecode function to work with uint16 in addition to uint8. This PR is a sketch; if there is interest I will continue. There are a number of issues remaining, such as mshadow having no uint16 type (to replace mshadow::kUint8). I am open to comments on my approach. An alternative would be to allow I/O from CV mat files, which can store any format.

How about adding a dtype parameter to allow uint8, int32, and float32? I don't think there would be much difference speed-wise between int16 and int32. Also, since you are going to convert it to float later anyway, why not directly convert to float in imdecode?

The idea behind uint16 is that it can be stored in tifs, which I know opencv can read; I am not sure about int32 and imdecode. In the end, what image formats should this thing read? We could make a new templated version of mxnet::io::Imdecode, then the original (and exposed) mxnet::io::Imdecode could contain a switch, which chooses the correct templated function based on the dtype param.

So you are suggesting that we drop mshadow::kUint8 and replace it with mshadow::float16? I have not looked in depth at mxnet and mshadow, but I assume most processing is done on singles (float16).

Yes, something like mx.io.imdecode(..., dtype='float32') should work. We don't need to make imdecode templated. We only need to do a conversion at the end with MSHADOW_DTYPE_SWITCH like the other ops.

I went back and re-read the opencv docs on imencode and imdecode. opencv only supports uint8 and uint16 formats for all image encoding and decoding. If we wanted an alternative image I/O format, one that can use a larger variety of dtypes, I think we need to look outside opencv. NDArrays load very fast, but the files are quite large (unsure if there is a way to change this), and do not offer compression? Pillow supports a lot of formats, but it is python only, and I would want to test what formats+dtypes it can actually save/read. Given that imagerecio can be handled mostly in python now, python-only might not be a problem (for the python platform). I would want the final result of all this to be fast I/O such as that offered by RecordIO.

It doesn't matter what format cudnn supports. You can use opencv to decode into a temporary buffer of whatever type and convert it to the target output format mxnet supports. (A sketch of this decode-then-convert approach follows below.)

I'm saying that if you want to save/read dtypes that are not uint8 or uint16, you cannot use opencv for img encoding or decoding.

Can one of the admins verify this patch?

We only support images that opencv can read. Ok. Let's close this. The effort is not worth the one uncommon new format. I now use a simpler approach to pack images with other dtypes.
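A sketch of the decode-then-convert approach suggested in the thread, in standalone OpenCV C++ (the function name and parameters are illustrative, not mxnet code):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Decode at native bit depth (8U or 16U), then convert to the requested
// depth, e.g. CV_32F. IMREAD_ANYDEPTH preserves 16-bit data instead of
// truncating it to 8-bit.
cv::Mat DecodeAs(const std::vector<uchar>& buf, int target_depth) {
  cv::Mat tmp = cv::imdecode(buf, cv::IMREAD_ANYDEPTH | cv::IMREAD_ANYCOLOR);
  cv::Mat out;
  // convertTo handles the widening/narrowing cast in one pass.
  tmp.convertTo(out, CV_MAKETYPE(target_depth, tmp.channels()));
  return out;
}
```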
gharchive/pull-request
2017-03-05T18:05:18
2025-04-01T06:38:24.636990
{ "authors": [ "jmerkow", "piiswrong" ], "repo": "dmlc/mxnet", "url": "https://github.com/dmlc/mxnet/pull/5261", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
198423879
solve mxnet docker build error by adding --no-same-owner to tar in make/deps.mk

If I build ps-lite in the mxnet docker image, I hit the following error when tar extracts some deps in make/deps.mk:

```
Cannot change ownership to uid xxxx, gid xxxx: Permission denied
```

According to this post, https://www.krenger.ch/blog/linux-tar-cannot-change-ownership-to-permission-denied/

I fixed this and tested OK when building ps-lite in the mxnet docker build.

I can not understand the travis error...
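For anyone hitting the same thing, this is the shape of the fix (illustrative; the actual rule in make/deps.mk may differ):

```sh
# Before: tar tries to restore the archive's uid/gid and fails inside docker
tar -xzf deps.tar.gz

# After: extracted files are owned by the invoking user instead
tar --no-same-owner -xzf deps.tar.gz
```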
gharchive/pull-request
2017-01-03T07:45:55
2025-04-01T06:38:24.639717
{ "authors": [ "xlvector" ], "repo": "dmlc/ps-lite", "url": "https://github.com/dmlc/ps-lite/pull/69", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
386347471
[SCHEDULE] Fix code lowering when loop condition depends on outer axis. Fixes #2207 cc @ajtulloch Oh wow, thanks @tqchen.
gharchive/pull-request
2018-11-30T20:57:02
2025-04-01T06:38:24.644244
{ "authors": [ "ajtulloch", "tqchen" ], "repo": "dmlc/tvm", "url": "https://github.com/dmlc/tvm/pull/2208", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
183869300
How to tune memory parameters when training a large data set on yarn

I have a large data set stored on hdfs where the training data is about 800G and the validation data is about 90G. The data format is libsvm and each instance has 308 features at most. When running the training process via yarn, I always get the error:

```
INFO dmlc.ApplicationMaster: Diagnostics., num_tasks100, finished=20, failed=80
[DMLC] Task 26 killed because of exceeding allocated physical memory
```

The job is submitted via:

```
../dmlc-core/tracker/dmlc-submit \
  --cluster yarn \
  --num-workers 100 \
  --num-servers 10 \
  --worker-memory 20g \
  --server-memory 20g \
  --queue ${my_queue} \
  --ship-libcxx /opt/gcc-4.8.2/lib64/ \
  ../xgboost train.conf
```

train.conf:

```
# General Parameters, see comment for each definition
# choose the booster, can be gbtree or gblinear
booster = gbtree
# choose logistic regression loss function for binary classification
objective = binary:logistic

# Tree Booster Parameters
# step size shrinkage
eta = 1.0
# minimum loss reduction required to make a further partition
gamma = 1.0
# minimum sum of instance weight(hessian) needed in a child
min_child_weight = 1
# maximum depth of a tree
max_depth = 4
# instance sampling
subsample = 0.8
# feature sampling when splitting
colsample_bylevel = 0.8

# Task Parameters
# the number of round to do boosting
num_round = 2
# 0 means do not save any model except the final round model
save_period = 0
# The path of training data
data = "hdfs://.../train"
# The path of validation data, used to monitor training process, here [test] sets name of the validation set
eval[test] = "hdfs://.../valid"
# evaluate on training data as well each round
eval_train = 1
# The path of model out
model_out = "hdfs://.../model/"
```

Please help me tune the memory parameters so the job can run successfully. @tqchen

I have tested that xgboost cannot handle such large data. So, down-sample before training.

@formath Hi, sorry for posting on this closed issue, but I have encountered the same problem as you had (you can check the details here). I'm wondering what's the largest data size you have tested with the YARN version of xgboost? Because in my case, it cannot handle even 10GB of data (I allocated about 40GB of physical memory). Thank you!
gharchive/issue
2016-10-19T05:32:09
2025-04-01T06:38:24.647198
{ "authors": [ "formath", "yichenpan" ], "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/issues/1681", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
363769562
how can I extract the feature weights of the gblinear booster

I am trying to extract the weights of my input features from a gblinear booster. But it seems like it's impossible to do it in python. I am wondering if there's any way to extract them.

@dwy904 Did you try using get_dump() or dump_model()?

Here is what I got from model_rank.get_dump():

```
['bias:\n9.55492e+08\nweight:\n0.288463\n0.151129\n0.00716773\n31.5678\n-1.86324\n0.160523\n2.6101\n-0.0675516\n-7.74334e-10\n0.240564\n0.518223\n-2.97623\n-1.05936\n1.29177\n-0.597921\n1755.24\n-3.80193\n-3.40322\n372.426\n0.0564985\n0.0504407\n0.772678\n1.65475\n1.73482\n0.0058949\n-0.0397106\n-22.6941\n-0.82854\n712.713\n-0.138361\n1.21099\n-0.597615\n-1.56131\n-2.138\n0.13256\n-0.0376063\n-2.89459\n-2.41698\n0.653052\n']
```

One more question: is the ordering of the model_rank.get_dump() output exactly the same as the ordering of model_rank.feature_names, except for the bias term?

I think so.

Thank you so much!
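Building on the answers above, a small sketch that parses the get_dump() text into named weights. The dump format (a 'bias:' header followed by a 'weight:' header) matches the output shown above; bst and its feature_names are assumed to exist:

```python
def gblinear_weights(bst):
    """Map each feature name to its gblinear weight, plus the bias."""
    dump = bst.get_dump()[0]                  # single string for gblinear
    lines = dump.strip().split("\n")
    bias = float(lines[1])                    # value right after "bias:"
    weights = [float(v) for v in lines[3:]]   # values after "weight:"
    return bias, dict(zip(bst.feature_names, weights))
```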
gharchive/issue
2018-09-25T21:15:55
2025-04-01T06:38:24.650604
{ "authors": [ "dwy904", "hcho3" ], "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/issues/3722", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
296254220
[core] fix slow predict-caching with many classes

Addresses the issue of the O(num_class^2) prediction cache behavior #1689, #2926 where cache updates within CommitModel may start dominating with many classes. The reason was that the cache was updated for each single class group commit in a boosting round. E.g., with

```r
set.seed(1)
n <- 1e4
num_feat <- 200
num_class <- 150
y <- apply(rmultinom(n, 1, rep(1, num_class)), 2, function(yy) which(yy != 0)) - 1
dtr <- xgb.DMatrix(matrix(rnorm(n*num_feat), n, num_feat), label = y)
param <- list(objective='multi:softprob', num_class=num_class, debug_verbose=1,
              tree_method='hist', subsample=0.6)
bst <- xgb.train(param, data=dtr, nrounds=1)
rm(bst)
gc()
```

the timing result before (note that no watchlist was used, and using it would make caching even slower):

```
[16:21:50] ======== Monitor: GBTree ========
[16:21:50] BoostNewTrees: 6.436368s
[16:21:50] CommitModel: 10.707612s
```

and after the fix:

```
[16:22:51] ======== Monitor: GBTree ========
[16:22:51] BoostNewTrees: 6.497372s
[16:22:51] CommitModel: 0.109006s
```

Some minor changes:

- remove redundant 'if' in cpu_predictor
- get rid of compiler warnings
- make R on Windows not complain about the configure script by providing an empty configure.win
- workaround for R v3.4.3 bug #3081

Codecov Report: Merging #3109 into master will decrease coverage by <.01%. The diff coverage is 10.52%.

```diff
@@ Coverage Diff @@
##           master   #3109     +/-  ##
============================================
- Coverage    43.79%  43.79%   -0.01%
  Complexity     228     228
============================================
  Files          159     159
  Lines        12507   12507
  Branches       466     466
============================================
- Hits          5478    5477       -1
- Misses        6837    6838       +1
  Partials       192     192
```

Impacted Files (Coverage Δ): src/gbm/gbtree.cc 17.95% <0%> (-0.1%); src/objective/regression_obj.cc 84% <100%> (ø); src/predictor/cpu_predictor.cc 68.71% <50%> (+0.19%).

Continue to review the full report at Codecov. Last update 375d753...c414b00.
gharchive/pull-request
2018-02-12T02:52:00
2025-04-01T06:38:24.661201
{ "authors": [ "codecov-io", "khotilov" ], "repo": "dmlc/xgboost", "url": "https://github.com/dmlc/xgboost/pull/3109", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
274097084
Improve resource requirements for utilitarian jobs

For cleanup, logcollect and merge jobs. By default, they use 1 core, request 1GB of RAM and have a MaxRSS watchdog set to ~2.3GB. We should check ES data and maybe lower these requirements for a better usage of the resources.

Might not be a very good idea... I've just found a merge job for the TaskChain_Relval_Multicore template that had a performance failure:

```
PerformanceError PerformanceKill (Exit Code: 50660)
Error in CMSSW step cmsRun1
Number of Cores: None
Job has exceeded maxRSS: 2355.2
Job has RSS: 2425
```

Weird. Merge jobs should all use fast-copy of baskets, which is fast and should use little memory. Might be worthwhile to get a log of that job and figure out what went wrong... Are you volunteering yourself to look at it? :)

#8451 is maybe a duplicate. You mean the other way around :)

From #8451: make sure you also update what goes into htcondor when you rework this.

So... how straightforward is it to increase the threshold to some higher value, say, 4GB?

You don't want to do this for all such jobs. Requesting 4GB for standard merge, cleanup, logcollect etc jobs means you have fewer resources that can run them (you wait longer to run them and can run fewer of them) and you leave fewer resources available for other jobs. If special types of utility jobs (i.e. NANOAOD merges that aren't really standard merges) need more memory, we should request more memory just for these special types of jobs. Cleanup and LogCollect could probably be reduced though.

> If special types of utility jobs (i.e. NANOAOD merges that aren't really standard merges) need more memory, we should request more memory just for these special types of jobs.

Thanks! That's what we want. For the record, several of the Task getters/setters methods don't touch "utilitarian" jobs. Right now we cannot change resource requirements for such jobs, and if we want to support updates to those tasks too, that's going to be tricky and likely ugly for the assigner/unified side (the only way I see memory updates working without causing issues in other tasks would be specifying every single task and its Memory requirement).
gharchive/issue
2017-11-15T09:59:21
2025-04-01T06:38:24.699402
{ "authors": [ "amaltaro", "bbockelm", "hufnagel", "thongonary", "vlimant" ], "repo": "dmwm/WMCore", "url": "https://github.com/dmwm/WMCore/issues/8331", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1794898674
🛑 AR Panel is down In f6b0753, AR Panel (https://panel.siembro.com) was down: HTTP code: 404 Response time: 128 ms Resolved: AR Panel is back up in 6d70b22.
gharchive/issue
2023-07-08T11:23:56
2025-04-01T06:38:24.706654
{ "authors": [ "dnahmiyasSiembro" ], "repo": "dnahmiyasSiembro/uptime_service", "url": "https://github.com/dnahmiyasSiembro/uptime_service/issues/215", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
598386409
ERROR:root:call() missing 1 required positional argument: 'value' When I ran %reload_ext lab_black in my Jupyterlab, I got an error message: ERROR:root:__call__() missing 1 required positional argument: 'value' Traceback (most recent call last): File "/opt/anaconda3/lib/python3.7/site-packages/lab_black.py", line 218, in format_cell formatted_code = _format_code(cell) File "/opt/anaconda3/lib/python3.7/site-packages/lab_black.py", line 29, in _format_code return format_str(src_contents=code, mode=FileMode()) TypeError: __call__() missing 1 required positional argument: 'value' Here's line 29: Here's line 218: Which function misses what argument? Also, I just updated to Jupyterlab to 2.1.0 so I don't know if there are weird incompatibility issues. Did anyone encounter this? Thanks!! I have been receiving the same error as well. I doubt if it has anything to do with compatibility of the version. Anyway currently I am using JupyterLab version 1.2.16 Any suggestions on this would be helpful. This is caused by having an old version of black. pip install -U black fixed it for me.
gharchive/issue
2020-04-12T01:26:53
2025-04-01T06:38:24.730667
{ "authors": [ "Yuan-Meng", "ashwinidathatri", "terhorst" ], "repo": "dnanhkhoa/nb_black", "url": "https://github.com/dnanhkhoa/nb_black/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
331119221
"See More Results" when searching produces an error when searching in Firefox Description While on Firefox, if the user tries to search and use "See More Results" at the bottom of the search results, it produces an error and does not open the results page. Expected result Jump to search result page. This works correctly on google chrome. Current result Error in console: TypeError: access to strict mode caller function is censored Affected version [x] 9.2 Fixed by #2039 Please close @dnnsoftware/tag is you have no further comment
gharchive/issue
2018-06-11T09:51:36
2025-04-01T06:38:24.738081
{ "authors": [ "tpluscode" ], "repo": "dnnsoftware/Dnn.Platform", "url": "https://github.com/dnnsoftware/Dnn.Platform/issues/2108", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
331628703
Cannot create new user with email address as username Description Going to Persona Bar > Security > Member Accounts tab > Registration Settings, and marking Use Email Address as Username, will generate an error upon login, displaying message "A critical error has occurred. Please check the Event Viewer for further details." Steps to reproduce Set up a clean DNN 9.2 install. Go to Persona Bar > Manage > Users. Click the "Add User" button. Fill details for new user, setting an email address as the username. Click "Save" button. An error message is displayed saying: "The username specified is invalid. Please specify a valid username." Current result After upgrading to DNN 9.2, customer is no longer able to add user with their email addresses as username. The following Error Message is displayed: "The username specified is invalid. Please specify a valid username." Expected result Being able to add email address as a valid user name in the default user creation form. Affected version [x] 9.2 This has been fixed in PR #2070 and has been caused by #1972 There is also a companion PR dnnsoftware/Dnn.AdminExperience.Extensions#537 which applies the same fix in Persona Bar and removes some code duplication Still occurs in 9.3.2 @nickcrisp I cannot reproduce in 9.3.2 using the above steps to reproduce... I can not reproduce either. @valadas Close? Actually, this one IS closed :) @nickcrisp can you please open a new issue with new steps to reproduce if you can still make it happen on 9.3.2 Thanks.
gharchive/issue
2018-06-12T15:03:09
2025-04-01T06:38:24.743740
{ "authors": [ "Tychodewaard", "nickcrisp", "tpluscode", "valadas" ], "repo": "dnnsoftware/Dnn.Platform", "url": "https://github.com/dnnsoftware/Dnn.Platform/issues/2115", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
211944694
Failed to acquire CRAS Please paste the output of the following command here: sudo edit-chroot -all chronos@localhost / $ sudo edit-chroot -all name: precise encrypted: no Entering /mnt/stateful_partition/crouton/chroots/precise... crouton: version 1-20170228132702~master:21cb695b release: precise architecture: amd64 targets: kde host: version 9000.91.0 (Official Build) stable-channel swanky kernel: Linux localhost 3.10.18 #1 SMP Wed Feb 22 23:32:43 PST 2017 x86_64 x86_64 x86_64 GNU/Linux freon: yes Unmounting /mnt/stateful_partition/crouton/chroots/precise... Please describe your issue: Cannot complete chroot setup due to failing fetching CRAS. If known, describe the steps to reproduce the issue: I was longing to install Ubuntu on my Toshiba CB35, and after a really long list of something, the system started to acquire CRAS. "Connecting to chromium.googlesource.com (chromium.googlesource.com)|2404:6800:4008:c06::52|:443... failed: Network is unreachable." and for unknown reason,the process was terminated and all the effort seemed gone. I typed sudo startkde as a hail Mary. Unlike previous attempts, the system came back with"the chroot may not be fully set up, would you like to finish it?", it liked a ray of hope that broke the darkness. With joy and excitement I typed yes. Yet the system still cannot acquire CRAS persistently. I copied the link and found it's accessible in Chrome, and the file has been successfully downloaded. Anyhow, I tried to boot the system again but this time it came back with "UID 1000 not found in precise",as my last ray of hope faded away. I have established a proxy link in System Setting due to the censorship in China mainland. However it doesn't solve my issue. Thank you for your reading and I hope God be with you. I found a modification that requires a mandatory SSL link in order to ensure proxy involved on Chinese website. I tried with that and it came back with "Proxy tunneling failed: Proxy Authentication Required Unable to establish SSL connection." I have deployed a Shadowsocks on another computer and used as a proxy server link via LAN, it works fine until now. On other website no security issue discovered. That computer using AES-256 to establish link with some server (you guys call it VPS right?) I just missed an option -P for assign a proxy. Sorry for being a noob. Have a nice day guys!
gharchive/issue
2017-03-05T10:43:16
2025-04-01T06:38:24.762061
{ "authors": [ "JeremiahHsu" ], "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/issues/3123", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
448575376
Is there a way to make a crouton chroot use Chrome OS VPN (that is setup by an Android Play Store app)? name: stretch encrypted: no Entering /mnt/stateful_partition/crouton/chroots/stretch... crouton: version 1-20190403182822~master:174af0eb release: stretch architecture: amd64 xmethod: xorg targets: xorg,gnome,xiwi,keyboard,core,extension host: version 12105.42.0 (Official Build) beta-channel cave kernel: Linux localhost 3.18.0-19339-g3e4e496860da #1 SMP PREEMPT Fri May 17 02:20:05 PDT 2019 x86_64 GNU/Linux freon: yes Please describe your issue: I have a VPN running on Chrome OS that is setup by an Android Play Store app, https://play.google.com/store/apps/details?id=com.fast.free.unblock.thunder.vpn&hl=en_US, as an example. The chroot does not use this VPN connection. (A curl request inside the chroot to http://api.ipify.org, shows that it is using the original, non-VPN IP address). If known, describe the steps to reproduce the issue: I have seen https://github.com/dnschneid/crouton/wiki/VPNC. I do not have the 'settings' for the setup/config of vpnc.conf, and ideally would like to be able to use any store VPN app. In the wiki I read: some VPNs requires client features and configurations that are not available. For those networks, it is possible to establish a VPN connection from a chroot that is usable from both within the chroot and Chromium OS. Which made me hopeful when I first started reading it. However, at configuration, it asks for settings, which I do not have access to since it is an android app. If it is not possible, I get it. If it is possible, or there is some way that it could be possible, if anyone could point me in the right direction, or offer any help at all, it would be much appreciated! I have read, and seen the solutions (on Reddit) for using OpenVPN or Private Internet Access VPN (You would have knowledge of actual settings to use to be able to setup). I am wanting/trying to make this work with any Play Store VPN that sets up a VPN in ChromeOS. I was actually kind of surprised when I first checked the IP that it was NOT going through the VPN. I figured that "that" would be a problem people may have been trying to figure out (how to get chroot to NOT go through ChromeOS VPN). Anyways... Thank you. I think this is much more a function of the VPN application creating the tunnel. Some VPN applications seem to configure the tunnel such that everything goes through it...others not...it may also be a function of chromeos version or even hardware. Google searching suggests this is certainly a varying topic. For me anyway, running the TorGuard VPN android app on my samsung pro routes Android, ChromeOS, and crouton traffic through the VPN. also...this might help...mine's been set to "default" for a while now though... within chrome://flags, there's Enable ARC VPN integration. ("Allow Android VPN clients to tunnel Chrome traffic. – Chrome OS")
gharchive/issue
2019-05-26T14:56:36
2025-04-01T06:38:24.768984
{ "authors": [ "brizzbane", "rmartin16" ], "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/issues/4069", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
54043771
Kiwi: Send keycodes instead of keysyms This is a partial fix for #1275 (but still much better than current situation, so it can be merged as-is). The protocol now sends X11 key codes instead of key symbols. Keycodes (mostly) correspond to physical keyboard keys, so that the chroot can translate it, depending on the keyboard layout set in crouton. For that purpose, I had to update protocol version from VF1 to VF2. The extension is backward compatible, though. What works: Setting US keyboard in Chromium OS (or any keyboard layout without dead keys), any keyboard layout in crouton should work fine. Extension can connect to old chroot (VF1), in that case keysyms are sent. What still does not work: Setting a keyboard layout in Chromium OS with dead keys (e.g. US Intl) leads to lost key events (I think it's fixable, as the key code appears when the key is lifted). Basic framework for reverting Search+key mapping is there, if the user wants Search+Left to mean Super+Left and not Home (see #1324). Not implemented yet (we need another configuration check box). Other things to be implemented in other PR: Warn the user that it is connecting to an old chroot. Eventually, drop support for VF1 (this requires extensive rework of error handling in kiwi) What I don't think can be fixed: Switching Ctrl/Search/Alt in Chromium OS also results in switched positions in crouton. Search+numbers appears the same as Search+fn keys (F1-F10). I don't think we can distinguish between the 2, to support Super+numbers (see #1324). (untested on freon, my peppy refuses to switch to dev channel...) This works well on Freon. I'm inclined to merge it as-is, since my comments are only nits. I was having trouble testing this with the current extension. Does the new extension get rolled out automatically? @stsquad : The extension auto-updates, when no chroot is running (version in chrome://extensions should now be 2.1.0). You also need to update your chroot. @drinkcat \o/ with the updated extension I just tested master and I have ~# and `¬ working ;-) \o/ with the updated extension I just tested master and I have ~# and `¬ working ;-) Good to hear! See https://code.google.com/p/chromium/issues/detail?id=425156#c16. Both "What still does not work", and "What I don't think can be fixed" are magically fixed with freon! Now there is still a slight bug as "OSLeft"/Super_L event are sometimes discarded. Happy not to have to implement another horrible hack... Awesome! I think freon will be generally a good thing for crouton. Just out of curiosity, is there any reason to still keep the fix in this commit when using Freon? It seems to be the culprit of #1558. @cribalik can you confirm that #1275 doesn't re-emerge if you revert this? @dnschneid I confirmed it with Swedish, Dvorak US and Spanish keyboards in Chrome; the keys recieved from X11 were the same regardless of the layout in Chrome. Switching around the Alt/Search/Ctrl keys also did not have any impact on X11 keycodes. Search + LeftArrow is received as OSLeft + LeftArrow, and not OSLeft + Home (which is what the fix was for I believe). This seems to be Freon's doing that suddenly the correct keycodes are sent. Do we have to keep backwards compatability with non-Freon users? If not, I propose reverting the aforementioned fix. Many users are still on non-freon systems, so a straight-out revert is not an option (yet). 
We'll need to either have two code paths based on whether Freon is present or not (which probably means having the chroot tell the extension one way or the other), or wait until Freon is in stable on all devices to revert. My device (2012 Samsung ARM / stable channel) isn't on Freon yet, but is very stable with precise, and I wouldn't mind losing the GUI for awhile. I've got a blog platform underneath that is very stable (open ssh, mysql, apache2, php5, myphpadmin, wordpress, etc.). I'm willing to move to trusty at some point, but precise has been extremely stable the past few months on the ARM. I mostly run the chroot with enter-chroot and utilize the server processes.
gharchive/pull-request
2015-01-12T11:32:17
2025-04-01T06:38:24.780100
{ "authors": [ "cribalik", "dnschneid", "drinkcat", "stsquad", "tedm" ], "repo": "dnschneid/crouton", "url": "https://github.com/dnschneid/crouton/pull/1339", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1116780194
🛑 Site Histovec is down In d7c7083, Site Histovec (https://histovec.interieur.gouv.fr/histovec) was down: HTTP code: 0 Response time: 0 ms Resolved: Site Histovec is back up in 121dc68.
gharchive/issue
2022-01-27T22:01:03
2025-04-01T06:38:24.785420
{ "authors": [ "mogador26" ], "repo": "dnum-mi/stats-sites-api", "url": "https://github.com/dnum-mi/stats-sites-api/issues/97", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
660359980
Better error handling With better error handling, I can help people resolve issues easily. Needs to keep aligned with the privacy statement on the README. Added in recent slew of commits → https://github.com/doamatto/southnode/compare/1f4703cf154d...90e23efb57dd
gharchive/issue
2020-07-18T20:22:33
2025-04-01T06:38:24.789156
{ "authors": [ "doamatto" ], "repo": "doamatto/southnode", "url": "https://github.com/doamatto/southnode/issues/1", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
533022084
[feature] Java 11 Builds Any plans to add Java 11 build tests to Travis? Update Travis build file is easy. Making Achilles work for Java 11 is another story: massive update of all maven plugins because of the new module system introduced in Java 9 maybe some internal JDK classes used by Achilles to circumvent some defects of APT are no longer available/have been moved You are welcomed to contribute to this migration I found the Cassandra JIRA that Java 11 would be targetted for 4.0, so I assume changes here would be dependent on that
gharchive/issue
2019-12-05T00:01:29
2025-04-01T06:38:24.791031
{ "authors": [ "cricket007", "doanduyhai" ], "repo": "doanduyhai/Achilles", "url": "https://github.com/doanduyhai/Achilles/issues/366", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
129699481
Build mod_proxy_*.so by default I think it would be useful if mod_proxy and friends were available by default. +1 (I asked for this some time ago in #6, for Apache 2.2)
gharchive/pull-request
2016-01-29T08:20:34
2025-04-01T06:38:24.811364
{ "authors": [ "carletes", "oyvindio" ], "repo": "docker-library/httpd", "url": "https://github.com/docker-library/httpd/pull/13", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }