666342031
unity 2019LTS Build Failed
I use Unity 2019.4.4f1 to run the Unity action and it fails, but with 2019.2.11f it passes. My repo: git@github.com:ZhouJiaZhi/test-secret.git. I also have two other questions: 1. could this support git submodules? 2. my repo uses NuGet to import third-party packages; could this support that? Reference: git@github.com:GlitchEnzo/NuGetForUnity.git

Did you update the license for 2019.4? Could you also please provide HTTPS links instead of an SSH reference?

@webbertakken Below is the HTTPS link, thank you: https://github.com/ZhouJiaZhi/test-secret.git

You will have to update the license. From your logs:
LICENSE SYSTEM [2020727 10:28:18] bbc1cb99063843499779e57fc056d8ad != 9EA140DC-07DC-50A4-8CED-7830F267F477
LICENSE SYSTEM [2020727 10:28:18] bbc1cb99063843499779e57fc056d8ad != C02Z51CCLVDQ

Well, I renewed my license again today but it still failed. May I use the license in your repo? And do you know when your license will expire?

No, kindly use your own license. You should renew your license using the exact same steps you followed before, using the .alf file that's generated by the Unity version that you would like. See https://unity-ci.com/docs/github/activation

OK, I will try it later. Thank you.
gharchive/issue
2020-07-27T14:32:08
2025-04-01T06:40:55.022371
{ "authors": [ "ZhouJiaZhi", "webbertakken" ], "repo": "webbertakken/unity-actions", "url": "https://github.com/webbertakken/unity-actions/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1058265661
WebDataset no longer has a length argument (documentation) In the docs, it's mentioned that "you can specify an explicit size using the length= argument to WebDataset" https://github.com/webdataset/webdataset/blob/2eaa96e6a266ad0ae1a1433e86eb6c2d3b7c50f8/docs/sharding/index.html#L177-L179 This is no longer true, as there is no length= argument to WebDataset. https://github.com/webdataset/webdataset/blob/45724cdfcc0935c63ac6952c5f6b0ada9fa44f37/webdataset/dataset.py#L29-L39 I'm not sure if the documentation is out of date in mentioning a removed parameter, or if the parameter should be there, but got removed accidentally. Thanks for the report. The v1 documentation is a bit out of date. Use webdataset.FakeLength for setting the length in v1. If you want to force a specific epoch length, use .repeat().slice(num_samples) (In v2, you can use .with_length(n)) The __len__ method had to be removed because PyTorch considers having a __len__ method on an IterableDataset to be wrong. Updated the documentation.
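The `.repeat().slice(num_samples)` suggestion bounds an epoch by re-iterating the dataset and cutting the stream off after a fixed count. A minimal stand-in for that pattern in plain Python (no webdataset dependency; these function names are illustrative, not the library's API):

```python
import itertools

def repeat(samples):
    # Endlessly re-iterate a finite sample source, like .repeat().
    while True:
        yield from samples

def epoch(samples, num_samples):
    # Cut the repeated stream to a fixed length, like .slice(num_samples).
    return itertools.islice(repeat(samples), num_samples)

shard = [{"id": i} for i in range(3)]   # a pretend shard with 3 samples
one_epoch = list(epoch(shard, 8))       # force an epoch longer than the shard
print(len(one_epoch))                   # 8: the shard wraps around
```

This also illustrates why an `IterableDataset` cannot carry a truthful `__len__`: the stream's length is whatever you slice it to.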
gharchive/issue
2021-11-19T08:21:04
2025-04-01T06:40:55.128274
{ "authors": [ "tmbdev", "wongjoel" ], "repo": "webdataset/webdataset", "url": "https://github.com/webdataset/webdataset/issues/125", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
370051042
Question: How to manage admin panel with only one email (admin's email)
I want to manage the admin with only one email, so no one else can request a login URL. Best regards.

Oh I see. Maybe I'm doing something wrong, but here are my steps:
1. /admin is available from Chrome and Firefox
2. Logout from Chrome (/admin/logout)
3. Request a login URL for another email (/admin/login)
3.1 It says "Access denied"
4. /admin is still available in Firefox and Chrome

Note: no access tokens are created from the admin panel. Am I missing something? http://178.128.163.248 is my example. Thanks in advance.
gharchive/issue
2018-10-15T08:25:24
2025-04-01T06:40:55.135457
{ "authors": [ "nodetop" ], "repo": "webdevstar/React-Ecommerce", "url": "https://github.com/webdevstar/React-Ecommerce/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
232069863
Cucumber 2
Feedback/questions more than welcome! This uses the official Cucumber 2 now; I fixed tags as well (the syntax has changed between v1 and v2). The remaining problem is that this doesn't automatically load step definitions that are not listed in the require option. I will have to think about it a bit. I think how it worked with v1 was that a step_definitions folder adjacent to a feature file was loaded recursively. What we've been using now is to have one "entry" file that we load in the require section of the wdio.conf.js file, and it loads everything else. But that doesn't seem optimal.

Can't wait for this to get merged.

Cucumber.js already released v3 🙈
gharchive/pull-request
2017-05-29T17:27:25
2025-04-01T06:40:55.144739
{ "authors": [ "christian-bromann", "lgandecki", "pantherqin" ], "repo": "webdriverio/wdio-cucumber-framework", "url": "https://github.com/webdriverio/wdio-cucumber-framework/pull/60", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
124280060
Added code coverage
Added code coverage. Added a unique build number from the git commit numbers, so a shared library will have a unique number.

:+1: after removing old probet references

@yannick-polius and @shwetabhandare please re-review :+1:

@KjellKod make sure that code coverage works if the rpm build and tests are executed as the root user :+1:
gharchive/pull-request
2015-12-29T22:56:24
2025-04-01T06:40:55.203478
{ "authors": [ "KjellKod", "craig-cogdill", "yannick-polius" ], "repo": "weberr13/DeathKnell", "url": "https://github.com/weberr13/DeathKnell/pull/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1027749177
refactor(Client): added logging for response trackingId
https://jira-eng-gpk2.cisco.com/jira/browse/SPARK-264725

For users of the future: 🚀 This was an awesome resource for understanding HttpsURLConnection! Here is a secondary resource.

requestTrackingId used to be trackingId. Those requestTrackingIds were already there; should I leave them, or do you want me to replace them as suggested?

Let's just make sure the logs make sense: when we're saying "request", the request ID is used, and when we say "response", the response ID is used.
gharchive/pull-request
2021-10-15T19:31:58
2025-04-01T06:40:55.211330
{ "authors": [ "emanuallan", "lalli-flores" ], "repo": "webex/webex-java-sdk", "url": "https://github.com/webex/webex-java-sdk/pull/37", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1405910132
Explain how to use docker/build-push-action with deployment-key configs
This PR adds a recipe for using docker/build-push-action with multiple Deploy Keys (#78) to the docs.

You can't access the SSH/git config from within the Dockerfile if it is not inside Docker's build context (which is the path of the checked-out repo by default).
gharchive/pull-request
2022-10-12T09:49:25
2025-04-01T06:40:55.215764
{ "authors": [ "j-riebe" ], "repo": "webfactory/ssh-agent", "url": "https://github.com/webfactory/ssh-agent/pull/133", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2104662054
Handling 401s from GitHub
We want to send an organization alert, not raise an error. The current situation is that, by silencing the 401, we miss the invalid-credentials check. We need to still raise, but catch it higher up.

It seems like a more generic way to handle this situation is warranted; perhaps something in Replicator#backfill. We have had a similar situation with Intercom.
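A sketch of the "still raise, but catch it higher up" shape being discussed, with all names invented for illustration (the real code is Ruby; this only shows the control flow):

```python
# The HTTP layer keeps raising on a 401, so invalid credentials are never
# silently swallowed; the orchestration layer converts that exception into
# an organization alert instead of letting it crash the backfill.
class InvalidCredentials(Exception):
    pass

def fetch_page(status):
    if status == 401:
        raise InvalidCredentials("GitHub returned 401")
    return {"items": []}

def backfill(status, alerts):
    try:
        return fetch_page(status)
    except InvalidCredentials as err:
        alerts.append(str(err))   # alert the organization, keep running
        return None

alerts = []
backfill(401, alerts)
print(alerts)   # ['GitHub returned 401']
```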
gharchive/pull-request
2024-01-29T05:09:48
2025-04-01T06:40:55.350835
{ "authors": [ "rgalanakis" ], "repo": "webhookdb/webhookdb", "url": "https://github.com/webhookdb/webhookdb/pull/856", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
337698771
fix style _removeValue value was not defined here. I think we want to delete the key? Indeed. Thanks!
gharchive/pull-request
2018-07-02T23:45:32
2025-04-01T06:40:55.360565
{ "authors": [ "modulesio", "ngokevin" ], "repo": "webmixedreality/exokit", "url": "https://github.com/webmixedreality/exokit/pull/149", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
394465968
Add monospace fonts On non-desktop systems (Magic Leap/Android), fonts availability is controlled via fonts.xml. Right now we just use the single system font, but we can theoretically support any font we can get a file for. One notable missing font is a monospace one, which is required for in-XR devtools rendering. This PR adds Inconsolata as the default monospace font for non-desktop Exokit. Tested and works in <canvas>. There is an unrelated bug in CanvasRenderingContext2D.measureText -- it seems to use metrics of the old font that was set.
gharchive/pull-request
2018-12-27T18:57:44
2025-04-01T06:40:55.362540
{ "authors": [ "modulesio" ], "repo": "webmixedreality/exokit", "url": "https://github.com/webmixedreality/exokit/pull/672", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
300515410
Inconsistency between system scalar type serialize/parseValue methods
Hi, after upgrading my project from 0.10 to 0.11 I get some weird behavior with system scalar types when using inputs with variables. I don't really understand why there is a difference between the serialize and parseValue methods. Here is an example: IntType::parseValue no longer accepts a string, but IntType::serialize does:

(new IntType())->serialize('9') !== (new IntType())->parseValue('9'); // 9 !== null

In the JS implementation both methods have the same behavior, so why does the PHP one differ? I override the system scalar types to keep the 0.10 behavior right now, but this is not the best solution... I'm just trying to understand the reason why...

How I override the system scalar types without using reflection:

// Override scalar types
$overrideScalarTypes = \Closure::bind(function () {
    self::$internalTypes = [
        self::ID => new MyType\IDType(),
        self::STRING => new MyType\StringType(),
        self::FLOAT => new MyType\FloatType(),
        self::INT => new MyType\IntType(),
        self::BOOLEAN => new MyType\BooleanType()
    ];
}, null, \GraphQL\Type\Definition\Type::class);
$overrideScalarTypes();

This can maybe help some people encountering the same issue.

As far as I remember it was done in #170. It was actually a valid issue (at least for strings and booleans). A related issue in graphql-js: https://github.com/graphql/graphql-js/issues/771 (yet they seem to fix only the string case, not boolean or int). So do you need to accept strings as integer input? Sorry for the long delay with a response; I suggest sending a PR after I merge #248 (hopefully this weekend).

This problem is fixed in #248, as the serialize and parseValue methods have been aligned.

Nice! Thank you @danez :+1:

@mcg-web Can you check if the new version works for you and close if it is OK?

It seems to be OK! Thank you @vladar and @danez :+1:
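For context on why such a split can be intentional: in the GraphQL spec, output coercion (serialize) handles values your own resolvers return and may be lenient, while input coercion (parseValue, used for variables) must be strict. A rough illustration of the distinction, in Python rather than graphql-php's actual code:

```python
# Illustrative only: shows the lenient-output / strict-input split, not the
# library's real implementation.
def serialize_int(value):
    # Lenient output coercion: accept numeric strings from resolvers.
    return int(value)

def parse_value_int(value):
    # Strict input coercion: variables must already be integers.
    if isinstance(value, bool) or not isinstance(value, int):
        raise ValueError("Int cannot represent non-integer value: %r" % value)
    return value

print(serialize_int("9"))   # 9
print(parse_value_int(9))   # 9
# parse_value_int("9") raises, matching the behavior reported above.
```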
gharchive/issue
2018-02-27T07:04:40
2025-04-01T06:40:55.367869
{ "authors": [ "danez", "mcg-web", "vladar" ], "repo": "webonyx/graphql-php", "url": "https://github.com/webonyx/graphql-php/issues/254", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
208442215
Add "use strict" to every file included
I'm using script-loader (https://github.com/webpack-contrib/script-loader) to import some old libs. What I noticed is that the "use strict" directive is prepended to the imported script, but I don't want it. Is there a way to disable this behaviour?

@mattiaocchiuto Did you use babel-loader or ES6's import? See this.

script-loader doesn't add anything here; see @wuxiandiejia's comment, please :)
gharchive/issue
2017-02-17T13:30:20
2025-04-01T06:40:55.379842
{ "authors": [ "mattiaocchiuto", "michael-ciniawsky", "wuxiandiejia" ], "repo": "webpack-contrib/script-loader", "url": "https://github.com/webpack-contrib/script-loader/issues/25", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
283793376
refactor: Webpack 4.x compatibility
Compatibility issues [ ] N/A

Commit message body:
- Updates code style to Prettier
- Updates engines
- Updates peerDependencies
- Enforces commit message formatting
- Migrates to CircleCI 2.0
- Removes deprecated React JSX support

BREAKING CHANGE: Drops support for NodeJS 4.x
BREAKING CHANGE: Drops support for Webpack 2.x
BREAKING CHANGE: `loaderUtils.getOptions()` is not backwards compatible
BREAKING CHANGE: JSX support was deprecated in `v0.4.0`. Please use the [svg-inline-react](https://www.npmjs.com/package/svg-inline-react) package.

@mernen @d3viant0ne @michael-ciniawsky Could I get a status update on this?

@bebraw @d3viant0ne @SpaceK33z @TheLarkInn Hi all, I'm not trying to be a pest, but is there anything I can do to help you get this released? Our team would really appreciate it. Thanks!

@d3viant0ne Did you abandon this change/project?

I migrated to https://www.npmjs.com/package/svg-url-loader and it works fine for me, in case anyone else is interested. I removed all dependencies on this lib.

@msphn Thanks for the tip. Unfortunately, this loader has a lot of useful features that aren't included in that other loader (classPrefix is really important for me). Fortunately, this still works with Webpack 4 (for me).

Can I help you get this PR into production?

Is there any chance this could get pulled at some point? Is there any issue in the code that prevents that? An update would be very welcome in our team.

It is sad that this PR just sits here. Any chance it can be integrated in a new release?

Is there someone over at the core Webpack team that can add someone else as a maintainer of this repo? Or maybe transfer ownership if it isn't going to be maintained with the rest of Webpack? @sokra @jhnns @TheLarkInn @spacek33z ⬆️ @webpack @webpack-contrib

@bebraw, @TheLarkInn, @spacek33z: anyone know what happened to @d3viant0ne? His last contribution was in February.

@will-russell He (amongst some other developers, me included) isn't active in the project anymore.

@bebraw Thank you for the reply! Who was left in charge of the project; do they know this is hanging out there? RE: @TheLarkInn @spacek33z

@will-russell I think it's up to the current contrib team to maintain. Someone should pick up the PR and merge. CC @evilebottnawi.

In todo

Is there any news on this issue? @TheLarkInn @SpaceK33z 🦗🦗🦗

Also waiting for this
gharchive/pull-request
2017-12-21T07:19:50
2025-04-01T06:40:55.388086
{ "authors": [ "SuneRadich", "aramin", "bebraw", "d3viant0ne", "dazlious", "ddx32", "elliottregan", "evilebottnawi", "liorgreenb", "msphn", "randak", "will-russell" ], "repo": "webpack-contrib/svg-inline-loader", "url": "https://github.com/webpack-contrib/svg-inline-loader/pull/75", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
197461674
Adding build time in build stats
Webpack outputs the build time when building, e.g. Time: 4716ms in the following output:

Hash: 374e0bb6b1cd7f3b7bb7
Version: webpack 1.14.0
Time: 4716ms
Asset Size Chunks Chunk Names
bundle.js 5 kB 0 [emitted] main
+ 5 hidden modules

Build time is very useful data, and I couldn't find a way to add it to the grunt-webpack stats. Is it possible to add build time to the grunt-webpack stats?

Oh, actually I just saw a timings item in the stats part of the options. I can't test it right now, but it probably works. Feel free to close my issue then :) But maybe adding it to the example configuration file would be a good idea?
gharchive/issue
2016-12-24T12:43:24
2025-04-01T06:40:55.390082
{ "authors": [ "yannicklerestif" ], "repo": "webpack/grunt-webpack", "url": "https://github.com/webpack/grunt-webpack/issues/107", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
643441265
docs(concepts): fix grammar Just a small fix. Thanks!
gharchive/pull-request
2020-06-23T00:28:39
2025-04-01T06:40:55.399052
{ "authors": [ "EugeneHlushko", "chenxsan" ], "repo": "webpack/webpack.js.org", "url": "https://github.com/webpack/webpack.js.org/pull/3796", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
755718014
Feature/introduce api versioning badge
Introducing a Badge component for API versioning. E.g.: Cherry-picked from https://github.com/webpack/webpack.js.org/pull/4217. Usage: just use <Badge text='' /> in any .mdx files; no need to import it, as it's registered as a global component.

Nice one, what about adding a link to the release notes?

@montogeek Once I wanted to add it, but it would be too messy, so I avoided it. Feel free to add it if you find any nice way of handling it.

@chenxsan What is your idea? I would create a map/dictionary: { '5.8': 'https://releaseurl' }

Looks good.
gharchive/pull-request
2020-12-03T00:40:16
2025-04-01T06:40:55.404725
{ "authors": [ "chenxsan", "montogeek" ], "repo": "webpack/webpack.js.org", "url": "https://github.com/webpack/webpack.js.org/pull/4230", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1094911546
asset/source option to change line endings from CRLF to LF
Feature request

When using type: 'asset/source', I get a string which contains \r\n. I want the string to be consistent on all platforms, regardless of whether the file on the file system was CRLF or not. The feature request is to add a way of changing all \r\n to \n as part of the imported string.

What is the expected behavior? The imported string should have LF line endings.

What is the motivation or use case for adding/changing the behavior? I am using the imported string in useState in React. The \r\n is causing an unnecessary re-render, because the code changes it to \n and my useEffects get called twice with basically the same string (but with different line endings).

How should this be implemented in your opinion?
Breaking change: always replace CRLF with LF.
Option: maybe an option to process the text beforehand, or an option to set the line ending.

Are you willing to work on this yourself? Maybe, depending on how many changes it needs.

Sorry, this is out of scope for webpack; you can write a plugin.

@alexander-akait Would the plugin process the file before asset/source happens?

You can also create a loader (it will be even easier) and replace the newlines using a RegExp; it is not safe to move this logic into core.
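The loader route suggested above amounts to rewriting the file's line endings before asset/source captures the string. The transformation itself is trivial; here it is sketched in Python for clarity (a real webpack loader would do the same replace in JavaScript on the source it receives):

```python
def normalize_newlines(text):
    # Convert Windows (CRLF) and old-Mac (CR) endings to LF so the imported
    # string is identical on every platform.
    return text.replace("\r\n", "\n").replace("\r", "\n")

source = "line one\r\nline two\r\nline three"
normalized = normalize_newlines(source)
print("\r" in normalized)   # False
```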
gharchive/issue
2022-01-06T02:39:45
2025-04-01T06:40:55.409437
{ "authors": [ "ChocolateLoverRaj", "alexander-akait" ], "repo": "webpack/webpack", "url": "https://github.com/webpack/webpack/issues/15113", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2682759600
[Bug or Feature?] Top-Level Await Dynamic Import Incompatibility with ProvidePlugin Causing Undefined Variables
Bug report

What is the current behavior? All variables exported from an ESM that contains a top-level awaited dynamic import are undefined.

If the current behavior is a bug, please provide the steps to reproduce.

if(isElectron){
    var { IPC } = await import('../../ENV/electron')
}
export const something = () => {
    IPC?.on('json',() => {...})
}

This case demonstrates that some logic depends on a dynamic module, and something is provided by ProvidePlugin, but it's undefined at runtime. This issue can be resolved by replacing await import() with require(), but this makes the ESM code impure.

What is the expected behavior? Expect ProvidePlugin to work correctly when a variable is exported from an ESM that contains a top-level awaited dynamic import.

Other relevant information: webpack version: ^5.95.0 Node.js version: 23.1.0 Operating System: win11 Additional tools:

Just try to use sync require().

I know, but as I said: "This issue can be resolved by replacing await import() with require(), but this makes the ESM code impure."

Unfortunately I can't help without a real example; it should work and we have tests on such cases. I'll temporarily move it to discussions; as soon as you provide an example we will try to understand why it doesn't work in your case.
gharchive/issue
2024-11-22T10:40:17
2025-04-01T06:40:55.414897
{ "authors": [ "Callme-VR", "Kane-Kuroneko", "alexander-akait" ], "repo": "webpack/webpack", "url": "https://github.com/webpack/webpack/issues/19000", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
133712817
webpack 2: ordering problem with mixed import and export * from 'x' using v2.0.7-beta, I see Uncaught TypeError: Cannot redefine property: b on load. Seems to be using the names from utility, combinatorics, and math for all the re-exports, instead of using each of them. Note that peekable is exported next, and imported from later. When I re-order the peekable import before the exports (i.e. just below lodash), the error goes away. Note: ./utility exports isIterable, isEmpty, size, and iterator. ./combinatorics exports combinations and product. ./math exports inclusive. These 7 names are re-exported for each of the export * generated lines. Error is thrown on line 6 at page load. Source: import { property } from 'lodash' export * from './utility' export * from './combinatorics' export * from './math' export * from './peekable' export * from './repeatable' import { isIterable, iterator } from './utility' import { peekable } from './peekable' Generated: (with a few line breaks added) /* harmony import */ var __WEBPACK_IMPORTED_MODULE_0_lodash__ = __webpack_require__(1); /* harmony import */ var __WEBPACK_IMPORTED_MODULE_0_lodash___default = __WEBPACK_IMPORTED_MODULE_0_lodash__ && __WEBPACK_IMPORTED_MODULE_0_lodash__.__esModule ? 
function() { return __WEBPACK_IMPORTED_MODULE_0_lodash__['default'] } : function() { return __WEBPACK_IMPORTED_MODULE_0_lodash__; } /* harmony import */ Object.defineProperty(__WEBPACK_IMPORTED_MODULE_0_lodash___default, 'a', { get: __WEBPACK_IMPORTED_MODULE_0_lodash___default }); /* harmony import */ var __WEBPACK_IMPORTED_MODULE_1__utility__ = __webpack_require__(271); /* harmony namespace reexport */ [["b","size"],["f","isEmpty"],["i","isIterable"],["j","iterator"],["l","combinations"],["s","product"],["t","inclusive"]].forEach(function(i) { Object.defineProperty(exports, i[0], {configurable: false, enumerable: true, get: function() { return __WEBPACK_IMPORTED_MODULE_1__utility__[i[1]]; }});}); /* harmony namespace reexport */ [["b","size"],["f","isEmpty"],["i","isIterable"],["j","iterator"],["l","combinations"],["s","product"],["t","inclusive"]].forEach(function(i) { Object.defineProperty(exports, i[0], {configurable: false, enumerable: true, get: function() { return __WEBPACK_IMPORTED_MODULE_2__combinatorics__[i[1]]; }});}); /* harmony import */ var __WEBPACK_IMPORTED_MODULE_2__combinatorics__ = __webpack_require__(998); /* harmony namespace reexport */ [["b","size"],["f","isEmpty"],["i","isIterable"],["j","iterator"],["l","combinations"],["s","product"],["t","inclusive"]].forEach(function(i) { Object.defineProperty(exports, i[0], {configurable: false, enumerable: true, get: function() { return __WEBPACK_IMPORTED_MODULE_3__math__[i[1]]; }});});/* harmony import */ var __WEBPACK_IMPORTED_MODULE_3__math__ = __webpack_require__(999); /* harmony import */ var __WEBPACK_IMPORTED_MODULE_4__peekable__ = __webpack_require__(344); /* harmony namespace reexport */ [["b","size"],["f","isEmpty"],["i","isIterable"],["j","iterator"],["l","combinations"],["s","product"],["t","inclusive"]].forEach(function(i) { Object.defineProperty(exports, i[0], {configurable: false, enumerable: true, get: function() { return __WEBPACK_IMPORTED_MODULE_4__peekable__[i[1]]; }});});/* harmony 
import */ var __WEBPACK_IMPORTED_MODULE_5__repeatable__ = __webpack_require__(595); /* harmony namespace reexport */ [["b","size"],["f","isEmpty"],["i","isIterable"],["j","iterator"],["l","combinations"],["s","product"],["t","inclusive"]].forEach(function(i) { Object.defineProperty(exports, i[0], {configurable: false, enumerable: true, get: function() { return __WEBPACK_IMPORTED_MODULE_5__repeatable__[i[1]]; }});});/* harmony export */ exports["r"] = reverse;/* harmony export */ exports["q"] = enumerate;/* harmony export */ exports["g"] = some;/* harmony export */ exports["o"] = every;/* harmony export */ exports["c"] = smaller;/* harmony export */ exports["d"] = larger;/* harmony export */ exports["a"] = first;/* harmony export */ exports["p"] = last;/* unused harmony export reduce *//* unused harmony export ifilter *//* harmony export */ exports["n"] = imap;/* harmony export */ exports["u"] = flatten;/* harmony export */ exports["m"] = chain;/* harmony export */ exports["k"] = pluck;/* unused harmony export initial *//* unused harmony export partition *//* unused harmony export slice */var _marked = [reverse, enumerate, ifilter, imap, flatten, initial, slice].map(regeneratorRuntime.mark); Seems to work correctly if I re-order the peekable import like so: import { property } from 'lodash' import { peekable } from './peekable' export * from './utility' export * from './combinatorics' export * from './math' export * from './peekable' export * from './repeatable' import { isIterable, iterator } from './utility' Note: not sure if it's an ordering problem or not; could be a side effect of something else. :+1: :smile:
gharchive/issue
2016-02-15T13:27:36
2025-04-01T06:40:55.421566
{ "authors": [ "benmosher" ], "repo": "webpack/webpack", "url": "https://github.com/webpack/webpack/issues/2050", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
346524170
HMR support to multi entries
What kind of change does this PR introduce? ref #7829
Did you add tests for your changes? not yet
Does this PR introduce a breaking change? no
What needs to be documented once your changes are merged? no

Should work with webpack@latest. Feel free to report a new issue with a reproducible repo.
gharchive/pull-request
2018-08-01T09:40:00
2025-04-01T06:40:55.424280
{ "authors": [ "kamijin-fanta", "vankop" ], "repo": "webpack/webpack", "url": "https://github.com/webpack/webpack/pull/7832", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1799667119
ESLint plugin doesn't recognize typescript-eslint's stylistic(-type-checked) or strict-type-checked rulesets Coming over from https://github.com/JoshuaKGoldberg/template-typescript-node-package/pull/601, my ESLint config contains these plugins (among others) under extends: "plugin:@typescript-eslint/strict", "plugin:@typescript-eslint/stylistic", "plugin:@typescript-eslint/strict-type-checked", "plugin:@typescript-eslint/stylistic-type-checked", ...and Knip only recognizes strict from them: $ pnpm run lint:knip > template-typescript-node-package@1.28.40 lint:knip /Users/josh/repos/template-typescript-node-package > knip Unlisted dependencies (3) @typescript-eslint/eslint-plugin-strict-type-checked .eslintrc.cjs @typescript-eslint/eslint-plugin-stylistic .eslintrc.cjs @typescript-eslint/eslint-plugin-stylistic-type-checked .eslintrc.cjs  ELIFECYCLE  Command failed with exit code 3. Fun fact - if you add stylistic.*|strict.* to the regex in https://github.com/webpro/knip/blob/d3e6f01f6bf2bd437e774c1d615c3c2bc5c56032/src/plugins/eslint/helpers.ts#L77, it works right. Would you be up for this as a bandaid fix pending the big TODO comment in that file? https://github.com/webpro/knip/blob/d3e6f01f6bf2bd437e774c1d615c3c2bc5c56032/src/plugins/eslint/helpers.ts#L69-L71 :rocket: This issue has been resolved in v2.15.2. See Release 2.15.2 for release notes. Thanks for reporting this, and thanks for the work in @typescript-eslint/* v6! :rocket: This issue has been resolved in v2.15.3. See Release 2.15.3 for release notes.
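The gist of the fix is that these preset names resolve to rules bundled inside @typescript-eslint/eslint-plugin itself, so Knip must not guess a separate package from them. A toy model of that resolution in Python (the real helper is TypeScript, and every name below is illustrative, not Knip's actual code):

```python
import re

PREFIX = "plugin:@typescript-eslint/"

# Presets shipped with the main plugin package; the broadened pattern mirrors
# the `stylistic.*|strict.*` bandaid suggested above.
BUNDLED = re.compile(r"^(recommended|strict|stylistic)(-type-checked)?$")

def resolve_package(extend):
    name = extend[len(PREFIX):] if extend.startswith(PREFIX) else extend
    if BUNDLED.match(name):
        return "@typescript-eslint/eslint-plugin"        # no extra dependency
    return "@typescript-eslint/eslint-plugin-" + name    # naive guess (the bug)

print(resolve_package("plugin:@typescript-eslint/stylistic-type-checked"))
# @typescript-eslint/eslint-plugin
```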
gharchive/issue
2023-07-11T19:58:56
2025-04-01T06:40:55.430623
{ "authors": [ "JoshuaKGoldberg", "webpro" ], "repo": "webpro/knip", "url": "https://github.com/webpro/knip/issues/154", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
2091776834
Add Netlify plugin
Adds a Netlify plugin! With the addition of this PR we can also now support the .prettierrc.toml extension: https://github.com/webpro/knip/blob/main/packages/knip/src/plugins/prettier/index.ts#L18.

Yes, this is great! Thanks for merging!

:rocket: This pull request is included in v4.2.0. See Release 4.2.0 for release notes.

Thank you! Very good idea, great execution 🙏 I made minor changes to your work after a major refactoring today, but nothing essential.
gharchive/pull-request
2024-01-20T00:58:15
2025-04-01T06:40:55.434124
{ "authors": [ "uncenter", "webpro" ], "repo": "webpro/knip", "url": "https://github.com/webpro/knip/pull/466", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
1640758695
small typo asserts -> assets I also found a dead link. The new link may not be correct, please check. Thanks.
gharchive/pull-request
2023-03-26T03:44:24
2025-04-01T06:40:55.439561
{ "authors": [ "sameastburn" ], "repo": "webprogramming260/.github", "url": "https://github.com/webprogramming260/.github/pull/38", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
117224986
www.appr.tc, apprtc.net, should redirect to appr.tc
Otherwise, they will have different permissions, cookies, etc.

@samdutton This is purely a code change, right (no domain owner etc. needed)? I assume here: https://github.com/webrtc/apprtc/blob/master/src/app_engine/apprtc.py#L535. If so, I could take a look in case you do not have cycles to spare.

I think we probably want to do this via DNS (and remove that apprtc.net hack), but if for some reason we can't do that, we could do it as you suggest. @samdutton, can you try the DNS changes first?

I tried DNS changes yesterday; apprtc.net seems broken now. Am chasing with the domain host.

...and for www.appr.tc, I also tried DNS changes and came up against problems. Will try again.

OK. Maybe we should just have a table in the code and redirect accordingly. We'll need to do this for apprtc.appspot.com anyway. e.g.
REDIRECT_DOMAINS = ['apprtc.appspot.com', 'apprtc.net', 'www.apprtc.net', 'apprtc.webrtc.org', 'www.appr.tc']

I've basically just copied your suggestion, I've not actually tried it yet ;).

Long story short (a lot of support chats with the domain hosting company...) I have finally, apparently, managed to get www.apprtc.net and apprtc.net redirecting to https://appr.tc. Please let me know if there are any problems with this.

Awesome! Any chance of adding apprtc.appspot.com to that list? ;) Or do we have to use the redirect hack for that?

I think apprtc.appspot.com may have to go in the Python...
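The table-based fallback sketched in the thread reduces to a pure host lookup. The domain list below is copied from the suggestion; the function and parameter names are illustrative, not apprtc.py's actual handler API:

```python
REDIRECT_DOMAINS = [
    'apprtc.appspot.com', 'apprtc.net', 'www.apprtc.net',
    'apprtc.webrtc.org', 'www.appr.tc',
]
CANONICAL = 'appr.tc'

def redirect_target(host, path='/'):
    # Return the canonical URL to 301 to, or None if already canonical.
    host = host.lower().split(':')[0]   # strip any port
    if host in REDIRECT_DOMAINS:
        return 'https://%s%s' % (CANONICAL, path)
    return None

print(redirect_target('www.appr.tc', '/r/myroom'))   # https://appr.tc/r/myroom
print(redirect_target('appr.tc'))                     # None
```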
gharchive/issue
2015-11-16T21:44:27
2025-04-01T06:40:55.469957
{ "authors": [ "KaptenJansson", "juberti", "samdutton" ], "repo": "webrtc/apprtc", "url": "https://github.com/webrtc/apprtc/issues/221", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1888398575
🛑 DragonBoat Times is down In 16d1665, DragonBoat Times (https://www.dragonboattimes.com) was down: HTTP code: 521 Response time: 170 ms Resolved: DragonBoat Times is back up in 85f5a04 after 8 minutes.
gharchive/issue
2023-09-08T21:50:54
2025-04-01T06:40:55.538755
{ "authors": [ "webworldview" ], "repo": "webworldview/uptime", "url": "https://github.com/webworldview/uptime/issues/3272", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1364668623
FileNotFoundException: Could not find file '...\minigame\wasmcode\c2f2148f99ca3e03.webgl.wasm.code.unityweb.wasm.br'. With Unity 2021.3.5f1c1, using the latest version of the conversion plugin, "导出WEBGL并转换为小游戏(常用)" (Export WebGL and Convert to Minigame (Common)) reports an error: Usage: .../Assets\WX-WASM-SDK/Editor/Brotli/win_x86_64/brotli.exe [--force] [--quality n] [--decompress] [--input filename] [--output filename] [--repeat iters] [--comment commment] [--verbose] [--window n] FileNotFoundException: Could not find file '...\minigame\wasmcode\c2f2148f99ca3e03.webgl.wasm.code.unityweb.wasm.br'. System.IO.FileSystem.CopyFile (System.String sourceFullPath, System.String destFullPath, System.Boolean overwrite) (at :0) System.IO.File.Copy (System.String sourceFileName, System.String destFileName, System.Boolean overwrite) (at :0) System.IO.File.Copy (System.String sourceFileName, System.String destFileName) (at :0) WeChatWASM.WXEditorWindow.Brotlib (System.String filePath) (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:1011) WeChatWASM.WXEditorWindow.GenerateBinFile (System.Boolean isFromConvert) (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:648) WeChatWASM.WXEditorWindow.DoExport (System.Boolean buildWebGL) (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:1339) WeChatWASM.WXEditorWindow.OnGUI () (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:1256) UnityEditor.HostView.InvokeOnGUI (UnityEngine.Rect onGUIPosition) (at :0) UnityEditor.DockArea.DrawView (UnityEngine.Rect dockAreaRect) (at :0) UnityEditor.DockArea.OldOnGUI () (at :0) UnityEngine.UIElements.IMGUIContainer.DoOnGUI (UnityEngine.Event evt, UnityEngine.Matrix4x4 parentTransform, UnityEngine.Rect clippingRect, System.Boolean isComputingLayout, UnityEngine.Rect layoutSize, System.Action onGUIHandler, System.Boolean canAffectFocus) (at :0) UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent (UnityEngine.Event e, UnityEngine.Matrix4x4 worldTransform, UnityEngine.Rect clippingRect, System.Action onGUIHandler, System.Boolean canAffectFocus) (at :0) UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent
(UnityEngine.Event e, System.Action onGUIHandler, System.Boolean canAffectFocus) (at :0) UnityEngine.UIElements.IMGUIContainer.HandleIMGUIEvent (UnityEngine.Event e, System.Boolean canAffectFocus) (at :0) UnityEngine.UIElements.IMGUIContainer.SendEventToIMGUIRaw (UnityEngine.UIElements.EventBase evt, System.Boolean canAffectFocus, System.Boolean verifyBounds) (at :0) UnityEngine.UIElements.IMGUIContainer.SendEventToIMGUI (UnityEngine.UIElements.EventBase evt, System.Boolean canAffectFocus, System.Boolean verifyBounds) (at :0) UnityEngine.UIElements.IMGUIContainer.HandleEvent (UnityEngine.UIElements.EventBase evt) (at :0) UnityEngine.UIElements.CallbackEventHandler.HandleEventAtTargetPhase (UnityEngine.UIElements.EventBase evt) (at :0) UnityEngine.UIElements.MouseCaptureDispatchingStrategy.DispatchEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at :0) UnityEngine.UIElements.EventDispatcher.ApplyDispatchingStrategies (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel, System.Boolean imguiEventIsInitiallyUsed) (at :0) UnityEngine.UIElements.EventDispatcher.ProcessEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at :0) UnityEngine.UIElements.EventDispatcher.ProcessEventQueue () (at :0) UnityEngine.UIElements.EventDispatcher.OpenGate () (at :0) UnityEngine.UIElements.EventDispatcherGate.Dispose () (at :0) UnityEngine.UIElements.EventDispatcher.ProcessEvent (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel) (at :0) UnityEngine.UIElements.EventDispatcher.Dispatch (UnityEngine.UIElements.EventBase evt, UnityEngine.UIElements.IPanel panel, UnityEngine.UIElements.DispatchMode dispatchMode) (at :0) UnityEngine.UIElements.BaseVisualElementPanel.SendEvent (UnityEngine.UIElements.EventBase e, UnityEngine.UIElements.DispatchMode dispatchMode) (at :0) UnityEngine.UIElements.UIElementsUtility.DoDispatch (UnityEngine.UIElements.BaseVisualElementPanel panel) (at :0) 
UnityEngine.UIElements.UIElementsUtility.UnityEngine.UIElements.IUIElementsUtility.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr, System.Boolean& eventHandled) (at :0) UnityEngine.UIElements.UIEventRegistration.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr) (at :0) UnityEngine.UIElements.UIEventRegistration+<>c.<.cctor>b__1_2 (System.Int32 i, System.IntPtr ptr) (at :0) UnityEngine.GUIUtility.ProcessEvent (System.Int32 instanceID, System.IntPtr nativeEventPtr, System.Boolean& result) (at <79f3a9d75afc454f9a46d7c9960e4a65>:0) same here Does your project path contain any spaces? It did indeed; after removing the spaces, it now reports the following error: Win32Exception: ApplicationName='node', CommandLine='--experimental-modules dump_wasm_symbol.mjs <project path>/out', CurrentDirectory='Assets/WX-WASM-SDK/Editor/Node', Native error= The system cannot find the file specified. System.Diagnostics.Process.StartWithCreateProcess (System.Diagnostics.ProcessStartInfo startInfo) (at :0) System.Diagnostics.Process.Start () (at :0) (wrapper remoting-invoke-with-check) System.Diagnostics.Process.Start() System.Diagnostics.Process.Start (System.Diagnostics.ProcessStartInfo startInfo) (at :0) WeChatWASM.UnityUtil.CreateCmdProcess (System.String cmd, System.String args, System.String workdir) (at Assets/WX-WASM-SDK/Editor/UnityUtil.cs:286) WeChatWASM.UnityUtil.RunCmd (System.String cmd, System.String args, System.String workdir, System.Action`3[T1,T2,T3] progressUpdate) (at Assets/WX-WASM-SDK/Editor/UnityUtil.cs:234) WeChatWASM.WXEditorWindow.DoExport (System.Boolean buildWebGL) (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:1358) WeChatWASM.WXEditorWindow.OnGUI () (at Assets/WX-WASM-SDK/Editor/WXEditorWindow.cs:1256) UnityEditor.HostView.InvokeOnGUI (UnityEngine.Rect onGUIPosition) (at :0) UnityEditor.DockArea.DrawView (UnityEngine.Rect dockAreaRect) (at :0) I traced it to here: // For Unity 2021, the symbols file Unity generates itself has a bug, so a tool is used here to extract the embedded function names #if UNITY_2021_2_OR_NEWER var path = "Assets/WX-WASM-SDK/Editor/Node"; var nodePath = "node"; #if UNITY_EDITOR_OSX nodePath = "/usr/local/bin/node";
#endif WeChatWASM.UnityUtil.RunCmd(nodePath, string.Format($"--experimental-modules dump_wasm_symbol.mjs {dst}"), path); UnityEngine.Debug.LogError($"Unity 2021版本使用Embeded Symbols, 代码包中含有函数名体积较大, 发布前<a href="https://github.com/wechat-miniprogram/minigame-unity-webgl-transform/blob/main/Design/WasmSplit.md\">使用代码分包工具</a>进行优化"); #endif Please install the latest Node. The symbols file generated by Unity 2021 itself is broken, so we developed a tool to generate it, and that tool depends on Node.
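The two failure modes in this thread, a project path containing spaces and node missing from PATH, can both be caught before export. A minimal sketch of such a preflight check (a hypothetical helper, not part of the WX-WASM-SDK):

```python
import shutil


def preflight_errors(project_path, node_cmd='node'):
    """Return a list of problems that would break the minigame export."""
    errors = []
    if ' ' in project_path:
        errors.append('project path contains spaces: %r' % project_path)
    if shutil.which(node_cmd) is None:
        errors.append('%r not found on PATH; install the latest Node.js' % node_cmd)
    return errors
```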
gharchive/issue
2022-09-07T13:29:24
2025-04-01T06:40:55.565609
{ "authors": [ "Jax0rz", "Oooocean", "sirius2015" ], "repo": "wechat-miniprogram/minigame-unity-webgl-transform", "url": "https://github.com/wechat-miniprogram/minigame-unity-webgl-transform/issues/146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2300822085
ReferenceError: UrlLink is not defined Then I tried importing it like this, which also failed: import {UrlLink} from 'wechaty' const { UrlLink } = bot
gharchive/issue
2024-05-16T16:16:10
2025-04-01T06:40:55.571199
{ "authors": [ "aqpmzngldh", "wang2200" ], "repo": "wechaty/puppet-padlocal", "url": "https://github.com/wechaty/puppet-padlocal/issues/303", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1126705957
v1.11.57 upgrade "@juzi.bot/whatsapp-web.js": "1.15.10" Please describe what this branch is for. duplicated with https://github.com/wechaty/puppet-whatsapp/pull/145
gharchive/pull-request
2022-02-08T02:08:24
2025-04-01T06:40:55.572596
{ "authors": [ "bung87", "su-chang" ], "repo": "wechaty/puppet-whatsapp", "url": "https://github.com/wechaty/puppet-whatsapp/pull/133", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2036915260
change: default http port to 8080 Changed the default startup port to 8080. OK
gharchive/pull-request
2023-12-12T03:44:18
2025-04-01T06:40:55.630202
{ "authors": [ "axibx", "jamesbee" ], "repo": "weimob-tech/go-project-boot", "url": "https://github.com/weimob-tech/go-project-boot/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
181129782
Consider adding functional tests to this project Functional tests are an integral ingredient of high-quality, maintainable commands. WP-CLI tries to make it as easy as possible to add functional tests to your package with its wp scaffold package-tests command: https://github.com/wp-cli/scaffold-package-command#wp-scaffold-package-tests I'd encourage you to consider adding functional tests to your package :) By starting your functional tests early on, it also makes it much easier to maintain your project over time. We love tests and I'll consider learning a bit of Behat as soon as I can. @danielbachhuber have you ever seen these warnings: 0.22s$ composer validate --strict You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug ./composer.json is valid, but with a few warnings See https://getcomposer.org/doc/04-schema.md for details on the schema require.wp-cli/wp-cli : unbound version constraints (>=0.23.0) should be avoided The command "composer validate --strict" failed and exited with 1 during . Your build has been stopped. ?? Yep, see history of https://github.com/wp-cli/scaffold-package-command/pull/56 I have one last question. I've gitignored the following: composer.lock, composer.phar, installer, just because everything seems to work without them. But I can't find what the best practice is, given that the scaffolded .gitignore was not ignoring them. Where could I find documentation about what to check into the repo, and why?
composer.phar and installer aren't necessary in the installation step because composer is pre-installed on Travis, CircleCI, and other CI systems. I've created an issue to remove it https://github.com/wp-cli/scaffold-package-command/issues/59 Thanks for the help, and for all the fish :)
gharchive/issue
2016-10-05T11:38:10
2025-04-01T06:40:55.649685
{ "authors": [ "danielbachhuber", "pioneerskies" ], "repo": "welaika/wp-cli-db2utf8", "url": "https://github.com/welaika/wp-cli-db2utf8/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
283422047
Refactor fill function (Ok) Sorry for the last pull request; Travis CI wasn't working, and neither was I :). Anyway, the new fill function should be marginally quicker and arguably more maintainable. Don't hesitate if you have any remarks or questions about the change. I'm glad you like it Thanks for submitting this PR! This is actually the first PR to be merged into the library. Expect to see a link to your GitHub profile on the special thanks page when I wrap up the 0.6.4 release. Please feel free to submit additional PRs if you see any other areas of the codebase that could use refactoring. If you'd like to open up any design discussion or ideas for features, please feel free to open an issue. Thanks for your time and effort!
gharchive/pull-request
2017-12-20T01:32:00
2025-04-01T06:40:55.652205
{ "authors": [ "TApplencourt", "welchbj" ], "repo": "welchbj/tt", "url": "https://github.com/welchbj/tt/pull/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1060518411
Unreliable notifications Hi, I have a device using an nRF52 chip that sends a notification immediately after the notification is enabled, to initially populate some fields in my Android app. I noted that this is not reliable and could narrow it down to something happening on Android, as I control the nRF52 and could verify that the notification is indeed sent. There is no log entry, it just does not happen. I have 4 characteristics, but get notifications for any number between 0 and 4 when I initially connect. Later, the notifications do work. What I tried so far: delaying the observe() calls altogether, or individually spaced up to 2 seconds apart; delaying the response from the nRF52 in the same manner. My code to set up notifications looks like this:

private suspend fun setupAmountNotification(peripheral: BluetoothPeripheral) {
    peripheral.getCharacteristic(
        UUID.fromString(SERVICE_UUID),
        UUID.fromString(CHAR_AMOUNT_UUID)
    )?.let {
        peripheral.observe(it) { value ->
            Log.i(TAG, "Notification for amount")
            val parser = BluetoothBytesParser(value, ByteOrder.LITTLE_ENDIAN)
            runOnUiThread {
                onAmountNotification(parser.getIntValue(FORMAT_UINT32))
            }
        }
    }
}

In turn, I get this log (the other observe functions are analogous):

D/MainActivity: Peripheral eo-4095 has CONNECTED
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1403-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1402-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1401-14b3-457c-xxxx-34bf34932966 enable: true
D/BluetoothGatt: setCharacteristicNotification() - uuid: e44e1404-14b3-457c-xxxx-34bf34932966 enable: true
I/MainActivity: Notification for reservoir
I/MainActivity: Notification for status
I store the lambda for the callback after the enabling of the notification succeeded. So the is indeed a race condition if the first notification comes in immediately because the coroutine runs on a different thread. Yes, I am just missing the first one, later ones do arrive normally. So your assumption could be correct. However, I tried to delay the response of the nRF52, but this did not really result in a different behaviour. And by delay I mean values between 10 an 1000 ms. Does it really take that long to store the lambda? I worked around it by first reading the values manually and then wait for the notification for all subsequent values. It does work, but is less elegant than what I intended. Is there anything I can try to make it work? It is not really a breaking bug though. Ok, let me try a fix tomorrow.... I released a new version (0.1.2) with a possible fix. Can you try it? Will do tonight! Works like a charm! Thank you so much for your effort and work. And quick too! Works like a charm! Thank you so much for your effort and work. And quick too!
gharchive/issue
2021-11-22T19:30:00
2025-04-01T06:40:55.658262
{ "authors": [ "clemens-", "weliem" ], "repo": "weliem/blessed-android-coroutines", "url": "https://github.com/weliem/blessed-android-coroutines/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2277393200
May I ask which icon set the tabbar iconPath icon.png icons come from? The tabbar icons are well chosen and look great; I'd like to add a few more tabbar icons in the same style. Could you tell me which series, name, or package these icons are from? Thanks! They're not free icons from the web; I just grabbed a few from one of my previous projects, drawn by our company's UI designer. Cool, thanks for the reply, and happy holidays!
gharchive/issue
2024-05-03T10:32:32
2025-04-01T06:40:55.659809
{ "authors": [ "welives", "wozzup" ], "repo": "welives/taro-react-starter", "url": "https://github.com/welives/taro-react-starter/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1153839358
Removal of ticketing on requesting What is it and who's it for? As we remove the ability to book a ticket for the building, we also need to remove this functionality from the new requesting flows Implementation Remove reference to booking a ticket from within the requesting flows (remove text and CTA) @DominiqueMarshall - do you have a visual for how this confirmation screen should look without the reference to ticketing?
gharchive/issue
2022-02-28T09:13:41
2025-04-01T06:40:55.662552
{ "authors": [ "DominiqueMarshall", "cbowskill" ], "repo": "wellcomecollection/wellcomecollection.org", "url": "https://github.com/wellcomecollection/wellcomecollection.org/issues/7726", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1791284537
Turn .container into a utility component Who is this for? Devs/maintenance What is it doing for them? One step closer to getting rid of utility classes, see #10018 for more details I had to add isContainer to Space as Container and Space are both styled components and I couldn't find another way to merge them - if we hate it, let's discuss. Something is up with styled-components and type declaration where it ruins the rendering of the rest of the file (see screenshots). If we move that type declaration in its own right, it renders it properly. Extra faff, but so much more readable. I don't love the isContainer Same, I really don't like it! I just realised there might be a simpler solution, it looks the same to me as prod does, so idk if I was just too close to it when I originally made the PR? Do confirm if I'm missing something? Ha, that would work. It did actually cross my mind when I first looked at it and then I got caught up in trying to find a way to combine styled components and forgot. Think that's what happened to me too 😅 sometimes we just have to walk away and come back. I'll merge on Monday 👍
gharchive/pull-request
2023-07-06T10:36:41
2025-04-01T06:40:55.667008
{ "authors": [ "gestchild", "rcantin-w" ], "repo": "wellcomecollection/wellcomecollection.org", "url": "https://github.com/wellcomecollection/wellcomecollection.org/pull/10019", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
867804888
Release 1.1.0 Description Release 1.1.0 Checklist
[x] Change wellcomeml/__version__.py
[x] Add changelog
[ ] make dist
[ ] Verify new package was generated correctly on the pip registry and GitHub releases
Codecov Report Merging #278 (28bc28b) into main (e81be34) will decrease coverage by 58.77%. The diff coverage is n/a. :exclamation: Current head 28bc28b differs from pull request most recent head b4e2f0f. Consider uploading reports for the commit b4e2f0f to get more accurate results
@@ Coverage Diff @@
## main #278 +/- ##
===========================================
- Coverage 86.10% 27.33% -58.78%
===========================================
Files 41 41
Lines 2296 2290 -6
===========================================
- Hits 1977 626 -1351
- Misses 319 1664 +1345
Impacted Files Coverage Δ
wellcomeml/ml/clustering.py 17.82% <ø> (-72.28%) :arrow_down:
wellcomeml/metrics/ner_classification_report.py 8.69% <0.00%> (-91.31%) :arrow_down:
wellcomeml/datasets/conll.py 12.50% <0.00%> (-87.50%) :arrow_down:
wellcomeml/ml/cnn.py 12.24% <0.00%> (-78.58%) :arrow_down:
wellcomeml/io/s3_policy_data.py 14.49% <0.00%> (-76.82%) :arrow_down:
wellcomeml/ml/bilstm.py 15.60% <0.00%> (-75.89%) :arrow_down:
wellcomeml/spacy/spacy_doc_to_prodigy.py 25.00% <0.00%> (-75.00%) :arrow_down:
wellcomeml/ml/spacy_ner.py 20.00% <0.00%> (-73.85%) :arrow_down:
wellcomeml/datasets/winer.py 8.27% <0.00%> (-73.80%) :arrow_down:
... and 20 more
Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update e81be34...b4e2f0f. Read the comment docs. Will run make dist when approved. Release new version WellcomeML Starting to release... Uploaded correctly. We're on 1.1.0 on pipit And on GitHub. 🎉
gharchive/pull-request
2021-04-26T15:04:03
2025-04-01T06:40:55.686738
{ "authors": [ "aCampello", "codecov-commenter" ], "repo": "wellcometrust/WellcomeML", "url": "https://github.com/wellcometrust/WellcomeML/pull/278", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
388309148
Create a topic for edits to the Miro VHS data The editing script then forwards the new item to the topic, so the catalogue transformer gets it immediately. In theory the reporting pipeline could subscribe to it as well. It doesn't cover edits of the form “toggle the isClearedForCatalogueAPI” parameter, but we can easily add that later. @kenoir All good now?
gharchive/pull-request
2018-12-06T17:05:03
2025-04-01T06:40:55.688487
{ "authors": [ "alexwlchan" ], "repo": "wellcometrust/platform", "url": "https://github.com/wellcometrust/platform/pull/3144", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2350863862
Add language support for search engine I'm trying to add a search engine to my website. Unfortunately, it seems that my language is not supported by Zola & tabi: Error: Failed to serve the site Error: Tried to build search index for language pl which is not supported Or maybe I should ask the Elasticlunr developers...? I believe Zola uses elasticlunr-rs. Related discussion: https://github.com/mattico/elasticlunr-rs/issues/13 If elasticlunr supports it and Zola is updated to use the latest elasticlunr, tabi will work with its search index (with no changes). OK, I don't see support for my language. I'll have to investigate it.
gharchive/issue
2024-06-13T11:10:15
2025-04-01T06:40:55.701888
{ "authors": [ "stalkerGH", "welpo" ], "repo": "welpo/tabi", "url": "https://github.com/welpo/tabi/issues/329", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
110779875
11.4 Fix the login method It should be username: public Object login(@Param("name")String name, should be public Object login(@Param("username")String name, OK
gharchive/issue
2015-10-10T08:18:20
2025-04-01T06:40:55.715807
{ "authors": [ "wendal", "xing-kenny" ], "repo": "wendal/nutz-book", "url": "https://github.com/wendal/nutz-book/issues/5", "license": "cc0-1.0", "license_type": "permissive", "license_source": "bigquery" }
2018011746
examples/aishell/s0 run.sh bug in stage 5 Describe the bug When I run 'bash run.sh --stage 5 --stop_stage 5', it hits a bug and run.sh stops running. To Reproduce Steps to reproduce the behavior: Edit '$CUDA_VISIBLE_DEVICES' to '0', '$num_workers' to 1, '$average_num' to 1. Run 'bash run.sh --stage -1 --stop_stage 4'. When train.py finishes epoch 3 training, press 'Ctrl+C' to stop. Run 'bash run.sh --stage 5 --stop_stage 5'. See the bug. Additional context The output message: ''' $ bash run.sh --stage 5 --stop_stage 5 do model average and final checkpoint is exp/conformer/avg_1.pt Namespace(dst_model='exp/conformer/avg_1.pt', max_epoch=65536, min_epoch=0, num=1, src_path='exp/conformer', val_best=True) best val scores = [13.99669639] selected epochs = [3] ['exp/conformer/3.pt'] Processing exp/conformer/3.pt Saving to exp/conformer/avg_1.pt /home/tian/miniconda3/envs/myenv/lib/python3.8/site-packages/torch/_jit_internal.py:726: FutureWarning: ignore(True) has been deprecated. TorchScript will now drop the function call on compilation. Use torch.jit.unused now.
{} warnings.warn( Namespace(attn_weight=0.0, batch_size=32, beam_size=10, bpe_model=None, checkpoint='exp/conformer/avg_1.pt', config='exp/conformer/train.yaml', connect_symbol='', context_bias_mode='', context_graph_score=0.0, context_list_path='', ctc_weight=0.3, data_type='raw', decoder_scale=0.0, decoding_chunk_size=-1, dict='data/dict/lang_char.txt', gpu=0, hlg='', lm_scale=0.0, modes=['ctc_greedy_search', 'ctc_prefix_beam_search', 'attention', 'attention_rescoring'], non_lang_syms=None, num_decoding_left_chunks=-1, override_config=[], penalty=0.0, r_decoder_scale=0.0, result_dir='exp/conformer', reverse_weight=0.5, search_ctc_weight=1.0, search_transducer_weight=0.0, simulate_streaming=False, test_data='data/test/data.list', transducer_weight=0.0, word='') 2023-11-30 15:02:43,902 INFO Checkpoint: loading from checkpoint exp/conformer/avg_1.pt {'accum_grad': 4, 'cmvn_file': 'exp/conformer/global_cmvn', 'ctc_conf': {'ctc_blank_id': 0}, 'dataset_conf': {'batch_conf': {'batch_size': 16, 'batch_type': 'static'}, 'fbank_conf': {'dither': 0.1, 'frame_length': 25, 'frame_shift': 10, 'num_mel_bins': 80}, 'filter_conf': {'max_length': 40960, 'min_length': 0, 'token_max_length': 200, 'token_min_length': 1}, 'resample_conf': {'resample_rate': 16000}, 'shuffle': True, 'shuffle_conf': {'shuffle_size': 1500}, 'sort': True, 'sort_conf': {'sort_size': 500}, 'spec_aug': True, 'spec_aug_conf': {'max_f': 10, 'max_t': 50, 'num_f_mask': 2, 'num_t_mask': 2}, 'speed_perturb': True}, 'decoder': 'transformer', 'decoder_conf': {'attention_heads': 4, 'dropout_rate': 0.1, 'linear_units': 2048, 'num_blocks': 6, 'positional_dropout_rate': 0.1, 'self_attention_dropout_rate': 0.0, 'src_attention_dropout_rate': 0.0}, 'dtype': 'fp32', 'encoder': 'conformer', 'encoder_conf': {'activation_type': 'swish', 'attention_dropout_rate': 0.0, 'attention_heads': 4, 'cnn_module_kernel': 15, 'dropout_rate': 0.1, 'input_layer': 'conv2d', 'linear_units': 2048, 'normalize_before': True, 'num_blocks': 12, 
'output_size': 256, 'pos_enc_layer_type': 'rel_pos', 'positional_dropout_rate': 0.1, 'selfattention_layer_type': 'rel_selfattn', 'use_cnn_module': True}, 'grad_clip': 5, 'input_dim': 80, 'is_json_cmvn': True, 'lfmmi_dir': '', 'log_interval': 100, 'max_epoch': 240, 'model_conf': {'ctc_weight': 0.3, 'length_normalized_loss': False, 'lsm_weight': 0.1}, 'model_dir': 'exp/conformer', 'optim': 'adam', 'optim_conf': {'lr': 0.002}, 'output_dim': 4233, 'save_states': 'model_only', 'scheduler': 'warmuplr', 'scheduler_conf': {'warmup_steps': 25000}, 'train_engine': 'torch_ddp', 'use_amp': False, 'vocab_size': 4233, 'init_infos': {}} Traceback (most recent call last): File "wenet/bin/recognize.py", line 271, in main() File "wenet/bin/recognize.py", line 248, in main results = model.decode( File "/home/tian/master_graduation_project/wenet/wenet/transformer/asr_model.py", line 248, in decode results['attention_rescoring'] = attention_rescoring( File "/home/tian/master_graduation_project/wenet/wenet/transformer/search.py", line 382, in attention_rescoring s = r_decoder_out[i][len(hyp) - j - 1][w] IndexError: invalid index of a 0-dim tensor. 
Use tensor.item() in Python or tensor.item<T>() in C++ to convert a 0-dim tensor to a number ''' Desktop (please complete the following information): OS: Ubuntu in WSL2 Browser : Vscode Version: 22.04 LTS 在wenet/transformer/search.py 文件下 372行加一个hyp = hyp[0] 代码, 试一试,成功解码且数值无误可以提个PR修复下 s = r_decoder_out[i][len(hyp) - j - 1][w].item() 呢 还是报错: Traceback (most recent call last): File "wenet/bin/recognize.py", line 271, in main() File "wenet/bin/recognize.py", line 248, in main results = model.decode( File "/home/tian/master_graduation_project/wenet/wenet/transformer/asr_model.py", line 248, in decode results['attention_rescoring'] = attention_rescoring( File "/home/tian/master_graduation_project/wenet/wenet/transformer/search.py", line 382, in attention_rescoring s = r_decoder_out[i][len(hyp) - j - 1][w].item() IndexError: invalid index of a 0-dim tensor. Use tensor.item() in Python or tensor.item<T>() in C++ to convert a 0-dim tensor to a number 我用的是u2pp_conformer, 你这个看起来是conformer,我再试一下这个
gharchive/issue
2023-11-30T07:05:27
2025-04-01T06:40:55.735515
{ "authors": [ "David-tianqiong", "xingchensong" ], "repo": "wenet-e2e/wenet", "url": "https://github.com/wenet-e2e/wenet/issues/2181", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
901060195
Gigaspeech model broken link The pretrained conformer model link provided for the Gigaspeech example doesn't work, says "file not exist" Model link: http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/gigaspeech/20210115_conformer_exp.tar.gz try http://mobvoi-speech-public.ufile.ucloud.cn/public/wenet/gigaspeech/20210520_conformer_exp.tar.gz This works, thank you! Would you like to also push that updated link to the repo? @pengzhendong I'm trying to load the model from that link, just calling "torch.load()" and giving it the path to the final.pt file, but I get errors - I've attached the stack trace below. Is there an issue with this model file, or is there another way I should be calling it? Thanks! File "wenet/bin/export_jit.py", line 43, in <module> load_checkpoint(model, args.checkpoint) File "/Users/ark/Documents/projects/wenet/wenet/utils/checkpoint.py", line 18, in load_checkpoint checkpoint = torch.load(path, map_location='cpu') File "/opt/miniconda3/envs/wenet/lib/python3.8/site-packages/torch/serialization.py", line 577, in load with _open_zipfile_reader(opened_file) as opened_zipfile: File "/opt/miniconda3/envs/wenet/lib/python3.8/site-packages/torch/serialization.py", line 241, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: [enforce fail at inline_container.cc:144] . PytorchStreamReader failed reading zip archive: failed finding central directory It has been updated. Please change your pytorch version to 1.6. Got it, thanks! For anybody who stumbles upon this in the future, I was actually using torch 1.6, but on a machine without a GPU - with a GPU it worked fine, even on torch 1.8. Hello again! I just have another question trying to get the model working. 
The steps I followed were: Download and untar the file provided at the download link Export the model, with: python wenet/bin/export_jit.py --config 20210520_conformer_exp/train.yaml --checkpoint 20210520_conformer_exp/final.pt --output_file final.zip, this runs successfully Try to recognize on the librispeech test-clean test set (since I don't have access to download gigaspeech yet): python wenet/bin/recognize.py --gpu 0 --mode attention_rescoring --config 20210520_conformer_exp/train.yaml --checkpoint 20210520_conformer_exp/final.pt --test_data examples/librispeech/s0/data/test_clean/format.data --beam_size 20 --batch_size 1 --penalty 0.0 --dict 20210520_conformer_exp/words.txt This fails, I believe because words.txt contains log probabilities of the tokens instead of the indices. I tried replacing the log probabilities with indices, but it seems like the words need to be correctly sorted, and use the correct special tokens as well. I'm able to decode if I use integer indices in words.txt, but the results are nonsensical, presumably because the indices don't correspond correctly to the proper tokens. Is there another words.txt file that I should be using instead? Thanks again for the help! Hello again @pengzhendong - just wanted to ask the above question again, if you have a chance to take a look. The same issue for me as well, there are log probabilities instead of integer indices and I tried to convert them into indices but the output is worse. Update for others - the same link now seems to point to an updated word list, that I was able to get running this time.
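The conversion being attempted, turning a words.txt whose second column is a log-probability into the index-based symbol table that recognize.py expects, can be sketched like this. Note the caveat from the thread: this only helps if the line order already matches the model's output units, which was exactly the problem here, so this is illustrative rather than a recommended fix:

```python
def to_symbol_table(lines):
    """Map 'token logprob' lines to 'token index' lines by enumeration order."""
    out = []
    for index, line in enumerate(lines):
        token = line.split()[0]
        out.append('%s %d' % (token, index))
    return out
```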
gharchive/issue
2021-05-25T16:27:34
2025-04-01T06:40:55.743242
{ "authors": [ "pengzhendong", "rohithkodali", "temp1096" ], "repo": "wenet-e2e/wenet", "url": "https://github.com/wenet-e2e/wenet/issues/407", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2640655223
Add Local Disk Check and Temporary Snapshot Copy for pg_restore This PR enhances the snapshot loading process by verifying if the disk from which the snapshot will be loaded is "local." If the disk is not local, the snapshot is first copied to a predefined temporary directory to enable access. The snapshot is then loaded using pg_restore, and the local copy is deleted after the process completes. Additionally, the --if-exists flag is added to the pg_restore command to prevent errors in cases where certain database objects already exist, improving robustness and ensuring smoother restore operations. Thanks for your contribution, and sorry for the long approval time.
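A language-neutral sketch of the flow this PR describes (written in Python rather than the package's PHP, with an injectable runner so it can be exercised without a database; the paths and cleanup details are illustrative):

```python
import shutil
import subprocess
import tempfile
from pathlib import Path


def restore_snapshot(snapshot_path, database, disk_is_local, run=subprocess.run):
    """Restore a pg_dump custom-format snapshot with pg_restore, copying the
    file to a local temporary directory first when the source disk is not
    local, and removing the temporary copy afterwards."""
    snapshot_path = Path(snapshot_path)
    temp_dir = None
    try:
        if disk_is_local:
            source = snapshot_path
        else:
            temp_dir = Path(tempfile.mkdtemp(prefix='pg-snapshot-'))
            source = temp_dir / snapshot_path.name
            shutil.copyfile(snapshot_path, source)
        # --if-exists (together with --clean) keeps pg_restore from failing on
        # database objects that already exist, as described in the PR.
        cmd = ['pg_restore', '--clean', '--if-exists',
               '--dbname', database, str(source)]
        run(cmd, check=True)
        return cmd
    finally:
        if temp_dir is not None:
            shutil.rmtree(temp_dir, ignore_errors=True)
```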
gharchive/pull-request
2024-11-07T10:56:37
2025-04-01T06:40:55.764256
{ "authors": [ "Udaberrico", "azgooon" ], "repo": "weslinkde/laravel-postgres-tools", "url": "https://github.com/weslinkde/laravel-postgres-tools/pull/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2222213799
[Enhancement Request] Fine-Grained GTID Support for Improved Read-After-Write Performance Background The current implementation of the read_after_write consistency feature in the system relies on waiting for the execution of the last global transaction identifier (GTID), indiscriminately applying this method across SQL operations regardless of their data dependencies. This broad-stroke approach leads to unnecessarily high latency and decreased throughput for read-after-write operations, particularly when these operations do not interact with the same table. The lack of differentiation significantly hinders performance, especially in use cases where operations could otherwise proceed in parallel without data consistency issues. Proposal Implement Table-Level Read-After-Write Support: Introduce the capability for the system to intelligently discern operations across different tables, allowing for parallel processing of read-after-write operations where there are no direct data dependencies. This refinement is anticipated to substantially lower wait times for operations not confined to the same table, enhancing responsiveness. Provide Configuration Options for Global and Table Levels: Offer users the ability to adjust read-after-write settings specifically for global and table levels. This granularity in configuration would empower users to tailor performance optimization strategies more precisely to their application's operational characteristics and requirements. Performance Analysis for Global and Table Level Settings: Undertake a comprehensive analysis to evaluate the performance implications of utilizing global versus table-level settings for read-after-write operations. The insights gained from this analysis would equip users with the knowledge to make informed decisions, optimizing their configurations for either broader or more targeted performance improvements based on their specific scenarios. 
Proposal For Read-After-Write Performance Improvement

Introduction

Hi, I find this project very cool and want to be a contributor. I wrote this proposal based on the ReadAfterWrite Consistency Document, and the changes I make are marked in Bold or Delete Line. This is only a preliminary version, and I hope that I can fully discuss the optimization logic with community members before designing the code implementation, looking forward to your reply : )

Goals

Session Level ReadAfterWrite: Ensure read requests get the latest write in the same client connection.
Instance Level ReadAfterWrite: Ensure read requests get the latest write in the WeSQL WeScale Instance.
Implement Table-Level Read-After-Write Support.

Design Details

Step 1: Get GTID after write operation without extra network round

Starting from MySQL 5.7, the MySQL protocol implements a mechanism to collect the GTIDs to be sent over the wire in the response packet. This feature assists us in acquiring GTIDs without introducing further network rounds. To enable the feature:

The client needs to set the capability flag CLIENT_SESSION_TRACK when connecting to MySQL via the mysql protocol. This will enable MySQL to send the tracking information back to the client.
The client also needs to issue SET @@SESSION_TRACK_GTIDS = 'OWN_GTID' to tell MySQL to return the GTID in the OK packet. This system variable tracks the last DML and DDL commit GTID.

Step 2: Manage the latest GTID and update time for each table in the last t seconds

We can use a struct LatestGTIDManager to manage the latest GTID and update time for each table. The code below is just used to illustrate the method:

// LatestGTIDEntry represents an entry in the LatestGTIDManager with the table name, GTID, and the time it was updated.
type LatestGTIDEntry struct {
	GTID       string
	UpdateTime time.Time
}

// LatestGTIDManager manages the latest GTID and update time for each table.
type LatestGTIDManager struct {
	latestGTIDs map[string]LatestGTIDEntry // Key is the table name, value is the LatestGTIDEntry struct.
	expireTime  time.Duration              // The expiration time for GTID entries.
	mu          sync.RWMutex               // Mutex for read-write synchronization.
	wg          sync.WaitGroup             // WaitGroup to wait for the cleanup goroutine to finish.
}

// NewLatestGTIDManager creates a new instance of LatestGTIDManager.
func NewLatestGTIDManager(expireTime time.Duration) *LatestGTIDManager {
	return &LatestGTIDManager{
		latestGTIDs: make(map[string]LatestGTIDEntry),
		expireTime:  expireTime,
	}
}

// UpdateGTID updates the latest GTID and update time for a given table.
func (m *LatestGTIDManager) UpdateGTID(tableName, gtid string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.latestGTIDs[tableName] = LatestGTIDEntry{
		GTID:       gtid,
		UpdateTime: time.Now(),
	}
}

// GetLatestGTID retrieves the latest GTID for a given table.
// If the table is not found or the GTID has expired, it returns an empty string and false.
func (m *LatestGTIDManager) GetLatestGTID(tableName string) (string, bool) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	entry, ok := m.latestGTIDs[tableName]
	if !ok || time.Now().Sub(entry.UpdateTime) > m.expireTime {
		return "", false
	}
	return entry.GTID, true
}

// startCleaner starts a goroutine to periodically clean up expired GTID entries.
func (m *LatestGTIDManager) startCleaner() {
	m.wg.Add(1)
	go func() {
		defer m.wg.Done()
		ticker := time.NewTicker(m.expireTime)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				m.mu.Lock()
				now := time.Now()
				for tableName, entry := range m.latestGTIDs {
					if now.Sub(entry.UpdateTime) > m.expireTime {
						delete(m.latestGTIDs, tableName)
					}
				}
				m.mu.Unlock()
			}
		}
	}()
}

// Stop waits for the cleanup goroutine to finish.
func (m *LatestGTIDManager) Stop() {
	m.wg.Wait()
}

Depending on the consistency level, the LatestGTIDManager may be initialized in the client's session or a global memory data structure.

// Initialize LatestGTIDManager with an expiration time of 10 seconds.
gm := NewLatestGTIDManager(10 * time.Second)
gm.startCleaner()

Step 3: Store the GTID in WeSQL WeScale sessions

After parsing the response packet and getting the GTIDs, WeSQL WeScale will store them in memory. If the operation is a write operation, the LatestGTIDManager will update the latest GTID and write time for the table that has been written.

gm.UpdateGTID("my_table", "abcdefg-1234567-890")

Depending on the consistency level, the GTIDs may be stored in the client's Session or a global memory data structure.

When a read operation happens, we will utilize the LatestGTIDManager to get the Latest_GTID_for_Table_to_be_Read. Two situations will occur at this time:

The table has been updated in the last t seconds: we get its Latest_GTID_for_Table_to_be_Read and enter Step 4.
The table has NOT been updated in the last t seconds: the information for this table has been cleaned up by the LatestGTIDManager. At this point, in a radical way, since the last write to the table was at least t seconds ago, it can be considered that the last write to the table has been completed on every follower, so we can just pick a follower and read from it.

Later read operations will utilize GTIDs stored in WeSQL WeScale's memory to ensure retrieval of data that was previously written. See the following steps for more details.

Step 4: Select a MySQL follower for reading

A CLUSTER_GTID_EXEUTED memory data structure is maintained in WeSQL WeScale's memory; it contains all the @@global.gtid_executed values from the cluster. The CLUSTER_GTID_EXEUTED is updated by the health-check module periodically, and obviously it will be lagging. Therefore, GTIDs from step 1 will update CLUSTER_GTID_EXEUTED constantly.

During the routing phase of a read operation, it will use the GTID (from session or global memory data structure) to pick a MySQL instance based on CLUSTER_GTID_EXEUTED.
During the routing phase of a read operation, it will use the Latest_GTID_for_Table_to_be_Read (from the LatestGTIDManager stored in session or global memory) to pick a MySQL instance based on CLUSTER_GTID_EXEUTED. As long as the picked MySQL instance contains the Latest_GTID_for_Table_to_be_Read, the read operation can be directly forwarded to that MySQL instance.

Step 5: Ensure write requests have been propagated to the follower MySQL

All the follower MySQL instances may be lagging, or the CLUSTER_GTID_EXEUTED may be out-of-date for whatever reason. It is possible that no follower (except the leader, which always holds all the data) is available for a read operation in Step 4. We can either send the read operation to the leader, or send the read operation to a follower with a WAIT_FOR_EXECUTED_GTID_SET prefix. The WAIT_FOR_EXECUTED_GTID_SET function will keep waiting until the GTID is executed on the follower or until it times out. We can use a multi-statement to save one network round:

-- for example, if the user's SQL is:
select * from t1;
-- the actual SQL sent to the follower may be a multi-statement like this:
select WAIT_FOR_EXECUTED_GTID_SET('ab73d556-cd43-11ed-9608-6967c6ac0b32:7', 3);select * from t1;

We need to handle the mysql protocol carefully to use the multi-statement, otherwise the mysql connection may be broken.

Thank you for your interest in this topic. If you would like to proceed, please feel free to send an email to 399geray@gmail.com. Your understanding is correct, and we can discuss the implementation details further. We should consider the scalability of the implementation because this feature fundamentally analyzes the dependency between two SQL statements. The basic approach is at the table level, and we will implement a more fine-grained dependency detection.

Cool! I have sent you an email about the idea of dependency detection. I'll take some more time to read the source code carefully and look forward to discussing it with you further!
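The Step 4 routing check above (pick a follower whose executed GTID set already contains the table's latest GTID) can be sketched in Python. This is a deliberate simplification under assumed inputs: real @@global.gtid_executed values can hold many UUIDs and interval lists, while this toy parser only handles single 'uuid:lo-hi' sets:

```python
def parse_gtid_set(gtid_set: str):
    """Parse a simplified GTID set such as 'uuid:1-100' or 'uuid:42'
    into (uuid, lo, hi). Real GTID sets are far more complex."""
    uuid, _, rng = gtid_set.partition(":")
    lo, _, hi = rng.partition("-")
    return uuid, int(lo), int(hi or lo)

def contains(executed: str, wanted: str) -> bool:
    """True if the follower's executed set covers the wanted GTID."""
    e_uuid, e_lo, e_hi = parse_gtid_set(executed)
    w_uuid, w_lo, w_hi = parse_gtid_set(wanted)
    return e_uuid == w_uuid and e_lo <= w_lo and w_hi <= e_hi

def pick_follower(cluster_gtid_executed: dict, wanted: str):
    """Return the first follower whose executed set already contains
    the latest GTID for the table being read; None means fall back
    to the leader or to a WAIT_FOR_EXECUTED_GTID_SET prefix (Step 5)."""
    for follower, executed in cluster_gtid_executed.items():
        if contains(executed, wanted):
            return follower
    return None
```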
Hi Terry, that's right, this is currently an OSPP project. You can try pulling the code first and getting wescale running. If you want more material to learn as much as possible about wescale, you can add my WeChat: wanttowin399. We are also considering using the latest filter feature to implement this feature. Best wishes, geray

Hi, I am very interested in this issue. Yesterday, I sent an email outlining some of my thoughts and ideas. I look forward to the opportunity to discuss them with you further. Thank you for your time and consideration!
gharchive/issue
2024-04-03T08:05:46
2025-04-01T06:40:55.822996
{ "authors": [ "big-dust", "gerayking", "terry-xuan-gao" ], "repo": "wesql/wescale", "url": "https://github.com/wesql/wescale/issues/472", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1365487475
Implementation of Seguro Phase II Includes: Seguro Phase II Unit tests Integration tests Benchmark suite Benchmark results makefile Fully commented Enhanced README Notes: A partially-complete version of Seguro Phase II that follows the design outlined in the proposal more closely is available on a different branch in my fork. It is trivial to git cherry-pick or git rebase the two branches together. However, I think the version included in this PR is the one we should go with. Looks good. Only a few more questions above, then I'll approve and merge. Thank you. GH isn't showing it well, but I pushed a new version of the final commit. It fixed the typo for "129 fragments or more", fixed inconsistent usage of "additional"/"total" to "remaining", and added a little more explanation to the fragment header.
gharchive/pull-request
2022-09-08T03:54:24
2025-04-01T06:40:55.857081
{ "authors": [ "ashelkovnykov", "matthew-levan" ], "repo": "wexpertsystems/seguro", "url": "https://github.com/wexpertsystems/seguro/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2061218298
🛑 SeAT is down In 8a882cd, SeAT (https://seat.winterco.space) was down: HTTP code: 502 Response time: 504 ms Resolved: SeAT is back up in 0dced41 after 34 minutes.
gharchive/issue
2024-01-01T01:00:08
2025-04-01T06:40:55.898196
{ "authors": [ "wfjsw" ], "repo": "wfjsw/status-winterco-org", "url": "https://github.com/wfjsw/status-winterco-org/issues/644", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
495480162
Fixup python cli

The python manager cli currently expects a sessionid for a non-logged-in user. In more recent versions of Django, a sessionid is not provided until a user has logged in. In addition, we should update the csrftoken header after logging in to match the csrftoken value passed back.

Signed-off-by: Joe Grund jgrund@whamcloud.io

Codecov Report
Merging #1211 into master will not change coverage. The diff coverage is n/a.

@@           Coverage Diff           @@
##           master    #1211   +/-   ##
=======================================
  Coverage   95.39%   95.39%
=======================================
  Files           2        2
  Lines         152      152
=======================================
  Hits          145      145
  Misses          7        7

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a11a63e...7af640c.
gharchive/pull-request
2019-09-18T22:14:02
2025-04-01T06:40:55.920128
{ "authors": [ "codecov-io", "jgrund" ], "repo": "whamcloud/integrated-manager-for-lustre", "url": "https://github.com/whamcloud/integrated-manager-for-lustre/pull/1211", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1145267901
Rename wrong words def_excel 😃 HA, excellent catch - thank you! I'll merge this tomorrow, @runningzyp - and don't forget to add yourself to contributors in the README too!
gharchive/pull-request
2022-02-21T03:14:48
2025-04-01T06:40:55.922178
{ "authors": [ "FlipperPA", "runningzyp" ], "repo": "wharton/drf-excel", "url": "https://github.com/wharton/drf-excel/pull/53", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
300013961
Suggestion: Switches to control how missing packages are handled

This is a follow-up suggestion for #10. There could be a set of switches controlling how missing packages are handled:

--error-missing: This is the default and this switch does not need to be specified (and it doesn't even have to exist other than for documentation purposes). If a package is missing it will break with an error without running any of the packages. This is already the default so nothing has to be done (except maybe add documentation).
--warn-missing: If a package is missing a warning will be printed but all packages that have the script will still be run. This is the same as the existing --exclude-missing (and it could of course keep that name). So it is already done.
--ignore-missing: This is the lerna/oao behaviour. If a package is missing it will be ignored without any warnings. This switch would need to be added.

I'm not generally fond of solving problems by adding more switches, but I think most people coming from lerna/oao will want to have the --ignore-missing behaviour to feel at home and be able to migrate to wsrun. Existing lerna users have already voiced an opinion towards this in the yarn RFC. If this would be an accepted approach I could work on a PR for this.

Please don't open new issues. We can continue the discussion in #10

Yes, this suggestion is different than the one in #10 (which is about eliminating the warnings for --exclude-missing). I thought it would be cleaner to have a separate issue, but sure, we can discuss this suggestion in #10 too.

I agree that adding more switches is not good (as I already stated in the original post in this issue). But since #10 has a wont-fix label and there seems to be no agreement, it could be argued that leaving lerna users without any migration option is worse than adding more switches.

Lerna users are provided with a migration option, which warns that the behaviour is not desirable.
The entire point of adding the --exclude-missing option is to support migrating lerna users without endorsing lerna's behaviour as acceptable or good.
gharchive/issue
2018-02-25T10:09:17
2025-04-01T06:40:55.957723
{ "authors": [ "jonaskello", "spion" ], "repo": "whoeverest/wsrun", "url": "https://github.com/whoeverest/wsrun/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2341758089
🛑 Lunes Negro is down In ae589d0, Lunes Negro (https://lunesnegro.com.ar) was down: HTTP code: 0 Response time: 0 ms Resolved: Lunes Negro is back up in dd8e29f after 10 minutes.
gharchive/issue
2024-06-08T17:38:41
2025-04-01T06:40:55.960325
{ "authors": [ "whoisnegrello" ], "repo": "whoisnegrello/upptimetest", "url": "https://github.com/whoisnegrello/upptimetest/issues/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
115551790
Handle curl_easy_perform() errors Hello! I'd like to be able to detect with a finer-grained resolution errors that prevent Curl from performing a request. It seems the right place to start is to change session.cpp:350 to capture the return value from curl_easy_perform(), and then perhaps add some error fields to the cpr::Response that is returned. I'm happy to open a PR with this feature added if there were interest in it; and if so, I'd like to get your opinion on the right way to structure this feature. This is a great suggestion @DEGoodmanWilson. In my own testing, I've always just set CURL_VERBOSE to 1 to get more information when curl doesn't do what I want it to do. Now that the library has matured a bit since those days, I can definitely see the value in having a more stable error API. One approach is to surface curl errors and have the users inspect those errors to figure out what's wrong. I can see how this approach might cause some problems in the future, because I want to leave the frontend of the API as curl-free as possible. Ideally, the design of the interface should allow for the implementation to completely swap out curl for another http framework (looking mostly at Boost.ASIO). That said, that's a pretty long way off and I think there's enough value here that it's worth doing soon. I think minimally the curl error code could be captured in an integer field in cpr::Response, and potentially the string error from curl_easy_strerror can be captured in cpr::Response as well. Feel free to throw up a PR and we could take it from there! Thanks for the quick response! Sounds good. I share your concerns about exposing Curl to the end-user of CPR. Perhaps we could inter-translate into a CPR-owned enum (but continue using the curl-generated message, since that is meant for humans rather than computers anyway?) What is your feeling on throwing an exception instead of adding the error to cpr::Response? 
This could be problematic for the async versions of the HTTP methods. My own inclination is to not throw an exception, but I can see arguments either way. I would prefer to keep the library as exception free as possible. Clients could always wrap their own exception mechanisms over a simple error code they can check. Conversely, a client could absorb exceptions and write their own error codes against this, but I think the former is a bit nicer to larger groups of people (looking at you, Google Coding Standards). I think the approach you're suggesting is reasonable, I'll have to take a closer look at #61 tomorrow when I have some free cycles. If that's good to go, I'll close this issue when that gets merged in. Does that sound good? :+1: Sounds awesome. Merged in https://github.com/whoshuu/cpr/commit/bdb877c4ae29423ec45a8cb37125e14c91983da1. Thanks for the PR and work!
gharchive/issue
2015-11-06T17:33:18
2025-04-01T06:40:55.966714
{ "authors": [ "DEGoodmanWilson", "whoshuu" ], "repo": "whoshuu/cpr", "url": "https://github.com/whoshuu/cpr/issues/60", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
317900940
Init adaptation for Pytorch 0.4.0 shared mode does not work yet The only question is whether we should use enable_grad(), set_grad_enabled(), no_grad() as a context manager. I think a context manager makes it clearer that the gradient mode is only defined inside the block below the with statement. I will merge this PR first. We can complete the adaptation afterwards :)
gharchive/pull-request
2018-04-26T06:59:39
2025-04-01T06:40:55.983515
{ "authors": [ "whr94621", "zhengzx-nlp" ], "repo": "whr94621/NJUNMT-pytorch", "url": "https://github.com/whr94621/NJUNMT-pytorch/pull/16", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
457528389
version bumps Nothing major; the biggest two are: electron-builder electron But both are just minor version bumps 👌 npm run electron:mac still works -- 🙆‍♂ Current Mac dmg file size: 104MB 🎉
gharchive/pull-request
2019-06-18T14:49:52
2025-04-01T06:40:55.985277
{ "authors": [ "cal2195", "whyboris" ], "repo": "whyboris/Video-Hub-App", "url": "https://github.com/whyboris/Video-Hub-App/pull/192", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
134388494
Add ES6 Map support to set up a waterfall flow

It's impossible to use an Object to set up a waterfall flow, because the order of the keys is not guaranteed on an Object. However, an ES6 Map is ordered! ("A Map object iterates its elements in insertion order"). Without a Map literal, Map will be unwieldy, but there's hope, and assuming that gets fixed, it could look something like this:

flw.waterfall([
  // imaginary Map literal syntax
  foo: function getFoo(callback) {
    callback(null, 'I pity the foo');
  },
  bar: function getBar(results, callback) {
    console.log(results.foo);
    callback();
  },
]);

getBar gets two arguments, results and callback. results is a hashmap containing keys for each of the previously finished functions, in this case one key foo which contains the value with which the getFoo callback was called.

A waterfall is just a series that returns values. Now you have a nice context object, so a waterfall is not really needed I think ..

I think this would be a whole new project, mapperFall or something :)
gharchive/issue
2016-02-17T20:17:49
2025-04-01T06:40:55.988295
{ "authors": [ "godspeedelbow", "whyhankee" ], "repo": "whyhankee/flw", "url": "https://github.com/whyhankee/flw/issues/2", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
933191195
WIP: NLP metrics

Description

General Checklist
[ ] Tests added for this feature/bug if it was a bug, test must cover it.
[ ] Conform by the style guides, by using formatter
[ ] Documentation updated
[ ] (optional) Please add a label to your PR

Pull Request Test Coverage Report for Build 987111448

2 of 40 (5.0%) changed or added relevant lines in 3 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.6%) to 79.277%

Changes Missing Coverage                  | Covered Lines | Changed/Added Lines | %
src/whylogs/core/model_profile.py         | 1             | 4                   | 25.0%
src/whylogs/core/metrics/nlp_metrics.py   | 0             | 35                  | 0.0%

Totals
Change from base Build 980164614: -0.6%
Covered Lines: 3352
Relevant Lines: 4062

💛 - Coveralls

Still need to add a notebook for the overall metrics, and some more tests
gharchive/pull-request
2021-06-30T00:05:35
2025-04-01T06:40:55.996638
{ "authors": [ "coveralls", "lalmei" ], "repo": "whylabs/whylogs", "url": "https://github.com/whylabs/whylogs/pull/248", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2505556541
🛑 Le Coq 🇪🇸 (es) is down In e51b361, Le Coq 🇪🇸 (es) (https://holiday-lescala.com/es/h/casa-le-coq/) was down: HTTP code: 0 Response time: 0 ms Resolved: Le Coq 🇪🇸 (es) is back up in 07713cd after 5 minutes.
gharchive/issue
2024-09-04T14:36:51
2025-04-01T06:40:56.002078
{ "authors": [ "whytspace" ], "repo": "whytspace/upptime-holiday-lescala", "url": "https://github.com/whytspace/upptime-holiday-lescala/issues/1186", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2135275245
AI wont validate I have my AI key and I also have chat gpt4 yet when I click on validate it says I don't have access to GPT4. What am I doing wrong? I have looked for some type of install guide, but there appears to be none.
gharchive/issue
2024-02-14T22:03:27
2025-04-01T06:40:56.003481
{ "authors": [ "TheMorningStarLucifer" ], "repo": "wickercar/foundry-ai-text-importer", "url": "https://github.com/wickercar/foundry-ai-text-importer/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1070565154
Option for choosing OpenCL

new option in the config file:
GPU = True, openMM uses CUDA
GPU = False, openMM uses CPU
GPU = OpenCL, openMM uses OpenCL

Codecov Report
Merging #40 (91c0672) into master (9c62c39) will decrease coverage by 0.50%. The diff coverage is 10.52%.

I would prefer a routine that checks if CUDA is available and if it is, it will use CUDA, otherwise it will fall back to OpenCL. Here are some examples in which people have done this before: https://programtalk.com/python-examples/simtk.openmm.Platform.getPlatformByName/

There is an option already implemented by CHARMM-GUI. I think we can use that, I will try it soon:

# Set platform
DEFAULT_PLATFORMS = "CUDA", "OpenCL", "CPU"
enabled_platforms = [
    Platform.getPlatform(i).getName() for i in range(Platform.getNumPlatforms())
]
if args.platform:
    if not args.platform[0] in enabled_platforms:
        print(
            "Unable to find OpenMM platform '{}'; exiting".format(args.platform[0]),
            file=sys.stderr,
        )
        sys.exit(1)
    platform = Platform.getPlatformByName(args.platform[0])
else:
    for platform in DEFAULT_PLATFORMS:
        if platform in enabled_platforms:
            platform = Platform.getPlatformByName(platform)
            break
if isinstance(platform, str):
    print(
        "Unable to find any OpenMM platform; exiting",
        file=sys.stderr,
    )
    sys.exit(1)
gharchive/pull-request
2021-12-03T12:58:16
2025-04-01T06:40:56.011719
{ "authors": [ "JohannesKarwou", "codecov-commenter" ], "repo": "wiederm/transformato", "url": "https://github.com/wiederm/transformato/pull/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
175299063
Added if-unmodified-since to transcludes cc @wikimedia/services Puppet change is Gerrit 308773 Changes Unknown when pulling 6da295a880568bffe597b05dfc30d237ab193006 on Pchelolo:trans_unmodified into wikimedia:master.
gharchive/pull-request
2016-09-06T17:02:04
2025-04-01T06:40:56.016919
{ "authors": [ "Pchelolo", "coveralls", "d00rman" ], "repo": "wikimedia/change-propagation", "url": "https://github.com/wikimedia/change-propagation/pull/97", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1275793590
Universal search for contacts, groups, chat history, channels, etc. Resolved.
gharchive/issue
2022-06-18T13:42:21
2025-04-01T06:40:56.048812
{ "authors": [ "imndx" ], "repo": "wildfirechat/uni-chat", "url": "https://github.com/wildfirechat/uni-chat/issues/9", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
202467181
[WFCORE-2226] Add @Test annotation to a test in ValueTypeCompletionTestCase https://issues.jboss.org/browse/WFCORE-2226 Can one of the admins verify this patch? @jtymel , this patch is fine. Thanks for the fix. this is ok to test
gharchive/pull-request
2017-01-23T07:59:00
2025-04-01T06:40:56.055406
{ "authors": [ "bstansberry", "jfdenise", "jtymel", "wildfly-ci" ], "repo": "wildfly/wildfly-core", "url": "https://github.com/wildfly/wildfly-core/pull/2107", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1828027886
[WFLY-18296] Remove the org.jboss.jts dependency on jdk.console https://issues.redhat.com/browse/WFLY-18296 Upstream: #17050 @mmusgrov Is this ok?
gharchive/pull-request
2023-07-30T18:29:42
2025-04-01T06:40:56.064483
{ "authors": [ "bstansberry" ], "repo": "wildfly/wildfly", "url": "https://github.com/wildfly/wildfly/pull/17051", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2379383055
Timed interrupt fatfs

Lots of automatic reformatting has happened; ignore most changes, as the code structure hasn't changed. Changes that have been made:

Timed_interrupt_fatfs app added to scenario_apps
Seeed_sample app added to scenario_apps
i2c_slave_app added to scenario_apps

I've merged main from the Himax team to bring the repo up to date.

Pull this branch locally, then within the makefile, update the APP_TYPE for the desired app to run. Timed_interrupt_fatfs requires an SD card to be inserted into the device in order to run.

Closing this PR as it no longer needs to be merged, but leaving it as a potential reference source
gharchive/pull-request
2024-06-28T01:39:16
2025-04-01T06:40:56.066753
{ "authors": [ "Tobyntobyn", "victor-wildlife" ], "repo": "wildlifeai/Seeed_Grove_Vision_AI_Module_V2", "url": "https://github.com/wildlifeai/Seeed_Grove_Vision_AI_Module_V2/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
766984398
Jekyll 4.2.0 breaks this plugin

I just upgraded to Jekyll 4.2.0 (which was released recently) and noticed that I get an error upon attempting to run my project. Here is the stacktrace:

```
Configuration file: [...]/_config.yml
Configuration file: [...]/_config_dev.yml
Source: [...]
Destination: [...]/build_dev
Incremental build: disabled. Enable with --incremental
Generating...
Creating output directory [...]/build_dev/assets/media/r
Generating [...]/build_dev/assets/media/r/imagename.jpg
Liquid Exception: undefined method `filter_cache' for nil:NilClass in [...]/_posts/postname.md
bundler: failed to load command: jekyll ([...]/.rbenv/versions/2.6.3/bin/jekyll)
NoMethodError: undefined method `filter_cache' for nil:NilClass
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:425:in `item_property'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `block in sort_input'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `map'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:385:in `sort_input'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/filters.rb:320:in `sort'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/strainer.rb:56:in `invoke'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/context.rb:86:in `invoke'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:84:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `each'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `inject'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/variable.rb:82:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/tags/assign.rb:26:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:103:in `render_node_to_output'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:91:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:208:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:242:in `with_profiling'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:207:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:220:in `render!'
[...]lib/jekyll-responsive-image/renderer.rb:28:in `render_responsive_image'
[...]lib/jekyll-responsive-image/tag.rb:16:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:103:in `render_node_to_output'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/block_body.rb:91:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:208:in `block in render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:242:in `with_profiling'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:207:in `render'
[...]lib/ruby/gems/2.6.0/gems/liquid-4.0.3/lib/liquid/template.rb:220:in `render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:39:in `block (3 levels) in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:59:in `measure_counts'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:38:in `block (2 levels) in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:63:in `measure_bytes'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:37:in `block in render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:70:in `measure_time'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/liquid_renderer/file.rb:36:in `render!'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:131:in `render_liquid'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:80:in `render_document'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/renderer.rb:63:in `run'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:547:in `render_regenerated'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:532:in `block (2 levels) in render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:531:in `each'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:531:in `block in render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:530:in `each_value'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:530:in `render_docs'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:210:in `render'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/site.rb:80:in `process'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:28:in `process_site'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:65:in `build'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:36:in `process'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `block in process_with_graceful_fail'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `each'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/command.rb:91:in `process_with_graceful_fail'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/lib/jekyll/commands/build.rb:18:in `block (2 levels) in init_with_program'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `block in execute'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `each'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/command.rb:221:in `execute'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary/program.rb:44:in `go'
[...]lib/ruby/gems/2.6.0/gems/mercenary-0.4.0/lib/mercenary.rb:21:in `program'
[...]lib/ruby/gems/2.6.0/gems/jekyll-4.2.0/exe/jekyll:15:in `<top (required)>'
[...]bin/jekyll:23:in `load'
[...]bin/jekyll:23:in `<top (required)>'
```

If I have some time I might file a PR, as I have some other modifications in a fork I'd like to have considered upstream.

fights off bot

Same problem here.

Sorry for such a slow reply on this. I'm not sure whether it's a Jekyll or Liquid change, but it's frustrating to have a breaking change regardless! I'll take a look at #103 and see whether I can get a release ready ASAP.

Does anyone have a sample site/config that shows this issue? I can't reproduce it.

Yes, see https://github.com/wildlyinaccurate/jekyll-responsive-image/pull/103#discussion_r563534717

Please don't, this is still an issue.

> On 13.02.2021 at 13:13, "stale[bot]" notifications@github.com wrote: This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
gharchive/issue
2020-12-14T22:30:58
2025-04-01T06:40:56.075027
{ "authors": [ "Lominean", "brandonb927", "inavarrorubio", "salomvary", "sebastianhaas", "wildlyinaccurate" ], "repo": "wildlyinaccurate/jekyll-responsive-image", "url": "https://github.com/wildlyinaccurate/jekyll-responsive-image/issues/101", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1806667769
🛑 Emby is down

In 533e512, Emby (https://emby.will.cymru) was down:
- HTTP code: 523
- Response time: 482 ms

Resolved: Emby is back up in 0ae6a36.
gharchive/issue
2023-07-16T17:57:42
2025-04-01T06:40:56.086080
{ "authors": [ "will936" ], "repo": "will936/Upptime", "url": "https://github.com/will936/Upptime/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
510123050
Multiple optimizers but only one loss

Hey! I have a question regarding this library. I really like how it forces me to structure my code better. I encountered one problem I did not know how to solve based on the documentation.

Let's say I have two optimizers for two parts of the network, e.g. my `configure_optimizers()` looks like this:

```python
def configure_optimizers(self):
    optimizer_encoder = optim.Adam(self.encoder.parameters(), ...)
    optimizer_decoder = optim.Adam(self.decoder.parameters(), ...)
    return [optimizer_encoder, optimizer_decoder]
```

Now in the training loop I forward pass through the encoder, then the decoder, and compute my loss based on the output:

```python
def training_step(self, batch, batch_nb, optimizer_idx):
    inp, gt = ...
    encoding = self.encoder(inp)
    pred = self.decoder(encoding)
    loss = F.mse_loss(pred, gt)
    return {'loss': loss}
```

Since I have two optimizers I have to respect that this function is called two times with different `optimizer_idx`, however I have just one loss to backprop. How would I go about this?

What have you tried?

I tried something like this:

```python
def training_step(self, batch, batch_nb, optimizer_idx):
    if optimizer_idx == 1:
        return {}
    inp, gt = ...
    encoding = self.encoder(inp)
    pred = self.decoder(encoding)
    loss = F.mse_loss(pred, gt)
    return {'loss': loss}
```

However, this leads to an error since no `loss` key is present in `trainer.py:1392`.

In that case, just pass both sets of params to a single optimizer.

But I explicitly want two different learning rates for different parts of the network. That is not really possible with a single optimizer AFAIK. One possibility could be to scale the gradients on the weights for which I want a lower learning rate before running the optimizer, but that is really not a clean solution.

It is possible with parameter groups using a single optimizer. Your use-case is actually the example in the docs: https://pytorch.org/docs/stable/optim.html#per-parameter-options

That's nice. Thank you for the hint!
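The per-parameter options the answer points to can be sketched in plain PyTorch like this. The toy modules and learning rates below are illustrative stand-ins, not taken from the thread:

```python
import torch
from torch import nn, optim

# Toy encoder/decoder standing in for the modules from the question.
encoder = nn.Linear(4, 2)
decoder = nn.Linear(2, 4)

# One optimizer, two parameter groups with different learning rates.
# The group-level "lr" overrides the optimizer-wide default.
optimizer = optim.Adam(
    [
        {"params": encoder.parameters()},            # uses default lr
        {"params": decoder.parameters(), "lr": 1e-4},  # overridden lr
    ],
    lr=1e-3,
)

# A single loss then updates both parts in one step.
inp = torch.randn(8, 4)
loss = nn.functional.mse_loss(decoder(encoder(inp)), inp)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

With this, `configure_optimizers` can return the single optimizer, so `training_step` is only called once per batch and the `optimizer_idx` juggling goes away.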
gharchive/issue
2019-10-21T17:02:11
2025-04-01T06:40:56.094638
{ "authors": [ "amatsukawa", "selflein", "williamFalcon" ], "repo": "williamFalcon/pytorch-lightning", "url": "https://github.com/williamFalcon/pytorch-lightning/issues/404", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2102790901
Tailwind Language Server Slow

Problem description
The Tailwind language server is very slow compared to the other servers I am using, causing a delay of nearly 1 second before it displays anything. I am using LazyVim and have not really changed anything.

Why do you think this is an issue with mason-lspconfig.nvim?
The issue seems to be with how the LSP server is being handled, but I am not completely sure, so any help is appreciated.

Neovim version (>= 0.7)
NVIM v0.9.5 Build type: Release LuaJIT 2.1.1702233742

Operating system/version
Linux redox-laptop 6.7.1-arch1-1 #1 SMP PREEMPT_DYNAMIC Sun, 21 Jan 2024 22:14:10 +0000 x86_64 GNU/Linux

I've manually reviewed the Nvim LSP client log (:LspLog) to find potential errors
[x] Yes

I've recently downloaded the latest plugin version of mason.nvim, mason-lspconfig.nvim, and nvim-lspconfig
[x] Yes

Affected language servers
Tailwind Language Server

Steps to reproduce
Just simply use it.

Actual behavior
Some issue with how the information is handed back from the server, but not sure.

Expected behavior
It should be faster.
LspInfo Language client log: /home/redox/.local/state/nvim/lsp.log Detected filetype: typescriptreact 4 client(s) attached to this buffer: Client: tsserver (id: 1, bufnr: [269, 3]) filetypes: javascript, javascriptreact, javascript.jsx, typescript, typescriptreact, typescript.tsx autostart: true root directory: /home/redox/Code/Native/BloodDonation cmd: /home/redox/.local/share/nvim/mason/bin/typescript-language-server --stdio Client: emmet_language_server (id: 2, bufnr: [269, 3]) filetypes: css, eruby, html, htmldjango, javascriptreact, less, pug, sass, scss, typescriptreact autostart: true root directory: /home/redox/Code/Native/BloodDonation cmd: /home/redox/.local/share/nvim/mason/bin/emmet-language-server --stdio Client: tailwindcss (id: 3, bufnr: [269, 3]) filetypes: aspnetcorerazor, astro, astro-markdown, blade, clojure, django-html, htmldjango, edge, eelixir, elixir, ejs, erb, eruby, gohtml, gohtmltmpl, haml, handlebars, hbs, html, html-eex, heex, jade, leaf, liquid, mdx, mustache, njk, nunjucks, php, razor, slim, twig, css, less, postcss, sass, scss, stylus, sugarss, javascript, javascriptreact, reason, rescript, typescript, typescriptreact, vue, svelte autostart: true root directory: /home/redox/Code/Native/BloodDonation cmd: /home/redox/.local/share/nvim/mason/bin/tailwindcss-language-server --stdio Client: copilot (id: 4, bufnr: [269, 3]) filetypes: autostart: false root directory: /home/redox/Code/Native/BloodDonation cmd: node /home/redox/.local/share/nvim/lazy/copilot.lua/copilot/index.js Other clients that match the filetype: typescriptreact Config: eslint filetypes: javascript, javascriptreact, javascript.jsx, typescript, typescriptreact, typescript.tsx, vue, svelte, astro root directory: Not found. 
cmd: /home/redox/.local/share/nvim/mason/bin/vscode-eslint-language-server --stdio cmd is executable: true autostart: true custom handlers: eslint/openDoc, eslint/noLibrary, eslint/probeFailed, eslint/confirmESLintExecution Configured servers list: lua_ls, pyright, marksman, cssls, emmet_language_server, eslint, tailwindcss, jsonls, ruff_lsp, yamlls, html, tsserver, volar LspLog No response Healthcheck mason: require("mason.health").check() mason.nvim ~ - OK mason.nvim version v1.9.0 - OK PATH: prepend - OK Providers: mason.providers.registry-api mason.providers.client - OK neovim version >= 0.7.0 mason.nvim [Registries] ~ - OK Registry `github.com/mason-org/mason-registry version: 2024-01-26-net-canoe` is installed. mason.nvim [Core utils] ~ - OK unzip: `UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send` - OK wget: `GNU Wget 1.21.4 built on linux-gnu.` - OK curl: `curl 8.5.0 (x86_64-pc-linux-gnu) libcurl/8.5.0 OpenSSL/3.2.0 zlib/1.3.1 brotli/1.1.0 zstd/1.5.5 libidn2/2.3.4 libpsl/0.21.2 (+libidn2/2.3.4) libssh2/1.11.0 nghttp2/1.59.0` - OK gzip: `gzip 1.13` - OK tar: `tar (GNU tar) 1.35` - OK bash: `GNU bash, version 5.2.26(1)-release (x86_64-pc-linux-gnu)` - OK sh: `Ok` mason.nvim [Languages] ~ - WARNING Go: not available - ADVICE: - spawn: go failed with exit code - and signal -. go is not executable - WARNING Composer: not available - ADVICE: - spawn: composer failed with exit code - and signal -. composer is not executable - WARNING PHP: not available - ADVICE: - spawn: php failed with exit code - and signal -. php is not executable - WARNING Ruby: not available - ADVICE: - spawn: ruby failed with exit code - and signal -. ruby is not executable - WARNING RubyGem: not available - ADVICE: - spawn: gem failed with exit code - and signal -. gem is not executable - OK node: `v20.10.0` - OK cargo: `cargo 1.75.0` - WARNING julia: not available - ADVICE: - spawn: julia failed with exit code - and signal -. 
julia is not executable
- OK python: `Python 3.11.6`
- OK luarocks: `/usr/bin/luarocks 3.9.2`
- OK java: `openjdk version "17.0.10" 2024-01-16`
- OK javac: `javac 17.0.10`
- OK npm: `10.2.3`
- OK pip: `pip 23.3.2 from /usr/lib/python3.11/site-packages/pip (python 3.11)`
- OK python venv: `Ok`

mason.nvim [GitHub] ~
- OK GitHub API rate limit. Used: 4. Remaining: 56. Limit: 60. Reset: Sat 27 Jan 2024 01:26:03 AM PKT. Install and authenticate via gh-cli to increase rate limit.

Screenshots or recordings
No response

I can confirm. When editing a simple markdown file, my Neovim freezes for a few seconds, then reacts to input, then freezes again. It's unusable. Uninstalling the Tailwind LSP server alleviated the issue.

Maybe the issue comes from the fact that mason is installing a very old version of the Tailwind CSS language server:

```
✓ tailwindcss-language-server
  tailwindcss Language Server Protocol implementation for Tailwind CSS.
  installed version 0.0.27
  homepage https://github.com/tailwindlabs/tailwindcss-intellisense
  languages CSS
  categories LSP
  executables tailwindcss-language-server
```

current version: 0.12.6

From what I've heard, the issue with the Tailwind LSP comes from the fact that there isn't any way to control the results being sent back, and nvim-cmp unfortunately also lacks the ability at the moment to handle them asynchronously. The response being sent by the LSP is huge, which causes the hiccup in the editor, as @matmilbury has mentioned. For those that might end up here, I would recommend using yionke's fork of nvim-cmp that implements this, and also giving blink.cmp a try. Blink is very new at the moment and needs more polish, but if you can make do without a lot of external cmp sources you'd be more than happy with it. It does not seem like Tailwind will provide this option of restricting outputs anytime in the near future, so most likely this issue will need to be handled by nvim-cmp.

I'm currently using the fork from yionke and it works absolutely fine for me with Tailwind.
gharchive/issue
2024-01-26T19:39:18
2025-04-01T06:40:56.106110
{ "authors": [ "Redoxahmii", "loeffel-io", "matmilbury" ], "repo": "williamboman/mason-lspconfig.nvim", "url": "https://github.com/williamboman/mason-lspconfig.nvim/issues/352", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1133194155
yamlls: Unable to find executable Problem description yamlls server is not running, fails to find executable. Config: yamlls filetypes: yaml, yaml.docker-compose root directory: /home/aryzing/workspace/project cmd: yaml-language-server --stdio cmd is executable: Unable to find executable. Please check your path and ensure the server is installed autostart: true custom handlers: Neovim version (>= 0.6) NVIM v0.7.0-dev+1048-gdba1df635 Build type: RelWithDebInfo LuaJIT 2.1.0-beta3 Operating system/version Linux linux 5.11.0-49-generic #55-Ubuntu SMP Wed Jan 12 17:36:34 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux I've recently downloaded the latest plugin version of both nvim-lsp-installer and nvim-lspconfig [X] Yes Affected language servers yamlls Actual behavior yamlls is not running Expected behavior For yamlls to run LspInstallInfo output ✓ yamlls installed 12 Feb 2022 00:14 filetypes yaml, yaml.docker-compose path ~/.local/share/nvim/lsp_servers/yaml homepage https://github.com/redhat-developer/yaml-language-server ↓ Server configuration schema (press enter to collapse) → redhat.telemetry.enabled default: null → yaml.completion default: true → yaml.customTags default: [] → yaml.disableAdditionalProperties default: false → yaml.format.bracketSpacing default: true → yaml.format.enable default: true → yaml.format.printWidth default: 80 → yaml.format.proseWrap default: "preserve" → yaml.format.singleQuote default: false → yaml.hover default: true → yaml.maxItemsComputed default: 5000 → yaml.schemaStore.enable default: true → yaml.schemaStore.url default: "https:\/\/www.schemastore.org\/api\/json\/catalog.json" → yaml.schemas default: {} → yaml.trace.server default: "off" → yaml.validate default: true Installation log Believe these are the relevant lines, let me know if you need more. These match the timestamp above, and correspond to the current installed instance that's not working. 
[INFO Sat 12 Feb 2022 00:14:49 EET] ...-installer/lua/nvim-lsp-installer/ui/status-win/init.lua:644: Starting install server_name="yamlls", requested_version=""
[INFO Sat 12 Feb 2022 00:14:51 EET] ...-installer/lua/nvim-lsp-installer/ui/status-win/init.lua:663: Installation completed server_name="yamlls", success=true

Healthcheck
nvim-lsp-installer: require("nvim-lsp-installer.health").check()
========================================================================
## nvim-lsp-installer report
- OK: neovim version >= 0.6.0
- WARNING: **Go**: not available
- WARNING: **Ruby**: not available
- WARNING: **RubyGem**: not available
- WARNING: **Composer**: not available
- WARNING: **PHP**: not available
- WARNING: **javac**: not available
- WARNING: **julia**: not available
- OK: **sh**: `Ok`
- OK: **bash**: `GNU bash, version 5.1.4(1)-release (x86_64-pc-linux-gnu)`
- OK: **tar**: `tar (GNU tar) 1.34`
- OK: **gzip**: `gzip 1.10`
- OK: **curl**: `curl 7.74.0 (x86_64-pc-linux-gnu) libcurl/7.74.0 OpenSSL/1.1.1j zlib/1.2.11 brotli/1.0.9 libidn2/2.3.0 libpsl/0.21.0 (+libidn2/2.3.0) libssh/0.9.5/openssl/zlib nghttp2/1.43.0 librtmp/2.3`
- OK: **wget**: `GNU Wget 1.21 built on linux-gnu.`
- OK: **python3**: `Python 3.10.1`
- OK: **node**: `v17.1.0`
- OK: **java**: `Ok`
- OK: **npm**: `8.1.2`
- OK: **pip3**: `pip 21.2.4 from /usr/local/lib/python3.10/site-packages/pip (python 3.10)`

I did, the docs are great. Here's some relevant code from my config:

```lua
local servers = {
    -- [other servers omitted from this snippet]
    "yamlls",
}

for _, name in pairs(servers) do
    local server_is_found, server = lsp_installer.get_server(name)
    if server_is_found then
        if not server:is_installed() then
            print("Installing " .. name)
            server:install()
        end
    end
end

lsp_installer.on_server_ready(function(server)
    local serverOpts = {
        on_attach = on_attach, -- [properly defined, but not included in this snippet]
    }

    -- [other servers omitted from this snippet]

    if server.name == "yamlls" then
        serverOpts.settings = {
            yaml = {
                schemas = {
                    ["https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/schemas/v3.1/schema.json"] = "/**/openapi.yaml",
                },
            },
        }
    end

    server:setup(serverOpts)
end)
```

Does it start and attach if you try the following (this creates a new git repository in a tmp dir)?

```sh
$ cd `mktemp -d`
$ git init
$ touch test.yml
$ nvim test.yml
```

It does, it starts and attaches successfully. In seeing the `git init` command above, I thought I'd mention that the YAML file that made me report this issue is in a git submodule. Just tried it with a new YAML file at the root of the parent git repo and it attached just fine, completion working too.

I think the `Unable to find executable` message in :LspInfo is not always 100% true - sometimes it has a tendency to report that the executable was not found when the issue is something else. I believe one cause is when the server fails to properly start - can you find anything of interest in the LSP logs?

```vim
exe 'tabnew ' .. luaeval("vim.lsp.get_log_path()")
```
gharchive/issue
2022-02-11T22:20:05
2025-04-01T06:40:56.121985
{ "authors": [ "aryzing", "williamboman" ], "repo": "williamboman/nvim-lsp-installer", "url": "https://github.com/williamboman/nvim-lsp-installer/issues/476", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1408172348
Does this support Django 4.1?

Framework :: Django :: 4.0

Not tested yet, but you can try it with Django 4.0 and raise any issues here.
gharchive/issue
2022-10-13T17:23:51
2025-04-01T06:40:56.130005
{ "authors": [ "anjanesh", "msantoshk" ], "repo": "willmeyers/django-bunny-storage", "url": "https://github.com/willmeyers/django-bunny-storage/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1876383493
Gpu allocation tests Added a series of tests for the to(device) functionality across several of our LinearOperators. I think you want from linalg.operator_market import op_names, get_test_operator rather than from test.linalg.operator_market import op_names, get_test_operator. (see e.g. the example tests in test_decomps.py)
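cola's actual operator classes and test fixtures aren't shown in this PR excerpt, so the names below (`DenseOperator`, the `devices` list) are hypothetical stand-ins; this only sketches the general pattern that `to(device)` tests for linear operators tend to follow:

```python
import pytest
import torch

class DenseOperator:
    """Hypothetical stand-in for a LinearOperator wrapping a dense matrix."""

    def __init__(self, A: torch.Tensor):
        self.A = A

    def to(self, device):
        # Moving the operator should move every tensor it owns.
        return DenseOperator(self.A.to(device))

# Only parametrize over devices that are actually available.
devices = ["cpu"] + (["cuda"] if torch.cuda.is_available() else [])

@pytest.mark.parametrize("device", devices)
def test_to_device(device):
    op = DenseOperator(torch.eye(3)).to(device)
    # After the move, the operator's tensors should live on the target device.
    assert op.A.device.type == device
```

The `cuda.is_available()` guard keeps the suite green on CPU-only CI while still exercising GPU allocation when a device is present.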
gharchive/pull-request
2023-08-31T22:17:09
2025-04-01T06:40:56.148249
{ "authors": [ "AndPotap", "mfinzi" ], "repo": "wilson-labs/cola", "url": "https://github.com/wilson-labs/cola/pull/37", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1421607222
Customize settings per container

Apologies if I am missing something in the documentation or examples, but is there a straightforward way to have per-container settings (for example, a different set of Allowed Groups) without duplicating common settings such as the LDAP URL?

Hi @MrNova111,

Thanks for your interest in ldapAuth.

> Apologies if I am missing something in documentation or examples, but is there a straight forward way to have per container settings (for example, a different set of Allowed Groups) without duplicating common settings such as LDAP URL?

Unfortunately, there isn't. If you try to overwrite the middleware configs, traefik will return an error like this:

```
traefik | time="2022-10-25T13:17:18Z" level=error msg="Middleware defined multiple times with different configurations in [...]" providerName=docker middlewareName=ldap_auth
```

I believe I may have figured out a solution that uses Go templating. In my configuration file I defined a template that contains all my common settings, and then created a middleware instance for each container router that references the common template:

```yaml
{{define "ldapTemplate"}}Url: ldaps://example.org{{end}}

{{define "ldapConfig"}}
http:
  middlewares:
    ui-ldapAuth:
      plugin:
        ldapAuth:
          LogLevel: DEBUG
          {{template "ldapTemplate"}}
          AllowedGroups:
            - groupA
    web-ldapAuth:
      plugin:
        ldapAuth:
          LogLevel: DEBUG
          {{template "ldapTemplate"}}
          AllowedGroups:
            - groupB
{{end}}

{{template "ldapConfig"}}
```

Then I simply assign each container service its own middleware:

```yaml
version: '3.5'

services:
  traefik:
    image: traefik:v2.9
    volumes:
      - ./traefik.yml:/etc/traefik/traefik.yml:ro
      - ./ldapAuth-conf.yml:/dynamic-conf/ldapAuth-conf.yml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro

  ui:
    labels:
      - traefik.enable=true
      - traefik.http.routers.ui.rule=Host(`ui.localhost`)
      - traefik.http.routers.ui.tls=true
      - traefik.http.routers.ui.middlewares=ui-ldapAuth@file

  web:
    labels:
      - traefik.enable=true
      - traefik.http.routers.web.rule=Host(`web.localhost`)
      - traefik.http.routers.web.tls=true
      - traefik.http.routers.web.middlewares=web-ldapAuth@file
```

Glad to know that worked for you. Just for future reference, the docs about traefik's Go templating can be found here.
gharchive/issue
2022-10-24T23:24:29
2025-04-01T06:40:56.152303
{ "authors": [ "MrNova111", "wiltonsr" ], "repo": "wiltonsr/ldapAuth", "url": "https://github.com/wiltonsr/ldapAuth/issues/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1014635665
Console use is not allowing WindiCSS 3.1.8 to work in the browser any longer

I'm trying to use WindiCSS in the browser, similar to this: https://github.com/antfu/windicss-runtime-dom

But anytime I import the Processor, I immediately get an `Uncaught ReferenceError: process is not defined`.

Repo: https://github.com/JohnCampionJr/vitesse-windicss-browser (just added a couple of lines trying to bring in Windi)

This is something in the changes between 3.1.8 and 3.1.7. Reverting to 3.1.7 makes the problem go away.

It is from the use of Console here: https://github.com/windicss/windicss/commit/a042e030b87f37bea4a939f4fe92713920a3f9e1#diff-bbb20b2922ba3b91e8b0b876d68f8f379f263fe754b88ebb5b5b3079983fec6e

Added in this commit: https://github.com/windicss/windicss/pull/426

Need a better way to warn users....

Scratch that, just saw PR #488

Released as v3.1.9
gharchive/issue
2021-10-04T01:31:48
2025-04-01T06:40:56.169426
{ "authors": [ "JohnCampionJr", "antfu" ], "repo": "windicss/windicss", "url": "https://github.com/windicss/windicss/issues/489", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
498972402
Old pod logs show up on startup

repro:
1. tilt up fortune
2. change the startup log message
3. kill tilt
4. tilt up fortune
5. (potentially repeat a few times)

observed: we get the fortune build output, followed by the pod output from multiple fortune pods:

```
fortune ┊ STEP 3/3 — Deploying
fortune ┊ │ Injecting images into Kubernetes YAML
fortune ┊ │ Applying via kubectl:
fortune ┊ │   matt-fortune:deployment
fortune ┊
fortune ┊ │ Step 1 - 0.738s
fortune ┊ │ Step 2 - 0.000s
fortune ┊ │ Step 3 - 0.227s
fortune ┊ │ Done in: 0.965s
fortune ┊
fortune ┊ 2019/09/23 16:21:19 Starting Fortune Service on :8082
fortune ┊ 2019/09/23 16:22:55 Starting Fortune Service on :8082!!
fortune ┊ 2019/09/23 16:24:23 Starting Fortune Service on :8082
```

expected: we get the fortune build output followed by the pod output from the current fortune pod.

It's possible it's fine to show pod output from the previous fortune pod, but:
- it should be prior to the build log in the tilt ui, since it preceded it chronologically. its current position following the build is very confusing
- I don't think there's an argument for showing pod logs for pods that never existed at the same time as Tilt (and I'm kind of surprised / unsure how Tilt's even managing to get them)

This is observable more dramatically if the service was in a crash loop (you'll get startup logs from every crash!) or if the service had logged a lot (@jazzdan reported tilt was taking a lot of cpu dealing with old logs)

Originally written by @landism

> I don't think there's an argument for showing pod logs for pods that never existed at the same time as Tilt (and I'm kind of surprised / unsure how Tilt's even managing to get them)

Silly me. For this particular repro, these logs are simply coming from the pod that is running when Tilt starts. It has multiple startup messages because there were multiple live updates to the same pod. I'm not sure what a good solution to this is.

fixed by #2287
gharchive/issue
2019-09-26T15:50:19
2025-04-01T06:40:56.200152
{ "authors": [ "jazzdan", "landism", "nicks" ], "repo": "windmilleng/tilt", "url": "https://github.com/windmilleng/tilt/issues/2263", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
998891144
GitHub action to include TypeScript linting Add a step in the master-push-pull GitHub action to perform TypeScript linting, propagating the status to fail || pass the build accordingly Now completed! https://github.com/windranger-io/windranger-solidity-template/blob/main/.github/workflows/master-push-pull.yml contains npm run lint, which currently lints the TypeScript
gharchive/issue
2021-09-17T03:31:15
2025-04-01T06:40:56.232550
{ "authors": [ "CjHare" ], "repo": "windranger-io/windranger-solidity-template", "url": "https://github.com/windranger-io/windranger-solidity-template/issues/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
989092498
I want LaTeX files to auto-pair {} after entering a function, but now it auto-pairs ()

Using Neovim and this plugin, I need to make the plugin adapt to LaTeX, so that it auto-inserts {} instead of (), like the picture below.

https://github.com/windwp/nvim-autopairs/blob/master/lua/nvim-autopairs/completion/cmp.lua

You need to modify that line. If every function in LaTeX will insert {, then you can make a PR :+1:

Can you help modify it to achieve this goal?

Hope to get your support!

Would it be possible to disable map_complete per filetype? I would like it active for everything but latex.
gharchive/issue
2021-09-06T11:53:40
2025-04-01T06:40:56.239156
{ "authors": [ "ChristianChiarulli", "flaviusbuffon", "windwp" ], "repo": "windwp/nvim-autopairs", "url": "https://github.com/windwp/nvim-autopairs/issues/124", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1778797922
Can we have the same deletion behavior as jiangmiao/auto-pairs?

Is your feature request related to a problem? Please describe.
For some pair of brackets where the only characters between them are spaces or newlines, when deleting the opening bracket, it's best to also delete the closing bracket. E.g.:

```
{|
}
```

where | is the current cursor; after pressing <BS>, it's expected to also delete the closing bracket.

Describe the solution you'd like
Continuously check the characters from the current cursor to the closing bracket; if there are only spaces and newlines, enable deleting in pairs.

Describe alternatives you've considered
None

I tried that plugin because of this issue @TroySigX... after I did that it doesn't even seem to backspace across newlines. When you use that plugin, does it actually do that?

Yes, it does delete across newlines. Here's my config:

```lua
require('nvim-treesitter.configs').setup({
    endwise = {
        enable = true,
    },
    autotag = {
        enable = true,
    },
})

require('npairs-int-upair').setup({
    bs = 'u',
    map = 'n',
})

local Rule = require('nvim-autopairs.rule')
local npairs = require('nvim-autopairs')
local cond = require('nvim-autopairs.conds')

npairs.add_rules({
    Rule('$', '$', { 'tex', 'latex' }):with_move(cond.none()):with_del(cond.done()):with_cr(cond.done()),
})
```
gharchive/issue
2023-06-28T12:01:58
2025-04-01T06:40:56.242558
{ "authors": [ "9mm", "TroySigX" ], "repo": "windwp/nvim-autopairs", "url": "https://github.com/windwp/nvim-autopairs/issues/369", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1915027255
Is italic / boldItalic text supported?

Hi, I wanted to ask if it is possible to italicize the text displayed. I am trying to do something like this:

```lua
foo_component = {
    name = "foo",
    text = function()
        return { "foo", { "yellow", "ActiveBg", "bold,italic" } }
    end,
}
```

So I want the text foo to be displayed in a bold-italic font. Unfortunately this doesn't work with my config (the text is displayed in a regular font). Changing it to plain "bold" however (like in the evil-line example) works just fine. Am I missing something, or is "italic" / "bold,italic" text not yet supported?

It is not supported yet :). Maybe you can create your own highlight with nvim_set_hl and use that name on the component. nvim_set_hl has a lot of options; we only support fg, bg, and bold.
gharchive/issue
2023-09-27T08:55:37
2025-04-01T06:40:56.245348
{ "authors": [ "azolus", "windwp" ], "repo": "windwp/windline.nvim", "url": "https://github.com/windwp/windline.nvim/issues/63", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
415916823
Ask a question: Can winston provide an API that reopens the log file?

My usage scenario: I use pm2 to start my Node.js service, for example 1 master and 4 workers, and use winston to write to the same log file in each worker (for example, the file name is access.log).

Can I use logrotate on a Linux system to rotate the log (without copytruncate), so that each time logrotate creates a new log file and renames the old one, I notify each worker to reopen the log file?

+1 I'm in the exact same boat. Is there a way to reopen a log?
gharchive/issue
2019-03-01T02:50:58
2025-04-01T06:40:56.258905
{ "authors": [ "jarone", "petef19" ], "repo": "winstonjs/winston", "url": "https://github.com/winstonjs/winston/issues/1608", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1694074155
After Update - An exception has been thrown during the rendering of a template

Winter CMS Build
dev-develop

PHP Version
8.1

Database engine
MySQL/MariaDB

Plugins installed
No response

Issue description
I just updated a working Winter CMS 1.2.1 with the composer update and winter:up commands, and it's broken now. The last update I did was about two months ago.

Steps to replicate
I get the error An exception has been thrown during the rendering of a template ("Undefined array key "about"") in "C:\laragon\www\winter3\themes\mytheme\partials\intro.htm" at line 3.

intro.htm points to hero1.htm, which has a Post List component:

```ini
[viewBag]
[blogPosts]
pageNumber = "{{ :page }}"
postsPerPage = 7
noPostsMessage = "No posts found"
sortOrder = "random"
categoryPage = "about"
postPage = "blog-post"
==
```

As I understand it, the error is pointing to the categoryPage = "about" part. Before the update, it was working.

Workaround
No response

@mjauvin I think this issue is caused by the recent refactoring / improvements, are you able to replicate this at all?

Sorry, can't reproduce this. @01Kuzma are you able to upload a basic version of your theme that contains just enough to replicate this issue in a fresh install of the winter develop branch and the latest blog plugin?

@petehalverson side note, would really love to have Octodock back in some capacity 😉

Maybe this one will help... filmustudija.zip

I tried your theme, but it is in an unusable state, plugins/partials missing, layout half beaten to death... I managed to remove the cruft to make it work, and didn't get the error you reported. Please submit a theme in a usable state and a procedure to replicate your issue with it.

@mjauvin, that's strange. I'm getting some other errors with it, the frontend is not even loading, it's throwing errors... OK, I will try to remake it.

@mjauvin I've reviewed it. I don't know what to upload, because the theme is image-dependent (pulls them from storage); without the images, it looks empty (as you probably saw it).
I've just removed the partials with private information and excessive templates. Removing the component from the page, of course, removes this error. But I have another one: accessing the Portfolio page with two components, Post List & Category List, gives this:
Removing the Category List removes the error.
Can you show your Portfolio page? Specifically, the url and the blogCategories component settings? I use this component without any problems on my latest website.
@LukeTowers I was able to generate an error with the blogCategories component when setting an invalid slug on the component's slug property.
What generates the error is this change:
- public $currentCategorySlug;
+ public string $currentCategorySlug = '';
PHP now throws an error if you assign null to this class property because it now expects a string.
So basically, if you have the following page/component settings, it will throw an error:
url = /blog/:slug
layout = default
[blogCategories]
slug = "{{ :invalidSlug }}"
categoryPage = "blog"
==
Notice the {{ :invalidSlug }} assigned to the component's slug property when it should be {{ :slug }}
I suspect it's possible to trigger similar errors in other blog components as well because of the extra property validation that was added. This is not necessarily a bad thing, but might break badly written themes.
@mjauvin , here it is:
title = "Portfolio"
url = "/portfolio/:page?"
layout = "default"
meta_description = "Desc..."
is_hidden = 0
[blogPosts]
pageNumber = "{{ :page }}"
categoryFilter = "{{ :slug }}"
postsPerPage = 10
noPostsMessage = "Įrašų nerasta"
sortOrder = "published_at desc"
categoryPage = "blog-category"
postPage = "blog-post"
[blogCategories]
slug = "{{ :slug }}"
displayEmpty = 0
categoryPage = "blog-category"
==
{% set posts = blogPosts.posts %}
Just change:
[blogCategories]
slug = "{{ :slug }}"
To:
[blogCategories]
slug = "{{ :page }}"
To solve your issue.
@LukeTowers should we change the component like this to restore original behavior ?
diff --git a/components/Categories.php b/components/Categories.php
index 10a958c..b0609e3 100644
--- a/components/Categories.php
+++ b/components/Categories.php
@@ -17,12 +17,12 @@ class Categories extends ComponentBase
     /**
      * Reference to the page name for linking to categories.
      */
-    public string $categoryPage = '';
+    public ?string $categoryPage = '';
 
     /**
      * Reference to the current category slug.
      */
-    public string $currentCategorySlug = '';
+    public ?string $currentCategorySlug = '';
 
     public function componentDetails(): array
    {
@mjauvin it solves the portfolio issue. Thank you!
Why did this happen? I've created this theme a long time ago based on some tutorials, as I remember.
It happens because there are errors in your theme and the last update to the plugin introduced property validation for the components.
And how to fix the main problem? What should I change here? "\partials\intro.htm" at line 3 is pointing to hero-slider/hero1.htm with a Post List component, which is:
[viewBag]
[blogPosts]
pageNumber = "{{ :page }}"
postsPerPage = 7
noPostsMessage = "No posts found"
sortOrder = "random"
categoryPage = "about"
postPage = "blog-post"
==
Please, always give the full settings section of the page you ask for help with, otherwise it's hard to help.
@mjauvin , sorry, I have edited the last post.
@mjauvin , any thoughts regarding the last issue?
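For context on why the nullable change in the diff matters: since PHP 7.4, assigning null to a non-nullable typed property throws a TypeError at runtime, which is exactly what an unmatched route parameter like {{ :invalidSlug }} ends up doing. A minimal standalone sketch (hypothetical class, not the plugin's actual code):

```php
<?php

class Component
{
    public string $slug = '';           // non-nullable: assigning null throws
    public ?string $categoryPage = '';  // nullable: assigning null is fine
}

$c = new Component();
$c->categoryPage = null;                // OK

try {
    $c->slug = null;                    // what a missing ":slug" param produces
} catch (TypeError $e) {
    echo "TypeError: " . $e->getMessage() . "\n";
}
```

This is why relaxing the declarations back to ?string restores the pre-update behavior for badly wired themes.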
gharchive/issue
2023-04-29T17:19:22
2025-04-01T06:40:56.275050
{ "authors": [ "01Kuzma", "LukeTowers", "mjauvin" ], "repo": "wintercms/wn-blog-plugin", "url": "https://github.com/wintercms/wn-blog-plugin/issues/38", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1145827443
Archunit 0.23.0 Closes #1815. Method references are now picked up, leading to fewer false positives in detecting unused methods. Required a minor code change to now use JavaCodeUnitAccess. Updated the store to only freeze the (for now disabled) unused-methods test, as the unused-public-methods test now no longer has a false positive. We can reconsider whether that test needs to remain disabled, as there should no longer be false positives. Ideally and eventually we can remove truly unused methods through #1702; until then this minimal change allows us to pick up new releases of ArchUnit. I'll try and have a look today, but @tomakehurst will still need to do the honours of merging as I've not yet got commit access ☺ I think this LGTM - happy for Tom to have another pair of eyes and merge when ready :+1: Discussed outside GitHub; this one is good to go it seems! :)
gharchive/pull-request
2022-02-21T13:51:03
2025-04-01T06:40:56.344897
{ "authors": [ "jamietanna", "timtebeek" ], "repo": "wiremock/wiremock", "url": "https://github.com/wiremock/wiremock/pull/1816", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
257793308
dsSend(name, group, data) not working? Hi again. Here is my code: Sender: https://pastebin.com/dVFaGdp2 Receiver: https://pastebin.com/DuRpP8GG No print out message from receiver. Also I tried with Indicators, but the same. Am I doing something wrong? Again issue was because of world saving system.
gharchive/issue
2017-09-14T17:13:14
2025-04-01T06:40:56.346670
{ "authors": [ "NaveNO" ], "repo": "wiremod/wire", "url": "https://github.com/wiremod/wire/issues/1466", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1403010789
Feature/zwave module Zhenya asked to add the wbe zwave module to the webui. I haven't run it on real hardware, since I don't have any. I'll either ask Katya to test it, or find some Z-Wave hardware myself. @wb-adegtyarev will help with Z-Wave hardware. It definitely worked for Sasha on 2207. From the hwconf side everything is OK, this can be merged.
gharchive/pull-request
2022-10-10T11:20:47
2025-04-01T06:40:56.350024
{ "authors": [ "vdromanov" ], "repo": "wirenboard/wb-hwconf-manager", "url": "https://github.com/wirenboard/wb-hwconf-manager/pull/96", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2221151508
Rework Markdoc info hierarchy
Description (required)
Restructure table-of-contents with clearer h2 sections. Before, everything was nested under Configuration. This breaks out related sections, and moves higher-traffic content (ex. how to use UI components) further up.
Before | After
Related issues & labels (optional)
Closes #
Suggested label:
@bholmesdev the table of contents does indeed look much nicer! Just because the diff is going to look terrible here, and not really reflect what you actually did, is the section on Partials the only new/changed content (other than reordering)? This will save me a bunch of close reading trying to figure out what content actually did change, and will also help the translators who will have to make sense of this PR when they update in all the other languages. :smile:
Yes, apologies! Let's get the Partials PR reviewed first, then rebase this PR once it is merged. That way we don't have to untangle new content from reorganization.
@sarah11918 Okay, rebased and ready for review!
Great! Assuming this is just reorganization of existing content for flow, this now should be an easier read!
gharchive/pull-request
2024-04-02T18:22:11
2025-04-01T06:40:56.414107
{ "authors": [ "bholmesdev", "sarah11918" ], "repo": "withastro/docs", "url": "https://github.com/withastro/docs/pull/7744", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2242286657
i18n(fr): Updating guides/backend/supabase.mdx from #7767
Description (required)
Updating guides/backend/supabase.mdx from #7767
Related issues & labels (optional)
Closes #
Suggested label: i18n
Lunaria Status Overview 🌕
This pull request will trigger status changes. Learn more
By default, every PR changing files present in the Lunaria configuration's files property will be considered and trigger status changes accordingly. You can change this by adding one of the keywords present in the ignoreKeywords property in your Lunaria configuration file in the PR's title (ignoring all files) or by including a tracker directive in the merged commit's description.
Tracked Files
File: src/content/docs/fr/guides/backend/supabase.mdx | Note: Localization changed, will be marked as complete. | Locale: fr
Warnings reference
🔄️ The source for this localization has been updated since the creation of this pull request, make sure all changes in the source have been applied.
gharchive/pull-request
2024-04-14T17:53:15
2025-04-01T06:40:56.421199
{ "authors": [ "astrobot-houston", "thomasbnt" ], "repo": "withastro/docs", "url": "https://github.com/withastro/docs/pull/7891", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2187336664
Update showcase-sites.astro
Add DipSway starlight website as showcase example
Hello! Thank you for opening your first PR to Starlight! ✨
Here's what will happen next:
1. Our GitHub bots will run to check your changes. If they spot any issues you will see some error messages on this PR. Don't hesitate to ask any questions if you're not sure what these mean!
2. In a few minutes, you'll be able to see a preview of your changes on Vercel 🤩
3. One or more of our maintainers will take a look and may ask you to make changes. We try to be responsive, but don't worry if this takes a few days.
gharchive/pull-request
2024-03-14T21:59:58
2025-04-01T06:40:56.424074
{ "authors": [ "astrobot-houston", "fl0wo" ], "repo": "withastro/starlight", "url": "https://github.com/withastro/starlight/pull/1618", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2512268986
Convert URL to file path correctly for Git virtual module
Description
Closes #2302
Correctly resolves a path to an internal module to handle file paths with special characters.
size-limit report 📦
Path           Size
/index.html    6.15 KB (0%)
/_astro/*.js   22.36 KB (0%)
/_astro/*.css  13.72 KB (0%)
gharchive/pull-request
2024-09-08T08:40:14
2025-04-01T06:40:56.426839
{ "authors": [ "astrobot-houston", "delucis" ], "repo": "withastro/starlight", "url": "https://github.com/withastro/starlight/pull/2303", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
386549370
Archive table
Enhancements
[x] Row index
[x] Rowsorter
[x] Multicolumn priority support for Rowsorter
[x] Pick and choose the shown columns
New Requirement:
Multi column support standard style with icons
gharchive/issue
2018-12-02T11:05:17
2025-04-01T06:40:56.431656
{ "authors": [ "witmoca" ], "repo": "witmoca/BEATs", "url": "https://github.com/witmoca/BEATs/issues/11", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1876106367
Fix Firefox Display issues If it works it works? Tested on Mobirise 4 & 5.
gharchive/pull-request
2023-08-31T18:52:01
2025-04-01T06:40:56.436820
{ "authors": [ "Stage4000" ], "repo": "witsec/mobirise-white-label", "url": "https://github.com/witsec/mobirise-white-label/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1558541734
When I try to install mocha, the configuration is not for mocha but for jest:
What happened?
Precondition:
detox is installed in devDeps
mocha is installed in devDeps
detox-cli is installed globally
When I try to install mocha, the generated configuration is for jest, not mocha. Output below.
npx detox init -r mocha
Created a file at path: .detoxrc.js
Created a file at path: e2e/jest.config.js
Created a file at path: e2e/starter.test.js
What I tried: different installations of the lib, by npm and yarn as well; I mean that I tried both yarn and npm for all libs.
What was the expected behaviour?
No response
Was it tested on latest Detox?
[X] I have tested this issue on the latest Detox release and it still reproduces.
Help us reproduce this issue!
No response
In what environment did this happen?
Detox version: 20.1.2
React Native version:
Has Fabric (React Native's new rendering system) enabled: (yes/no)
Node version: v14.17.6
npm: 8.9.0
yarn: 1.22.18
Test-runner (select one): jest / mocha
Detox logs
No response
Device logs
No response
More data, please!
No response
Detox 20+ does not support Mocha. There is a discussion in https://github.com/wix/Detox/issues/3772 if someone wants to try to create a third-party integration detox-mocha.
gharchive/issue
2023-01-26T17:57:07
2025-04-01T06:40:56.447376
{ "authors": [ "jstawow", "noomorph" ], "repo": "wix/Detox", "url": "https://github.com/wix/Detox/issues/3876", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
730542796
Expand retry() API and improve exec() retry-logging
[x] This is a small change
[ ] This change has been discussed in issue #<?> and the solution has been agreed upon with maintainers.
Description:
Associated with the ongoing work on Genymotion-Cloud integration (#2429), where logging of command failures, and better-retry control, are required.
will reopen soon
gharchive/pull-request
2020-10-27T15:12:00
2025-04-01T06:40:56.449182
{ "authors": [ "d4vidi" ], "repo": "wix/Detox", "url": "https://github.com/wix/Detox/pull/2434", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
232961112
FYI: All multiline textinputs are autogrowing by default on iOS LMK if you have questions. I will also work on same functionality on Android, if you have ideas how it should be implemented, please share them with me. @shergin that's great news! thanks for the update! This means that I can just remove the manual height handling for iOS... A couple of questions: In what version of RN did this feature become available? Can it be controlled via some prop? (turn it off/on for example) Just published version 4.0.0 which uses the default RN implementation for auto expanding, Hopefully at some point this will be supported on Android as well so it can be simplified and all other hacks can be removed.
gharchive/issue
2017-06-01T17:59:20
2025-04-01T06:40:56.451740
{ "authors": [ "artald", "shergin" ], "repo": "wix/react-native-autogrow-textinput", "url": "https://github.com/wix/react-native-autogrow-textinput/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
282685136
Signature help for mixins crashes when value includes '(' or ')' Either in a string or as a mixin param list. Fixed with postcss-value-parser
gharchive/issue
2017-12-17T10:49:27
2025-04-01T06:40:56.458859
{ "authors": [ "tempit" ], "repo": "wix/stylable-intelligence", "url": "https://github.com/wix/stylable-intelligence/issues/144", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
124976985
can't load .sql files from unexploded jars in classPathFiles
I'm using embedded-mysql. There is a function classPathFile that basically does new File(resource.toURI). This doesn't work for me when running Maven, because everything is in a jar, so there is no file on the filesystem that it can access. My jar depends on a test-utils jar in test scope, and that jar has the MySQL init code in main. This can be solved using the Java 8 filesystem.
do you have your files packed in the classes resources or test resources ? basically we load files from the classpath so if they are on other jars it's still possible to read them.
you load them by doing something equivalent to new File(resource.URI). if the uri is internal to a jar, this doesn't work. you can work around this using the new java8 filesystem abstraction.
my files are packed in main/resources of a jar that i depend on in test scope, so it comes as an unexploded jar.
@grunzwei - ok, so I might just have to loadResourceAsStream() instead. Will have to add a test to verify it. What I don't want to do yet is to make this library bound to java 8 - there is no real reason for that, so why not leave the door open for poor java 7 users:)
@viliusl had we used guava we wouldn't have this issue, no ? ;)
Haven't checked. Problem with guava I had was that I could not use the most recent version due to conflicts with the framework, but maybe even a not-most-recent one would cut it. I will play with it once I have time. Hopefully next week.
@viliusl any progress on that? I'm trying to migrate projects to wix-embedded-mysql and it looks like it is an issue.
@hugebdu https://github.com/wix/wix-embedded-mysql/commit/48a0ebc93799eaf48af81bc808f54ba67ea5a044
@hugebdu and the little script:
def loadResources(path: String): Seq[String] = {
  if (path.isEmpty) {
    Nil
  } else {
    val resources = new PathMatchingResourcePatternResolver().getResources(path)
    resources.sortBy(_.getFilename).map(r => IOUtils.toString(r.getInputStream, java.nio.charset.Charset.forName("UTF-8")))
  }
}
Use like
reloadSchema(aSchemaConfig... withCommand(loadResources("classpath:*.sql")))
@dkomanov your commit is not yet merged, right?
My PR merged: https://github.com/wix/wix-embedded-mysql/pull/43
@hugebdu - I did an impl for this (https://github.com/wix/wix-embedded-mysql/commits/scripts-in-jar), but still thinking on naming and apis - which should be deprecated/supported. So @dkomanov's solution will work for you right now.
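For anyone landing here later, the underlying limitation is that new File(resource.toURI()) only works for file: URIs; resources inside an unexploded jar (or the Java 9+ runtime image) have jar:/jrt: URIs. Reading the resource as a stream works in both cases. A standalone sketch, not this library's actual API:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class ClasspathResource {
    /**
     * True if the resource can be opened and read via a stream.
     * Works whether it lives in a directory or inside a jar.
     */
    public static boolean readableAsStream(String path) {
        try (InputStream in =
                 ClasspathResource.class.getClassLoader().getResourceAsStream(path)) {
            return in != null && in.read() >= 0;
        } catch (IOException e) {
            return false;
        }
    }

    /** True only when the resource is a plain file on disk (file: URL). */
    public static boolean isPlainFile(String path) {
        URL url = ClasspathResource.class.getClassLoader().getResource(path);
        return url != null && "file".equals(url.getProtocol());
    }

    public static void main(String[] args) {
        // String.class ships inside the runtime image/jar, not as a loose file:
        System.out.println(readableAsStream("java/lang/String.class")); // true
        System.out.println(isPlainFile("java/lang/String.class"));      // false
    }
}
```

The stream-based variant also avoids tying the library to the Java 8 filesystem abstraction mentioned above.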
gharchive/issue
2016-01-05T14:26:44
2025-04-01T06:40:56.471099
{ "authors": [ "dkomanov", "grunzwei", "hugebdu", "noam-almog", "viliusl" ], "repo": "wix/wix-embedded-mysql", "url": "https://github.com/wix/wix-embedded-mysql/issues/36", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
550119968
Do not extend final class TestRunner and just copy-and-paste its contents This is definitely a hack and a maintenance nightmare - but it works for now. We can look for a better alternative later, see https://github.com/wizaplace/phpunit-slicer/issues/7 I learned about phpunit-slicer from your issue over at https://github.com/sebastianbergmann/phpunit/issues/4121 I've been using this PR (git commit hash 306846a872ca4ed8c2ec8b5592cfe29d28a8d415) a few weeks now since moving over to Github Actions in a private repo and it's category "game changer" in terms of cutting down the runtime (~10k tests => went down from initial 2 suites ~15 mins to 6 slices done in ~4-5 mins). I already tried to come up with something better but ran into all the problems you probably did too and would have ended up copying the file over 🤷‍♀️ I lack the knowledge of phpunit internals (and a bit time) to dig into that. But it seems phpunit is on a pace to seal many of it's openness in order to reduce complexity and too-creative-use on their side. Likely due to all the petty support issues to have to deal with (just a speculation, reading their issues for a while gives me the impression a bit). Couldn't upgrade to phpunit9 due to version constraint in composer.json Have not yet found time (will try later) but anyone already knowing if it "just works" or not? I got it working but I was basically doing the same copypaste galore you already did 😄 It becomes pretty clear that any package trying to do this has to be creative because the way phpunit keeps changing currently you can't simply have a single code base support all those versions. Either you jump through hoops with PHPunit version checks or class_exists stuff for a single code base or the package just keeps dedicated major releases mirroring the respective PHPUnit release. 
Of course only now I realized that @spawnia was already working on this in https://github.com/mll-lab/phpunit-slicer/commits/phpunit-9 but not PR Not sure who runs the repo, last commits are from @wizacedric and @philippe-vandermoere : are you still interesting in keeping the repo, any thoughts on the matter? @mfn quick heads up, the PHPUnit 9 branch only works with version 9.2.5 (potentially lower, have not tried that). 9.2.6 breaks things again. I locked the requirement to make sure https://github.com/mll-lab/phpunit-slicer/commits/phpunit-9 keeps working for now. @spawnia thanks! I did not notice anymore as I assimilated the slicer in a private code repo already with 9.2.6 . Call that luck 😅 So we would end up continuously having to copy/paste adapt the two classes… When I assimilated the code (I need to move fast here) it turns out it's not THAT bad in this bad situation: copy over command copy over testsuite in both apply minimal changes to allow extending it (i.e. g remove final, make methods protected instead of private and remove certain return signatures keep the most relevant business logic still inside separate files So every time the release a new version it's the same over and over again… I can see myself patching this for a company driven project but not maintaining in an OSS project when other people start to depend on this, as I'll just schedule the upgrades when I've really time for this. Damn :) @spawnia @mfn Thanks for both your inputs. I don't think we can accept the PR as it is. It would create a hard link between a version of phpunit/phpunit and a version of wizaplace/phpunit-slicer, which would be a problem for most users. We would need to have as many versions of wizaplace/phpunit-slicer as there are versions of phpunit/phpunit... Can https://phpunit.readthedocs.io/en/9.2/extending-phpunit.html#extending-the-testrunner be a way to move forward on this? 
@wizacedric PHPUnit brought us into this mess by locking down their implementation, adding final and private all over the place. At the same time, they did not offer extension mechanisms that are quite flexible enough to achieve what is needed.
Can https://phpunit.readthedocs.io/en/9.2/extending-phpunit.html#extending-the-testrunner be a way to move forward on this?
Maybe using BeforeTestHook and skipping all tests that are not in the current slice? We could probably hack around and (ab-)use static properties to pass the CLI arguments and count the tests.
According to https://github.com/sebastianbergmann/phpunit/issues/4121, PHPUnit would most likely add a solution that allows extensions to control PHPUnit test selection through an XML export/import.
It would create a hard link between a version of phpunit/phpunit and a version of wizaplace/phpunit-slicer, which would be a problem for most users.
Agree it is problematic, still better than no solution at all.
We would need to have as many versions of wizaplace/phpunit-slicer as there are versions of phpunit/phpunit...
Not totally true, we would have to cover more specific ranges, down to the minor version.
Maybe using BeforeTestHook
I gave it a cursory look so I might be wrong, but the hook doesn't even receive the test, let alone return a value or signal in any way how to proceed or not. Probably would require calling \PHPUnit\Framework\Assert::markTestSkipped if we could even figure out we sliced a particular test, but since the context is basically "string name of test", I doubt that.
The whole extending phpunit chapter, whilst looking nice from above (hooks everywhere), since you basically can't "control" anything isn't really useful for us.
I could see a hacky way using global state somewhere, calculating the slice based on counting the tests executed (hopefully tests are always run in the same order…) and throwing a SkippedTestError.
I've a >10k test suite; if I make 5 slices this means I'll throw this error 40k times to skip the tests… nah, I don't think I'll even attempt this.
Want to see a "funny" coincidence? me, wanting to figure out how to properly extend via createRunner => https://stackoverflow.com/questions/63208917/how-to-extend-phpunit-textui-commandcreaterunner-in-recent-versions-of-phpuni
"Remove Command::createRunner()"
I can hardly believe (see the timestamps involved) this is a coincidence.
Might as well join efforts.
Agree, I made https://github.com/wizaplace/phpunit-slicer/pull/9 to show the intent I had in mind. It's still copypasta galore, so no real improvement over the status quo. I've yet to look further but I wonder if the concept of test filters in phpunit can be better (ab)used to "slice", see https://github.com/sebastianbergmann/phpunit/blob/ff047828b43b7ba88300372fb41943ddceb2db03/src/Runner/Filter/Factory.php#L50-L60 🤷‍♀️
How about we give in and try to get something like https://github.com/sebastianbergmann/phpunit/issues/3387 implemented in PHPUnit?
Yeah I remember it was mentioned in https://github.com/sebastianbergmann/phpunit/issues/4121#issuecomment-589594578 ;)
Btw. this issue links to https://github.com/sebastianbergmann/phpunit/pull/3605 which I wasn't really aware of. And I thought "aha, 'phpunit chunk'" and voila https://www.google.com/search?hl=en&q=phpunit chunk => https://github.com/jwage/phpchunkit
But basic installation fails, all dependencies are so outdated I can't install it with L7/PHPUnit9. Not sure if @jwage is still active in this area?
After having to go through the pain recently again of adapting sources for PHPUnit 9.3, I finally gave it a stab at
I'm happy (…) to report that the copypaste approach still works with PHPUnit 9.5 :}
I tried https://github.com/wizaplace/phpunit-slicer/pull/10 but I couldn't get it working, left comments there.
Only now I saw I got feedback in my phpunit PR, which I was not aware of; will look at this ASAP too.
Since the signs are not good that the copypaste approach will continue to work as phpunit is changing their internals faster than I change my underwear and https://github.com/sebastianbergmann/phpunit/pull/4449 also doesn't show any movement, I devised a new strategy. I don't think I'm the first to come up with this but TBH I have not seen this solution somewhere else:
1. create a list of the tests in XML format from phpunit:
phpunit --list-tests-xml all_tests.xml
2. Use a script to "splice" this XML into smaller fragments, based on the idea phpunit-slicer -> phpunit_xml_slicer.php
phpunit_xml_slicer.php all_tests.xml 2/10 > slice_2_10.xml
3. Use yet another script phpunit_xml_class_to_file.php which takes the sliced XML and:
builds a map of all the test classes using the composer.json
matches them for their PSR-4 namespace and uses this to convert them to files
replaces any existing testsuites purely with a single suite with all the files from the referenced test classes from the sliced XML
phpunit_xml_class_to_file.php composer.json phpunit.xml.dist slice_2_10.xml > phpunit-ci.xml
4. Use the phpunit-ci.xml to run phpunit in CI, which now only contains the <file>s from the sliced XML:
phpunit --configuration phpunit-ci.xml
This sounds involved, but it's just a few lines added to e.g.
Github Action step and presto, you can get almost the same benefit as from phpunit-slicer, except now it's not depending on any PHPUnit internals anymore; pseudo example:
jobs:
  phpunit:
    strategy:
      matrix:
        phpunit-slices: ['1/6', '2/6', '3/6', '4/6', '5/6', '6/6']
    steps:
      # … other steps before
      - name: 'phpunit: export all tests as XML'
        run: vendor/bin/phpunit --list-tests-xml all_tests.xml
      - name: 'phpunit: slice tests ${{ matrix.phpunit-slices }}'
        run: phpunit_xml_slicer.php all_tests.xml ${{ matrix.phpunit-slices }} > slice.xml
      - name: 'phpunit: convert classes to files and inject them back into phpunit XML config'
        run: phpunit_xml_class_to_file.php composer.json phpunit.xml.dist slice.xml > phpunit-ci.xml
      - run: vendor/bin/phpunit --configuration phpunit-ci.xml
Biggest difference
The approach used by phpunit-slicer can operate on the "test method" level, i.e. very fine grained. (Not sure how it handles @dataprovider)
The approach outlined above operates on the "test class" level, i.e. it's much more coarse.
I had concerns that this would create an imbalance in the test suite I'm testing this with (~15k tests), so that some slices would run much longer than others, but it turns out in practice the difference was not noticeable. Might be a lucky case for me though, YMMV.
Why?!
I was exploring https://laravel.com/docs/8.x/testing#running-tests-in-parallel the other day and I could not really use phpunit-slicer for this, ran into all sorts of issues and finally took another stab at it.
What about the phpunit PR you mentioned?
sebastianbergmann/phpunit#4449
Not sure when this progresses, but this would be the ideal case for a solution here: it would be as fine grained as phpunit-slicer is, i.e.
operate on the "individual test" level (including @dataprovider)
paratest (the underlying tool for Laravel's parallel testing) signaled they would support the above PR too => https://github.com/paratestphp/paratest/issues/556#issuecomment-741632177 which means I would expect adding it to Laravel would be at least technically possible too
Let's see \o/
I spent already way more on these kinds of topics than I ever thought I would, happy to hear about other perspectives / solutions / approaches 😄
Looks like @mfn and me maintain something like this on the side anyways. It is a bit ugly, but not too much of a hassle. Might as well join efforts.
@spawnia it's PHP, and it's open source. Why not fork it, then simply remove the final keyword and replace private with protected? It seems that's the only way in PHP, because PHP doesn't support overloading.
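Going back to the XML-slicing strategy described earlier in the thread: the phpunit_xml_slicer.php script itself is never shown, but its core is just a round-robin partition of the listed tests. A purely illustrative sketch (the function name and input shape are assumptions, not the author's actual script):

```php
<?php

/**
 * Round-robin slice: for spec "2/10", keep every 10th item starting
 * at offset 1 (slice indexes are 1-based, like the CLI argument).
 */
function sliceRoundRobin(array $items, string $spec): array
{
    [$index, $total] = array_map('intval', explode('/', $spec));

    $kept = [];
    foreach (array_values($items) as $i => $item) {
        if ($i % $total === $index - 1) {
            $kept[] = $item;
        }
    }
    return $kept;
}

// e.g. sliceRoundRobin(['A', 'B', 'C', 'D', 'E'], '2/2') yields ['B', 'D']
```

The real script would additionally parse and re-serialize the --list-tests-xml output around this partitioning step.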
I'm facing the same problem as yours. Long run times and inconsistent test results make me nuts. It's OK when I run manually by group and filter, but I found tons of errors and failures when running all at once.
For the time being, I will patch the code into phpunit directly.
gharchive/pull-request
2020-01-15T11:09:13
2025-04-01T06:40:56.533378
{ "authors": [ "mfn", "spawnia", "steamboatid", "wizacedric" ], "repo": "wizaplace/phpunit-slicer", "url": "https://github.com/wizaplace/phpunit-slicer/pull/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1846605595
gui: installer: convey registering on a signing device isn't always required As discussed in #545, in some cases we nudge the user to register the descriptor on their signing device although they might not have one. Those cases only ever arise when importing a descriptor (either when recovering from backup or participating in the creation of a descriptor on another laptop), since when the descriptor is created beforehand we can simply detect whether a signing device was used and thereby needs to be registered (implemented since https://github.com/wizardsardine/liana/pull/470). Therefore, detect when the registration step arises as part of an import process and if so adjust the language to convey registration on a signing device may not be necessary. Result: Fixes #545. In the future we could ask them beforehand whether they'll be using a signing device and only show them this step if so. But until we do that it's a minimal patch for the current misleading behaviour. When importing the descriptor, user needs to store again the ledger HMAC. I am ok to make the step more easy to skip for the import descriptor process, but user with ledger will have to do it. I don't understand your comment in the context of this PR?
gharchive/pull-request
2023-08-11T10:44:34
2025-04-01T06:40:56.544378
{ "authors": [ "darosior", "edouardparis" ], "repo": "wizardsardine/liana", "url": "https://github.com/wizardsardine/liana/pull/606", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1744756185
Add a new panel Hi. I wanted to see if the Marzban panel could be added to wizwiz? Your bot is practically designed for selling, but it doesn't support the best panel for selling! The Marzban admins are really decent, cool people, and they don't push lots of pointless updates that would make your work on the bot harder. If possible, please add this panel; it's the only Iranian panel that supports the central panel (node) feature. A lot of Iranian users use this panel, and it's really bad that your bot doesn't support it. Thanks again for your efforts. Hi. Unfortunately no, it won't be added.
gharchive/issue
2023-06-06T23:17:25
2025-04-01T06:40:56.550136
{ "authors": [ "hadi7000", "wizwizdev" ], "repo": "wizwizdev/wizwizxui-timebot", "url": "https://github.com/wizwizdev/wizwizxui-timebot/issues/267", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1695043936
IDPay gateway Adding the IDPay gateway. If IDPay finds out you are selling VPNs, it may shut the gateway down. All payment gateways have problems.
gharchive/issue
2023-05-04T00:59:35
2025-04-01T06:40:56.552165
{ "authors": [ "arian2240", "hossein13851212" ], "repo": "wizwizdev/wizwizxui-timebot", "url": "https://github.com/wizwizdev/wizwizxui-timebot/issues/47", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2407119116
Werror: nb_attr.h:199:14: error: ISO C++ forbids zero-size array
https://github.com/wjakob/nanobind/blob/b0136fe6ac1967cb2399456adc346a1af06a3b88/include/nanobind/nb_attr.h#L212
full stack:
include/nanobind/nb_attr.h: In instantiation of ‘struct nanobind::detail::func_data_prelim<0>’:
include/nanobind/nb_func.h:95:38: required from ‘PyObject* nanobind::detail::func_create(Func&&, Return (*)(Args ...), std::index_sequence<Is2 ...>, const Extra& ...) [with bool ReturnRef = false; bool CheckGuard = true; Func = nanobind::ndarray<float>& (*&)(nanobind::ndarray<float>&, const nanobind::ndarray<float>&); Return = nanobind::ndarray<float>&; Args = {nanobind::ndarray<float>&, const nanobind::ndarray<float>&}; long unsigned int ...Is = {0, 1}; Extra = {nanobind::scope, nanobind::name}; PyObject = _object; std::index_sequence<Is2 ...> = std::integer_sequence<long unsigned int, 0, 1>]’
include/nanobind/nb_func.h:187:37: required from ‘void nanobind::cpp_function_def(Return (*)(Args ...), const Extra& ...) [with Return = ndarray<float>&; Args = {ndarray<float>&, const ndarray<float>&}; Extra = {scope, name}]’
include/nanobind/nb_func.h:256:21: required from ‘nanobind::module_& nanobind::module_::def(const char*, Func&&, const Extra& ...) [with Func = nanobind::ndarray<float>& (&)(nanobind::ndarray<float>&, const nanobind::ndarray<float>&); Extra = {}]’
src/mod.cpp:55:8: required from here
include/nanobind/nb_attr.h:199:14: error: ISO C++ forbids zero-size array [-Werror=pedantic]
199 | arg_data args[Size];
I can turn off Werror of course, so just consider this an FYI, and close if you wish.
It was reported before; there is a big comment about it just above that line: https://github.com/wjakob/nanobind/blob/b0136fe6ac1967cb2399456adc346a1af06a3b88/include/nanobind/nb_attr.h#L181-L213
So there is; it wasn't immediately before it, so it didn't register. For your sanity you could consider: // GCC and Clang do. (See above comments for zero-length array Werror) arg_data args[Size];
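For context, the diagnostic fires because func_data_prelim<0> instantiates a zero-length member array, which ISO C++ forbids (GCC and Clang accept it as an extension, hence the -Werror=pedantic hit). One generic workaround, which is not what nanobind does since it deliberately keeps the zero-length array as the in-source comment explains, is to pad the array size so it can never be zero:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical alternative layout, NOT nanobind's actual code: pad the
// template-sized member array so it never has length zero, which
// sidesteps the -Wpedantic zero-size-array error entirely.
template <std::size_t Size>
struct padded_prelim {
    // Size + (Size == 0) evaluates to 1 when Size is 0, and to Size otherwise.
    int args[Size + (Size == 0)];
};
```

The cost is one wasted element in the Size == 0 case, which may be why nanobind chose to keep the zero-length array and document the suppression instead.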
gharchive/issue
2024-07-13T19:36:12
2025-04-01T06:40:56.558915
{ "authors": [ "PhilipDeegan", "wojdyr" ], "repo": "wjakob/nanobind", "url": "https://github.com/wjakob/nanobind/issues/639", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
703526292
More packaging: v8stdint.h clash The v8stdint.h header is used in multiple packages, and creates collisions in large namespaces like Debian. Furthermore, this file is not really needed on any Linux system, since these have stdint.h available. The enclosed patch drops v8stdint.h from the package on hosts having stdint.h available, thus resolving this issue. 0001-Avoid-using-v8stdint.h-unless-needed.patch.gz Please consider a pull request instead of a patch file: https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request Also, v8stdint.h is used conditionally on Windows only, so it should never cause an issue with Debian: https://github.com/wjwwood/serial/blob/cbcca7c83745fedd75afb7a0a27ee5c4112435c2/include/serial/v8stdint.h#L39 I'll close this since I cannot take a patch file (no git author, etc). The problem is not the usage, the problem is the very existence of this file, which typically goes into /usr/include, creating collisions with other packages. The patch is a git patch; you can use git am, which brings you the author, date, etc. I was admittedly lazy, the Debian packaging lives on GitLab... I see. Given this is a very background project for me, I still don't see myself taking it without a pull request. But the patch is useful for others who stumble here. Thanks!
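The fallback behaviour the patch describes, preferring the system <stdint.h> and only using the bundled v8stdint.h where it is missing, amounts to a preprocessor guard. An illustrative sketch; the macro test and header path here are assumptions, and the real patch's condition may differ (pre-VS2010 MSVC, i.e. _MSC_VER < 1600, is the classic platform without <stdint.h>):

```cpp
// Illustrative only: use the system fixed-width header where it exists and
// fall back to the bundled one on old MSVC, which lacked <stdint.h>.
#if defined(_MSC_VER) && _MSC_VER < 1600
#  include "serial/v8stdint.h"  // hypothetical bundled-header path
#else
#  include <stdint.h>
#endif
#include <cassert>

// Fixed-width types resolve through either branch.
int32_t answer() { return 42; }
```

On any Linux or modern toolchain the #else branch is taken, so the bundled header never needs to be installed into /usr/include, which is exactly the collision the report is about.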
gharchive/issue
2020-09-17T12:04:25
2025-04-01T06:40:56.563769
{ "authors": [ "leamas", "wjwwood" ], "repo": "wjwwood/serial", "url": "https://github.com/wjwwood/serial/issues/229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
719075952
Updateeffect does not work in version 0.3.0
Updateeffect does not work in version 0.3.0; it just adds special effects to the whole video, but there is no time limit. Can you help me?
On UpdateEffect it crashes and doesn't provide any crash log. Please provide a solution.
Using 0.3.1 fixes this bug.
Whenever I click on an effect I get this error, even on the 0.3.1 update:
Build fingerprint: 'Xiaomi/kenzo/kenzo:6.0.1/MMB29M/V10.2.1.0.MHOMIXM:user/release-keys'
Revision: '0'
ABI: 'arm'
pid: 27096, tid: 27152, name: Thread-1008 >>> com.testapp.app <<<
signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0
r0 dc19605c r1 00000016 r2 ab79acb8 r3 00000000 r4 ab1477c4 r5 ab891230 r6 ab1477c0 r7 f39a37e8
r8 00000438 r9 ab4d6148 sl ab4d6158 fp 00000004 ip ab247ff8 sp f39a3798 lr dbf1e15f pc dbf1e16e cpsr 60010030
backtrace:
#00 pc 000e816e /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity21FaceMakeupV2SubEffect11OnDrawFrameEPNS_13FaceDetectionENSt6__ndk14listIPNS_9SubEffectENS3_9allocatorIS6_EEEEiiiiiiy+173)
#01 pc 000db40b /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity6Effect11OnDrawFrameEjiiiiy+146)
#02 pc 000d8df1 /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity12ImageProcess9OnProcessEjxiiii+122)
#03 pc 000927e5 /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity12CameraRecord4DrawEv+488)
#04 pc 00093b19 /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity12CameraRecord11RenderFrameEv+16)
#05 pc 00095fa7 /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity7Message7ExecuteEv+30)
#06 pc 0009572b /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity7Handler14ProcessMessageEv+74)
#07 pc 00095611 /data/app/com.testapp.app-1/lib/arm/libtrinity.so (_ZN7trinity7Handler18MessageQueueThreadEPv+4)
#08 pc 0004185b /system/lib/libc.so (_ZL15__pthread_startPv+30)
#09 pc 000192a5 /system/lib/libc.so (__start_thread+6)
gharchive/issue
2020-10-12T06:01:51
2025-04-01T06:40:56.587683
{ "authors": [ "WU1208", "khalilurrehman28", "wlanjie" ], "repo": "wlanjie/trinity", "url": "https://github.com/wlanjie/trinity/issues/108", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
183005990
finding entries in WMO code tables
finding different ways to search
scientific domain experts: an operational meteorologist (in charge of a particular kind of data collection)
let's say they are interested in http://test.wmocodes.info/bufr4/b/21 (weather radar) but they don't know where the WMO keeps this
in particular, let us consider peakiness: http://test.wmocodes.info/bufr4/b/21/071 http://test.wmocodes.info/bufr4/b/21/093 http://test.wmocodes.info/bufr4/b/21/094 http://test.wmocodes.info/bufr4/b/21/182
let's just search http://test.wmocodes.info/ui/text-search?query=peakiness
how may I narrow my search to the scenario I am interested in?
'search' for table b: http://test.wmocodes.info/ui/text-search?query=table+b
find BUFR4 table B in the list and select it; this takes us to http://test.wmocodes.info/bufr4/_b
consider velocity, as this has a wider path at the start, so it's more useful to show context in search
consider adding a link to the 'narrowing your search' page to the search results template: http://test.wmocodes.info/ui/about/findingentries
gharchive/issue
2016-10-14T09:39:02
2025-04-01T06:40:56.609417
{ "authors": [ "marqh" ], "repo": "wmo-registers/codes-wmo-deploy", "url": "https://github.com/wmo-registers/codes-wmo-deploy/issues/49", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }