Dataset schema:
- id: string (lengths 4 to 10)
- text: string (lengths 4 to 2.14M)
- source: string (2 classes)
- created: timestamp[s] date (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
- added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
- metadata: dict
104154691
new bigInt(array of bytes) does not work In Java, to manipulate a textual string (for example for RSA) you would convert the string to bytes, construct a new BigInteger with the bytes and then do modPow(...). The result would be the encrypted string. This, however, does not work with this library. Could this feature be added? My javascript equivalent test (with BigInteger.min.js version 3bd64ef38bba5e05912922f987f1b16e65798591): var test = "bla"; var bytes = []; for (var i = 0; i < test.length; i++) { bytes.push(test.charCodeAt(i)); } var bInt = new bigInt(bytes); console.log(bInt + ", " + bigInt.isInstance(bInt)); bInt.modPow(10, 100); And this yields the following output (Firefox console): [object Object], false TypeError: bInt.modPow is not a function Concluding that the constructor failed to create the object. The problem isn't that new bigInt(bytes) isn't working correctly, the problem is that no such constructor exists. First, bigInt is just a function, not a constructor, so using new will mess things up. (new is used internally for the BigInteger and SmallInteger classes, but the API doesn't expose those). Second, bigInts are represented internally using base 10^7, so I don't feel comfortable providing a method for creating a bigInt from an array of bytes, because people might assume that doing so is the cheapest/most low-level way to create a bigInt, while it actually would be a fairly expensive operation requiring base conversion. You can do what you want using a slightly different approach: generate a hex string from the bytes, and then create the bigInt using the hex string: var test = "bla"; var hexString = ""; for (var i = 0; i < test.length; i++) { hexString += ("0" + test.charCodeAt(i).toString(16)).slice(-2); } var bInt = bigInt(hexString, 16); console.log(bInt + ", " + bigInt.isInstance(bInt)); console.log(+bInt.modPow(10, 100)); Which outputs 6450273, true 49 Let me know whether that resolves your question. Oh my bad, I assumed it was a
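For comparison, here is the same bytes → hex → big-integer → modPow round trip sketched in Python, which has native big integers. This is only an illustration of the hex-string approach described above, not part of BigInteger.js:

```python
test = "bla"
# Build the hex string byte by byte, exactly as the JS snippet does.
hex_string = "".join("{:02x}".format(ord(c)) for c in test)
b_int = int(hex_string, 16)        # 0x626c61 == 6450273
print(b_int, pow(b_int, 10, 100))  # 6450273 49
```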
gharchive/issue
2015-08-31T23:42:46
2025-04-01T06:39:59.711770
{ "authors": [ "limnic", "peterolson" ], "repo": "peterolson/BigInteger.js", "url": "https://github.com/peterolson/BigInteger.js/issues/44", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
1606504145
Alarm control unit not available Hi Petr, can you help me? I can log in and operate the central locally without authentication problems. I can install and configure the addon from HACS, but then the control panel can't find the alarm central. Device model: DS-PWA96-M-WE Firmware version: V1.2.8 build 230110 Web version: V4.25.1 build 221212 sensors: DS-PDMCX-E-WE DS-PDPG12P-EG2-WE DS-PDPC12P-EG2-WE DS-PDMCK-EG2-WE DS-PDMC-EG2-WE sirens: DS-PS1-I-WE DS-PS1-E-WE tag readers: DS-PT1-WE wireless keyfob: DS-PKF1-WE I attach the debug error_log.txt I see the main problem: AccessModuleType, and that was fixed with 0.6.4. The next thing is Invalid detector type magnetShockDetector, but we are not crashing on this. Please update to version 0.6.4 and restart Home Assistant. You are great!!!! Thank you very much. It seems to work fine now. I still don't see the status of the batteries, but that's a minor problem. Any time something is not working: issue + log. We will sort it out. More types of detectors are on the way. Also, batteries are issue #16; currently working out how to implement this.
gharchive/issue
2023-03-02T10:06:03
2025-04-01T06:39:59.756502
{ "authors": [ "leciuk81", "petrleocompel" ], "repo": "petrleocompel/hikaxpro_hacs", "url": "https://github.com/petrleocompel/hikaxpro_hacs/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1032011570
The meaning of progress When I implemented trace collection on your osdi21-artifact branch, I got a little confused about the meaning of progress in validation-2048.csv. I obtain progress information via get_progress; however, it does not seem to match your released trace information. My collected batch=128 trace information. My collected batch=2048 trace information. May I ask for some suggestions to mitigate the mismatch? Hi @gaow0007, apologies for the late reply. progress means the number of iterations discounted by the statistical efficiency. For example, 200 iterations at 50% statistical efficiency translates to "progress" of 100. This is based on theory so may not exactly match in practice. For your experiments, one thing I can think of is to make sure you are changing the scaled-up batch size (M) and not the baseline batch size (M_0). The batch_size parameter for the DataLoader class is the baseline batch size for Pollux. To set a fixed scaled-up batch size, you should set the TARGET_BATCH_SIZE environment variable instead. Sorry this part is not well documented. Thanks for your explanation. A further question: where does TARGET_BATCH_SIZE take effect in the code? I failed to find that keyword when searching. @aurickq https://github.com/petuum/adaptdl/blob/osdi21-artifact/adaptdl/adaptdl/torch/data.py On line 266. Thanks for your reply.
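To make the two points above concrete, here is a minimal Python sketch: the discounted-progress formula from the reply, plus setting a fixed scaled-up batch size via the environment variable (illustrative only; see data.py line 266 for where AdaptDL actually reads it):

```python
import os

# Fix the scaled-up batch size (M); DataLoader(batch_size=...) stays
# the baseline batch size (M_0). Set this before creating the loader.
os.environ["TARGET_BATCH_SIZE"] = "2048"

def progress(iterations, statistical_efficiency):
    # e.g. 200 iterations at 50% efficiency -> progress of 100
    return iterations * statistical_efficiency

print(progress(200, 0.5))  # 100.0
```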
gharchive/issue
2021-10-21T03:11:20
2025-04-01T06:39:59.764557
{ "authors": [ "aurickq", "gaow0007" ], "repo": "petuum/adaptdl", "url": "https://github.com/petuum/adaptdl/issues/104", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2459195423
Save bonding info to file after receiving it Hi, I am trying to figure out how to save the bonding info to the /etc/btferret.dat file right after pairing has succeeded. Right now, when I start the server from python3 btferret.py and then s and LE server - the pairing information gets saved to the file in /etc when I close the server. The problem with that kind of approach is that if my application closes unexpectedly, I might lose the information in memory. The btferret.dat file is written on program termination by this call in close_all in btlib.c: at LINE 5115 rwlinkey(1,0,NULL); There is some protection against close_all not being called explicitly by using atexit, which specifies functions that are called when the program exits: at LINE 2017 atexit(close_all); This should work when the program terminates, but if not, to save the data at any other time, call rwlinkey(1,0,NULL). In C, this is pretty simple, just declare rwlinkey in btlib.h or in your code. It will then be callable from your C code. void rwlinkey(int rwflag,int ndevice,unsigned char *addr); In Python it is more complicated because function calls go via btfpython.c, and rwlinkey would need its own entry. An easier option is to piggy-back on an existing Python call. I would suggest Set_flags: at LINE 13062 in btlib.c modify as follows void set_flags(int flags,int onoff) { if(flags == 0) { rwlinkey(1,0,NULL); return; } if(onoff == FLAG_OFF) gpar.settings &= ~flags; else gpar.settings |= flags; } Then recompile the Python module with "python3 btfpy.py build". Call from Python code via: btfpy.Set_flags(0,0) I don't think there will be any problems with this, but I cannot be absolutely sure. @petzval thanks for the detailed info - I will try this out and report here. Are you open to accepting a PR where I add this as a separate API? I'd rather not maintain a fork if I can avoid it. I've had a careful look at rwlinkey and there is a problem with calling it before program termination. If the device disconnects, a re-connection during the same session will probably fail. Some minor changes will fix this, and a revised version of rwlinkey is below. Rather than maintaining a PR, I would rather just add a new function to save the pairing info and do a full test to make sure it works. I can also see the value of adding an empty user-defined function inside btlib.c. These will be in a new version within a week or so.
void rwlinkey(int rwflag,int ndevice,unsigned char *addr) { int n,k,i,j,addcount,flag; unsigned char *badd,*key; struct devdata *dp; FILE *stream; static char *fname = "/etc/btferret.dat"; static int count = -1; static int delflag = 0; static int writeflag = 0; static unsigned char zero[6] = {0,0,0,0,0,0}; static unsigned char *table = NULL; if(rwflag == 0) { // read if(ndevice > 0 && (dev[ndevice]->linkflag & (KEY_NEW | KEY_FILE)) != 0) return; // dev[]->linkey is good if(count < 0) { count = 0; // in file stream = fopen(fname,"rb"); if(stream == NULL) return; n = 0; count = fgetc(stream); if(count != 0 && count != 0xFF) { k = count*22; table = (unsigned char *)malloc(k); if(table != NULL && fread(table,1,k,stream) == (unsigned int)k) n = 1; } fclose(stream); if(n == 0) { count = 0; NPRINT "Read key data failed\n"); return; } } for(k = 0 ; k < count && table != NULL ; ++k) { badd = table + k*22; key = badd+6; if(addr == NULL) n = devnfrombadd(badd,BTYPE_CL | BTYPE_LE | BTYPE_ME,DIRN_FOR); else { n = bincmp(badd,addr,6,DIRN_FOR); if(n != 0) n = ndevice; } if( (ndevice == 0 && n > 0) || (ndevice > 0 && n == ndevice) ) { // all on init (ndevice=0) or ndevice only dp = dev[n]; if((dp->linkflag & KEY_FILE) != 0 && bincmp(zero,key+10,6,DIRN_FOR) != 0) { for(n = 0 ; n < 10 ; ++n) dp->divrand[n] = key[n]; dp->linkflag |= PAIR_FILE; } else { for(n = 0 ; n < 16 ; ++n) dp->linkey[n] = key[n]; dp->linkflag |= KEY_FILE; } } } } else if(rwflag == 1) { // write // update table if(writeflag != 0) { NPRINT "WARNING 2nd close\n"); return; } writeflag = 1; flag = 0; // no changes to table if(count > 0 && table != NULL && delflag == 0) { for(k = 0 ; k < count ; ++k) { badd = table + k*22; key = badd+6; n = devnfrombadd(badd,BTYPE_CL | BTYPE_LE | BTYPE_ME,DIRN_FOR); if(n > 0) { dp = dev[n]; if(bincmp(zero,key+10,6,DIRN_FOR) != 0) { if((dp->linkflag & PAIR_NEW) != 0) { for(n = 0 ; n < 16 ; ++n) key[n] = dp->divrand[n]; dp->linkflag &= ~PAIR_NEW; dp->linkflag |= PAIR_FILE; } flag = 1; } else if((dp->linkflag & KEY_NEW) != 0) { // must be KEY_FILE also for(n = 0 ; n < 16 ; ++n) key[n] = dp->linkey[n]; dp->linkflag &= ~KEY_NEW; dp->linkflag |= KEY_FILE; flag = 1; } } } } // count NEW additions not in table addcount = 0; for(n = 1 ; devok(n) != 0 ; ++n) { if((dev[n]->linkflag & KEY_NEW) != 0) ++addcount; if((dev[n]->linkflag & PAIR_NEW) != 0) ++addcount; } if(flag == 0 && delflag == 0 && addcount == 0) return; // no changes if(count + addcount > 100) { NPRINT "/etc/btferret.dat file of paired devices is large\n"); NPRINT "Recommendation: delete it and re-pair devices\n"); } if(count + addcount > 254) { NPRINT "Too many paired devices - delete /etc/btferret.dat\n"); NPRINT "file to reset and then re-pair devices\n"); return; } stream = fopen(fname,"wb"); if(stream == NULL) return; fputc(count+addcount,stream); if(count > 0) fwrite(table,1,count*22,stream); k = 0; for(n = 1 ; k < addcount && devok(n) != 0 ; ++n) { dp = dev[n]; if((dp->linkflag & KEY_NEW) != 0) { fwrite(dp->baddr,1,6,stream); fwrite(dp->linkey,1,16,stream); dp->linkflag &= ~KEY_NEW; dp->linkflag |= KEY_FILE; ++k; } if((dp->linkflag & PAIR_NEW) != 0) { fwrite(dp->baddr,1,6,stream); fwrite(dp->divrand,1,16,stream); dp->linkflag &= ~PAIR_NEW; dp->linkflag |= PAIR_FILE; ++k; } } fclose(stream); } else if(rwflag == 2 && count > 0 && table != NULL && ndevice > 0) { // delete flag = 0; for(k = 0 ; k < count && flag == 0 ; ++k) { badd = table + k*22; n = devnfrombadd(badd,BTYPE_CL | BTYPE_LE | BTYPE_ME,DIRN_FOR); if(n == ndevice) { // found - remove flag = 1; delflag = 1; dev[n]->linkflag &= ~KEY_FILE; for(j = k ; j < count ; ++j) { badd = table + k*22; for(i = 0 ; i < 22 ; ++i) badd[i] = badd[i+22]; } --count; } } } else if(rwflag == 3 && count > 0) { for(k = 0 ; k < count && table != NULL ; ++k) { badd = table + k*22; n = devnfrombadd(badd,BTYPE_CL | BTYPE_LE | BTYPE_ME,DIRN_FOR); if(n >= 0) NPRINT "%s (%02X) =",dev[n]->name,dev[n]->linkflag >> 10); else NPRINT "Unknown ="); for(j = 0 ; j < 22 ; ++j) { NPRINT " %02X",badd[j]); if(j == 5) NPRINT " ="); } NPRINT "\n"); flushprint(); } } } I can now see other problems associated with calling rwlinkey twice, so I cannot recommend the changes I have suggested. A new version will have a fix. Thanks a lot @petzval - will try it out and get back! @petzval do you think it makes sense to have an event for that? Maybe LE_BONDED - so that the user can take further action in the callback
gharchive/issue
2024-08-10T17:11:16
2025-04-01T06:39:59.774923
{ "authors": [ "anujdeshpande", "petzval" ], "repo": "petzval/btferret", "url": "https://github.com/petzval/btferret/issues/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2248454112
”Brețcu”, ”Mărtănuș”, ”Oituz” are listed in the drop-down menu Description: When the user types ”Brețcu”, ”Mărtănuș”, or ”Oituz”, the locations appear in the drop-down menu next to the commune and the county of which they are a part, according to the law. Precondition: The website is up and running. Step 1 Write in the search bar "Brețcu" Expected results The location appears in the drop-down menu as "Sat Brețcu, COVASNA (Brețcu)". Step 2 Press the "x" button. Expected results The location was deleted from the search bar. Step 3 Write in the search bar "Mărtănuș" Expected results The location appears in the drop-down menu as "Sat Mărtănuș, COVASNA (Brețcu)". Step 4 Press the "x" button. Expected results The location was deleted from the search bar. Step 5 Write in the search bar "Oituz" Expected results The location appears in the drop-down menu as "Sat Oituz, COVASNA (Brețcu)".
gharchive/issue
2024-04-17T14:33:33
2025-04-01T06:39:59.786789
{ "authors": [ "bogi1492" ], "repo": "peviitor-ro/ui.orase", "url": "https://github.com/peviitor-ro/ui.orase/issues/3097", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
741167777
Refactor CRC interfaces and implementations This PR: Creates a templated CRCChecker interface class, as well as CRC8 and CRC32 aliases which can be used as interfaces. Removes inclusion of the Pufferfish/HAL/STM32/CRC.h header from driver files so that they can compile without the STM32 libraries. Updates the CRC32C-based driver classes to take the CRC32 interface in their constructors, rather than an STM32-specific CRC32C class. Generalizes the CRC8 software implementation to work for CRCs of any size in the templated SoftCRC class in Pufferfish/HAL/CRCChecker.h. This class subclasses CRCChecker and is usable both for any CRC8 and for CRC32C. It uses a table lookup-based algorithm rather than the naive bit-by-bit algorithm. Because it implements the CRC32 interface, it should be possible to use the SoftCRC32 as a drop-in replacement for the hardware-based HALCRC32 in unit tests by dependency injection, though I haven't written a unit test confirming this. Defines a templated CRCParameters struct for initializing SoftCRC objects in the SFM3019 driver and other Sensirion drivers. Simplifies the interface of SensirionDevice by passing the CRC implementation in the constructor, rather than passing in CRC parameters with every method call. Creates Catch2 unit tests for CRC8 (with SDP3019 parameters) and CRC32C, matching the example inputs which were used in a unit test in the backend (though that test was deleted in #90 because we moved to a third-party library for CRC in python). Adds clang-tidy checks for the Catch2 unit tests to the Github Actions workflow. Fixes clang-tidy issues for the Catch2 Nonin OEM III unit tests written by Hemanth. As discussed with Renji on Slack, this PR appears to work, so I will merge this PR in first. For records-keeping: This project is licensed under Apache License v2.0 for any software, and Solderpad Hardware License v2.1 for any hardware - do you agree that your contributions to this project will be under these licenses, too? Yes Were any of these contributions also part of work you did for an employer or a client? No Does this work include, or is it based on, any third-party work which you did not create? No
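To illustrate the table-lookup approach the SoftCRC class uses (as opposed to the naive bit-by-bit algorithm), here is a minimal Python sketch with common Sensirion CRC-8 parameters (polynomial 0x31, init 0xFF). These parameters are stated here as an assumption for illustration; this is not the C++ from this PR:

```python
def make_crc8_table(poly=0x31):
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):  # precompute the 8 bit-steps for each byte value
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
        table.append(crc)
    return table

CRC8_TABLE = make_crc8_table()

def crc8(data, init=0xFF):
    crc = init
    for b in data:  # one table lookup per input byte
        crc = CRC8_TABLE[crc ^ b]
    return crc

print(hex(crc8(b"\xbe\xef")))  # 0x92 with these parameters
```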
gharchive/pull-request
2020-11-12T00:14:07
2025-04-01T06:39:59.797445
{ "authors": [ "ethanjli" ], "repo": "pez-globo/pufferfish-software", "url": "https://github.com/pez-globo/pufferfish-software/pull/240", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1573840015
!problems with training! I have done all the preparations for training, including apex and tinycudann. My torch version is 1.13.1+cu116. Once I trained the first DFF, I came across the error below: Traceback (most recent call last): File "train.py", line 21, in from models.networks import NGP File "/home/DFF/models/networks.py", line 3, in import tinycudann as tcnn File "/opt/conda/lib/python3.8/site-packages/tinycudann-1.7-py3.8-linux-x86_64.egg/tinycudann/__init__.py", line 9, in from tinycudann.modules import free_temporary_memory, NetworkWithInputEncoding, Network, Encoding File "/opt/conda/lib/python3.8/site-packages/tinycudann-1.7-py3.8-linux-x86_64.egg/tinycudann/modules.py", line 50, in _C = importlib.import_module(f"tinycudann_bindings.{cc}_C") File "/opt/conda/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) ImportError: /opt/conda/lib/python3.8/site-packages/tinycudann-1.7-py3.8-linux-x86_64.egg/tinycudann_bindings/_86_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl36is_contiguous_nondefault_policy_implENS_12MemoryFormatE It seems that you failed to install tinycudann itself. Did tinycudann work by itself? If it fails, it would be an issue with your installation of tinycudann. I cannot solve it in this thread. Please see the tinycudann repository. And note that the DFF README uses 1.10.2+cu111. I didn't check if later versions work. OK, I created a new conda environment and solved this problem. tinycudann_bindings/_86_C Can you show how you solved this problem? My instance is CUDA 11.7. Can you show your torch and CUDA versions? Thank you.
gharchive/issue
2023-02-07T07:45:00
2025-04-01T06:39:59.806297
{ "authors": [ "PatrickDDj", "fangli333", "soskek" ], "repo": "pfnet-research/distilled-feature-fields", "url": "https://github.com/pfnet-research/distilled-feature-fields/issues/8", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
177412168
Speed up tests This PR reduces extra test cases. LGTM
gharchive/pull-request
2016-09-16T12:20:19
2025-04-01T06:39:59.807215
{ "authors": [ "gwtnb", "okuta" ], "repo": "pfnet/chainer", "url": "https://github.com/pfnet/chainer/pull/1665", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
197130804
Use autospec in mock creation. Use 'autospec'. Related to https://github.com/pfnet/chainer/pull/2037 LGTM!
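For context, a minimal Python sketch of what autospec buys you: the mock's signature is checked against the real callable, so mismatched calls raise instead of silently passing (the add function below is hypothetical, for illustration only):

```python
from unittest import mock

def add(x, y):
    return x + y

with mock.patch("__main__.add", autospec=True) as m:
    add(1, 2)                        # OK: matches the real signature
    m.assert_called_once_with(1, 2)
    try:
        add(1, 2, 3)                 # rejected: the real add takes two arguments
    except TypeError as e:
        print(e)
```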
gharchive/pull-request
2016-12-22T09:31:19
2025-04-01T06:39:59.808273
{ "authors": [ "okuta", "zori" ], "repo": "pfnet/chainer", "url": "https://github.com/pfnet/chainer/pull/2038", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
989007392
doc: Move type annotation to description from signature This is a resubmit of #241 to trainer-dev branch as it was out of synchronization. /test
gharchive/pull-request
2021-09-06T10:09:07
2025-04-01T06:39:59.809143
{ "authors": [ "kmaehashi" ], "repo": "pfnet/pytorch-pfn-extras", "url": "https://github.com/pfnet/pytorch-pfn-extras/pull/312", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2712131306
Tags that start with an underscore may cause incorrect rendering e.g.: #theme #_theme #the_me Maybe it has something to do with Chinese characters? #教程 #obsidian #theme #_theme #the_me #_教程 #_指南 #_参考 #_公司/阿里 #_公司/百度 Hi there! That's a limitation of the plugin, as mentioned in #20
gharchive/issue
2024-12-02T14:35:01
2025-04-01T06:39:59.810503
{ "authors": [ "chenbihao", "pfrankov" ], "repo": "pfrankov/obsidian-colored-tags", "url": "https://github.com/pfrankov/obsidian-colored-tags/issues/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
72047902
Independent Dashboard Theme With the current 'Theme' design, the chosen theme is used on all pages. This work-around will allow the setting of a 'theme' for the Dashboard, independent of the theme for the balance of the pfSense pages. The use case would be setting the "Dashboard" to a wide-screen theme, and leaving all other pages in a standard-width theme. When the index.php file is loaded, if the "Dashboard Theme" is defined with a theme that is different from the "Default Theme", the "Default Theme" setting is captured into $g['theme_revert']. The new "Dashboard Theme" is now used for the Dashboard. All other pages will use the "Default Theme" setting. If the user applies any changes in any of the Dashboard Widgets, it will save the "Dashboard Theme" as the "Default Theme" and this will cause all future pages to use the "Dashboard Theme". To overcome this issue, code is added to the PHP function write_config() to revert the "Theme" back to its "Default" before saving the changes to the config.xml file. Future designs of Themes should allow for independent Theme customization per page to better utilize wide-screen/standard-width-screen settings. Thanks Phil, I have incorporated those changes. pfSense is moving to bootstrap, as you can see in https://github.com/SjonHortensius/pfsense. I'm not sure it's worth it to add this right now.
gharchive/pull-request
2015-04-30T03:07:23
2025-04-01T06:39:59.817815
{ "authors": [ "BBcan177", "rbgarga" ], "repo": "pfsense/pfsense", "url": "https://github.com/pfsense/pfsense/pull/1633", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
614739350
DNS/Ping/Traceroute IDN support. Issue #10538 [X] Redmine Issue: https://redmine.pfsense.org/issues/10538 [X] Ready for review Add support for IDN hostnames on the DNS/Ping/Traceroute diagnostics pages, in the same way it's possible to add IDN support to DNS Resolver/IPsec/OpenVPN etc., but I think it's better to add idn_to_utf8() to the Form_Input class
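For reference, the underlying conversion is the Unicode ↔ punycode round trip; here it is sketched with Python's built-in idna codec rather than PHP's idn_to_utf8() (illustration only, not pfSense code):

```python
name = "bücher.example"
ascii_form = name.encode("idna").decode("ascii")
print(ascii_form)                                 # xn--bcher-kva.example
print(ascii_form.encode("ascii").decode("idna"))  # bücher.example
```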
gharchive/pull-request
2020-05-08T13:21:05
2025-04-01T06:39:59.819797
{ "authors": [ "vktg" ], "repo": "pfsense/pfsense", "url": "https://github.com/pfsense/pfsense/pull/4309", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1966655151
Support passing parse mode to parse/fingerprint functions Fixes #215 Mainly commenting so I don't forget: For the newly introduced _opt functions, I wonder if we should also consider the use cases requested in https://github.com/pganalyze/libpg_query/issues/50 here, i.e. to set GUCs that affect parsing. Need to think through this more how it could be part of the API, but might be worth doing now so we don't have to break the _opt function API in the future. @lcheruka Thanks for the contribution! Merged with slight revisions to allow passing additional options beyond the parse mode.
gharchive/pull-request
2023-10-28T16:21:21
2025-04-01T06:39:59.833695
{ "authors": [ "lcheruka", "lfittl" ], "repo": "pganalyze/libpg_query", "url": "https://github.com/pganalyze/libpg_query/pull/216", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2541240304
[#366] client side translation of numeric values WIP. Translations for: Command Output Format Valid field Compression field Encryption field @jesperpedersen PTAL The idea is that all changes are in cli.c. Maybe we can extract some static functions into the public API once we have a good idea @jesperpedersen PTAL We are close with this @jesperpedersen PTAL Looks good to me besides some format issues. Would be nice if we could have some output demos~ Okay, attaching some demos for commands: Backup: List Backup: Status: Status Details: @Jubilee101 Please have a look. You can also look at converting sizes to "B", "kB", "MB", "GB", "TB" and "PB", so something like UsedSpace: 15.3MB and so on You can also look at converting sizes to "B", "kB", "MB", "GB", "TB" and "PB", so something like UsedSpace: 15.3MB and so on Can we also do this for timestamps? I think timestamps are OK as is @jesperpedersen Do we need to limit the number of digits after the decimal in the translated size? If so, I am thinking of adding another function pgmoneta_append_double_setprecision(char* s, double d, int32_t precision) in utils.c to limit the number of digits. Yes, you add a separate function for it - and set the default to 2 digits so XXX.YY @jesperpedersen @Jubilee101 PTAL Looks like you are having some problems with rebasing, other than that it looks good Looks like you are having some problems with rebasing, other than that it looks good Not sure what is wrong with the rebasing. Kindly elaborate Looks like you are having some problems with rebasing, other than that it looks good Not sure what is wrong with the rebasing. Kindly elaborate Nvm, it looks alright now. Probably a glitch on my side, apologies~ Looks like you are having some problems with rebasing, other than that it looks good Not sure what is wrong with the rebasing. Kindly elaborate Nvm, it looks alright now. Probably a glitch on my side, apologies~ No issues :) @jesperpedersen @Jubilee101 Review!! Comments like that don't work AT ALL. Your pull request is open, so we will get to it I am extremely sorry @jesperpedersen @Jubilee101 Never gonna happen again Hey, can you rebase? Hey, can you rebase? Done Merged, thanks for your contribution!
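For illustration, here is the size translation discussed above sketched in Python; the actual implementation would be the C helper pgmoneta_append_double_setprecision() in utils.c, with the agreed default of 2 decimal digits:

```python
UNITS = ["B", "kB", "MB", "GB", "TB", "PB"]

def human_size(n_bytes, precision=2):
    value = float(n_bytes)
    for unit in UNITS:
        if value < 1024 or unit == UNITS[-1]:
            return f"{value:.{precision}f}{unit}"
        value /= 1024

print(human_size(16042393))  # 15.30MB
```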
gharchive/pull-request
2024-09-22T20:13:51
2025-04-01T06:39:59.852970
{ "authors": [ "Jubilee101", "ashu3103", "jesperpedersen" ], "repo": "pgmoneta/pgmoneta", "url": "https://github.com/pgmoneta/pgmoneta/pull/379", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
128317299
Fixed issue with header function evaluation + added 2 tests Fixes #454 Reply header functions were evaluated twice per request. Reply header functions were overridden by evaluated values. After the fix, each header function is evaluated only once per request. After the fix, each header function is evaluated on each and every request separately. Hmm, the problem with coverage is here: https://coveralls.io/builds/4824108/source?filename=lib%2Frequest_overrider.js#L461. I've coded it to be sure that both response.headers and response.rawHeaders will be evaluated. But it seems that interceptor.headers are identical to interceptor.rawHeaders, but in array (instead of object) representation. Is that true? If yes, then I can change: if (typeof value === "function") { // Check if header has not been already evaluated. Evaluate it otherwise. if (evaluatedHeaders.hasOwnProperty(key)) { response.rawHeaders[rawHeaderIndex + 1] = evaluatedHeaders[key] } else { response.rawHeaders[rawHeaderIndex + 1] = value(req, response, responseBody); } } to if (typeof value === "function") { response.rawHeaders[rawHeaderIndex + 1] = evaluatedHeaders[key]; } @alekbarszczewski agree, that should be enough. @pgte Done, squashed additional commit, thanks. @alekbarszczewski thanks! Landed on v6.0.1.
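The gist of the fix, sketched here in Python for brevity (nock itself is JavaScript): callable header values are evaluated exactly once into a memo, and every later read goes through the memo instead of calling the function again:

```python
def evaluate_headers(headers, req, res, body):
    # Evaluate each callable header value exactly once per request;
    # plain values pass through unchanged.
    evaluated = {}
    for key, value in headers.items():
        evaluated[key] = value(req, res, body) if callable(value) else value
    return evaluated
```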
gharchive/pull-request
2016-01-23T09:27:12
2025-04-01T06:39:59.860305
{ "authors": [ "alekbarszczewski", "pgte" ], "repo": "pgte/nock", "url": "https://github.com/pgte/nock/pull/455", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1172304583
Possible issues with the port forwarding feature Hi, the port forwarding feature may have some issues. Here's the situation: I wanted to simulate an environment by forwarding listener_1's local listening port (8017) to the server (port 18017), and then configuring a listener_2 locally (with the IP set to the server's IP and the listening port set to 18017), so that when generating the attack payload and selecting listener_2, it could still connect back successfully. However, I found that if I configure listener_2 first and then use Stowaway to forward the port, Stowaway reports an error. And if I use Stowaway to forward the port first and then configure listener_2, CS reports that the port is already in use. After testing, I found that Venom, a tool similar to Stowaway by another author, can forward ports successfully in the environment described above (both configuring the listener first and then forwarding, and forwarding first and then configuring the listener, work). Please fix this bug! If I understand correctly, listener_2 doesn't need to be configured; you only need to configure listener_1. Or use the reflect command; you've written this the wrong way around.
gharchive/issue
2022-03-17T12:16:18
2025-04-01T06:39:59.866483
{ "authors": [ "2308652512", "ph4ntonn" ], "repo": "ph4ntonn/Stowaway", "url": "https://github.com/ph4ntonn/Stowaway/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
669372235
Severe memory leak xxxxx (problem description goes here) xxxx Version currently in use: v0.1.9 Already upgraded to the new version: yes Current problem: memory leak causing automatic crashes Error log or screenshot: xxx Desired feature: hoping the maintainer fixes the memory leak as soon as possible! Update to the latest version, v0.2.1
gharchive/issue
2020-07-31T03:25:09
2025-04-01T06:39:59.887068
{ "authors": [ "haoyuanliu", "phachon" ], "repo": "phachon/mm-wiki", "url": "https://github.com/phachon/mm-wiki/issues/229", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
598207630
Forked my own version of SWAPI I forked your SWAPI. I'm curious to know, when following the Makefile, which Python version you used for this? I'm assuming the service has been written in Python 2 for a while.. Also, do you recommend using virtualenv when running locally? I'm running into a lot of version-incompatibility noise with Django modules... @awongCM have you managed to run it locally or on the server? I tried to run it locally. It failed. When troubleshooting, I realised the entire Django API platform is using Python 2, instead of Python 3. I have both Python versions on my Mac. But the app still doesn't play nicely with it... Hence the reason for raising this GitHub issue. I have managed to start the project: https://swapi.dev, it is deployed from my fork: https://github.com/juriy/swapi thanks so much @Juriy Thanks for doing that @Juriy. I sincerely appreciated your help. Just out of my insatiable curiosity, how did you manage to get your forked project to work locally? Do you have the correct Python 2 dependencies that got everything working at the outset? I'm keen to know as I want to get it working for my forked project. Thanks again! @awongCM I was deploying it right away on the server, so I haven't tried it locally yet; that is the next thing to do. I used Amazon Linux 2, which already has the appropriate version of Python - 2.7.16. I had to install the toolset and dependencies (gcc-c++, make, zlib-devel, zip, unzip, bzip2-devel, postgres, postgresql-devel, python-devel, libmemcached-devel). Not sure if everything is needed from the list, but most of it makes sense. Then I fixed the version of keen in requirements.txt to 0.3.0, to fix the incompatibility error. Then it started without an issue. You need to provide environment variables for DEBUG and DATABASE_URL for the service to start. That's pretty much it. As for the local setup - I haven't tried it yet, but I assume the process is going to be similar. Except, maybe, for using sqlite instead of Postgres for development mode. @Juriy Awesome stuff. Thank you very much for putting all this together. I sincerely appreciate your effort to give this much detail. This would certainly be a useful guide for me to try installing myself locally when troubleshooting. Thanks again!
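For illustration, the two environment variables mentioned above would typically be consumed in a Django settings module along these lines (a minimal sketch with an assumed URL format, not the actual SWAPI settings code):

```python
import os

DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
DATABASE_URL = os.environ["DATABASE_URL"]  # e.g. postgres://user:pass@host:5432/swapi
```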
gharchive/issue
2020-04-11T07:07:00
2025-04-01T06:39:59.924947
{ "authors": [ "CaioBRosa", "Juriy", "awongCM" ], "repo": "phalt/swapi", "url": "https://github.com/phalt/swapi/issues/148", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1398776469
Pre-release v0.8.3 Thank you for your Pull Request! We have developed this task checklist from the Development Process Guide to help with the final steps of the process. Completing the below tasks helps to ensure our reviewers can maximize their time on your code as well as making sure the admiral codebase remains robust and consistent. Please check off each taskbox as an acknowledgment that you completed the task or check off that it is not relevant to your Pull Request. This checklist is part of the Github Action workflows and the Pull Request will not be merged into the devel branch until you have checked off each task.

- [ ] Place Closes #<insert_issue_number> into the beginning of your Pull Request Title (Use Edit button in top-right if you need to update)
- [ ] Code is formatted according to the tidyverse style guide. Run styler::style_file() to style R and Rmd files
- [ ] Updated relevant unit tests or have written new unit tests - See Unit Test Guide
- [ ] If you removed/replaced any function and/or function parameters, did you fully follow the deprecation guidance?
- [ ] Update to all relevant roxygen headers and examples
- [ ] Run devtools::document() so all .Rd files in the man folder and the NAMESPACE file in the project root are updated appropriately
- [ ] Address any updates needed for vignettes and/or templates
- [ ] Update NEWS.md if the changes pertain to a user-facing function (i.e. it has an @export tag) or documentation aimed at users (rather than developers)
- [ ] Build admiral site pkgdown::build_site() and check that all affected examples are displayed correctly and that all new functions occur on the "Reference" page.
- [ ] Address or fix all lintr warnings and errors - lintr::lint_package()
- [ ] Run R CMD check locally and address all errors and warnings - devtools::check()
- [ ] Link the issue in the Development Section on the right hand side.
- [ ] Address all merge conflicts and resolve appropriately
- [ ] Pat yourself on the back for a job well done! Much love to your accomplishment!

@bms63 We have to release today. As per CRAN (related to the encoding issue): Please fix before 2022-10-06 to safely retain your package on CRAN. @bms63 If you want to get updates relating to #1454 included, then please open a PR for that today. I'm out of office today but will login again this evening to create the release and upload to CRAN.
gharchive/pull-request
2022-10-06T04:45:29
2025-04-01T06:39:59.955943
{ "authors": [ "thomas-neitmann" ], "repo": "pharmaverse/admiral", "url": "https://github.com/pharmaverse/admiral/pull/1492", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1392141299
Icons not rendering correctly The icons in the sidebar aren't rendering correctly for me. <div class="lsb lsb-flex lsb-items-center lsb-py-3 lg:lsb-py-1.5 -lsb-ml-2 lsb-group lsb-cursor-pointer lsb-group hover:lsb-text-indigo-600" phx-click="close-folder" phx-target="1" phx-value-path="/admin/storybook/components/feedback"> <i class="fa-solid fa-caret-down lsb lsb-pl-1 lsb-pr-2"></i> </div> From my investigation, it's because the lsb class defines a font-family property which cascades down to the icon element and overrides the font-family of the fa-* classes. When I added !important to the fa-* font-family property, the icons render properly. duplicate of #111 Posted as a comment on #111 instead.
gharchive/issue
2022-09-30T09:24:58
2025-04-01T06:40:00.011611
{ "authors": [ "benregn", "woylie" ], "repo": "phenixdigital/phx_live_storybook", "url": "https://github.com/phenixdigital/phx_live_storybook/issues/117", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
413922915
Training multi class at the same time? Hi, how can I train multiple classes at the same time? Sorry, this is not supported currently.
gharchive/issue
2019-02-25T03:59:58
2025-04-01T06:40:00.077666
{ "authors": [ "jinfagang", "philip-huang" ], "repo": "philip-huang/PIXOR", "url": "https://github.com/philip-huang/PIXOR/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1946865239
Can not get feedback from Server I installed the environment and ran the demo code. However, I cannot get any feedback from the server. Is there any issue with my setup? Do you have Java installed properly? I have Java installed: $ java -version openjdk version "24-loom" 2025-03-18 OpenJDK Runtime Environment (build 24-loom+7-60) OpenJDK 64-Bit Server VM (build 24-loom+7-60, mixed mode, sharing) Should I use an older Java runtime? Even after setting up Java 8, the server isn't starting: Exception in thread "main" edu.stanford.nlp.io.RuntimeIOException: argsToProperties could not read properties file: corenlp_server-8090116e1a084988.props at edu.stanford.nlp.util.StringUtils.argsToProperties(StringUtils.java:1060) at edu.stanford.nlp.util.StringUtils.argsToProperties(StringUtils.java:973) at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.<init>(StanfordCoreNLPServer.java:183) at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.launchServer(StanfordCoreNLPServer.java:1590) at edu.stanford.nlp.pipeline.StanfordCoreNLPServer.main(StanfordCoreNLPServer.java:1644) Caused by: java.io.IOException: Unable to open "corenlp_server-8090116e1a084988.props" as class path, filename or URL at edu.stanford.nlp.io.IOUtils.getInputStreamFromURLOrClasspathOrFileSystem(IOUtils.java:501) at edu.stanford.nlp.io.IOUtils.readerFromString(IOUtils.java:634) at edu.stanford.nlp.util.StringUtils.argsToProperties(StringUtils.java:1051) ... 4 more [Thread-0] INFO CoreNLP - CoreNLP Server is shutting down. I am trying to start the server manually here because when the Python library tries to start the server automatically, it doesn't do -cp "<filepath>" and instead just does -cp <filepath>
gharchive/issue
2023-10-17T08:33:44
2025-04-01T06:40:00.089712
{ "authors": [ "1171-jpg", "KarthikRaju391", "philipperemy" ], "repo": "philipperemy/stanford-openie-python", "url": "https://github.com/philipperemy/stanford-openie-python/issues/61", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
55285366
Spring 4 Thank you for a very good tutorial. What about Spring 4 annotations config integration? Thank you. Check my fork for Spring4 + Annotations https://github.com/storytime/angular-rest-springsecurity
gharchive/issue
2015-01-23T14:15:03
2025-04-01T06:40:00.096820
{ "authors": [ "storytime" ], "repo": "philipsorst/angular-rest-springsecurity", "url": "https://github.com/philipsorst/angular-rest-springsecurity/issues/17", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2132274237
⚠️ ECDC Shop has degraded performance In 394cb8c, ECDC Shop (https://shop.memmingen-indians.de) experienced degraded performance: HTTP code: 200 Response time: 7622 ms Resolved: ECDC Shop performance has improved in c5e3f04 after 21 minutes.
gharchive/issue
2024-02-13T13:00:37
2025-04-01T06:40:00.113412
{ "authors": [ "phimado" ], "repo": "phimado/status", "url": "https://github.com/phimado/status/issues/66", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2326903624
Add dynamic syncing of code and applications support TODO
- [x] purge consolidated protocols
- [x] sync priv dirs of start_apps

@josevalim ready for a new pass
gharchive/pull-request
2024-05-31T03:44:44
2025-04-01T06:40:00.131846
{ "authors": [ "chrismccord" ], "repo": "phoenixframework/flame", "url": "https://github.com/phoenixframework/flame/pull/40", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
159687968
Add push template to package.json We should add the push template to the CLI: https://github.com/phonegap/phonegap-template-push Added this with the above commit
gharchive/issue
2016-06-10T17:54:10
2025-04-01T06:40:00.160555
{ "authors": [ "mwbrooks", "surajpindoria" ], "repo": "phonegap/phonegap-cli", "url": "https://github.com/phonegap/phonegap-cli/issues/606", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
244106264
PDF_417 format not being read on iOS Issue I'm using this plugin to read the barcode on US drivers' licenses and extract some information. For Android it worked like a charm; the problem is when you try to use it on iOS: it doesn't read the barcode, it seems like it doesn't recognize the format. According to PR-431 (https://github.com/phonegap/phonegap-plugin-barcodescanner/pull/431) this problem was resolved, but it still doesn't work on iOS. Platform and Version iOS 10.3.1 Android 6.2.3 Cordova CLI version and cordova platform version cordova --version 7.0.1 cordova platforms android 6.2.3 && ios 4.4.0 Plugin version cordova plugin version 6.0.8 Sample Code that illustrates the problem cordova.plugins.barcodeScanner.scan( scanSuccess, scanError, { formats: 'PDF_417,CODE_128' } ); function scanSuccess (result) { console.log(result.format, result.text); } According to the documentation, I don't think they support the PDF_417 format on iOS yet. We could ask them if they plan to support it soon. Please comment on #478 and give me an example image. Despite the documentation, PDF_417 works fine on iOS for me, but it never did the same on Android. I've tried several phonegap-plugin-barcodescanner plugin versions, like 6.0.6 and 7.1.0; it didn't work for me on Android. So frustrating, I can't find a post that really fixes it.
gharchive/issue
2017-07-19T16:51:25
2025-04-01T06:40:00.164843
{ "authors": [ "jamesyangf", "jeanpi-gomez", "lsaballo", "macdonst" ], "repo": "phonegap/phonegap-plugin-barcodescanner", "url": "https://github.com/phonegap/phonegap-plugin-barcodescanner/issues/509", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
163430598
sync.cancel(); not working properly on second cancel() When canceling the sync process the first time, it works perfectly. Starting the same process again and canceling again, contentsync still downloads the file. It's easy to reproduce from the sample code. Just put sync.cancel() in the progress callback and start it two times. sync.on('progress', function(progress) { sync.cancel(); console.log("Progress event", progress); app.setProgress(progress); }); Only tested on iOS. I'd like to work on this as my first contribution. Can you please help me get started with it? @mansimarkaur For sure! Fork this repository, make your changes in a separate branch and submit a pull request please.
gharchive/issue
2016-07-01T16:28:38
2025-04-01T06:40:00.166880
{ "authors": [ "dmaus", "imhotep", "mansimarkaur" ], "repo": "phonegap/phonegap-plugin-contentsync", "url": "https://github.com/phonegap/phonegap-plugin-contentsync/issues/136", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
144322364
iOS 9.3 Hello, I read this https://github.com/phonegap/phonegap-plugin-push/issues/752 about the push problem in iOS 9.3. I have the same problem but I don't understand what I need to do to resolve it. What does it mean that I need to fill everything on the push object? Thanks to all! @lucasabba if you are using the same sample code as in #752, remove the call to unregister. You should only call that method if you don't want to get push messages anymore. Also, sorry to be a jerk, but I'm now closing issues that don't follow the issue submission guidelines. This plugin takes a lot of work to maintain and I can't afford the time required to solicit the required info from everyone. I'm closing this issue but please feel free to re-open it if you provide the requested details from the issue submission guidelines.
gharchive/issue
2016-03-29T16:48:22
2025-04-01T06:40:00.169505
{ "authors": [ "lucasabba", "macdonst" ], "repo": "phonegap/phonegap-plugin-push", "url": "https://github.com/phonegap/phonegap-plugin-push/issues/761", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
158925723
'notification' event not working on Notification click in Android I can not get 'notification' called on notification click in Android. Everything else works fine. Device: Samsung Galaxy S5 payload: { "notification" : { "body" : "test body", "title" : "test title" }, "data": { "eventId": "TEST123", "content-available":"1" }, "priority":"high", "registration_ids" :[reg_ids] } Android manifest: <application android:hardwareAccelerated="true" android:icon="@drawable/icon" android:label="@string/app_name" android:supportsRtl="true"> <activity android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" android:label="@string/activity_name" android:launchMode="singleTop" android:name="MainActivity" android:theme="@android:style/Theme.DeviceDefault.NoActionBar" android:windowSoftInputMode="adjustResize"> <intent-filter android:label="@string/launcher_name"> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <activity android:exported="true" android:name="com.adobe.phonegap.push.PushHandlerActivity" /> <receiver android:name="com.adobe.phonegap.push.BackgroundActionButtonHandler" /> <receiver android:exported="true" android:name="com.google.android.gms.gcm.GcmReceiver" android:permission="com.google.android.c2dm.permission.SEND"> <intent-filter> <action android:name="com.google.android.c2dm.intent.RECEIVE" /> <category android:name="${applicationId}" /> </intent-filter> </receiver> <service android:exported="false" android:name="com.adobe.phonegap.push.GCMIntentService"> <intent-filter> <action android:name="com.google.android.c2dm.intent.RECEIVE" /> </intent-filter> </service> <service android:exported="false" android:name="com.adobe.phonegap.push.PushInstanceIDListenerService"> <intent-filter> <action android:name="com.google.android.gms.iid.InstanceID" /> </intent-filter> </service> <service android:exported="false" android:name="com.adobe.phonegap.push.RegistrationIntentService" /> </application> init object: { android: { senderID: senderId }, ios: { alert: true, badge: true, sound: true, senderID: senderId, gcmSandbox: gcmSandbox } } Am I missing something? Any help is greatly appreciated. I tested it on a Huawei Honor 3C. The handler gets fired sometimes (randomly). Is the PushHandlerActivity responsible for firing the 'notification' event on a notification click? I am not seeing any logs from that. (I have no knowledge of native Android development.) @malwatte wrong payload. If you include the notification part then the OS takes over. Try sending the following instead: { "data": { "body" : "test body", "title" : "test title", "eventId": "TEST123", "content-available":"1" }, "priority":"high", "registration_ids" :[reg_ids] } @macdonst yeahh, niceee, it worked. I thought it was OK to add both "data" and "notification" to the payload. Will this work in iOS as well? Oops. iOS notifications do not work without "notification" : { "body" : "test body", "title" : "test title" } Helppppp. Does that mean I need two payloads, one for Android and one for iOS? I will refer to #960 for the above question
gharchive/issue
2016-06-07T13:40:26
2025-04-01T06:40:00.175506
{ "authors": [ "macdonst", "malwatte" ], "repo": "phonegap/phonegap-plugin-push", "url": "https://github.com/phonegap/phonegap-plugin-push/issues/967", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2038647591
Enhance Photobooth Experience with Distinct ISO and Shutter Speed Settings for Live View and Capture This pull request introduces an enhancement to the photobooth application by allowing separate ISO and shutter speed settings for the live preview mode and the photo capture mode. This feature is particularly useful for cameras that do not support Exposure Simulation and when using an external flash. Looks great, thank you for this contribution! ✨ I'll let the automated tests run, test on my system later also and merge this evening.
gharchive/pull-request
2023-12-12T22:34:02
2025-04-01T06:40:00.185582
{ "authors": [ "Peda1996", "mgineer85" ], "repo": "photobooth-app/photobooth-app", "url": "https://github.com/photobooth-app/photobooth-app/pull/135", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
156125175
Graphics improperly extends DisplayObject Hi there, Looks like the signature for Graphics.generateTexture doesn't match that of DisplayObject.generateTexture. This is an issue for typescript compilation (see pixi.d.ts). Thanks for the awesomeness y'all. Experienced the same problem when I tried to update to the latest version of Phaser. Suggested fix? Because PIXI.DisplayObject.generateTexture returns a RenderTexture, whereas PIXI.Graphics.generateTexture does return Texture. So they are actually different. Then is it possible to rename Graphics.generateTexture? The current implementation prevents typescript compilation from succeeding. I wish I was back on game work :( Reminds me of my old stack question. http://stackoverflow.com/questions/29593905/typescript-declaration-extending-class-with-static-method/29595798#29595798 declare class DisplayObject { static generateTexture(): RenderTexture; } Include the same signature from the primary above to the other class and overload it. declare class Graphics extends DisplayObject { static generateTexture(): RenderTexture; static generateTexture(): Texture; } Does it work? Overloading the function sounds like a reasonable solution to me. "Include the same signature from the primary above to the other class and overload it." Except that both signatures would be visible and the 1st signature (from the base class) wouldn't ever actually work as described. Renaming PIXI.Graphics.generateTexture() might be best. Yeah totally, but that is a different issue. What you see today in JS is valid and correct JS. If TypeScript cannot define it, then you must work around it. If you want to change the implementation, you have the much bigger job of making a PR which is going to be bug free, and not break 1000 people's existing games for the sake of a function overload. Hmm. Valid javascript or not, it's a flawed design. Either Graphics derives from DisplayObject or it does not. If it does, it must provide a working definition of DisplayObject.generateTexture. Typescript is simply enforcing proper design. I'm not sure what precipitated the original change to the signature of DisplayObject.generateTexture, but you might want to reconsider... Yeah I am having this issue even when rolling back to 2.4.6. I tried that function overloading recommendation and it didn't seem to work. I am also wondering if this has anything to do with the misbehaving of the cursor when changing to a hand when over a graphic. @smks - I modified the Graphics signature to match that of DisplayObject and I'm working ok...though I'm exercising very little of the engine. This is kinda a blocker for me right now. @shinygruv3 How did you get it working exactly? @JimmyBoh - I modified ../phaser/typescript/pixi.d.ts locally and changed the signature of Graphics.generateTexture to match that of DisplayObject.generateTexture: generateTexture(resolution?: number, scaleMode?: number, renderer?: PixiRenderer): RenderTexture; Awesome, thanks @shinygruv3. I'm just starting a new project with Phaser. It's small, but I will share any issues or oddities I encounter. This is now fixed in the dev branch. Well, I say 'fixed', the defs are basically reverted back to how they were before. So the TS compilation errors are done, but the return type is technically incorrect. Then again, it has been for several years now and no-one complained, so such is life. @shinygruv3 Your workaround is not working for me :(
gharchive/issue
2016-05-21T22:46:46
2025-04-01T06:40:00.193934
{ "authors": [ "JimmyBoh", "clark-stevenson", "d3lm", "jamesgroat", "kikemx78", "photonstorm", "shinygruv3", "smks" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/2492", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
158806669
useHandCursor not working properly in Phaser 2.4.8 The hand cursor only appears while pressing. Example: https://jsfiddle.net/snake/vg3Ljk26/ I've tested other versions and it works fine; only 2.4.8 has this issue. I was going to report this, so just to add more info: This Issue is about: A bug in the API Mousing over the button in version 2.4.7 works normally. 2.4.7 example 2.4.8 example In 2.4.8 it detects the hover only if you click outside the button (inside the canvas), hold, and move over the button, and after this it also doesn't dispatch mouseOut if you release inside the button. The same occurs with a sprite with event.onInputOver (inputEnabled is enabled). After some research, it looks like something related to the Phaser.Pointer parameter in inputHandler._pointerOverHandler. Thanks for spending time reporting this issue. However, we've already fixed this in the dev branch of Phaser, and it will be part of the next official release. You can track what's already been fixed by looking at the Change Log part of the dev README file.
gharchive/issue
2016-06-07T00:47:18
2025-04-01T06:40:00.199440
{ "authors": [ "MichelAlonso", "marcelodeassis", "photonstorm" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/2540", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
301142076
Documentation: Broken Link to Code of Conduct in phaser/.github/CONTRIBUTING.md This Issue is about: An error in the documentation Hello! I have noticed that in the first section of CONTRIBUTING.md, the link to the code of conduct is outdated, and thus results in a 404 error. See the image below to see the broken link highlighted in blue. The fix would be to edit the url of the link from "[...]/v2/CODE_OF_CONDUCT.md", to "[...]/.github/CODE_OF_CONDUCT.md". I'm happy to issue a pull request with this fix! Thanks for opening this issue, and for submitting a PR to fix it. We have merged your PR into the master branch and attributed the work to you in the Change Log. If you need to tweak the code for whatever reason please submit a new PR.
gharchive/issue
2018-02-28T18:58:01
2025-04-01T06:40:00.202669
{ "authors": [ "melissaelopez", "photonstorm" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/3297", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
351828841
Cropped sprites behave unexpectedly when recropped Phaser Version: 3.11 Operating System: Windows 7 Professional x64 Browser: Firefox Developer Edition 62.0b13 x64 Description Cropped sprites behave unexpectedly when recropped: if you use .setCrop on an already cropped sprite, all other cropped sprites will also be affected. Here's a small demo to try and show this: var config = { type: Phaser.AUTO, width: 800, height: 600, backgroundColor: '#2d2d2d', parent: 'phaser-example', scene: { preload: preload, create: create } }; var group; var game = new Phaser.Game(config); function preload() { this.load.spritesheet('diamonds', 'diamonds32x24x5.png', { frameWidth: 32, frameHeight: 24 }); } function create() { group = this.add.group(); group.createMultiple({ key: 'diamonds', frame: [0,1,2,3,4], frameQuantity: 2, repeat: 1 }); group.children.iterate(function(child) { child.setCrop(0, 0, 32, 32); }); group.children.entries[10].setCrop(0, 0, 32, 8); Phaser.Actions.SetXY(group.getChildren(), 32, 100, 32); } To start off with, .setCrop(0, 0, 32, 32) is run on all children of the group; I don't believe this should have any immediately visible effect. However, strangely the frame is set to four (the last of the sprite sheet). So this is what I would have expected: This is what I actually got: After this the 10th child of the group is cropped with .setCrop(0, 0, 16, 16) which I would think would make it so you could only see a quarter of that particular sprite. But for some reason all the sprites in the group are cropped this way instead of just the one. So in the end what I would expect to see is this: What I actually end up with is this: If you need any more information for this issue, please let me know. Thank you for spending time reporting this issue. However, we've already fixed this in the master branch of Phaser and it will be part of the next official release. You can track what's already been fixed by searching closed issues, or by looking at the Change Log.
gharchive/issue
2018-08-18T15:28:44
2025-04-01T06:40:00.207825
{ "authors": [ "lgibson02", "photonstorm" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/3948", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
374880805
is video module removed from phaser3? Does phaser3 still have a video module? Phaser 3 has never had video support (it wasn't removed, it was just never there in the first place) - it will be added in 2019, or if someone in the community wants to do it before then, I'd merge it in too. I have a video plugin, which can display video on DOM or canvas; here is a demo. It took me some time to get fullscreen video showing on canvas for Phaser, so I'm leaving my result here:

```typescript
import 'phaser'

const GameWidth = 1280
const GameHeight = 640
const FrameCenter = { x: GameWidth / 2, y: GameHeight / 2 }

export class OpeningMovie extends Phaser.Scene {
  private movieFrame: Phaser.GameObjects.Image
  private movieTexture: Phaser.Textures.CanvasTexture
  private video: HTMLVideoElement

  constructor() {
    super({ key: 'OpeningMovie' })
  }

  init(params) { }

  preload() { }

  create() {
    this.movieTexture = this.textures.createCanvas('movie', GameWidth, GameHeight)
    this.movieFrame = this.add.image(FrameCenter.x, FrameCenter.y, 'movie').setInteractive()
    this.video = document.createElement('video')
    this.video.src = '/some/movie.mp4'
    const game = this
    this.video.addEventListener('loadeddata', function() {
      this.play()
      const fps = 30
      const loop = () => {
        if (!this.paused && !this.ended) {
          game.movieTexture.context.drawImage(this, 0, 0, GameWidth, GameHeight)
          game.movieTexture.refresh()
          setTimeout(loop, 1000 / fps)
        }
      }
      loop()
    })
    this.video.addEventListener('ended', function() {
      game.goNextScene()
    })
    this.video.addEventListener('pause', function() {
      game.goNextScene()
    })
    this.movieFrame.on('pointerdown', () => {
      this.video.pause()
    })
  }

  goNextScene() {
    this.video.remove()
    this.movieTexture.destroy()
    this.scene.switch('Menu')
  }

  update() { }
}
```

Thank you. References: https://github.com/photonstorm/phaser/issues/3575#issuecomment-455912692 https://stackoverflow.com/questions/4429440/html5-display-video-inside-canvas/38711016
gharchive/issue
2018-10-29T06:53:36
2025-04-01T06:40:00.211452
{ "authors": [ "asukiaaa", "huangshengping2017", "photonstorm", "rexrainbow" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/4133", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
939388742
Particle emitter does not stop, even though maxParticles is being set Version Phaser Version: 3.55.2 Operating system: Windows 10 Browser: Chrome 91 Description Particle emitter does not stop creating particles, even though maxParticles is being set. When changing the below codepen from maxParticles: 30 to maxParticles: 19, it stops. From 20 onwards it does not stop. It is a bit random, just keep playing with the numbers. It also has the same weird behavior if you change the lifespan or speed. Example Test Code https://codepen.io/ivorcosta/pen/qBmZGON I'm not sure what the expected behavior for maxParticles is. It changed from v3.15.1 to v3.16.1, but that may have been because of an unrelated bugfix. In the test code example, the emitter is never at its limit because it never creates 30 particles. It creates only 20 or so and then recycles them. If you change lifespan or frequency it may then hit the limit if no dead particles are available. It does seem peculiar to me that once an emitter reaches its limit (described as a creation limit) it stops emitting particles at all, even if dead ones are available. I thought maxParticles was the maximum total allowed amount of particles the emitter could fire. I want a small animation that fires a few particles and stops. In order to achieve that, I have to start an emitter and use setTimeout to stop it after a while? Isn't there a more elegant solution? maxParticles is the maximum amount it can fire in one single burst, not in total. There is no such feature for 'kill this emitter after X particles are emitted', although I may consider it for a PR. @photonstorm Is https://labs.phaser.io/view.html?src=src/game objects\particle emitter\fire max 10 particles.js&v=3.16.1 the expected behavior then? @photonstorm The emitter does stop once the sum of the recycling-pending (dead) particle count and the alive count is greater than maxParticles. By changing maxParticles to 15 in this example, the sum of dead and alive ones never exceeds 15 since the dead ones are recycled. Set to hard limit the amount of particle objects this emitter is allowed to create. The docs sentence communicates a limit on the total particle object count. While this is technically true, since the total amount of JS objects (dead + alive) is not greater than that, it is misleading since one expects the option to apply to what they see on screen. https://github.com/photonstorm/phaser/blob/aa5f54cfa268c86823bf1dd52c83099fa0ef9ac2/src/gameobjects/particles/ParticleEmitter.js#L462-L471 By changing the following on the aforementioned example page, one can validate the non-exceeding counts over time: // ... maxParticles: 15, // ... function update (time, delta) { fpsText.setText('FPS: ' + (1000/delta).toFixed(3) + '\n' + `Alive ${particles.emitters.first.alive.length}` + '\n' + `Dead ${particles.emitters.first.dead.length}`); }
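For the "fire a few particles and stop" case discussed above, a minimal sketch using the explode() call from the Phaser 3.55-era particle API is shown below; the texture key, counts, and coordinates are illustrative assumptions, not values taken from the thread:

// Hypothetical sketch: fire exactly 10 particles once, no timers needed.
const particles = this.add.particles('diamonds');
const emitter = particles.createEmitter({
    speed: 100,
    lifespan: 800,
    on: false                      // don't emit continuously
});
emitter.explode(10, 400, 300);     // 10 particles at (400, 300), then nothing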
gharchive/issue
2021-07-08T01:41:44
2025-04-01T06:40:00.221062
{ "authors": [ "ivorcosta", "kootoopas", "photonstorm", "samme" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/issues/5773", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
585876249
Add Tiled point object and change offset in createFromObjects() This PR Adds a new feature Fixes a bug? Current Behavior Tiled point objects are parsed to { rectangle: true, width: 0, height: 0, … } When creating a sprite from a Tiled object with { width: 0, height: 0 }, createFromObjects() maps the Tiled object origin onto the sprite origin using the sprite's dimensions (which are not zero). This is probably not what authors want. New Behavior Tiled point objects are parsed to { point: true, width: 0, height: 0, … } When creating a sprite from a zero-size object, createFromObjects() doesn't adjust the position at all. The new sprite is given coordinates identical to the object. I think this is more sensible, but it is a "breaking" change for projects using createFromObjects() with zero-dimensioned Tiled objects. Internal Changes I removed some redundant assignments in Tiled.ParseObject(). 👍
gharchive/pull-request
2020-03-23T02:05:32
2025-04-01T06:40:00.225327
{ "authors": [ "photonstorm", "samme" ], "repo": "photonstorm/phaser", "url": "https://github.com/photonstorm/phaser/pull/5051", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
333231207
Can't separate numbers from strings Hi I have trained a RandomForest algorithm and it won't separate numbers from strings. This is my class: <?php declare(strict_types=1); namespace PhpmlExamples; include '../../vendor/autoload.php'; use Phpml\Classification\Ensemble\RandomForest; use Phpml\Dataset\CsvDataset; use Phpml\Dataset\ArrayDataset; use Phpml\FeatureExtraction\TokenCountVectorizer; use Phpml\Tokenization\WordTokenizer; use Phpml\CrossValidation\StratifiedRandomSplit; use Phpml\FeatureExtraction\TfIdfTransformer; use Phpml\Metric\Accuracy; $dataset = new CsvDataset('../data/languages.csv', 1); $vectorizer = new TokenCountVectorizer(new WordTokenizer()); $tfIdfTransformer = new TfIdfTransformer(); $samples = []; foreach ($dataset->getSamples() as $sample) { $samples[] = $sample[0]; } $vectorizer->fit($samples); $vectorizer->transform($samples); $tfIdfTransformer->fit($samples); $tfIdfTransformer->transform($samples); $dataset = new ArrayDataset($samples, $dataset->getTargets()); $randomSplit = new StratifiedRandomSplit($dataset, 0.1); $classifier = new RandomForest(); $classifier->setFeatureSubsetRatio('log'); $classifier->train($randomSplit->getTrainSamples(), $randomSplit->getTrainLabels()); $predictedLabels = $classifier->predict($randomSplit->getTestSamples()); $examples = [ ['content' => 'I need to build this english sentence by myself'], ['content' => 'Italiano pizza pasta margeritta'], ['content' => 'Do I have to change?'], ['content' => 'Je voudrais une boîte de chocolates.'], ['content' => 'irewhgewruhgiee e retyrtyrt'], ['content' => '64564'], ['content' => '5476'], ['content' => '9925'], ['content' => '24'], ]; foreach ($examples as $example) { $newSample = [$example['content']]; $vectorizer->transform($newSample); $tfIdfTransformer->transform($newSample); echo $classifier->predict($newSample)[0] . '<br>'; } var_dump($examples); echo 'Accuracy: '.Accuracy::score($randomSplit->getTestLabels(), $predictedLabels); ?> And this is my training data: "content","label" "123","number" "6546","number" "548","number" "5678","number" "583","number" "38","number" "454","number" "28655","number" "65","number" "3568","number" "5679","number" "5679","number" "96","number" "Where's the nearest railway station?","string" "The storms caused flooding.","string" "Where is the duty free shop?","string" "I witnessed it happening.","string" "I would like two postcards, please.","string" "How about going to the cinema?","string" "Où est la boulangerie?","string" "Je voudrais une boîte de chocolates.","string" "Y a-t-il un autre hôtel près d'ici?","string" "Vérifiez la batterie, s'il vous plaît.","string" "La banque ouvre à quelle heure?","string" "Est-ce que je peux l'écouter?","string" "Vous devez faire une déclaration de perte.","string" "Combien des élèves y a-t-il dans votre collège?","string" Any ideas why RandomForest recognises all strings as numbers? moved to: https://github.com/php-ai/php-ml/issues/282
gharchive/issue
2018-06-18T11:28:59
2025-04-01T06:40:00.263529
{ "authors": [ "konradja100" ], "repo": "php-ai/php-ml", "url": "https://github.com/php-ai/php-ml/issues/283", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1394964749
Generate stubs for ACF PRO 6.0.2 Hi, ACF Pro had an update recently and thought of creating a PR to update the stubs for it. Thank you, Cezar Hello @cezarpopa! This repo has not so much life in it. So I'm merging your PR blindly. Thank you!
gharchive/pull-request
2022-10-03T15:55:54
2025-04-01T06:40:00.276073
{ "authors": [ "cezarpopa", "szepeviktor" ], "repo": "php-stubs/acf-pro-stubs", "url": "https://github.com/php-stubs/acf-pro-stubs/pull/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1009648849
🛑 phpMyFAQ API is down In 495392e, phpMyFAQ API (https://api.phpmyfaq.de/versions) was down: HTTP code: 0 Response time: 0 ms Resolved: phpMyFAQ API is back up in 2287337.
gharchive/issue
2021-09-28T11:51:25
2025-04-01T06:40:00.304165
{ "authors": [ "thorsten" ], "repo": "phpMyFAQ/status.phpmyfaq.de", "url": "https://github.com/phpMyFAQ/status.phpmyfaq.de/issues/1381", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2054795808
🛑 phpMyFAQ Homepage is down In a7d24c8, phpMyFAQ Homepage (https://www.phpmyfaq.de) was down: HTTP code: 0 Response time: 0 ms Resolved: phpMyFAQ Homepage is back up in f9bf1fc after 10 minutes.
gharchive/issue
2023-12-23T13:58:10
2025-04-01T06:40:00.306649
{ "authors": [ "thorsten" ], "repo": "phpMyFAQ/status.phpmyfaq.de", "url": "https://github.com/phpMyFAQ/status.phpmyfaq.de/issues/6752", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1942425136
🛑 Blog is down In 1169cc9, Blog (https://blog.phpbb.com) was down: HTTP code: 503 Response time: 71 ms Resolved: Blog is back up in 524a40f after 13 minutes.
gharchive/issue
2023-10-13T18:20:21
2025-04-01T06:40:00.308993
{ "authors": [ "phpbb-user" ], "repo": "phpbb/status-site", "url": "https://github.com/phpbb/status-site/issues/184", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
103737863
opcode error with composer and phpunit I had PHP 5.5.28 installed on my Ubuntu machine with everything working as expected. I decided to start using phpbrew to make upgrading to PHP 5.6 easier. To start with I just installed the same version of PHP that I have on my machine: phpbrew install 5.5.28 +default+dbs+debug+apxs2 (I also installed 5.6.12, with xdebug, and got the same errors) After doing so everything worked great with 2 exceptions: phpunit and composer. Both commands run to completion but right at the end I get fatal opcode errors which cause issues with the CI server. As an example I get the following at the end of running phpunit (4.6.10): PHP Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0 Fatal error: Invalid opcode 65/16/8. in phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php on line 0 Let me know if you need any further details. Any help would be greatly appreciated. Thanks Did you install eAccelerator or something similar? Hey jhdxr. eAccelerator isn't installed. I did install uopz and decided to remove that temporarily and the fatal errors went away, so it looks to be an issue with that and not phpbrew. Thanks for the response and I'll just go ahead and close the issue. I think you may refer to krakjoe/uopz#19, which has an explanation of this fatal error.
gharchive/issue
2015-08-28T15:19:00
2025-04-01T06:40:00.312871
{ "authors": [ "ciatog", "jhdxr" ], "repo": "phpbrew/phpbrew", "url": "https://github.com/phpbrew/phpbrew/issues/572", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
854622876
Digest relative and absolute relative links before converting to pdf It might be a good idea to remove all relative links (/foo.smth, foo.smth or file://path/foo.smth) from the HTML file before rendering it to PDF, since these links won't be valid in the resulting PDF anyway. This would, of course, leave the visible link text intact, and only remove the linking ability. Upgrade to v1.9.1 to have this.
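A sketch of how such link stripping could be done in Python with BeautifulSoup; this is an illustration, not the code that actually shipped in v1.9.1, and the function name and protocol whitelist are assumptions:

from bs4 import BeautifulSoup

def strip_relative_links(html):
    # Unwrap <a> tags whose href is not an absolute web/mail URL,
    # keeping the visible link text intact.
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        if not a["href"].startswith(("http://", "https://", "mailto:")):
            a.unwrap()
    return str(soup)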
gharchive/issue
2021-04-09T15:21:52
2025-04-01T06:40:00.364029
{ "authors": [ "phseiff" ], "repo": "phseiff/github-flavored-markdown-to-html", "url": "https://github.com/phseiff/github-flavored-markdown-to-html/issues/28", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1290993359
🛑 Where Have I Been? is down In 276d9cf, Where Have I Been? (https://wherehaveibeen.co.uk) was down: HTTP code: 403 Response time: 121 ms Resolved: Where Have I Been? is back up in 0f1bbf2.
gharchive/issue
2022-07-01T06:56:47
2025-04-01T06:40:00.393266
{ "authors": [ "EddiesTech" ], "repo": "pi4li/statuspage", "url": "https://github.com/pi4li/statuspage/issues/107", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
259076959
THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT Issue created from code comment with imdone.io NOTE: THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT id:179 src/vs/workbench/workbench.main.nls.js:6 @imdone - Efficiently manage your project's technical debt. imdone.io Issue closed by removing a comment. NOTE: THIS FILE WILL BE OVERWRITTEN DURING BUILD TIME, DO NOT EDIT id:179 gh:179 src/vs/workbench/workbench.main.nls.js:6 @imdone - Efficiently manage your project's technical debt. imdone.io
gharchive/issue
2017-09-20T08:11:26
2025-04-01T06:40:00.396650
{ "authors": [ "piascikj" ], "repo": "piascikj/vscode", "url": "https://github.com/piascikj/vscode/issues/179", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728000586
Slow for parsing large arrays containing objects Environment Browser. Timing tested in latest Firefox. Description During experimentation with Jackson-JS we noticed that its JSON deserialization is rather slow for large arrays. The JSON response we parse is an array containing roughly 1200 of the below objects - and parsing takes about 15 seconds. I'm aware this use case is rather extreme, but unfortunately we are dealing with a legacy API within an enterprise here - so things won't improve any time soon. import { JsonIgnoreProperties, JsonProperty, JsonAlias } from 'jackson-js'; @JsonIgnoreProperties({ value: [ 'abc', 'def', //... 67 more ignore properties ]}) export class MyModel { @JsonProperty() @JsonAlias({values: ['xyz']}) myVarName: number; // ... 18 more properties with @JsonProperty and @JsonAlias } What you'd like to happen: I'd love to hear your thoughts on strategies on how to mitigate the parsing impact and/or speed it up. Alternatives you've considered: Obviously, we can cache the result - but 15 seconds still seems way too slow. Although building paging into the backend API would solve this, it is very unlikely to happen within a reasonable time span. Hi @marbetschar Did you find a way to speed up the deserialization? @badetitou back then we used @marcj/marshal instead of jackson-js. Don't know if it is still maintained though (I no longer work at the project in question).
gharchive/issue
2020-10-23T07:56:41
2025-04-01T06:40:00.404264
{ "authors": [ "badetitou", "marbetschar" ], "repo": "pichillilorenzo/jackson-js", "url": "https://github.com/pichillilorenzo/jackson-js/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
651917191
Fails when client package is minified Environment Any environment that minifies code. For example, Angular production builds which use WebPack with optimization enabled. Description Minification should not have any effect on usage, including serialization and/or deserialization. After minification, serialization/deserialization fails with the error Invalid Keyword. This can be traced to an issue with getArgumentNames and meriyah's parseScript failing to parse the minified function signature. Also, it appears that even if the parsing succeeded the argument names would then be incorrect, but I am not sure if this affects usage. Steps to reproduce Create a class that uses JsonCreator or JsonProperty on the constructor. Minify the class. Attempt to serialize or deserialize using the minified class. @kdubb - I see you have a PR open for this issue; any idea what's going on with it? The same here. @niveo @cdunford We have a fork @outfoxx/jackson-js currently published on NPM that includes all of our open PRs. It's looking like this great project may be abandoned.
gharchive/issue
2020-07-07T01:19:41
2025-04-01T06:40:00.407974
{ "authors": [ "cdunford", "kdubb", "niveo" ], "repo": "pichillilorenzo/jackson-js", "url": "https://github.com/pichillilorenzo/jackson-js/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
365155509
Consider normalizing submitted flags before checking The current code uses substring (python: submitted_flag in desired_flag). This leads to surprises when there are, say, leading/trailing spaces in the submitted flag. P.S. For the competition, most problems have their flags in a special format, but there are some exceptions. For the problems in the latter group (no format), this can create confusion if a user formats the flag, because now the flag contains extra characters. After a brief discussion with the problem devs, what would work better would be a case-insensitive flag on the problem, carried over from the problem.json config.
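A sketch of what normalized, exact-match checking with an optional per-problem case-insensitive flag could look like; the function and parameter names are assumptions, not the real picoCTF code:

def check_flag(submitted, desired, case_insensitive=False):
    # Strip leading/trailing whitespace so copy-paste artifacts don't fail.
    submitted = submitted.strip()
    if case_insensitive:
        return submitted.lower() == desired.lower()
    # Exact match instead of the surprising substring test.
    return submitted == desired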
gharchive/issue
2018-09-29T18:26:01
2025-04-01T06:40:00.409608
{ "authors": [ "hiliang-cmu", "maverickwoo" ], "repo": "picoCTF/picoCTF", "url": "https://github.com/picoCTF/picoCTF/issues/208", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1533126755
Sorting pages using a custom header isn't honoured. I am sorting pages (posts) per a meta header. <ul> {% if (current_page.id == "categories") %} {% for page in pages|sort_by("page.meta.Catposition") %} {% if (page.meta.Category == "Sections") and not page.hidden %} <li><a href="{{ page.url }}">{{ page.title }}</a></li> {% endif %} {% endfor %} {% endif %} </ul> --- Title: Title of the page Template: template_name Position: 1 Category: Sections Catposition: 1 --- Earlier, the sorting was done by Position and it worked. I later changed it to Catposition, added this header to all the required pages, and numbered them correctly. But for some reason, sorting by Catposition just does not work. Posts are listed alphabetically. However, the moment I revert the sorting code to page.meta.Position, lists are sorted as per the Position. It is as if something is syntactically wrong with the code containing Catposition even though it is an exact copy with a change in the header. Any idea what could be wrong? I'm not seeing anything obvious here... if it works with Position, and you've changed all occurrences to Catposition instead, it should function exactly the same. The fact that it's falling back to alphabetical sort would probably imply that it's not finding Catposition in the page. I know it's not helpful, but maybe double check everything for typos? Copy and paste Catposition between your code and your metadata just to sanity-check that it matches, etc. @PhrozenByte Do you have any thoughts on this? Check for typos and upper/lowercase (especially when the meta header was registered using a plugin's onMetaHeaders or the theme's pico-theme.yml) I figured out what's causing it. For next and previous buttons I had added a sorting configuration in the config.yml as per this discussion. sort_directories: - docs pages_order_by: meta pages_order_by_meta: position pages_order: asc This is overriding my {% for page in pages|sort_by("page.meta.Number")%} in the .twig template. If I change pages_order_by_meta: position above to pages_order_by_meta: Catposition, the order of post listings is correct. But I can't change that, since elsewhere in the website I need the pages to be sorted as per position. Don't you think that the {% for page in pages|sort_by("page.meta.Catposition")%} in the template should have overridden the sorting order in the config file, since templates are the bare-metal layer for the listings page? To be honest, what it sounds like is that your original code just didn't work. It only appeared to work because you were already sorting pages by position globally. And, I'm realizing now that I'm looking at the docs, that your syntax for sort_by is wrong. 😅 It should be: {% for page in pages|sort_by(['meta', 'Number'])%} Not: {% for page in pages|sort_by("page.meta.Number")%} So, why don't you give that a try and see if it behaves right. 😉 It does. Crazy! I don't know how I got the idea of using page.meta.name. Perhaps it was an edit to the listing code that sorted pages as per default headers (title, time, etc). Thanks for the help. No worries. It happens. 😉 It is an odd syntax. On the technical side, this is because you're giving a Pico function some strings as arguments rather than reading them inside of Twig. Don't feel too bad though, me and @PhrozenByte didn't catch that one either. 😅
gharchive/issue
2023-01-14T04:03:28
2025-04-01T06:40:00.418205
{ "authors": [ "PhrozenByte", "mayamcdougall", "notakoder" ], "repo": "picocms/Pico", "url": "https://github.com/picocms/Pico/issues/657", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2231083310
Ask, conversation, conversations commands https://drive.google.com/open?id=1PVJrobh4PflT3Vbg7HoEiDDH_jyxMNcN https://drive.google.com/open?id=1EIywJT5rSXRE40TanCHWMAdGQpY6KBLu @hal-8999-alpha anything wrong with this PR? The only minor thing I had was to change it to GPT 3.5 instead of ChatGPT3
gharchive/pull-request
2024-04-08T12:33:31
2025-04-01T06:40:00.420584
{ "authors": [ "BishoyHanyRaafat", "hal-8999-alpha" ], "repo": "pieces-app/cli-agent", "url": "https://github.com/pieces-app/cli-agent/pull/79", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2629793554
gcore provider not working apiVersion: se.quencer.io/v1alpha1 kind: DNSIntegration metadata: labels: argocd.argoproj.io/instance: k8s-blue-cc-phonebook name: gcore spec: provider: name: gcore secretRef: keys: - key: GCORE_PERMANENT_API_TOKEN name: GCORE_API_TOKEN name: phonebook-secrets zones: - myzone.com Name: gcore Namespace: Labels: argocd.argoproj.io/instance=k8s-blue-cc-phonebook Annotations: <none> API Version: se.quencer.io/v1alpha1 Kind: DNSIntegration Metadata: Creation Timestamp: 2024-11-01T20:06:44Z Finalizers: phonebook.se.quencer.io/deployment Generation: 1 Resource Version: 90979843 UID: b0c39020-951e-448c-8bbd-df03550567e0 Spec: Provider: Name: gcore Secret Ref: Keys: Key: GCORE_PERMANENT_API_TOKEN Name: GCORE_API_TOKEN Name: phonebook-secrets Zones: wxs.ro Status: Conditions: Last Transition Time: 2024-11-01T20:06:44Z Reason: Deployment.apps "provider-gcore" is invalid: spec.template.spec.containers[0].image: Required value Status: Error Type: Deployment Events: <none> Sorry about that, the release process is very manual and error prone. I have rebuilt and reshipped 0.3.7. Usually, helm chart versions need to increment, but I didn't do it. Can you try again? You might need to remove phonebook from your helm repo or at the very least force an update. No worries, but I tried everything that I could think of. I am deploying with argo; I've tried hard refresh, removing the app, etc, nothing worked, same error, I suspect this is still present. Can you please reopen it and maybe bump the chart version so we can exclude the caching of the old version? Thank you! @WladyX Before I make changes to the helm chart can you test this DNSIntegration for me first? apiVersion: se.quencer.io/v1alpha1 kind: DNSIntegration metadata: labels: argocd.argoproj.io/instance: k8s-blue-cc-phonebook name: gcore spec: provider: name: gcore image: "ghcr.io/pier-oliviert/providers-gcore:v0.3.7" secretRef: keys: - key: GCORE_PERMANENT_API_TOKEN name: GCORE_API_TOKEN name: phonebook-secrets zones: - myzone.com You can specify images directly and I want to make sure the image loads OK for you. One thing I realized is that all images are built for the x64 platform, so I want to make sure the image is actually running in your cluster. Still does not work, and I don't think it's related to x64.
❯ kd dnsintegrations.se.quencer.io gcore Name: gcore Namespace: Labels: argocd.argoproj.io/instance=k8s-blue-cc-phonebook Annotations: <none> API Version: se.quencer.io/v1alpha1 Kind: DNSIntegration Metadata: Creation Timestamp: 2024-11-02T12:09:00Z Finalizers: phonebook.se.quencer.io/deployment Generation: 2 Resource Version: 91942169 UID: a91c64b1-7d34-4a1b-83bc-fb9a00380131 Spec: Provider: Image: ghcr.io/pier-oliviert/providers-gcore:v0.3.7 Name: gcore Secret Ref: Keys: Key: GCORE_PERMANENT_API_TOKEN Name: GCORE_API_TOKEN Name: phonebook-secrets Zones: myzone.com Status: Conditions: Last Transition Time: 2024-11-02T12:09:22Z Reason: Deployment.apps "provider-gcore" is invalid: spec.template.spec.containers[0].image: Required value Status: Error Type: Deployment Events: <none> ❯ kg dnsintegrations.se.quencer.io gcore -oyaml|kneat apiVersion: se.quencer.io/v1alpha1 kind: DNSIntegration metadata: labels: argocd.argoproj.io/instance: k8s-blue-cc-phonebook name: gcore spec: provider: image: ghcr.io/pier-oliviert/providers-gcore:v0.3.7 name: gcore secretRef: keys: - key: GCORE_PERMANENT_API_TOKEN name: GCORE_API_TOKEN name: phonebook-secrets zones: - myzone.com Scratch that, I removed the dnsintegration gcore and redeployed it and the pod started; will do some more tests and come back, thank you! @WladyX I was going crazy over here, looking at the code I couldn't understand what was going on and was about to create a discord server to discuss with you. Glad you got it working! I just tried deleting the dnsintegration and testing without the image spec, still does not work, so please bump the chart, so I can test it without image in dnsintegration. The record was created OK on gcore when I tried with the image specified, but I would have expected an error from desec, since it should try to create the record on both, but that's another story, I can open another issue if you like. Thank you!
gharchive/issue
2024-11-01T20:09:45
2025-04-01T06:40:00.439396
{ "authors": [ "WladyX", "pier-oliviert" ], "repo": "pier-oliviert/phonebook", "url": "https://github.com/pier-oliviert/phonebook/issues/19", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
672692786
Fix docs Assets section Changes Fix docs Assets section. Before: After: Thank you!
gharchive/pull-request
2020-08-04T10:41:13
2025-04-01T06:40:00.476897
{ "authors": [ "FredKSchott", "Utwo" ], "repo": "pikapkg/snowpack", "url": "https://github.com/pikapkg/snowpack/pull/732", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2055160253
🛑 Blog URL is down In 0b0d43a, Blog URL (http://blog.pik.farm) was down: HTTP code: 0 Response time: 0 ms Resolved: Blog URL is back up in cfff9f9 after 12 minutes.
gharchive/issue
2023-12-24T16:45:59
2025-04-01T06:40:00.490724
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/1175", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2068430165
🛑 www.pikfarm.com is down In 17a712b, www.pikfarm.com (https://www.pikfarm.com) was down: HTTP code: 0 Response time: 0 ms Resolved: www.pikfarm.com is back up in 81515ef after 40 minutes.
gharchive/issue
2024-01-06T06:36:03
2025-04-01T06:40:00.493870
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/3120", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2103067902
🛑 www.pikfarm.com is down In 33abd07, www.pikfarm.com (https://www.pikfarm.com) was down: HTTP code: 0 Response time: 0 ms Resolved: www.pikfarm.com is back up in 23b6516 after 1 hour, 42 minutes.
gharchive/issue
2024-01-26T23:48:36
2025-04-01T06:40:00.496928
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/5889", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2104342464
🛑 Stripe Test Endpoint URL is down In 5e7a09e, Stripe Test Endpoint URL ($STRIPETESTAPI) was down: HTTP code: 0 Response time: 0 ms Resolved: Stripe Test Endpoint URL is back up in d4f1468 after 31 minutes.
gharchive/issue
2024-01-28T20:49:39
2025-04-01T06:40:00.499104
{ "authors": [ "cybertheory" ], "repo": "pikfarm/PikfarmStatus", "url": "https://github.com/pikfarm/PikfarmStatus/issues/6157", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
237010295
convert from using uint32 to uint16 in array and run containers Overview See title and issue - wait on this until RLE is merged. Fixes #664 Pull request checklist [ ] I have read the contributing guide. [ ] I have agreed to the Contributor License Agreement. [ ] I have updated the documentation. [ ] I have resolved any merge conflicts. [ ] I have included tests that cover my changes. [ ] All new and existing tests pass. Code review checklist This is the checklist that the reviewer will follow while reviewing your pull request. You do not need to do anything with this checklist, but be aware of what the reviewer will be looking for. [ ] Ensure that any changes to external docs have been included in this pull request. [ ] If the changes require that minor/major versions need to be updated, tag the PR appropriately. [ ] Ensure the new code is properly commented and follows Idiomatic Go. [ ] Check that tests have been written and that they cover the new functionality. [ ] Run tests and ensure they pass. [ ] Build and run the code, performing any applicable integration testing. @travisturner On that line, we're trying to decide if it would be more space efficient to convert an array container to a run container. Each run takes up twice the space of an element of an array, so if the number of runs is less than half the array cardinality, then using RLE is more efficient. TLDR; I think 2 is correct there @jaffee but doesn't the change to 16-bit array values mean that each run takes up 4x the space of an array element? We changed to interval16 for run containers as well. Oh, got it. Sorry. Closed due to #758
gharchive/pull-request
2017-06-19T20:12:47
2025-04-01T06:40:00.512714
{ "authors": [ "jaffee", "tgruben", "travisturner" ], "repo": "pilosa/pilosa", "url": "https://github.com/pilosa/pilosa/pull/665", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2210297384
getTypesForEIP712Domain error Wagmi + Viem + AA upgrade === disaster. Anyone seen this error: ../../node_modules/permissionless/actions/smartAccount/signTypedData.ts:137:49 const types = { > 137 | EIP712Domain: getTypesForEIP712Domain({ domain }), | ^ 138 | ...(types_ as TTypedData) 139 | } '((TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `bytes[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]...' is not assignable to type 'TypedDataDomain | undefined'. Type '(TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `bytes[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]:...' is not assignable to type 'TypedDataDomain | undefined'. Type 'unknown' is not assignable to type 'TypedDataDomain | undefined'. "viem": "2.9.3", "wagmi": "2.5.12", "@alchemy/aa-accounts": "3.6.1", "@alchemy/aa-alchemy": "3.7.0", "@alchemy/aa-core": "3.6.1", "@alchemy/aa-ethers": "3.6.1", "@alchemy/aa-signers": "3.6.1", Same error here. Anyone can help? Hard to tell what's going on without a minimal reproduction. Likely that skipLibCheck is falsy in your tsconfig.json. Hey what's your tsconfig's target? or as @jxom pointed out skipLibCheck is falsy in your tsconfig.json Hey but can you provide a small repro repo? It will help us solve it! We're also getting this bug when moving from typescript 5.2.2 -> 5.4.5 ../../node_modules/permissionless/actions/smartAccount/signTypedData.ts:139:49 Type error: Type '((TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `uint64[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`...' is not assignable to type 'TypedDataDomain | undefined'. Type '(TTypedData extends { [x: string]: readonly TypedDataParameter[]; [x: `string[${string}]`]: undefined; [x: `function[${string}]`]: undefined; [x: `address[${string}]`]: undefined; [x: `uint32[${string}]`]: undefined; [x: `uint64[${string}]`]: undefined; [x: `uint256[${string}]`]: undefined; [x: `bytes32[${string}]`]...' is not assignable to type 'TypedDataDomain | undefined'. Type 'unknown' is not assignable to type 'TypedDataDomain | undefined'. 137 | 138 | const types = { > 139 | EIP712Domain: getTypesForEIP712Domain({ domain }), | ^ 140 | ...(types_ as TTypedData) 141 | } 142 | trying to get a minimal repro/narrow down the typescript version Sorry missed the ongoing thread. I can confirm that skipLibCheck is true on my end as well. Typescript upgrades are a huge PITA and cost factor these days :(. Would probably be best to always test on the latest supported Typescript version to catch this early. obviously this isn't a permanent fix for the issue but you can use patch-package to patch in an ignore flag for this type issue. Doing this with the patch-package allows the patch to be applied after post-install so this will work in production builds and for other engineers working in the project. example patch: const types = { + // @ts-ignore EIP712Domain: getTypesForEIP712Domain({ domain }), ...(types_ as TTypedData) }
gharchive/issue
2024-03-27T09:32:49
2025-04-01T06:40:00.556458
{ "authors": [ "CarterAppleton", "eruizgar91", "gavinnewcomer", "jxom", "mwawrusch", "plusminushalf" ], "repo": "pimlicolabs/permissionless.js", "url": "https://github.com/pimlicolabs/permissionless.js/issues/153", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2241463129
Weather board fails to upload every other reading My weather board is powered via the USB-Micro cable, and when it uploads a reading and takes another one it seems to start flashing red and fails, but then a few tries later it succeeds (probably a connection issue). I was wondering if there's any way of disabling its sleeping, because I think that's what the issue is. @MrBisquit First step would be to apply the wifi improvements code in pr #199 as that can sort all sorts of issues including ones similar to what you have described. The files for this are available in a recent build: https://github.com/pimoroni/enviro/actions/runs/8784437254 Alright, I'll try that next time I bring it in, thanks :) (I'll close the issue once I've tested it and if it works, which may be a while depending on when I next bring it in) @MrBisquit This code is now in main, so upgrade to v0.2.0 and see how you go. I feel like there could be an option, for when it's going to be plugged into a constant power source (instead of batteries), to never disconnect from Wi-Fi, which could possibly prevent this issue.
gharchive/issue
2024-04-13T09:35:54
2025-04-01T06:40:00.561622
{ "authors": [ "MrBisquit", "sjefferson99" ], "repo": "pimoroni/enviro", "url": "https://github.com/pimoroni/enviro/issues/216", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1913847557
request: add deflate/zlib/gzip compression support The introduction of the deflate module is slated for MicroPython 1.21, but that release is months overdue. Any chance that module can be added to a pirate-flavour release? Specifically, I want to use zlib.compress, but 1.20.x only includes the decompress function and DecompIO class. I've not had much luck or experience generating anything but super trivial .mpy files. It looks like Deflate was merged in https://github.com/micropython/micropython/commit/3533924c36ae85ce6e8bf8598dd71cf16bbdb10b and we're already targeting a pre-release commit of MicroPython, so I just need to find the time to walk it forward to the latest upstream commit and fix all the breaking changes. (There have been some renames to MicroPython RP2 boards which will break our build.) Okay apparently that was not as difficult as I'd thought. The "deflate" module should be in these builds: https://github.com/pimoroni/pimoroni-pico/actions/runs/6418396733?pr=858 Thanks @Gadgetoid At some point I want to learn what got changed in that commit to allow it to be built. Wow... https://github.com/micropython/micropython/releases/tag/v1.21.0 just released :laughing: I think we were accidentally prescient here. I think our build worked pretty much fine without changes by just bumping the commit hash to the latest MicroPython, but no doubt the various changes will have far reaching repercussions beyond "does it build." For anyone else finding this thread who wants to enable compression with deflate.DeflateIO, I recommend as a starting point this tutorial on Medium and this post from @Gadgetoid. In my case, I added the line #define MICROPY_PY_DEFLATE_COMPRESS (1) to mpconfigboard.h for the Pico W. The file is in micropython/board/RPI_PICO_W/. The actions take care of the rest of the firmware build.
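For reference, a small sketch of zlib-style compression with the deflate module those builds expose, based on the MicroPython 1.21 API; the data here is illustrative:

import io
import deflate

buf = io.BytesIO()
# DeflateIO wraps a stream; writes into it are compressed (zlib format).
with deflate.DeflateIO(buf, deflate.ZLIB) as d:
    d.write(b"hello hello hello hello")
compressed = buf.getvalue()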
gharchive/issue
2023-09-26T16:05:51
2025-04-01T06:40:00.567542
{ "authors": [ "Gadgetoid", "grunkyb" ], "repo": "pimoroni/pimoroni-pico", "url": "https://github.com/pimoroni/pimoroni-pico/issues/854", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
969178682
Mahsa shirazi profile branch Hi! Here are my _index and avatar.jpg files to add to the website. Thanks! Please add my files to the website
gharchive/pull-request
2021-08-12T16:49:02
2025-04-01T06:40:00.568796
{ "authors": [ "MahsaShirazi" ], "repo": "pimsmath/m2pi.ca", "url": "https://github.com/pimsmath/m2pi.ca/pull/145", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
591384575
Make ls ignore vim backup files Fixes Checklist [x] I have made a change to this repository, be it functionality, testing, documentation, spelling, or grammar. [x] I updated my branch with the master branch. [ ] I have added the necessary testing to prove my fix is effective/my feature works (or I did not modify functionality). [ ] I have added necessary documentation about the functionality in an appropriate .md file. [ ] I have appropriately commented any code I have modified Short description of what this PR does: In case the user has vim set up to create backup files in the current directory, make the ls command ignore backup files (ending in '~' by default). I find this helpful with my setup, but it obviously won't be of benefit to everyone. Probably won't cause any problems, though. I don't imagine anyone uses '~' at the end of a filename for any other purpose (and for some reason, also stores them in the notes directory). Yeah, take it or leave it. :-) Nice! I'm definitely on board with the goal, good suggestion. Unfortunately it will break things though, because it looks like the default version of ls on a Mac doesn't support --ignore (or -I): https://stackoverflow.com/questions/11213849/how-to-ls-ignore-on-osx. I think this would work everywhere if implemented with grep -v instead. Would that work for you? AFAIK that should do the same thing. Any downsides you're aware of? Ah, yes, I was worried that -I might not be supported everywhere. I had originally planned on using grep -v before I discovered that ls had its own --ignore option. As far as I know, grep -v is standard and should be supported on all *nix platforms, so that's probably the way to do it. :-) @pimterry as excluding files is popping up often, what's your opinion on having a .notesignore file like .gitignore, and maybe a flag called --ignore-file to provide an ignore file with another name? Additionally, it would be good if notes respected the .gitignore file and ignored all files defined in there. I've now updated this to use grep -v instead, added a quick test, and merged it. Thanks @eshapard! Nice improvement :+1:. @rmNULL I'm open to some kind of ignore config, if you or others would find that useful. I'd be surprised if we needed a special command line argument for it, I expect it's very nearly always the same set of patterns, and for one-off ignoring you can just pipe to grep -v. I think we should still ignore some standard things (like this) by default either way. We already have the config file, so I expect we'd either want to extend that somehow, or link out from that, maybe with an IGNORE_FILE param that points to a gitignore-formatted file, so users can point to .gitignore or .notesignore or wherever. I'm probably not going to jump on that any time soon, but feel free if it'd be useful to you.
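For illustration, the merged filtering boils down to something like this one-liner; the variable name is an assumption about the script's internals:

# List notes while hiding vim backup files, portable across GNU and BSD ls.
ls "$notes_dir" | grep -v '~$'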
gharchive/pull-request
2020-03-31T19:58:09
2025-04-01T06:40:00.579653
{ "authors": [ "eshapard", "pimterry", "rmNULL" ], "repo": "pimterry/notes", "url": "https://github.com/pimterry/notes/pull/76", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
440620102
parser: support mysql-compatible explain format What problem does this PR solve? Support MySQL-compatible explain format as described in the MySQL Manual, such as: explain format=traditional select * from t; explain format=json select * from t; What is changed and how it works? TRADITIONAL type would be translated as row to be compatible with TiDB's current implementation. JSON type is implemented just on the parser side and will be blocked during preprocessing in TiDB Check List Tests Unit test Integration test Manual test (add detailed scripts or steps below) Code changes N/A Side effects N/A Related changes N/A @kennytm @tiancaiamao @morgo PTAL @kennytm 'row' is peculiar to TiDB, and is semantically equivalent to TRADITIONAL in MySQL, so I think it's OK to conflate the two. @spongedu yes but the list of columns of TiDB's "row" and MySQL's TRADITIONAL format are different. @kennytm I think the format section just specifies how the results of EXPLAIN are displayed. For example, TRADITIONAL displays the results as rows, and JSON displays the results as a JSON string. So do 'row' and 'dot'. As for the column count issue, I think it's a content issue, and is orthogonal to the format. LGTM
gharchive/pull-request
2019-05-06T09:46:28
2025-04-01T06:40:00.649209
{ "authors": [ "kennytm", "morgo", "spongedu" ], "repo": "pingcap/parser", "url": "https://github.com/pingcap/parser/pull/316", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
470873043
parser: fix compatibility for OnDelete,OnUpdate clauses What problem does this PR solve? fix compatibility for OnDelete,OnUpdate clauses This PR supports the following syntax: reference_definition: REFERENCES tbl_name (key_part,...) [MATCH FULL | MATCH PARTIAL | MATCH SIMPLE] [ON DELETE reference_option] [ON UPDATE reference_option] reference_option: RESTRICT | CASCADE | SET NULL | NO ACTION | SET DEFAULT What is changed and how it works? Check List Tests [x] Unit test [x] Integration test [ ] Manual test (add detailed scripts or steps below) [ ] No code Code changes Has exported function/method change Has exported variable/fields change Has interface methods change Side effects Possible performance regression Increased code complexity Breaking backward compatibility Related changes Need to cherry-pick to the release branch Need to update the documentation Need to be included in the release note I realized a similar PR has already been merged when I tried to fix a merge conflict. https://github.com/pingcap/parser/commit/191583a459a3574d57fa6211778da25e6ef61846 what do you think about these two kinds of implementations? @kennytm @zz-jason PTAL
gharchive/pull-request
2019-07-22T03:20:08
2025-04-01T06:40:00.655930
{ "authors": [ "leoppro" ], "repo": "pingcap/parser", "url": "https://github.com/pingcap/parser/pull/393", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1343003277
The configuration/implementation of async gRPC may not be optimal Enhancement Configuration Issues https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Interpreters/Settings.h#L363-L364 The async gRPC server uses these configurations for EstablishMPPConnection. async_pollers_per_cq is the number of threads per completion queue. (default 200) async_cqs is the number of completion queues. (default 1) The number of gRPC threads is async_pollers_per_cq * async_cqs = 200. In my opinion, these default configurations have two issues. The default thread number is too large. In view of the fact that async gRPC uses a non-blocking socket, in theory, there is no blocking point when doing RPC work. Therefore, the thread number should not exceed the number of CPU cores. The default completion queue number is too small, which may introduce some unnecessary synchronization overheads. For example, CqEventQueue uses an MPSC queue and it also uses a spinlock to support multiple consumers. In addition, pollset_work is called when calling CompletionQueue::Next. A mutex is acquired during this call. Although sometimes this mutex is released, this overhead cannot be ignored, especially when the thread number is 200. Actually, the official gRPC performance guide says: If having to use the async completion-queue API, the best scalability trade-off is having numcpu's threads. The ideal number of completion queues in relation to the number of threads can change over time (as gRPC C++ evolves), but as of gRPC 1.41 (Sept 2021), using 2 threads per completion queue seems to give the best performance. At present, TiFlash uses v1.26 gRPC. The perf_notes from v1.26 gRPC say: Right now, the best performance trade-off is having numcpu's threads and one completion queue per thread. I guess using multiple threads per completion queue is good for load balance, but the number should not be too large. We can carefully test it to gain the best performance. Implementation Issues The first issue is about notify_cq. https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Server/FlashGrpcServerHolder.cpp#L141-L157 From the code above, we can see there is another gRPC thread pool for notify_cq. In fact, the default number of gRPC threads for EstablishMPPConnection is 400 (200 for cq, 200 for notify_cq), which is a scary number. What is the difference between call_cq and notification_cq in gRPC? Notification_cq gets the tag back indicating a call has started. All subsequent operations (reads, writes, etc) on that call report back to call_cq. For most async servers my recommendation is to use the same cq. This allows fine-grained control over which threads handle which kinds of events (based on which queues they are polling). Like you may have a master thread polling the notification_cq and worker threads all polling their own call_cqs, or something like that. This code tests when the notify_cq and call_cq are called. I think we do not need to control which threads handle notification events, so the notify_cq should be the same as call_cq; then the default 200 threads can be removed entirely. By the way, grpc-rs also uses one completion queue for both call_cq and notify_cq. code here The second issue is about combining different gRPC thread pools. Async gRPC client in TiFlash also has a gRPC thread pool.
https://github.com/pingcap/tiflash/blob/d7e4e2995cebf0cf25e13a2a91adebada9390b66/dbms/src/Server/Server.cpp#L1242-L1248 (good to see that its pool size is std::thread::hardware_concurrency). Combining different gRPC thread pools can reduce the thread count and context switches. This also makes it easier to add new async RPCs in the future. It can be done with some class abstraction and refactoring. /cc @windtalker @bestwoody @yibin87 There is blocking behavior in EstablishMppTask, so the number of pollers should be big enough to avoid that. We need to modify findTaskWithTimeout before making the number of pollers small. Got it. The gRPC thread pool for notify_cq cannot be removed now, and its size should be big enough due to the blocking function findTaskWithTimeout. It seems it could easily become a bottleneck. Looking forward to changing this function to non-blocking behavior. But at least the number of gRPC threads for call_cq can be decreased to the number of CPU cores, and the completion queue number can be increased to a more suitable value. 400 threads is not a big number for TiFlash. The performance is not as the official gRPC guide describes, since I tested the params. If you can provide some benchmark results to prove it, that will be better.
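To make the proposed sizing concrete, a hypothetical C++ sketch of an async server wired with roughly two poller threads per completion queue and about one thread per core overall; the names and numbers are illustrative, not TiFlash's actual code:

#include <grpcpp/grpcpp.h>
#include <algorithm>
#include <memory>
#include <thread>
#include <vector>

// Build completion queues sized so that ~2 poller threads share each cq
// and the total thread count stays near the number of CPU cores.
std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> buildQueues(grpc::ServerBuilder & builder)
{
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    unsigned num_cqs = std::max(1u, cores / 2);
    std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> cqs;
    for (unsigned i = 0; i < num_cqs; ++i)
        cqs.push_back(builder.AddCompletionQueue());
    // After BuildAndStart(), spawn 2 threads per cq, each looping on
    // cq->Next(&tag, &ok) and dispatching the returned tag.
    return cqs;
}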
gharchive/issue
2022-08-18T12:05:59
2025-04-01T06:40:00.797933
{ "authors": [ "bestwoody", "gengliqi" ], "repo": "pingcap/tiflash", "url": "https://github.com/pingcap/tiflash/issues/5653", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1444937262
Use alarm to retry client's connection What problem does this PR solve? Issue Number: ref #6225 Problem Summary: What is changed and how it works? Check List Tests [ ] Unit test [ ] Integration test [ ] Manual test (add detailed scripts or steps below) [ ] No code Side effects [ ] Performance regression: Consumes more CPU [ ] Performance regression: Consumes more Memory [ ] Breaking backward compatibility Documentation [ ] Affects user behaviors [ ] Contains syntax changes [ ] Contains variable changes [ ] Contains experimental features [ ] Changes MySQL compatibility Release note None /cc @gengliqi @windtalker /merge /merge /run-unit-test /run-all-tests /merge
gharchive/pull-request
2022-11-11T05:05:29
2025-04-01T06:40:00.803960
{ "authors": [ "gengliqi", "windtalker", "xzhangxian1008", "ywqzzy" ], "repo": "pingcap/tiflash", "url": "https://github.com/pingcap/tiflash/pull/6297", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1286947410
migrator(ticdc): Delete old changefeed What problem does this PR solve? Issue Number: close #xxx What is changed and how it works? Check List Tests Unit test Integration test Manual test (add detailed scripts or steps below) No code Questions Will it cause performance regression or break compatibility? Do you need to update user documentation, design documentation or monitoring documentation? Release note Please refer to [Release Notes Language Style Guide](https://pingcap.github.io/tidb-dev-guide/contribute-to-tidb/release-notes-style-guide.html) to write a quality release note. If you don't think this PR needs a release note then fill it with `None`. /run-all-tests /run-all-tests /run-all-tests /run-all-tests /run-integration-tests Codecov Report Merging #6103 (952e369) into cli-use-open-api (d1de53d) will decrease coverage by 0.6547%. The diff coverage is 77.7777%. Flag Coverage Δ cdc 64.6197% <77.7777%> (+0.1419%) :arrow_up: dm 51.9506% <ø> (-0.0254%) :arrow_down: engine ? Flags with carried forward coverage won't be shown. Click here to find out more. @@ Coverage Diff @@ ## cli-use-open-api #6103 +/- ## ======================================================== - Coverage 58.4562% 57.8014% -0.6548% ======================================================== Files 708 550 -158 Lines 83471 73204 -10267 ======================================================== - Hits 48794 42313 -6481 + Misses 30244 26996 -3248 + Partials 4433 3895 -538
gharchive/pull-request
2022-06-28T07:59:02
2025-04-01T06:40:00.812601
{ "authors": [ "codecov-commenter", "sdojjy" ], "repo": "pingcap/tiflow", "url": "https://github.com/pingcap/tiflow/pull/6103", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1616390173
apiv2(ticdc): add ut for api default value What problem does this PR solve? Issue Number: close #8480 What is changed and how it works? add ut for api default value Check List Tests Unit test Integration test Manual test (add detailed scripts or steps below) No code Questions Will it cause performance regression or break compatibility? Do you need to update user documentation, design documentation or monitoring documentation? Release note `None`. /run-all-tests /run-integration-tests /run-verify-tests /run-integration-tests
gharchive/pull-request
2023-03-09T04:24:46
2025-04-01T06:40:00.816985
{ "authors": [ "sdojjy" ], "repo": "pingcap/tiflow", "url": "https://github.com/pingcap/tiflow/pull/8481", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
197544942
raft: port pre-vote feature Hi, This PR ports the pre-vote feature from etcd/raft. It merges #1330, #1425, and https://github.com/coreos/etcd/pull/7060. PTAL @siddontang @BusyJay @ngaut The modification for raft in https://github.com/coreos/etcd/pull/6975 is merged. PTAL @siddontang @BusyJay LGTM PTAL @BusyJay @zhangjinpeng1987 PTAL @BusyJay
gharchive/pull-request
2016-12-26T06:46:06
2025-04-01T06:40:00.820630
{ "authors": [ "hhkbp2", "siddontang" ], "repo": "pingcap/tikv", "url": "https://github.com/pingcap/tikv/pull/1444", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
953575696
Test codeql check it works? LGTM
gharchive/pull-request
2021-07-27T07:10:15
2025-04-01T06:40:00.884693
{ "authors": [ "eeliu" ], "repo": "pinpoint-apm/pinpoint-c-agent", "url": "https://github.com/pinpoint-apm/pinpoint-c-agent/pull/352", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
130169524
What is the intentional use of elixometer.TestReporter? There is a TestReporter in the file https://github.com/pinterest/elixometer/blob/master/lib/elixometer.ex I notice that it is being used in test: https://github.com/pinterest/elixometer/blob/master/config/test.exs Is that just for the test environment, and if so, why is it part of the main module? If production won't ever use it, I would suggest this should not be part of the main module. The idea is that you use that inside tests if you want to get the output of logging. Elixometer was written pretty early in our elixir exploration, and this should be made top-level. Better yet, we could alter the mixfile to look in a different directory and remove it entirely from lib and put into test/support or something.
gharchive/issue
2016-01-31T19:08:47
2025-04-01T06:40:00.886998
{ "authors": [ "linearregression", "scohen" ], "repo": "pinterest/elixometer", "url": "https://github.com/pinterest/elixometer/issues/31", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1742366613
TileData: fix typo + copyediting Noticed an extraneous a in TileData's header description. Took a quick pass of copyediting while looking at that file thanks!!
gharchive/pull-request
2023-06-05T18:48:19
2025-04-01T06:40:00.888265
{ "authors": [ "dangerismycat", "rlingineni" ], "repo": "pinterest/gestalt", "url": "https://github.com/pinterest/gestalt/pull/2988", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
947169740
Could add support to github.com/pion/rtp or maybe add a library that makes transcoding (aac->pcm) easier As described in https://github.com/pion/webrtc/issues/1888, could you add support to github.com/pion/rtp or maybe add a library that makes transcoding (aac->pcm) easier? Did you find a solution for transcoding audio from aac to pcm?
gharchive/issue
2021-07-19T01:24:22
2025-04-01T06:40:00.952104
{ "authors": [ "jianzhiyao", "yourchanges" ], "repo": "pion/rtp", "url": "https://github.com/pion/rtp/issues/146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1457057883
Potential regression by commit c0159aa causing TestAssociation_Shutdown to fail @enobufs Just wondering: could it be that this PR (commit c0159aa2d49c240362038edf88baa8a9e6cfcede) introduced a regression which makes the unit test `TestAssociation_Shutdown` fail? See https://github.com/pion/sctp/actions/runs/3495811764/jobs/5852996166 Originally posted by @stv0g in https://github.com/pion/sctp/issues/239#issuecomment-1321076117 Hi @jerry-tao, @stv0g, are you able to repro this in your environment? The error does not happen for me... :( I did not reproduce it either; I will dig deeper this week. I just tested it again on the current master branch and the test succeeded. But I see a possibly related PR, #236. I tried to reproduce it again without success. So maybe I dreamt it... I just saw it again, but only on 1.18.
gharchive/issue
2022-11-20T21:45:26
2025-04-01T06:40:00.955581
{ "authors": [ "edaniels", "enobufs", "jerry-tao", "stv0g" ], "repo": "pion/sctp", "url": "https://github.com/pion/sctp/issues/250", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
730141016
UI - fix a bug that shows error at unintended times on the application detail page

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Does this PR introduce a user-facing change?:

NONE

Thank you.
/approve
gharchive/pull-request
2020-10-27T06:06:34
2025-04-01T06:40:00.977590
{ "authors": [ "cakecatz", "nghialv" ], "repo": "pipe-cd/pipe", "url": "https://github.com/pipe-cd/pipe/pull/1024", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
695324407
Allow supplying absolute path to the private key

This PR fixes #19. I've dropped the startsWith("/") checks and improved the docs a bit. @pitchmuc PTAL.

Hello and thanks for the pull request. I am testing it now and the error I was trying to avoid is popping up again. In my config file, I am using this notation:

```json
{
  "org_id": "ED48F97C5922D40C00@AdobeOrg",
  "api_key": "d2ef4b231cea4cf29d91b10052",
  "tech_id": "3155ED511C70AE8@techacct.adobe.com",
  "secret": "**********************",
  "pathToKey": "/config/private.key"
}
```

This kind of notation could be used, and I wanted my function to work in that case. In order to avoid that issue, I was doing my startswith(), but it seems that it prevents you from using a full path. Normally, when entering the full path, it should start with something like "C://" (so my startswith should avoid messing with it). Is it different in your environment? I need to think of a way to make both work.

That's some new path notation for me. In *NIX systems, an absolute path starts with a / and a relative one starts with ./. In Windows, an absolute path starts with C:\\ as you've mentioned, while a relative one remains the same .\. I'm not sure why you're avoiding C:\\-like paths, but for *NIX systems, the current startswith() workaround breaks the usage of absolute paths completely. From my perspective, the library should support commonly used standards and can support non-standard notations if they do not break compatibility. I'd be glad to help you out here if there's something else I can do.

OK. I don't have a ton of experience with *NIX systems; this is interesting. Thanks for your explanation. I am trying to be flexible so you can use both notations, independently of your current system. I tried to upload a new version of the import logic to your branch. The main idea is to check whether the file can be accessed before applying this startswith(). By checking if the file exists, it may solve your issue without breaking my code. Main logic here:

```python
test_path = _Path(path).exists()
if test_path == False:
    if path.startswith('/'):
        path = "." + path
with open(_Path(path), 'r') as file:
    ....
```

Let me know if this works for you.

@pitchmuc PTAL again. I've added a reusable part that checks the presence of the file using the supplied path and uses the file if it is available; otherwise it tries to convert the absolute path to a relative one and tries that. I've also added a fail-fast approach with exceptions if config/private key files are not available.

@pitchmuc I've back-merged your latest changes. It'd be great if you can check them out. Please let me know if there's anything else you'd like me to do or change before merging the PR.

Hello @xSAVIKx, I checked the commit; thanks for the function provided. It makes things cleaner. I will merge the proposed changes. We are not in sync with master anymore, as I provided a new version with VirtualReport capability for @loldenburg. I will take care of merging the changes. Your proposed suggestion will be part of the next release on PyPI. Thanks a lot!

@pitchmuc Perhaps worth using https://docs.python.org/3/library/os.html#os.sep in the future.
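As an aside, here is a small, self-contained Python sketch of the path-resolution idea discussed in this thread: try the supplied path first, and only fall back to treating a leading-slash path as relative when the file is missing. The helper name resolve_config_path and the error message are invented for the example; the actual library structures this differently.

```python
from pathlib import Path

def resolve_config_path(path: str) -> Path:
    """Return a usable Path for a config/key file.

    Tries the path exactly as supplied first (this covers absolute
    *NIX and Windows paths). Only if that file does not exist and the
    path starts with '/', retry it relative to the working directory,
    which keeps the old "/config/private.key" notation working.
    """
    candidate = Path(path)
    if candidate.exists():
        return candidate
    if path.startswith("/"):
        fallback = Path("." + path)
        if fallback.exists():
            return fallback
    # Fail fast, as suggested in the thread, instead of crashing later.
    raise FileNotFoundError(f"Config or key file not found: {path}")

# Usage sketch:
# key_text = resolve_config_path("/config/private.key").read_text()
```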
gharchive/pull-request
2020-09-07T18:17:41
2025-04-01T06:40:01.059357
{ "authors": [ "Ryuno-Ki", "pitchmuc", "xSAVIKx" ], "repo": "pitchmuc/adobe_analytics_api_2.0", "url": "https://github.com/pitchmuc/adobe_analytics_api_2.0/pull/20", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
160516717
pcfdev setup fails under Windows 7

When I run the binary distribution from Pivotal Network under Windows 7 I get the following error:

```
C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows>pcfdev-v0.16.0-windows.exe
panic: runtime error: index out of range
goroutine 1 [running]:
panic(0x93a720, 0xc082002040)
    /usr/local/go/src/runtime/panic.go:481 +0x3f4
github.com/pivotal-cf/pcfdev-cli/vendor/github.com/cloudfoundry/cli/plugin.Start(0x33a5098, 0xc08206b200)
    /ext-go/1/src/github.com/pivotal-cf/pcfdev-cli/vendor/github.com/cloudfoundry/cli/plugin/plugin_shim.go:16 +0x494
main.main()
    /ext-go/1/src/github.com/pivotal-cf/pcfdev-cli/main.go:76 +0xd51
```

PCF Dev 0.15 setup was working fine.

Hi @mrumpf, PCF Dev is now a cf CLI plugin. You need to install it with cf install-plugin pcfdev-v0.16.0-windows.exe. See the getting started tutorial: https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev Or the docs: https://docs.pivotal.io/pcf-dev/ (We will eventually run cf install-plugin for you when you run the plugin directly, to reduce confusion.)

Hm, I was confused by the extension "exe"... Here is what I get now:

```
{ pcfdev-v0.16.0-windows } » cf install-plugin ./pcfdev-v0.16.0-windows.exe
Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.
Do you want to install the plugin ./pcfdev-v0.16.0-windows.exe? (y or n)> y
Installing plugin pcfdev-v0.16.0-windows.exe... OK
Plugin pcfdev v0.0.0 successfully installed.
{ pcfdev-v0.16.0-windows } » cf dev start
Please retrieve your Pivotal Network API from: https://network.pivotal.io/users/dashboard/edit-profile
FAILED
Error: invalid Pivotal Network API token
```

I think that is a corporate proxy issue...

Did you copy and paste your API token from PivNet? The Invalid Pivotal Network API token error implies a successful connection to PivNet with an invalid token. We've seen occasional issues with the Windows DOS command line and cf CLI password-type prompts, so use PowerShell if you aren't already.

OK. It seems to work under PowerShell (I was using Cygwin bash before):

```
PS C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows> cf install-plugin pcfdev-v0.16.0-windows.exe
Attention: Plugins are binaries written by potentially untrusted authors. Install and use plugins at your own risk.
Do you want to install the plugin pcfdev-v0.16.0-windows.exe? (y or n)> y
Installing plugin pcfdev-v0.16.0-windows.exe... OK
Plugin pcfdev v0.0.0 successfully installed.
PS C:\Users\mrumpf\Downloads\pcfdev-v0.16.0-windows> cf dev start
Please retrieve your Pivotal Network API from: https://network.pivotal.io/users/dashboard/edit-profile
API token>
BETA SOFTWARE END USER LICENSE AGREEMENT
...
Last Updated: April 14th, 2014
Accept (yes/no):> yes
Downloading VM...
Progress: |=> | 1%
```

You could reduce confusion if you mention that PowerShell should be used under Windows.

Where to find the API token
gharchive/issue
2016-06-15T20:31:33
2025-04-01T06:40:01.072311
{ "authors": [ "mrumpf", "sclevine" ], "repo": "pivotal-cf/pcfdev", "url": "https://github.com/pivotal-cf/pcfdev/issues/77", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
782370008
The lifecycle image should be configurable

RFC: https://github.com/pivotal/kpack/pull/560 (soon to be merged)

Problem: Users have no way of upgrading the lifecycle image without upgrading the kpack release.

Criteria:
- An update to ConfigMap lifecycle-image.data.image results in builders being recreated
- The ConfigMap lifecycle-image.data.platformApiVersions should be validated so that kpack supports all of the platform APIs

Actions:
- The ConfigMap lifecycle-image should no longer be mounted in the kpack-controller
- kpack should watch for changes to ConfigMap lifecycle-image in the kpack namespace
- Updates to the ConfigMap lifecycle-image.data.image value should result in builders being recreated
- lifecycle-image.data.platformApiVersions is a new optional field containing space-separated values in the form of X.X
- If lifecycle-image.data.platformApiVersions contains a version that is not supported by kpack, the ConfigMap should fail validation

Perhaps lifecycle-image.data.platformApiVersions should be an annotation, to prevent implying that it is a functional configuration attribute?

Moving to accepted; will accept the user-facing issue https://github.com/vmware-tanzu/kpack-cli/issues/142
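kpack itself is written in Go, so purely as an illustration of the validation rule sketched above (space-separated X.X values that must all be supported), here is a rough Python sketch. The supported-version set is a made-up placeholder, not kpack's real list.

```python
import re

SUPPORTED_PLATFORM_APIS = {"0.3", "0.4", "0.5"}  # example values only

def validate_platform_api_versions(raw: str) -> list:
    """Validate a space-separated list of platform API versions.

    Mirrors the ConfigMap rule above: every entry must look like X.X
    and must be a platform API the controller supports; otherwise the
    ConfigMap should fail validation.
    """
    versions = raw.split()
    for version in versions:
        if not re.fullmatch(r"\d+\.\d+", version):
            raise ValueError(f"malformed platform API version: {version!r}")
        if version not in SUPPORTED_PLATFORM_APIS:
            raise ValueError(f"unsupported platform API version: {version!r}")
    return versions

print(validate_platform_api_versions("0.3 0.4"))  # ['0.3', '0.4']
```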
gharchive/issue
2021-01-08T20:13:40
2025-04-01T06:40:01.080099
{ "authors": [ "matthewmcnew", "mgibson1121", "tylerphelan" ], "repo": "pivotal/kpack", "url": "https://github.com/pivotal/kpack/issues/594", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
320108238
Add IPv6 Routes By Default Many devices have IPv6 enabled by default, and some (e.g. iOS) do not allow it to be disabled in the .ovpn config, or pushed by the server. It would be nice if PiVPN would also create IPv6 routes to push to the clients and redirect IPv6 traffic through the tunnel. It could then either be discarded at the server or forwarded along with the IPv4 traffic. At a minimum this should serve as a workaround in the cases where it is impossible to block IPv6 traffic entirely, or the interface can not be disabled on the client. closing as #259 is already up to add support for IPv6
gharchive/issue
2018-05-03T23:01:08
2025-04-01T06:40:01.081641
{ "authors": [ "4s3ti", "mep85" ], "repo": "pivpn/pivpn", "url": "https://github.com/pivpn/pivpn/issues/529", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
227954984
double id on the redirect to element, after create element

I create a new project, I validate it, it gets saved, and I'm redirected to this URL: I don't know where the code of the dynForm save function is, and I can't find the point where we redirect to the new page. If anyone knows where it is!

Ok, I'll take a look at it :)

@clement you can take care of it. When you try to add a project, you just enter a name and an image, and the redirect link looks like this: http://127.0.0.1/ph/co2#project.detail.id.nullnull If you don't add an image, it's fine.

ok buddy! that's cool... works with and without fineUploader. good job, artist!
gharchive/issue
2017-05-11T11:04:30
2025-04-01T06:40:01.113971
{ "authors": [ "Kgneo", "RaphaelRIVIERE", "clement59" ], "repo": "pixelhumain/co2", "url": "https://github.com/pixelhumain/co2/issues/146", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1612390025
[perf_tool/cluster] Implement GKE cluster operations.

Summary: Implement operations for calling out to GKE to create, healthcheck, and delete a cluster. We use the backoff package to retry all the GKE operations, because they can often fail transiently.

Type of change: /kind test-infra

Test Plan: A follow-up PR adds the subcommand test_gke_cluster to perf_tool, which allows creating and deleting a cluster with the GKE cluster provider. I used that command to verify these operations work.

@pixie-io-buildbot test this please
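The PR's real implementation is Go code built on the backoff package; the following Python sketch only illustrates the general retry-with-jittered-backoff pattern it relies on for transient GKE failures. The function names and limits are invented for the example.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Call `operation` until it succeeds, sleeping with jittered
    exponential backoff between attempts: the usual way to smooth
    over transient cloud-API failures like those described above."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.5))  # add jitter

# Usage sketch (create_cluster stands in for the flaky cloud call):
# cluster = retry_with_backoff(lambda: create_cluster("perf-test"))
```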
gharchive/pull-request
2023-03-06T23:22:16
2025-04-01T06:40:01.135730
{ "authors": [ "JamesMBartlett" ], "repo": "pixie-io/pixie", "url": "https://github.com/pixie-io/pixie/pull/975", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
108906283
PIXI.Graphics is a bad parent (it inherits from Container but does not care about its children)

PIXI.Graphics.{clone,generateTexture} and possibly others don't take into account that Graphics can have children. I think they should, or at least this should be documented.

I wonder how people feel about us making it inherit from DisplayObject instead of Container... Seems to be a viable option that would clear things up, but it would break compatibility. For example, we depend on it in our svg renderer. So I guess it's no option for a minor release.

fixed in v4 👍
gharchive/issue
2015-09-29T16:24:56
2025-04-01T06:40:01.147342
{ "authors": [ "FlorianLudwig", "GoodBoyDigital", "englercj" ], "repo": "pixijs/pixi.js", "url": "https://github.com/pixijs/pixi.js/issues/2131", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
304231555
Pixi FontFace in Chrome

PixiJS 4.6.2
Browser: nwjs-sdk-v0.28.3-win-x64 (chrome)

Pixi uses the default Chrome font when you point to a custom font face using the FontFace object. Sample code:

```js
var fontFace = new FontFace('95845e56-1b06-457d-ac03-c765ad13ec9e.ttf', 'url(chrome-extension://fadmacekllolfgoklhinaljokeapepka/data/95845e56-1b06-457d-ac03-c765ad13ec9e.ttf)');
document.fonts.add(fontFace);
let pixapp = new PIXI.Application({width: 810, height: 41, transparent: true});
fnt_pix.appendChild(pixapp.view);
var style = new PIXI.TextStyle({
    fontFamily: '95845e56-1b06-457d-ac03-c765ad13ec9e.ttf',
    fontSize: 36
});
var richText = new PIXI.Text('ABCDEFGHIJKLMNOPQRSTUWVXYZ', style);
pixapp.stage.addChild(richText);
```

Pixi successfully downloaded the font files. But I suspect Chrome is handing the default font to Pixi, as part of their "Webfonts Intervention" feature, because this only happens at first, while Pixi is downloading the fonts (each line is a PIXI.Application). Later, if I don't reload the page and create new PIXI.Application objects, everything runs smoothly. Any hints?

Produced another sample to better illustrate the problem; you can run it in regular Chrome. I observed the problem also occurs when font faces load directly from CSS. Here: pxtx.zip

A quick fix might be to add https://github.com/typekit/webfontloader to your project. You could then do something like this to only display the content once the fonts are loaded:

```js
let countLoadedFonts = 0;
WebFont.load({
    custom: {
        // font names set in css
        families: ['css fontName1', 'css fontName2', 'css fontName3']
    },
    testStrings: {
        'css fontName1': '\uE003\uE005',
        'css fontName2': '\uE003\uE005',
        'css fontName3': '\uE003\uE005'
    },
    loading: function () {
        console.log('css font loading');
    },
    active: function () {
        countLoadedFonts++;
        console.log('css fontName active', countLoadedFonts);
        // Display pixi content
    }
});
```

macguffin, your suggestion was effective! But I still suggest the devs give this issue a chance. I banged my head for many hours until your comment. =D

@Nocthan it can be a browser bug that exists when using regular html and css with custom fonts. Not only does the font have to be loaded, it has to be used once before it 'kicks in', as it were. Therefore you have libraries like the one given, or the one I use, https://github.com/bramstein/fontfaceobserver - which uses the font in the background, measures the size of the area where it has been used, and when the size changes it knows the custom font is loaded, active, and can be used.

Closing as I think we're covered here :)
gharchive/issue
2018-03-12T03:36:50
2025-04-01T06:40:01.156186
{ "authors": [ "Nocthan", "macguffin", "themoonrat" ], "repo": "pixijs/pixi.js", "url": "https://github.com/pixijs/pixi.js/issues/4754", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2326539861
Bloxstrap Crash [BUG] Acknowledgement of preliminary instructions [X] I have read the preliminary instructions, and I am certain that my problem has not already been addressed. What problem did you encounter? When I start Bloxstrap, a new window opens saying that Roblox has crashed. Yes, it only happens when I use Bloxstrap! I'm going to upload my configurations in the comments. ClientAppSettings.json Settings.json State.json Solved after reinstalling Bloxstrap with winget
gharchive/issue
2024-05-30T21:03:19
2025-04-01T06:40:01.168281
{ "authors": [ "HACKER21078" ], "repo": "pizzaboxer/bloxstrap", "url": "https://github.com/pizzaboxer/bloxstrap/issues/1886", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1886704869
Game Freeze. After a while of playing, roblox itself completely freezes and does not respond in any way. I have to unload the application from the task manager. This happens almost always. https://github.com/pizzaboxer/bloxstrap/assets/144396571/3eb4c80e-60b1-4400-9683-a0a1d2668bdd Are you able to provide a more specific timeframe on how long it takes for this to happen? Also, can I see your FastFlag list? Open the menu, go to the FastFlags tab, open the editor, and show a picture of the whole list. In about two to seven minutes. I said to open the editor. this? Okay, that seems good. Have you also checked to see if this happens only with Bloxstrap? https://github.com/pizzaboxer/bloxstrap/wiki/Switching-between-Roblox-and-Bloxstrap Yes, this only happens with Bloxstrap. With the regular roblox client everything is normal. You could try setting your rendering mode to Automatic, and your framerate limit to 0. See if doing those two changes anything. Okay, I'll give it a try. I will test it in several games. It became even worse. in addition to freezing Roblox itself, it became problematic to go to the Task Manager to remove a task from Roblox and finish its work. How about if you disable activity tracking? I turned it off from the beginning. Well, I can't really think of what else would be causing it since at this point Bloxstrap should be functioning exactly like how the official launcher does. You could also try forcing a Roblox reinstallation? Go to the Behaviour tab, scroll to the bottom, option should be there. Enable it, save, and try and join. Honestly I reinstalled both Bloxstrap and Roblox using the Revo uninstaller utility. And the result is still the same. Well, I'm not really sure what to say because at this point Bloxstrap should be working exactly like the official launcher, but I'll try and do some more troubleshooting here. Can you go to where Bloxstrap is installed (open the menu, installation tab, open installation folder), go inside the "Versions" folder, go inside "version-xxxxxxxxxxxxxxxx", and launch RobloxPlayerBeta.exe directly? See if the problem happens with just that. same thing https://github.com/pizzaboxer/bloxstrap/assets/144396571/40bea83d-78d7-4257-b44e-779078d6dcdd This time, delete the "ClientSettings" folder, then launch RobloxPlayerBeta.exe again. Issue has been awaiting clarification for over a month, closing as stale. If you have anything further to add, just respond and I'll reopen.
gharchive/issue
2023-09-07T23:25:20
2025-04-01T06:40:01.175803
{ "authors": [ "MeGoddess", "pizzaboxer" ], "repo": "pizzaboxer/bloxstrap", "url": "https://github.com/pizzaboxer/bloxstrap/issues/649", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1094295992
JS-Array-Challenge Hello! I'm planning to start today! Thank you for sharing such good material! I'm still inexperienced, so I have one question! When I run jest with npm test, it checks every directory under ./Challenge/. I changed the "test" path under "scripts" in package.json to something like ./Challenge/<username> and used it that way. Is that the correct usage? 😊 (I didn't upload the package.json changes to GitHub!) I hope 2022 is filled with only good things for you 🤞 If the tests ran fine, that way seems good too! Great work 😊
gharchive/pull-request
2022-01-05T12:21:46
2025-04-01T06:40:01.188564
{ "authors": [ "BB-choi", "pkiop" ], "repo": "pkiop/JS-Array-Challenge", "url": "https://github.com/pkiop/JS-Array-Challenge/pull/161", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2209385168
🛑 ✅JrebelLicenseServer is down In a755ab4, ✅JrebelLicenseServer (https://jrebel.wayok.cn) was down: HTTP code: 0 Response time: 0 ms Resolved: ✅JrebelLicenseServer is back up in ecf0b94 after 7 minutes.
gharchive/issue
2024-03-26T21:42:10
2025-04-01T06:40:01.200188
{ "authors": [ "pkl1024" ], "repo": "pkl1024/status", "url": "https://github.com/pkl1024/status/issues/571", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1520323038
Failing registration with missing metadata in multilanguage journal

Dear @bozana and @asmecher, we just had a situation where the plugin attempted to register an article in a multi-language journal (German & English). The editor entered the metadata for the article's author only in the German form, while leaving the English metadata blank. In the DOI registration process, however, the plugin stopped working (and did not process the subsequent articles) when hitting the described article. Can you reproduce the bug? Thanks in advance. Best Regards, Adrian

Hi @GrazingScientist, thanks for reporting. We would need a bit more information in order to figure out what is happening:
- Which OJS version are you using?
- What is the primary locale of the article? Is it German? All metadata must be entered in the article's primary locale...
- When you try to export only that article, do you get any errors, and what errors exactly?
- If export works fine, what exactly happens when you try to register only that article?

It would be good to use the support forum first, so that we are able to check and test, and to then create an issue once we've figured out that it is a bug or something needs to be done... Best wishes, Bozana
gharchive/issue
2023-01-05T08:35:00
2025-04-01T06:40:01.224415
{ "authors": [ "GrazingScientist", "bozana" ], "repo": "pkp/crossref-ojs", "url": "https://github.com/pkp/crossref-ojs/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1265538997
Trying to get multiple account authorizations instead of just one I'm trying to create a program with spotipy where anyone can log in. I can get an authorization for my own account, but I'm struggling to make it so that when the program runs, it asks for a new login for authorization. Here is the code I have so far for the authorization. Not sure if it actually works. I just don't know how to log in to different Spotify accounts with the correct authorization; I only know how to hard-code my own account. Not sure where to go. Take a look at the examples, especially app.py. That example helped me a lot getting started making a multi-user app with Flask. Note that FlaskSessionCacheHandler is currently not in the pip package; for that, refer to issue #838.
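A minimal sketch of the per-user flow the reply points at, assuming a reasonably recent spotipy; the client credentials and redirect URI are placeholders you would replace with your app's values. Giving each user their own cache file keeps their tokens separate.

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

def spotify_client_for_user(user_id: str) -> spotipy.Spotify:
    """Build a Spotify client whose OAuth tokens are cached per user,
    so each person who runs the program authorizes their own account."""
    auth_manager = SpotifyOAuth(
        client_id="YOUR_CLIENT_ID",          # placeholder
        client_secret="YOUR_CLIENT_SECRET",  # placeholder
        redirect_uri="http://localhost:8888/callback",
        scope="user-read-private",
        cache_path=f".cache-{user_id}",      # one token cache per user
        show_dialog=True,                    # always show the login dialog
    )
    return spotipy.Spotify(auth_manager=auth_manager)

# Usage sketch:
# sp = spotify_client_for_user("alice")
# print(sp.current_user()["display_name"])
```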
gharchive/issue
2022-06-09T03:19:40
2025-04-01T06:40:01.334987
{ "authors": [ "bjbuddyboy", "git-eri" ], "repo": "plamere/spotipy", "url": "https://github.com/plamere/spotipy/issues/827", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2177519934
Pairing mode for Livisi devices

Checklist
[X] I have filled out the template to the best of my ability.
[X] This only contains 1 feature request (if you have multiple feature requests, open one feature request for each feature request).
[X] This issue is not a duplicate feature request of previous feature requests.

Is your feature request related to a problem? Please describe.
It is not a problem.

Describe the solution you'd like
Is it possible to pair new devices directly through HA? It would make sense for devices that are within range of the hub but have no WLAN connection to it. Recently (before March 1st) I set up a smart plug in the basement; without the cloud that would never have worked in the basement bunker. I had LTE there but no WLAN. Via the cloud I was able to set up the plug. In short: if someone wants to set up a device that is outside their home network but within range of the hub, they need an external interface for that. And HA would be perfect for it. It would also make the VPN nonsense obsolete if you use Nabu Casa.

Describe alternatives you've considered
.

Additional context
.

Sorry, I couldn't present my idea in English... too extensive. It's silly anyway; 99% of the users are German. So I don't understand why we communicate in English. German author, German users... why make it complicated when it could be simple.

I've already explained why I want to keep the issues in English: in case this integration is ever merged back into Home Assistant (which isn't that unlikely, given that nobody there takes care of it anymore), it makes sense to document bugs, problems, and implementation in English. For feature requests that may not make as much sense, but this also isn't a wish list: the pairing mode actually goes beyond what I need (where would lots of new devices even come from), so I won't invest any time in it...

All right, if only in English... ok. But I find that unnecessary, because there are many who have problems and don't have a good command of this language. Even you have to admit that.

Unfortunately, I'm also one of those for whom, at 55, that's been a while. That's why I often struggle with errors and hardly report any; in addition, I know my way around Windows but have no clue about Linux. Well, I'll just use a translator then; that works too, people just think you're stupid. But I can understand that you don't want to deal with errors bilingually here. Then you just have to live with some things, or use a German-language forum like Simon's. Maybe the Livisi users there should get together under their own topic. Maybe Simon will set up a section for us. This could still be communicated in the Livisi forum. In any case, I am grateful to Planbet for this great integration. You couldn't use the Livisi stuff at all.
gharchive/issue
2024-03-10T04:28:29
2025-04-01T06:40:01.344336
{ "authors": [ "metaiiica", "olli711", "planbnet" ], "repo": "planbnet/livisi_unofficial", "url": "https://github.com/planbnet/livisi_unofficial/issues/96", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
256528281
Refactor Tests We should be using temp directories rather than having multiple directories for each test. Fixed in #16
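For illustration only (this code is not from PR #16): a tiny pytest sketch of the temp-directory approach, using the built-in tmp_path fixture so each test gets a fresh directory instead of sharing checked-in test directories. The file names are invented for the example.

```python
def test_saves_sample_file(tmp_path):
    """pytest creates a fresh temporary directory per test, so no
    shared on-disk test directories are needed."""
    data_file = tmp_path / "mission_data" / "sample.img"
    data_file.parent.mkdir()
    data_file.write_bytes(b"\x00\x01\x02")

    assert data_file.exists()
    assert data_file.read_bytes() == b"\x00\x01\x02"
```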
gharchive/issue
2017-09-10T18:09:59
2025-04-01T06:40:01.351132
{ "authors": [ "pbvarga1" ], "repo": "planetarypy/planetary_test_data", "url": "https://github.com/planetarypy/planetary_test_data/issues/14", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
516761943
thanks Thanks for the course. Ex_Files_UaR_Git_GitHub.zip You're welcome.
gharchive/pull-request
2019-11-03T00:12:38
2025-04-01T06:40:01.352777
{ "authors": [ "SymphorienLipandza", "planetoftheweb" ], "repo": "planetoftheweb/responsivebootstrap", "url": "https://github.com/planetoftheweb/responsivebootstrap/pull/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1920943231
Use YAML merge list definition

Multiple merge keys were previously supported, but not anymore.

Codecov Report

Merging #537 (e56f176) into main (d1de53d) will decrease coverage by 0.01%. The diff coverage is n/a.

Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #537      +/-   ##
==========================================
- Coverage   97.24%   97.23%   -0.01%
==========================================
  Files           6        6
  Lines         363      362       -1
  Branches       77       45      -32
==========================================
- Hits          353      352       -1
  Misses          8        8
  Partials        2        2
```

see 1 file with indirect coverage changes
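For readers unfamiliar with the merge list form this PR switches to, here is a quick illustration using Python and PyYAML (not code from the PR): the aliases go in a single sequence under one << key, instead of repeating the merge key.

```python
import yaml  # PyYAML

DOC = """
defaults: &defaults
  retries: 3
extra: &extra
  timeout: 10

# single merge key whose value is a *list* of aliases,
# rather than two separate '<<' keys:
job:
  <<: [*defaults, *extra]
  name: build
"""

print(yaml.safe_load(DOC)["job"])
# {'retries': 3, 'timeout': 10, 'name': 'build'}
```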
gharchive/pull-request
2023-10-01T19:57:19
2025-04-01T06:40:01.356090
{ "authors": [ "codecov-commenter", "plannigan" ], "repo": "plannigan/columbo", "url": "https://github.com/plannigan/columbo/pull/537", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
447720681
studiesStudyDbIdObservationvariablesGetAsync()

When I call the Java client's StudiesApi.studiesStudyDbIdObservationvariablesGetAsync() with studyDbId == 1001 I get:

```
java.lang.IllegalArgumentException: missing discriminator field: <>
    at io.swagger.client.JSON.getDiscriminatorValue(JSON.java:83)
    at io.swagger.client.JSON.access$000(JSON.java:41)
    at io.swagger.client.JSON$2.getClassForElement(JSON.java:61)
    at io.gsonfire.gson.TypeSelectorTypeAdapterFactory$TypeSelectorTypeAdapter.read(TypeSelectorTypeAdapterFactory.java:65)
    at io.gsonfire.gson.NullableTypeAdapter.read(NullableTypeAdapter.java:36)
    at io.gsonfire.gson.HooksTypeAdapter.deserialize(HooksTypeAdapter.java:86)
    at io.gsonfire.gson.HooksTypeAdapter.read(HooksTypeAdapter.java:54)
    at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.read(TypeAdapterRuntimeTypeWrapper.java:41)
    at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:82)
    at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:131)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:222)
    at com.google.gson.Gson.fromJson(Gson.java:927)
    at com.google.gson.Gson.fromJson(Gson.java:892)
    at com.google.gson.Gson.fromJson(Gson.java:841)
    at io.swagger.client.JSON.deserialize(JSON.java:157)
    at io.swagger.client.ApiClient.deserialize(ApiClient.java:710)
    at io.swagger.client.ApiClient.handleResponse(ApiClient.java:913)
    at io.swagger.client.ApiClient$1.onResponse(ApiClient.java:879)
    at com.squareup.okhttp.Call$AsyncCall.execute(Call.java:177)
    at com.squareup.okhttp.internal.NamedRunnable.run(NamedRunnable.java:33)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
    at java.lang.Thread.run(Thread.java:818)
```

I have found an acceptable solution to this problem in the Java client code generated from BrAPI V2.0. The generated code contains a class JSON.java which handles the JSON serialization and de-serialization of the JSON objects. The problem is that when it is de-serializing an object, it expects an extra "discriminator" field to handle Java type resolution. If this class had been responsible for the original serialization, this would not be a problem; however, since the JSON is coming from a remote server, the "discriminator" field is missing, just as the error message indicates. The easiest solution is to modify JSON.java and tell it how to determine the correct class without the need for an independent discriminator field. For example, Study.java has an extra field studyDbId and StudyNewRequest.java does not have this field. Based on the presence or absence of studyDbId in the response JSON, I can determine which class to instantiate.

@peterrosario if and when you are ready to move to V2.0 for your Java client, let me know and I can give you my code changes for JSON.java
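The actual fix lives in the generated Java JSON.java; to make the idea concrete, here is a small Python sketch of the same dispatch-by-field-presence approach, with made-up stand-ins for the Study and StudyNewRequest models.

```python
from dataclasses import dataclass

@dataclass
class StudyNewRequest:          # stand-in for the generated model
    study_name: str

@dataclass
class Study(StudyNewRequest):   # stand-in; carries the extra identifier
    study_db_id: str

def deserialize_study(payload: dict):
    """Resolve the concrete type from the payload itself rather than a
    separate discriminator field: if 'studyDbId' is present it is a
    persisted Study, otherwise a StudyNewRequest."""
    if "studyDbId" in payload:
        return Study(study_name=payload["studyName"],
                     study_db_id=payload["studyDbId"])
    return StudyNewRequest(study_name=payload["studyName"])

print(deserialize_study({"studyName": "Trial A", "studyDbId": "1001"}))
print(deserialize_study({"studyName": "Trial B"}))
```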
gharchive/issue
2019-05-23T15:10:37
2025-04-01T06:40:01.360865
{ "authors": [ "BrapiCoordinatorSelby", "peterrosario" ], "repo": "plantbreeding/API", "url": "https://github.com/plantbreeding/API/issues/377", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
124303059
undefined method `utc' for "2015-12-30 04:14:49 UTC":String

```ruby
# Checks if the user confirmation happens before the token becomes invalid
# Examples:
#
#   # allow_unconfirmed_access_for = nil
#   confirmation_period_valid?   # will always return true
#
def confirmation_period_valid?
  self.class.allow_unconfirmed_access_for.nil? || (confirmation_sent_at && confirmation_sent_at.utc >= self.class.allow_unconfirmed_access_for.ago)
end
```

pls see file https://github.com/vishyme/Estydemo.git
gharchive/issue
2015-12-30T04:41:32
2025-04-01T06:40:01.370374
{ "authors": [ "vishyme" ], "repo": "plataformatec/devise", "url": "https://github.com/plataformatec/devise/issues/3878", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
501976143
Suggestion - devise module settings in initializer not model

Has anybody brought up why devise modules that affect more than just models are placed in models, not the initializer? It is weird that a line in the model directly affects routes. Might it not be better to put modules in the devise initializer, so that instead of

```ruby
class User < ApplicationRecord
  devise :database_authenticatable, :validatable
end
```

it would be something like

```ruby
Devise.setup do |config|
  config.devise_model User, :database_authenticatable, :validatable
end
```

Just food for thought.

While I would admit the coupling is a bit awkward as you noted, the catalyst is the scoping when using multiple models (resource types). The alternative would be having domain requirements (Users are recoverable but Admins are not, etc.) defined away from the Domain Model.

To address the scoping issue, I propose just the model name as the first parameter (as in the example). We don't even need to change that much logic under the hood (as a first step), since it should be possible to inject functionality into models exactly the same way it is done now, just doing it behind the scenes. Another added bonus is that this way it is possible to configure devise per resource type, as the configuration logic can be scoped to serve a specific resource (for example, different unlock strategies for User and Admin, etc.).

My understanding was that it actually gets set up via the devise_for call in the routes file (the right place). In turn, yes, it looks at the model and detects which modules are loaded, and then loads the appropriate (default) routes if no overrides are given. That said, I think your premise of "a line in model directly affects routes" is incorrect. It is more specifically: a line in routes checks the model (which is configured in the model file) to create routes. Simply put, the model definition does not create routes, but it informs the creation of routes depending on the modules loaded.

> Simply put, the model definition does not create routes, but it informs the creation of routes depending on the modules loaded.

That is correct.

> Another added bonus is that this way it is possible to configure devise per resource type, as the configuration logic can be scoped to serve a specific resource (for example, different unlock strategies for User and Admin, etc.)

This would require a bigger change, right? I don't think that simply changing where the code gets injected in the model would also make this possible, since some of the configurations are global.

Yes, but my suggestion to move the configuration from model to initializer just opens this possibility for future improvements. I do not suggest doing all of this in one go.

> Yes, but my suggestion to move the configuration from model to initializer just opens this possibility for future improvements. I do not suggest doing all of this in one go.

I do think the configuration per resource could be done with the current design too. I understand what you're saying about code organization, but I'm struggling to see why your proposal would be more flexible than the way it is today.

> The method delete_all_data does not actually delete data; it just calls another method.

IMO, that's true. The method delete_all_data has a dependency on the class name SomeModel and its method delete_all_data. By dependency, I mean that this method would have to change whenever SomeModel changes its name, the parameters, or the name of its own delete_all_data method. But if SomeModel changes its implementation of delete_all_data while keeping the public API the same, then the parent method does not have to change at all.

The 'best practice' in rails-centric libraries is convention over configuration. The convention (the 90% use case) is that if someone has a certain module enabled for their devise resource, Recoverable as an example, they also want to have the routes needed to make that work. If you really need to de-couple these concerns, you may do so at the model and routes level, via overrides to devise_for, respectively. The initializer is for global-level configuration (in this case, global denotes applying to all the devise resources in use). If you want it to apply only to one resource type, use options in the model or routes definition calls. You can disable creating routes altogether and create them manually if you wish as well.

I would also say as an aside that it seems the concerns you have with de/coupling may have to do with an approach that doesn't see the library as a container [black box, if you will] itself. The fact that devise informs itself through inference (via the model) rather than hard-coded configuration in the routes is just a choice for convenience of testing/implementation and keeping the library DRY. Do you have a specific use case / failing test case that you can't accomplish with the current form?

No, it was just something that I thought would be a better way of configuring devise while looking for the reason why some devise routes weren't being generated. As I was continuously searching for something in the model itself, I realized that it seems somewhat odd to look for routes config in the model. This was just to present a potential alternate approach to the configuration. There are no problems or cases that I've come across so far where this approach would fail to deliver. :)

> As I was continuously searching for something in the model itself, I realized that it seems somewhat odd to look for routes config in the model.

@LeKristapino That's a fair point; it would make it easier to find out how things work. But this kind of change requires work both from maintainers and users of the gem. We want to make it easier for the users to upgrade to new versions; that's why we'll leave it as it is. Thanks again for bringing this up and for the discussion.
gharchive/issue
2019-10-03T10:16:31
2025-04-01T06:40:01.381892
{ "authors": [ "LeKristapino", "colinross", "tegon" ], "repo": "plataformatec/devise", "url": "https://github.com/plataformatec/devise/issues/5147", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1813311771
FR: Possibility to Sync the File System's Modification Date with the Yaml Date

Is Your Feature Request Related to a Problem? Please Describe.
This concerns YAML and file system modification dates that become unsynced with each other due to moving/downloading the vault to a new location. Running "Lint all files" / "Lint folder" will update all files because the file system mod date is later than the YAML mod date, even though the file's content hasn't actually been modified.

Describe the Solution You'd Like
Is it in theory possible to programmatically set the file system's mod date from within an Obsidian plugin? In that case it would be nice to have a function that one runs only once the file system mod dates get out of sync. The function would go through all files in the vault and make sure that the file mod date is set to the current value in whatever YAML key represents the modification date.

Describe Alternatives You've Considered
Is there any stand-alone app? But it would have to run on all platforms.

Additional Context
Having run the "sync mod dates" function would allow "Lint all files" to just update the YAML mod dates on the files that were actually modified. This relates to #386.

There is nothing that can be done by the Linter in regards to this. However, you may be able to use Custom Commands to handle it. But to be clear, the Linter cannot do this because Obsidian does not allow for it in their API. There probably is a way to do so, but those functions are not added to an API or anything. It would take some work and testing on your part to get it working.

Would it be possible to have a "Lint all Files Modified Today" feature in Linter?

It would be possible so long as the date modified is considered the source of truth. It would check if the date modified was today and, if so, lint those files. Does linting on file change (#799) and autosave (#392) not address this issue? I am assuming the latter handles your scenario. Looks like I linked to the wrong issue; I was referring to #183 for autosave. It would lint the file after a change is made to it (thus no need for running a lint on the current file manually). As for linting a file when the active file changes, that makes sense. I was pretty sure it would not meet your needs, but thought I would mention it as well.

> I was referring to #183 for autosave. It would lint the file after a change is made to it.

How do you detect changes made to a file?

I listen to the current active file's editor content (source mode displayed content). If the editor says something changed, then I assume something changed.

Ok. Autosave as in #183 doesn't work for me, since I don't want the file to suddenly change. What would work is if you manage a list (as the Recent Files plugin manages a list of recent files in a json) where you add each file as soon as a change occurs. This would be a list of files pending linting. Then make a command available that lints all files in the list and then clears it. Then there's no need to iterate through all files as suggested above. But this list of files pending linting might cause problems when multiple clones of the same vault exist where files haven't been synced.

I can say that I would not add that data to a json file if I were to add it, so there would be no issue with syncing. It sounds like you are looking for a feature that currently has not been requested, so I can put it in the backlog and see what interest current users have in this feature. I am open to PRs in the meantime, but at this time I don't see myself working on this anytime soon.

Sounds good.

I am glad that you found a solution.
gharchive/issue
2023-07-20T07:03:49
2025-04-01T06:40:01.390893
{ "authors": [ "karmeye", "pjkaufman" ], "repo": "platers/obsidian-linter", "url": "https://github.com/platers/obsidian-linter/issues/810", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }