Dataset schema:
- id: string, length 4 to 10
- text: string, length 4 to 2.14M
- source: string, 2 classes
- created: timestamp[s], from 2001-05-16 21:05:09 to 2025-01-01 03:38:30
- added: string date, from 2025-04-01 04:05:38 to 2025-04-01 07:14:06
- metadata: dict
863419975
Add username & bio step Closed by #112
gharchive/issue
2021-04-21T03:45:39
2025-04-01T04:35:04.550333
{ "authors": [ "Robinnnnn", "kaitoo1" ], "repo": "mikeydub/gallery", "url": "https://github.com/mikeydub/gallery/issues/108", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
703403484
Support plural name for mongodb collections Is your feature request related to a problem? Please describe. Does this library expect MongoDB collection names to be singular? If so, please consider allowing developers to use plural-named collections. Let's say you already have a database which has books and authors collections. A simple solution would be to name your entities like class Authors instead of class Author. The problem is when you define the book.author relationship, you might have:

```ts
class Books {
  // ...
  @ManyToOne()
  author: Authors; // <- plural name looks confusing
  // ...
}
```

So using plural classes wouldn't be a good option. Describe the solution you'd like Either the book or books collection can be used and handled automatically. Or the table naming convention choice (plural/singular) can be set through options upon MikroORM.init or somewhere else. Describe alternatives you've considered I have checked the Naming Strategy page, so one can implement a solution by themselves. Additional context I've been testing out this great library for just a few hours. I searched this repo but I couldn't find any related issues. I'm sorry if I've missed any doc or misunderstood something. You can use any name, just specify it in the @Entity({ collection: 'my-custom-name' }) decorator, or define a custom naming strategy (you can extend the MongoNamingStrategy and override just the one method). No plans to do some pluralization magic automatically (pluralization is not that simple a thing, I've been there... :]). Great, thanks. I agree that pluralization would be a headache and the overriding decorator is enough for my case. Thanks!
gharchive/issue
2020-09-17T09:03:03
2025-04-01T04:35:04.572717
{ "authors": [ "B4nan", "toshimo" ], "repo": "mikro-orm/mikro-orm", "url": "https://github.com/mikro-orm/mikro-orm/issues/846", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2055063991
jobclient init fails with gnumake 4.4 and make --jobserver-style=fifo Currently, the jobclient fails to parse MAKEFLAGS like -j2 --jobserver-auth=fifo:/tmp/asdf

```
$ ./py/test/test.sh
+ make -j2
python3 test.py "jobserver on"
debug jobclient.py 781944 2023-12-24 11:01:16.407: init: MAKEFLAGS: -j2 --jobserver-auth=fifo:/tmp/GMfifo781943
debug jobclient.py 781944 2023-12-24 11:01:16.410: init: fdRead = None, fdWrite = None, maxJobs = 2, maxLoad = None
debug jobclient.py 781944 2023-12-24 11:01:16.410: init failed: no fds
```

make --jobserver-style=fifo is the default since gnumake 4.4. Quickfix: make --jobserver-style=pipe From the gnumake docs: --jobserver-style=[style] Chooses the style of jobserver to use. This option only has effect if parallel builds are enabled (see Parallel Execution). On POSIX systems style can be one of fifo (the default) or pipe. On Windows the only acceptable style is sem (the default). This option is useful if you need to use an older version of GNU make, or a different tool that requires a specific jobserver style. See also https://github.com/spack/spack/issues/33014 https://github.com/rust-lang/jobserver-rs/issues/47 jobserver-style fifo should work now in javascript and python Great. I'll set up a GH Actions workflow that will publish to PyPI when a py/... tag is pushed https://pypi.org/project/gnumake-tokenpool/0.0.6/ (created by the new GH Actions workflow) Closing as "should be fixed".
gharchive/issue
2023-12-24T11:02:34
2025-04-01T04:35:04.579435
{ "authors": [ "milahu", "mkoeppe" ], "repo": "milahu/gnumake-tokenpool", "url": "https://github.com/milahu/gnumake-tokenpool/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2007182627
feat: Add Prometheus metrics You can visit localhost/prometheus/metrics to view the metrics. Currently exposes http, db and cache metrics from django-prometheus. You can also use the package to expose custom metrics relevant to shynet, but this is not done in this PR. HTTP metrics are great for insights into the Shynet instance's status. Dashboards that can be created from this can be previewed with the images in Django-mixin, if you don't have a running Grafana instance. Maybe it should be behind a flag? Hey! Thanks for the PR — and I'm very sorry for the long delay in my review. Unfortunately I think we're better off not introducing another dependency here; while I can see why Prometheus metrics would be useful, I would prefer not to introduce the complexity of introducing a mixin.
gharchive/pull-request
2023-11-22T22:13:11
2025-04-01T04:35:04.582947
{ "authors": [ "adinhodovic", "milesmcc" ], "repo": "milesmcc/shynet", "url": "https://github.com/milesmcc/shynet/pull/301", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
173455186
Add type logic gates Within our code base I've found it useful to sometimes have the ability to do OR, AND, NOT, XOR at the type level for evidences. AND is usually fine as you simply need 2 parameters, but OR, XOR and NOT can sometimes cause a few issues. I'm unsure whether these belong in shapeless at all and this is merely a speculative PR. Current coverage is 74.25% (diff: 85.00%). Merging #631 into master will increase coverage by 0.59%

```
             master    #631    diff
Files            68      68
Lines          3619    3601     -18
Methods        2943    2929     -14
Messages          0       0
Branches        139     132      -7
+ Hits         2666    2674      +8
+ Misses        953     927     -26
Partials          0       0
```

Powered by Codecov. Last update 739df03...f00cb12 Closing because I've changed the original code, and the use cases in which it helps are fairly niche.
gharchive/pull-request
2016-08-26T13:38:46
2025-04-01T04:35:04.586284
{ "authors": [ "codecov-io", "yilinwei" ], "repo": "milessabin/shapeless", "url": "https://github.com/milessabin/shapeless/pull/631", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1972950369
Add a service to control "follow me" feature Add a custom service to control the "follow me" feature introduced in https://github.com/mill1000/midea-msmart/pull/91 Expose the current "follow me" state in an attribute of the climate entity. The justification for using a service over a switch is to obscure the feature slightly due to the following reasons. There is no capability to determine if this function exists. Without a capability we can't decide when to create a "follow me" switch entity intelligently. Otherwise we would hide the feature behind an integration option. Usage of this feature isn't straightforward and is very device specific. Close #49 and related to https://github.com/mill1000/midea-msmart/issues/89 Hi there Mill1000, updated the component to head and also msmart-ng but i could not find any service exposed in HA. Should i reconfigure the AC to allow it to appear? I did a reload in one of the objects, please find below the debug log. Thanks reconfig.log Your system still appears to be running msmart-ng 2023.10.0 nov 02 15:56:15 note hass[11155]: 2023-11-02 15:56:15.077 INFO (MainThread) [custom_components.midea_ac] Starting midea-ac-py. Using msmart-ng version 2023.10.0. hi, will reboot and test again, files in FS are tee right versions, I am unsure about what is caching the scripts got that sorted out - my mistake - had replaced some files testing the fork and gentoo ebuild could not take care of them (obviously). Removed all the cruft and now is good with 2023.11.0 find below: note /usr/lib/python3.11 # msmart-ng query --capabilities --debug --auto 192.168.51.11 DEBUG:asyncio:Using selector: EpollSelector INFO:msmart.cli:Discovering 192.168.51.11 on local network. DEBUG:msmart.discover:Discovery sent to 192.168.51.11:6445. DEBUG:msmart.discover:Discovery sent to 192.168.51.11:20086. DEBUG:msmart.discover:Waiting 5 seconds for responses... DEBUG:msmart.discover:Discovery response from 192.168.51.11: 837000c8200f00005a5a0111b8007a8000000000ae0b0117020b171485bb0200008c00000000000000000180000000004b348d72d8889acb7240f464358f60cbc820cce3bcf8e9a84ccb3d 138590a2f1c872b27cb206d16a913c10599e1df2ab6bc94ee8cdf7c791796aa54d53e8bd1b6bb051d02cd1aa372d6943fcab72a2baaf07247d607980f5ea909c34540b991f4bdfb3e16e33d88768cc4c3d0658937d0bb19369bf0317b24d3a4de9e6a131062e6e76707 4840d2701668a6417b7a4e95f54dd1eacc3d0512c34ad6728bfe1f8 DEBUG:msmart.discover:Decrypted data from 192.168.51.11: 0b33a8c02c19000030303030303050303030303030305131383846324244343146313143303030300b6e65745f61635f463131430000870002000000000000000000ac00acac0000000088f2bd 41f11c150029092122000301000000000000000000000000000000000000000000000000000000000000000000 INFO:msmart.cloud:Using Midea cloud server: https://mp-prod.appsmb.com (China: False). INFO:httpx:HTTP Request: POST https://mp-prod.appsmb.com/mas/v5/app/proxy?alias=%2Fv1%2Fuser%2Flogin%2Fid%2Fget "HTTP/1.1 200 " DEBUG:msmart.cloud:API response: {"code":"0","msg":"ok","data":{"loginId":"9083a440-1e41-436a-b160-537873e7"}} DEBUG:msmart.cloud:Received loginId: 9083a440-1e41-436a-b160-537873e7 DEBUG:msmart.discover:Discovered 1 devices. 
INFO:httpx:HTTP Request: POST https://mp-prod.appsmb.com/mas/v5/app/proxy?alias=%2Fmj%2Fuser%2Flogin "HTTP/1.1 200 OK" DEBUG:msmart.cloud:API response: {"code":0,"msg":"成功","data":{"randomData":"fb0dbdb9cf7fe62884c863748cde56ac69bab89beffdbc147e211b48bf2dab35","uid":"83947250d2448ffdf02e111c74bad51e","accountId":"1023665078"," nickname":"midea","mdata":{"tokenPwdInfo":{"tokenPwd":"ef39928683934880aa2c449686eb1b9b","expiredDate":1701558077409,"createDate":1698966077409},"userInfo":{"sourceId":"mj_12345","empId":"5456771537545216","addr ess":"","gender":"0","mobile":"midea@mailinator.com","userDeptInfoList":null,"extras":null,"nameEn":null,"employeeNumber":null,"headPhoto":null,"uid":"83947250d2448ffdf02e111c74bad51e","name":"midea@mailinator.c om","email":null},"doDeviceBind":null,"accessToken":"T1ieepysjif44fd26","signUnlockEnabled":null},"accessToken":"d91b46cd982dcfc0e57177c589f882446a61c02649cdeb63351ac3af359f0c5e","userId":"89056511576065","email ":"midea@mailinator.com"}} DEBUG:msmart.cloud:Received accessToken: T1ieepysjif44fd26 DEBUG:msmart.discover:Fetching token and key for udpid '476c45dc37a93fa5c7f0f7edd247d8ca' (little). INFO:httpx:HTTP Request: POST https://mp-prod.appsmb.com/mas/v5/app/proxy?alias=%2Fv1%2Fiot%2Fsecure%2FgetToken "HTTP/1.1 200 OK" DEBUG:msmart.cloud:API response: {"code":"0","msg":"ok","data":{"tokenlist":[{"udpId":"476c45dc37a93fa5c7f0f7edd247d8ca","key":"c4dfde800bf44d4aa3edbcc01056dbd4336accec31a64ecebde923d1797f6085","token":"95099634 B1804A967C78A9BEACDCDE29F992016D15DADDBDC9E5F552871CFAC10A425CBD9D0C2A2327785BE852BC2B2D1006175DEE7203BB86285006D3E45D57"}]}} INFO:msmart.lan:Creating new connection to 192.168.51.11:6444. DEBUG:msmart.lan:Connected to 192.168.51.11:6444. INFO:msmart.lan:Authenticating with 192.168.51.11:6444. DEBUG:msmart.lan:Sending data to 192.168.51.11:6444: 837000402000000095099634b1804a967c78a9beacdcde29f992016d15daddbdc9e5f552871cfac10a425cbd9d0c2a2327785be852bc2b2d1006175dee7203bb86285006d3e45d57 DEBUG:msmart.lan:Received data from 192.168.51.11:6444: 83700005200f00204552524f52 DEBUG:msmart.discover:Fetching token and key for udpid '5f54dd1eacc3d0512c34ad6728bfe1f8' (big). INFO:httpx:HTTP Request: POST https://mp-prod.appsmb.com/mas/v5/app/proxy?alias=%2Fv1%2Fiot%2Fsecure%2FgetToken "HTTP/1.1 200 OK" DEBUG:msmart.cloud:API response: {"code":"0","msg":"ok","data":{"tokenlist":[{"udpId":"5f54dd1eacc3d0512c34ad6728bfe1f8","key":"4377ea0938354e8d9b290cb0d57f92342be3419105d8436f8b73cc0a80de419f","token":"020BDC3A 4A203192E9F846C37AD4B2CB836076778129D0A635CCFF2872C6ABCA44EDC8E1E758FC8AFE1D017D8E478547F169BBE8A65588EA4A3EC6230BEE7B3B"}]}} INFO:msmart.lan:Authenticating with 192.168.51.11:6444. DEBUG:msmart.lan:Sending data to 192.168.51.11:6444: 8370004020000001020bdc3a4a203192e9f846c37ad4b2cb836076778129d0a635ccff2872c6abca44edc8e1e758fc8afe1d017d8e478547f169bbe8a65588ea4a3ec6230bee7b3b DEBUG:msmart.lan:Received data from 192.168.51.11:6444: 8370004020018d4e80118e3add882050ae54c680590e1af1fbc04e56a9a088f10f95a4fba0c6896eb13a6fb79384f768e4800b9c38052d1eabb9326444b271e5442d2ea6e17972dc INFO:msmart.lan:Authentication successful. 
Expiration: 2023-11-03T11:01:19, Local key: 185c143fb2c837ba473cf484e1c1e900b4a9baa4ee47b95ab5d6b63f04913baa DEBUG:msmart.base_device:Sending command to 192.168.51.11:6444: aa21ac8d000000000003418100ff03ff000200000000000000000000000003016971 DEBUG:msmart.lan:Sending packet to 192.168.51.11:6444: 5a5a0111680020000000000038140117020b171485bb0200008c0000000000000000000000000000b8436dd15e84d5a4fc6fbf77d2b12486e10c552981b23022cb71ea0fc54dc25ecfa0ce55888a c57fc42a7eacc3285d3727f32b0c5f040e93388a8977b7ea50f6 DEBUG:msmart.lan:Sending data to 192.168.51.11:6444: 8370008e20663c44adae338d18860f10f4dff39c2763c0c0c57f9454a8e2daae1c1418aa584842d1353b5e0d3f076d2a7a87cb2250d6a934646ce6a30a0be1f152437707f868fe67734761b45274e8 acdf93a1f966cb6accd4cb4c30059fd311b4f08b31efebc2475d114f3aff8d3eda66dd9500295d86fa885ceedd2dbe394b1e107da49feabdc1c2fca685405e04f1eb05bf08c26a DEBUG:msmart.lan:Received data from 192.168.51.11:6444: 8370008e2063f9445b700ff07d33d1ac77b8eef36fdb19f255501f995176b87a5a89e76573dc3d1257e4c1032c06bc6b76634330cb21deec05f90e2b8a18d7eb087ba3684e8832102e0bc3783e9 00dad0d4c7cc70f2a41ff9e58bb0a4911ff0d61866231595917274225d29eab09358618f09187358572c4f23225883abc1922fe8a11a357a5ebe5d0c94f73cb056539afe4d3c0f2c5 DEBUG:msmart.lan:Received packet from 192.168.51.11:6444: 5a5a0111680020800000000073150117020b171485bb0200008c00000000000000000180000000002cb0ab226b623f535a159ed9890d14dd23b3b0cbeea13b1a82f50b3ce8d13acbdb02087fc 130190d2c92515b84178f3231065738bf7e2169d9930b1e3060ff67 DEBUG:msmart.lan:Received response from 192.168.51.11:6444: aa22ac00000000000303c00087667f7f0030000000564c00000000000000000069b98d DEBUG:msmart.base_device:Response from 192.168.51.11:6444 in 0.590000 seconds. DEBUG:msmart.device.AC.command:State response payload: c00087667f7f0030000000564c00000000000000000069 INFO:msmart.cli:Querying device capabilities. DEBUG:msmart.base_device:Sending command to 192.168.51.11:6444: aa0faca3000000000003b50111022fa7 DEBUG:msmart.lan:Sending packet to 192.168.51.11:6444: 5a5a011158002000000000000f150117020b171485bb0200008c000000000000000000000000000008863383a51f6d4436ea7dd0f0c64916915c618b02ed2747fad73f2ffa0db1d31f965ead4a26 2ef67a53332f7237fcbc DEBUG:msmart.lan:Sending data to 192.168.51.11:6444: 8370007e20668945f431e6f245b739ef677db8580e771426d13d485816b03f4febfeb937312dc5bec1c8dbec36dc3ca1c53a56e90a02560f4be97f1e8bfad9020a0c10515ad37452f2c59ca6c6f5b7 03d80b576fc53a0faf4b4fed738ffa79c67ebe20179cd1a9529e41024d7173571b5c2c56314b07f167525fee338eebd21ea8fa5487c383 DEBUG:msmart.lan:Received data from 192.168.51.11:6444: 8370008e2063b32b8298dd85d50948b474cc911615ca61ba060b36a9ad071e64bb56614ce1e08b5238fe20ec9885e236155dc6c6f7c9ea3c1645008a71a3b2ed77e883dd6789cd1ce97a3ecfe4d 451a7816354049257816af971a4e0883e87bbab44884b82e904d6905efffe30a8d3aa934b0b51f34b4879ed8284c5aa5bedceab32c92a3279b0c2ef224157da0e696d17c2f45887a7 DEBUG:msmart.lan:Received packet from 192.168.51.11:6444: 5a5a0111680020800000000032950117020b171485bb0200008c00000000000000000180000000002ed30f7318deb2150985a069e6df704fc6c5350d624b48edff5254e941e2aad1eb8d98ade 0f90bb5f67d9331fb42c16d300448b8f086c48abe2ba03ff12b6670 DEBUG:msmart.lan:Received response from 192.168.51.11:6444: aa29ac00000000000303b5071202010113020101140201011502010116020101170201001a020101dedb DEBUG:msmart.base_device:Response from 192.168.51.11:6444 in 0.450000 seconds. 
DEBUG:msmart.device.AC.command:Capabilities response payload: b5071202010113020101140201011502010116020101170201001a020101
DEBUG:msmart.device.AC.command:Supported capabilities: {'eco_mode': True, 'eco_mode_2': False, 'freeze_protection': True, 'heat_mode': True, 'cool_mode': True, 'dry_mode': True, 'auto_mode': True, 'swing_horizontal': True, 'swing_vertical': True, 'power_stats': False, 'power_setting': False, 'power_bcd': False, 'filter_notice': False, 'filter_clean': False, 'turbo_heat': True, 'turbo_cool': True}
INFO:msmart.cli:{'supported_modes': [<OperationalMode.FAN_ONLY: 5>, <OperationalMode.DRY: 3>, <OperationalMode.COOL: 2>, <OperationalMode.HEAT: 4>, <OperationalMode.AUTO: 1>], 'supported_swing_modes': [<SwingMode.OFF: 0>, <SwingMode.HORIZONTAL: 3>, <SwingMode.VERTICAL: 12>, <SwingMode.BOTH: 15>], 'supported_fan_speeds': [<FanSpeed.LOW: 40>, <FanSpeed.MEDIUM: 60>, <FanSpeed.HIGH: 100>, <FanSpeed.AUTO: 102>], 'supports_custom_fan_speed': False, 'supports_eco_mode': True, 'supports_turbo_mode': True, 'supports_freeze_protection_mode': True, 'supports_display_control': False, 'supports_filter_reminder': False, 'max_target_temperature': 30, 'min_target_temperature': 16}
Working - called the service again after a reboot and it works as expected - will do some automation with that! :) Appreciated again!! Just a note, perhaps worth adding to the functionality description: I did the automation. It seems that to activate it for the first time, I have to issue a "follow me" (Fup) disable and then enable it. If I try to enable it straight away, the AC beeps but nothing happens. I'd bet it might be dependent on the AC model... Cheers,
gharchive/pull-request
2023-11-01T19:14:37
2025-04-01T04:35:04.614403
{ "authors": [ "mill1000", "tabascoz" ], "repo": "mill1000/midea-ac-py", "url": "https://github.com/mill1000/midea-ac-py/pull/50", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
62790721
extract lib/whiteSpace.js into its own module These methods are pretty solid (it's been a long time since we changed anything) and are extremely useful for plugin authors - we should definitely extract them. Maybe name it rocambole-ws or rocambole-whitespace so it's similar to rocambole-token. :+1: to rocambole-whitespace Pushed rocambole-whitespace v1.0.0 to npm! https://www.npmjs.com/package/rocambole-whitespace
gharchive/issue
2015-03-18T20:25:08
2025-04-01T04:35:04.617234
{ "authors": [ "jzaefferer", "millermedeiros" ], "repo": "millermedeiros/esformatter", "url": "https://github.com/millermedeiros/esformatter/issues/278", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1083841495
How to support local images on a Next.js deploy? I have been trying to get this image optimizer to work for some of my side projects, and it works for external URLs with the domain added to the Terraform and Next.js config files, but I also use images inside my public folder and they are not working. The URL params look like this for these requests: ?url=/static/learn/image-1.png&w=3840&q=75. But the endpoint just throws the error "url" parameter is valid but upstream response is invalid, which I've seen when the domain is not added. I suspect one workaround for this would be to move to absolute URLs for these, and then add the project URL to the Next.js config, but I don't want to go this route because locally imported images have a nice placeholder blur to them, and switching to absolute URLs would also look ugly. How does one work this out? I see that the Next.js example uses assets.vercel.com, which I've already tried adding, but it still doesn't work. I thought that this issue might only happen locally, so I pushed the local branch to Vercel, but the issue is still the same, because the URL param above is the same, I guess. I love this custom image optimizer you guys have made; it is so simple to get started with, and compared to the current pricing for Vercel image optimization it makes much more sense. But please help me out with this!! Hi, yes our Next.js example should be exactly what you are looking for. Looking at the code from the image-optimizer, the error message appears for absolute paths only when the origin responds with a status code other than 200: https://github.com/vercel/next.js/blob/3e83205eab208a5b4f53d4590e5ba33f42564a39/packages/next/server/image-optimizer.ts#L227 Can you check the origin file at <your-domain.com>/static/learn/image-1.png and verify that it responds with a 200 HTTP status? Hey, thanks for replying so quickly! Yes, the image is indeed available under my own website URL; I've rechecked it by navigating to the URL you suggested with the real website domain, so something like this: www.mywebsitedomain.com/static/learn/image-1.png. Do the images need to be under the CloudFront domain that is generated by Terraform? If yes, how can I make my /public images available under that??
Since this behaviour is very confusing I plan to bring a new option to the module where you can define the baseDomain for absolute paths directly from the Terraform config instead of relying on the Host header. Actually there is a bug in how the Host of the original image is determined. I created a new example with next export and S3 to get more insight and saw a similar problem to what you reported. Will try to get a fix published by tomorrow: #95 Hey, thanks for being so humble through it. One more question I've is there a way this module can support the locally imported images as well? It is a recent feature in nextjs and tbh it is quite handy, the syntax for it is something like this: import Image from 'next/image' import profilePic from '../public/me.png' function Home() { return ( <> <h1>My Homepage</h1> <Image src={profilePic} alt="Picture of the author" // width={500} automatically provided // height={500} automatically provided // blurDataURL="data:..." automatically provided // placeholder="blur" // Optional blur-up while loading /> <p>Welcome to my homepage!</p> </> ) } Although it's not a big deal, but when importing images like this we get the nice placeholder blur, and width and height automatically. By the way I did do a little bit of debugging and the reason this way of importing is not working at the moment is because nextjs is appending some hash value after the image file path, so something like: "image-2.1e15263b" (where original name of the file is just image-2). Yes, good point 👍 I will update the Next.js examples to also include samples for the blur option. My hope is that they work out of the box once the bug is fixed, but we'll see 😅 Okay I've now released v12.0.1 that should fix the issue. From my understanding it should work for you after upgrading to this release without further configuration. If it still doesn't work, you could try the new next_image_base_origin option where you can statically define the base URL where absolute image paths should be resolved to: module "next_image_optimizer" { source = "milliHQ/next-js-image-optimization/aws" + next_image_base_origin = "https://www.mywebsitedomain.com" } Feel free to reopen or comment to this issue if the problem still exists.
gharchive/issue
2021-12-18T14:14:33
2025-04-01T04:35:04.628667
{ "authors": [ "jaisharx", "ofhouse" ], "repo": "milliHQ/terraform-aws-next-js-image-optimization", "url": "https://github.com/milliHQ/terraform-aws-next-js-image-optimization/issues/94", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
971290684
[Suggestion] add CN RC4 Release note Fixed, closing the issue
gharchive/issue
2021-08-16T01:44:27
2025-04-01T04:35:04.665220
{ "authors": [ "LocoRichard" ], "repo": "milvus-io/milvus-docs", "url": "https://github.com/milvus-io/milvus-docs/issues/368", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1808442573
Issue with running Milvus-Lite in a Databricks notebook Environment Milvus version: 2.2.11 Deployment mode: milvus-lite in Databricks SDK version (e.g. pymilvus v2.0.0rc2): pymilvus 2.2.0.dev23 OS (Ubuntu or CentOS): Ubuntu 22.04.2 LTS in Azure Databricks Databricks Runtime: 13.1 ML Note: Databricks Spark reserves ports 40000-40002.

```python
from milvus import default_server, debug_server, MilvusServer, MilvusServerConfig
from pymilvus import connections, utility

MilvusServerConfig.RANDOM_PORT_START = 50000
server = MilvusServer(debug=True)
server.start()
connections.connect("debug", host='127.0.0.1', port=server.listen_port)
utility.list_collections(using='debug')
```

I get this error: [list_collections] retry:10, cost: 60.00s, reason: <_MultiThreadedRendezvous: StatusCode.UNAVAILABLE, internal: Milvus Proxy is not ready yet. please wait> In the log I see: [ERROR] [grpcclient/client.go:161] ["failed to get client address"] [error="find no available rootcoord, check rootcoord state"] [stack="github.com/milvus-io/milvus/internal/util/grpcclient.(*ClientBase[...]).connect I am having a similar issue, any luck on resolving this?
gharchive/issue
2023-07-17T19:18:31
2025-04-01T04:35:04.669736
{ "authors": [ "doddys", "prateek05" ], "repo": "milvus-io/milvus-lite", "url": "https://github.com/milvus-io/milvus-lite/issues/60", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2191528032
fix: use underscore for field name, "from" is keyword in python Related Issue: https://github.com/milvus-io/milvus-proto/issues/244 Related Milvus Issue: https://github.com/milvus-io/milvus/issues/30647 Related PR: #243 /approve /lgtm
gharchive/pull-request
2024-03-18T08:26:55
2025-04-01T04:35:04.671475
{ "authors": [ "chyezh", "czs007" ], "repo": "milvus-io/milvus-proto", "url": "https://github.com/milvus-io/milvus-proto/pull/261", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1195883907
tuple index out of range First of all, I would like to thank you for your great research. This study enabled me to analyze more data. I'm facing a problem now: when I try to run train_config.py, I see an error pointing at SubGNN.py line 820. Am I doing something wrong? I passed the same argv as in your README and modified the files you mentioned. Please let me know why it doesn't work. Thank you. Hi, same problem here. Did you find a solution? @westernsalt @etsoukanara I got the same error. Have you fixed it?
gharchive/issue
2022-04-07T11:04:33
2025-04-01T04:35:04.728079
{ "authors": [ "Isabellajhon", "etsoukanara", "westernsalt" ], "repo": "mims-harvard/SubGNN", "url": "https://github.com/mims-harvard/SubGNN/issues/24", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1920247312
Feature request: support actions spanning multiple HCL files targeting the same projects Currently, tfmigrate history mode allows users to keep track of which migrations have been applied and apply all unapplied migrations in sequence. However, this can result in tfmigrate plan errors whenever multiple tfmigrate HCL files target the same Terraform root module projects. For example, consider a tfmigrate plan invocation (using history mode) that targets the following unapplied migrations:

```hcl
# migration-1.hcl
migration "multi_state" "foo" {
  from_dir = "project-1"
  to_dir   = "project-2"

  actions = [
    "mv null.foo null.foo",
  ]
}
```

```hcl
# migration-2.hcl
migration "multi_state" "foo" {
  from_dir = "project-1"
  to_dir   = "project-2"

  actions = [
    "mv null.bar null.bar",
  ]
}
```

In this scenario, despite the migrations' validity -- and despite the fact that both project-1's and project-2's *.tf configuration reflects the desired post-migration end state -- tfmigrate plan fails with `terraform plan command returns unexpected diffs`, as it invokes tfmigrate plan against each individual migration HCL file serially (tfmigrate plan migration-1.hcl, tfmigrate plan migration-2.hcl). Alternatively, could it be reasonable for tfmigrate plan to have the capability of discovering and merging multiple migration HCL files in such scenarios, and performing a single tfmigrate plan (and thereby a single terraform plan) against a single, combined migration? Perhaps this seems like an unexpected use case. However, in my experience using tfmigrate to perform non-trivial large-scale migrations (> 1K resources, for example), circumstantial factors -- and the sheer volume of targeted resources -- often warrant codifying the migrations across multiple, distinct migration HCL files. Thank you for your proposal! The current implementation is limited to defining only one migration per file, but how about using a file boundary as a transaction boundary to group multiple migrations as an atomic transaction? I think having clear, declarative transaction boundaries on a file-by-file basis is better than implicitly merging migrations depending on the state of the migration history. In particular, with implicit merging it is not apparent to the reviewer from the code changes how the merge will take place, because it depends on the runtime state rather than on what is visible at review time. While it would be unavoidable to redesign the Migrator interface, it could support more generic use cases. For example, migrations splitting dir1 into dir2 and dir3 should be grouped into a single transaction. The transaction should track the state of all directories being affected and roll them back if something goes wrong. I don't fully understand what environment you are trying to migrate, but a migration of 10k lines is impossible to review anyway, even if you divide it into several files. If you are not writing 10k lines by hand but generating them with some script, why not review the script and generate the 10k lines in one file? @minamijoyo These are good questions. In fact, our migration HCL files are programmatically generated (and even programmatically validated) prior to tfmigrate apply-time.
However, we're facing some other unique constraints, hence pitching you on the idea outlined in this issue. However, I think you're ultimately right: our use case is unusual and caters to an unfortunate anti-pattern, and so I'll close this issue. Thank you for at least entertaining the discussion! :)
gharchive/issue
2023-09-30T12:53:06
2025-04-01T04:35:04.735000
{ "authors": [ "mdb", "minamijoyo" ], "repo": "minamijoyo/tfmigrate", "url": "https://github.com/minamijoyo/tfmigrate/issues/154", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1584373539
Problem: the .NET package v2.0 doesn't work under .NET 7.0 Hello @be-swarm, yes, there is a known bug with RestSharp (https://github.com/restsharp/RestSharp/issues/2000). We are going to publish a new version with their preview release, which fixes this bug.
gharchive/issue
2023-02-14T15:31:39
2025-04-01T04:35:04.782666
{ "authors": [ "Kvindee", "be-swarm" ], "repo": "mindee/mindee-api-dotnet", "url": "https://github.com/mindee/mindee-api-dotnet/issues/120", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2080841901
"Too many open files" error when making many mocked async requests Reproducible example that I think is minimal: import httpx from mocket.mockhttp import Entry from mocket import async_mocketize @pytest.mark.asyncio @async_mocketize async def test_many(): url = "http://httpbin.local/ip" Entry.single_register(Entry.GET, url, status=404) for i in range(200): async with httpx.AsyncClient() as client: response = await client.get(url) assert response.status_code == 404 With number of open files limited in order to make it easier to hit the limit: ulimit -n 100 This fails like so : .devenv/state/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py:1627: in connect_tcp await get_running_loop().create_connection( /nix/store/5k91mg4qjylxbfvrv748smfh51ppjq0g-python3-3.11.6/lib/python3.11/asyncio/base_events.py:1085: in create_connection raise exceptions[0] /nix/store/5k91mg4qjylxbfvrv748smfh51ppjq0g-python3-3.11.6/lib/python3.11/asyncio/base_events.py:1069: in create_connection sock = await self._connect_sock( /nix/store/5k91mg4qjylxbfvrv748smfh51ppjq0g-python3-3.11.6/lib/python3.11/asyncio/base_events.py:973: in _connect_sock await self.sock_connect(sock, address) /nix/store/5k91mg4qjylxbfvrv748smfh51ppjq0g-python3-3.11.6/lib/python3.11/asyncio/selector_events.py:632: in sock_connect self._sock_connect(fut, sock, address) /nix/store/5k91mg4qjylxbfvrv748smfh51ppjq0g-python3-3.11.6/lib/python3.11/asyncio/selector_events.py:640: in _sock_connect fd = sock.fileno() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ @staticmethod def fileno(): > Mocket.r_fd, Mocket.w_fd = os.pipe() E OSError: [Errno 24] Too many open files mocket/mocket.py:260: OSError Here is the new version which fixes this issue: https://pypi.org/project/mocket/3.12.3/
gharchive/issue
2024-01-14T18:22:48
2025-04-01T04:35:04.785299
{ "authors": [ "ento", "mindflayer" ], "repo": "mindflayer/python-mocket", "url": "https://github.com/mindflayer/python-mocket/issues/217", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
249843686
Removed split in SHA256 record As stated by RFC 4255 section 3.1.3: The message-digest algorithm is presumed to produce an opaque octet string output, which is placed as-is in the RDATA fingerprint field. And in section 3.2: The RDATA of the presentation format of the SSHFP resource record consists of two numbers (algorithm and fingerprint type) followed by the fingerprint itself, presented in hex, e.g.: host.example. SSHFP 2 1 123456789abcdef67890123456789abcdef67890 The use of mnemonics instead of numbers is not allowed. Using spaces would only make sense if the entire value were longer than 255 bytes, as mentioned in RFC 4408 section 3.1.3: As defined in [RFC1035] sections 3.3.14 and 3.3, a single text DNS record (either TXT or SPF RR types) can be composed of more than one string. If a published record contains multiple strings, then the record MUST be treated as if those strings are concatenated together without adding spaces. For example: IN TXT "v=spf1 .... first" "second string..." MUST be treated as equivalent to IN TXT "v=spf1 .... firstsecond string..." SPF or TXT records containing multiple strings are useful in constructing records that would exceed the 255-byte maximum length of a string within a single TXT or SPF RR record. Therefore I am removing that space here. Nice! Thanks for your detailed description and research.
gharchive/pull-request
2017-08-13T00:28:48
2025-04-01T04:35:04.789681
{ "authors": [ "Gunni", "mindfuckup" ], "repo": "mindfuckup/Scripts", "url": "https://github.com/mindfuckup/Scripts/pull/11", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1221709761
Responsive remote/external/CDN image I intend to work on this soon. Current use case: I copy/paste a Gutenberg page from one site to another. The markup references the original image, but the generated srcset attributes are wrong (because my render_block hooks consider the ID of files which are not present in the destination instance). My plan is to use get_timber_image_responsive_src (which is my current entry point into the library) to check whether a file is local or remote. My rough plan from here:
1. Timmy::get_image() would detect that it's an external/remote file URL and, if yes, return an instance of \Timmy\RemoteImage (a subclass of \Timmy\Image).
2. Timmy\RemoteImage would mostly override srcset(), which would check over HTTP for the possibly existing derivative images. If some happen to exist, cache the result of their (in)existence inside a transient, and generate the adequate srcset attribute. This assumes the size attribute passed to get_timber_image_responsive_src at the destination WP instance is equal to the one passed at the source WordPress instance (or at least that the resulting URL paths match).
3. Optional (for me): if the URL matches a known/supported CDN hostname, use its particular API to fetch the variations (Cloudflare cdn-cgi & co).
Your thoughts? (@gchtr ?)
gharchive/issue
2022-04-30T03:35:45
2025-04-01T04:35:04.796537
{ "authors": [ "drzraf", "gchtr" ], "repo": "mindkomm/timmy", "url": "https://github.com/mindkomm/timmy/issues/49", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2158321992
PXP-670: [CLI] Implement the AppSignal integration to the deployment status command Summary The PR also includes error handling for when the application config is not enabled or when the AppSignal integration is not enabled. Ticket: PXP-670 What's Changed Added [x] add wukong deployment status command [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: jk-gan. Needs approval from an approver in OWNERS. @jk-gan I got this error while trying to test the feature. I understand this PR does not cover the init logic, but I wonder how did you test it? I know how to get past this error. Now I got this: AppSignal doesn't recognize the latest deploy marker from the URL param, so I will use incident_marker=last for the link. @jk-gan can you match the actual UI with the mockup as closely as possible? At least I want to see the APM as the table title. For Prod, I think we can just show the name of the AppSignal environment? It will be clearer to developers about where we're pulling data from. [APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: jk-gan, onimsha. Needs approval from an approver in OWNERS.
gharchive/pull-request
2024-02-28T07:47:10
2025-04-01T04:35:04.822207
{ "authors": [ "jk-gan", "onimsha", "sauron-droid" ], "repo": "mindvalley/wukong-cli", "url": "https://github.com/mindvalley/wukong-cli/pull/198", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1329022428
🛑 MineVN Maven & Resource Pack is down In 098ea4b, MineVN Maven & Resource Pack (http://app.minevn.net/pack) was down: HTTP code: 0 Response time: 0 ms Resolved: MineVN Maven & Resource Pack is back up in 778913c.
gharchive/issue
2022-08-04T18:20:08
2025-04-01T04:35:04.893494
{ "authors": [ "minhh2792" ], "repo": "minhh2792/uptime", "url": "https://github.com/minhh2792/uptime/issues/105", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2627213835
[RELEASE] MinIO Client RELEASE.2024-10-29T15-34-59Z MinIO Client RELEASE.2024-10-29T15-34-59Z has changes that impact the docs: [ ] Adds new mc support top rpc subcommand (PR #5060) See also PR #5064 for RPC playback. [ ] Updates to mc support inspect (PR #5062) [x] Adds JSON output for global flag to mc pipe (PR #5065) Not doing anything with 5065. Previously mc pipe --json didn't do anything, now it does. That's a global flag, so nothing really to add anywhere.
gharchive/issue
2024-10-31T15:53:15
2025-04-01T04:35:04.941600
{ "authors": [ "djwfyi" ], "repo": "minio/docs", "url": "https://github.com/minio/docs/issues/1361", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2428696587
DOCS-1273: Operator 6.0.0 Deploy/Upgrade docs, removing Console references Addresses #1273 Summary This pass does three things: updates all tutorials related to Operator/Tenant deployment for Kustomize and Helm; removes references to Operator Console and updates to reference Kustomize/Helm wherever possible; slightly tidies up old or dangling references. This pass does not do these things: link out heavily to Kubernetes docs (for later); clean up organization (singleplat build handles this); address OpenShift, Rancher, etc. Staged: http://192.241.195.202:9000/staging/DOCS-1273/k8s/index.html I tested upgrading from 5.0.15 to 6.0.1 with Helm 3.15.3, to see how the CRD would change. I can see the change in the CRD, and the diff shows that it does appear to update. But online docs and internal discussion indicate that Helm does not deal with CRD updates cleanly, so I'm not sure what's going on here or how to validate Helm upgrades working as expected. That said, we can still push this forward to at least get the docs out, possibly leaving a note on the Helm Operator install docs stating that Helm does not handle CRD upgrades cleanly and that additional work may be required - though I still do not know what work. cc @cniackz @pjuarezd @dvaldivia @ramondeklein for clarity on the above CRDs are always updated on the Operator Helm chart update, @ravindk89. We don't store them in the /crd directory, precisely to ensure they are always updated - otherwise you get what you mention as "Helm does not handle CRD upgrades cleanly". Instead we store CRDs in /templates and always update them, as a regular Helm chart resource.
gharchive/pull-request
2024-07-25T00:26:21
2025-04-01T04:35:04.950348
{ "authors": [ "feorlen", "pjuarezd", "ravindk89" ], "repo": "minio/docs", "url": "https://github.com/minio/docs/pull/1284", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
853569983
Timeout every time I enable secure connection (https) Hey there! I always get a timeout error every time I try to upload something from an HTML form to my bucket. It says Full Error Messages: Get "https://_an address_/_my bucket name_/?location=": dial tcp _an ip address_: i/o timeout{}. But if I disable the secure option, it runs well. In case you want to know, I use IBM Cloud Storage.
You need to check if your backend supports TLS @mikaeru
Ahh, it's fixed now. I have no idea what was happening, so there's no more error like that
2021-04-08T15:04:36
2025-04-01T04:35:04.964798
{ "authors": [ "harshavardhana", "mikaeru" ], "repo": "minio/minio-go", "url": "https://github.com/minio/minio-go/issues/1476", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2137122848
Tuple comparison minimizes to MiniRust code that isn't well-formed. Minimizing the following program leads to a MiniRust program that isn't well-formed: fn main() { let x = (1, 0) == (0, 1); } This doesn't reproduce any more, not sure when exactly it got fixed.
gharchive/issue
2024-02-15T17:58:39
2025-04-01T04:35:05.023144
{ "authors": [ "RalfJung", "essickmango" ], "repo": "minirust/minirust", "url": "https://github.com/minirust/minirust/issues/157", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1067517588
Add publishing warning when CYA and confirmation pages are not in a form Trello Good stuff, thanks for putting in the acceptance tests. Can we add a more descriptive PR message please, and just a screen grab of what it looks like? I think we can DRY up the views a little bit: a single variable that holds the warning message, and then just checking if it is present before showing the warning div. Chat about it tomorrow.
gharchive/pull-request
2021-11-30T17:52:47
2025-04-01T04:35:05.040380
{ "authors": [ "brenetic", "njseeto" ], "repo": "ministryofjustice/fb-editor", "url": "https://github.com/ministryofjustice/fb-editor/pull/926", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1959357011
(APS-14) Hide withdrawn assessments Now assessments can be withdrawn, we need to hide them on the assessments and application listing pages. We're going to do this filtering on the frontend, so closing this
gharchive/pull-request
2023-10-24T14:12:51
2025-04-01T04:35:05.041229
{ "authors": [ "pezholio" ], "repo": "ministryofjustice/hmpps-approved-premises-api", "url": "https://github.com/ministryofjustice/hmpps-approved-premises-api/pull/1075", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2067202835
Update Owner tag on this repo User Story "owner" = "nac@digital.justice.gov.uk" We do not have access to this email address. It needs to be replaced by lanwifi-devops@digital.justice.gov.uk Value / Purpose Ensure that the codebase reflects the correct address for the team. Useful Contacts No response Additional Information No response Definition of Done [ ] Tag updated with correct value [ ] README has been updated [ ] Another team member has reviewed [ ] Tests are green https://dsdmoj.atlassian.net/jira/software/c/projects/ND/boards/1430/backlog?epics=visible&selectedIssue=ND-85&text=owner
gharchive/issue
2024-01-05T11:43:26
2025-04-01T04:35:05.044973
{ "authors": [ "NNavickas1", "juddin927" ], "repo": "ministryofjustice/network-access-control-infrastructure", "url": "https://github.com/ministryofjustice/network-access-control-infrastructure/issues/247", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1002085849
Bump iam_s3_replication_role from 1.0.0 to 2.0.0 Review https://github.com/ministryofjustice/aws-root-account/pull/266 Do we want to update? Is there any impact from the update? If there are no issues, carry out the update. Not required. Closed. Reopened as now resolved.
gharchive/issue
2021-09-21T09:06:14
2025-04-01T04:35:05.046764
{ "authors": [ "AntonyBishop", "jakemulley" ], "repo": "ministryofjustice/operations-engineering", "url": "https://github.com/ministryofjustice/operations-engineering/issues/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1617403663
Inactive Users repo: Add unit tests Background The Python code within the repo could do with a refactor and unit tests. This is the current process that needs to be improved / better automated:
1. Run the Dormant GH Users scripts on the MoJ and AP GH Orgs.
2. Get the dormant export file from Tony and add it to the repo.
3. Run the script with false.
4. Email all the users that are not active.
5. After a week or two, get another dormant export file from Tony.
6. Run the script with false to check for any new users.
7. Add their usernames to the allow list temporarily.
8. Run the script with true, to remove the users that haven't logged into GH.
9. Expect users to email afterwards to be added back into the GH Orgs.
Unit tests for github_service.py Unit tests for slack_service.py
gharchive/issue
2023-03-09T14:49:24
2025-04-01T04:35:05.048825
{ "authors": [ "NickWalt01" ], "repo": "ministryofjustice/operations-engineering", "url": "https://github.com/ministryofjustice/operations-engineering/issues/2240", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
589211657
Log reasons validation of code details failed Purpose Log failures when validating a code to add to an LPA so problems can be debugged/monitored/etc. Fixes UML-603 Approach It adds logging. Maybe it is too defensive? Checklist [ ] I have performed a self-review of my own code [ ] I have added relevant logging with appropriate levels to my code [ ] I have updated documentation (Confluence/GitHub wiki/tech debt doc) where relevant [ ] I have added tests to prove my work [ ] The product team have tested these changes Coverage increased (+0.2%) to 75.822% when pulling 7593dcf73c7b47c02dfe6a6022665c5866567929 on UML-603-log-failures-to-add-lpas into 5634af897a837ddd1f455a0817a859c49677cfd1 on master. Does that exception handler do what we want? The "if any problem occurs, then delete the map we just created" behaviour seems weird. Also, I hate that UUID loop; the whole point of UUIDs is that in practice you won't see a collision. I think it's because we don't know what those exceptions might be, hence the catch-all. This is a cleanup thing so we do not end up with orphaned records; not ideal, but at least it keeps the data consistent. If we can capture what the exceptions are, via logging, we could change this to be more specific and handle them more gracefully. Maybe a tech debt item for later?
gharchive/pull-request
2020-03-27T15:26:06
2025-04-01T04:35:05.054249
{ "authors": [ "coveralls", "hawx", "williamfalconeruk" ], "repo": "ministryofjustice/opg-use-an-lpa", "url": "https://github.com/ministryofjustice/opg-use-an-lpa/pull/282", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1239602618
Implement the genetic algorithm
- Implement it as a genetic algorithm class
- Apply it to the game
- Find the neural network's weight (w) values genetically
How many genes per generation?
- Decide the genotype
- Decide the initial population
- Compute each individual's fitness
- Perform selection
- Perform crossover
- Perform mutation
gharchive/issue
2022-05-18T08:21:07
2025-04-01T04:35:05.056338
{ "authors": [ "SOLokill", "minjune8506" ], "repo": "minjune8506/GeneticAvoidance", "url": "https://github.com/minjune8506/GeneticAvoidance/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2712242104
🛑 Matrix is down In 827a4e0, Matrix (https://matrix.mint.lgbt:8448/_matrix/federation/v1/version) was down: HTTP code: 0 Response time: 0 ms Resolved: Matrix is back up in d1026a9 after 47 minutes.
gharchive/issue
2024-12-02T14:57:02
2025-04-01T04:35:05.060764
{ "authors": [ "lunaisnotaboy" ], "repo": "mint-lgbt/status", "url": "https://github.com/mint-lgbt/status/issues/4260", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
275096719
add ocplib-endian to jbuild rules; test on 4.06 as well this fixes #30 now that (a) cstruct.3.2.0 is out (which does not depend on ocplib-endian) and (b) https://github.com/ocaml/opam-repository/pull/10774 is merged to restrict all released mirage-profile to cstruct {< "3.2.0"}, I can either have the newest cstruct (which I like to use) or mirage-logs (which depends on mirage-profile). can we have a release of mirage-profile, @talex5? I'm happy to do the release if you don't have the time to do it, just want to check whether there's anything outstanding I should wait for first!? Sure, go ahead!
gharchive/pull-request
2017-11-18T17:50:04
2025-04-01T04:35:05.070004
{ "authors": [ "hannesm", "talex5" ], "repo": "mirage/mirage-profile", "url": "https://github.com/mirage/mirage-profile/pull/31", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
924580736
Formatting problem with inline Go code
gharchive/issue
2021-06-18T06:18:23
2025-04-01T04:35:05.088138
{ "authors": [ "lhlyu" ], "repo": "misakafs/json-to-go", "url": "https://github.com/misakafs/json-to-go/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1466087300
How to manage services, such as with systemctl? Your CentOS 9 does not seem to recognize systemctl and service. By the way, do you want something else, like Ubuntu-WSL? How do I enable or manage services in this version? I get an error as soon as I use the command:

```
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
```

Create a file /etc/wsl.conf with content like this:

```ini
[boot]
systemd=true
```

And reboot WSL. It does not work!

```sh
mv /usr/bin/systemctl /usr/bin/systemctl.old
curl https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl.py > /usr/bin/systemctl
chmod +x /usr/bin/systemctl
```

The problem is solved:

```
PS C:\Users\merit> wsl -l -v
  NAME                   STATE           VERSION
* alpine                 Stopped         2
  Rocky9                 Stopped         2
  docker-desktop-data    Stopped         2
  kali-linux             Stopped         2
  docker-desktop         Stopped         2
  ubuntu1804             Stopped         2
  CentOS7                Stopped         2
PS C:\Users\merit> wsl --version
WSL version: 1.1.3.0
Kernel version: 5.15.90.1
WSLg version: 1.0.49
MSRDC version: 1.2.3770
Direct3D version: 1.608.2-61064218
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.22000.1574
```

Update WSL 2 itself, then the /etc/wsl.conf file will take effect. For an alternative under WSL 1, please see https://github.com/mishamosher/CentOS-WSL/issues/7
gharchive/issue
2022-11-28T09:28:48
2025-04-01T04:35:05.096348
{ "authors": [ "FlowerBirds", "dong-lufei", "felixserna", "git136975643", "mishamosher" ], "repo": "mishamosher/CentOS-WSL", "url": "https://github.com/mishamosher/CentOS-WSL/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
73283227
import of all data It seems we don't have all packages in our data. E.g., okular and libreoffice are not available. We should look for that data. There are no entries for them in /usr/share/app-info/xmls/fedora-22.xml.gz.
gharchive/issue
2015-05-05T11:11:31
2025-04-01T04:35:05.103034
{ "authors": [ "jmlich", "misli" ], "repo": "misli/fedora-software", "url": "https://github.com/misli/fedora-software/issues/9", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
125038948
allowing missing --- in yaml files Fixes issue #54 Thanks, one question: so there is no magic mark to distinguish a YAML file? I don't think so :( I'm going with Python's EAFP style here. If we can parse it as YAML, it must be YAML :) The JSON check seems good though, so I left it in. See http://stackoverflow.com/questions/11360858/what-is-the-eafp-principle-in-python If more formats are supported later, this may become more complex.
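A minimal illustration of the EAFP detection described above, assuming PyYAML; load_spec is an illustrative name, not pyswagger's actual function.

import json
import yaml  # PyYAML

def load_spec(text):
    # Check the stricter format first: valid JSON is also valid YAML,
    # so trying JSON up front keeps the two cases distinguishable.
    try:
        return json.loads(text)
    except ValueError:
        pass
    # EAFP: if it parses as YAML, treat it as YAML; no leading '---'
    # document marker is required.
    return yaml.safe_load(text)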
gharchive/pull-request
2016-01-05T19:59:07
2025-04-01T04:35:05.108839
{ "authors": [ "mission-liao", "pwfff" ], "repo": "mission-liao/pyswagger", "url": "https://github.com/mission-liao/pyswagger/pull/55", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
217894438
Need unique object for identifying self-events in 'add' methods Currently, addFeature and addCamera and other add methods generate events, but there is no way for the users to tell if it is their own event or someone else's. see #91 A new issue will be opened for tests
gharchive/issue
2017-03-29T14:27:53
2025-04-01T04:35:05.110223
{ "authors": [ "ee11cbb19052e40b07aac0ca060c23ee", "rsupnekar" ], "repo": "missioncommand/emp3-android", "url": "https://github.com/missioncommand/emp3-android/issues/108", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
178159858
replace dmd and color_dmd widgets with display widget Currently we have dmd and color_dmd widgets. These aren't really needed. We should just create a single widget called display which just shows another display on the current slide. It could have all the properties that are currently in the dmd widgets, like the dot screen, color controls, ability to render colors to mono, etc. This could possibly replace the slide_frame widget too. This is a longer term thing (after expo) since it will need a fair amount of testing and probably break configs. superseded by #186
gharchive/issue
2016-09-20T19:43:28
2025-04-01T04:35:05.111778
{ "authors": [ "toomanybrians" ], "repo": "missionpinball/mpf-mc", "url": "https://github.com/missionpinball/mpf-mc/issues/167", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2174187881
Unnecessary packaging dependency We just noted that the Mistral Python client included pandas as a hard dependency for one of the cookbook examples. It should be either a soft dependency or not a dependency at all. Thanks for the feedback. This issue was fixed.
gharchive/issue
2024-03-07T15:54:44
2025-04-01T04:35:05.140407
{ "authors": [ "leehanchung", "sophiamyang" ], "repo": "mistralai/client-python", "url": "https://github.com/mistralai/client-python/issues/64", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
262073170
Property dropdown positioning in Chrome 61 Reported by @kkashi01 on the dev forum: In the latest version of Chrome (Chrome 61), the dropdowns for properties in the designer do not adjust for the scroll position of the page. They seem to always appear at the location they would appear at if scrollTop = 0. This is likely an issue with GWT, and we may need to update/patch GWT to address it. Initially reported by @gabryk91 This needs to be high priority. This is also affected by the scrollX property. Looks like this was fixed in the GWT master.
gharchive/issue
2017-10-02T12:47:15
2025-04-01T04:35:05.142623
{ "authors": [ "ewpatton", "kkashi01", "moliata" ], "repo": "mit-cml/appinventor-sources", "url": "https://github.com/mit-cml/appinventor-sources/issues/944", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
54635766
New screen option: ShowTitleBar I added a new option on the screen, which allows App Inventor users to enable/disable ShowTitleBar. This method takes the form of a getter method in the block editor and a property in the designer property list. But there won't be any change in the designer viewer if you decide to change it. Future work: make the changes reflect in the designer viewer. We can't really implement the method which Dan (from Google) suggested - use Window.requestFeature() with FEATURE_NO_TITLE - because the property is called after setContentView, and this approach will crash the app. We can change it in the manifest, via the window theme, e.g. @android:style/Theme.Black.NoTitleBar, but this hides the title bar on all screens. I want to give maximum design freedom to App Inventor users, so I have proposed the following method. All the API calls exist since API level 1. The old commit is here - https://github.com/mit-cml/appinventor-sources/pull/185 @halatmit @jisqyv @afmckinney is this called the title bar or the action bar? I tried this and setting ShowTitleBar to false removes the title, but it does not remove the gray bar itself. Can you show me a demo of how it is supposed to work? I know you want to give people maximum control, but I suggest that the useful simple case is to either have no bars (fullscreen option) or to have both bars. This is called the title bar on old devices (<= 2.3) and the action bar on newer devices. Are you running on Lollipop? How about I do it at the manifest level, so it is one setting for all the screens? It is much easier and simpler that way. FullScreen in Android means a different thing: no title bar (action bar), no status bar, and no home key panel. Do you mean no title bar plus no status bar? If we have to choose, I suggest the modern nomenclature (action bar). But think about my suggestion to just have either both bars or none (fullscreen). I don't know about implementation. It would be good to have dynamic control, which I'm guessing means it cannot be done in the manifest. See if this runtime method works: http://stackoverflow.com/questions/2868047/fullscreen-activity-in-android Thanks for the pointer. There is no action bar in App Inventor, only a title bar. Dynamic control can't be done in the manifest. I already saw and tried the Stack Overflow approach; it doesn't really work in App Inventor. The call requestWindowFeature(Window.FEATURE_NO_TITLE) must be made before setContentView (within Form.onCreate()). If the ShowTitleBar property in the Designer had a setter that calls requestWindowFeature(Window.FEATURE_NO_TITLE), it would crash the app, because that method can't be called after setContentView. The approach I am using now seems to have a compatibility issue with Lollipop, which I didn't test. Did you try anything lower than 5.0 to see if that works? @halatmit The emulator I tried it on is an old system, pre-Lollipop. It's the standard emulator we ship with App Inventor (Gingerbread, or earlier?). Let's find a dynamic approach that works, if possible. And is there a good reason to have separate properties rather than just fullscreen vs. regular? @halatmit I verified this, and this approach doesn't work on Lollipop. I will try to finish this pull request next week. "Title bar" is obsolete. We should be using "action bar" and making sure this operation is consistent with Android design practices. ActionBar unfortunately doesn't work with API 3. I believe it's API 11. At this point, I don't see a clean solution to set it dynamically. Given the fact that we don't have an action bar in App Inventor (still a title bar), I decided to move this setting to the manifest. ShowTitleBar will be an option in the Screen1 properties. We should talk with the Android people about suggestions for handling this. Let's not build in things that will be obsolete with Lollipop or future versions. @halatmit What do you mean? Back in December, we had emails with Dan from Google. He recommends doing it in the manifest. The title bar is in App Inventor; the action bar is NOT. Doing it in the manifest (setting the right theme) just involves checking the Android version: theme A for pre-Material, and theme B for Material or later. I'm getting confused. Do modern (Lollipop) Android apps have title bars? I thought they did not. Let's talk through the complete design a little before you go ahead. Also, see here as a possible alternative: http://www.androidsnippets.com/how-to-make-an-activity-fullscreen I double-checked again. The previous solution works on Android 5.0, Lollipop, as well. Can you try the attached apk and provide comments? https://www.dropbox.com/s/azks3y9sww1ldnq/AndroidTitleBar.apk?dl=0 If the API 4 code is merged, we can start using the Support Library, which gives access to ActionBar. That could solve the compatibility issue here. @halatmit Try the new code again (for ShowTitleBar)? If you prefer, use the following aia and apk for testing: https://www.dropbox.com/sh/nuvwd6snrfra1oq/AAAPfN11WCwjt-70Du_2w-_ua?dl=0 Thanks, Wei @halatmit @jisqyv This is updated and should be ready to review and merge. From the last conversation, we agreed NOT to have a special ShowTitleBar property; instead, we are going to have a checkbox right next to the Title property, and this checkbox will serve the ShowTitleBar function. The updated aia and apk files are here - https://www.dropbox.com/sh/nuvwd6snrfra1oq/AAAPfN11WCwjt-70Du_2w-_ua?dl=0 We agreed earlier to have a "checkbox" right next to the Title property in the designer; however, this is not doable because GWT doesn't allow me to overload the method in the designer - Title(boolean) / Title(string). The error message is the following - Inconsistent types java.lang.String and boolean for property Title in component Form. Instead, I renamed the method to TitleVisible, which I hope is consistent with the Title property. In the designer, the Title property should be next to TitleVisible. Works fine. I also tested it on 2.2.2 and it works there. Do you want to add the translations for TitleVisible? @weihuali0509 Also check on whitespace issues in the code. @halatmit @jisqyv Added the translations. @jisqyv Can you point out the issues with the whitespace? @weihuali0509 @halatmit Never mind about the whitespace if you cannot find it. I'll take care of it when I merge the change (likely today or tomorrow, assuming my tests work). @halatmit @jisqyv I believe I have cleaned up all the whitespace. LGTM @weihuali0509 @jisqyv I think this needs some more discussion. I don't like the idea of not being able to use the action bar as we go forward. Will this change prevent use of the action bar? @halatmit @weihuali0509 I have verified that this PR does not conflict with the work on MenuItems. We should merge it. (I meant to do that a few days ago!) @halatmit @weihuali0509 MERGED into the ucr branch for the next component release.
gharchive/pull-request
2015-01-16T23:00:08
2025-04-01T04:35:05.195135
{ "authors": [ "halatmit", "jisqyv", "josmas", "kkashi01", "weihuali0509" ], "repo": "mit-cml/appinventor-sources", "url": "https://github.com/mit-cml/appinventor-sources/pull/222", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1315708734
Can you open source the trained network for BEVfusion-e? The network for BEVfusion-e is not published; would it be possible to make it available to the open source community? We don't expect it to be released in the near future.
gharchive/issue
2022-07-23T17:33:22
2025-04-01T04:35:05.200225
{ "authors": [ "BaiLiping", "kentang-mit" ], "repo": "mit-han-lab/bevfusion", "url": "https://github.com/mit-han-lab/bevfusion/issues/85", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
639341612
about the shift module inplace output I find that the output of inplace shift is different from the normal shift. Is that right? Hi, the current inplace shift has some bug due to out-of-order execution (we believe). Therefore, we may need a special design to take care of that. When I run the testing code in temporal_shift.py, the output of inplace shift and default shift is consistent. However, the final output of the network between inplace=false and inplace=true is totally different. Is this also due to out-of-order execution? Hi, I believe so. The test case fails to spot the bug, which was why I released the code before. But in real cases, it still has this kind of bug. I haven't profiled on CPU, but the bug exists on GPU.
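For context, a minimal out-of-place temporal shift in PyTorch, following the common TSM formulation; this is an illustrative sketch, not the repo's exact implementation. The in-place variant writes into x itself, which is where the suspected out-of-order execution on GPU can bite.

import torch

def temporal_shift(x, n_segment, fold_div=8):
    # x: (N*T, C, H, W) activations, T = n_segment frames per clip.
    nt, c, h, w = x.size()
    x = x.view(nt // n_segment, n_segment, c, h, w)
    fold = c // fold_div
    out = torch.zeros_like(x)  # out-of-place: no aliasing with the input
    out[:, :-1, :fold] = x[:, 1:, :fold]                  # shift left in time
    out[:, 1:, fold:2 * fold] = x[:, :-1, fold:2 * fold]  # shift right in time
    out[:, :, 2 * fold:] = x[:, :, 2 * fold:]             # remaining channels unchanged
    return out.view(nt, c, h, w)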
gharchive/issue
2020-06-16T03:29:23
2025-04-01T04:35:05.202649
{ "authors": [ "Usernamezhx", "applesleam", "tonylins" ], "repo": "mit-han-lab/temporal-shift-module", "url": "https://github.com/mit-han-lab/temporal-shift-module/issues/105", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
777449708
Does torchsparse support 16-bit training? Dear All, I am currently adding PyTorch Lightning support to TorchPoints3d. If TorchSparse were to support 16-bit and be scriptable, we could have a pretty decent speed-up, and people would have a strong reason to use torchsparse for production. Here is the PR: https://github.com/nicolas-chaulet/torch-points3d/pull/528 Also, here are updates from our previous release. We support ME and torchsparse backends + possible conversion between them. Best regards, Thomas Chaton. Thanks for the great work! The current version does not support FP16, but we are actively working on incorporating FP16 into the library. We will let you know once we have some results. @zhijian-liu Thanks for your amazing work! Any updates on FP16 support in the library? It would be much faster if we could use model.half() and inputs.half() like in PyTorch.
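For context, the user-side API being asked for, shown with plain dense PyTorch (torchsparse did not support this at the time, and its sparse tensor types are omitted here):

import torch

model = torch.nn.Linear(16, 4).cuda()

# Full half precision: convert weights once, feed FP16 inputs.
model_fp16 = model.half()
y = model_fp16(torch.randn(8, 16, device="cuda").half())

# Mixed precision: keep FP32 weights and let autocast pick per-op dtypes.
model_fp32 = torch.nn.Linear(16, 4).cuda()
with torch.cuda.amp.autocast():
    y = model_fp32(torch.randn(8, 16, device="cuda"))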
gharchive/issue
2021-01-02T12:02:34
2025-04-01T04:35:05.205646
{ "authors": [ "YoushaaMurhij", "tchaton", "zhijian-liu" ], "repo": "mit-han-lab/torchsparse", "url": "https://github.com/mit-han-lab/torchsparse/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
542809066
How to grab tcp at the same time and forward http to other tools I need client -> mitmproxy -> burp -> server, but I don't want to lose TCP traffic, so what should I do? tcp_message (flow: tcp.TCPMessage) does not correctly forward HTTP. Thanks for getting in touch. We're generally happy to help out, but can I please ask you to use StackOverflow for all support and how-to questions? You can post the link to your question here so that people can follow up. Thanks!
gharchive/issue
2019-12-27T09:22:20
2025-04-01T04:35:05.217602
{ "authors": [ "ldbfpiaoran", "mhils" ], "repo": "mitmproxy/mitmproxy", "url": "https://github.com/mitmproxy/mitmproxy/issues/3759", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1371574520
Certificates: display course dates As a learner, I'd like my certificate to include dates so that when I share it, people will know when I took the course. Designs and Mockups See, for example, https://rc.xpro.mit.edu/certificate/c9819368-40c6-4ff4-b53c-7241348a09de/ Acceptance Criteria: [ ] The dates should be displayed even if there are no CEUs set [ ] On course certificates, the dates should be the start and end dates of the course. Out of Scope Program certificates should display the earliest start date of all the course runs the learner passed and the latest end date of all the course runs the learner passed Question: instead of using the end date of the course, can the 2nd date be the date the certificate was created? (not updated) For context, I'm thinking about how to handle self-paced courses. @aliraza-abbasi we have the same issue on xPRO. @pdpinch please let me know which issue you are facing. I'll check it on priority. @aliraza-abbasi it can wait until morning, but it is important: https://github.com/mitodl/mitxpro/issues/2420 Let me know which date should be displayed on the certificate; I'll add it. Also, can I get the design showing where to add the start and end dates? I'm looking into it
gharchive/issue
2022-09-13T14:44:33
2025-04-01T04:35:05.236092
{ "authors": [ "aliraza-abbasi", "pdpinch" ], "repo": "mitodl/mitxonline", "url": "https://github.com/mitodl/mitxonline/issues/967", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1349883269
Remove disable_compat_19 from consul configs Description Removing the deprecated disable_compat_19 field from the Consul config. Motivation and Context Consul won't start up anymore with this field set. https://www.consul.io/docs/upgrading/upgrade-specific#1-9-telemetry-compatibility How Has This Been Tested? Types of changes [X] Bug fix (non-breaking change which fixes an issue) [ ] New feature (non-breaking change which adds functionality) [ ] Enhancement (improves on existing behavior) [ ] Breaking change (fix or feature that would cause existing functionality to change) Checklist: [X] My code follows the code style of this project. (Did you install and run the pre-commit hooks?) [ ] My change requires a change to the documentation. [ ] I have updated the documentation accordingly. :+1:
gharchive/pull-request
2022-08-24T18:50:41
2025-04-01T04:35:05.247856
{ "authors": [ "Ardiea", "blarghmatey" ], "repo": "mitodl/ol-infrastructure", "url": "https://github.com/mitodl/ol-infrastructure/pull/1024", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2526174865
🛑 VividCraft is down In e50056a, VividCraft (https://vividcraft.com.au) was down: HTTP code: 530 Response time: 266 ms Resolved: VividCraft is back up in 08db8cf after 6 minutes.
gharchive/issue
2024-09-14T09:50:46
2025-04-01T04:35:05.273415
{ "authors": [ "mitraNBIT" ], "repo": "mitraNBIT/Uptime", "url": "https://github.com/mitraNBIT/Uptime/issues/26", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2413927849
Document what a "concerning" contributor means If possible, please define what a "concerning" contributor test means. E.g.: Failing - has concerning contributors 733 found, 0 permitted Both - affiliation count 2 Which contributors were flagged, and why were they (all) flagged? What affiliations were concerning? Thanks for asking! This is documented on the Hipcheck website as part of the Complete Guide to Hipcheck.
gharchive/issue
2024-07-17T15:29:50
2025-04-01T04:35:05.274910
{ "authors": [ "alilleybrinker", "japharl" ], "repo": "mitre/hipcheck", "url": "https://github.com/mitre/hipcheck/issues/211", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1957268106
Add missing methods for UTF-8 strings
Add icupy.icu.Collator.compare_utf8(source: bytes, target: bytes)
Add icupy.icu.IDNA.label_to_ascii_utf8(label: bytes, info: IDNAInfo)
Add icupy.icu.IDNA.label_to_unicode_utf8(label: bytes, info: IDNAInfo)
Add icupy.icu.IDNA.name_to_ascii_utf8(name: bytes, info: IDNAInfo)
Add icupy.icu.IDNA.name_to_unicode_utf8(name: bytes, info: IDNAInfo)
Add icupy.icu.Normalizer2.is_normalized_utf8(s: bytes)
Add icupy.icu.Normalizer2.normalize_utf8(options: int, src: bytes)
Add icupy.icu.UnicodeSet.span_utf8(b: bytes, length: int, span_condition: USetSpanCondition)
Codecov Report Merging #56 (ba75757) into main (839145e) will decrease coverage by 0.03%. The diff coverage is 81.01%.
@@            Coverage Diff             @@
##             main      #56      +/-   ##
==========================================
- Coverage   86.31%   86.29%   -0.03%
==========================================
  Files         146      146
  Lines       16210    16289     +79
==========================================
+ Hits        13992    14056     +64
- Misses       2218     2233     +15
Files | Coverage Δ
src/uniset.cpp | 83.45% <83.33%> (-0.01%) :arrow_down:
src/tblcoll.cpp | 81.23% <77.77%> (-0.10%) :arrow_down:
src/normalizer2.cpp | 80.37% <80.00%> (-0.06%) :arrow_down:
src/idna.cpp | 87.50% <81.81%> (-2.72%) :arrow_down:
:mega: We're building smart automated test selection to slash your CI/CD build times. Learn more
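A quick usage sketch for one of the new methods, with the signature taken from the list above; the get_nfc_instance factory is assumed to mirror ICU4C's Normalizer2::getNFCInstance, and the bytes return type is an assumption as well.

from icupy.icu import Normalizer2

nfc = Normalizer2.get_nfc_instance()  # assumed factory, mirroring ICU4C
src = "e\u0301tude".encode("utf-8")   # "étude" with a combining accent
out = nfc.normalize_utf8(0, src)      # options=0; assumed to return bytes
print(out.decode("utf-8"))            # expected: the precomposed "étude"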
gharchive/pull-request
2023-10-23T14:09:23
2025-04-01T04:35:05.293648
{ "authors": [ "codecov-commenter", "miute" ], "repo": "miute/icupy", "url": "https://github.com/miute/icupy/pull/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
709541396
Upgrade npm dependencies & lint code A fresh clone and an npm i throw the following error:
> tsc -p src
src/sns-server.ts(235,32): error TS2345: Argument of type '{}' is not assignable to parameter of type 'ArrayLike<{}>'. Property 'length' is missing in type '{}'.
../../../node_modules/@types/yargs/index.d.ts(225,106): error TS2304: Cannot find name 'unknown'.
../../../node_modules/@types/yargs/index.d.ts(238,112): error TS2304: Cannot find name 'unknown'.
../../../node_modules/@types/yargs/index.d.ts(600,28): error TS2304: Cannot find name 'unknown'.
../../../node_modules/@types/yargs/index.d.ts(762,9): error TS2304: Cannot find name 'unknown'.
npm ERR! code ELIFECYCLE
npm ERR! errno 2
npm ERR! serverless-offline-sns@0.68.0 build: `tsc -p src`
npm ERR! Exit status 2
npm ERR!
npm ERR! Failed at the serverless-offline-sns@0.68.0 build script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
This PR upgrades all npm dependencies to fix the issue and closes #106, #96, #95, #94. Thank you. I have changed a lot in the repo, so I added these changes myself. I have also added you as a contributor to the Readme. Thanks for picking these things up.
gharchive/pull-request
2020-09-26T13:52:38
2025-04-01T04:35:05.346468
{ "authors": [ "Alexandre-io", "mj1618" ], "repo": "mj1618/serverless-offline-sns", "url": "https://github.com/mj1618/serverless-offline-sns/pull/108", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
382620105
composer error - The requested package mjaschen/phpgeo No version set (parsed as 1.0.0) is satisfiable by mjaschen/phpgeo[No version set (parsed as 1.0.0)] but these conflict with your requirements or minimum-stability. Hi, can you explain what you're doing? It'd be helpful if you share your composer.json and the command you're trying to run. I am trying to install the library using Composer: composer require mjaschen/phpgeo. My composer.json is the same as in the library. I have PHP 7.0. { "name": "mjaschen/phpgeo", "description": "Simple Geo Library", "keywords": [ "distance", "area", "coordinate", "geo", "gis", "bounds", "ellipsoid", "calculation", "polyline", "polygon", "geofence", "simplify", "length", "vincenty", "haversine", "bearing", "projection", "gps", "earth", "track", "point" ], "homepage": "https://phpgeo.marcusjaschen.de/", "type": "library", "license": "MIT", "authors": [ { "name": "Marcus Jaschen", "email": "mjaschen@gmail.com", "homepage": "https://www.marcusjaschen.de/" } ], "support" : { "email" : "mjaschen@gmail.com" }, "require": { "php": ">=7.0" }, "autoload": { "psr-0": { "Location": "src/" } }, "require-dev": { "phpunit/phpunit": "~6.0", "vimeo/psalm": "^1.0", "jakub-onderka/php-parallel-lint": "^1.0", "squizlabs/php_codesniffer": "^3.2", "phpmd/phpmd": "^2.6" }, "scripts": { "ci:lint": "./vendor/bin/parallel-lint src", "ci:psalm": "./vendor/bin/psalm", "ci:sniff": "./vendor/bin/phpcs --standard=codesniffer_rules.xml src || true", "ci:coding-standards": "./vendor/bin/phpmd src text cleancode,codesize,design,naming,unusedcode || true", "ci:tests": "./vendor/bin/phpunit tests/", "ci:static": [ "@ci:lint", "@ci:psalm", "@ci:sniff", "@ci:coding-standards" ], "ci:dynamic": [ "@ci:tests" ], "ci": [ "@ci:static", "@ci:dynamic" ] } } IMHO, it doesn't make a lot of sense to use the library's composer.json for your project. Or is there a reason why you'd like to install phpgeo as a requirement to phpgeo? I bet everything will work fine if you use a clean composer.json (or no composer.json at all) and then run composer require mjaschen/phpgeo. It's working fine now. Thanks.
gharchive/issue
2018-11-20T11:28:37
2025-04-01T04:35:05.371598
{ "authors": [ "mjaschen", "tamer-dahdul" ], "repo": "mjaschen/phpgeo", "url": "https://github.com/mjaschen/phpgeo/issues/41", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
180560917
Border attribute for mj-section or mj-container Hi, I've been using this framework for a long time and I love this framework so much. Is it possible to use a border attribute on the mj-section / mj-container? I have read the documentation, but it said it is not possible. Hope it will be added in the next update. Thank you so much Border on mj-section & mj-column is coming in the next major release 3.0.0. https://github.com/mjmlio/mjml/pull/321 Hi there, there's already an issue on that, so I close this one: #92 I don't know if we can have border on container atm, due to how hard it is to have border on container without breaking the responsive.
gharchive/issue
2016-10-03T04:23:59
2025-04-01T04:35:05.392360
{ "authors": [ "afdallah-war", "dalefish", "iRyusa" ], "repo": "mjmlio/mjml", "url": "https://github.com/mjmlio/mjml/issues/403", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2210925502
Data not fetched for Global Mean Temperature I tried to create a plot of Mean Temperature today, but it does not work. First I chose the variable, which changed the options I had in the other dropdowns. I always chose those accordingly. However, there is no data displayed. Hi @smartie2076, Thank you for the report. As you can see, the data is displayed in orange, but constant at 0°C, which seems to be an issue with the data that was uploaded. This is not an issue related to Future Sight directly, so I will close this issue in a couple of days, but I will pass the word to the relevant people to let them know about this data issue. Again, thanks for your feedback. Cheers, MJ The PRIMES 2022 model does not model the temperature, so this data is an artifact of the upload and can be safely ignored. In the future, we will not upload this data. Sorry for the confusion! I will close the issue then. Hi @mjothy! This does not only apply to PRIMES, I think - it seems like temperature is a possible variable. Here is another model where no data is available, and also no 0 stored. So isn't this a DB issue, if your dropdowns offer the option but there is no content? And I found a working combination. Hi! For PRIMES, the issue was that the modellers uploaded data for temperature (but a 0 line, because they don't actually have that data). => You can see the orange line following the x axis on your first screenshot For your new screenshots, concerning IMAGE, that does seem like a bug... You should not be able to select options in the dropdown if there's no data. I'll reopen for this then. (Sorry for the unsightly posts above @mjothy, seems like the GitHub app is not really good at adding images and post editing o.O) Hello @smartie2076, Thank you for your comments. Could you please share more details about how you get the issue of no datapoints found? There is a possibility of encountering the same issue in the following scenario: Selecting in models: REMIND 2.1, IMAGE 3.2 Selection in scenarios: DIAG-C400-LIN-HighEff After that, we deselect REMIND 2.1 from the models list. This action will keep the selection options in the other dropdown lists (no data filter applied). To refresh the filtered data in the dropdown lists, you should open the list to retrieve the new filtered values. Thank you in advance for your time to respond. Sure, this is how I get there: 1. https://future-sight.ecemf.eu/ -> Create new Dashboard 2. Focus on Variables / Global Mean 3. Editing selected block 4. Choose variables from dropdown list / Global Mean -> Click somewhere else on the screen 5. Choose regions from dropdown list / World -> Click somewhere else on the screen 6. Choose models from dropdown list / IMAGE 3.2 -> Click directly on the scenarios -> the scenario list dropdown does not change -> I can choose scenarios that are not actually available for IMAGE 3.2. If I click somewhere else on the screen after selecting IMAGE 3.2, then the dropdown list updates. Ergo: the issue appears when people want to be fast and click onto the next dropdown list before "confirming the selection of the current dropdown by clicking somewhere on the screen". Maybe add an "ok" button to avoid this? Hello @smartie2076, Thank you for your clarification.
Certainly, the bug appears when we try to quickly select data. Our filtering system is submitted when we close the list options. This action causes filtering in other dropdown lists below. For that reason, when we open a dropdown list without actually closing the current working dropdown list, it returns the old list because the selected options are not submitted. This user action flow is actually interesting, as many users will try to select data as quickly as they can. So, we will try to add a bug fix to handle this case in the next release. Thank you, and feel free to provide any suggestions or report other bugs.
gharchive/issue
2024-03-27T14:15:07
2025-04-01T04:35:05.406024
{ "authors": [ "mery-19", "mjothy", "smartie2076" ], "repo": "mjothy/future-sight", "url": "https://github.com/mjothy/future-sight/issues/7", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2763775640
fix: Correct image format selection When setting an output filename, figure.write_image is passed an open file object, and so it would never select a format based on the filename extension. Explicitly set a format when none has been selected, and don't try to second-guess the user when a format has been passed in that conflicts with the filename. Fixes #24 Codecov Report All modified and coverable lines are covered by tests :white_check_mark: Project coverage is 100.00%. Comparing base (974adaf) to head (2bfa90e). Additional details and impacted files
@@             Coverage Diff             @@
##               main       #27    +/-   ##
=========================================
  Coverage    100.00%   100.00%
=========================================
  Files             4         4
  Lines           246       251     +5
  Branches         16        17     +1
=========================================
+ Hits            246       251     +5
:umbrella: View full report in Codecov by Sentry. :loudspeaker: Have feedback on the report? Share it here.
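A sketch of the fix's logic; resolve_format is an illustrative helper, not the actual code in the PR, though plotly's write_image does accept an explicit format argument.

from pathlib import Path

DEFAULT_FORMAT = "png"

def resolve_format(filename: str, requested: str | None) -> str:
    # figure.write_image receives an open file object, so it cannot infer
    # the format itself; derive it from the filename suffix instead.
    if requested:
        # Respect an explicit format, even if it conflicts with the
        # filename extension; don't second-guess the user.
        return requested
    return Path(filename).suffix.lstrip(".").lower() or DEFAULT_FORMAT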
gharchive/pull-request
2024-12-30T22:58:38
2025-04-01T04:35:05.410845
{ "authors": [ "codecov-commenter", "mjpieters" ], "repo": "mjpieters/pyright-analysis", "url": "https://github.com/mjpieters/pyright-analysis/pull/27", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2459829097
Should BLEU score be dependent on the order of the sequences? Hey! Using sacrebleu to evaluate a model for a project of mine, and just noticed this:
bleu = sacrebleu.corpus_bleu(decoded_cands, decoded_refs).score
randper = torch.randperm(len(decoded_refs))
decoded_refs = [decoded_refs[i] for i in randper]
decoded_cands = [decoded_cands[i] for i in randper]
bleu_rand0 = sacrebleu.corpus_bleu(decoded_cands, decoded_refs).score
randper = range(len(decoded_refs))
decoded_refs = [decoded_refs[i] for i in randper]
decoded_cands = [decoded_cands[i] for i in randper]
bleu_rand1 = sacrebleu.corpus_bleu(decoded_cands, decoded_refs).score
print("bleu 0: ", bleu, "bleu 1: ", bleu_rand0, "bleu 2: ", bleu_rand1)
Here, the order of the sentences makes the BLEU score change. bleu is identical to bleu_rand1 but differs drastically from bleu_rand0 (randomly permuted). I am not an expert in this, but what is the reason it can vary so much (some permutations result in a score of about 20 and others in a score in the 60s)? Am I doing something wrong? Thanks in advance! If you have a single reference translation for each sentence, you should use corpus_bleu(decoded_cands, [decoded_refs]) instead of corpus_bleu(decoded_cands, decoded_refs). Otherwise, you are treating each character in decoded_refs as a (single-character) sentence, so the BLEU scores are meaningless.
See https://github.com/mjpost/sacrebleu/tree/master#using-sacrebleu-from-python, which shows a 2-reference example:
refs = [
    # First set of references
    ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
    # Second set of references
    ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],
]
sys = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
from sacrebleu.metrics import BLEU
bleu = BLEU()
bleu.corpus_score(sys, refs)
where the last three lines are equivalent (unless the language is zh, ja or ko) to the legacy way you use:
import sacrebleu
sacrebleu.corpus_bleu(sys, refs)
I admit corpus_bleu(hypotheses, references) may be misleading because "hypotheses" means a single hypothesis for each sentence in the document, while "references" means "A sequence of reference documents with document being defined as a sequence of reference strings" (according to the documentation). This is also reflected in the method parameters signature (hypotheses: Sequence[str], references: Sequence[Sequence[str]]). Unfortunately, both [["sent 1", "sent 2"]] and ["sent 1", "sent 2"] and even a single string "sent 1" pass the Sequence[Sequence[str]] type check (I am not sure why). Maybe we could add an explicit check for this common misuse. Oh, got it. Yes, I was using it wrong. Thanks for the help :)
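A minimal end-to-end check of the fix, reusing the names from the question (assuming sacrebleu is installed):

import random
import sacrebleu

decoded_cands = ["the cat sat on the mat", "dogs bark loudly"]
decoded_refs = ["the cat sat on a mat", "dogs bark very loudly"]

# Correct call: one reference *set*, so wrap the single-reference list once.
base = sacrebleu.corpus_bleu(decoded_cands, [decoded_refs]).score

perm = random.sample(range(len(decoded_refs)), len(decoded_refs))
shuffled = sacrebleu.corpus_bleu(
    [decoded_cands[i] for i in perm],
    [[decoded_refs[i] for i in perm]],
).score

assert abs(base - shuffled) < 1e-9  # corpus BLEU no longer depends on order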
gharchive/issue
2024-08-12T00:17:04
2025-04-01T04:35:05.419682
{ "authors": [ "davidgonmar", "martinpopel" ], "repo": "mjpost/sacrebleu", "url": "https://github.com/mjpost/sacrebleu/issues/272", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1570647386
Custom Dataset based training image channel color Hi, I am confused about the image channel color when doing custom dataset-based training, and I really would like to clarify that point. As you have stated in the <README_TRAIN.md> document, we are able to check our training samples folder, which is <RUN_DIR>/training_samples, to understand whether our image channel order is correct or not. When I check the image saved in that folder, it looks like RGB order, because if I read that image with the cv2 module and do not convert it from BGR to RGB (since cv2 reads images in BGR order), the result with matplotlib.pyplot imshow shows my image in BGR order. At the same time, if I read that image with the PIL module and then plot it with the same matplotlib.pyplot imshow, it shows RGB order. So, should we assume that if the image in the <RUN_DIR>/training_samples folder is in RGB order, then that image's color channel order is correct? Thanks Thank you for the question. The easiest way to think about it is that we use cv2.imread() to load images during evaluation. So this means that the loaded numpy array is assumed to be in BGR channel order. When you prepare the CustomImageFolderDataset, you have the option of swap_color_channel. Using this, just make sure that the sample is in BGR channel order. The easiest way to check it is by saving the sample with cv2.imwrite(path, np.asarray(sample)) and making sure that the saved image file looks okay. Does that clarify your question? It's completely clear right now. Thanks for your explanation.
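The suggested check as a small script; a sketch where sample stands in for one item returned by your CustomImageFolderDataset.

import cv2
import numpy as np
from PIL import Image

sample = cv2.imread("face.jpg")  # placeholder; use a real dataset sample

arr = np.asarray(sample)

# 1) Save with cv2 (which expects BGR) and eyeball the file.
cv2.imwrite("channel_check.png", arr)

# 2) Cross-check with PIL: PIL treats arrays as RGB, so the raw view of a
#    BGR array should look color-swapped, while the converted one looks right.
Image.fromarray(arr).save("raw_view.png")
Image.fromarray(cv2.cvtColor(arr, cv2.COLOR_BGR2RGB)).save("converted_view.png")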
gharchive/issue
2023-02-04T00:07:01
2025-04-01T04:35:05.442189
{ "authors": [ "ErolCitak", "mk-minchul" ], "repo": "mk-minchul/AdaFace", "url": "https://github.com/mk-minchul/AdaFace/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
494176689
Biundo/kafka @mkaufmaner Forgive this clumsy pull request. I didn't mean to drag in all the commits since you created your branch. I'm not a super git expert, but if you welcome the suggestions and would prefer I try to package them up as a diff only against your branch, I'll go back and do that. Also, forgive me just jumping in with suggestions... I've been doing a lot of work on the Nest docs -- both some original writing, and a lot of stylistic cleanup. I thought I'd offer my input if you would like it. In addition to the PR, I noticed a couple of other things in reading the docs. Here are those observations: The second code example (approx. line 106) uses the type ClientKafka. I wondered whether you might have meant to say KafkaClient here? In the text just below that same code example, it says "There is a small difference to the previous examples" and mentions ClientProxy, but I didn't see any other references to ClientProxy, so I wonder if this might be confusing? In the section describing how partitions are rebalanced, the sentence However, using this partitioning method, a new consumer could join the consumer group and fall where within the sorted list of consumer names because the consumer name is randomly set on application launch. was a little confusing to me. I wasn't sure what the fall where within part was referring to. @johnbiundo No need for apologies, your input and changes are appreciated! I merged your changes and elaborated further on the Message Pattern. Also, all references to KafkaClient were changed to ClientKafka, was a bit clumsy there. Too much context switching on my side as of recent so thanks again for the review! Cool, glad it helped! Looking forward to using this feature. Very welcome addition to Nest!
gharchive/pull-request
2019-09-16T17:19:28
2025-04-01T04:35:05.445991
{ "authors": [ "johnbiundo", "mkaufmaner" ], "repo": "mkaufmaner/docs.nestjs.com", "url": "https://github.com/mkaufmaner/docs.nestjs.com/pull/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
606738203
C handler Is your feature request related to a problem? Please describe. Opening for visibility. Describe the solution you'd like A C handler. For extensions written in C. Additional context Doxygen? Do I understand correctly that this issue is for supporting docstrings of Python C extensions, e.g. compiled with Cython? I've been wondering about how to best support that with mkdocstrings. For the Cython use-case, I think a C handler is tricky because the docstrings are placed in strangely named C variables. It might be easier to import the extension type in Python and print the __doc__ attribute? Just thought I'd throw this out there for anyone passing by: doxybook2 (actively developed) could be used instead of this plugin with doxygen's XML output. Notice there is a Python lib called "doxybook" on PyPI, but it has gone stale since doxybook2 was developed in C++ (same developer). @suned, I realize I never answered your question above: I'm very sorry. The issue was about supporting documentation of C code, generally. But since Python C extensions are C code, they could be supported by such a handler as well (correct me if I'm wrong). @2bndy5, hi, thanks for the link to doxybook2! Nice project :slightly_smiling_face: And I see the project has a JSON-output feature, which means it could definitely (and maybe easily) be integrated as a handler for mkdocstrings, in order to make it benefit from mkdocstrings features such as cross-references or markdown inline injection (instead of page generation). I'm so glad you're interested. I was afraid my comment sounded like spam. As I said in #665, I would be happy to try and implement this. Is there a good reference I can follow for writing a handler? From #665: Hopefully this would be easy to implement? All there is to C is pretty much global variables, types, functions, and macros. I'm not sure how mkdocstrings handlers work, but I would imagine that it would be somewhat easy to implement via pycparser. Thanks, pycparser looks like a good candidate indeed, for Python extensions at least. If you don't have the time, or would like to focus on another issue, I would be happy to try to implement this myself, but I would need some example code to follow on how to write a handler. Thanks! So, handlers are basically a collect and a render method. The collect method takes an identifier and configuration options, extracts info from sources or compiled code, and returns it how it wants. The render method takes what the collect method returned and renders it with Jinja templates. You can get inspiration from the Python handler itself. Where the Python handler uses Griffe, your Python/C handler would use pycparser (probably). Maybe you'll need an intermediate data representation layer to better play with Jinja templates. To illustrate it, here's how your handler would compare to existing mkdocstrings handlers: python handler: griffe (intermediate data representation), on top of ast typescript handler: griffe-typedoc (Python structs for TypeScript code), on top of typedoc python/c handler: (optional) intermediate data representation, on top of pycparser To kickstart a handler, you can use our handler template :slightly_smiling_face: You can start a 1-1 conversation on Matrix/Gitter with me if you'd like to ask quick questions while working on it :slightly_smiling_face: Quick question: once compiled, C extensions expose Python objects, right? Griffe should already be able to handle those, as it supports dynamic analysis (importing modules and inspecting them). That however requires that the C extensions are compiled in order to build the docs. A solution built on pycparser would allow building the docs without compiling the extensions :+1: Not necessarily. They can expose a module object via PyInit_, which can hold other Python objects, but they can expose just about anything they want to the shared object file. I think doxygen might be a bit much for mkdocstrings - it is huge, and those who need all of its features will just use its markdown output feature anyway. It's generally a specification for C++ code, so it's way more in-depth than it needs to be for a C handler. I'm going to implement the base features of doxygen that would be useful in C, but I won't use doxygen itself to do that - I'll just write the parser myself. If a separate C++ handler were to be written in the future, then doxygen itself could be used. Yeah, I agree, depending on / bundling Doxygen would be a bit brutal :sweat_smile: Definitely pick the tool of your choice or have some fun implementing a parser! Looking forward to seeing what you come up with :slightly_smiling_face: Thanks to @ZeroIntensity we have a first version of a C handler. Very much experimental. Feel free to give us feedback here: https://github.com/mkdocstrings/c/issues. The code is private and the docs say nothing about dependencies. Does it use libclang? I'm just curious how this was implemented without doxygen. If it uses pycparser, then the docs should note that only C99 is fully supported (link) while C11 is partially supported. libclang can be made to use whatever C/C++ standard, but getting doc comments (which have to be extracted from regular comments) can be a pain, because it practically requires traversing all tokens in the AST from libclang with consideration given to the context of the symbol being documented. Please report bugs! I haven't had time to sit down and fully stress test it yet -- I'm almost certain there are many issues.
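A rough shape of such a handler, following the collect/render description above. The class and its method signatures are hypothetical (not mkdocstrings' actual base API); pycparser's c_parser and c_ast are real, but note that CParser expects already-preprocessed C source.

from pycparser import c_parser, c_ast

class CHandler:
    # Hypothetical mkdocstrings-style handler: collect, then render.

    def collect(self, identifier: str, config: dict) -> dict:
        # Parse the C source named in the config and find the function
        # definition whose name matches the identifier.
        with open(config["source_file"]) as f:
            tree = c_parser.CParser().parse(f.read())
        for node in tree.ext:
            if isinstance(node, c_ast.FuncDef) and node.decl.name == identifier:
                return {"name": node.decl.name, "coord": str(node.decl.coord)}
        raise LookupError(identifier)

    def render(self, data: dict, config: dict) -> str:
        # Real handlers render with Jinja templates; an f-string stands in.
        return f"### `{data['name']}`\n\nDefined at {data['coord']}."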
gharchive/issue
2020-04-25T10:16:44
2025-04-01T04:35:05.461690
{ "authors": [ "2bndy5", "ZeroIntensity", "pawamoy", "suned" ], "repo": "mkdocstrings/mkdocstrings", "url": "https://github.com/mkdocstrings/mkdocstrings/issues/97", "license": "ISC", "license_type": "permissive", "license_source": "github-api" }
149695344
Added test cases for handling multiple attempts at IPMI reservation IDs 1. Test Wrong ReservationID tests an expired reservation ID. 2. Test Correct ReservationID tests a correct reservation ID. 3. Command contents are packaged with fake values. 4. This version has some issues that need to be resolved. Signed-off-by: Nan Li bjlinan@cn.ibm.com This PR is replaced by https://github.com/mkumatag/openbmc-automation/pull/50.
gharchive/pull-request
2016-04-20T08:19:21
2025-04-01T04:35:05.484093
{ "authors": [ "williamli80" ], "repo": "mkumatag/openbmc-automation", "url": "https://github.com/mkumatag/openbmc-automation/pull/45", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
351763538
[Broken Example] Style transfer webcam example https://ml5js.org/docs/style-transfer-webcam-example seems to be having a bit of trouble: a request to https://ml5js.org/docs/assets/scripts/example-style-transfer-video.js 404s and the page sits displaying "Loading model...". Also, just above that, in the "You can train your own images following this tutorial." section, the "this tutorial" link is dead, pointing to https://github.com/ml5js/ml5-data-and-models/tree/master/training Just fixed by https://github.com/ml5js/ml5-website/commit/e0024087c8c3ede9ec983e3bc1d0fae81c4d61da! thanks
gharchive/issue
2018-08-17T23:04:46
2025-04-01T04:35:05.490672
{ "authors": [ "22a", "cvalenzuela" ], "repo": "ml5js/ml5-website", "url": "https://github.com/ml5js/ml5-website/issues/82", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
415897146
change broken references from LSTMGenerator() to charRNN() I was working on CharRNN and noticed that the documentation links are broken and the code sample uses an outdated function name (LSTMGenerator() instead of charRNN()). Might be worth considering just changing all the references to LSTM to charRNN to avoid confusion? @brondle - super! Thank you for catching this issue. I've now merged in your changes and updated the website. The changes should be live now. I did notice one thing while going through the docs, which is that the link in the CharRNN() example is also broken, similar to what you had discovered. We should change this as well - just making a note here so we don't forget! see: https://ml5js.org/docs/CharRNN#source Thanks again!
gharchive/pull-request
2019-03-01T01:18:43
2025-04-01T04:35:05.493138
{ "authors": [ "brondle", "joeyklee" ], "repo": "ml5js/ml5-website", "url": "https://github.com/ml5js/ml5-website/pull/99", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1946540881
[Bug] Compiling android model of Llama-2-7b-chat-hf on Windows 🐛 Bug Error encountered in latest when compiling an android model of Llama-2-7b-chat-hf on Windows.
python -m mlc_llm.build --target android --max-seq-len 768 --model dist/models/Llama-2-7b-chat-hf --quantization q4f16_1
Target configured: opencl -keys=opencl,gpu -max_function_args=128 -max_num_threads=256 -max_shared_memory_per_block=16384 -max_threads_per_block=256 -texture_spatial_limit=16384 -thread_warp_size=1
[23:26:03] D:\a\package\package\tvm\src\node\reflection.cc:109: AttributeError: relax.expr.Var object has no attributed shard_dim
Stack trace not available when DMLC_LOG_STACK_TRACE is disabled at compile time.
[23:26:03] D:\a\package\package\tvm\src\node\reflection.cc:109: AttributeError: relax.expr.Var object has no attributed shard_strategy
Stack trace not available when DMLC_LOG_STACK_TRACE is disabled at compile time.
These two attribute errors repeat roughly 100 times, then finally the index-out-of-bounds error hits.
[23:26:05] D:\a\package\package\tvm\src\relax\ir\expr.cc:174: Check failed: index < tuple_info->fields.size() (197 vs. 197) : Index out of bounds: Tuple params is of size 197, and cannot be accessed with index 197
Stack trace not available when DMLC_LOG_STACK_TRACE is disabled at compile time.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "c:\Users\UserName\project\mlc-llm\mlc_llm\build.py", line 46, in <module> main()
File "c:\Users\UserName\project\mlc-llm\mlc_llm\build.py", line 42, in main core.build_model_from_args(parsed_args)
File "c:\Users\UserName\project\mlc-llm\mlc_llm\core.py", line 645, in build_model_from_args new_params = utils.convert_weights(param_manager, params, args)
File "c:\Users\UserName\project\mlc-llm\mlc_llm\utils.py", line 229, in convert_weights mod_transform = relax.transform.LazyTransformParams()(mod_transform)
File "C:\Users\UserName\anaconda3\envs\project\Lib\site-packages\tvm\ir\transform.py", line 238, in __call__ return _ffi_transform_api.RunPass(self, mod)
File "C:\Users\UserName\anaconda3\envs\project\Lib\site-packages\tvm\_ffi\_ctypes\packed_func.py", line 239, in __call__ raise_last_ffi_error()
File "C:\Users\UserName\anaconda3\envs\project\Lib\site-packages\tvm\_ffi\base.py", line 415, in raise_last_ffi_error _LIB.TVMGetLastPythonError.restype = ctypes.c_void_p
File "C:\Users\UserName\anaconda3\envs\project\Lib\ctypes\__init__.py", line 389, in __getattr__ func = self.__getitem__(name)
File "C:\Users\UserName\anaconda3\envs\project\Lib\ctypes\__init__.py", line 394, in __getitem__ func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'TVMGetLastPythonError' not found. Did you mean: 'TVMAPISetLastPythonError'?
Steps to reproduce the behavior: This follows the outlined Android App instructions via https://llm.mlc.ai/docs/deploy/android.html
Environment Windows 10 10.0.19045, building for Android. Rust installed and exposed to PATH. Android Studio installed and configured. OpenJDK installed. All env vars configured and independently verified. The same setup has now been tested on two different computers with similar but different specs.
fresh conda env py3.11; mlc-llm installed via pip with --recursive.
tvm installed via pip: python -m pip install --pre -U -f https://mlc.ai/wheels mlc-ai-nightly
zstd.dll downloaded and copied to conda env/site-packages/tvm/
Llama-2-7b-chat-hf pulled via git into the correct dist/models/Llama-2-7b-chat-hf subfolder.
To verify TVM, here is the support info.
python -c "import tvm; print(tvm.__file__)"
C:\Users\UserName\anaconda3\envs\project\Lib\site-packages\tvm\__init__.py
python -c "import tvm; print(tvm._ffi.base._LIB)"
<CDLL 'C:\Users\UserName\anaconda3\envs\project\Lib\site-packages\tvm\tvm.dll', handle 7ffd3a1b0000 at 0x2275b9db7d0>
(project) c:\Users\UserName\project\mlc-llm>python -c "import tvm; print('\n'.join(f'{k}: {v}' for k, v in tvm.support.libinfo().items()))"
USE_NVTX: OFF
USE_GTEST: AUTO
SUMMARIZE: OFF
USE_IOS_RPC: OFF
USE_MSC: OFF
USE_ETHOSU:
CUDA_VERSION: NOT-FOUND
USE_LIBBACKTRACE: AUTO
DLPACK_PATH: 3rdparty/dlpack/include
USE_TENSORRT_CODEGEN: OFF
USE_THRUST: OFF
USE_TARGET_ONNX: OFF
USE_AOT_EXECUTOR: ON
BUILD_DUMMY_LIBTVM: OFF
USE_CUDNN: OFF
USE_TENSORRT_RUNTIME: OFF
USE_ARM_COMPUTE_LIB_GRAPH_EXECUTOR: OFF
USE_CCACHE: AUTO
USE_ARM_COMPUTE_LIB: OFF
USE_CPP_RTVM:
USE_OPENCL_GTEST: /path/to/opencl/gtest
USE_MKL: OFF
USE_PT_TVMDSOOP: OFF
MLIR_VERSION: NOT-FOUND
USE_CLML: OFF
USE_STACKVM_RUNTIME: OFF
USE_GRAPH_EXECUTOR_CUDA_GRAPH: OFF
ROCM_PATH: /opt/rocm
USE_DNNL: OFF
USE_VITIS_AI: OFF
USE_MLIR: OFF
USE_RCCL: OFF
USE_LLVM: llvm-config --link-static
USE_VERILATOR: OFF
USE_TF_TVMDSOOP: OFF
USE_THREADS: ON
USE_MSVC_MT: OFF
BACKTRACE_ON_SEGFAULT: OFF
USE_GRAPH_EXECUTOR: ON
USE_NCCL: OFF
USE_ROCBLAS: OFF
GIT_COMMIT_HASH: 62c05266986ea6639a9fd16fb87ba75a9ec056a8
USE_VULKAN: ON
USE_RUST_EXT: OFF
USE_CUTLASS: OFF
USE_CPP_RPC: OFF
USE_HEXAGON: OFF
USE_CUSTOM_LOGGING: OFF
USE_UMA: OFF
USE_FALLBACK_STL_MAP: OFF
USE_SORT: ON
USE_RTTI: ON
GIT_COMMIT_TIME: 2023-10-07 16:42:11 -0700
USE_HEXAGON_SDK: /path/to/sdk
USE_BLAS: none
USE_ETHOSN: OFF
USE_LIBTORCH: OFF
USE_RANDOM: ON
USE_CUDA: OFF
USE_COREML: OFF
USE_AMX: OFF
BUILD_STATIC_RUNTIME: OFF
USE_CMSISNN: OFF
USE_KHRONOS_SPIRV: OFF
USE_CLML_GRAPH_EXECUTOR: OFF
USE_TFLITE: OFF
USE_HEXAGON_GTEST: /path/to/hexagon/gtest
PICOJSON_PATH: 3rdparty/picojson
USE_OPENCL_ENABLE_HOST_PTR: OFF
INSTALL_DEV: OFF
USE_PROFILER: ON
USE_NNPACK: OFF
LLVM_VERSION: 17.0.2
USE_OPENCL: OFF
COMPILER_RT_PATH: 3rdparty/compiler-rt
RANG_PATH: 3rdparty/rang/include
USE_SPIRV_KHR_INTEGER_DOT_PRODUCT: OFF
USE_OPENMP: OFF
USE_BNNS: OFF
USE_CUBLAS: OFF
USE_METAL: OFF
USE_MICRO_STANDALONE_RUNTIME: OFF
USE_HEXAGON_EXTERNAL_LIBS: OFF
USE_ALTERNATIVE_LINKER: AUTO
USE_BYODT_POSIT: OFF
USE_HEXAGON_RPC: OFF
USE_MICRO: OFF
DMLC_PATH: 3rdparty/dmlc-core/include
INDEX_DEFAULT_I64: ON
USE_RELAY_DEBUG: OFF
USE_RPC: ON
USE_TENSORFLOW_PATH: none
TVM_CLML_VERSION:
USE_MIOPEN: OFF
USE_ROCM: OFF
USE_PAPI: OFF
USE_CURAND: OFF
TVM_CXX_COMPILER_PATH: C:/Program Files/Microsoft Visual Studio/2022/Enterprise/VC/Tools/MSVC/14.35.32215/bin/HostX64/x64/cl.exe
HIDE_PRIVATE_SYMBOLS: OFF
Let me know if there is any other info I can gather for you. -Cort
I noticed three issues with your report: One, which seems related to sharding:
[23:26:03] D:\a\package\package\tvm\src\node\reflection.cc:109: AttributeError: relax.expr.Var object has no attributed shard_dim
[23:26:03] D:\a\package\package\tvm\src\node\reflection.cc:109: AttributeError: relax.expr.Var object has no attributed shard_strategy
And the second one is:
[23:26:05] D:\a\package\package\tvm\src\relax\ir\expr.cc:174: Check failed: index < tuple_info->fields.size() (197 vs.
197) : Index out of bounds: Tuple params is of size 197, and cannot be accessed with index 197
Stack trace not available when DMLC_LOG_STACK_TRACE is disabled at compile time.
The last one seems to be related to a recent change from upstream TVM (CC the author @Lunderberg if you'd love to take a look):
AttributeError: function 'TVMGetLastPythonError' not found. Did you mean: 'TVMAPISetLastPythonError'?
Would you like to provide the full Python stack trace of the first two issues? Particularly the first one will be helpful to me :)
Update: https://github.com/mlc-ai/mlc-llm/issues/1112#issuecomment-1776503604 Fixed
gharchive/issue
2023-10-17T04:53:16
2025-04-01T04:35:05.517491
{ "authors": [ "ashmon", "junrushao" ], "repo": "mlc-ai/mlc-llm", "url": "https://github.com/mlc-ai/mlc-llm/issues/1079", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1730202014
Fail to compile CLI I can successfully compile the model but fail to compile the CLI:
-- The C compiler identification is GNU 8.4.0
-- The CXX compiler identification is GNU 8.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test SUPPORT_CXX17
-- Performing Test SUPPORT_CXX17 - Success
-- Setting default build type to RelWithDebInfo
CMake Error at CMakeLists.txt:47 (add_subdirectory): add_subdirectory given source "tvm" which is not an existing directory.
-- system-nameLinux
-- VERSION: 0.2.00
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
CMake Error at CMakeLists.txt:54 (tvm_file_glob): Unknown CMake command "tvm_file_glob".
I met the same problem while using the mlc_ai_nightly wheel, but I successfully compiled mlc_chat_cli by building tvm-unity from source (https://github.com/mlc-ai/relax) and setting $TVM_HOME.
thank you, I'll try this
I had the same issue. This fixed it for me.
Please check our documentation on CLI compilation: https://mlc.ai/mlc-llm/docs/tutorials/deploy-models.html#option-2-build-from-source
thank you very much! I will try that!
see code in scripts/prep_deps.sh; just git clone https://github.com/apache/tvm 3rdparty/tvm --branch unity --recursive and export TVM_HOME="${TVM_HOME:-3rdparty/tvm}"
The documentation link above gives a load error?
Hi @YouCii, we restructured the documentation and that link is outdated. The CLI compilation instructions are now here: https://mlc.ai/mlc-llm/docs/deploy/cli.html#option-2-build-mlc-runtime-from-source
gharchive/issue
2023-05-29T07:04:19
2025-04-01T04:35:05.525646
{ "authors": [ "Atlantis12000", "MasterJH5574", "YouCii", "llucid-97", "rejoicesyc", "sunyuhan19981208", "yzh119" ], "repo": "mlc-ai/mlc-llm", "url": "https://github.com/mlc-ai/mlc-llm/issues/255", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2735958928
Fix broken URLs Migrated from https://github.com/mlcommons/mlperf-automations_archived/issues/21 Originally created by @arjunsuresh on Tue, 17 Sep 2024 14:37:00 GMT We currently have many broken links in the README files. Some of these are false positives like the linkedin URLs and private github repo links. It'll be good to replace/remove the broken ones. Seems to be fine now: https://github.com/mlcommons/mlperf-automations/actions/workflows/check-broken-links.yml
gharchive/issue
2024-12-12T13:42:57
2025-04-01T04:35:05.528226
{ "authors": [ "anandhu-eng", "arjunsuresh" ], "repo": "mlcommons/mlperf-automations", "url": "https://github.com/mlcommons/mlperf-automations/issues/26", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
166897525
Column names test To discuss. Such a test is definitely useful. I agree that the highlighted line looks like a bug. There is also this complementary test; it operates on a dataset: https://github.com/mldbai/mldb/blob/master/testing/MLDB-832-select_star.py +1 once the duplicated line is removed.
gharchive/pull-request
2016-07-21T19:22:25
2025-04-01T04:35:05.530111
{ "authors": [ "FinchPowers", "guyd" ], "repo": "mldbai/mldb", "url": "https://github.com/mldbai/mldb/pull/558", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
170462545
Include a way to download demo posts [x] This is a feature request. Would love an option to download the sample posts' markdown displayed in the demo so I could quickly replicate functionality, especially for first-time Jekyll users. Is there any way I can get them? Thanks for the amazing theme Michael. They're all in the gh-pages branch. You can download from there and then integrate the _posts, _pages, and collections. Just be sure to properly configure all the settings and front matter defaults in _config.yml. Quickest way to get up and running would be to download the gh-pages branch and change titles, url, repository and a few other things specific to the demo site. I've also broken the sample posts out into a different repo to be used in other themes for testing purposes: https://github.com/mmistakes/jekyll-theme-unit-test Ah, I forgot I could check gh-pages. Silly me. Thanks Michael!
gharchive/issue
2016-08-10T16:38:01
2025-04-01T04:35:05.729505
{ "authors": [ "mmistakes", "stephenkoo" ], "repo": "mmistakes/minimal-mistakes", "url": "https://github.com/mmistakes/minimal-mistakes/issues/427", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
203909704
[Feature Request] Transfers Add the possibility to specify how many transfers to run with --transfers x. Oops, somehow I missed one of the most important options... I'll definitely add it.
gharchive/issue
2017-01-30T00:44:20
2025-04-01T04:35:05.755796
{ "authors": [ "Stonedestroyer", "mmozeiko" ], "repo": "mmozeiko/RcloneBrowser", "url": "https://github.com/mmozeiko/RcloneBrowser/issues/1", "license": "unlicense", "license_type": "permissive", "license_source": "bigquery" }
130703444
Initial implementation of scss_files_option (doc/tests tbd) refs #4 @doits looks good :smile: Will you write some specs for it? At least for the scss_file? method? :smile: @mmozuras there you go. Not sure what Travis has; the checks work locally. Do you have any idea? @doits fetched it and it fails for me locally too. Here's a fix that made it pass for me: https://github.com/mmozuras/pronto-scss/commit/93c57a422cbc52dc3c31af3d2f38b77324d499eb I moved into the test.git folder and: renamed git to .git, ran git init, removed the unnecessary files and left only the changes in the aforementioned fix, renamed .git back to git (because you can't have a repo inside a repo; that's the only way I found to write these tests), and committed the change. @mmozuras The steps you proposed didn't fix it on Travis, though. I just checked out test.git from master, added the new world.css.scss, did a git commit and didn't touch anything else. Looks like this worked and didn't break Travis for now. @doits thanks! Released as 0.5.3 :smile: :bow:
gharchive/pull-request
2016-02-02T14:48:07
2025-04-01T04:35:05.760909
{ "authors": [ "doits", "mmozuras" ], "repo": "mmozuras/pronto-scss", "url": "https://github.com/mmozuras/pronto-scss/pull/5", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2583976538
What types of challenges can be submitted? Hi there 👋🏻 I want to ask what types of challenges can be submitted. You have already mentioned that we can create contributions in Python, JavaScript, C++ and Java. What if we submit a web app? In Python, for example, we would use Django or Flask to build one. Can we submit a whole web app project? Any type of challenge is welcome as long as it provides value to others. The programming languages listed are just examples; you are free to use any language you prefer. Submissions can range from a single file or script to a full project, including web apps built with frameworks like Django or Flask. For project submissions, please create a directory with a name following the pattern project_<your_project_name> to keep things organized. I appreciate all contributions that enhance the repository! @mmujtabah Thanks 👍🏻 Can you update README.md for this? This will let everyone know what type of contributions they can make. Done Okay, now you can close this issue.
gharchive/issue
2024-10-13T13:36:44
2025-04-01T04:35:05.775529
{ "authors": [ "alizaincodes", "mmujtabah" ], "repo": "mmujtabah/CodingChallenges", "url": "https://github.com/mmujtabah/CodingChallenges/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2673082299
ENH: Add meegkit Will fail until meegkit actually builds. Now we wait for https://github.com/conda-forge/uv-feedstock/issues/114
gharchive/pull-request
2024-11-19T18:13:19
2025-04-01T04:35:05.788086
{ "authors": [ "larsoner" ], "repo": "mne-tools/mne-installers", "url": "https://github.com/mne-tools/mne-installers/pull/306", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2550582220
Raw plot yields either empty or buggy plot Description of the problem I'm working with intracranial EEG data. The plotting is erratic. Some .fif files yield a blank screen if plotting from a Jupyter notebook (.ipynb). If plotting from a .py file, the data looks corrupted no matter what uV scale I choose. Interestingly, some .fif files seem to load fine, so I'm not sure if it's something inherent to the data itself that is problematic. However, the data in the actual object looks fine if I plot it using matplotlib.
.ipynb (tested with vscode): [screenshot]
.py (tested with vscode and pycharm): [screenshot]
Steps to reproduce
# The code for the .ipynb:
raw_file = mne.io.read_raw_fif(fname, verbose=True, preload=False)
raw_file.load_data()
raw_file.notch_filter(np.arange(60, 241, 60)).filter(l_freq=1., h_freq=None)
raw_file.plot(scalings=dict(mag=1e-12, grad=4e-11, eeg=350e-6, eog=150e-6, ecg=5e-4, emg=1e-3, ref_meg=1e-12, misc=1e-3, stim=1, resp=1, chpi=1e-4, whitened=1e2))
# The code for the .py:
raw_file = mne.io.read_raw_fif(fname, verbose=True, preload=False).crop(tmin=0*60, tmax=10*60)  #.pick(np.arange(20,50)) #.crop(tmin=420*60, tmax=500*60).resample(512)
raw_file.load_data()
raw_file.plot(scalings=dict(mag=1e-12, grad=4e-11, eeg=35e-6, eog=150e-6, ecg=5e-4, emg=1e-3, ref_meg=1e-12, misc=1e-3, stim=1, resp=1, chpi=1e-4, whitened=1e2))
plt.show()
Link to data No response Expected results I expect a good-looking plot. Actual results The plot looks bad. Additional information
Platform macOS-14.6.1-arm64-arm-64bit
Python 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:38:07) [Clang 16.0.6 ]
Executable /Users/michaa08/Library/Mobile Documents/com~apple~CloudDocs/Python_Scripts/Environments/eegPreprocess/bin/python
CPU arm (12 cores)
Memory 64.0 GB
Core
├☑ mne 1.8.0 (latest release)
├☑ numpy 1.26.3 (OpenBLAS 0.3.26 with 12 threads)
├☑ scipy 1.12.0
└☑ matplotlib 3.8.2 (backend=module://matplotlib_inline.backend_inline)
Numerical (optional)
├☑ sklearn 1.4.0
├☑ numba 0.58.1
├☑ nibabel 5.2.0
├☑ nilearn 0.10.2
├☑ dipy 1.7.0
├☑ openmeeg 2.5.7
├☑ pandas 2.2.0
├☑ h5io 0.2.1
├☑ h5py 3.10.0
└☐ unavailable cupy
Visualization (optional)
├☑ pyvista 0.43.2 (OpenGL 4.1 Metal - 88.1 via Apple M2 Max)
├☑ pyvistaqt 0.11.0
├☑ vtk 9.2.6
├☑ qtpy 2.4.1 (PyQt5=5.15.8)
├☑ pyqtgraph 0.13.3
├☑ mne-qt-browser 0.6.1
├☑ ipywidgets 8.1.1
├☑ trame_client 2.15.0
├☑ trame_server 2.15.0
├☑ trame_vtk 2.7.0
├☑ trame_vuetify 2.4.2
└☐ unavailable ipympl
Ecosystem (optional)
├☑ eeglabio 0.0.2-4
├☑ mffpy 0.8.0
└☐ unavailable mne-bids, mne-nirs, mne-features, mne-connectivity, mne-icalabel, mne-bids-pipeline, neo, edfio, pybv
your scaling is definitely off. You can press the "-" key to reduce scaling with the qt browser, and with matplotlib if it's a qt window. You can also look at the "auto" parameter value for scalings.
I've tried both of those, looking at scalings between 0 uV and 7000 uV. In either of the above UIs, it doesn't change the appearance at all.
Can you tell me the standard deviation of your data in the raw object?
Ah, you've solved it, thank you! The general stdev range across the 96 channels is 40-160 with a few outliers... So I changed eeg=350e-6 to eeg=40, and this did the trick. The voltage marker in the UI now says 125000000.0 uV, but the EEG looks good. The majority of the files in this dataset are visualized well at 700uV (i.e. eeg=350e-6), so I'm wondering if there was some conversion that was/wasn't done for these other files. Thank you very much!!!!
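The diagnostic that resolved this, as a minimal sketch: fname is a placeholder path, picks="eeg" assumes the intracranial channels are typed as EEG in this file, and the scaling value 40 is the one reported above, not a general recommendation.

import numpy as np
import mne

raw = mne.io.read_raw_fif(fname, preload=True)   # fname: placeholder path to a .fif file
data = raw.get_data(picks="eeg")                 # channels x samples, in whatever unit the file stores
print(np.std(data, axis=1))                      # per-channel std; here ~40-160 rather than volt-scale values
# When the numbers are not volt-scale, pass a scaling on the same order as the std:
raw.plot(scalings=dict(eeg=40))

If the std had come out around 1e-5, the default volt-scale scalings (eeg=350e-6 and similar) would have been appropriate instead.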
gharchive/issue
2024-09-26T13:23:06
2025-04-01T04:35:05.803047
{ "authors": [ "Anj-RU", "agramfort" ], "repo": "mne-tools/mne-python", "url": "https://github.com/mne-tools/mne-python/issues/12873", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
443439257
BUG, VIZ saving an animated topomap fails
import mne
path = mne.datasets.kiloword.data_path()
fname = path + '/kword_metadata-epo.fif'
evoked = mne.read_epochs(fname).average()
fig, anim = evoked.animate_topomap()
anim.save("topo.gif")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jona/miniconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 1174, in save writer.grab_frame(**savefig_kwargs)
File "/Users/jona/miniconda3/lib/python3.6/contextlib.py", line 99, in __exit__ self.gen.throw(type, value, traceback)
File "/Users/jona/miniconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 232, in saving self.finish()
File "/Users/jona/miniconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 358, in finish self.cleanup()
File "/Users/jona/miniconda3/lib/python3.6/site-packages/matplotlib/animation.py", line 395, in cleanup out, err = self._proc.communicate()
File "/Users/jona/miniconda3/lib/python3.6/subprocess.py", line 863, in communicate stdout, stderr = self._communicate(input, endtime, timeout)
File "/Users/jona/miniconda3/lib/python3.6/subprocess.py", line 1525, in _communicate selector.register(self.stdout, selectors.EVENT_READ)
File "/Users/jona/miniconda3/lib/python3.6/selectors.py", line 351, in register key = super().register(fileobj, events, data)
File "/Users/jona/miniconda3/lib/python3.6/selectors.py", line 237, in register key = SelectorKey(fileobj, self._fileobj_lookup(fileobj), events, data)
File "/Users/jona/miniconda3/lib/python3.6/selectors.py", line 224, in _fileobj_lookup return _fileobj_to_fd(fileobj)
File "/Users/jona/miniconda3/lib/python3.6/selectors.py", line 39, in _fileobj_to_fd "{!r}".format(fileobj)) from None
ValueError: Invalid file object: <_io.BufferedReader name=18>
Same in notebook, ipython, REPL.
This looks like a matplotlib bug; can you see if you can replicate with a simple mpl snippet instead?
You mean manually creating an animation where I loop over time steps for a topomap?
No, not even a topomap: a script with only plt calls (nothing from MNE). In other words, can you get any matplotlib animation to save? Is the problem specific to MNE topomaps, or are all mpl animation saves broken for you?
Oh no, I can in principle save animations with matplotlib's animation functionality.
Okay, so we do something with our plotting that does not work for them then. The question is what is it? The traceback you pasted has only matplotlib in it, so I'm not sure how best to debug.
FWIW this works on my machine (modified to use a smaller time snippet so it does not take forever):
import mne
path = mne.datasets.kiloword.data_path()
fname = path + '/kword_metadata-epo.fif'
evoked = mne.read_epochs(fname).average()
evoked.crop(0, 0.1)
fig, anim = evoked.animate_topomap()
anim.save("topo.gif")
I get: [animated GIF] But I get this message: MovieWriter ffmpeg unavailable; trying to use <class 'matplotlib.animation.PillowWriter'> instead. So maybe it's a matplotlib corner-case bug where MNE code is used with the ffmpeg writer?
It works for me if I use writer='imagemagick' in the save call, and that seems to be the case universally. So not MNE related.
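Summarized as a runnable sketch, the workaround from this thread (assuming ImageMagick is installed; the string-concatenation data_path usage matches the older MNE version used here):

import mne

path = mne.datasets.kiloword.data_path()
evoked = mne.read_epochs(path + '/kword_metadata-epo.fif').average().crop(0, 0.1)
fig, anim = evoked.animate_topomap()
anim.save("topo.gif", writer="imagemagick")  # explicit writer sidesteps the broken ffmpeg code path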
gharchive/issue
2019-05-13T14:43:43
2025-04-01T04:35:05.808760
{ "authors": [ "jona-sassenhagen", "larsoner" ], "repo": "mne-tools/mne-python", "url": "https://github.com/mne-tools/mne-python/issues/6298", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
684251830
Epochs and Events (No stim channel found to extract event triggers) Hey, I got this error when I'm trying to get events and make epochs: ValueError: No stim channel found to extract event triggers. What should I do if my dataset file (.edf) has no stim channel? I actually want to do time-frequency analysis of my data. Thanks. This appears to be a question about usage, not a bug report or a new feature request. Please use the MNE mailing list or the gitter channel for usage questions.
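For reference, one common approach when a recording has no stim channel is to create fixed-length events and epoch on those. A minimal sketch; the filename and the 2-second duration are placeholders, not values from this issue:

import mne

raw = mne.io.read_raw_edf("my_recording.edf", preload=True)  # placeholder filename
events = mne.make_fixed_length_events(raw, duration=2.0)     # one synthetic event every 2 s
epochs = mne.Epochs(raw, events, tmin=0.0, tmax=2.0, baseline=None)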
gharchive/issue
2020-08-23T20:33:31
2025-04-01T04:35:05.811123
{ "authors": [ "agramfort", "mianjalal786" ], "repo": "mne-tools/mne-python", "url": "https://github.com/mne-tools/mne-python/issues/8148", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1765180991
Easycap-M43 montage Fixes #11742 A new PR for https://github.com/mne-tools/mne-python/issues/11737 Since my GitHub commit signature was not verified due to a system change, I had to redo the commits. I think this might have caused the issue with the CI checks. @sappelhoff would you kindly go through it once and close the previous PR? Thank you! Some of the CI issues stem from the following facts: this is your first time contributing here, so some CI workflows need to be approved by maintainers to run (I did that); you don't have an account with CircleCI, and our CI there is configured such that only registered users will trigger a run (feel free to register there with your GitHub account, but maintainers can also manually trigger a run, like I did); signed commits are not required, and you could look into overwriting your own history and force-pushing to your branch instead of opening new PRs :-) but I understand if opening a new PR is sometimes an easier way out of a tricky situation. Thanks for your contribution! Hope to see more of them. thanks @dasdiptyajit !
gharchive/pull-request
2023-06-20T11:11:15
2025-04-01T04:35:05.814497
{ "authors": [ "dasdiptyajit", "drammock", "sappelhoff" ], "repo": "mne-tools/mne-python", "url": "https://github.com/mne-tools/mne-python/pull/11744", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1570646001
Solve TensorRT Bug Will hopefully speed up retinaface inference time even more! This is a warning message from TensorFlow indicating that the TensorRT libraries could not be loaded. TensorRT is a high-performance deep learning inference library from Nvidia, and is used to optimize TensorFlow models for deployment on Nvidia GPUs. The message states that two TensorRT libraries (libnvinfer.so.7 and libnvinfer_plugin.so.7) could not be opened because the files do not exist. This suggests that TensorRT may not be installed properly on your system, or that the libraries are not located in a directory that is in the LD_LIBRARY_PATH environment variable. To resolve this issue, you can either install TensorRT and make sure the libraries are in a directory that is in LD_LIBRARY_PATH, or you can reconfigure TensorFlow to not use TensorRT. Install the whole thing, but not from pip (https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html) -> Ignore this when TensorRT is not needed; otherwise downgrade to TensorRT 7.x using python=3.8 in the emorec env (https://github.com/tensorflow/tensorflow/issues/57679)
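A quick way to check from Python whether the dynamic loader can resolve the TensorRT libraries at all; this is just a diagnostic sketch, not part of this project:

import ctypes.util

# find_library returns a resolvable library name if the loader can find it, else None.
for lib in ("nvinfer", "nvinfer_plugin"):
    found = ctypes.util.find_library(lib)
    print(lib, "->", found or "NOT FOUND (check the TensorRT install / LD_LIBRARY_PATH)")

Note that LD_LIBRARY_PATH is read by the loader at process start, so it must be set before launching Python, not from inside the running interpreter.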
gharchive/issue
2023-02-04T00:04:35
2025-04-01T04:35:05.825279
{ "authors": [ "mo12896" ], "repo": "mo12896/emotion-recognition", "url": "https://github.com/mo12896/emotion-recognition/issues/17", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
565901208
create 0000-01-02-moaathibra part of the course. thanks for the help
gharchive/pull-request
2020-02-16T12:58:18
2025-04-01T04:35:05.846004
{ "authors": [ "moaathibra" ], "repo": "moaathibra/github-slideshow", "url": "https://github.com/moaathibra/github-slideshow/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1421137777
chore/settings This PR includes: The UI touch for the settings view. LGTM!
gharchive/pull-request
2022-10-24T16:56:56
2025-04-01T04:35:05.846994
{ "authors": [ "moak13" ], "repo": "moak13/fc_weather", "url": "https://github.com/moak13/fc_weather/pull/15", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1370077930
Add rust enclave report create and verify wrappers A new trait, mc-sgx-tservice::Report, has been added which provides functionality to create an mc-sgx-core-types::Report and to verify an mc-sgx-core-types::Report. This functionality operates inside of an enclave. Current dependencies on/for this PR: main, PR #125, PR #126, PR #127, PR #130 👈 This comment was auto-generated by Graphite. Codecov Report Merging #130 (722caec) into feature/unsealing_functions (18bf2d2) will decrease coverage by 0.79%. The diff coverage is 0.00%.
@@                 Coverage Diff                  @@
##    feature/unsealing_functions    #130   +/-  ##
====================================================
- Coverage             83.04%    82.24%    -0.80%
====================================================
  Files                    32        33        +1
  Lines                  1852      1870       +18
====================================================
  Hits                   1538      1538
- Misses                  314       332       +18
Impacted Files / Coverage Δ:
core/types/src/lib.rs 100.00% <ø> (ø)
core/types/src/report.rs 100.00% <ø> (ø)
tservice/src/lib.rs 100.00% <ø> (ø)
tservice/src/report.rs 0.00% <0.00%> (ø)
tservice/src/seal.rs 61.59% <ø> (ø)
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
gharchive/pull-request
2022-09-12T15:13:28
2025-04-01T04:35:05.867503
{ "authors": [ "codecov-commenter", "nick-mobilecoin" ], "repo": "mobilecoinfoundation/sgx", "url": "https://github.com/mobilecoinfoundation/sgx/pull/130", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1827332202
🛑 PizzaTime Web is down In 0dd4018, PizzaTime Web (https://pizzatime.uy) was down: HTTP code: 0 Response time: 0 ms Resolved: PizzaTime Web is back up in e3bf539.
gharchive/issue
2023-07-29T05:44:43
2025-04-01T04:35:05.870063
{ "authors": [ "mobilitysol" ], "repo": "mobilitysol/monitorweb", "url": "https://github.com/mobilitysol/monitorweb/issues/1482", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
4503736
Should timing badges take into account beforeEach/afterEach? My beforeEach functions are pretty slow, and sometimes even dominate test time. (I set up test fixture data and initialize an Ember app.) But the little timing badges that show up in the browser (above 50ms or so) do not include this time; they only seem to measure the time of the test functions themselves. I'm thinking that it would be sensible for the badges to include the beforeEach/afterEach time. (Perhaps there are also other parts where these timings are used, like timeouts -- I'm fairly new to mocha.) What do you think?
Looks like this was never completed:
$ cat example.js
beforeEach(function(done) { setTimeout(done, 180); });
it('foo', function(done) { setTimeout(done, 100); });
it('bar', function(done) { done(); });
$ ./bin/mocha --reporter spec example.js
✓ foo (101ms)
✓ bar
2 passing (468ms)
I could come up with a PR for this. The resulting output would be:
$ ./bin/mocha --reporter spec example.js
✓ foo (281ms)
✓ bar (181ms)
2 passing (468ms)
$ ./bin/mocha --reporter --verbose spec example.js
beforeEach (181ms)
✓ foo (281ms)
beforeEach (181ms)
✓ bar (181ms)
2 passing (468ms)
To clarify, I'd probably keep the current behavior of having timeouts apply to contexts, and not aggregated test execution time.
@danielstjules It would also be nice to show the name of the hook, for when you have multiple ones.
@danielstjules go for it! :)
@danielstjules would love this!
cc @Nokel81; you might want to consider this when looking at the stats gathering.
Great idea; the aggregator should have access to this information so reporters will be able to start using it.
@boneskull @Nokel81 I created a PR https://github.com/mochajs/mocha/pull/3776 (based on https://github.com/mochajs/mocha/pull/2244) which introduces an option --time-setup to include the duration of beforeEach hooks in the duration of each test. I have found this useful for optimizing tests in my own projects.
bump, seeing that the PR is still open, is it still being worked on? What are the alternatives?
Would be awesome to get #3776 merged; it's exactly the solution I need. My use case is CircleCI test splitting, which uses the timing data from the junit reporter, which is completely off if it doesn't include beforeEach.
I ran into this and was saddened to see that PR #3776 is still outstanding, so I set out to see if there was some way I could hack this behaviour into current MochaJS, and this is what I came up with:
let start = performance.now()

export const mochaHooks = {
  beforeEach: async function () {
    const currentTest = this.currentTest
    const originalRun = currentTest.run
    currentTest.run = function (fn) {
      originalRun.call(currentTest, function (...args) {
        const end = performance.now()
        currentTest.duration = Math.round(end - start)
        start = end
        fn(...args)
      })
    }
  }
}
It's put into a --require file which adds a global hook. I have made it such that start is set immediately and updated after each test; this ensures all time spent between tests is included, so any before / beforeEach / after / afterEach etc. is counted in the duration. It's kind of hackish, but works.
gharchive/issue
2012-05-09T23:24:18
2025-04-01T04:35:05.990145
{ "authors": [ "Nokel81", "boneskull", "danielstjules", "dasilvacontin", "jbnicolai", "joliss", "jsphkm", "kevinburkeshyp", "micdah", "sgilroy", "theflow" ], "repo": "mochajs/mocha", "url": "https://github.com/mochajs/mocha/issues/419", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2575652657
Fix and improve handling boolean parameters This fixes edge cases where the current solution did not work properly, but it also introduces an equivalence between yes/no and on/off. The latter behavior can be changed so that it matches the old one, but I think the equivalence is reasonable... close #285
gharchive/pull-request
2024-10-09T11:33:10
2025-04-01T04:35:06.015492
{ "authors": [ "jajik" ], "repo": "modcluster/mod_proxy_cluster", "url": "https://github.com/modcluster/mod_proxy_cluster/pull/287", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
919709513
Dependency Management This pull request updates all dependencies in the repository, and removes any that were unused. These changes have been tested for compatibility, and work as intended. A few packages were also deprecated, and as a result, removed. This PR makes use of NPM v7. If you have not already, please make sure you have updated your local version of NPM to 7 or higher. doesn't seem like it? This appears to be true, confirmed through testing. PR has been updated with latest changes. Feel free to merge.
gharchive/pull-request
2021-06-13T03:48:02
2025-04-01T04:35:06.017222
{ "authors": [ "DamienVesper" ], "repo": "moddio/taro", "url": "https://github.com/moddio/taro/pull/91", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
200784920
application of choices annotation on enumerations Reported by eichler on 25 May 2011 08:45 UTC The idea: use an existing enumeration in a choicebox with a restricted number of choices. This could avoid the definition of an additional enumeration type with almost the same options/choices. Example:
type Dynamics = enumeration(
    DynamicFreeInitial "DynamicFreeInitial",
    FixedInitial "FixedInitial",
    SteadyStateInitial "SteadyStateInitial",
    SteadyState "SteadyState") "definition of balance equations";
parameter Types.Dynamics energyDynamics = Dynamics.DynamicFreeInitial
  // only two alternatives are allowed, so restrict the choices at the GUI to these two
  annotation(choices(
    choice = Dynamics.DynamicFreeInitial,
    choice = Dynamics.FixedInitial));
Migrated-From: https://trac.modelica.org/Modelica/ticket/559 Modified by dietmarw on 25 May 2011 19:34 UTC
Comment by hansolsson on 28 Sep 2011 08:28 UTC The idea (using choices to restrict enumerations to avoid having too many enumeration types) looks promising, but to me this looks like changes in the library and not in the spec, and thus I am changing the milestone. I assume that illegal settings (e.g. energyDynamics=Dynamics.SteadyStateInitial) will be handled by an assert in the library.
Comment by hansolsson on 24 Feb 2012 14:06 UTC As previously indicated this seems like a library issue and not a specification issue.
Comment by dietmarw on 1 Mar 2012 14:39 UTC Feasible for MSL 3.2.1?
Comment by fcasella on 5 Mar 2013 22:54 UTC As of March 5 2013, the only case in Modelica.Fluid where this might apply is Types.ModelStructure and Types.ModelStructureReduced. How this might be fixed will depend on how we eventually implement the specialized pipe components.
Changelog removed by fcasella on 5 Mar 2013 22:54 UTC
Comment by otter on 19 Apr 2013 08:17 UTC Since this is a tiny issue and seems to appear only once in MSL, it does not seem worth spending effort on it now.
Comment by dietmarw on 19 Apr 2013 08:21 UTC Well, but maybe we can implement it in the future.
Comment by hansolsson on 19 Apr 2013 16:07 UTC There are three parts here: the general idea of avoiding new enumerations by reusing an existing one with a choices annotation to limit the intended choices; the specific case of using this in a specific place in MSL; and whether this is OK according to the specification. In reverse order: Yes, it is OK according to the spec as far as I can see. The specific case cannot easily be added after MSL 3.2.1 since it seems it breaks backward compatibility (changing the type of a parameter). The general idea can be done at any point and is not connected to any specific component of MSL, and it can also be done multiple times etc., i.e. it is not suitable to track the general idea with a ticket.
Comment by otter on 13 Dec 2015 18:44 UTC Since this would be a non-backwards-compatible change, change the milestone.
Comment by hansolsson on 15 Dec 2015 10:55 UTC I would re-close this, since the current specification allows choices as a GUI restriction, but changing an existing enumeration is not fully backwards compatible. Each specific enumeration getting such a choices annotation deserves its own ticket.
I would re-close this. Ok.
gharchive/issue
2017-01-14T06:22:52
2025-04-01T04:35:06.026793
{ "authors": [ "beutlich", "modelica-trac-importer" ], "repo": "modelica/Modelica", "url": "https://github.com/modelica/Modelica/issues/559", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
878675975
Mapping argument type mismatch error since 2.4.0 Mapping worked fine with 2.3.9, but with changes in 2.4.0 it fails. Stacktrace below is for current master version. org.modelmapper.MappingException: ModelMapper mapping errors: 1) Converter org.modelmapper.internal.converter.MergingCollectionConverter@3a32f30e failed to convert org.hibernate.collection.internal.PersistentBag to java.util.List. 1 error at org.modelmapper.internal.Errors.throwMappingExceptionIfErrorsExist(Errors.java:380) at org.modelmapper.internal.MappingEngineImpl.map(MappingEngineImpl.java:80) at org.modelmapper.ModelMapper.mapInternal(ModelMapper.java:573) at org.modelmapper.ModelMapper.map(ModelMapper.java:495) *** Caused by: org.modelmapper.MappingException: ModelMapper mapping errors: 1) Failed to set value 'class SomeAdminModel { class SomeBaseModel { class SomeRootModel { ... } id: 6 year: null ... } aaa: null bbb: null }' on models.AbcAdminModel.setSome() 1 error at org.modelmapper.internal.Errors.toMappingException(Errors.java:258) at org.modelmapper.internal.PropertyInfoImpl$MethodMutator.setValue(PropertyInfoImpl.java:128) at org.modelmapper.internal.MappingContextImpl.getParentDestination(MappingContextImpl.java:325) at org.modelmapper.internal.MappingEngineImpl.setDestinationValue(MappingEngineImpl.java:230) at org.modelmapper.internal.MappingEngineImpl.propertyMap(MappingEngineImpl.java:187) at org.modelmapper.internal.MappingEngineImpl.typeMap(MappingEngineImpl.java:151) at org.modelmapper.internal.MappingEngineImpl.map(MappingEngineImpl.java:114) at org.modelmapper.internal.converter.MergingCollectionConverter.convert(MergingCollectionConverter.java:59) at org.modelmapper.internal.converter.MergingCollectionConverter.convert(MergingCollectionConverter.java:31) at org.modelmapper.internal.MappingEngineImpl.convert(MappingEngineImpl.java:306) at org.modelmapper.internal.MappingEngineImpl.map(MappingEngineImpl.java:109) at org.modelmapper.internal.MappingEngineImpl.setDestinationValue(MappingEngineImpl.java:245) at org.modelmapper.internal.MappingEngineImpl.propertyMap(MappingEngineImpl.java:187) at org.modelmapper.internal.MappingEngineImpl.typeMap(MappingEngineImpl.java:151) at org.modelmapper.internal.MappingEngineImpl.map(MappingEngineImpl.java:114) at org.modelmapper.internal.MappingEngineImpl.map(MappingEngineImpl.java:71) ... 159 common frames omitted Caused by: java.lang.IllegalArgumentException: argument type mismatch at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at org.modelmapper.internal.PropertyInfoImpl$MethodMutator.setValue(PropertyInfoImpl.java:126) ... 173 common frames omitted I map an object with list field to another class with list field. Each list has different type and it fails during mapping of the field of type 'SomeAdminModel'. I took a look in the debugger and for some reason it tries to set an unmapped object and this produces "argument type mismatch" error above. 
my mapper configuration is as follows:
modelMapper.getConfiguration().setFieldMatchingEnabled(true).setMatchingStrategy(MatchingStrategies.STRICT).setFieldAccessLevel(Configuration.AccessLevel.PRIVATE).setUseOSGiClassLoaderBridging(true);
Hi @Roman-Skripka, can you provide the source and destination models for reproducing this issue? Thanks!
Any solution?
Same for us. Stuck on 2.3.9, which prevents an upgrade to JDK > 11.0.10.
Hello. Is there any update about this problem? I have updated my modelmapper version from 2.3.9 to 3.1.0 and my mapping layer is also broken.
I've been working with modelmapper for 3 years, using version 2.3.0. But as time passed and my project grew, the construction time grew along with it. This gave me a smaller output. This year I updated the version to 3.0.0. But all the mappings stopped working and now I just don't know what I can do to improve my project.
Same problem here. After moving from 2.3.9 to any version higher than that, I got exactly the same exception as described above. Trying to upgrade Java to JVM 17 using version 2.3.9 does not work either. Are there any updates on this issue?
Same problem here in v3.1.0, and I can see in the debugger that a String is trying to be mapped to an object via reflection. The exception is thrown at line 124 of PropertyInfoImpl at version v3.1.0. https://github.com/modelmapper/modelmapper/blob/251772e790fe6b0799db052681c82bea0e760b43/core/src/main/java/org/modelmapper/internal/PropertyInfoImpl.java#L124
Info from debugger:
member = public void com.mypackage.ClassADto.setClassBDto(com.mypackage.ClassBDto)
value = "com.mypackage.ClassB@44d62b68" // String from toString(), not the object
gharchive/issue
2021-05-07T09:37:24
2025-04-01T04:35:06.091061
{ "authors": [ "DagleAnderson", "Roman-Skripka", "Spookyguy", "chhsiao90", "husseyd", "kelvinsleonardo", "lazaro3487", "marcioggs" ], "repo": "modelmapper/modelmapper", "url": "https://github.com/modelmapper/modelmapper/issues/615", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2193900578
Installed from the sd-webui Extensions tab (install from URL); FaceChain does not show up after restart. I installed FaceChain from the sd-webui Extensions tab (install from URL), but after restarting, FaceChain does not show up. Has anyone else run into this problem?
Same here.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772fab0af5573da775e7437e6acdca424f26e
Installing requirements for diffusers
Launching Web UI with arguments: --theme light --xformers --api --autolaunch
Civitai Helper: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.3.1, num models: 22
[lora-prompt-tool] Get Custom Model Folder
ControlNet preprocessor location: G:\sd-webui-aki-v4.6.1\extensions\sd-webui-controlnet\annotator\downloads
2024-03-22 13:42:48,343 - ControlNet - INFO - ControlNet v1.1.441
2024-03-22 13:42:48,426 - ControlNet - INFO - ControlNet v1.1.441
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [1681bf15ca] from G:\sd-webui-aki-v4.6.1\models\Stable-diffusion\PONY\js2prony_v10.safetensors
2024-03-22 13:42:49,402 - ControlNet - INFO - ControlNet UI callback registered.
Civitai Helper: Settings:
Civitai Helper: max_size_preview: True
Civitai Helper: skip_nsfw_preview: False
Civitai Helper: open_url_with_js: True
Civitai Helper: proxy:
2024-03-22 13:42:49,999 - modelscope - INFO - PyTorch version 2.1.2+cu118 Found.
2024-03-22 13:42:50,009 - modelscope - INFO - Loading ast index from G:\sd-webui-aki-v4.6.1\.cache\modelscope\hub\ast_indexer
Creating model from config: G:\sd-webui-aki-v4.6.1\repositories\generative-models\configs\inference\sd_xl_base.yaml
2024-03-22 13:42:50,313 - modelscope - INFO - Loading done! Current index file version is 1.12.0, with md5 d5f1181da20c4b632256d7ae966f0089 and a total number of 964 components indexed
*** Error executing callback ui_tabs_callback for G:\sd-webui-aki-v4.6.1\extensions\facechain\scripts\facechain_sdwebui.py
Traceback (most recent call last):
File "G:\sd-webui-aki-v4.6.1\modules\script_callbacks.py", line 166, in ui_tabs_callback res += c.callback() or []
File "G:\sd-webui-aki-v4.6.1\extensions\facechain\scripts\facechain_sdwebui.py", line 15, in on_ui_tabs import app
File "G:\sd-webui-aki-v4.6.1\extensions\facechain\app.py", line 16, in <module> from facechain.inference import preprocess_pose, GenPortrait
File "G:\sd-webui-aki-v4.6.1\extensions\facechain\facechain\inference.py", line 10, in <module> from diffusers import StableDiffusionPipeline, StableDiffusionControlNetPipeline, ControlNetModel,
ImportError: cannot import name 'StableDiffusionXLPipeline' from 'diffusers' (G:\sd-webui-aki-v4.6.1\python\lib\site-packages\diffusers\__init__.py)
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 20.2s (prepare environment: 9.4s, import torch: 2.2s, import gradio: 0.7s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.3s, list SD models: 0.2s, load scripts: 1.8s, create ui: 2.4s, gradio launch: 2.3s, app_started_callback: 0.1s).
Loading VAE weights specified in settings: G:\sd-webui-aki-v4.6.1\models\VAE\sdxlVAE_v10.safetensors
Applying attention optimization: xformers... done.
Model loaded in 5.3s (load weights from disk: 1.1s, create model: 0.6s, apply weights to model: 3.0s, apply half(): 0.2s, calculate empty prompt: 0.2s).
Same here. (This reply quoted the full startup log above, which ends in the same ImportError for 'StableDiffusionXLPipeline'.)
Your diffusers library is throwing the error: it has no 'StableDiffusionXLPipeline' module. Check your diffusers version.
Try adding the item circled in red to your environment variables (that's how I solved it; the item was shown in an attached screenshot that is not preserved here).
please try out the newest train-free, 10s inference version facechain-fact.
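A quick diagnostic for the ImportError above; this is a generic sketch, not from the FaceChain docs. StableDiffusionXLPipeline first appeared in diffusers around v0.18 (from memory), so an older pinned diffusers will fail this import:

import diffusers
print(diffusers.__version__)  # the SDXL pipelines need a reasonably recent diffusers

try:
    from diffusers import StableDiffusionXLPipeline  # the import that fails in the log above
    print("StableDiffusionXLPipeline is available")
except ImportError:
    print("diffusers is too old; try: pip install -U diffusers")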
gharchive/issue
2024-03-19T03:00:17
2025-04-01T04:35:06.112482
{ "authors": [ "Lynn2023", "bj803", "datuizhuang", "dengfeng0729", "sunbaigui" ], "repo": "modelscope/facechain", "url": "https://github.com/modelscope/facechain/issues/533", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1758523649
🛑 Radarr is down In 4eca7ac, Radarr ($RADARR_URL) was down: HTTP code: 403 Response time: 565 ms Resolved: Radarr is back up in 12dcbce.
gharchive/issue
2023-06-15T10:28:33
2025-04-01T04:35:06.114975
{ "authors": [ "modem7" ], "repo": "modem7/Status", "url": "https://github.com/modem7/Status/issues/3506", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1007477448
🛑 Wiki is down In ac1f46a, Wiki (https://omegawiki.modem7.com) was down: HTTP code: 523 Response time: 2455 ms Resolved: Wiki is back up in 3d82235.
gharchive/issue
2021-09-26T19:15:28
2025-04-01T04:35:06.117629
{ "authors": [ "modem7" ], "repo": "modem7/Status", "url": "https://github.com/modem7/Status/issues/58", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
655386712
Wrong output for DataFrame.std with level parameter System information OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04 Modin version (modin.__version__): 0.7.3+197.g8602fc0 Python version: 3.8.3 Code we can use to reproduce:
import modin.pandas as pd
import pandas

data = {"col1": [1, 2, 3], "col2": [11, 12, 13]}
index = [[4, 4, 5], [6, 7, 7]]
df_pandas = pandas.DataFrame(data, index=index)
df_modin = pd.DataFrame(data, index=index)
print("df_pandas.std(level=0)\n", df_pandas.std(level=0))
print("df_modin.std(level=0)\n", df_modin.std(level=0))
Output:
df_pandas.std(level=0)
       col1      col2
4  0.707107  0.707107
5       NaN       NaN
df_modin.std(level=0)
col1    0.707107
col2    0.707107
dtype: float64
Describe the problem The level parameter isn't handled by the DataFrame.std function; we should probably use the base.py::_handle_level_agg function like it is done in the count function. Source code / logs Close to track #1709
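For context, DataFrame.std(level=0) is defined as a per-level aggregation, i.e. it should match grouping on that index level. A possible workaround sketch while the bug stands (untested against this exact Modin version):

import modin.pandas as pd

df = pd.DataFrame({"col1": [1, 2, 3], "col2": [11, 12, 13]},
                  index=[[4, 4, 5], [6, 7, 7]])
# std(level=0) is equivalent to a per-level groupby aggregation:
print(df.groupby(level=0).std())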
gharchive/issue
2020-07-12T12:01:49
2025-04-01T04:35:06.128410
{ "authors": [ "amyskov", "devin-petersohn" ], "repo": "modin-project/modin", "url": "https://github.com/modin-project/modin/issues/1714", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1488968338
REFACTOR-#5434: Define public interfaces in modin.core.execution.dask module Signed-off-by: Anatoly Myachev anatoly.myachev@intel.com What do these changes do? [x] first commit message and PR title follow format outlined here NOTE: If you edit the PR title to match this format, you need to add another commit (even if it's empty) or amend your last commit for the CI job that checks the PR title to pick up the new PR title. [x] passes flake8 modin/ asv_bench/benchmarks scripts/doc_checker.py [x] passes black --check modin/ asv_bench/benchmarks scripts/doc_checker.py [x] signed commit with git commit -s [x] Resolves #5434 [x] tests added and passing [x] module layout described at docs/development/architecture.rst is up-to-date @YarShev could you take a look? @YarShev ready for review
gharchive/pull-request
2022-12-10T21:10:00
2025-04-01T04:35:06.133056
{ "authors": [ "anmyachev" ], "repo": "modin-project/modin", "url": "https://github.com/modin-project/modin/pull/5418", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
456712099
Reading gzipped CSV files What do these changes do? pd.read_csv is now able to open gzipped files. Issue number: resolves #630 (reading .gz files). Sounds good! On Tue, Jun 25, 2019 at 2:07 PM Devin Petersohn wrote: @devin-petersohn commented on this pull request. In modin/pandas/test/test_io.py https://github.com/modin-project/modin/pull/682#discussion_r297394257: @@ -375,6 +375,18 @@ def teardown_fwf_file(): os.remove(TEST_FWF_FILENAME) +def test_read_csv_gzip(): I agree, but I think we can fix this in a future PR. Guys, do you need any help with this PR? Reading compressed files sounds like a great idea to me. Feel free to keep me in the loop. Cheers :) (A usage sketch follows this record.)
gharchive/pull-request
2019-06-17T02:12:37
2025-04-01T04:35:06.138474
{ "authors": [ "binary-signal", "dulinda", "williamma12" ], "repo": "modin-project/modin", "url": "https://github.com/modin-project/modin/pull/682", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
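A hedged usage sketch for the gzip PR above; the file name is hypothetical, and compression="gzip" follows the pandas convention (pandas, and therefore Modin, can also infer it from the .gz suffix).

import gzip
import modin.pandas as pd

# Create a small hypothetical gzipped CSV, then read it back through Modin.
with gzip.open("example.csv.gz", "wt") as f:
    f.write("a,b\n1,2\n3,4\n")

df = pd.read_csv("example.csv.gz", compression="gzip")  # or compression="infer"
print(df)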
2584282923
🛑 Plus Repo is down. In 5811e5a, Plus Repo (https://repo.plus) was down: HTTP code: 0, response time: 0 ms. Resolved: Plus Repo is back up in 6f7e221 after 22 minutes.
gharchive/issue
2024-10-13T21:27:16
2025-04-01T04:35:06.266918
{ "authors": [ "moechs" ], "repo": "moechs/upptime", "url": "https://github.com/moechs/upptime/issues/19", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1350313024
🛑 STATUS.MOFE.IO is down. In a14bc01, STATUS.MOFE.IO (https://status.mofe.io) was down: HTTP code: 0, response time: 0 ms. Resolved: STATUS.MOFE.IO is back up in 883e08c.
gharchive/issue
2022-08-25T04:41:01
2025-04-01T04:35:06.271682
{ "authors": [ "mofelee" ], "repo": "mofelee/upptime", "url": "https://github.com/mofelee/upptime/issues/710", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
193321836
Red required star does not display on arrays From @aleksei-saharov on November 26, 2016 8:49 Hi all, I have some problems with required properties on http://editor.swagger.io/ If I try to set an array as a required property, I can't see the red star next to it. YAML: Entity: type: object required: - Values properties: Values: description: Array of values. type: array items: type: number format: double UI result: (screenshot) Alternative view: (screenshot) As you can see, everything is fine in the alternative view, but the main view is missing the red star. I can write minItems: 1, but it will be displayed only if I expand the property definition using the black triangle; here I am only talking about the outside (collapsed) information, as for a simple property. Copied from original issue: swagger-api/swagger-editor#1124 This is a JSON Schema View issue. Maybe use minItems? @henrikbulldog please see the end of the description, where I wrote about that. It is only about style parity between simple properties and arrays. I am facing the same issue. Is it fixed? Or are there alternatives to overcome this issue? (A minItems sketch follows this record.)
gharchive/issue
2016-12-04T00:03:05
2025-04-01T04:35:06.319059
{ "authors": [ "SaiTejaLamkam", "aleksei-saharov", "henrikbulldog", "saharj" ], "repo": "mohsen1/json-schema-view-js", "url": "https://github.com/mohsen1/json-schema-view-js/issues/13", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
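A hedged sketch of the minItems workaround discussed in the record above, with the reporter's schema reconstructed in proper YAML. Adding minItems only makes the constraint explicit; whether the collapsed main view then shows the red star was exactly the open question.

Entity:
  type: object
  required:
    - Values
  properties:
    Values:
      description: Array of values.
      type: array
      minItems: 1   # workaround: only visible once the property is expanded
      items:
        type: number
        format: double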
1800681870
[FEAT] Add example for blocking iteration process Is your proposal related to a problem? Blocking causes confusion for many users. We are in the process of adding documentation, but some more practical ways to show the iterative process of building a pipeline would be good to bring new users along for the journey. Describe the solution you'd like See discussion on #1389 Describe alternatives you've considered Additional context Worth noting that I'm thinking about working on some code to auto-suggest blocking rules (for prediction and training) based on the new, more efficient count-comparisons function, i.e. to some extent (hopefully) this should be easier in the future. Hey @RobinL, yeah, I knew you were looking at some auto-generated rules but wasn't sure what sort of timescale that work was on, so I thought these additional docs were still worth considering, given the feedback we receive on blocking. Yeah, definitely worth having something, maybe just fairly brief at the moment. In terms of timescales, not sure, but now that the groundwork is in place I don't think it will take too long to get something basic up and running. (An iteration sketch follows this record.)
gharchive/issue
2023-07-12T10:33:45
2025-04-01T04:35:06.348067
{ "authors": [ "RobinL", "RossKen" ], "repo": "moj-analytical-services/splink", "url": "https://github.com/moj-analytical-services/splink/issues/1430", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
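A hedged sketch of the iterative blocking workflow the splink record above asks to document, written against the splink 3 DuckDB API. The DataFrame, settings dict, and rule strings are placeholders, and the import path should be checked against your splink version; only count_num_comparisons_from_blocking_rule (the count-comparisons function referenced above) is the point.

from splink.duckdb.linker import DuckDBLinker

# Assumed setup: df is a pandas DataFrame, settings is a valid splink settings dict.
linker = DuckDBLinker(df, settings)

# Iterate: propose a rule, count the comparisons it generates, tighten if too many.
for rule in ["l.surname = r.surname",
             "l.surname = r.surname and l.dob = r.dob"]:
    n = linker.count_num_comparisons_from_blocking_rule(rule)
    print(rule, "->", n, "comparisons")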
439808524
How to get headers and set cookies from within an action? In a signIn action I'm implementing (along with signUp and signOut), I need to get the IP address of the client and its user-agent string and store this info in a database so that it can later be used to help identify the origin of the client. Upon signing in, I'm also issuing a new token for the client to be used in all subsequent requests, and this token needs to be set by the backend as an HTTP cookie in the response. Could anybody suggest how I can do that? I've seen that I can set headers via ctx.meta.$responseHeaders, but is there something like ctx.meta.$requestHeaders? And also, should I use ctx.meta.$responseHeaders to set cookies? Please suggest what would be the best way to get/set headers/cookies in my use case. Thanks a lot for any help! You can use ctx.meta.$responseHeaders to set cookies in the response. For request headers, use onBeforeCall and put the headers you need into the Context meta. Similar answers: https://github.com/moleculerjs/moleculer-web/issues/93#issuecomment-446039185 https://github.com/moleculerjs/moleculer-web/issues/82#issuecomment-437668275 https://github.com/moleculerjs/moleculer-web/issues/49#issuecomment-393157380 Ok, thank you very much! Now I understand. @icebob Is there any way to access the HTTP request from the ctx of the action? I'm asking because I want to access the request headers without modifying the moleculer node that holds the gateway... Then you should put all the headers into ctx.meta. You can't send the whole req because it's not serializable. (A combined sketch follows this record.)
gharchive/issue
2019-05-02T22:05:12
2025-04-01T04:35:06.403767
{ "authors": [ "HighSoftWare96", "icebob", "polomoshnov" ], "repo": "moleculerjs/moleculer-web", "url": "https://github.com/moleculerjs/moleculer-web/issues/117", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
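A hedged sketch tying the answers in the moleculer-web record above together; onBeforeCall and ctx.meta.$responseHeaders are the hooks named in the thread, while the service names, route path, and cookie contents are illustrative only.

const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");

const broker = new ServiceBroker();

broker.createService({
    name: "api",
    mixins: [ApiGateway],
    settings: {
        routes: [{
            path: "/api",
            onBeforeCall(ctx, route, req, res) {
                // Copy what the action will need out of the raw HTTP request.
                ctx.meta.userAgent = req.headers["user-agent"];
                ctx.meta.clientIp = req.socket.remoteAddress;
            }
        }]
    }
});

broker.createService({
    name: "auth",
    actions: {
        async signIn(ctx) {
            const token = "hypothetical-token"; // issue your real token here
            // Ask the gateway to set the cookie on the HTTP response.
            ctx.meta.$responseHeaders = {
                "Set-Cookie": `token=${token}; HttpOnly; Path=/`
            };
            return { userAgent: ctx.meta.userAgent, ip: ctx.meta.clientIp };
        }
    }
});

broker.start();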
601486905
All API Gateways in development don't regenerate auto-alias routes When you have more than one API Gateway, the regeneration of auto-aliased routes does not occur on every service reload. So far I've pinned it down to the debouncing of regenerateAllAutoAliases. At this point, I am assuming that the debounce function is being shared between in-memory service instances due to the nature of how Moleculer uses mixins. (I personally prefer the class inheritance, class composition, and class decorator approaches, particularly since I'm using TypeScript, and all the mixins are hard to track and get auto-completion for 😉, but I digress.) I really love the overall library design and simplicity; thanks for all your hard work on this. Hmm, nice catch. Are you running multiple API gateways with Node clusters? Or could you give me a repro example code? Right now I'm just running two different API gateways in development. One is a general API gateway; the other is a dedicated auth server. In production they will run in their own Docker instances, so no problem there. Perhaps I'm misusing it. Either way, I know you would normally run these servers in separate nodes, so you wouldn't see this issue even when scaling up. It's easily reproducible, though: just have the same runner start up two different services that both extend ApiGateway and use autoAliases. Only one will get updates. If still needed, I'll try to make a simple reproduction. (A minimal repro sketch follows this record.)
gharchive/issue
2020-04-16T21:18:35
2025-04-01T04:35:06.405529
{ "authors": [ "NickClark", "icebob" ], "repo": "moleculerjs/moleculer-web", "url": "https://github.com/moleculerjs/moleculer-web/issues/177", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
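A hedged minimal reproduction of the moleculer-web record above: two gateways sharing the ApiGateway mixin in one broker process. Service names are placeholders, and the closing comment restates the reporter's hypothesis rather than a confirmed diagnosis.

const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");

const broker = new ServiceBroker({ hotReload: true });

// Two in-process gateways, both built from the same ApiGateway mixin.
for (const name of ["api", "auth-api"]) {
    broker.createService({
        name,
        mixins: [ApiGateway],
        settings: {
            routes: [{ path: "/", autoAliases: true }]
        }
    });
}

// Reported symptom: reload any service and only one gateway regenerates its
// auto-aliased routes, presumably because the debounced
// regenerateAllAutoAliases ends up shared through the mixin.
broker.start();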
1647465872
Critical Bug: Static quantity in buildShippingLineItems Hi, currently all orders are rejected if a shipping method is charged more than once. Here is an example: --> 2x Zuschlag (surcharge; 1x = 2.50 €). We noticed that in the function "buildShippingLineItems" a static quantity of 1 is passed. If, for example, 2x 2.50 € comes from Shopware, the Mollie API expects 2.50 € (1 × 2.50 €) but gets a total of 5.00 € and runs into an error: https://github.com/mollie/Shopware6/blob/master/src/Service/MollieApi/Builder/MollieShippingLineItemBuilder.php $mollieLineItem = new MollieLineItem( OrderLineType::TYPE_SHIPPING_FEE, sprintf('Delivery costs %s', $i), 1, $price, $delivery->getId(), sprintf('mol-delivery-%s', $i), '', '' ); This should be fixed as soon as possible; it's really critical. We have a lot of cancellations, and the customer cannot order with any Mollie payment method. We are losing a lot of big orders here right now. I would appreciate quick feedback! Here are some error logs: [2023-03-30T03:17:38.698283+00:00] Mollie.ERROR: Error when starting Mollie payment: Could not create Mollie order (Session: ...) {"function":"order-prepare","exception":"[object] (RuntimeException(code: 422): Could not create Mollie order at ..../vendor/store.shopware.com/molliepayments/src/Service/MollieApi/Order.php:176)\n[previous exception] [object] (Mollie\Api\Exceptions\ApiException(code: 422): [2023-03-30T03:17:38+0000] Error executing API call (422: Unprocessable Entity): Order line 3 is invalid. Total amount is off. Expected total amount to be €1.00 (1 × €1.00), got €3.00. Documentation: https://docs.mollie.com/overview/handling-errors. Field: lines.3.totalAmount at .../vendor/store.shopware.com/molliepayments/src/Service/MollieApi/Client/MollieHttpClient.php:150)"," [2023-03-30T05:04:51.641672+00:00] Mollie.CRITICAL: Could not create Mollie order (Session: ...) {"function":"finalize-payment","exception":"[object] (Mollie\Api\Exceptions\ApiException(code: 422): [2023-03-30T05:04:51+0000] Error executing API call (422: Unprocessable Entity): Order line 3 is invalid. Total amount is off. Expected total amount to be €2.50 (1 × €2.50), got €5.00. Documentation: https://docs.mollie.com/overview/handling-errors. Field: lines.3.totalAmount at .../vendor/store.shopware.com/molliepayments/src/Service/MollieApi/Client/MollieHttpClient.php:150)"," Hi, thank you for this. I'm not that familiar with that part of the code, but I'll check it out as soon as possible; we are usually pretty fast with fixes, so it shouldn't be a big deal :) Thank you for your help. Hi, I just checked out what is happening and figured out it's not a globally critical bug, but one related to your specific use case with additional delivery quantities. I would still give you the rest of the budget to provide a fast fix, but to ensure that it's really working I would need your shipping-setup configuration, thank you. I would appreciate quick feedback so I can fix this for you, thank you. Hi @boxblinkracer, thanks for the quick help! It is an edge case for customers with multiple delivery methods, that's right; there is no easy setting for this. I can give you access to the client's development environment and provide the data by email. Contact me at: d.lange@one-dot.de Thanks! (A quantity-aware sketch follows this record.)
gharchive/issue
2023-03-30T11:52:17
2025-04-01T04:35:06.457732
{ "authors": [ "boxblinkracer", "davidlange" ], "repo": "mollie/Shopware6", "url": "https://github.com/mollie/Shopware6/issues/553", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
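A hedged sketch of one direction for the quantity/unit-price mismatch in the Mollie record above: derive quantity and price from the delivery's calculated shipping costs instead of hard-coding 1. getShippingCosts() and getQuantity() are standard Shopware 6 CalculatedPrice accessors, but $priceFromUnitPrice stands in for the plugin's own price object, and none of this is the plugin's shipped fix.

<?php
// Sketch inside MollieShippingLineItemBuilder (illustration only):
$shippingCosts = $delivery->getShippingCosts();   // Shopware CalculatedPrice

$mollieLineItem = new MollieLineItem(
    OrderLineType::TYPE_SHIPPING_FEE,
    sprintf('Delivery costs %s', $i),
    $shippingCosts->getQuantity(),                // e.g. 2 instead of the static 1
    $priceFromUnitPrice,                          // built from getUnitPrice(), so
                                                  // quantity x unit = totalAmount
    $delivery->getId(),
    sprintf('mol-delivery-%s', $i),
    '',
    ''
);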
489697827
de locale adds "." in MMM format Describe the bug When using the German locale, the format MMM adds a dot. (This is not really a bug...) To Reproduce Steps to reproduce the behavior: toggle the locale between 'en' and 'de' and compare moment(new Date()).format('MMM DD, YYYY') Expected behavior This may be due to Germans preferring this formatting; however, I believe that for consistency it may be better to have users add the "." themselves, like so: format('MMM. DD, YYYY') Workaround Of course, a workaround is to strip the "." like so: format('MMM DD, YYYY').replace('.','') Expected behaviour should match the common convention for that locale. In this case it looks like moment does meet expectations: some month abbreviations end in a '.', and some don't(!) A couple of ways to check: See the Unicode Consortium's survey tool for locale data, specifically the German locale data. Check the output of Intl.DateTimeFormat for the locale: new Intl.DateTimeFormat("de", { year: "numeric", month: "short", }).format(new Date("2019-06-03")); // Juni 2019 new Intl.DateTimeFormat("de", { year: "numeric", month: "short", }).format(new Date("2019-04-03")); // Apr. 2019 Note that the rule is the same in many European languages: if the word is too long, only the first three or four letters (depending on intelligibility and any needed disambiguation) are kept, and a dot is added to mark the word as abbreviated. If the word is short enough, it is used as-is for the short version: no dot and no letters are missing. (A moment-side check follows this record.)
gharchive/issue
2019-09-05T11:39:35
2025-04-01T04:35:06.487539
{ "authors": [ "ashsearle", "kylekatarnls", "philipeachille" ], "repo": "moment/moment", "url": "https://github.com/moment/moment/issues/5220", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
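A hedged moment-side companion to the Intl check in the record above; the output comments assume the German locale data bundled with moment (some short month names carry the abbreviation dot, some do not).

const moment = require("moment");

moment.locale("de");
console.log(moment("2019-04-03").format("MMM")); // "Apr."  - abbreviated, dot added
console.log(moment("2019-06-03").format("MMM")); // "Juni"  - full word, no dot

moment.locale("en");
console.log(moment("2019-04-03").format("MMM")); // "Apr"   - English never adds one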
532943727
Setting week/year in the last week of the year results in wrong date Describe the bug When constructing a moment object with a date in the last week of the year (e.g. 2019-12-31) and then setting the year or week, the result is an incorrect date. To Reproduce Steps to reproduce the behavior: executing this code: moment('2019-12-30', "YYYY-MM-DD").year(2019).week(52).format() results in: "2020-12-21T00:00:00-06:00" Instead of using format(), the error can also be seen by calling .year() Expected behavior The last set year and week should be respected. Screenshots Desktop (please complete the following information): OS: Windows Browser: Chrome Version 78.0.3904.108 (Official Build) (64-bit) Moment-specific environment The time zone setting of the machine the code is running on: Central Standard Time The time and date at which the code was run: 3:24 PM Other libraries in use (TypeScript, Immutable.js, etc.): this can be seen on momentjs.com Please run the following code in your environment and include the output: console.log((new Date()).toString()) console.log((new Date()).toLocaleString()) console.log((new Date()).getTimezoneOffset()) console.log(navigator.userAgent) console.log(moment.version) Wed Dec 04 2019 15:25:22 GMT-0600 (Central Standard Time) 12/4/2019, 3:25:22 PM 360 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36 2.24.0 undefined Additional context I am using this in tests to simulate how moment will format dates during the last week of the year. For example, in my test I would call moment('2019-12-31') instead of moment(). What does moment('2019-12-30', "YYYY-MM-DD").week() return in your environment? Thank you for the information! I was not aware of / had missed seeing weekYear in the documentation. That helped a lot in understanding what moment does behind the scenes when handling weeks in the year. (A weekYear sketch follows this record.)
gharchive/issue
2019-12-04T21:26:42
2025-04-01T04:35:06.495636
{ "authors": [ "ashsearle", "michael-woodward-spok" ], "repo": "moment/moment", "url": "https://github.com/moment/moment/issues/5311", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
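A hedged walk-through of the resolution hinted at in the record above: in the default en locale, 2019-12-30 already belongs to week 1 of week-year 2020, so year(2019) is the wrong knob and weekYear(2019) is the right one. The printed values assume the en locale; the final date is what the setter arithmetic should yield.

const moment = require("moment");

const m = moment("2019-12-30", "YYYY-MM-DD");
console.log(m.week());     // 1    - the date is already in week 1 ...
console.log(m.weekYear()); // 2020 - ... of week-year 2020

// year(2019) changes the calendar year but leaves the week-year at 2020,
// so week(52) lands in late December 2020. Set the week-year instead:
const fixed = moment("2019-12-30", "YYYY-MM-DD").weekYear(2019).week(52);
console.log(fixed.format("YYYY-MM-DD")); // e.g. "2019-12-23" - late December 2019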