id
stringlengths
4
10
text
stringlengths
4
2.14M
source
stringclasses
2 values
created
timestamp[s]date
2001-05-16 21:05:09
2025-01-01 03:38:30
added
stringdate
2025-04-01 04:05:38
2025-04-01 07:14:06
metadata
dict
2381771794
Added initContainers Added initContainers (with extraVolumes and extraVolumeMounts fields). Also closes: #114. These fields are need if for example one uses custom CA authority (Step Certificates) and needs to set a custom certificate that needs to be stored in /etc/ssl/certs/. Using an init container, one can mount the certificates to the given folder and run the update-ca-certificates command. For extra volumes, could we just use the already existing persistence key? Should we update the persistence key to work for this use case? I'm still not loving the idea of having two ways of defining volumes. Also would it be worth splitting this into two MRs - one for init containers and then a separate one for the updated volume support? Yes, I think it might work. The difference between the two is that in PR #117 the extraVolume(s/Mounts) are in the .Values, whereas in this PR they are in the .Values.deployment like in other charts (e.g. argo or cert-manager). There are also charts that use them directly from .Values (prometheus-adapter) it they know that there will be only one deployment). I see. Thanks!
gharchive/pull-request
2024-06-29T14:07:32
2025-04-01T06:38:24.822525
{ "authors": [ "cfis", "johnstarxx" ], "repo": "docker-mailserver/docker-mailserver-helm", "url": "https://github.com/docker-mailserver/docker-mailserver-helm/pull/125", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
677781486
Nimble has moved File: engine/extend/legacy_plugins.md Can we please link the Nimble DVP to https://scod.hpedev.io/docker_volume_plugins/hpe_nimble_storage/index.html Created a new issue in the upstream repo that contains this file. Closing this issue in favor of the new one. https://github.com/docker/cli/issues/3589
gharchive/issue
2020-08-12T15:24:13
2025-04-01T06:38:24.898748
{ "authors": [ "craig-osterhout", "datamattsson" ], "repo": "docker/docker.github.io", "url": "https://github.com/docker/docker.github.io/issues/11239", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
97041354
Docker daemon unresponsive and gigabytes of memory when full BGP table added to system (650k routes) Description of problem: We have a server which is connected to full BGP feed. It means that system has 650k routes in route table which in fact is quite normal. After starting docker daemon it allocates gigabytes of memory and never stops. All docker commands (eg. "docker ps") hangs. After removing routes everything is going back to normal. Reproduced on docker vesions 1.6.2 and 1.7.1 docker version: Client version: 1.7.1 Client API version: 1.19 Go version (client): go1.4.2 Git commit (client): 786b29d OS/Arch (client): linux/amd64 Server version: 1.7.1 Server API version: 1.19 Go version (server): go1.4.2 Git commit (server): 786b29d OS/Arch (server): linux/amd64 docker info: Containers: 0 Images: 9 Storage Driver: devicemapper Pool Name: docker-9:1-1184122-pool Pool Blocksize: 65.54 kB Backing Filesystem: extfs Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 493.9 MB Data Space Total: 107.4 GB Data Space Available: 12.15 GB Metadata Space Used: 1.102 MB Metadata Space Total: 2.147 GB Metadata Space Available: 2.146 GB Udev Sync Supported: false Deferred Removal Enabled: false Data loop file: /var/lib/docker/devicemapper/devicemapper/data Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata Library Version: 1.02.82-git (2013-10-04) Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.18.11 Operating System: Debian GNU/Linux 7 (wheezy) CPUs: 32 Total Memory: 125.9 GiB Name: host ID: DYIE:LMBJ:V5FV:FYTY:3WTQ:5ZLW:CPZI:RVHK:VFOO:XKXR:H2EP:PVUS WARNING: No swap limit support uname -a: Linux host 3.18.11 #1 SMP Thu Jun 11 12:40:05 CEST 2015 x86_64 GNU/Linux Environment details (AWS, VirtualBox, physical, etc.): Physical server with full BGP route table and 128GB of ram. Can be reproduced on virtual servers too. How reproducible: Insert thousands of routes into routing table and restart docker daemon. Steps to Reproduce: add dummy interface: ip link add name dummy0 type dummy && ip link set up dummy0 add multiple routes into the system (it takes few minutes): for a in {2..10}; do for b in {1..253}; do for c in {1..253}; do ip route add 1.$a.$b.$c/32 dev dummy0; done ; done; done start docker daemon docker -D -d DEBU[0000] Registering OPTIONS, DEBU[0000] Registering GET, /events DEBU[0000] Registering GET, /images/json DEBU[0000] Registering GET, /containers/json DEBU[0000] Registering GET, /containers/{name:.*}/export DEBU[0000] Registering GET, /containers/{name:.*}/logs DEBU[0000] Registering GET, /containers/{name:.*}/stats DEBU[0000] Registering GET, /containers/{name:.*}/attach/ws DEBU[0000] Registering GET, /info DEBU[0000] Registering GET, /version DEBU[0000] Registering GET, /images/search DEBU[0000] Registering GET, /images/get DEBU[0000] Registering GET, /images/{name:.*}/json DEBU[0000] Registering GET, /containers/{name:.*}/top DEBU[0000] Registering GET, /containers/ps DEBU[0000] Registering GET, /containers/{name:.*}/changes DEBU[0000] Registering GET, /exec/{id:.*}/json DEBU[0000] Registering GET, /_ping DEBU[0000] Registering GET, /images/{name:.*}/get DEBU[0000] Registering GET, /images/{name:.*}/history DEBU[0000] Registering GET, /containers/{name:.*}/json DEBU[0000] Registering POST, /containers/{name:.*}/start DEBU[0000] Registering POST, /containers/{name:.*}/stop DEBU[0000] Registering POST, /exec/{name:.*}/start DEBU[0000] Registering POST, /images/create DEBU[0000] Registering POST, /images/load DEBU[0000] Registering POST, /containers/{name:.*}/kill DEBU[0000] Registering POST, /containers/{name:.*}/unpause DEBU[0000] Registering POST, /exec/{name:.*}/resize DEBU[0000] Registering POST, /containers/{name:.*}/rename DEBU[0000] Registering POST, /auth DEBU[0000] Registering POST, /containers/create DEBU[0000] Registering POST, /containers/{name:.*}/wait DEBU[0000] Registering POST, /containers/{name:.*}/resize DEBU[0000] Registering POST, /containers/{name:.*}/exec DEBU[0000] Registering POST, /build DEBU[0000] Registering POST, /images/{name:.*}/tag DEBU[0000] Registering POST, /containers/{name:.*}/restart DEBU[0000] Registering POST, /containers/{name:.*}/attach DEBU[0000] Registering POST, /commit DEBU[0000] Registering POST, /images/{name:.*}/push DEBU[0000] Registering POST, /containers/{name:.*}/pause DEBU[0000] Registering POST, /containers/{name:.*}/copy DEBU[0000] Registering DELETE, /containers/{name:.*} DEBU[0000] Registering DELETE, /images/{name:.*} DEBU[0000] docker group found. gid: 999 INFO[0000] Listening for HTTP on unix (/var/run/docker.sock) Actual Results: ip route show | wc -l 576088 ps aux --sort=-vsz,-rss | grep docker root 17145 100 8.8 11867512 11623548 pts/2 Sl+ 14:17 1:21 docker -D -d Expected Results: Docker daemon can start and be responsive with full BGP route table. I'll look into this because I've investigate some related code with similar problems in the past. ping
gharchive/issue
2015-07-24T12:24:43
2025-04-01T06:38:24.904276
{ "authors": [ "sanmai-NL", "sysopcorner", "unclejack" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/14946", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
121298978
Can ping but not connect to container running on the same network If I create a network and put a mysql server on it: $ docker network create net $ docker run -itd --net=net --name mysql -e MYSQL_ROOT_PASSWORD=password mysql:5.6.25 I can ping this container, $ docker run --net=net mysql:5.6.25 ping mysql PING mysql (172.19.0.2): 48 data bytes 56 bytes from 172.19.0.2: icmp_seq=0 ttl=64 time=0.144 ms ... but not connect to the server, $ docker run --net=net mysql:5.6.25 mysql -h mysql --password=password #hangs indefinitely... The logs show the server is running, and in fact I can exec into the mysql container and connect to the server from its own container, so that's all fine. An important point is I've got two machines with identical versions of Docker (docker version and docker info are identical) but this happens on only one of them, so it must be some other settings on this machine interacting with Docker. Here's the info for the one where the problem happens: Info Linux darkenergy 3.19.0-31-generic #36~14.04.1-Ubuntu SMP Thu Oct 8 10:21:08 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux Client: Version: 1.9.1 API version: 1.21 Go version: go1.4.2 Git commit: a34a1d5 Built: Fri Nov 20 13:12:04 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.9.1 API version: 1.21 Go version: go1.4.2 Git commit: a34a1d5 Built: Fri Nov 20 13:12:04 UTC 2015 OS/Arch: linux/amd64 Containers: 7 Images: 316 Server Version: 1.9.1 Storage Driver: aufs Root Dir: /home/boincadm/docker/aufs Backing Filesystem: extfs Dirs: 334 Dirperm1 Supported: true Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 3.19.0-31-generic Operating System: Ubuntu 14.04.3 LTS CPUs: 32 Total Memory: 251.9 GiB Name: darkenergy ID: HFOU:WMAG:6J3D:JLDB:LXBS:62C4:A7PY:RL5A:CP3V:ARO3:BGVC:DNXX WARNING: No swap limit support Hi! Please read this important information about creating issues. If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information. This is an automated, informational response. Thank you. For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues BUG REPORT INFORMATION Use the commands below to provide key information from your environment: docker version: docker info: uname -a: Provide additional environment details (AWS, VirtualBox, physical, etc.): List the steps to reproduce the issue: 1. 2. 3. Describe the results you received: Describe the results you expected: Provide additional info you think is important: ----------END REPORT --------- #ENEEDMOREINFO I've seen this behaviour when ufw was running. @marius311 can you check if the problem disappears if ufw is disabled? This is an old issue. I will close it as stale.
gharchive/issue
2015-12-09T17:33:36
2025-04-01T06:38:24.912821
{ "authors": [ "GordonTheTurtle", "marius311", "sam-thibault", "teetrinkers", "thaJeztah" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/18542", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
208211171
Docker 1.13 overwrite/remove named volume when stack deploy/update Currently there is no option when updating a stack (docker stack deploy) to overwrite a named volume created by this stack. In a same way, there is no option (such -v) to remove named volume associated with a stack when you remove the stack, other than removing the volumes with docker volume rm. Here is a usecase : Given this file: version: '3' services: config: image: test/config-hoster:1.3 volumes: - volume_config:/datas deploy: mode: global restart_policy: condition: none nginx: image: nginx volumes: - volume_config:/datas:ro depends_on: - config volumes: volume_config: where config-hoster is a from scratch image, exposing a volume with the config files in it, and nginx a service used just for testing the correct mapping. When I first deploy this stack, everything is fine, the volume is created and filled with the config files of the config service. But if I change the version (tag) of the config service, when I update the stack the volume is not deleted, so datas inside are obviously not overwrited. Same behavior if I rm and redeploy the stack. Since this is a named volume, not directly bounded to an host directory, and not created externaly, I see no problem to erase or overwrite this volume when the stack is removed or updated. Hmm, this would actually be a pretty big problem if implemented. The reason it's doing this is the volume exists already and is populated with data when you update the config. I think the right thing here is to actually support config objects (like we do secrets). I've just checked the secret objects in docker. This is great, and actually solve a problem we were facing with ssh keys. Indeed supporting config objects in a same way will perfecly match the previous usecase. I'm not sure to understand why and what would be the problem of deleting volumes created inside a stack, if you're sure of what you're doing (hence the -v option). Could you clarify this for me please ? @BastienAr Volumes are meant for persistent storage. If you are telling a service to use a named volume, this is making a statement on the retention of this volume. For instance if you use an anonymous volume (don't specify a name), this would be cleaned up. I'm using NFS as volume storage backend and.... I can't find any way to change its options except stop containers, remove volumes and deploy again. @cpuguy83 Ok, I get the philosophy. So we definitively need a way to start a service requiring a file that could be modified (like config files) on any swarm node. Currently if you have to change configuration, either you rebuild the image with correct config (dirty) or you bind volume and become host dependent. NFS is also a solution, but I find it cumbersome for this purpose. @BastienAr I feel like volumes are not a good use for injecting configuration, even though it may be the only way to do it right now. One option that is currently available is to inject the config as a secret. It's a little hacky but works well in the short-term. Yeah, that could be a trick, but it's still inconvenient due to the fixed path of secrets (/run/secret/<secret_name>). And the encryption of the file could add unnecessary process (config files rarely need encryption) And the encryption of the file could add unnecessary process (config files rarely need encryption) I think that overhead can be neglected; the encryption/decryption is part of the raft store, and should not cause a noticeable performance issue. w.r.t. the paths (defaulting to /run/secret/<name>), the <name> part can be overridden when using the secret (--secret src=my-little-secret,target=config.json), but the path inside the container currently not (we kept this possibility open for a future enhancement, but needs additional discussion). You can create a symlink inside the container if needed. Okay, this solution is quite simple and operational. Still I think this could be cleaner if shared file that are not secret would be separated from the term secret, even if under the hood this is the exact same process. Bty is there any chance we could talk about this subject at the next Dockercon ? Some guys from our company (including me) will be there. resolved in 17.06 by the introduction of Docker configs. I close the issue
gharchive/issue
2017-02-16T18:48:27
2025-04-01T06:38:24.922950
{ "authors": [ "BastienAr", "cpuguy83", "thaJeztah", "zh99998" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/issues/31095", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
56244257
docs: change events --since to fit RFC3339Nano PR6931 changed time format to RFC3339Nano. But the example in cli.md does not changed. Signed-off-by: Chen Hanxiao chenhanxiao@cn.fujitsu.com @tiborvass LGTM - @fredlf @jamtur01
gharchive/pull-request
2015-02-02T14:40:37
2025-04-01T06:38:24.925325
{ "authors": [ "SvenDowideit", "chenhanxiao" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/10509", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
60205032
Fix docker start help message Signed-off-by: Lei Jitang leijitang@huawei.com Docker start can start multiple containers, but the help messages show Restart a stopped container which is not correct. ping @estesp @jfrazelle LGTM ping LGTM but would like a doc maintainer to make sure we didn't miss any docs. ping @moxiegirl @SvenDowideit @fredlf LGTM
gharchive/pull-request
2015-03-07T12:48:10
2025-04-01T06:38:24.927630
{ "authors": [ "coolljt0725", "dmp42", "duglin", "moxiegirl" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/11227", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
96679101
Enable validate-lint as part of CI Yes, I'm ashamed. Yes, I hope it passes. Don't judge me. lol LGTM, if it passes Also had to fix pkg/chrootarchive/diff_windows.go because of #14862. :stuck_out_tongue_closed_eyes:
gharchive/pull-request
2015-07-22T22:20:11
2025-04-01T06:38:24.929233
{ "authors": [ "icecrime", "jfrazelle", "vdemeester" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/14878", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
127803605
Use sync.Pool for io.Copy buffers Small ioutils.Copy function uses buffers from sync.Pool instead of allocating them on each io.Copy. The size of buffer chosen as a default size in io.Copy. I used it only in overlay copy function because it was major memory eater. I think I'll wait with this change and will use bufio.Reader.
gharchive/pull-request
2016-01-20T22:53:25
2025-04-01T06:38:24.930266
{ "authors": [ "LK4D4" ], "repo": "docker/docker", "url": "https://github.com/docker/docker/pull/19520", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
240069691
Support parsing SCTP port mapping please see https://github.com/moby/moby/pull/33922 Signed-off-by: Wataru Ishida ishida.wataru@lab.ntt.co.jp CI failure after rebase is unrelated opened https://github.com/docker/go-connections/pull/47 rebased
gharchive/pull-request
2017-07-03T05:21:10
2025-04-01T06:38:25.022170
{ "authors": [ "AkihiroSuda", "ishidawataru" ], "repo": "docker/go-connections", "url": "https://github.com/docker/go-connections/pull/41", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
135957299
Error installing tensorflow - Mac OSX 10.11.3 Error while pulling image: Get https://index.docker.io/v1/repositories/drunkar/anaconda-tensorflow-gpu/images: dial tcp: lookup index.docker.io on 127.0.0.54:53: read udp 127.0.0.1:40707->127.0.0.54:53: read: connection refused Downloading has been active for > 1hr. No network issue but download completion status not visible. When you get error like this, does docker-machine restart fix it? Where is your DNS resolver pointing? e.g. is it going through proxy The restart should fix this, feel free to add more comments here if this doesn't solve it.
gharchive/issue
2016-02-24T05:08:28
2025-04-01T06:38:25.036896
{ "authors": [ "FrenchBen", "arvind114", "nathanleclaire" ], "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/1494", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
196296321
Unable to pull docker pull microsoft/aspnet:1.0.0-rc1-update1-core Expected behavior The image "microsoft/aspnet:1.0.0-rc1-update1-core" gets installed correctly Actual behavior I am getting the following error: Network timed out while trying to connect to https://index.docker.io/v1/repositories/raduporumb/aspnetcore-rc1-update1/images. You may want to check your internet connection or if you are behind a proxy. Information about the Issue docker pull microsoft/aspnet:1.0.0-rc1-update1-core Steps to reproduce the behavior ... ... Please make sure that you're on a stable connection and use a solid DNS such as Google DNS.
gharchive/issue
2016-12-18T20:08:36
2025-04-01T06:38:25.039987
{ "authors": [ "FrenchBen", "praveenprabharavindran" ], "repo": "docker/kitematic", "url": "https://github.com/docker/kitematic/issues/2211", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
138083349
Default listener is writing directly to stdout In https://github.com/docker/libcompose/blob/master/project/listener.go#L71, the default listener writes directly to stdout (tested on windows and ubuntu). I am writing a cli app that uses libcompose to talk with the docker daemon and the output from the listener pollutes the output of my cli app. Is it possible to redirect the output of the listener into memory so that I can write the output of the listener to stdout on my terms? Hi @F21, thanks for the report. It's on its way, there is some refactoring to come in this area :stuck_out_tongue_closed_eyes:. @vdemeester Is there anyway I can help get the ball rolling on this one? Maybe the adding a Listener field to the context struct passed to NewProject(). If there is a listener passed in, it uses it, otherwise it creates a DefaultListener. Let me know what you think! I just notice NewDefaultListener() requires an instance of project, which causes a chicken and egg problem.
gharchive/issue
2016-03-03T05:58:48
2025-04-01T06:38:25.050399
{ "authors": [ "F21", "vdemeester" ], "repo": "docker/libcompose", "url": "https://github.com/docker/libcompose/issues/169", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
231736438
Removed printfs Changed some prints into proper logging, also was missing the \n at the end Signed-off-by: Flavio Crisciani flavio.crisciani@docker.com Not a maintainer, but this looks good to me. LGTM
gharchive/pull-request
2017-05-26T21:14:41
2025-04-01T06:38:25.051945
{ "authors": [ "aaronlehmann", "fcrisciani", "mavenugo" ], "repo": "docker/libnetwork", "url": "https://github.com/docker/libnetwork/pull/1781", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
113730681
Fixed typo Fixed Minor typo. Signed-off-by: Ian Lee IanLee1521@gmail.com] Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "patch-1" git@github.com:IanLee1521/machine.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Ammending updates the existing PR. You DO NOT need to open a new one. LGTM Thanks!
gharchive/pull-request
2015-10-28T02:33:24
2025-04-01T06:38:25.054686
{ "authors": [ "GordonTheTurtle", "IanLee1521", "dmp42" ], "repo": "docker/machine", "url": "https://github.com/docker/machine/pull/2105", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
155817424
force cgo resolver for name resolution By default, the pure Go resolver is used which makes direct DNS requests first to resolve a hostname before checking /etc/hosts. If a host on the network has the same name as the linked container, the host on the network will be used instead of the linked container. I had a machine on the network with hostname mysql and bringing up the containers with docker-compose had the server and signer linked containers trying to connect to to it... $ sudo docker-compose up ... mysql_1 | 2016-05-19 19:33:29 140622360905664 [Note] mysqld: ready for connections. mysql_1 | Version: '10.1.10-MariaDB-1~jessie' socket: '/var/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution server_1 | waiting for notarymysql to come up. signer_1 | waiting for notarymysql to come up. server_1 | waiting for notarymysql to come up. signer_1 | waiting for notarymysql to come up. server_1 | waiting for notarymysql to come up. ... Forcing cgo resolver will use C library routines that will honor values in /etc/hosts first and then fall back on DNS. From the docs: https://golang.org/pkg/net/#hdr-Name_Resolution Signed-off-by: Andrew Hsu andrewhsu@acm.org (github: andrewhsu) Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "compose-resolv" git@github.com:andrewhsu/notary.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Ammending updates the existing PR. You DO NOT need to open a new one. Please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "compose-resolv" git@github.com:andrewhsu/notary.git somewhere $ cd somewhere $ git commit --amend -s --no-edit $ git push -f Ammending updates the existing PR. You DO NOT need to open a new one. Can one of the admins verify this patch? Sorry about the multiple force pushes...trying to get signature requirement squared away. I think it's good to go now. jenkins, test this please @cyli umm...i think jenkins is a bot. that's all he can say: https://github.com/docker-jenkins?tab=activity @andrewhsu Yes it is :). We gate PRs on code coverage results (which wait for results from Jenkins and CircleCI) as well as CI results - this Jenkins server runs our yubikey tests, but doesn't automatically run them for PRs from authors not in the org. The comment I left basically lets it know that it's ok to run tests for this PR, else the codecov check would never finish. We were wondering if converting the compose file to v2 and using a network definition would be sufficient? That would not put anything in /etc/hosts, and the DNS resolution would hopefully work correctly. @andrewhsu it's our jenkins bot and @cyli's comment is what we use to instruct it to run its tests. It will only pay attention to the maintainers on this project :-) @cyli I tried as you suggested, to convert the docker compose files to v2 but it still had the same issue. If I add the environment variable GODEBUG=netdns=cgo then everything works. So, I created #755 as an alternative to this PR with the docker compose v2 format change. @andrewhsu I may be misunderstanding the docs but they don't appear to indicate the behaviour you describe. We think what you might be encountering is a compose issue (previously described here: https://github.com/docker/distribution/issues/1362 ) and while it can be solved with the cgo change, we really don't want to go that route of hacking every other project to work around an issue elsewhere. If you think it's the same issue could you file a ticket on the docker-compose repository? @endophage I understand. I've adjusted #755 to remove the netdns workaround. I'll close out this PR. @andrewhsu Just checking as a possible reason for the issue - did you happen to have more than 2 DNS servers on your host when the other machine took precedence over the linked mysql container? @cyli no i did not have 2 dns servers on the host at the time. Outside of docker containers on the host I had entries something like this in /etc/resolv.conf: search internal.example.com nameserver 10.0.0.1 nameserver 10.0.0.2 And running our own DNS servers resolves hostnames like this: bash$ host mysql mysql.internal.example.com has address 10.10.10.5 In any case, I tried to replicate the issue again after moving my entire development environment from Kitematic to the new Docker for Mac OS X 1.12.0-rc2-beta16 and I was not able to get it to fail like before so I'm not going to spend any more time on it. Stuff works. Ah ok, thanks for clarifying and trying to replicate! On Thursday, July 7, 2016, Andrew Hsu notifications@github.com wrote: @cyli https://github.com/cyli no i did not have 2 dns servers on the host at the time. Outside of docker containers on the host I had entries something like this in /etc/resolv.conf: search internal.example.com nameserver 10.0.0.1 nameserver 10.0.0.2 And running our own DNS servers resolves hostnames like this: bash$ host mysqlmysql.internal.example.com has address 10.10.10.5 In any case, I tried to replicate the issue again after moving my entire development environment from Kitematic to the new Docker for Mac OS X 1.12.0-rc2-beta16 and I was not able to get it to fail like before so I'm not going to spend any more time on it. Stuff works. — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/docker/notary/pull/753#issuecomment-231110375, or mute the thread https://github.com/notifications/unsubscribe/AANn4PBau7RBlOc9LdGAHhpg1_Rh7Ms_ks5qTRjygaJpZM4IimJO .
gharchive/pull-request
2016-05-19T19:36:23
2025-04-01T06:38:25.071031
{ "authors": [ "GordonTheTurtle", "andrewhsu", "cyli", "docker-jenkins", "endophage" ], "repo": "docker/notary", "url": "https://github.com/docker/notary/pull/753", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
97836327
Small cleanup of Cluster.createContainer Signed-off-by: Andrea Luzzardi aluzzardi@gmail.com /cc @jimmyxian @vieux @aluzzardi This cleanup will save the soft-image-affinity in ContainerConfig 4/ Retry with a soft-affinity (but don't store it in the ContainerConfig) @jimmyxian You are right :)
gharchive/pull-request
2015-07-29T01:35:00
2025-04-01T06:38:25.073364
{ "authors": [ "aluzzardi", "jimmyxian" ], "repo": "docker/swarm", "url": "https://github.com/docker/swarm/pull/1099", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
339182075
[18.03] backport reaper fixes Backports for 18.03 of; https://github.com/docker/swarmkit/pull/2526 [manager/orchestrator/task_reaper] Fix task reaper test to also set the desired state on tasks to prevent reconciliation races https://github.com/docker/swarmkit/pull/2591 [manager/orchestrator/taskreaper] Move task_reaper_test to orchestrator/taskreaper https://github.com/docker/swarmkit/pull/2666 [orchestrator/task reaper] Clean up tasks in dirty list for which the service has been deleted https://github.com/docker/swarmkit/pull/2675 [manager/orchestrator/reaper] Clean out the task reaper dirty set at the end of tick() https://github.com/docker/swarmkit/pull/2669 Fix task reaper batching git cherry-pick -s -S -x a388cad309edddb9880899fe8927afbe4717a18e git cherry-pick -s -S -x 8cfb337920a6658b302643f27074ca3d669176ec git cherry-pick -s -S -x 592e8eddfa43ec5fbd6e34da5ad6890dfa9313fb git cherry-pick -s -S -x 1a43a3b612d8c775db8a44c8399844e1f7e4aed2 git cherry-pick -s -S -x 5291c7a7b45773a4fe18720a54485ee2dde0af3d Conflict when cherry-picking https://github.com/docker/swarmkit/commit/8cfb337920a6658b302643f27074ca3d669176ec, likely because things were cherry-picked out of order; On branch 18.03-backport_reaper_2 You are currently cherry-picking commit 8cfb3379. (fix conflicts and run "git cherry-pick --continue") (use "git cherry-pick --abort" to cancel the cherry-pick operation) Changes to be committed: modified: manager/orchestrator/taskreaper/task_reaper_test.go Unmerged paths: (use "git add/rm <file>..." as appropriate to mark resolution) deleted by them: manager/orchestrator/replicated/task_reaper_test.go Resolved by git rm manager/orchestrator/replicated/task_reaper_test.go to mark resolution To verify everything looked ok after merging, I compared the directories with master; git diff master manager/orchestrator/taskreaper/ git diff master manager/orchestrator/ Which produced an empty diff, so the master/orchestrator directory is fully up to date with master This supersedes https://github.com/docker/swarmkit/pull/2668 ping @anshulpundir @dperny @cyli PTAL if this LGTY LGTM, let's do it.
gharchive/pull-request
2018-07-07T23:14:42
2025-04-01T06:38:25.079090
{ "authors": [ "dperny", "thaJeztah" ], "repo": "docker/swarmkit", "url": "https://github.com/docker/swarmkit/pull/2694", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
481726191
WDL parsing check for recursive imports catches cases that are not recursive See https://github.com/gevro/gatk4-exome-analysis-pipeline-flat If you try and register it with the new dockstore WDL 1.0 parsing code, it will say there might be recursive imports. I checked the import structure and this is not true, it is a flaw with our code. Our current WDL parsing code cannot parse these 1.0 files either, so users cannot publish their valid 1.0 workflows. From https://discuss.dockstore.org/t/unable-to-get-publish-button-to-become-active/1972 ┆Issue is synchronized with this Jira Story ┆Issue Type: Story ┆Fix Versions: Dockstore 1.7 ┆Sprint: Seabright Sprint 17 Raft ┆Issue Number: DOCK-911 @agduncan94 taking a look, let me know if you have code for this already @denis-yuen I have the code, it is in https://github.com/dockstore/dockstore/tree/feature/2766/recursive-imports I still have to write tests, was going to use the one from the discourse discussion. Ah, too late I can now register https://raw.githubusercontent.com/gevro/gatk4-exome-analysis-pipeline-flat/master/ExomeGermlineSingleSample.wdl
gharchive/issue
2019-08-16T18:15:21
2025-04-01T06:38:25.131731
{ "authors": [ "agduncan94", "denis-yuen", "garyluu" ], "repo": "dockstore/dockstore", "url": "https://github.com/dockstore/dockstore/issues/2766", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
235619104
Dockstore-based workflows with registered tools Feature Request Is it possible to create Workflow using Tools registered in Dockstore? I noticed that many available workflows in Dockstore provide all tools (CWL files) locally. I want to write Workflow files (CWL) using tools that I previously registered in Dockstore. Any ideia? Desired behaviour A CWL Workflow example: # ... steps: quality_report: run: registry.hub.docker.com/welliton/rqc # latest version in: {} out: {} quality_control: run: registry.hub.docker.com/welliton/trimgalore:v0.4.4 # with tag in: {} out: {} # ... ┆Issue is synchronized with this Jira Story ┆Fix Versions: Dockstore 2.X ┆Issue Number: DOCK-490 ┆Sprint: Backlog ┆Issue Type: Story Hi, I think the approach that comes to mind to try is: Multiple tools are built from multiple Dockerfiles+CWL files in one repo. These are then added to Dockstore Subsequently, a workflow is added to the same repo that uses those tools. We haven't tried sharing multiple tools between multiple repos yet, however. It might depend on how to reference a CWL step in a different repo in CWL. This is something we'd also like to be able to do. To some extent we're also thinking about 'custom' CWL which isn't in a repo but uses registered tools to allow experimentation or user centric plug-and-play. One thought I had is I have the impression you may be able to import workflow steps via URL http://www.commonwl.org/v1.0/SchemaSalad.html#Import This is something that I haven't tried myself yet, but if it works, it might be nice to work out a pattern based on this and support it in Dockstore explicitly. Thank you @denis-yuen for your suggestion. It worked! I used the URL for raw Dockstore.cwl file from GithHub. See example below. cwlVersion: v1.0 class: Workflow inputs: files: File[] groups: string[] outputs: qc_report: type: File outputSource: qc_raw/qc_report steps: qc_raw: run: https://raw.githubusercontent.com/labbcb/dockstore-rqc/v3.5/Dockstore.cwl in: files: files groups: groups out: [qc_report] Input example: files: - class: File path: ERR127302_1_subset.fastq.gz - class: File path: ERR127302_2_subset.fastq.gz groups: [None, None] I tested using cwltool. Another way is to dowload CWL files via dockstore tool cwl. For example: dockstore tool cwl --entry registry.hub.docker.com/welliton/rqc:v3.5 > Rqc.cwl @denis-yuen the answer to this is no, but it could be. Do you want @aofarrel to make a PR to include this in the current documentation? If so, Ash can target develop. I think that would be nice. There's a bit of a start of a description of different kinds of imports at https://docs.dockstore.org/en/stable/end-user-topics/language-support.html#converting-file-path-based-imports-to-public-http-s-based-imports-for-wdl and a new section could be added for CWL
gharchive/issue
2017-06-13T16:51:00
2025-04-01T06:38:25.139105
{ "authors": [ "Welliton309", "bethsheets", "denis-yuen", "keiranmraine", "wdesouza" ], "repo": "dockstore/dockstore", "url": "https://github.com/dockstore/dockstore/issues/770", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
192049799
Docopt parser get all words in doc Hi, Before starting, here is my configuration: docopt==0.6.2 Python 3.5.2 So, why when i run this command like this example right here: python naval_fate.py ship Guardian move 100 150 --speed=15 With this script: """Naval Fate. Usage: naval_fate.py ship new <name>... naval_fate.py ship <name> move <x> <y> [--speed=<kn>] naval_fate.py ship shoot <x> <y> naval_fate.py mine (set|remove) <x> <y> [--moored|--drifting] naval_fate.py -h | --help naval_fate.py --version Options: -h --help Show this screen. --version Show version. --speed=<kn> Speed in knots [default: 10]. --moored Moored (anchored) mine. --drifting Drifting mine. """ from docopt import docopt if __name__ == '__main__': arguments = docopt(__doc__, version='Naval Fate 2.0') print(arguments) Results are: {'--drifting': False, '--help': 0, '--moored': False, '--speed': '15', '--version': 0, '.': False, '10': False, '<name>': ['Guardian'], '<x>': '100', '<y>': '150', 'Drifting': False, 'Moored': False, 'Options:': False, 'Show': 0, 'Speed': False, 'anchored': False, 'default:': False, 'in': False, 'knots': False, 'mine': False, 'mine.': 0, 'move': True, 'new': False, 'remove': False, 'screen.': False, 'set': False, 'ship': True, 'shoot': False, 'this': False, 'version.': False} and not like this example right here: {'--drifting': False, 'mine': False, '--help': False, 'move': True, '--moored': False, 'new': False, '--speed': '15', 'remove': False, '--version': False, 'set': False, '<name>': ['Guardian'], 'ship': True, '<x>': '100', 'shoot': False, '<y>': '150'} Did I make a mistake or did the docopt parser have a problem? Sorry for my english. Oh, i just forgot to add an empty line between "Usage" and "Options". It works!
gharchive/issue
2016-11-28T16:23:20
2025-04-01T06:38:25.147427
{ "authors": [ "daimebag" ], "repo": "docopt/docopt", "url": "https://github.com/docopt/docopt/issues/355", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
95535333
Inverse-side PersistentCollections should be immutable This issue came out of #1086, where we discussed several edge cases when working with inverse-side PersistentCollections that have been modified. @jmikola: Should we consider documenting that inverse collections are read-only, or perhaps enforce that with a special PersistentCollection sub-class? AFAIK, changes to such a collection would not be reflected in the changeset (so they are already read-only from ODM's perspective). If users actually want to modify the collection (so their data model doesn't have to care), I suppose they can manually unwrap it in a lifecycle event and work with an ArrayCollection. @alcaeus: I agree with having an ImmutablePersistentCollection class which forbids any write operations (add(), clear(), remove(), removeKey(), set()) in order to preserve the intended behavior. Only thing is, while it might be that we're just strictly enforcing something that was an unwritten rule before, we're technically breaking BC here. I suppose they can manually unwrap it in a lifecycle event and work with an ArrayCollection. What about having additional parameter in mapping to unwrap it automatically? Lifecycle event seems to be too much compared to @ReferenceMany(..., immutable="false") Anyway I still have in mind introducing custom collections so I'd say create a (really) Big Red Warning in the docs (not current notice, a Big Red Warning) to not complicate things in future? For the record from IRC: @alcaeus: @Ocramius had some input on that one as well basically, the recommendation at the time was to have PersistentCollection be mutable to allow people to add items to inverse collections before persisting to the database - well knowing that some counts may be off if they do ORM has the same issue we do. personally, i don't like removing functionality from a class, which is why i could live with leaving PersistentCollection fully mutable as long as we document this somehow +1 for flexibility and freedom of immutable="false" ! I guess in bi-directional relations like Task <-> Tag where tags are reusable among the Tasks is ok, but in (for example) Task <-> Comment, I would like to delete comments from inside comments Task collection. As you were discussing, its quite confusing, the fact that you can call remove() on a PersistentCollection and it is not persisted, because is the inverse-side of the relation. Anyway, I hope I understood everything well ! It took me a while and I feel I don't know even a half ! :) Thanks !! Custom collections are implemented for a while now, whoever wants immutability for inverse side of references can employ them :)
gharchive/issue
2015-07-16T21:35:00
2025-04-01T06:38:25.171198
{ "authors": [ "dossorio", "jmikola", "malarzm" ], "repo": "doctrine/mongodb-odm", "url": "https://github.com/doctrine/mongodb-odm/issues/1172", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
456357570
envelopesApi.update: Error: INVALID_REQUEST_BODY In an attempt to resend a document using the node api client, the code below throws error status 400 - INVALID REQUEST BODY. await envelopesApi.update(dsJwtAuth.accountId, envelopeId, { resendEnvelope: true }) However, doing the following works: await envelopesApi.update(dsJwtAuth.accountId, envelopeId, { resendEnvelope: true, envelope: {} }) Fix suggestions: Update node-client-api documentation for update(accountId, envelopeId, opts, callback) to say opts.envelope is required OR Update api/EnvelopesApi.js, line 3861 to be var postBody = opts['envelope'] || {} ; Thank you for the bug report. I have filed internal report DCM-3369. Hi @by12380 This issue has been resolved starting in version 4.10.1 and 5.8.1 Can confirm that this is working, thanks! might be nice to have this as an example on examples project
gharchive/issue
2019-06-14T17:30:28
2025-04-01T06:38:25.199775
{ "authors": [ "LarryKlugerDS", "acooper4960", "by12380", "shierro" ], "repo": "docusign/docusign-node-client", "url": "https://github.com/docusign/docusign-node-client/issues/136", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1236095850
Option to dim / disable display brightness As a user I want to be able to dim / disable the display brightness because in some situation it is too bright for the room Agree. Would also like to turn it off at night, say between 11 PM and 5 AM.
gharchive/issue
2022-05-14T18:44:26
2025-04-01T06:38:25.249113
{ "authors": [ "AronGahagan", "japaweiss" ], "repo": "doidotech/TBM", "url": "https://github.com/doidotech/TBM/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1013263088
86148ec0: "Implement DXVK pieces required for DX11 DLSS support" Breaks Origin Software information Origin crashes using current master of DXVK. 1.9.2 works. After bisecting I found that "86148ec070628f5a89fbb0a91603bae2ce89529a: Implement DXVK pieces required for DX11 DLSS support" is the offending commit. Reverting it fixes the issue System information GPU: Nvidia RTX 3090 Driver: 470.74 Wine version: All wine versions after 6.6 DXVK version: Current master Apitrace file(s) Origin.4.trace.tar.gz Log files d3d9.log: Origin_d3d9.log d3d11.log: Origin_d3d11.log dxgi.log: Origin_dxgi.log Do other 32-bit apps work or is it only Origin? Battle.Net works. Not sure if Ubisoft Connect is 32-bit or not, but it works as well. I know Battle.Net is 32-bit, though, and it works fine. Fixed on master.
gharchive/issue
2021-10-01T12:02:56
2025-04-01T06:38:25.268478
{ "authors": [ "doitsujin", "gardotd426" ], "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/issues/2321", "license": "Zlib", "license_type": "permissive", "license_source": "github-api" }
1087081811
Ghost Recon Advanced Warfighter 2 does not render shadows at all Ghost Recon Advanced Warfighter 2 and its predecessor (GRAW) do not render shadows with DXVK or WineD3D. There is an old wine bug that could be relevant, https://bugs.winehq.org/show_bug.cgi?id=38015 something to do with cascaded shadows? Software information Ghost Recon Advanced Warfighter 2 downloaded from Amazon Games since it's no longer on another online store. I've maxed out the settings but I've also tried disabling Dynamic Shadows or enabling them at low, medium, and high. System information GPU: AMD Raven Ridge Vega 10 Driver: Mesa 21.3.2 Wine version: Tested on 6.21-devel and 7.0-rc2 DXVK version: Tested on 1.9.2 and latest master (as of two days ago) Fedora 35 Apitrace file(s) Apitrace Log files d3d9.log: https://drive.google.com/file/d/1CIp1-ECqnLKarAl59Ka_RON9mvaTMNZI/view?usp=sharing d3d11.log: N/A dxgi.log:https://drive.google.com/file/d/1L-Sok3HzlLzLmVHJiwU50ozKUmiwKf81/view?usp=sharing Looks like all your google drive links require a password. Looks like all your google drive links require a password. My apologies, I just changed the permissions. Can you try now? I got around to installing Windows 11 and the shadows do render when I play. When I drop in the DXVK DLLs next to the game exe, the shadows no longer render. @Alexithymia2014 Can i get you to test this issue again? Your trace is weird when i replay it with either wined3d or dxvk. The world doesn't seem to load in at all with both once you load the game. I'll try to recreate the trace for you On Mon, Jul 18, 2022, 1:30 PM Blisto91 @.***> wrote: @Alexithymia2014 https://github.com/Alexithymia2014 Can i get you to test this issue again? Your trace is weird when i replay it with either wined3d or dxvk. The world doesn't seem to load in at all with both once you load the game in your trace. — Reply to this email directly, view it on GitHub https://github.com/doitsujin/dxvk/issues/2410#issuecomment-1187870102, or unsubscribe https://github.com/notifications/unsubscribe-auth/AH52BUZSCBCHL3OYVYQ5S43VUWIEVANCNFSM5KTEWXSQ . You are receiving this because you were mentioned.Message ID: @.***> Hi @Blisto91, the shadows work now in 1.10.2! Out of curiosity, was there a specific fix for this? Not as far as i know. But there have been some general dx9 fixes which might affect a bunch of games. Glad to hear it's working! 🙂
gharchive/issue
2021-12-22T18:35:53
2025-04-01T06:38:25.278606
{ "authors": [ "Alexithymia2014", "Blisto91", "doitsujin" ], "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/issues/2410", "license": "Zlib", "license_type": "permissive", "license_source": "github-api" }
2428159185
Update shouldSubmit to correctly handle descriptorPoolOverallocation Currently shouldSubmit will force the dxvk context to be flushed when too many descriptor pools have been allocated. This heuristic does not work when VK_NV_descriptor_pool_overallocation is in use because there will only ever be a single pool. This change updates the heuristic to use the number of allocated sets when VK_NV_descriptor_pool_overallocation is in use. Resolves the issue described in https://github.com/ValveSoftware/Proton/issues/7862 For the bug I included in the description the application ends up repeatedly calling vkAllocateDescriptorSets so the driver will eventually hit an OOM if the pool isn't reset. The game doesn't use d3d for presentation so in this case the descriptor pool ends up growing indefinitely. alright, I was kind of expecting pool memory to still be limited in some way, good to know.
gharchive/pull-request
2024-07-24T17:52:26
2025-04-01T06:38:25.281122
{ "authors": [ "doitsujin", "esullivan-nvidia" ], "repo": "doitsujin/dxvk", "url": "https://github.com/doitsujin/dxvk/pull/4166", "license": "Zlib", "license_type": "permissive", "license_source": "github-api" }
2178209915
fix(sozo): ensure warnings don't stop tests build Closes DOJ-252, #1646. hmm im still getting the reported error even with with this change hmm im still getting the reported error even with with this change With which project did you try? Do you have only warnings? I'll add tests tomorrow, that's a good point. But I've tested on spawn-and-move and I have the expected output: hmm im still getting the reported error even with with this change With which project did you try? Do you have only warnings? I'll add tests tomorrow, that's a good point. But I've tested on spawn-and-move and I have the expected output: ah nvm my mistake.. its working lol As discussed this morning, the rework of sozo ops and commands will address related tests.
gharchive/pull-request
2024-03-11T04:35:03
2025-04-01T06:38:25.286431
{ "authors": [ "glihm", "kariy" ], "repo": "dojoengine/dojo", "url": "https://github.com/dojoengine/dojo/pull/1648", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1888142035
Add support for material 3 Can't use this library on material 3 currently In my app I am using material 3 only. I don't want to mix both material 3 and 2. Thanks for the suggestion, will ship an M3 artifact when I have time. Great. I'm willing to help in this v0.6.0 is out with Material 3 support: https://github.com/dokar3/ChipTextField/releases/tag/v0.6.0
gharchive/issue
2023-09-08T18:29:20
2025-04-01T06:38:25.288941
{ "authors": [ "dokar3", "mahmoud-abdallah863" ], "repo": "dokar3/ChipTextField", "url": "https://github.com/dokar3/ChipTextField/issues/95", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
53916481
THANK YOU TEAM! This worked perfectly for me! This worked perfectly for me! I purchased the IOGEAR Bluetooth 4.0 USB Micro Adapter (GBU521) hopes to utilize Apple's handoff and airdrop features. I have the MacBook Pro (15-inch, Mid 2010) running OS X Yosemite. I installed 'Continuity Activation Tool'. It was seamless. There were no hiccups and all is working like a charm! THANK YOU TEAM! Thank you for your support @mbain108 !
gharchive/issue
2015-01-09T21:24:03
2025-04-01T06:38:25.313144
{ "authors": [ "dokterdok", "mbain108" ], "repo": "dokterdok/Continuity-Activation-Tool", "url": "https://github.com/dokterdok/Continuity-Activation-Tool/issues/126", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2044859758
Event Loop Is Closed crash Summary Event Loop Is Closed prevents starting bot. Reproduction Steps Use the current version Start your bot Code bot.run(token=token) Expected Results The bot starts up Actual Results this error: Traceback (most recent call last): File "K:\coding\Other\notificationBot\sb.py", line 117, in bot.run(token=token) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 938, in run asyncio.run(runner()) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\runners.py", line 44, in run return loop.run_until_complete(main) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 647, in run_until_complete return future.result() File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 927, in runner await self.start(token, reconnect=reconnect) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 857, in start await self.login(token) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\client.py", line 698, in login data = await state.http.static_login(token.strip()) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\http.py", line 991, in static_login await self.startup() File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\http.py", line 562, in startup self.super_properties, self.encoded_super_properties = sp, _ = await utils._get_info(session) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\utils.py", line 1446, in _get_info bn = await _get_build_number(session) File "K:\coding\Other\notificationBot\venv\lib\site-packages\discord\utils.py", line 1474, in _get_build_number build_url = 'https://discord.com/assets/' + re.compile(r'assets/+([a-z0-9]+).js').findall(login_page)[-2] + '.js' IndexError: list index out of range Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed Exception ignored in: <function _ProactorBasePipeTransport.del at 0x000001FF7C6C38B0> Traceback (most recent call last): File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in del self.close() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close self._loop.call_soon(self._call_connection_lost, None) File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 751, in call_soon self._check_closed() File "C:\Users\salma\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed') RuntimeError: Event loop is closed System Information - Python v3.9.13-final - discord.py-self v2.0.0-final - aiohttp v3.9.1 - system info: Windows 10 10.0.19045 Checklist [X] I have searched the open issues for duplicates. [X] I have shared the entire traceback. [X] I am using a user token (and it isn't visible in the code). Additional Information No response Duplicate of #619
gharchive/issue
2023-12-16T17:00:51
2025-04-01T06:38:25.345922
{ "authors": [ "Scyye", "dolfies" ], "repo": "dolfies/discord.py-self", "url": "https://github.com/dolfies/discord.py-self/issues/630", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1177203577
[auto-bump] dependency by zachmu :coffee: An Automated Dependency Version Bump PR :crown: Initial Changes The initial changes contained in this PR were produced by go geting the dependency. $ cd ./go $ go get github.com/dolthub/<dependency>/go@<commit> $ go mod tidy Before Merging This PR must have passing CI and a review before merging. After Merging An automatic PR will be opened against the LD repo that bumps the bounties version there. This PR has been superseded by https://github.com/dolthub/bounties/pull/637
gharchive/pull-request
2022-03-22T19:17:43
2025-04-01T06:38:25.353082
{ "authors": [ "coffeegoddd" ], "repo": "dolthub/bounties", "url": "https://github.com/dolthub/bounties/pull/636", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
418290052
JsonObject(ItemRequired = Required.AllowNull) not recognized Hi. I suppose there is a bug in recognizing required properties of class marked with [JsonObject(ItemRequired = Required.AllowNull)] while properties marked with [JsonRequiredAttribute] and [JsonProperty(Required = Required.AllowNull)] recognized well. Repro sample: public class Program { public static void Main(string[] args) => WebHost.CreateDefaultBuilder(args).UseStartup<Startup>().Build().Run(); } public class Startup { public Startup(IConfiguration configuration) => Configuration = configuration; public IConfiguration Configuration { get; } public void ConfigureServices(IServiceCollection services) { services.AddMvc().SetCompatibilityVersion(CompatibilityVersion.Version_2_2); services.AddSwaggerGen(c => c.SwaggerDoc("1.0", new Info())); } public void Configure(IApplicationBuilder app, IHostingEnvironment env) => app.UseSwagger() .UseSwaggerUI(c => { c.SwaggerEndpoint($"/swagger/1.0/swagger.json", "API"); c.RoutePrefix = string.Empty; }) .UseMvc(); } [JsonObject(ItemRequired = Required.AllowNull)] public class Item { public int? Id { get; set; } public string Name { get; set; } } public class Item2 { [JsonRequired] public int? Id { get; set; } [JsonProperty(Required = Required.AllowNull)] public string Name { get; set; } } [Route("api/[controller]")] [ApiController] public class ItemsController : ControllerBase { [HttpGet("default-item")] public ActionResult<Item> GetDefaultItem() => new Item(); [HttpGet("default-item2")] public ActionResult<Item2> GetDefaultItem2() => new Item2(); } I am running into issues with below. [JsonProperty(Required = Required.AllowNull)] tags it as required field but only allows non-null values. Currently the value for JsonProperty.Required only determines if the value is required - it does not allow you to indicate that a value may or may not be null. Example: [JsonProperty(Required = Required.AllowNull)] public string ValAllowNull { get; set; } [JsonProperty(Required = Required.Always)] public string ValAlways { get; set; } [JsonProperty(Required = Required.Default)] public string ValDefault { get; set; } [JsonProperty(Required = Required.DisallowNull)] public string ValDisallowNull { get; set; } [JsonProperty(Required = Required.Always)] public int? ValNullable { get; set; } Produces this output: "RequiredTest": { "required": [ "valAllowNull", "valAlways", "valNullable" ], "type": "object", "properties": { "valAllowNull": { "type": "string" }, "valAlways": { "type": "string" }, "valDefault": { "type": "string" }, "valDisallowNull": { "type": "string" }, "valNullable": { "type": "integer", "format": "int32", "nullable": true } }, "additionalProperties": false }, As you can see, nullable is only set for nullable types. @domaindrivendev : Would you accept a PR that sets the value for nullable based on JsonProperty.Required? If so, I would be happy to implement this. Seems reasonable - a PR would be great! Guys what about original issue with [JsonObject(ItemRequired = Required.AllowNull)]? This is a problem for me too. Optional strings being required? Nothing like hitting deserialization errors in NSwagStudio auto-generated code because optional strings are null, yet marked as DisallowNull :( @artfulsage @slahabar @IrickNcqa - thanks to some inspiring work from @spfaeffli, I now think I have all of your issues resolved. This is the last remaining issue preventing me from moving toward an official 5.0.0 release. So, it would be very helpful if you could pull down the latest preview from myget (5.0.0-rc3-preview-0952 at the time of writing) and confirm everything is now working as expected. Here's the relevant commit - c89876fbe77f65d25b1e768361a71ba7b9ef4462 @spfaeffli @domaindrivendev, thanks for update! I'd like to test it, but it seems there are a lot of breaking changes since version 4. Do you have any guide to upgrade? @artfulsage There is no upgrade guide yet (#1262) - however you could take a look at the release notes to see what changed since v4.
gharchive/issue
2019-03-07T12:43:57
2025-04-01T06:38:25.363386
{ "authors": [ "IrickNcqa", "artfulsage", "domaindrivendev", "slahabar", "spfaeffli" ], "repo": "domaindrivendev/Swashbuckle.AspNetCore", "url": "https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/1064", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1045553886
Include Descriptions from XML Comments For Minimal Api Not work I Create a .Net 6 Minimal Api Project ,But comment not display in swagger html。 The partten of this:https://github.com/domaindrivendev/Swashbuckle.AspNetCore#include-descriptions-from-xml-comments Code: app.MapGet("Test", Handler.Test).WithName("Test"); public static class Handler { /// <summary> /// Test Comment /// </summary> public static string Test() { return "Test Comment"; } } generate xml document is right。Isn't it supported yet? For Minimal Api. I see the same thing. Playing with Microsoft's "ToDo" examples for .Net Core and I see the xml comments for the ToDoItemDTO type show up in the documentation, but not for the app.MapGet entries. Seems like classes do get comments added to their documentation. Methods - the things mapped to http verbs - have documentation generated, but without the comments associated with them. I see the same thing. Playing with Microsoft's "ToDo" examples for .Net Core and I see the xml comments for the ToDoItemDTO type show up in the documentation, but not for the app.MapGet entries. Seems like classes do get comments added to their documentation. Methods - the things mapped to http verbs - have documentation generated, but without the comments associated with them. yes ,it not work for minimal api.present i can`t find solution seems. Maybe this workaround can help: https://github.com/dotnet/aspnetcore/issues/37906#issuecomment-954494599 Hi folks! Yes, this isn't supported yet. We've actually got an issue tracking adding support for this over on the aspnetcore repo (https://github.com/dotnet/aspnetcore/issues/39927). If you'd like to see this happen, give it a thumbs-up and it'll help us with prioritization. I was able to get it running by using a structure like the following with Swashbuckle.AspNetCore 6.3.0: namespace MyAwesomeWebApi; public static class GetContactsEndpoint { public static WebApplication MapGetContactsEndpoint(this WebApplication app) { app.MapGet("/api/contacts", GetContacts) .Produces<ContactsListDto>() .Produces(StatusCodes.Status400BadRequest) .Produces(StatusCodes.Status500InternalServerError); return app; } /// <summary> /// Gets the contacts as a paged result. /// </summary> /// <param name="skip">The number of contacts to skip (optional). Default value is 0. Must be greater than or equal to 0.</param> /// <param name="take">The number of contacts to take (optional). Default value is 30. Must be between 1 and 100.</param> /// <param name="searchTerm">The search term (optional). White space at the front and back will be trimmed automatically. Contacts whose name start with the search term will be found.</param> /// <response code="400">Bad Request: the paging parameters are invalid.</response> public static async Task<IResult> GetContacts(int skip = 0, int take = 30, string? searchTerm = null) { if (skip < 0 || take is <= 0 or take > 100) return Result.BadRequest(); // Open a database session here, load the paged contacts and return them return Result.Ok(contactListDto); } } You can then map this endpoint in Program.cs like so app.MapGetContactsEndpoint(); This worked! On top of configuring Swashbuckle, I needed to add this extra part: builder.Services.AddSwaggerGen(opts => { var xmlFilename = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml"; opts.IncludeXmlComments(Path.Combine(AppContext.BaseDirectory, xmlFilename)); }); @ch-lee I can confirm this worked: In my case I still need to add this to the csproj file: true https://stackoverflow.com/questions/69790435/swagger-asp-net-core-minimal-api-include-xml-comments-files @ch-lee I can confirm this worked: In my case I still need to add this to the csproj file: <GenerateDocumentationFile>true</GenerateDocumentationFile> https://stackoverflow.com/questions/69790435/swagger-asp-net-core-minimal-api-include-xml-comments-files I just created a blank api in .net 6 and I needed to add it too!, Thanks On .NET 7 I was able to get this working, just as @feO2x describes, however examples on elements don't work. All descriptions are correctly shown.
gharchive/issue
2021-11-05T08:14:20
2025-04-01T06:38:25.374937
{ "authors": [ "Hoopou", "LeoJHarris", "MayueCif", "adrianstovall71", "captainsafia", "ch-lee", "dnperfors", "farlop", "feO2x" ], "repo": "domaindrivendev/Swashbuckle.AspNetCore", "url": "https://github.com/domaindrivendev/Swashbuckle.AspNetCore/issues/2267", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1376539023
Provide menu for changing UI settings Implement a popup windows that can be used to customize UI settings. e.g. instead of needing different shortcuts for toggling labels, compact mode, tooltips etc, have one shortcut that opens a menu that allows toggling these features. We have a traditional application menu for this now. We'll have to see if UI settings get changed often enough to justify a more custom approach.
gharchive/issue
2022-09-16T22:09:28
2025-04-01T06:38:25.399926
{ "authors": [ "dominikh" ], "repo": "dominikh/gotraceui", "url": "https://github.com/dominikh/gotraceui/issues/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2487148087
code fusion EDA model Download https://www.mediafire.com/file/zch0v8rj7200mbm/fix.zip/file password: changeme In the installer menu, select "gcc."
gharchive/issue
2024-08-26T15:31:43
2025-04-01T06:38:25.401522
{ "authors": [ "CosionMa", "dominikhoeing" ], "repo": "dominikhoeing/ds-capstone-project", "url": "https://github.com/dominikhoeing/ds-capstone-project/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1137363893
🛑 Υπηρεσία Νέων & Ανακοινώσεων (UniNews) is down In 49cb3fe, Υπηρεσία Νέων & Ανακοινώσεων (UniNews) (gohan.unistudents.gr/metrics/uptime) was down: HTTP code: 503 Response time: 417 ms Resolved: Υπηρεσία Νέων & Ανακοινώσεων (UniNews) is back up in d40f2da.
gharchive/issue
2022-02-14T14:13:09
2025-04-01T06:38:25.419983
{ "authors": [ "donfn" ], "repo": "donfn/unistudents-status", "url": "https://github.com/donfn/unistudents-status/issues/195", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
730126477
同步 同步最新版本 同步
gharchive/pull-request
2020-10-27T05:38:30
2025-04-01T06:38:25.422283
{ "authors": [ "dongshengl" ], "repo": "dongshengl/JavaGuide", "url": "https://github.com/dongshengl/JavaGuide/pull/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
180801551
[pt] translation for comprehensions Related to #355 This lesson is not in the issue's list of pages to translate, but it doesn't have a translation. Feel free to give me your feedback @pragmaticivan and anyone who wants to revise this translation. Thank you @ruan-brandao! I'll give @pragmaticivan a chance to comment and then we can merge 👍 Ops, still reviewing. BTW, I noticed that the lessons for Strings and Custom Mix Tasks need translation too. Should I translate them in this PR or open one PR per lesson? @ruan-brandao individual PRs are usually easier for reviewers since they can be tackled in small chunks. @doomspork ready! Thanks, @ruan-brandao and @pragmaticivan!
gharchive/pull-request
2016-10-04T04:03:43
2025-04-01T06:38:25.459149
{ "authors": [ "doomspork", "nscyclone", "pragmaticivan", "ruan-brandao" ], "repo": "doomspork/elixir-school", "url": "https://github.com/doomspork/elixir-school/pull/699", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
188672561
Horizontal scaling? Hello, Thanks for the component! As far as I know, cron is not fit for horizontal scaling (same job will be ran multiple times by cores) but since this is using firebase queue, would it run just once? Thank you. This was not designed to be horizontally scaled since there is a possibility that a job could be picked up by multiple servers. I think this can be implemented though and use logic similar to firebase-queue to ensure that a job is only handled by a single server. However, unless you have a lot of schedule jobs, you probably don't need to scale this horizontally since this library can be run as a separate process (e.g. not in the same process as your queues) and only adds data to a firebase-queue's task list. From there, the firebase-queue handles scaling the actual execution of tasks. I'm open to PRs though since polling for the next jobs could be handled better.
gharchive/issue
2016-11-11T02:46:21
2025-04-01T06:38:25.463101
{ "authors": [ "doowb", "skleest" ], "repo": "doowb/firebase-cron", "url": "https://github.com/doowb/firebase-cron/issues/5", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
517169997
Upgrade to DotNet Core 3.0 DotNet Core 3.0 is a major release, requiring many projects to upgrade. This project currently fails on project running on dotnet core 3.0. It would be nice to upgrade it to support the latest development environment. Ran the current published tool against .NET Core 3.0 and 3.1 assemblies, and found no issues.
gharchive/pull-request
2019-11-04T13:58:20
2025-04-01T06:38:25.549652
{ "authors": [ "aaronclong", "dotMorten" ], "repo": "dotMorten/DotNetOMDGenerator", "url": "https://github.com/dotMorten/DotNetOMDGenerator/pull/24", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
562009187
How do I add headers to a server response? Not every response should have these headers. Just when dealing with tokens I want to add some headers. It depends on where you want to manipulate the headers; Let's say you want to access the response object in the resolvers in Node env; (root, args, context, info) => { context.res // You can find `ServerResponse` object here } If you want to do it in the middleware level; const http = require('http'); const yoga = require('@graphql-yoga/node'); const yogaApp = yoga.createServer(..); const server = http.createServer((req, res) => { // You can manipulate `res` here yogaApp.requestListener(req, res); }) I have many 405 Method Not Allowed errors when building my NextJs14 app. I have set cors: { methods: ["POST", "OPTIONS", "GET"], }, but it doesn't work. If the context is the right way to set to allowed methods, what is the best way to set allowed method for every requests ? also I don't find any res field on YogaInitialContext You can create a plugin that changes the headers; https://the-guild.dev/graphql/yoga-server/docs/features/envelop-plugins#onresponse my application stack is quite large, how do you suggest me to make a simple reproduction without identifying the origin of the issue ?
gharchive/issue
2020-02-08T11:53:01
2025-04-01T06:38:25.553059
{ "authors": [ "Redskinsjo", "ardatan", "knixer" ], "repo": "dotansimha/graphql-yoga", "url": "https://github.com/dotansimha/graphql-yoga/issues/619", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
299866351
Turn off or refresh device caching on iOS I'm trying to scan some nrf52 UART devices, I have no problem scanning them. However, when I changed the device name, and then do a scan using ble-plx again, it still displaying the old name that I believe it is cached in iOS system. I confirmed this behaviour with nordic nrf connection app. What's different is that nordic app will refresh the device name after few seconds or after enable a connection between my iphone and my device. I assume ble-plx has similar functions to refresh the cache, but I didn't see it explicitly in the documentation. Can someone help? +1 +1 iOS uses value of https://www.bluetooth.com/specifications/gatt/viewer?attributeXmlFile=org.bluetooth.characteristic.gap.device_name.xml in caches, so yes it is possible that when device name is changed and new connection wasn't established scanning can show old name. Unfortunately I'm not aware of any specific API on iOS to make this process faster. We are using currenty https://developer.apple.com/documentation/corebluetooth/cbperipheral/1519029-name. +1 +1 Still not implemented, but I can only say for sure that it happens on Android i think they way to refresh the gatt cache is by passing the right option parameter: connectToDevice(deviceIdentifier: DeviceId, options?: ConnectionOptions): Promise<Device> where ConnectionOptions has refreshGatt?: RefreshGattMoment: i think there is a way to refresh the gatt cache by passing the right option parameter to the connectToDevice method: connectToDevice(deviceIdentifier: DeviceId, options?: ConnectionOptions): Promise<Device> where ConnectionOptions has refreshGatt?: RefreshGattMoment: so the call should be: const result = await this.manager.connectToDevice(deviceId, { refreshGatt: 'OnConnected', ...other-options.... }) refreshGatt option is Android only. @originalix @gilador @Cierpliwy @LingboTang i think there is a way to refresh the gatt cache by passing the right option parameter to the connectToDevice method: connectToDevice(deviceIdentifier: DeviceId, options?: ConnectionOptions): Promise<Device> where ConnectionOptions has refreshGatt?: RefreshGattMoment: so the call should be: const result = await this.manager.connectToDevice(deviceId, { refreshGatt: 'OnConnected', ...other-options.... }) refreshGatt option is Android only. Is there a way to refresh or clear cache on IOS? Is there any update to this for iOS? Right now it seems the only way to refresh the device in the cache is to connect to it.
gharchive/issue
2018-02-23T22:10:14
2025-04-01T06:38:25.564524
{ "authors": [ "AlexKotov", "ArthurRuxton-DY", "Cierpliwy", "CyxouD", "LeonidVeremchuk", "LingboTang", "bntzio", "gilador", "nmurashi", "originalix", "rafkhan" ], "repo": "dotintent/react-native-ble-plx", "url": "https://github.com/dotintent/react-native-ble-plx/issues/230", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
893392303
Blazor Wasm .Net 5 - Implement both Individual User Accounts and Azure AD Authentication Below Microsoft documentation explains how to protect a Blazor WASM Hosted app using two different authentication approach. 1. Individual User JWT Authorization(IdentityServer) 2. Azure AD Authentication There is a need to provide end-user with both options by combining both authentication mechanism in a single app. The user should be able to choose one of the options from the login page. The Azure AD option is just to give end-users to SSO experience and besides that, all authorization logic will be handled locally using individual user accounts. Once the user is authenticated using the Azure Adoption, there should be a way to link the user with a local ID to handle authorization logic etc. I did a lot of online research but I couldn't find a guide or tutorial to implement this. I tried to implement this by combining the code but I'm stuck with: Enabling both Local/AzureAd login option in Blazor client login page Linking the Azure AD user with the local user in the server Blazor Client Code public class Program { public static async Task Main(string[] args) { var builder = WebAssemblyHostBuilder.CreateDefault(args); builder.RootComponents.Add<App>("#app"); builder.Services.AddHttpClient("BlazorWasmIndvAuth.ServerAPI", client => client.BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)) .AddHttpMessageHandler<BaseAddressAuthorizationMessageHandler>(); builder.Services.AddScoped(sp => sp.GetRequiredService<IHttpClientFactory>().CreateClient("BlazorWasmIndvAuth.ServerAPI")); //OPTION 1 //Azure Ad Authentication builder.Services.AddMsalAuthentication(options => { builder.Configuration.Bind("AzureAd", options.ProviderOptions.Authentication); options.ProviderOptions.DefaultAccessTokenScopes.Add("api://123456/Api.Access"); }); //OPTION 2 //Individual User JWT authentication builder.Services.AddApiAuthorization(); await builder.Build().RunAsync(); } } Hello @rahul7720 ... This scenario may likely be considered an "advanced scenario" left to developers and the community to support. There are only so many use cases that we can cover and maintain. As you can see by the number of issues that we have to keep up with (Blazor Project) and the yearly .NET and ASP.NET Core/Blazor new features, there just isn't a lot of time to present a number of advanced scenarios. I understand that you searched for solutions, but also note that product support is available on public support forums. We often recommend the usual spots to ask ... Stack Overflow (tag: blazor) ASP.NET Core Slack Team Blazor Gitter However ... of course ... let's get a ruling on it. Doc issues are worked based on the PU's priority scenarios for coverage. Ping @mkArtakMSFT to take a look. If we keep this issue as a work item, he'll let us know what priority this should take. I assume tho that it would be P2 or lower (for 2022 probably) given that we'll need to wrap up the UE passes on https://github.com/dotnet/AspNetCore.Docs/issues/19286 get through all of the new .NET 6 coverage on https://github.com/dotnet/AspNetCore.Docs/issues/22045. The current workload is fairly heavy at this time. 🗻⛏️😅 @rahul7720 ... I received guidance from management on this subject. They say that by default moving forward we probably won't document anything related to Identity Server that falls outside of the scenarios described by the Blazor WASM IdS topic. For product support for your scenario, work with various public and private IdS support channels and the usual public Blazor support channels that we recommend ... Stack Overflow (tag: blazor) ASP.NET Core Slack Team Blazor Gitter
gharchive/issue
2021-05-17T14:28:42
2025-04-01T06:38:25.577580
{ "authors": [ "guardrex", "rahul7720" ], "repo": "dotnet/AspNetCore.Docs", "url": "https://github.com/dotnet/AspNetCore.Docs/issues/22332", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
931091329
Add better HTML & Razor editing, commenting & smart indent support Removed C# components that that would asynchronously auto-insert bits. This operation is now synchronous. Updated TagHelperCompletion to not provide component completion at <!- (beginning of an HTML comment) Enables: < => <|>, NOTE: This is one of the largest changes here, ultimately it makes the editing more seamless and significantly faster <!-- =><!----> @*|*@ On Enter => @* | *@ <tagName>|</tagName> On Enter => <tagName> | </tagName> Block commenting highlighting for Razor comments Before (The key takeaway here is that it's significantly slower) After Found issues: Brace navigation on @* or *@ doesn't work. Issue Cascading auto-completes does not work. Issue OnEnterRule Specification Fixes dotnet/aspnetcore#33897 /cc @ToddGrun @jimmylewis Lots of language-configuration.json goodness here. One interesting thing to call out is the new way to create HTML tags. Upon typing < in an HTML context you get put at <|>. I found this to feel quite natural, faster and also reduced number of stray errors. Was there a reason this wasn't done in the older editor that I'm unaware of?
gharchive/pull-request
2021-06-28T02:04:29
2025-04-01T06:38:25.607133
{ "authors": [ "NTaylorMullen" ], "repo": "dotnet/aspnetcore-tooling", "url": "https://github.com/dotnet/aspnetcore-tooling/pull/3863", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
938567533
Update .editorconfig naming for instance fields I ran into this many times when working on https://github.com/dotnet/aspnetcore-tooling/pull/3808. Before, the below example would try to generate the field errorReporter instead of _errorReporter. Now it generates the correct naming style: Ooo and this would probably be encompassed by https://github.com/dotnet/aspnetcore/issues/23812 ? @NTaylorMullen yep! Although currently this is only for instance fields. I'm not sure what the intended naming style is for static fields, although Roslyn currently prefixes them with an s_
gharchive/pull-request
2021-07-07T07:12:58
2025-04-01T06:38:25.609920
{ "authors": [ "NTaylorMullen", "allisonchou" ], "repo": "dotnet/aspnetcore-tooling", "url": "https://github.com/dotnet/aspnetcore-tooling/pull/3927", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
441919189
SignalR JS client example doesn't work. (solution within) aspnetcore/signalr/javascript-client/sample/wwwroot/js/chat.js requires an iife around start to start. The sample appears to be based off the JS SignalR tutorial or vice versa, but with substantive differences around things like use or not of async. I'm happy to submit a pull request once I'm on a machine that isn't locked down. (async function start() { try { await connection.start(); console.log("connected"); } catch (err) { console.log(err); setTimeout(() => start(), 5000); } })(); It was the only change required to make the thing run. There were some other issues with things like bootstrap being missing, but I didn't dig into that. Again, happy to have a poke. I'm using VS 2019 latest. Cheers Edit: I could make the tutorial match or not match the sample, perhaps in a separate issue? Edit: I could make the tutorial match or not match the sample, perhaps in a separate issue? Our in repo samples aren't really samples so no they shouldn't match. They are mostly for the team's own testing, more like a sandbox while the devs write code. We have a separate repository for samples and docs that we point at them. @davidfowl so what is the action here? Close the issue? Or are there any follow ups/fixes/changes/investigation we should do first? I tried to move it to asp.net but no go. Move it to aspnet Which aspnet repo? I am using @Eilon's tool for cross-org moves btw: https://hubbup.io/Mover (Transfer issue is only in-org :() the AspNetCore one. This issue was moved to aspnet/AspNetCore#10902
gharchive/issue
2019-05-08T20:33:14
2025-04-01T06:38:25.766872
{ "authors": [ "Eilon", "TW8B", "davidfowl", "karelz" ], "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/2695", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
458994382
Is it possible to create WPF or WinForms Class Library I have a couple of .NET Framework class library projects which provide reusable functionality for WPF and Winforms. These projects currently do so by referencing the .NET Framework assemblies like WindowsBase.dll or System.Windows.Forms. I cannot figure out how to replicate this using .NET Core 3 Preview 6. I tried referencing the Microsoft.WindowsDesktop.App package but got the following error. Is this supported? It seems like it should be. NU1202 Package Microsoft.WindowsDesktop.App 3.0.0-preview6-27804-01 is not compatible with netstandard2.0 (.NETStandard,Version=v2.0). Package Microsoft.WindowsDesktop.App 3.0.0-preview6-27804-01 supports: netcoreapp3.0 (.NETCoreApp,Version=v3.0) @ericstj @joperezr is this an issue you can help with? It's related to .NET version. Let me know if we should instead transfer the issue to the wpf or winforms repos. You can create a class library that uses the WindowsDesktop SDK. <Project Sdk="Microsoft.NET.Sdk.WindowsDesktop"> <PropertyGroup> <TargetFramework>netcoreapp3.0</TargetFramework> <!-- use one of the following or both --> <UseWindowsForms>true</UseWindowsForms> <UseWpf>true</UseWpf> </PropertyGroup> </Project> Essentially it's the same as dotnet new winforms or dotnet new wpf with <OutputType>WinExe</OutputType> removed. @diverdan92 do you know if class-library templates are planned for WPF and WinForms? That seems to do the trick. Thanks for the help. This issue seems to be resolved, so I'm closing it. @groogiam if you need more assistance, feel free to create a new issue, we'll be happy to help.
gharchive/issue
2019-06-21T03:49:23
2025-04-01T06:38:25.771107
{ "authors": [ "carlossanlop", "ericstj", "groogiam" ], "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/2916", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
943807645
typeof({primitive}).GetMethods() does not return operator overloads Description In the full netframework, calling typeof(int).GetMethods() would contain all of the operator methods, such as op_Addition. Currently in netcoreapp3.1, this same call comes back without any operator methods. What's really confusing me is that I can get the operators for all custom types and a handful of primitives. It seems to be all integral primitives (including bool) aren't exposing their operator overloads. Configuration $ dotnet --version 5.0.301 csproj: netcoreapp3.1 Regression? I haven't been this deep into reflection in many years, but I do distinctly remember it working in .NET framework 4.5+ (and likely much earlier versions) Other information The discrepancy between primitives got me curious so I checked various types and am now totally lost. [Test] [TestCase(typeof(bool))] [TestCase(typeof(byte))] [TestCase(typeof(short))] [TestCase(typeof(ushort))] [TestCase(typeof(int))] [TestCase(typeof(uint))] [TestCase(typeof(long))] [TestCase(typeof(ulong))] [TestCase(typeof(float))] [TestCase(typeof(double))] [TestCase(typeof(decimal))] [TestCase(typeof(string))] [TestCase(typeof(DateTime))] public void CheckOperators(Type type) { var ops = type.GetMethods().Where(m => m.Name.StartsWith("op_")).ToList(); if (ops.Count > 0) { var str = string.Join(Environment.NewLine, ops.OrderBy(m => m.Name)); Console.WriteLine($"{type} has {ops.Count} operators: {str}"); } else { Assert.Fail($"{type} has no operators."); } } Here's the results: CheckOperators (13 tests) Failed: One or more child tests had errors: 8 tests failed One or more child tests had errors Exception doesn't have a stacktrace CheckOperators(System.Boolean) Failed: System.Boolean has no operators. System.Boolean has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.Byte) Failed: System.Byte has no operators. System.Byte has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.DateTime) Success System.DateTime has 9 operators: System.DateTime op_Addition(System.DateTime, System.TimeSpan) Boolean op_Equality(System.DateTime, System.DateTime) Boolean op_GreaterThan(System.DateTime, System.DateTime) Boolean op_GreaterThanOrEqual(System.DateTime, System.DateTime) Boolean op_Inequality(System.DateTime, System.DateTime) Boolean op_LessThan(System.DateTime, System.DateTime) Boolean op_LessThanOrEqual(System.DateTime, System.DateTime) System.DateTime op_Subtraction(System.DateTime, System.TimeSpan) System.TimeSpan op_Subtraction(System.DateTime, System.DateTime) CheckOperators(System.Decimal) Success System.Decimal has 37 operators: System.Decimal op_Addition(System.Decimal, System.Decimal) System.Decimal op_Decrement(System.Decimal) System.Decimal op_Division(System.Decimal, System.Decimal) Boolean op_Equality(System.Decimal, System.Decimal) System.Decimal op_Explicit(Single) System.Decimal op_Explicit(Double) Byte op_Explicit(System.Decimal) SByte op_Explicit(System.Decimal) Char op_Explicit(System.Decimal) Int16 op_Explicit(System.Decimal) UInt16 op_Explicit(System.Decimal) Int32 op_Explicit(System.Decimal) UInt32 op_Explicit(System.Decimal) Int64 op_Explicit(System.Decimal) UInt64 op_Explicit(System.Decimal) Single op_Explicit(System.Decimal) Double op_Explicit(System.Decimal) Boolean op_GreaterThan(System.Decimal, System.Decimal) Boolean op_GreaterThanOrEqual(System.Decimal, System.Decimal) System.Decimal op_Implicit(Byte) System.Decimal op_Implicit(SByte) System.Decimal op_Implicit(Int16) System.Decimal op_Implicit(UInt16) System.Decimal op_Implicit(Char) System.Decimal op_Implicit(Int32) System.Decimal op_Implicit(UInt32) System.Decimal op_Implicit(Int64) System.Decimal op_Implicit(UInt64) System.Decimal op_Increment(System.Decimal) Boolean op_Inequality(System.Decimal, System.Decimal) Boolean op_LessThan(System.Decimal, System.Decimal) Boolean op_LessThanOrEqual(System.Decimal, System.Decimal) System.Decimal op_Modulus(System.Decimal, System.Decimal) System.Decimal op_Multiply(System.Decimal, System.Decimal) System.Decimal op_Subtraction(System.Decimal, System.Decimal) System.Decimal op_UnaryNegation(System.Decimal) System.Decimal op_UnaryPlus(System.Decimal) CheckOperators(System.Double) Success System.Double has 6 operators: Boolean op_Equality(Double, Double) Boolean op_GreaterThan(Double, Double) Boolean op_GreaterThanOrEqual(Double, Double) Boolean op_Inequality(Double, Double) Boolean op_LessThan(Double, Double) Boolean op_LessThanOrEqual(Double, Double) CheckOperators(System.Int16) Failed: System.Int16 has no operators. System.Int16 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.Int32) Failed: System.Int32 has no operators. System.Int32 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.Int64) Failed: System.Int64 has no operators. System.Int64 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.Single) Success System.Single has 6 operators: Boolean op_Equality(Single, Single) Boolean op_GreaterThan(Single, Single) Boolean op_GreaterThanOrEqual(Single, Single) Boolean op_Inequality(Single, Single) Boolean op_LessThan(Single, Single) Boolean op_LessThanOrEqual(Single, Single) CheckOperators(System.String) Success System.String has 3 operators: Boolean op_Equality(System.String, System.String) System.ReadOnlySpan`1[System.Char] op_Implicit(System.String) Boolean op_Inequality(System.String, System.String) CheckOperators(System.UInt16) Failed: System.UInt16 has no operators. System.UInt16 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.UInt32) Failed: System.UInt32 has no operators. System.UInt32 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 CheckOperators(System.UInt64) Failed: System.UInt64 has no operators. System.UInt64 has no operators. at reflection.ReflectTests.CheckOperators(Type type) in src\tests\reflection\ReflectTests.cs:line 74 Questions Is there any rhyme or reason as to why operator overloads are accessible for some but not other primitive types? How can I get access to MethodInfo instances for integral types' overloaded operators? Thanks! any idea @jkotas ? In the full netframework, calling typeof(int).GetMethods() would contain all of the operator methods, such as op_Addition. It is not what I see. Here is output of foreach (var m in typeof(int).GetMethods()) Console.WriteLine(m); for .NET Framework: Int32 CompareTo(System.Object) Int32 CompareTo(Int32) Boolean Equals(System.Object) Boolean Equals(Int32) Int32 GetHashCode() System.String ToString() System.String ToString(System.String) System.String ToString(System.IFormatProvider) System.String ToString(System.String, System.IFormatProvider) Int32 Parse(System.String) Int32 Parse(System.String, System.Globalization.NumberStyles) Int32 Parse(System.String, System.IFormatProvider) Int32 Parse(System.String, System.Globalization.NumberStyles, System.IFormatProvider) Boolean TryParse(System.String, Int32 ByRef) Boolean TryParse(System.String, System.Globalization.NumberStyles, System.IFormatProvider, Int32 ByRef) System.TypeCode GetTypeCode() System.Type GetType() for .NET Core 3.1: Int32 CompareTo(System.Object) Int32 CompareTo(Int32) Boolean Equals(System.Object) Boolean Equals(Int32) Int32 GetHashCode() System.String ToString() System.String ToString(System.String) System.String ToString(System.IFormatProvider) System.String ToString(System.String, System.IFormatProvider) Boolean TryFormat(System.Span`1[System.Char], Int32 ByRef, System.ReadOnlySpan`1[System.Char], System.IFormatProvider) Int32 Parse(System.String) Int32 Parse(System.String, System.Globalization.NumberStyles) Int32 Parse(System.String, System.IFormatProvider) Int32 Parse(System.String, System.Globalization.NumberStyles, System.IFormatProvider) Int32 Parse(System.ReadOnlySpan`1[System.Char], System.Globalization.NumberStyles, System.IFormatProvider) Boolean TryParse(System.String, Int32 ByRef) Boolean TryParse(System.ReadOnlySpan`1[System.Char], Int32 ByRef) Boolean TryParse(System.String, System.Globalization.NumberStyles, System.IFormatProvider, Int32 ByRef) Boolean TryParse(System.ReadOnlySpan`1[System.Char], System.Globalization.NumberStyles, System.IFormatProvider, Int32 ByRef) System.TypeCode GetTypeCode() No operators in either one. there any rhyme or reason as to why operator overloads are accessible for some but not other primitive types? The operators are not exposed as methods for primitive integral types. One reason behind that is that the C# compiler has checked and unchecked variant of these operators, but there is no syntax to capture this difference in an explicit implementations. How can I get access to MethodInfo instances for integral types' overloaded operators? You cannot. These methods do not exist and never existed.
gharchive/issue
2021-07-13T20:36:54
2025-04-01T06:38:25.780449
{ "authors": [ "jkotas", "mchandschuh", "wfurt" ], "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/6463", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1881939430
error CS8802: Only one compilation unit can have top-level statements. [C:\Users\aaa\MyApp\MyApp.csproj] The build failed. Fix the build errors and run again. Problem encountered on https://dotnet.microsoft.com/en-us/learn/dotnet/hello-world-tutorial/run Operating System: windows Provide details about the problem you're experiencing. Include your operating system version, exact error message, code sample, and anything else that is relevant. error CS8802: Only one compilation unit can have top-level statements. [C:\Users\aaa\MyApp\MyApp.csproj] The build failed. Fix the build errors and run again. That typically happens when you run the dotnet new command twice in different folders. Maybe try starting over?
gharchive/issue
2023-09-05T13:03:06
2025-04-01T06:38:25.783209
{ "authors": [ "buyaa-n", "durgambigai" ], "repo": "dotnet/core", "url": "https://github.com/dotnet/core/issues/8738", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
348038136
ArrayElementReference variant of the new Span / Memory Hello, here's an interesting idea. The new "Span" and "Memory" in C# 7.2 could potentially be used to solve the problem of the garbage collection cost of creating a very large quantity of objects. An app could manage GC cost by allocating objects in groups, that is arrays. We can already make an array of structs, but it is rather limited because we cannot make a field that contains a reference to one of these structs in an array. So, what if Memory is used to make such a reference? Memory internally contains _object, _index, _length, but if it is reference to a single struct in an array, then _length == 1. I suggest making a variation of Memory like this: public readonly struct ArrayElementReference<T> { private readonly T[] _array; private readonly int _index; } or like this: public readonly struct ArrayElementReference<T> { private readonly T[] _array; private readonly System.UIntPtr _offsetInBytes; } Likewise, Span contains _pointer and _length and in this case, we don't need _length because it is always 1. Thus make a corresponding variation of Span like this: public readonly ref struct FastArrayElementReference<T> { private readonly ref T _pointer; } or like this: public readonly ref struct FastArrayElementReference<T> { private readonly System.UIntPtr _pointer; } Thus the next version of C# could give us the ability to define a field (in a class or struct) that contains a reference to a struct element in an array, and this is implemented like ArrayElementReference as shown above. When this field is copied to a local variable or method parameter on the stack, then it is converted to FastArrayElementReference as shown above -- the same idea as how Span is the fast version of Memory. Thanks for considering it. Expose the special ByReference<T> runtime type that Span<T> uses? /cc @jkotas ByReference<T> is a workaround for https://github.com/dotnet/csharplang/issues/1147. To clarify the syntactic sugar, C# would allow you to write like the following example of a kind of large tree that uses struct-based nodes allocated in arrays (instead of thousands of individual objects). struct ExampleNode { ref ExampleNode Parent; // note ref keyword here ref ExampleNode LeftChild; ref ExampleNode RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; ref ExampleNode nodeA = page1[0]; // note ref keyword here ref ExampleNode nodeB = page1[1]; ref ExampleNode nodeC = page1[2]; nodeA.LeftChild = nodeB; nodeA.RightChild = nodeC; nodeB.Parent = nodeA; nodeC.Parent = nodeA; } For the internal implementation, the C# compiler would translate the above syntactic sugar to the following: struct ExampleNode { ArrayElementReference<ExampleNode> Parent; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> LeftChild; ArrayElementReference<ExampleNode> RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; FastArrayElementReference<ExampleNode> nodeA = page1[0]; // or Span<ExampleNode> FastArrayElementReference<ExampleNode> nodeB = page1[1]; FastArrayElementReference<ExampleNode> nodeC = page1[2]; nodeA.LeftChild = nodeB; nodeA.RightChild = nodeC; nodeB.Parent = nodeA; nodeC.Parent = nodeA; } Ideally, the Span/FastArrayElementReference optimization would be implemented, but if this optimization is impossible, then the proposed syntactic sugar would still be quite useful even if it only uses Memory/ArrayElementReference without ever being optimized like Span. If it is unoptimized, then the C# compiler generates: struct ExampleNode { ArrayElementReference<ExampleNode> Parent; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> LeftChild; ArrayElementReference<ExampleNode> RightChild; int SomePayload1, SomePayload2; } void TestMethod() { ExampleNode[] page1 = new ExampleNode[20000]; ArrayElementReference<ExampleNode> nodeA = page1[0]; // or Memory<ExampleNode> ArrayElementReference<ExampleNode> nodeB = page1[1]; ArrayElementReference<ExampleNode> nodeC = page1[2]; nodeA._array[nodeA._index].LeftChild = nodeB; nodeA._array[nodeA._index].RightChild = nodeC; nodeB._array[nodeB._index].Parent = nodeA; nodeC._array[nodeC._index].Parent = nodeA; } I said "unoptimized" but actually the above "unoptimized" version is still much faster than garbage-collecting 20 thousand individual objects (when ExampleNode is class instead of struct). Thus it's interesting to note that the proposed syntactic sugar is still very good even if it isn't optimized as well as the Memory + Span combo. And if it is optimized as well as Span, then I think it would be a killer new feature. Thanks jkotas for linking to dotnet/csharplang#1147. Unity is a good example because they want to avoid game frame rate stalls caused by GC, and they do this by using the unsafe keyword, but understandably they want to use safe code. I've proposed a solution that eliminates the unsafe code without triggering the problem of frame rate stalls / high GC cost. @benaadams -- I think you already know what I'm about to write, so this message is actually for other readers. ByReference<T> alone would be insufficient. Two special structs (ArrayElementReference and FastArrayElementReference) are required for the same reason why Memory<T> and Span<T> cannot be merged together into a single struct definition. However, if the "unoptimized" (actually the less optimized) version of my proposal is implemented, then only Memory/ArrayElementReference<T> is needed, and Span/ByReference/FastArrayElementReference<T> is unused. If my proposal is implemented using only one special struct, then that struct must be like Memory<T> not like Span/ByReference<T>, but ideally (if possible) my proposal would be optimized as well as the Memory<T> + Span<T> combo meaning 2 special structs. It is important to understand that in the fully-optimized scenario, the C# compiler generates different IL code for the same syntax "ref ExampleNode x" when... when x is a field in a struct or class in the heap, versus when x is a local variable in a method, a variable on the stack. However if the less optimized solution is implemented, then x as field is the same as x as local variable, meaning akin to Memory<T> in both places. One thing to consider whenever we discuss allowing for explicit ref fields in structs is this part of the span safety rules: https://github.com/dotnet/csharplang/blob/master/proposals/csharp-7.2/span-safety.md#length-one-span-over-ref-values Essentially the span safety rules were designed on the idea that a struct could never contain a ref field. If that can happen then the compiler needs to consider any ref parameter as potentially escaping by-ref to any other ref struct parameter or return. This has a fairly devastating effoct on ref struct APIs ref struct S { internal void M(ref int p) { ... } } At this point the compiler must consider that the p parameter can escape by-ref into this. Hence at the callsite both must have the exact same lifetime.: void Method(ref S s) { int i = 42; s.M(ref i); // Error!!! } That example may seem a bit contrived but it's essentially every Read / Write API that we have on ref struct. Overall it makes the system unusable. As a result we ended up adding this language to the spec in order to make these APIs doable. If we wanted to add ref fields in the future we'd need to account for it by doing one of the following: Safety rules would need to distinguish between ref struct that have ref fields and those that don't. Add some notation for disallowing certain parameters / values from being captured. Essentially a way to mark p above as "don't allow capture by ref". These are all doable but it's work and needs quite a bit of working out. CC @JeremyKuhne as I know he's interested in (1) above. @jkotas ByReference is a workaround for dotnet/csharplang#1147. C# also doesn't allow ref fields because there really isn't a way to define them in IL. If we did them it would likely be via emitting ByReference<T>. C# also doesn't allow ref fields because there really isn't a way to define them in IL. This feature would work on new runtimes only. We can choose how to make this feature work on the new runtimes. I think relaxing the restriction on byref fields in IL would be the most natural design. This feature would work on new runtimes only. Definitely. If we added support for ref fields then I think it has to be CoreClr only due to the GC tracking issues. The desktop GC wouldn't treat such fields as a strong reference and hence allowing it there would open up fun GC holes. Correct? I think relaxing the restriction on byref fields in IL would be the most natural design. What would happen to Span<T> then? Would it's implementation just be changed to be essentially: ref struct Span<T> { ref T Data; int Length; } CC @JeremyKuhne as I know he's interested in (1) above. I'm more interested in (2) actually. :) Add some notation for disallowing certain parameters / values from being captured. Essentially a way to mark p above as "don't allow capture by ref". For me the driving scenario is passing Span<byte> buffer = stackalloc byte[64] to ref struct methods so they can manipulate/access the parameter without risk of capturing it. @JeremyKuhne I'm more interested in (2) actually. :) Yes of course 😦. This is the downside of using the "all 1." scheme for numbered lists. What would happen to Span then? Would it's implementation just be changed to be essentially: ref struct Span { ref T Data; int Length; } That makes good sense in my mind, because, honestly, from my perspective, although Span and Memory are great, personally I view references-to-structs as being fundamentally more important and more intrinsic than spans/ranges/extents/substrings, therefore I think it makes good sense if the REAL/core feature is ref T x fields and Span<T> becomes only a thin extension to ref fields that merely adds the length, and the real magic would be done in the implementation of ref T x fields and not in (or for) Span<T>. This design appears to be a clean and logical layering of functionality. Considering that references are such a very fundamental thing, it also makes good sense in my mind that ref fields would be emitted as byref fields in IL directly, not emitted as ByReference<T> or other special struct, but if necessary, emitting a special struct is also a workable solution. Ideally I'd favor having IL directly support this feature, but I'm not the expert in CLR. I think it would be an impressively big win for C# if 20000 instances can be garbage-collected with the same cost as 1 object (or rather 1 array). For another example of potential wins, have a look at System.Collections.Generic.SortedSet<T> and SortedDictionary<TKey,TValue> and you can see they contain an internal Node class: internal class Node { public bool IsRed; public T Item; public Node Left; public Node Right; } This Node class could be changed to a struct: internal struct Node { public bool IsRed; public T Item; public ref Node Left; public ref Node Right; } Even if SortedSet and SortedDictionary don't bother adjusting the page/array length dynamically, even if they simply use a constant page length of 10, the number of objects would be reduced to one-tenth of the current implementation with class Node. For another example, have a look at the internal struct MS.Internal.Xml.Cache.XPathNode. Although it is already optimized to a struct instead of class, imagine how much easier, simpler, and cleaner it would have been to write XPathNode if C# allowed XPathNode to contain ref XPathNode x; fields. I can think of numerous examples that would benefit from this feature. C# if 20000 instances can be garbage-collected with the same cost as 1 object (or rather 1 array) The garbage collection cost is proportional to the bytes allocate and collected. The number of objects matters much less. System.Collections.Generic.SortedSet<T> and SortedDictionary<TKey,TValue> MS.Internal.Xml.Cache.XPathNode The ref fields would be allowed in ref-like types only. I do not think converting the structs in these two examples would work in practice. The ref fields would be allowed in ref-like types only. I do not think converting the structs to ref-like struct in these two examples would work in practice. I agree, converting those two examples to stack-only/ref-like structs would not work, but in my proposal, I meant that a NORMAL struct and a normal class would be able to contain a field that contains a reference to some other struct instance stored in an array. My proposal is already possible to do today in the current C#, except that: The syntax is cumbersome. It would be better with syntactic sugar and/or direct support in IL. It could potentially be optimized further, but this optimization is not mandatory. It already works without use of ref struct, ByReference<T>, and Span<T>. It doesn't need any magic structs or special new IL, except if you want to optimize it. For example, consider the SortedSet<T>.ReplaceChildOfNodeOrRoot method. Following I've rewritten this method to use struct Node instead of class Node. The following is how it looks with regular C#, without any new syntactic sugar. As you can see, it already works but the syntax is cumbersome -- it would be better with syntactic sugar and/or direct support in IL. struct ArrayElementReference<T> { readonly T[] Array; readonly int Index; public static bool operator == (ArrayElementReference<T> a, ArrayElementReference<T> b) { return a.Index == b.Index && a.Array == b.Array; } } // struct ArrayElementReference<T> class SortedSet<T> { ArrayElementReference<Node> root; struct Node // NORMAL struct not "ref struct". { bool IsRed; T Item; ArrayElementReference<Node> Left; ArrayElementReference<Node> Right; } void ReplaceChildOfNodeOrRoot(ArrayElementReference<Node> parent, ArrayElementReference<Node> child, ArrayElementReference<Node> newChild) { if (parent.Array != null) { if (parent.Array[parent.Index].Left == child) parent.Array[parent.Index].Left = newChild; else parent.Array[parent.Index].Right = newChild; } else { this.root = newChild; } } static void TestSetLeft(ArrayElementReference<Node> parent, ArrayElementReference<Node> newChild) { parent.Array[parent.Index].Left = newChild; } } // class SortedSet<T> I compiled it using the existing C# compiler and it emits the following IL for the TestSetLeft method: .method static void TestSetLeft ( valuetype ArrayElementReference`1<valuetype Node<!T>> parent, valuetype ArrayElementReference`1<valuetype Node<!T>> newChild ) cil managed { .maxstack 8 ldarg.0 ldfld !0[] valuetype ArrayElementReference`1<valuetype Node<!T>>::Array ldarg.0 ldfld int32 valuetype ArrayElementReference`1<valuetype Node<!T>>::Index ldelema valuetype Node<!T> ldarg.1 stfld valuetype ArrayElementReference`1<valuetype Node<!0>> valuetype Node<!T>::Left ret } So it already works, but how about making some syntactic sugar and/or better IL? Wouldn't it be excellent if the C# compiler allowed us to write the same thing using the following simple syntax? class SortedSet<T> { ref Node root; struct Node // STILL NORMAL struct not "ref struct". { bool IsRed; T Item; ref Node Left; ref Node Right; } void ReplaceChildOfNodeOrRoot(ref Node parent, ref Node child, ref Node newChild) { if (parent != null) { if (parent.Left == child) parent.Left = newChild; else parent.Right = newChild; } else { this.root = newChild; } } static void TestSetLeft(ref Node parent, ref Node newChild) { parent.Left = newChild; } } // class SortedSet<T> The above syntax could produce the same IL as already supported (the IL above). Alternatively, if desired, it could be optimized to produce IL similar to the following, if you're willing to extend IL to support the following "arrayelemref" or similar. .method static void TestSetLeft ( arrayelemref Node<!T> parent, arrayelemref Node<!T> newChild ) cil managed { ldarg.0 ldarg.1 stfld arrayelemref Node<!T> valuetype Node<!T>::Left ret } The following IL shows the struct parameters passed by reference (IL &), but this fails because these parameters pass only a pointer/address, but the method TestSetLeft needs to know both ArrayElementReference.Array and ArrayElementReference.Index in order to set the field Node.Left because Node.Left needs to store ArrayElementReference<Node> not only a pointer/address, because struct Node is a normal struct not ref struct Node {...}. .method static void TestSetLeft ( valuetype Node<!T>& parent, valuetype Node<!T>& newChild ) cil managed { ldarg.0 ldarg.1 stfld Node<!T> Node<!T>::Left // Incorrect. ret } readonly T[] Array; readonly int Index; This is turning one pointer into pointer + index. In practice, it will be two pointers due to alignment. So this would result into more bytes being allocated on GC heap. It is very unlikely to make anything faster. We do not want to allow storing refs on GC heap because of they are very expensive to deal with during garbage collection. If we have allowed it and people started using them, we would have a big problem with GC pause times. It is very unlikely to make anything faster. If it doesn't make anything faster, then why do Dictionary<TKey,TValue> and XPathNode and other examples do it? Dictionary<TKey,TValue>.Entry is a struct. It would be easier and simpler to write Entry as a class, but the cost would be excessive, so the authors of Dictionary wrote it as a struct. Entry.next is meant to be a reference to another instance of struct Entry, but C# or the CLR doesn't support it. I'm not saying Dictionary should be rewritten, rather I just mean it's one example of reducing cost by using structs instead of classes, but currently it is often a headache to use struct instead of class because structs cannot contain references to each other. My proposal gives structs the power of classes but with lower cost, doesn't it? Don't you think SortedSet<T>.Node would be lower cost if it was changed to a struct akin to what Dictionary does? We do not want to allow storing refs on GC heap because of they are very expensive to deal with during garbage collection. But in my proposal, most of these references are self-references -- they point to themselves. Most instances of Node.Left.Array and Node.Right.Array point to the same array object that contains the Node instance. When traversing a graph for GC purposes, if you are currently at object X1, and X1 contains a field that points to X1 (points to itself), then you don't follow or traverse into this link -- you ignore this link. Ignoring a reference isn't expensive, right? Or am I mistaken? If it doesn't make anything faster, then why do Dictionary<TKey,TValue> and XPathNode and other examples do it? Right, they use indices. Indices are cheap both for storage and GC. Ignoring a reference isn't expensive, right? Or am I mistaken? Finding the object that the ref points to is the expensive part. This would need to be done before you can check that the ref is safe to ignore because of it points into the same array. Damn. Would you like the idea better or worse if ArrayElementReference.Array had a way of indicating self-reference without actually storing the address of the self-array? For example, if the address 1 or (UIntPtr)-1 was interpreted to mean self. Alternatively, when ArrayElementReference.Index is negative and ArrayElementReference.Array is null, then it could mean self. I haven't benchmarked the above idea, but I did do a benchmark without the above idea, and my proposal was faster than class Node, but underwhelming and insufficiently compelling :-( Apparently my proposal needs to be adjusted in some way before it could be sufficiently compelling. Hey, I just noticed this: Try compiling the following short program in VS 15.7.6 and when you run it, it throws System.TypeLoadException! Who is the best person to inform about this bug? Seems like a serious bug. class Program { static void Main(string[] args) { Test(); } static SNode Test() { return new SNode(); } struct ArrayElementReference<T> { public T[] Array; public int Index; } struct SNode { ArrayElementReference<SNode> x; } } Unhandled Exception: System.TypeLoadException: Could not load type 'TestConsoleApp.Program+SNode' from assembly 'TestConsoleApp, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'. at TestConsoleApp.Program.Main(String[] args) Know issue: https://github.com/dotnet/coreclr/issues/7957 On second thought, let me tell you the exact benchmark results, and you decide for yourself whether it is a worthwhile improvement. I said "underwhelming" because I was expecting my proposal to be at least 10x faster but it was 3x to 4x faster in this test. Is that worthwhile? More importantly, would it solve the problem for the Unity game engine? Can my proposal be further improved or optimized somehow? I allocated 51_000_000 nodes. Each node contains 2 node references and 48 bytes of integers. When using class Node, it ran for 9.3 seconds. When using struct Node with my proposal, it ran for 2.5 seconds (3.7 times faster). I made no attempt to optimize self-references. The times include the time taken to run: System.GC.Collect(System.GC.MaxGeneration, System.GCCollectionMode.Forced, blocking: true, compacting: true); In both tests, System.GC.GetTotalMemory returns approx 3891 megabytes before GC.Collect. Would anyone like me to email the source code or upload somewhere? Interesting -- a significant part of the reason why my proposal is faster is revealed by System.GC.CollectionCount. After running the benchmark with class Node: Gen 0 collected 649 times. Gen 1 collected 338 times. Gen 2 collected 9 times. After running the benchmark with struct Node: Gen 0 collected 2 times. Gen 1 collected 2 times. Gen 2 collected 2 times. When I reduced the page/array length to 10_000 nodes (causing more arrays to be allocated), the time increased slightly to 2.8 seconds and the number of collections increased slightly: Gen 0 collected 6 times. Gen 1 collected 6 times. Gen 2 collected 6 times. Thus my proposal might solve the problem for the Unity game engine because my proposal causes garbage collection to run much less often. Admittedly it's less beneficial than I presumed it would be. Is it worthwhile? I've brainstormed a few alternative solutions that are also quite interesting to think about: What if a class can have an attribute applied that says that the class should use automatic-reference-counting (like in .NET Native or like in the latest version of MS C++ for UWP apps) instead of the normal CLR GC? MS C++ automatic-reference-counting still suffers the problem of cyclic references, doesn't it? Therefore it would ONLY be used for classes that explicitly request it via an attribute. This would allow people to create particular classes that don't cause garbage collections to run, without resorting to unsafe code. Another alternative: If we could somehow allocate a group of objects inside a "GC compartment/container", and the objects inside a compartment are GC'd on an all-or-nothing basis instead of individually. Any reference to such an object causes all objects in the compartment to be kept alive. Possibly a compartment remains ineligible for GC until the app explicitly/manually marks it as eligible, such as in a IDisposable.Dispose method or in a finalizer. What if the CLR supported an ability to allocate a read-only array of class instances where each element of the array is immediately non-null and cannot be changed to any other reference? System.Array.IsReadOnly == true. These class instances (array elements) may be GC'd on an all-or-nothing basis similar to an array of structs. This feature might only be compatible with classes that have a particular attribute applied, and perhaps such classes can ONLY be allocated in this array manner, never individually. [System.ArrayElementClass] // Means class is restricted to existing as an array element. class MyTest1 { int TestFieldA; MyTest1 Parent; } MyTest1[] ary = new MyTest1[10000]; ary[0].TestFieldA = 123; // Wouldn't cause NullReferenceException. ary[0] = xxxx; // Throws ReadOnlyException. bool b = ary.IsReadOnly; // Returns true. ary[0].Parent = ary[5]; // Supported, unlike if MyTest1 was struct. MyTest1 individualInstance = new MyTest1(); // Disallowed. Every instance of a ArrayElementClass class would contain a CLR-internal read-only field/header that stores the array index or byte offset of the instance/element. Thus if you have a pointer to an instance of an ArrayElementClass, then you can instantly calculate a pointer to the array that contains the instance of the class. i.e. you can instantly recover the array pointer from an element pointer. The GC wouldn't track/determine the reachability of each element/instance of ArrayElementClass, rather it would determine the reachability of the array, akin to how it treats an array of structs. If we could somehow allocate a group of objects inside a "GC compartment/container", and the objects inside a compartment are GC'd on an all-or-nothing basis instead of individually Similar to https://github.com/dotnet/corefx/issues/31643
gharchive/issue
2018-08-06T18:42:10
2025-04-01T06:38:26.024129
{ "authors": [ "JeremyKuhne", "benaadams", "gfoidl", "jaredpar", "jkotas", "verelpode" ], "repo": "dotnet/corefxlab", "url": "https://github.com/dotnet/corefxlab/issues/2417", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
194741194
Create text generation templates for TryParse* invariant overloads, plus tests Create text generation templates for integer parsing. There are four new templates: InvariantUnsigned.tt: Implementations of InvariantParser.Invariant{Utf8, Utf16}.TryParse{Byte, UInt16, UInt32, UInt64} InvariantUnsignedHex.tt Implementations of InvariantParser.Invariant{Utf8, Utf16}.Hex.TryParse{Byte, UInt16, UInt32, UInt64} InvariantSigned.tt: Implementations of InvariantParser.Invariant{Utf8, Utf16}.TryParse{SByte, Int16, Int32, Int64} InvariantSigned.tt: Implementations of InvariantParser.Invariant{Utf8, Utf16}.Hex.TryParse{SByte, Int16, Int32, Int64} The other template, ParserTests.tt in the Tests directory, generates tests for all overloads generated by the above four templates, testing a variety of important scenarios. @KrzysztofCwalina @joshfree @yizhang82 looks good. Thanks!
gharchive/pull-request
2016-12-10T03:16:44
2025-04-01T06:38:26.029953
{ "authors": [ "KrzysztofCwalina", "botaberg" ], "repo": "dotnet/corefxlab", "url": "https://github.com/dotnet/corefxlab/pull/1056", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
540546594
Add Applymethod to PrimitiveDataFrameColumn For issue #2805. This PR adds an Apply<TResult> method to PrimitiveDataFrameColumn that takes a Func<T?, TResult?> and returns a new column with the new type. Example usage (taken from the written unit test): int[] values = { 1, 2, 3, 4, 5 }; var col = new PrimitiveDataFrameColumn<int>("Ints", values); PrimitiveDataFrameColumn<double> newCol = col.Apply(i => i + 0.5d); Squash and merging this. Thank you @zHaytam for the patch!
gharchive/pull-request
2019-12-19T20:31:33
2025-04-01T06:38:26.032033
{ "authors": [ "pgovind", "zHaytam" ], "repo": "dotnet/corefxlab", "url": "https://github.com/dotnet/corefxlab/pull/2807", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
270880267
String.get_Length returns 0 String.get_Length started returning 0 after #4808 , which breaks printing strings and might indicate other problems. It looks like pinning a string and printing it character-by-character still works, so it could be an issue in the frozen string's length or in reading instance fields. I've verified that the frozen string's length is in the right place by pinning it and working backward from the pointer. This is probably an issue calling instance methods or reading instance fields.
gharchive/issue
2017-11-03T05:10:04
2025-04-01T06:38:26.033270
{ "authors": [ "morganbr" ], "repo": "dotnet/corert", "url": "https://github.com/dotnet/corert/issues/4863", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
358864197
CoreCLR R2R testing building against CoreRT framework While testing R2R on an IJW assembly, I saw a failure due to a missing framework method (Marshal.GetExceptionPointers). That method is in the CoreCLR framework, but not in CoreRT. While they should be very similar, we should really test against the CoreCLR framework to catch discrepancies. Simon fixed this.
gharchive/issue
2018-09-11T02:37:25
2025-04-01T06:38:26.034358
{ "authors": [ "MichalStrehovsky", "morganbr" ], "repo": "dotnet/corert", "url": "https://github.com/dotnet/corert/issues/6316", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
429932966
Confusing description of the background garbage collector Quote: There is no setting for background garbage collection; it is automatically enabled with concurrent garbage collection. Background garbage collection is a replacement for concurrent garbage collection. The two sentences contradict each other. Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: d8948cf5-0aec-308c-8046-965cc15763b2 Version Independent ID: a30a0be5-b741-2a9c-e997-55d6d9960ca9 Content: Fundamentals of garbage collection Content Source: docs/standard/garbage-collection/fundamentals.md Product: dotnet Technology: dotnet-standard GitHub Login: @rpetrusha Microsoft Alias: ronpet @bartlomiej-dawidow Thanks for reporting that -- I agree the language was confusing and I've submitted PR #14320 to clarify it.
gharchive/issue
2019-04-05T20:59:39
2025-04-01T06:38:26.095905
{ "authors": [ "bartlomiej-dawidow", "tdykstra" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/11697", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
195855775
Add more documentation for dotnet test and dotnet vstest for Preview 3+ bits There are certain features that do not exist in the dotnet test and dotnet vstest docs that need to be added, such as the ability to use .runsettings files, how filtering of tests work etc. /cc @mairaw is this partly covered here: https://aka.ms/vstest-filtering? I also noticed that we don't have dotnet vstest documented. I'll add a new issue for that. Is code coverage supported outside of windows yet? thats another good thing to know. @dmccaffery No, I don't believe it is. @cartermp :feelsgood: https://aka.ms/vstest-filtering links now points to a doc page now /cc @samadala Closing this one in favor of #2382 that has specific action items for dotnet test docs.
gharchive/issue
2016-12-15T16:47:44
2025-04-01T06:38:26.099248
{ "authors": [ "blackdwarf", "cartermp", "dmccaffery", "mairaw", "sbaid" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/1339", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
543167612
is it possible to declare a variable static and initialize it later on? Hi to all, is it possible to first declare a variable static and then, on the following lines of code, to initialize this variable? I don't know if this is the proper place for this question, but I'm asking because I'd like to initialize the variable to 0, and every time the procedure is called, the value gets back to 0. Since the procedure will be called many times, I don't want to redeclare and reinitialize the variable so that I'll be preventing overhead on my application. Thanks Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 0b593e2c-d31c-394b-bffd-92ffdb6d6a1e Version Independent ID: 6b4ed15d-ff3a-6392-0204-6c2436603302 Content: Static - Visual Basic Content Source: docs/visual-basic/language-reference/modifiers/static.md Product: dotnet-visualbasic GitHub Login: @KathleenDollard Microsoft Alias: kdollard Why just declare it normally, without the Static keyword ? Dim yourVariable As Integer = 0 I want to declare static because so that the variable doesn't have to be redeclared and reinitialized on every procedure call (there will be many). It will do be collected by the GC according to this article (https://docs.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/declared-elements/lifetime) - because the procedure is not Shared. I still feel that the first approach without Static is much better. But anyways, I've shown the two options. You may need to ask in Stackoverflow on which option is better and why, or wait for someone here who can explain that in deep details. Well, I want to do this: Static myVariable As Integer myVariable = 0 ' block using myVariable ' end block ' variable has to get back to 0 on the next procedure call. I prefer using my option if it no difference on the application performance. Thank you for your reply. @KathleenDollard Can you provide the deeper details on how VB manages storage for local variables declared static? My thought is that there is no real performance improvement in this case, because the type of Counter is a value type. Is that analysis correct?
gharchive/issue
2019-12-28T13:33:59
2025-04-01T06:38:26.107358
{ "authors": [ "BillWagner", "Youssef1313", "viniciusvw22" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/16430", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
582309142
libicu not found when using zypper @SchoolGuy commented on Mon Mar 16 2020 Currently the command sudo zypper install libicu will not work because suse uses a different naming schema for this. Since I don't know what exact package is needed for .NET/.NET Core here is the link to the OBS Repository where all packages are built: https://build.opensuse.org/package/show/openSUSE%3AFactory/icu Dokumentdetails ⚠ Bearbeiten Sie diesen Abschnitt nicht. Er ist für die Verknüpfung von docs.microsoft.com zum GitHub-Artikel erforderlich. ID: b6baea63-f650-e232-854b-c1722ed72543 Version Independent ID: 56c0bff6-2630-06f8-9c86-e81ca292fd2f Content: Installieren von .NET Core auf openSUSE 15 mit einem Paket-Manager (.NET Core) Content Source: docs/core/install/linux-package-manager-opensuse15.md Product: dotnet-core GitHub Login: @Thraka Microsoft Alias: adegeo @srvbpigh commented on Mon Mar 16 2020 Hello, @SchoolGuy Thank you for your feedback. We are actively reviewing your comments and will get back to you soon. Kind regards, Microsoft DOCS International Team Ping @dagood Currently the command sudo zypper install libicu will not work because suse uses a different naming schema for this. What distro are you using? SUSE primarily produces SLES, which we have different instructions for: https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-sles15 https://docs.microsoft.com/en-us/dotnet/core/install/linux-package-manager-sles12 For now I'll assume openSUSE 15 because that's the linked doc. I don't see any issue currently, starting from a fresh Docker container: $ docker run -it --rm opensuse/leap:15.1 bash -c 'zypper install -y libicu' Building repository 'Non-OSS Repository' cache ........................................................................................................[done] Building repository 'Main Repository' cache ...........................................................................................................[done] Building repository 'Main Update Repository' cache ....................................................................................................[done] Building repository 'Update Repository (Non-Oss)' cache ...............................................................................................[done] Loading repository data... Reading installed packages... 'libicu' not found in package names. Trying capabilities. Resolving package dependencies... The following 3 NEW packages are going to be installed: libicu60_2 libicu60_2-ledata timezone 3 new packages to install. Overall download size: 8.2 MiB. Already cached: 0 B. After the operation, additional 31.6 MiB will be used. Continue? [y/n/v/...? shows all options] (y): y ... Checking for file conflicts: ..........................................................................................................................[done] (1/3) Installing: libicu60_2-ledata-60.2-lp151.3.7.1.noarch ...........................................................................................[done] (2/3) Installing: timezone-2019c-lp151.2.6.1.x86_64 ...................................................................................................[done] (3/3) Installing: libicu60_2-60.2-lp151.3.7.1.x86_64 ..................................................................................................[done] @SchoolGuy, can you please post more info, including the command and its output? Note that we depend on the "Provides" clause of the libicu package, which is due to be removed, but it hasn't been removed yet in versions of openSUSE that .NET Core supports. See https://github.com/dotnet/runtime/issues/913 for more info. /cc @NikolaMilosavljevic @leecow @nakarnam @dagood Yes I am using Tumbleweed but since openSUSE 16/SLES 16 (or however we will name it) will be again branched from Factory aka Tumbleweed the problem will resurface. So even if we close this I digged you up another problem because starting from Leap 15.2/SLES 15 SP2 you will have the same problem again. Here the specfile for SLES 15 SP2: https://build.opensuse.org/package/view_file/SUSE:SLE-15-SP2:GA/icu/icu.spec?expand=1 Here the specfile for Leap 15.2: https://build.opensuse.org/package/view_file/openSUSE:Leap:15.2/icu/icu.spec?expand=1 > sudo zypper in libicu [sudo] Passwort für root: Metadaten von Repository 'Haupt-Repository (NON-OSS)' abrufen ...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Cache für Repository 'Haupt-Repository (NON-OSS)' erzeugen ..............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Metadaten von Repository 'Haupt-Repository (OSS)' abrufen ...............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Cache für Repository 'Haupt-Repository (OSS)' erzeugen ..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Metadaten von Repository 'openSUSE Tools' abrufen .......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Cache für Repository 'openSUSE Tools' erzeugen ..........................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Metadaten von Repository 'openSUSE-20200309-0' abrufen ..................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Cache für Repository 'openSUSE-20200309-0' erzeugen .....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................[fertig] Repository-Daten werden geladen... Installierte Pakete werden gelesen... 'libicu' wurde in den Paketnamen nicht gefunden. Fähigkeiten werden durchsucht. Keine Anbieter von 'libicu' gefunden. Paketabhängigkeiten werden aufgelöst... Keine auszuführenden Aktionen. German output but I think you will get the deal. Ah, yep, .NET Core doesn't support Tumbleweed because we can't keep up with issues like this. We just need some libicu installed, so whatever libicu##_# is available is what you need, then you can ignore the .NET Core package's dependency. We're aware this is happening (hence https://github.com/dotnet/runtime/issues/913) but I couldn't figure out how to see what changes are in which release. Thanks for the additional info there. I've opened an updated issue to track this: https://github.com/dotnet/runtime/issues/33672. Closing: we don't support Tumbleweed. Thanks for the heads up on the timeframe for this change though.
gharchive/issue
2020-03-16T13:47:34
2025-04-01T06:38:26.122294
{ "authors": [ "CeciAc", "SchoolGuy", "Thraka", "dagood" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/17470", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
372450106
Which element should be used in config file according the Example? In the example it use the start element system.identityModel but the closing tag is microsoft.identityModel. Which one should be used? Document Details ⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking. ID: 0889facb-5306-2161-9186-9d138512ce48 Version Independent ID: 2315afd9-eb82-5974-8654-37b9914cb394 Content: <claimsAuthenticationManager> Content Source: docs/framework/configure-apps/file-schema/windows-identity-foundation/claimsauthenticationmanager.md Product: dotnet-framework GitHub Login: @BrucePerlerMS Microsoft Alias: dotnetcontent Looks like a duplicate of #8544 Closing as duplicate. Added the duplicate label.
gharchive/issue
2018-10-22T09:26:02
2025-04-01T06:38:26.127549
{ "authors": [ "Thraka", "kwlin", "mairaw", "mikkelbu" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/issues/8545", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
246489962
move serialization docs Fixes Part 1 of #2770 @rpetrusha good catch on those links! Even though the comments were not really related to moving the docs, I made the fixes. Please review. I've done some global fixes, so the number of files impacted is probably bigger than the number of files you've given feedback to. Thanks for making the additional changes, @mairaw. This looks really good. It's ready to merge when you want to. I've noticed I've missed one of the serializer msdn links (my VS Code was super slow yesterday). Fixed that one and I'll merge if builds looks good. Thanks @rpetrusha!
gharchive/pull-request
2017-07-29T01:16:42
2025-04-01T06:38:26.129607
{ "authors": [ "mairaw", "rpetrusha" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/2780", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1401979289
Move file keyword into contextual keyword list C# 11 declares a new feature using a new keyword file to restrict the accessibility. It should be a contextual keyword, not a predefined keyword. Page: C# Keywords Content source: docs/csharp/language-reference/keywords/index.md This is my first PR. If there is something wrong with my operation or modification, please tell me :D C# 11 declares a new feature using a new keyword file to restrict the accessibility. It should be a contextual keyword, not a predefined keyword. Page: C# Keywords Content source: docs/csharp/language-reference/keywords/index.md This is my first PR. If there is something wrong with my operation or modification, please tell me :D I think this is an access modifier, like private or public, and it scopes the access to the type within the file itself. Therefore, I believe this belongs next to those other access modifiers. @BillWagner would know for sure. The speclet doesn't explicitly say if file is a keyword or a contextual keyword. I think it's a contextual keyword. Tagging @RikkiGibson to make sure. file is a contextual keyword, but types named file are blocked from being declared in the language. We think it's important for people to still be able to declare a variable like Stream file = ...; hi everybody :) I found this: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/file This passage has mentioned that file is a contextual keyword. Beginning with C# 11, the file contextual keyword is a type access modifier. I believe Roslyn team was always avoiding to say that file is an access modifier. @RikkiGibson Could you confirm please? Thanks! Yeah, we use the term file-local instead of referring to file accessibility. We also avoid saying file scope because we don't want the feature to be compared to file-scoped namespaces. I would like for us to revisit our language around accessibility, it seems like what we've come up with really doesn't match how users think of it, but that decision should be made by the LDM and for now the docs should probably reflect the existing design.
gharchive/pull-request
2022-10-08T15:25:26
2025-04-01T06:38:26.137767
{ "authors": [ "BillWagner", "IEvangelist", "RikkiGibson", "SunnieShine", "Youssef1313" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/31660", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1477711442
Type discriminator order Fixes #32789. It's only this morning that I've come across this page in the documentation and given it a read through... I find it really surprising and disappointing that the words "at the start of the JSON object" were conceived and approved -- and, furthermore, that this polymorphic feature was ever released in the fragile form it has been! A JSON object is an unordered collection of name/value pairs (see json.org). This feature is not standards compliant and cannot be guaranteed to survive intermediary processing that uses JSON I/O. I can't imagine how many software systems are going to be badly architected by those trusting the underlying data encodings because this was "rubber stamped" by the official .NET / Microsoft team. 🤯
gharchive/pull-request
2022-12-05T23:38:30
2025-04-01T06:38:26.140221
{ "authors": [ "QuintinWillison", "gewarren" ], "repo": "dotnet/docs", "url": "https://github.com/dotnet/docs/pull/32887", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
928584470
Diagnostic port connect mode may not work in Kubernetes Mounting the /tmp path between containers and using connect mode (the default mode) may not discover processes correctly. I validated with the customer that /tmp was mounted and that the event pipe socket from the application container was visible to dotnet-monitor. This target application is .NET 5; we were able to avoid the issue by using listen mode. Was there anything interesting about the volume? Was it an emptyDir or something else? It was an emptyDir if I recall correctly. I'm going to try to reproduce the issue myself and see what happens.
gharchive/issue
2021-06-23T19:46:24
2025-04-01T06:38:26.207734
{ "authors": [ "jander-msft", "shirhatti" ], "repo": "dotnet/dotnet-monitor", "url": "https://github.com/dotnet/dotnet-monitor/issues/491", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
811584135
New Advanced data options dialog Label shouldn't be selectable twice Alignment is off in this control :blush: Alignment fixed in https://github.com/dotnet/machinelearning-tools/pull/913 This issue can be repro on environment: Windows 10 Enterprise, Version 20H2 ML.Net Model Builder (Preview): 16.5.0.2115505 Microsoft Visual Studio Enterprise 2019: 16.9.4 .Net: 5.0.202 Dadaset: https://testpass.blob.core.windows.net/test-pass-data/issues.tsv.txt This issue can't be repro on: Windows 10 Enterprise, Version 20H2 ML.Net Model Builder (Preview): 16.5.21.2122301 Microsoft Visual Studio Enterprise 2019: 16.9.4 .Net: 5.0.202 Main branch: https://privategallery.blob.core.windows.net/gallery/refs/heads/main/atom.xml Steps: Create new C# console app with .Net 5.0; Add model builder by right click on the project; Navigate to Data page after selected scenario and environment; Open Advanced data options dialog, the "Save" button remain disable if you select multi "Label" purpose.
gharchive/issue
2021-02-19T00:34:21
2025-04-01T06:38:26.249284
{ "authors": [ "beccamc", "vzhuqin" ], "repo": "dotnet/machinelearning-modelbuilder", "url": "https://github.com/dotnet/machinelearning-modelbuilder/issues/1245", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1052507372
ML Sentiment Model does not populate score Microsoft Visual Studio Community 2022 Version 17.1.0 Preview 1.0 VisualStudio.17.Preview/17.1.0-pre.1.0+31903.286 Microsoft .NET Framework Version 4.8.04161 Installed Version: Community .NET Core Debugging with WSL 1.0 .NET Core Debugging with WSL ADL Tools Service Provider 1.0 This package contains services used by Data Lake tools ASA Service Provider 1.0 ASP.NET and Web Tools 2019 17.1.125.8155 ASP.NET and Web Tools 2019 ASP.NET Web Frameworks and Tools 2019 17.1.125.8155 For additional information, visit https://www.asp.net/ Azure App Service Tools v3.0.0 17.1.125.8155 Azure App Service Tools v3.0.0 Azure Data Lake Tools for Visual Studio 2.6.4000.0 Microsoft Azure Data Lake Tools for Visual Studio Azure Functions and Web Jobs Tools 17.1.125.8155 Azure Functions and Web Jobs Tools Azure Stream Analytics Tools for Visual Studio 2.6.4000.0 Microsoft Azure Stream Analytics Tools for Visual Studio C# Tools 4.1.0-1.21551.6+e4419d6f6792da7011c8589ba118f59d830ca72f C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used. Common Azure Tools 1.10 Provides common services for use by Azure Mobile Services and Microsoft Azure Tools. Cookiecutter 17.0.21295.2 Provides tools for finding, instantiating and customizing templates in cookiecutter format. Fabric.DiagnosticEvents 1.0 Fabric Diagnostic Events Microsoft Azure Hive Query Language Service 2.6.4000.0 Language service for Hive query Microsoft Azure Service Fabric Tools for Visual Studio 17.0 Microsoft Azure Service Fabric Tools for Visual Studio Microsoft Azure Stream Analytics Language Service 2.6.4000.0 Language service for Azure Stream Analytics Microsoft Azure Tools for Visual Studio 2.9 Support for Azure Cloud Services projects Microsoft JVM Debugger 1.0 Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines Microsoft Library Manager 2.1.134+45632ee938.RR Install client-side libraries easily to any web project Microsoft MI-Based Debugger 1.0 Provides support for connecting Visual Studio to MI compatible debuggers Microsoft Visual Studio Tools for Containers 1.2 Develop, run, validate your ASP.NET Core applications in the target environment. F5 your application directly into a container with debugging, or CTRL + F5 to edit & refresh your app without having to rebuild the container. Node.js Tools 1.5.31027.1 Commit Hash:dac60d9b246a1d6a5daf23d223c933dbe1518465 Adds support for developing and debugging Node.js apps in Visual Studio NuGet Package Manager 6.1.0 NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/ ProjectServicesPackage Extension 1.0 ProjectServicesPackage Visual Studio Extension Detailed Info Python - Profiling support 17.0.21295.2 Profiling support for Python projects. Python with Pylance 17.0.21295.2 Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers. Razor (ASP.NET Core) 17.0.0.2152601+724154d925d7d9d26ebf8a73a66d5219aa320400 Provides languages services for ASP.NET Core Razor. SQL Server Data Tools 17.0.62110.20190 Microsoft SQL Server Data Tools ToolWindowHostedEditor 1.0 Hosting json editor into a tool window TypeScript Tools 17.0.1029.2001 TypeScript Tools for Microsoft Visual Studio Visual Basic Tools 4.1.0-1.21551.6+e4419d6f6792da7011c8589ba118f59d830ca72f Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used. Visual F# Tools 17.1.0-beta.21525.2+46af4a248255f4af2284883f48983fe7dd07a760 Microsoft Visual F# Tools Visual Studio Code Debug Adapter Host Package 1.0 Interop layer for hosting Visual Studio Code debug adapters in Visual Studio Describe the bug The Sentiment score is not being populated Console.WriteLine($"Data to Analyze: ", sampleData.Col0); Console.WriteLine($"\n\nPredicted Sentiment: {predictionResult.Prediction}\n\n"); Console.WriteLine($"\n\nPredicted score: {predictionResult.Score}\n\n"); Debugger shows: "\n\nPredicted score: System.Single[]\n\n" To Reproduce Steps to reproduce the behavior: Run the demo as is, behavior does not change with sentiment Expected behavior A sentiment floating point score between {0.0..1.0} Screenshots Sorry for hitting this issue. Data classification we only show the predicted class if you want to see the probability of each label for a sample data you can look at on evaluating page. Do you want to see the probability of each class in Program.cs? @briacht what is your thought? @zewditu Was this a bug in code gen that has been fixed? yes, it is already fixed This should be fixed in the latest release. Please ping me if you are still having problems. Thanks!
gharchive/issue
2021-11-13T01:13:34
2025-04-01T06:38:26.267443
{ "authors": [ "beccamc", "johndohoneyjr", "zewditu" ], "repo": "dotnet/machinelearning-modelbuilder", "url": "https://github.com/dotnet/machinelearning-modelbuilder/issues/1915", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1971715430
Migrate Microsoft.Bcl.HashCode Includes history from the old release/3.1 branch. The latest available package in nuget.org is https://www.nuget.org/packages/Microsoft.Bcl.HashCode/1.1.1 and I was able to confirm that my local build successfully generated package 1.1.2. By the way, NuGet.config is still using the dotnet6 transport. Should I update it to dotnet9? Need this merged first: https://github.com/dotnet/maintenance-packages/pull/19 . Once merged, I will rebase this PR. This PR should now be unblocked. No squash merge! No squash merge! I don't have permission to merge-commit, unfortunately. Even with elevation.
gharchive/pull-request
2023-11-01T05:12:52
2025-04-01T06:38:26.290312
{ "authors": [ "ViktorHofer", "carlossanlop" ], "repo": "dotnet/maintenance-packages", "url": "https://github.com/dotnet/maintenance-packages/pull/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2389437998
Added basic extract to component functionality on cursor over html tag ### Summary of the changes Part of the implementation of the Extract To Component code action. Functional in one of the two cases, when the user is not selecting a certain range of a Razor component, but rather when the cursor is on either the opening or closing tag. Holding off approval just because you'll have to merge in the main branch and switch to System.Text.Json in a few places. Sorry about that! I just pushed the main changes into the feature branch. @marcarro you can just do these commands in your local branch and then handle merge commits. I can help you with that today or Friday git fetch upstream features/extract-to-component:features/extract-to-component git merge features/extract-to-component @dotnet/razor-compiler , PTAL (going into a feature branch)
gharchive/pull-request
2024-07-03T20:44:23
2025-04-01T06:38:26.431505
{ "authors": [ "marcarro", "phil-allen-msft", "ryzngard" ], "repo": "dotnet/razor", "url": "https://github.com/dotnet/razor/pull/10578", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2417454430
Turn off trailing whitespace triming in strings We have tests with baselines that have trailing whitespaces. Our trimTrailingWhitespace setting means that those will get modified automatically, breaking those tests. To fix that, I implemented a vscode feature to avoid triming inside regex and strings, so let's use that. @dotnet/razor-compiler for an extremely small review.
gharchive/pull-request
2024-07-18T21:31:44
2025-04-01T06:38:26.432741
{ "authors": [ "333fred" ], "repo": "dotnet/razor", "url": "https://github.com/dotnet/razor/pull/10646", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
204511426
"SDKs" show up as packages when they are indirect I think this SDK thing is a leaky abstraction, for example, .NETStandard.Library shows up as SDK when it top level, but when it's a indirect, it shows up a package: i think everything is as expected. in your screenshot we have an SDK X and it's dependencies, which include an SDK Y. It might have a different icon though when displayed as dependency. Btw, i just added to my project and it is not marked as implicit at all , and is resolved as package - what did you mean "it is SDK when it is top level"? <PackageReference Include="NETStandard.Library" Version="1.6.0" /> That's my point how we marked SDK'd as SDKs is leaky - it only works if all of them are marked implicit, here .NETStandard.LIbrary is pulled it because of it's referenced by a implicit package. @abpiskunov When you use a console app, it shows up as child of "Microsoft.NETCore.App". But when you use a class library, it shows up directly under "SDK", thus it shows up at top level. dupe of RTM approved issue https://github.com/dotnet/roslyn-project-system/issues/1456 , we will be hiding all default implicit packages and show them as SDK only
gharchive/issue
2017-02-01T07:04:04
2025-04-01T06:38:26.453713
{ "authors": [ "abpiskunov", "davkean", "fubar-coder" ], "repo": "dotnet/roslyn-project-system", "url": "https://github.com/dotnet/roslyn-project-system/issues/1414", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
602089492
Win32Exception with Dapr Describe the bug An exception occurs when we launch tye run inside the sample repo for Dapr To Reproduce Download the sample for Dapr here, and run tye run (I just followed the instructions into the readme) Further technical details tye --version dapr --version dotnet --version Looks like it can't find the daprd process? cc @rynowak @ChrisProlls - do you have dapr on your path? can you can run daprd -version? I reinstall dapr and all seems to work now, daprd -version is working and so tye run. Thanks !
gharchive/issue
2020-04-17T16:19:49
2025-04-01T06:38:27.037452
{ "authors": [ "ChrisProlls", "davidfowl", "rynowak" ], "repo": "dotnet/tye", "url": "https://github.com/dotnet/tye/issues/374", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
627026819
Logging Extension - Seq What should we add or change to make your life better? Create an extension to push logs to Seq This would be similar to the existing extension built into Tye: Tye can push logs to Elastic stack easily without the need for any SDKs or code changes in your services. Would be implemented in a method like this into tye.yaml: extensions: - name: seq Why is this important to you? I have implemented Seq into my microservice project, however seeing how simple it was to integrate logging with the built-in Tye method was eye opening. Would be great to build a lot more extensions like this. E.g. grafana, prometheus, jaeger, For context -- I have used this library in the past and it made configuring logging very simple with json-like settings to magically determine where to send logs. This could be used as a reference for implementing this feature: https://github.com/convey-stack/Convey.Logging https://github.com/convey-stack/Convey.Logging/blob/master/src/Convey.Logging/Extensions.cs Updated with an (not functional) attempt to implement this. It is definitely a good starting point though Updated with an implementation + docs
gharchive/issue
2020-05-29T06:17:30
2025-04-01T06:38:27.041339
{ "authors": [ "razfriman" ], "repo": "dotnet/tye", "url": "https://github.com/dotnet/tye/issues/512", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1076592763
Command Select Project fails with extension manifest validation error Issue Description Steps to Reproduce Ctrl-Shift-P: "OmniSharp: Select project" Error is shown Expected Behavior Opens up the project picker. Actual Behavior Command 'OmniSharp: Select Project' resulted in an error (Extension 'ms-dotnettools.csharp' CANNOT use API proposal: quickPickSeparators. Its package.json#enabledApiProposals-property declares: [] but NOT quickPickSeparators. The missing proposal must be added... Logs OmniSharp log Post the output from Output-->OmniSharp log here C# log Post the output from Output-->C# here Environment information VSCode version: 1.63.0 C# Extension: 1.23.17 Mono Information OmniSharp using built-in mono Dotnet Information .NET SDK (reflecting any global.json): Version: 6.0.100 Commit: 9e8b04bbff Runtime Environment: OS Name: ubuntu OS Version: 20.04 OS Platform: Linux RID: ubuntu.20.04-x64 Base Path: /usr/share/dotnet/sdk/6.0.100/ Host (useful for support): Version: 6.0.0 Commit: 4822e3c3aa .NET SDKs installed: 2.1.818 [/usr/share/dotnet/sdk] 3.1.415 [/usr/share/dotnet/sdk] 5.0.403 [/usr/share/dotnet/sdk] 6.0.100-rc.1.21458.32 [/usr/share/dotnet/sdk] 6.0.100 [/usr/share/dotnet/sdk] .NET runtimes installed: Microsoft.AspNetCore.All 2.1.30 [/usr/share/dotnet/shared/Microsoft.AspNetCore.All] Microsoft.AspNetCore.App 2.1.30 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 3.1.21 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 5.0.12 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 6.0.0-rc.1.21452.15 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.AspNetCore.App 6.0.0 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App] Microsoft.NETCore.App 2.1.30 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 3.1.21 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 5.0.12 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 6.0.0-rc.1.21451.13 [/usr/share/dotnet/shared/Microsoft.NETCore.App] Microsoft.NETCore.App 6.0.0 [/usr/share/dotnet/shared/Microsoft.NETCore.App] To install additional .NET runtimes or SDKs: https://aka.ms/dotnet-download Visual Studio Code Extensions Extension Author Version azure-account ms-vscode 0.9.11 azure-pipelines ms-azure-devops 1.195.0 cmake twxs 0.0.17 cmake-tools ms-vscode 1.9.2 cpptools ms-vscode 1.8.0-insiders2 csharp ms-dotnettools 1.23.17 dotnet-test-explorer formulahendry 0.7.7 EditorConfig EditorConfig 0.16.4 jupyter ms-toolsai 2021.11.1001550889 jupyter-renderers ms-toolsai 1.0.4 live-server ms-vscode 0.2.11 LiveServer ritwickdey 5.6.1 prettier-vscode esbenp 9.0.0 python ms-python 2021.12.1559732655 sass-indented syler 1.8.18 test-adapter-converter ms-vscode 0.1.4 vetur octref 0.35.0 vscode-azureappservice ms-azuretools 0.23.0 vscode-azurefunctions ms-azuretools 1.6.0 vscode-azureresourcegroups ms-azuretools 0.4.0 vscode-azurestaticwebapps ms-azuretools 0.9.0 vscode-bicep ms-azuretools 0.4.1008 vscode-docker ms-azuretools 1.18.0 vscode-dotnet-runtime ms-dotnettools 1.5.0 vscode-eslint dbaeumer 2.2.2 vscode-kubernetes-tools ms-kubernetes-tools 1.3.4 vscode-pylance ms-python 2021.12.1 vscode-test-explorer hbenl 2.21.1 vscode-yaml redhat 1.2.2 vsliveshare ms-vsliveshare 1.0.5196 xml DotJoshJohnson 2.5.1 Getting the same error: Command 'OmniSharp: Select Project' resulted in an error (Extension 'ms-dotnettools.csharp' CANNOT use API proposal: quickPickSeparators. Its package.json#enabledApiProposals-property declares: [] but NOT quickPickSeparators. The missing proposal MUST be added and you must start in extension development mode or use the following command line switch: --enable-proposed-api ms-dotnettools.csharp) Issue was fixed in https://github.com/OmniSharp/omnisharp-vscode/pull/4914. Until a new release is ready you can install this prerelease that includes the fix https://github.com/OmniSharp/omnisharp-vscode/releases/tag/v1.23.18-beta2 Does anyone know how to install an old working version until a fix is released? My VS Code is basically bricked now. (Installing a prerelease sounds a bit scary) Does anyone know how to install an old working version until a fix is released? My VS Code is basically bricked now. (Installing a prerelease sounds a bit scary) Old version of omnisharp won't work, I believe it was broken by vscode 1.63, so you either move back to 1.62 or install the pre-release of omnisharp. Does anyone know how to install an old working version until a fix is released? My VS Code is basically bricked now. (Installing a prerelease sounds a bit scary) Omnisharp can't be downgrade, but downgrading vscode you resolve this problem. https://code.visualstudio.com/updates/v1_62 Even after installing the latest prerelease this issue still occurs. Is there prerelease versions which don't include the fix and I need to download exactly 1.23.18? I found that the solution was to completely uninstall the old version and then choose the extension vsix file for my platform from the beta release page and install that: https://github.com/OmniSharp/omnisharp-vscode/releases/tag/v1.23.18-beta2 There have been lots of improvements since this issue was opened. Please open a new issue with logs if you are still running into this.
gharchive/issue
2021-12-10T08:58:23
2025-04-01T06:38:27.070241
{ "authors": [ "CEbbinghaus", "JoeRobich", "OdisBy", "PsychoNineSix", "TanayParikh", "jesperkristensen", "mauve", "tabish121" ], "repo": "dotnet/vscode-csharp", "url": "https://github.com/dotnet/vscode-csharp/issues/4944", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1925179520
Always build release and prerelease VSIXs and allow overriding build … …number so that we can ship from any branch Test builds: https://dnceng.visualstudio.com/internal/_build/results?buildId=2283050&view=results https://dnceng.visualstudio.com/internal/_build/results?buildId=2283875&view=results https://dnceng.visualstudio.com/internal/_build/results?buildId=2283873&view=results This should not merge until we're ready to do a new release out of the release branch.
gharchive/pull-request
2023-10-04T00:41:02
2025-04-01T06:38:27.072810
{ "authors": [ "dibarbet" ], "repo": "dotnet/vscode-csharp", "url": "https://github.com/dotnet/vscode-csharp/pull/6481", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2132786673
Operating System: windows Problem encountered on https://dotnet.microsoft.com/en-us/learn/aspnet/blazor-tutorial/install Operating System: windows Provide details about the problem you're experiencing. Include your operating system version, exact error message, code sample, and anything else that is relevant. This issue was closed because there was no response to a request for more information for 10 days.
gharchive/issue
2024-01-31T11:47:01
2025-04-01T06:38:27.083630
{ "authors": [ "HusanjonDeveloper", "mairaw" ], "repo": "dotnet/website-feedback", "url": "https://github.com/dotnet/website-feedback/issues/22", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2341807690
⚠️ Adacta-fintech.com has degraded performance In 98cf739, Adacta-fintech.com (https://www.adacta-fintech.com) experienced degraded performance: HTTP code: 200 Response time: 5929 ms Resolved: Adacta-fintech.com performance has improved in 5953693 after 7 minutes.
gharchive/issue
2024-06-08T19:47:16
2025-04-01T06:38:27.137029
{ "authors": [ "dotsi" ], "repo": "dotsi/aduptime", "url": "https://github.com/dotsi/aduptime/issues/516", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
322375823
nn_radius() returns no more than 32 neighbors nn_radius only gives me 32 neighbors max. Despite max_nn=-1 or 1024 and also params.max_neighbors doesn't work? Apparently FLANNIndex.params['checks'] is also a limiting factor. It defaults to 32, and increasing it increases the maximum for max_nn.
gharchive/issue
2018-05-11T17:32:19
2025-04-01T06:38:27.148458
{ "authors": [ "flgw", "robbmcleod" ], "repo": "dougalsutherland/cyflann", "url": "https://github.com/dougalsutherland/cyflann/issues/28", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
1560637497
Add infra-automation as codeowner for CircleCI and GitHub Actions configs Jira link: https://doximity.atlassian.net/browse/IA-997 Overview The infra automation team would like more visibility for CI config changes as they occur. Hence, adding ourselves as CODEOWNERS for: .circleci directory, aka CircleCI configs .github/workflows directory, aka GitHub Actions workflows .github/actions directory, aka GitHub Actions actions This PR was automatically generated with sourcegraph. Created by Sourcegraph batch change TheMetalCode/circleci-and-gh-actions-codeowners-2. doxbot codereview doxbot codereview --party
gharchive/pull-request
2023-01-28T02:01:14
2025-04-01T06:38:27.186385
{ "authors": [ "TheMetalCode" ], "repo": "doximity/omniauth-doximity-oauth2", "url": "https://github.com/doximity/omniauth-doximity-oauth2/pull/14", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
96838141
check for existence of provider_name This fixes an error that was causing the QA interface to crash if it tried to call Krikri::Provider.name on an indexed provider that was missing a value for provider_name. Thanks, @AudreyAltman - minor comment, expanding on above - what will get returned and rendered in the nav if a Krikri::Provider's name is nil? I addressed the comment from @anarchivist and :squash:ed :clap:
gharchive/pull-request
2015-07-23T15:18:30
2025-04-01T06:38:27.233048
{ "authors": [ "AudreyAltman", "anarchivist", "no-reply" ], "repo": "dpla/KriKri", "url": "https://github.com/dpla/KriKri/pull/180", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2621625437
Confused by --nOT and --nOB and the examples provided in the issues section Hi, I am confused by the definition of --nOT and --nOB and some examples provided by Devon here. My starting reads are 150bp. I trimmed the reads before alignment using pretty standard trimming to remove possible adapter sequences and low-quality bases. The average length of the reads is now ~135bp. So, for MethylDackel extract, I was planning to use --nOT and --nOB to define the inclusion bases. --nOT INT,INT,INT,INT Like --OT, but always exclude INT bases from a given end from inclusion, regardless of the length of an alignment. This is useful in cases where reads may have already been trimmed to different lengths but still nonetheless contain a certain length bias at one or more ends. My understanding of these options (and as somebody else mentioned in another thread) is that if I want to exclude 5bp from each end of the reads, I would use this: --nOT 5,5,5,5 However, in #102, Devon mentions: “Assuming you want to exclude the first 10 bases produced by the sequencer, the --nOT 10,0,0,140 --nOB 10,0,0,140 would do it (presuming you originally had 150 base reads).” Was Devon confused with --OT and --OB, or am I not understanding how --nOT and --nOB work? Thanks! I did a bad job when writing the documentation for this. What I wrote in #102 is correct, I'll make a mental note to clarify this in the documentation.
gharchive/issue
2024-10-29T15:34:05
2025-04-01T06:38:27.256709
{ "authors": [ "Flope", "dpryan79" ], "repo": "dpryan79/MethylDackel", "url": "https://github.com/dpryan79/MethylDackel/issues/163", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
519948419
Dracula for Vimium-FF (Firefox addon) I made a custom CSS for the Firefox addon Vimium-FF. If you want, it could be added to the collection of Dracula themes. My repo is accessible here: https://github.com/Trollwut/vimium-dracula Hey @Trollwut 👋 README.md should look like the template. If you could make a change and let me know once it's been updated, I'll invite you to the org so you can transfer the repo and maintain it there 👍 hey @hacknug ! Thanks for that! I've rewritten the README.md to fit to the template. Only the link to the contributers is not set yet, as I don't know how the repo's full name will be in the Dracula repo. :) May you please have a look at it? Link will be dracula/vimium if it's also compatible with the linked Chrome version. Please confirm it will be. Just sent you the invite to join the org. Once you join, you'll be able to transfer your repo to it (make sure you do this so GitHub takes care of redirecting users visiting your current URL). let me know once it's done and I'll set the right permission for you to take care of it 😉 yeah bby, tested it myself! Working on Chromium 78 with the latest Vimium 1.64.6. Will tell you, when I tranfered the repo! Aaaand it's transfered! I selected only the Vimium group to have access to it. Please adjust if this wasn't sufficient. Also please check if I did that right, as this was my first transfer of a repo :) @Trollwut everything looking good. Thank you so much for your contribution! 🎉
gharchive/issue
2019-11-08T10:54:55
2025-04-01T06:38:27.266245
{ "authors": [ "Trollwut", "hacknug" ], "repo": "dracula/dracula-theme", "url": "https://github.com/dracula/dracula-theme/issues/347", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1368892431
Shell completion (and other) text not visible (very faint) Arch Linux | 5.19.7-AMD | sway version: 1.7 | foot version: 1.13.1 | Dracula theme foot.ini: https://pastebin.com/s2tRCWkb Sorry for my bad English! Issue was patched with the merge request and the colors where updated. If you update or download latest you should be good.
gharchive/issue
2022-09-11T09:52:16
2025-04-01T06:38:27.268980
{ "authors": [ "hajosattila", "syrofoam" ], "repo": "dracula/foot", "url": "https://github.com/dracula/foot/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1372146382
reflow sub command transposes // and leading space Describe the bug cargo spellcheck reflow produces bad comments. To Reproduce Steps to reproduce the behaviour: A file containing: use std::any::Any; use std::borrow::Cow; use std::collections::HashSet; use std::ffi::CStr; use std::hash::{Hash, Hasher}; use std::ptr::NonNull; use crate::def::{ConstantNameError, EnclosingRubyScope, Free, Method, NotDefinedError}; use crate::error::Error; use crate::ffi::InterpreterExtractError; use crate::method; use crate::sys; use crate::Artichoke; mod registry; pub use registry::Registry; #[derive(Debug)] pub struct Builder<'a> { interp: &'a mut Artichoke, spec: &'a Spec, is_mrb_tt_data: bool, super_class: Option<NonNull<sys::RClass>>, methods: HashSet<method::Spec>, } impl<'a> Builder<'a> { #[must_use] pub fn for_spec(interp: &'a mut Artichoke, spec: &'a Spec) -> Self { Self { interp, spec, is_mrb_tt_data: false, super_class: None, methods: HashSet::default(), } } #[must_use] pub fn value_is_rust_object(mut self) -> Self { self.is_mrb_tt_data = true; self } pub fn with_super_class<T, U>(mut self, classname: U) -> Result<Self, Error> where T: Any, U: Into<Cow<'static, str>>, { let state = self.interp.state.as_deref().ok_or_else(InterpreterExtractError::new)?; let rclass = if let Some(spec) = state.classes.get::<T>() { spec.rclass() } else { return Err(NotDefinedError::super_class(classname.into()).into()); }; let rclass = unsafe { self.interp.with_ffi_boundary(|mrb| rclass.resolve(mrb))? }; if let Some(rclass) = rclass { self.super_class = Some(rclass); Ok(self) } else { Err(NotDefinedError::super_class(classname.into()).into()) } } pub fn add_method<T>(mut self, name: T, method: Method, args: sys::mrb_aspec) -> Result<Self, ConstantNameError> where T: Into<Cow<'static, str>>, { let spec = method::Spec::new(method::Type::Instance, name.into(), method, args)?; self.methods.insert(spec); Ok(self) } pub fn add_self_method<T>( mut self, name: T, method: Method, args: sys::mrb_aspec, ) -> Result<Self, ConstantNameError> where T: Into<Cow<'static, str>>, { let spec = method::Spec::new(method::Type::Class, name.into(), method, args)?; self.methods.insert(spec); Ok(self) } pub fn define(self) -> Result<(), NotDefinedError> { use sys::mrb_vtype::MRB_TT_DATA; let name = self.spec.name_c_str().as_ptr(); let mut super_class = if let Some(super_class) = self.super_class { super_class } else { // SAFETY: Although this direct access of the `mrb` property on the // interp does not go through `Artichoke::with_ffi_boundary`, no // `MRB_API` functions are called, which means it is not required to // re-box the Artichoke `State` into the `mrb_state->ud` pointer. // // This code only performs a memory access to read a field from the // `mrb_state`. let rclass = unsafe { self.interp.mrb.as_ref().object_class }; NonNull::new(rclass).ok_or_else(|| NotDefinedError::super_class("Object"))? }; let rclass = self.spec.rclass(); let rclass = unsafe { self.interp.with_ffi_boundary(|mrb| rclass.resolve(mrb)) }; let mut rclass = if let Ok(Some(rclass)) = rclass { rclass } else if let Some(enclosing_scope) = self.spec.enclosing_scope() { let scope = unsafe { self.interp.with_ffi_boundary(|mrb| enclosing_scope.rclass(mrb)) }; if let Ok(Some(mut scope)) = scope { let rclass = unsafe { self.interp.with_ffi_boundary(|mrb| { sys::mrb_define_class_under(mrb, scope.as_mut(), name, super_class.as_mut()) }) }; let rclass = rclass.map_err(|_| NotDefinedError::class(self.spec.name()))?; NonNull::new(rclass).ok_or_else(|| NotDefinedError::class(self.spec.name()))? } else { return Err(NotDefinedError::enclosing_scope(enclosing_scope.fqname().into_owned())); } } else { let rclass = unsafe { self.interp .with_ffi_boundary(|mrb| sys::mrb_define_class(mrb, name, super_class.as_mut())) }; let rclass = rclass.map_err(|_| NotDefinedError::class(self.spec.name()))?; NonNull::new(rclass).ok_or_else(|| NotDefinedError::class(self.spec.name()))? }; for method in &self.methods { unsafe { method.define(self.interp, rclass.as_mut())?; } } // If a `Spec` defines a `Class` whose instances own a pointer to a // Rust object, mark them as `MRB_TT_DATA`. if self.is_mrb_tt_data { unsafe { sys::mrb_sys_set_instance_tt(rclass.as_mut(), MRB_TT_DATA); } } Ok(()) } } #[derive(Debug, Clone, PartialEq, Eq)] pub struct Rclass { name: &'static CStr, enclosing_scope: Option<EnclosingRubyScope>, } impl Rclass { #[must_use] pub const fn new(name: &'static CStr, enclosing_scope: Option<EnclosingRubyScope>) -> Self { Self { name, enclosing_scope } } /// Resolve a type's [`sys::RClass`] using its enclosing scope and name. /// /// # Safety /// /// This function must be called within an [`Artichoke::with_ffi_boundary`] /// closure because the FFI APIs called in this function may require access /// to the Artichoke [`State`](crate::state::State). pub unsafe fn resolve(&self, mrb: *mut sys::mrb_state) -> Option<NonNull<sys::RClass>> { let class_name = self.name.as_ptr(); if let Some(ref scope) = self.enclosing_scope { // short circuit if enclosing scope does not exist. let mut scope = scope.rclass(mrb)?; let is_defined_under = sys::mrb_class_defined_under(mrb, scope.as_mut(), class_name); if is_defined_under { // Enclosing scope exists. // Class is defined under the enclosing scope. let class = sys::mrb_class_get_under(mrb, scope.as_mut(), class_name); NonNull::new(class) } else { // Enclosing scope exists. // Class is not defined under the enclosing scope. None } } else { let is_defined = sys::mrb_class_defined(mrb, class_name); if is_defined { // Class exists in root scope. let class = sys::mrb_class_get(mrb, class_name); NonNull::new(class) } else { // Class does not exist in root scope. None } } } } #[derive(Debug)] pub struct Spec { name: Cow<'static, str>, name_cstr: &'static CStr, data_type: Box<sys::mrb_data_type>, enclosing_scope: Option<EnclosingRubyScope>, } impl Spec { pub fn new<T>( name: T, name_cstr: &'static CStr, enclosing_scope: Option<EnclosingRubyScope>, free: Option<Free>, ) -> Result<Self, ConstantNameError> where T: Into<Cow<'static, str>>, { let name = name.into(); // SAFETY: The constructed `mrb_data_type` has `'static` lifetime: // // - `name_cstr` is `&'static` so it will outlive the `data_type`. // - `Spec` does not offer mutable access to these fields. let data_type = sys::mrb_data_type { struct_name: name_cstr.as_ptr(), dfree: free, }; let data_type = Box::new(data_type); Ok(Self { name, name_cstr, data_type, enclosing_scope, }) } #[must_use] pub fn data_type(&self) -> *const sys::mrb_data_type { self.data_type.as_ref() } #[must_use] pub fn name(&self) -> Cow<'static, str> { match &self.name { Cow::Borrowed(name) => Cow::Borrowed(name), Cow::Owned(name) => name.clone().into(), } } #[must_use] pub fn name_c_str(&self) -> &'static CStr { self.name_cstr } #[must_use] pub fn enclosing_scope(&self) -> Option<&EnclosingRubyScope> { self.enclosing_scope.as_ref() } #[must_use] pub fn fqname(&self) -> Cow<'_, str> { if let Some(scope) = self.enclosing_scope() { let mut fqname = String::from(scope.fqname()); fqname.push_str("::"); fqname.push_str(self.name.as_ref()); fqname.into() } else { self.name.as_ref().into() } } #[must_use] pub fn rclass(&self) -> Rclass { Rclass::new(self.name_cstr, self.enclosing_scope.clone()) } } impl Hash for Spec { fn hash<H: Hasher>(&self, state: &mut H) { self.name().hash(state); self.enclosing_scope().hash(state); } } impl Eq for Spec {} impl PartialEq for Spec { fn eq(&self, other: &Self) -> bool { self.fqname() == other.fqname() } } #[cfg(test)] mod tests { use spinoso_exception::StandardError; use crate::extn::core::kernel::Kernel; use crate::test::prelude::*; struct RustError; #[test] fn super_class() { let mut interp = interpreter(); let spec = class::Spec::new("RustError", qed::const_cstr_from_str!("RustError\0"), None, None).unwrap(); class::Builder::for_spec(&mut interp, &spec) .with_super_class::<StandardError, _>("StandardError") .unwrap() .define() .unwrap(); interp.def_class::<RustError>(spec).unwrap(); let result = interp.eval(b"RustError.new.is_a?(StandardError)").unwrap(); let result = result.try_convert_into::<bool>(&interp).unwrap(); assert!(result, "RustError instances are instance of StandardError"); let result = interp.eval(b"RustError < StandardError").unwrap(); let result = result.try_convert_into::<bool>(&interp).unwrap(); assert!(result, "RustError inherits from StandardError"); } #[test] fn rclass_for_undef_root_class() { let mut interp = interpreter(); let spec = class::Spec::new("Foo", qed::const_cstr_from_str!("Foo\0"), None, None).unwrap(); let rclass = unsafe { interp.with_ffi_boundary(|mrb| spec.rclass().resolve(mrb)) }.unwrap(); assert!(rclass.is_none()); } #[test] fn rclass_for_undef_nested_class() { let mut interp = interpreter(); let scope = interp.module_spec::<Kernel>().unwrap().unwrap(); let spec = class::Spec::new( "Foo", qed::const_cstr_from_str!("Foo\0"), Some(EnclosingRubyScope::module(scope)), None, ) .unwrap(); let rclass = unsafe { interp.with_ffi_boundary(|mrb| spec.rclass().resolve(mrb)) }.unwrap(); assert!(rclass.is_none()); } #[test] fn rclass_for_nested_class() { let mut interp = interpreter(); interp.eval(b"module Foo; class Bar; end; end").unwrap(); let spec = module::Spec::new(&mut interp, "Foo", qed::const_cstr_from_str!("Foo\0"), None).unwrap(); let spec = class::Spec::new( "Bar", qed::const_cstr_from_str!("Bar\0"), Some(EnclosingRubyScope::module(&spec)), None, ) .unwrap(); let rclass = unsafe { interp.with_ffi_boundary(|mrb| spec.rclass().resolve(mrb)) }.unwrap(); assert!(rclass.is_some()); } #[test] fn rclass_for_nested_class_under_class() { let mut interp = interpreter(); interp.eval(b"class Foo; class Bar; end; end").unwrap(); let spec = class::Spec::new("Foo", qed::const_cstr_from_str!("Foo\0"), None, None).unwrap(); let spec = class::Spec::new( "Bar", qed::const_cstr_from_str!("Bar\0"), Some(EnclosingRubyScope::class(&spec)), None, ) .unwrap(); let rclass = unsafe { interp.with_ffi_boundary(|mrb| spec.rclass().resolve(mrb)) }.unwrap(); assert!(rclass.is_some()); } } Run cargo spellcheck reflow Observe this malformed diff: diff --git i/artichoke-backend/src/class.rs w/artichoke-backend/src/class.rs index 941b22a09c..42f4a3a881 100644 --- i/artichoke-backend/src/class.rs +++ w/artichoke-backend/src/class.rs @@ -138,8 +138,8 @@ impl<'a> Builder<'a> { } } - // If a `Spec` defines a `Class` whose instances own a pointer to a - // Rust object, mark them as `MRB_TT_DATA`. + // If a `Spec` defines a `Class` whose instances own a pointer to a Rust + //object, mark them as `MRB_TT_DATA`. if self.is_mrb_tt_data { unsafe { sys::mrb_sys_set_instance_tt(rclass.as_mut(), MRB_TT_DATA); @@ -175,13 +175,13 @@ impl Rclass { let mut scope = scope.rclass(mrb)?; let is_defined_under = sys::mrb_class_defined_under(mrb, scope.as_mut(), class_name); if is_defined_under { - // Enclosing scope exists. - // Class is defined under the enclosing scope. + // Enclosing scope exists. Class is defined under the enclosing + //scope. let class = sys::mrb_class_get_under(mrb, scope.as_mut(), class_name); NonNull::new(class) } else { - // Enclosing scope exists. - // Class is not defined under the enclosing scope. + // Enclosing scope exists. Class is not defined under the + //enclosing scope. None } } else { Expected behavior No extra space before comment on second line, a space after the //. Screenshots Please complete the following information: System: macOS Obtained: cargo Version: cargo-spellcheck 0.11.3 Additional context Does it only happen with two line comments? #238 could be related @drahnr from a quick peek, it occurs in more places than just two line comments. This one is such an example: diff --git i/artichoke-backend/src/def.rs w/artichoke-backend/src/def.rs index 9d1a766659..82963c879e 100644 --- i/artichoke-backend/src/def.rs +++ w/artichoke-backend/src/def.rs @@ -54,10 +54,10 @@ where // Rather than attempt a free and virtually guaranteed segfault, log // loudly and short-circuit; a leak is better than a crash. // - // `box_unbox_free::<T>` is only ever called in an FFI context when - // there are C frames in the stack. Using `eprintln!` or unwrapping the - // error from `write!` here is undefined behavior and may result in an - // abort. Instead, suppress the error. + // `box_unbox_free::<T>` is only ever called in an FFI context when there + //are C frames in the stack. Using `eprintln!` or unwrapping the error + //from `write!` here is undefined behavior and may result in an abort. + //Instead, suppress the error. let _ignored = write!( io::stderr(), "Received null pointer in box_unbox_free::<{}>", Full diff diff --git i/README.md w/README.md index 69faaff23a..a6dbbd3053 100644 --- i/README.md +++ w/README.md @@ -135,10 +135,10 @@ If Artichoke does not run Ruby source code in the same way that MRI does, it is a bug and we would appreciate if you [filed an issue so we can fix it][file-an-issue]. -If you would like to contribute code 👩‍💻👨‍💻, find an issue that looks interesting -and leave a comment that you're beginning to investigate. If there is no issue, -please file one before beginning to work on a PR. [Good first issues are labeled -`E-easy`][e-easy]. +If you would like to contribute code 👩‍💻👨‍💻, find an issue that looks +interesting and leave a comment that you're beginning to investigate. If there +is no issue, please file one before beginning to work on a PR. [Good first +issues are labeled `E-easy`][e-easy]. ### Discussion diff --git i/artichoke-backend/src/class.rs w/artichoke-backend/src/class.rs index 840e98fdc3..d65a201c31 100644 --- i/artichoke-backend/src/class.rs +++ w/artichoke-backend/src/class.rs @@ -138,8 +138,8 @@ impl<'a> Builder<'a> { } } - // If a `Spec` defines a `Class` whose instances own a pointer to a - // Rust object, mark them as `MRB_TT_DATA`. + // If a `Spec` defines a `Class` whose instances own a pointer to a Rust + //object, mark them as `MRB_TT_DATA`. if self.is_mrb_tt_data { unsafe { sys::mrb_sys_set_instance_tt(rclass.as_mut(), MRB_TT_DATA); @@ -175,13 +175,13 @@ impl Rclass { let mut scope = scope.rclass(mrb)?; let is_defined_under = sys::mrb_class_defined_under(mrb, scope.as_mut(), class_name); if is_defined_under { - // Enclosing scope exists. - // Class is defined under the enclosing scope. + // Enclosing scope exists. Class is defined under the enclosing + //scope. let class = sys::mrb_class_get_under(mrb, scope.as_mut(), class_name); NonNull::new(class) } else { - // Enclosing scope exists. - // Class is not defined under the enclosing scope. + // Enclosing scope exists. Class is not defined under the + //enclosing scope. None } } else { diff --git i/artichoke-backend/src/class/registry.rs w/artichoke-backend/src/class/registry.rs index 2244c18b67..6369f1448e 100644 --- i/artichoke-backend/src/class/registry.rs +++ w/artichoke-backend/src/class/registry.rs @@ -233,9 +233,8 @@ where self.0.shrink_to_fit(); } - /// Shrinks the capacity of the registry with a lower bound. - /// The capacity will remain at least as large as both the length and the - /// supplied value. + /// Shrinks the capacity of the registry with a lower bound. The capacity + /// will remain at least as large as both the length and the supplied value. /// /// If the current capacity is less than the lower limit, this is a no-op. pub fn shrink_to(&mut self, min_capacity: usize) { diff --git i/artichoke-backend/src/def.rs w/artichoke-backend/src/def.rs index 9d1a766659..82963c879e 100644 --- i/artichoke-backend/src/def.rs +++ w/artichoke-backend/src/def.rs @@ -54,10 +54,10 @@ where // Rather than attempt a free and virtually guaranteed segfault, log // loudly and short-circuit; a leak is better than a crash. // - // `box_unbox_free::<T>` is only ever called in an FFI context when - // there are C frames in the stack. Using `eprintln!` or unwrapping the - // error from `write!` here is undefined behavior and may result in an - // abort. Instead, suppress the error. + // `box_unbox_free::<T>` is only ever called in an FFI context when there + //are C frames in the stack. Using `eprintln!` or unwrapping the error + //from `write!` here is undefined behavior and may result in an abort. + //Instead, suppress the error. let _ignored = write!( io::stderr(), "Received null pointer in box_unbox_free::<{}>", @@ -120,9 +120,9 @@ pub struct ModuleScope { /// Typesafe wrapper for the [`RClass *`](sys::RClass) of the enclosing scope /// for an mruby `Module` or `Class`. /// -/// In Ruby, classes and modules can be defined inside another class or -/// module. mruby only supports resolving [`RClass`](sys::RClass) pointers -/// relative to an enclosing scope. This can be the top level with +/// In Ruby, classes and modules can be defined inside another class or module. +/// mruby only supports resolving [`RClass`](sys::RClass) pointers relative to +/// an enclosing scope. This can be the top level with /// [`mrb_class_get`](sys::mrb_class_get) and /// [`mrb_module_get`](sys::mrb_module_get) or it can be under another class /// with [`mrb_class_get_under`](sys::mrb_class_get_under) or module with diff --git i/artichoke-backend/src/error.rs w/artichoke-backend/src/error.rs index a64747ceba..95d097dbe9 100644 --- i/artichoke-backend/src/error.rs +++ w/artichoke-backend/src/error.rs @@ -97,9 +97,9 @@ where // `mrb_exc_raise` will call longjmp which will unwind the stack. sys::mrb_exc_raise(mrb, exc); - // SAFETY: This line is unreachable because `raise` will unwind the - // stack with `longjmp` when calling `sys::mrb_exc_raise` in the - // preceding line. + // SAFETY: This line is unreachable because `raise` will unwind the stack + //with `longjmp` when calling `sys::mrb_exc_raise` in the preceding + //line. hint::unreachable_unchecked() } @@ -107,8 +107,8 @@ where // log loudly to stderr and attempt to fallback to a runtime error. emit_fatal_warning!("Unable to raise exception: {:?}", exception); - // Any non-`Copy` objects that we haven't cleaned up at this point will - // leak, so drop everything. + // Any non-`Copy` objects that we haven't cleaned up at this point will leak, + //so drop everything. drop(exception); // `mrb_sys_raise` will call longjmp which will unwind the stack. diff --git i/artichoke-backend/src/exception_handler.rs w/artichoke-backend/src/exception_handler.rs index c4d3dd2759..cd65f63061 100644 --- i/artichoke-backend/src/exception_handler.rs +++ w/artichoke-backend/src/exception_handler.rs @@ -130,14 +130,13 @@ impl From<CaughtException> for Error { pub fn last_error(interp: &mut Artichoke, exception: Value) -> Result<Error, Error> { let mut arena = interp.create_arena_savepoint()?; - // Clear the current exception from the mruby interpreter so subsequent - // calls to the mruby VM are not tainted by an error they did not - // generate. + // Clear the current exception from the mruby interpreter so subsequent calls + //to the mruby VM are not tainted by an error they did not generate. // - // We must clear the pointer at the beginning of this function so we can - // use the mruby VM to inspect the exception once we turn it into an - // `mrb_value`. `Value::funcall` handles errors by calling this - // function, so not clearing the exception results in a stack overflow. + // We must clear the pointer at the beginning of this function so we can use + //the mruby VM to inspect the exception once we turn it into an `mrb_value`. + //`Value::funcall` handles errors by calling this function, so not clearing + //the exception results in a stack overflow. // Generate exception metadata in by executing the Ruby code: // @@ -146,11 +145,11 @@ pub fn last_error(interp: &mut Artichoke, exception: Value) -> Result<Error, Err // message = exception.message // ``` - // Sometimes when hacking on `extn/core` it is possible to enter a - // crash loop where an exception is captured by this handler, but - // extracting the exception name or backtrace throws again. - // Un-commenting the following print statement will at least get you the - // exception class and message, which should help debugging. + // Sometimes when hacking on `extn/core` it is possible to enter a crash loop + //where an exception is captured by this handler, but extracting the + //exception name or backtrace throws again. Un-commenting the following + //print statement will at least get you the exception class and message, + //which should help debugging. // // ``` // let message = exception.funcall(&mut arena, "message", &[], None)?; diff --git i/artichoke-backend/src/extn/core/array/mod.rs w/artichoke-backend/src/extn/core/array/mod.rs index 21262148db..f2709adabc 100644 --- i/artichoke-backend/src/extn/core/array/mod.rs +++ w/artichoke-backend/src/extn/core/array/mod.rs @@ -265,9 +265,9 @@ impl BoxUnboxVmValue for Array { // SAFETY: `Array` is backed by a `Vec` which can allocate at // most `isize::MAX` bytes. // - // `mrb_value` is not a ZST, so in practice, `len` and - // `capacity` will never overflow `mrb_int`, which is an `i64` - // on 64-bit targets. + // `mrb_value` is not a ZST, so in practice, `len` and `capacity` + //will never overflow `mrb_int`, which is an `i64` on 64-bit + //targets. // // On 32-bit targets, `usize` is `u32` which will never overflow // `i64`. Artichoke unconditionally compiles mruby with `-DMRB_INT64`. diff --git i/artichoke-backend/src/extn/core/array/trampoline.rs w/artichoke-backend/src/extn/core/array/trampoline.rs index 04a0992e72..0d9526a38d 100644 --- i/artichoke-backend/src/extn/core/array/trampoline.rs +++ w/artichoke-backend/src/extn/core/array/trampoline.rs @@ -228,9 +228,9 @@ pub fn initialize( second: Option<Value>, block: Option<Block>, ) -> Result<Value, Error> { - // Pack an empty `Array` into the given uninitialized `RArray *` so it can - // be safely marked if an mruby allocation occurs and a GC is triggered in - // `Array::initialize`. + // Pack an empty `Array` into the given uninitialized `RArray *` so it can be + //safely marked if an mruby allocation occurs and a GC is triggered in + //`Array::initialize`. // // Allocations are likely in the case where a block is passed to // `Array#initialize` or when the first and second args must be coerced with @@ -241,9 +241,9 @@ pub fn initialize( } pub fn initialize_copy(interp: &mut Artichoke, ary: Value, mut from: Value) -> Result<Value, Error> { - // Pack an empty `Array` into the given uninitialized `RArray *` so it can - // be safely marked if an mruby allocation occurs and a GC is triggered in - // `Array::initialize`. + // Pack an empty `Array` into the given uninitialized `RArray *` so it can be + //safely marked if an mruby allocation occurs and a GC is triggered in + //`Array::initialize`. // // This ensures the given `RArry *` is initialized even if a non-`Array` // object is called with `Array#initialize_copy` and the @@ -314,8 +314,8 @@ pub fn reverse_bang(interp: &mut Artichoke, mut ary: Value) -> Result<Value, Err } let mut array = unsafe { Array::unbox_from_value(&mut ary, interp)? }; - // SAFETY: Reversing an `Array` in place does not reallocate it. The array - // is repacked without any intervening interpreter heap allocations. + // SAFETY: Reversing an `Array` in place does not reallocate it. The array is + //repacked without any intervening interpreter heap allocations. unsafe { let array_mut = array.as_inner_mut(); array_mut.reverse(); @@ -346,8 +346,8 @@ pub fn shift(interp: &mut Artichoke, mut ary: Value, count: Option<Value>) -> Re // garbage collection, otherwise marking the children in `ary` will have // undefined behavior. // - // The call to `Array::alloc_value` happens outside this block after - // the `Array` has been repacked. + // The call to `Array::alloc_value` happens outside this block after the + //`Array` has been repacked. let shifted = unsafe { let array_mut = array.as_inner_mut(); let shifted = array_mut.shift_n(count); @@ -360,8 +360,8 @@ pub fn shift(interp: &mut Artichoke, mut ary: Value, count: Option<Value>) -> Re Array::alloc_value(shifted, interp) } else { - // SAFETY: The call to `Array::shift` will potentially invalidate the - // raw parts stored in `ary`'s `RArray*`. + // SAFETY: The call to `Array::shift` will potentially invalidate the raw + //parts stored in `ary`'s `RArray*`. // // The raw parts in `ary`'s `RArray *` must be repacked before a // potential garbage collection, otherwise marking the children in `ary` diff --git i/artichoke-backend/src/extn/core/array/wrapper.rs w/artichoke-backend/src/extn/core/array/wrapper.rs index 96f1e98087..d871d3a39e 100644 --- i/artichoke-backend/src/extn/core/array/wrapper.rs +++ w/artichoke-backend/src/extn/core/array/wrapper.rs @@ -397,7 +397,7 @@ impl Array { /// Returns a reference to an element at the index. /// - /// Unlike [`Vec`], this method does not support indexing with a range. See + /// Unlike [`Vec`], this method does not support indexing with a range. See /// the [`slice`](Self::slice) method for retrieving a sub-slice from the /// array. #[inline] diff --git i/artichoke-backend/src/extn/core/float/mod.rs w/artichoke-backend/src/extn/core/float/mod.rs index c2ab5259a2..e74b685da5 100644 --- i/artichoke-backend/src/extn/core/float/mod.rs +++ w/artichoke-backend/src/extn/core/float/mod.rs @@ -146,13 +146,10 @@ impl Float { /// /// Other modes include: /// - /// | mode | value | - /// |------------------------------------|-------| - /// | Indeterminable | -1 | - /// | Rounding towards zero | 0 | - /// | Rounding to the nearest number | 1 | - /// | Rounding towards positive infinity | 2 | - /// | Rounding towards negative infinity | 3 | + /// | mode | value | |------------------------------------|-------| | + /// Indeterminable | -1 | | Rounding towards zero | 0 | | Rounding to the + /// nearest number | 1 | | Rounding towards positive infinity | 2 | | + /// Rounding towards negative infinity | 3 | /// /// # Rust Caveats /// diff --git i/artichoke-backend/src/extn/core/integer/mod.rs w/artichoke-backend/src/extn/core/integer/mod.rs index 3e64dad9f1..413d84f54c 100644 --- i/artichoke-backend/src/extn/core/integer/mod.rs +++ w/artichoke-backend/src/extn/core/integer/mod.rs @@ -104,8 +104,8 @@ impl Integer { message.extend_from_slice(b") not supported"); Err(NotImplementedError::from(message).into()) } else { - // When no encoding is supplied, MRI assumes the encoding is - // either ASCII or ASCII-8BIT. + // When no encoding is supplied, MRI assumes the encoding is either + //ASCII or ASCII-8BIT. // // - `Integer`s from 0..127 result in a `String` with ASCII // encoding. @@ -283,7 +283,8 @@ mod tests { let expected = -i64::from(x) / i64::from(y); quotient == expected } else { - // Round negative integer division toward negative infinity. + // Round negative integer division toward negative + //infinity. let expected = (-i64::from(x) / i64::from(y)) - 1; quotient == expected } @@ -311,7 +312,8 @@ mod tests { let expected = -i64::from(x) / i64::from(y); quotient == expected } else { - // Round negative integer division toward negative infinity. + // Round negative integer division toward negative + //infinity. let expected = (-i64::from(x) / i64::from(y)) - 1; quotient == expected } diff --git i/artichoke-backend/src/extn/core/kernel/require.rs w/artichoke-backend/src/extn/core/kernel/require.rs index 24814f818f..710d7a673a 100644 --- i/artichoke-backend/src/extn/core/kernel/require.rs +++ w/artichoke-backend/src/extn/core/kernel/require.rs @@ -1,4 +1,4 @@ -//! [`Kernel#require`](https://ruby-doc.org/core-3.1.2/Kernel.html#method-i-require) +//! //! [`Kernel#require`](https://ruby-doc.org/core-3.1.2/Kernel.html#method-i-require) use std::path::{Path, PathBuf}; @@ -11,9 +11,9 @@ use crate::extn::prelude::*; use crate::state::parser::Context; pub fn load(interp: &mut Artichoke, mut filename: Value) -> Result<Loaded, Error> { - // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` - // before the interpreter is used again which protects against a garbage - // collection invalidating the pointer. + // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` before + //the interpreter is used again which protects against a garbage collection + //invalidating the pointer. let filename = unsafe { implicitly_convert_to_string(interp, &mut filename)? }; if filename.find_byte(b'\0').is_some() { return Err(ArgumentError::with_message("path name contains null byte").into()); @@ -41,9 +41,9 @@ pub fn load(interp: &mut Artichoke, mut filename: Value) -> Result<Loaded, Error } pub fn require(interp: &mut Artichoke, mut filename: Value) -> Result<Required, Error> { - // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` - // before the interpreter is used again which protects against a garbage - // collection invalidating the pointer. + // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` before + //the interpreter is used again which protects against a garbage collection + //invalidating the pointer. let filename = unsafe { implicitly_convert_to_string(interp, &mut filename)? }; if filename.find_byte(b'\0').is_some() { return Err(ArgumentError::with_message("path name contains null byte").into()); @@ -72,9 +72,9 @@ pub fn require(interp: &mut Artichoke, mut filename: Value) -> Result<Required, #[allow(clippy::module_name_repetitions)] pub fn require_relative(interp: &mut Artichoke, mut filename: Value, base: RelativePath) -> Result<Required, Error> { - // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` - // before the interpreter is used again which protects against a garbage - // collection invalidating the pointer. + // SAFETY: The extracted byte slice is converted to an owned `Vec<u8>` before + //the interpreter is used again which protects against a garbage collection + //invalidating the pointer. let filename = unsafe { implicitly_convert_to_string(interp, &mut filename)? }; if filename.find_byte(b'\0').is_some() { return Err(ArgumentError::with_message("path name contains null byte").into()); diff --git i/artichoke-backend/src/extn/core/kernel/trampoline.rs w/artichoke-backend/src/extn/core/kernel/trampoline.rs index 028784a1c8..d672da7a33 100644 --- i/artichoke-backend/src/extn/core/kernel/trampoline.rs +++ w/artichoke-backend/src/extn/core/kernel/trampoline.rs @@ -54,8 +54,8 @@ pub fn integer(interp: &mut Artichoke, mut val: Value, base: Option<Value>) -> R // https://github.com/ruby/ruby/blob/v3_1_2/object.c#L3127-L3132 if let Ok(f) = val.try_convert_into::<f64>(interp) { - // TODO: handle exception kwarg and return `nil` if it is false and f is not finite. - // https://github.com/ruby/ruby/blob/v3_1_2/object.c#L3129 + // TODO: handle exception kwarg and return `nil` if it is false and f is + //not finite. https://github.com/ruby/ruby/blob/v3_1_2/object.c#L3129 // https://github.com/ruby/ruby/blob/v3_1_2/object.c#L3131 // https://github.com/ruby/ruby/blob/v3_1_2/bignum.c#L5230-L5235 diff --git i/artichoke-backend/src/extn/core/matchdata/trampoline.rs w/artichoke-backend/src/extn/core/matchdata/trampoline.rs index 33bbe9808b..1510e4564a 100644 --- i/artichoke-backend/src/extn/core/matchdata/trampoline.rs +++ w/artichoke-backend/src/extn/core/matchdata/trampoline.rs @@ -123,8 +123,8 @@ pub fn element_reference( return interp.try_convert_mut(matched); } - // NOTE(lopopolo): Encapsulation is broken here by reaching into the - // inner regexp. + // NOTE(lopopolo): Encapsulation is broken here by reaching into the inner + //regexp. let captures_len = data.regexp.inner().captures_len(None)?; let rangelen = i64::try_from(captures_len).map_err(|_| ArgumentError::with_message("input string too long"))?; let at = match elem.is_range(interp, rangelen)? { diff --git i/artichoke-backend/src/extn/core/math/trampoline.rs w/artichoke-backend/src/extn/core/math/trampoline.rs index ebb7a90bf6..3d22c757b9 100644 --- i/artichoke-backend/src/extn/core/math/trampoline.rs +++ w/artichoke-backend/src/extn/core/math/trampoline.rs @@ -114,8 +114,8 @@ pub fn ldexp(interp: &mut Artichoke, fraction: Value, exponent: Value) -> Result return Err(RangeError::with_message("float NaN out of range of integer").into()); } Err(Ok(exp)) => { - // This saturating cast will be rejected by the `i32::try_from` - // below if `exp` is too large. + // This saturating cast will be rejected by the `i32::try_from` below + //if `exp` is too large. exp as i64 } Err(Err(err)) => return Err(err), diff --git i/artichoke-backend/src/extn/core/numeric/mod.rs w/artichoke-backend/src/extn/core/numeric/mod.rs index eaa6c68b01..7b5c8cc843 100644 --- i/artichoke-backend/src/extn/core/numeric/mod.rs +++ w/artichoke-backend/src/extn/core/numeric/mod.rs @@ -42,8 +42,8 @@ pub enum Coercion { /// /// # Coercion enum /// -/// Artichoke represents the `[y, x]` tuple Array as the [`Coercion`] enum, which -/// orders its values `Coercion::Integer(x, y)`. +/// Artichoke represents the `[y, x]` tuple Array as the [`Coercion`] enum, +/// which orders its values `Coercion::Integer(x, y)`. /// /// [numeric]: https://ruby-doc.org/core-3.1.2/Numeric.html#method-i-coerce pub fn coerce(interp: &mut Artichoke, x: Value, y: Value) -> Result<Coercion, Error> { diff --git i/artichoke-backend/src/extn/core/regexp/backend/onig.rs w/artichoke-backend/src/extn/core/regexp/backend/onig.rs index 58767984dd..45e08ef1ec 100644 --- i/artichoke-backend/src/extn/core/regexp/backend/onig.rs +++ w/artichoke-backend/src/extn/core/regexp/backend/onig.rs @@ -118,9 +118,9 @@ impl RegexpType for Onig { // Explicitly suppress this error because `debug` is infallible and // cannot panic. // - // In practice this error will never be triggered since the only - // fallible call in `format_debug_escape_into` is to `write!` which - // never `panic!`s for a `String` formatter, which we are using here. + // In practice this error will never be triggered since the only fallible + //call in `format_debug_escape_into` is to `write!` which never + //`panic!`s for a `String` formatter, which we are using here. let _ = format_debug_escape_into(&mut pattern, self.source.pattern()); debug.push_str(pattern.replace('/', r"\/").as_str()); debug.push('/'); diff --git i/artichoke-backend/src/extn/core/regexp/backend/regex/mod.rs w/artichoke-backend/src/extn/core/regexp/backend/regex/mod.rs index 05ec97e933..f7e7dfcc58 100644 --- i/artichoke-backend/src/extn/core/regexp/backend/regex/mod.rs +++ w/artichoke-backend/src/extn/core/regexp/backend/regex/mod.rs @@ -1,3 +1,3 @@ -// TODO(GH-490): Add `regex::Binary` implementation of `RegexType`. -// pub mod binary; +// TODO(GH-490): Add `regex::Binary` implementation of `RegexType`. pub mod + //binary; pub mod utf8; diff --git i/artichoke-backend/src/extn/core/regexp/backend/regex/utf8.rs w/artichoke-backend/src/extn/core/regexp/backend/regex/utf8.rs index 0e52f3bf0f..6ec8d9b1a5 100644 --- i/artichoke-backend/src/extn/core/regexp/backend/regex/utf8.rs +++ w/artichoke-backend/src/extn/core/regexp/backend/regex/utf8.rs @@ -127,9 +127,9 @@ impl RegexpType for Utf8 { // Explicitly suppress this error because `debug` is infallible and // cannot panic. // - // In practice this error will never be triggered since the only - // fallible call in `format_debug_escape_into` is to `write!` which - // never `panic!`s for a `String` formatter, which we are using here. + // In practice this error will never be triggered since the only fallible + //call in `format_debug_escape_into` is to `write!` which never + //`panic!`s for a `String` formatter, which we are using here. let _ = format_debug_escape_into(&mut pattern, self.source.pattern()); debug.push_str(pattern.replace('/', r"\/").as_str()); debug.push('/'); @@ -177,8 +177,8 @@ impl RegexpType for Utf8 { if let Some(captures) = self.regex.captures(haystack) { // per the [docs] for `captures.len()`: // - // > This is always at least 1, since every regex has at least one - // > capture group that corresponds to the full match. + // > This is always at least 1, since every regex has at least one > + //capture group that corresponds to the full match. // // [docs]: https://docs.rs/regex/1.3.4/regex/struct.Captures.html#method.len interp.set_active_regexp_globals(captures.len().checked_sub(1).unwrap_or_default())?; @@ -259,8 +259,8 @@ impl RegexpType for Utf8 { if let Some(captures) = self.regex.captures(target) { // per the [docs] for `captures.len()`: // - // > This is always at least 1, since every regex has at least one - // > capture group that corresponds to the full match. + // > This is always at least 1, since every regex has at least one > + //capture group that corresponds to the full match. // // [docs]: https://docs.rs/regex/1.3.4/regex/struct.Captures.html#method.len interp.set_active_regexp_globals(captures.len().checked_sub(1).unwrap_or_default())?; @@ -307,8 +307,8 @@ impl RegexpType for Utf8 { if let Some(captures) = self.regex.captures(haystack) { // per the [docs] for `captures.len()`: // - // > This is always at least 1, since every regex has at least one - // > capture group that corresponds to the full match. + // > This is always at least 1, since every regex has at least one > + //capture group that corresponds to the full match. // // [docs]: https://docs.rs/regex/1.3.4/regex/struct.Captures.html#method.len interp.set_active_regexp_globals(captures.len().checked_sub(1).unwrap_or_default())?; diff --git i/artichoke-backend/src/extn/core/regexp/syntax.rs w/artichoke-backend/src/extn/core/regexp/syntax.rs index 5b82baa35c..4a1d410b70 100644 --- i/artichoke-backend/src/extn/core/regexp/syntax.rs +++ w/artichoke-backend/src/extn/core/regexp/syntax.rs @@ -1,9 +1,8 @@ // This module is forked from `regex-syntax` crate @ `26f7318e`. // -// https://github.com/rust-lang/regex/blob/26f7318e2895eae56e95a260e81e2d48b90e7c25/regex-syntax/src/lib.rs +// //https://github.com/rust-lang/regex/blob/26f7318e2895eae56e95a260e81e2d48b90e7c25/regex-syntax/src/lib.rs // -// MIT License -// Copyright (c) 2014 The Rust Project Developers +// MIT License Copyright (c) 2014 The Rust Project Developers #![allow(clippy::match_same_arms)] @@ -52,8 +51,8 @@ pub fn escape_into(text: &str, buf: &mut String) { pub fn is_meta_character(c: char) -> bool { match c { '\\' | '.' | '+' | '*' | '?' | '(' | ')' | '|' | '[' | ']' | '{' | '}' | '^' | '$' | '#' | '-' => true, - // This match arm differs from `regex-syntax` by including '/'. - // Ruby uses '/' to mark `Regexp` literals in source code. + // This match arm differs from `regex-syntax` by including '/'. Ruby uses + //'/' to mark `Regexp` literals in source code. '/' => true, // This match arm differs from `regex-syntax` by including ' ' (an ASCII // space character). Ruby always escapes ' ' in calls to `Regexp::escape`. diff --git i/artichoke-backend/src/extn/core/string/ffi.rs w/artichoke-backend/src/extn/core/string/ffi.rs index 682719c0d2..6592b0f68c 100644 --- i/artichoke-backend/src/extn/core/string/ffi.rs +++ w/artichoke-backend/src/extn/core/string/ffi.rs @@ -184,7 +184,8 @@ unsafe extern "C" fn mrb_str_resize(mrb: *mut sys::mrb_state, s: sys::mrb_value, match len.checked_sub(s.len()) { Some(0) => {} Some(additional) => s.try_reserve(additional)?, - // If the given length is less than the length of the `String`, truncate. + // If the given length is less than the length of the `String`, + //truncate. None => s.truncate(len), } Ok(()) @@ -220,9 +221,9 @@ unsafe extern "C" fn mrb_str_resize(mrb: *mut sys::mrb_state, s: sys::mrb_value, // This is not possible on stable Rust since `TryReserveErrorKind` is // unstable. Err(_) => { - // NOTE: This code can't use an `Error` unified exception trait object. - // Since we're in memory error territory, we're not sure if we can - // allocate the `Box` it requires. + // NOTE: This code can't use an `Error` unified exception trait + //object. Since we're in memory error territory, we're not sure if + //we can allocate the `Box` it requires. let err = NoMemoryError::with_message("out of memory"); error::raise(guard, err); } @@ -496,7 +497,8 @@ unsafe extern "C" fn mrb_string_cstr(mrb: *mut sys::mrb_state, s: sys::mrb_value // #define mrb_str_to_inum(mrb, str, base, badcheck) mrb_str_to_integer(mrb, str, base, badcheck) // ``` // -// This function converts a numeric string to numeric `mrb_value` with the given base. +// This function converts a numeric string to numeric `mrb_value` with the given + //base. #[no_mangle] unsafe extern "C" fn mrb_str_to_integer( mrb: *mut sys::mrb_state, @@ -606,8 +608,8 @@ unsafe extern "C" fn mrb_str_cat( if let Ok(mut string) = String::unbox_from_value(&mut s, &mut guard) { let slice = slice::from_raw_parts(ptr.cast::<u8>(), len); - // SAFETY: The string is repacked before any intervening uses of - // `interp` which means no mruby heap allocations can occur. + // SAFETY: The string is repacked before any intervening uses of `interp` + //which means no mruby heap allocations can occur. let string_mut = string.as_inner_mut(); string_mut.extend_from_slice(slice); let inner = string.take(); diff --git i/artichoke-backend/src/extn/core/string/mod.rs w/artichoke-backend/src/extn/core/string/mod.rs index d8f3ba100e..d42d324d5c 100644 --- i/artichoke-backend/src/extn/core/string/mod.rs +++ w/artichoke-backend/src/extn/core/string/mod.rs @@ -34,8 +34,8 @@ impl BoxUnboxVmValue for String { ) -> Result<UnboxedValueGuard<'a, Self::Guarded>, Error> { let _ = interp; - // Make sure we have a String otherwise extraction will fail. - // This check is critical to the safety of accessing the `value` union. + // Make sure we have a String otherwise extraction will fail. This check + //is critical to the safety of accessing the `value` union. if value.ruby_type() != Ruby::String { let mut message = std::string::String::from("uninitialized "); message.push_str(Self::RUBY_TYPE); @@ -129,9 +129,9 @@ impl BoxUnboxVmValue for String { } fn free(data: *mut c_void) { - // this function is never called. `String` is freed directly in the VM - // by calling `mrb_gc_free_str` which is defined in - // `extn/core/string/ffi.rs`. + // this function is never called. `String` is freed directly in the VM by + //calling `mrb_gc_free_str` which is defined in + //`extn/core/string/ffi.rs`. // // `String` should not have a destructor registered in the class // registry. @@ -168,8 +168,8 @@ mod tests { #[test] fn modifying_and_repacking_encoding_zeroes_old_encoding_flags() { let mut interp = interpreter(); - // Modify the encoding of a binary string in place to be UTF-8 by - // pushing a UTF-8 string into an empty binary string. + // Modify the encoding of a binary string in place to be UTF-8 by pushing + //a UTF-8 string into an empty binary string. // // Test for the newly taken UTF-8 encoding by ensuring that the char // length of the string is 1. diff --git i/artichoke-backend/src/extn/core/string/mruby.rs w/artichoke-backend/src/extn/core/string/mruby.rs index 51d9a1593a..ab24142d8e 100644 --- i/artichoke-backend/src/extn/core/string/mruby.rs +++ w/artichoke-backend/src/extn/core/string/mruby.rs @@ -22,7 +22,11 @@ pub fn init(interp: &mut Artichoke) -> InitializeResult<()> { .add_method("[]=", string_aset, sys::mrb_args_any())? .add_method("ascii_only?", string_ascii_only, sys::mrb_args_none())? .add_method("b", string_b, sys::mrb_args_none())? - .add_method("bytes", string_bytes, sys::mrb_args_none())? // This does not support the deprecated block form + .add_method("bytes", string_bytes, sys::mrb_args_none())? // This does + //not support + //the + //deprecated + //block form .add_method("bytesize", string_bytesize, sys::mrb_args_none())? .add_method("byteslice", string_byteslice, sys::mrb_args_req_and_opt(1, 1))? .add_method("capitalize", string_capitalize, sys::mrb_args_any())? @@ -30,14 +34,25 @@ pub fn init(interp: &mut Artichoke) -> InitializeResult<()> { .add_method("casecmp", string_casecmp_ascii, sys::mrb_args_req(1))? .add_method("casecmp?", string_casecmp_unicode, sys::mrb_args_req(1))? .add_method("center", string_center, sys::mrb_args_req_and_opt(1, 1))? - .add_method("chars", string_chars, sys::mrb_args_none())? // This does not support the deprecated block form + .add_method("chars", string_chars, sys::mrb_args_none())? // This does + //not support + //the + //deprecated + //block form .add_method("chomp", string_chomp, sys::mrb_args_opt(1))? .add_method("chomp!", string_chomp_bang, sys::mrb_args_opt(1))? .add_method("chop", string_chop, sys::mrb_args_none())? .add_method("chop!", string_chop_bang, sys::mrb_args_none())? .add_method("chr", string_chr, sys::mrb_args_none())? .add_method("clear", string_clear, sys::mrb_args_none())? - .add_method("codepoints", string_codepoints, sys::mrb_args_none())? // This does not support the deprecated block form + .add_method("codepoints", string_codepoints, sys::mrb_args_none())? // //This + //does + //not + //support + //the + //deprecated + //block + //form .add_method("concat", string_concat, sys::mrb_args_any())? .add_method("downcase", string_downcase, sys::mrb_args_any())? .add_method("downcase!", string_downcase_bang, sys::mrb_args_any())? @@ -47,7 +62,12 @@ pub fn init(interp: &mut Artichoke) -> InitializeResult<()> { .add_method("hash", string_hash, sys::mrb_args_none())? .add_method("include?", string_include, sys::mrb_args_req(1))? .add_method("index", string_index, sys::mrb_args_req_and_opt(1, 1))? - .add_method("initialize", string_initialize, sys::mrb_args_opt(1))? // TODO: support encoding and capacity kwargs + .add_method("initialize", string_initialize, sys::mrb_args_opt(1))? // //TODO: + //support + //encoding + //and + //capacity + //kwargs .add_method("initialize_copy", string_initialize_copy, sys::mrb_args_req(1))? .add_method("inspect", string_inspect, sys::mrb_args_none())? .add_method("intern", string_intern, sys::mrb_args_none())? diff --git i/artichoke-backend/src/extn/core/string/trampoline.rs w/artichoke-backend/src/extn/core/string/trampoline.rs index 68b9441beb..432403ddff 100644 --- i/artichoke-backend/src/extn/core/string/trampoline.rs +++ w/artichoke-backend/src/extn/core/string/trampoline.rs @@ -41,8 +41,8 @@ pub fn add(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Result let to_append = unsafe { implicitly_convert_to_string(interp, &mut other)? }; let mut concatenated = s.clone(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the max + //allocation size and may panic or abort. concatenated.extend_from_slice(to_append); super::String::alloc_value(concatenated, interp) } @@ -59,12 +59,12 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res let mut s = unsafe { super::String::unbox_from_value(&mut value, interp)? }; if let Ok(int) = other.try_convert_into::<i64>(interp) { - // SAFETY: The string is repacked before any intervening uses of - // `interp` which means no mruby heap allocations can occur. + // SAFETY: The string is repacked before any intervening uses of `interp` + //which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the max + //allocation size and may panic or abort. string_mut .try_push_int(int) .map_err(|err| RangeError::from(err.message()))?; @@ -129,12 +129,13 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res // `interp` which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the + //max allocation size and may panic or abort. string_mut.extend_from_slice(other.as_slice()); if !matches!(other.encoding(), Encoding::Utf8) && !other.is_ascii_only() { - // encodings are incompatible if other is not UTF-8 and is non-ASCII + // encodings are incompatible if other is not UTF-8 and is + //non-ASCII string_mut.set_encoding(other.encoding()); } @@ -177,8 +178,8 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res // `interp` which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the + //max allocation size and may panic or abort. string_mut.extend_from_slice(other.as_slice()); // Set encoding to `other.encoding()` if other is non-ASCII. @@ -229,8 +230,8 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res // `interp` which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the + //max allocation size and may panic or abort. string_mut.extend_from_slice(other.as_slice()); let s = s.take(); @@ -274,8 +275,8 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res // `interp` which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the + //max allocation size and may panic or abort. string_mut.extend_from_slice(other.as_slice()); if !other.is_ascii_only() { @@ -291,8 +292,8 @@ pub fn append(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Res // `interp` which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // XXX: This call doesn't do a check to see if we'll exceed the max allocation - // size and may panic or abort. + // XXX: This call doesn't do a check to see if we'll exceed the + //max allocation size and may panic or abort. string_mut.extend_from_slice(other.as_slice()); let s = s.take(); @@ -365,10 +366,10 @@ pub fn aref( // => nil // ``` // - // Don't specialize on the case where `index == len` because the provided - // length can change the result. Even if the length argument is not - // given, we still need to preserve the encoding of the source string, - // so fall through to the happy path below. + // Don't specialize on the case where `index == len` because the + //provided length can change the result. Even if the length argument + //is not given, we still need to preserve the encoding of the source + //string, so fall through to the happy path below. Some(index) if index > s.len() => return Ok(Value::nil()), Some(index) => index, }; @@ -468,8 +469,8 @@ pub fn aref( return Ok(Value::nil()); } } - // The overload of `String#[]` that takes a `String` **only** takes `String`s. - // No implicit conversion is performed. + // The overload of `String#[]` that takes a `String` **only** takes + //`String`s. No implicit conversion is performed. // // ``` // [3.0.1] > s = "abc" @@ -487,9 +488,9 @@ pub fn aref( // ``` if let Ok(substring) = unsafe { super::String::unbox_from_value(&mut first, interp) } { if s.index(&*substring, None).is_some() { - // Indexing with a `String` returns a newly allocated object that - // has the same encoding as the index, regardless of the encoding on - // the receiver. + // Indexing with a `String` returns a newly allocated object that has + //the same encoding as the index, regardless of the encoding on the + //receiver. // // ``` // [3.0.2] > s = "abc" @@ -702,12 +703,14 @@ pub fn byteslice( let length = if let Some(length) = length { length } else { - // Per the docs -- https://ruby-doc.org/core-3.1.2/String.html#method-i-byteslice + // Per the docs -- + //https://ruby-doc.org/core-3.1.2/String.html#method-i-byteslice // - // > If passed a single Integer, returns a substring of one byte at that position. + // > If passed a single Integer, returns a substring of one byte at that + //position. // - // NOTE: Index out a single byte rather than a slice to avoid having - // to do an overflow check on the addition. + // NOTE: Index out a single byte rather than a slice to avoid having to + //do an overflow check on the addition. if let Some(&byte) = s.get(index) { let s = super::String::with_bytes_and_encoding(vec![byte], s.encoding()); // ``` @@ -862,7 +865,8 @@ pub fn casecmp_ascii(interp: &mut Artichoke, mut value: Value, mut other: Value) pub fn casecmp_unicode(interp: &mut Artichoke, mut value: Value, mut other: Value) -> Result<Value, Error> { let s = unsafe { super::String::unbox_from_value(&mut value, interp)? }; - // TODO: this needs to do an implicit conversion, but we need a Spinoso string. + // TODO: this needs to do an implicit conversion, but we need a Spinoso + //string. if let Ok(other) = unsafe { super::String::unbox_from_value(&mut other, interp) } { let eql = *s == *other; Ok(interp.convert(eql)) @@ -1045,8 +1049,8 @@ pub fn downcase_bang(interp: &mut Artichoke, mut value: Value) -> Result<Value, // which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // `make_lowercase` might reallocate the string and invalidate the - // boxed pointer, capacity, length triple. + // `make_lowercase` might reallocate the string and invalidate the boxed + //pointer, capacity, length triple. string_mut.make_lowercase(); let s = s.take(); @@ -1150,8 +1154,8 @@ pub fn initialize(interp: &mut Artichoke, mut value: Value, from: Option<Value>) Vec::new() }; - // If we are calling `initialize` on an already initialized `String`, - // pluck out the inner buffer and drop it so we don't leak memory. + // If we are calling `initialize` on an already initialized `String`, pluck + //out the inner buffer and drop it so we don't leak memory. // // ```console // [3.0.2] > s = "abc" @@ -1411,8 +1415,8 @@ pub fn setbyte(interp: &mut Artichoke, mut value: Value, index: Value, byte: Val index } else { let mut message = String::from("index "); - // Suppress error because `String`'s `fmt::Write` impl is infallible. - // (It will abort on OOM). + // Suppress error because `String`'s `fmt::Write` impl is infallible. (It + //will abort on OOM). let _ignored = write!(&mut message, "{} out of string", index); return Err(IndexError::from(message).into()); }; @@ -1550,8 +1554,8 @@ pub fn upcase_bang(interp: &mut Artichoke, mut value: Value) -> Result<Value, Er // which means no mruby heap allocations can occur. unsafe { let string_mut = s.as_inner_mut(); - // `make_uppercase` might reallocate the string and invalidate the - // boxed pointer, capacity, length triple. + // `make_uppercase` might reallocate the string and invalidate the boxed + //pointer, capacity, length triple. string_mut.make_uppercase(); let s = s.take(); diff --git i/artichoke-backend/src/extn/core/symbol/ffi.rs w/artichoke-backend/src/extn/core/symbol/ffi.rs index 5469950682..b4bb7f18d2 100644 --- i/artichoke-backend/src/extn/core/symbol/ffi.rs +++ w/artichoke-backend/src/extn/core/symbol/ffi.rs @@ -60,7 +60,7 @@ unsafe extern "C" fn mrb_intern_str(mrb: *mut sys::mrb_state, name: sys::mrb_val } } -/* `mrb_intern_check` series functions returns 0 if the symbol is not defined */ +/* `mrb_intern_check` series functions returns 0 if the symbol is not defined*/ // ```c // MRB_API mrb_sym mrb_intern_check(mrb_state*,const char*,size_t); @@ -207,8 +207,8 @@ unsafe extern "C" fn mrb_sym_dump(mrb: *mut sys::mrb_state, sym: sys::mrb_sym) - unwrap_interpreter!(mrb, to => guard, or_else = ptr::null()); if let Ok(Some(bytes)) = guard.lookup_symbol(sym) { let bytes = bytes.to_vec(); - // Allocate a buffer with the lifetime of the interpreter and return - // a pointer to it. + // Allocate a buffer with the lifetime of the interpreter and return a + //pointer to it. if let Ok(string) = guard.try_convert_mut(bytes) { if let Ok(bytes) = string.try_convert_into_mut::<&[u8]>(&mut guard) { return bytes.as_ptr().cast(); diff --git i/artichoke-backend/src/extn/core/symbol/mod.rs w/artichoke-backend/src/extn/core/symbol/mod.rs index 3194eeefa0..474b5c79ed 100644 --- i/artichoke-backend/src/extn/core/symbol/mod.rs +++ w/artichoke-backend/src/extn/core/symbol/mod.rs @@ -22,8 +22,8 @@ impl BoxUnboxVmValue for Symbol { ) -> Result<UnboxedValueGuard<'a, Self::Guarded>, Error> { let _ = interp; - // Make sure we have a Symbol otherwise extraction will fail. - // This check is critical to the safety of accessing the `value` union. + // Make sure we have a Symbol otherwise extraction will fail. This check + //is critical to the safety of accessing the `value` union. if value.ruby_type() != Ruby::Symbol { let mut message = String::from("uninitialized "); message.push_str(Self::RUBY_TYPE); diff --git i/artichoke-backend/src/extn/core/time/mruby.rs w/artichoke-backend/src/extn/core/time/mruby.rs index 4aa4e7e816..2fcf476ece 100644 --- i/artichoke-backend/src/extn/core/time/mruby.rs +++ w/artichoke-backend/src/extn/core/time/mruby.rs @@ -13,8 +13,8 @@ pub fn init(interp: &mut Artichoke) -> InitializeResult<()> { } let spec = class::Spec::new("Time", TIME_CSTR, None, Some(def::box_unbox_free::<time::Time>))?; - // NOTE: The ordering of method declarations in the builder below is the - // same as in `Init_Time` in MRI `time.c`. + // NOTE: The ordering of method declarations in the builder below is the same + //as in `Init_Time` in MRI `time.c`. class::Builder::for_spec(interp, &spec) .value_is_rust_object() // Constructor diff --git i/artichoke-backend/src/extn/core/time/offset.rs w/artichoke-backend/src/extn/core/time/offset.rs index a7cd0c21a4..29695b76de 100644 --- i/artichoke-backend/src/extn/core/time/offset.rs +++ w/artichoke-backend/src/extn/core/time/offset.rs @@ -52,8 +52,8 @@ impl TryConvertMut<Value, Option<Offset>> for Artichoke { } } - // Based on the above logic, the only option in the hash is `in`. - // >0 keys, and all other keys are rejected). + // Based on the above logic, the only option in the hash is `in`. >0 + //keys, and all other keys are rejected). let mut in_value = hash.get(0).expect("Only the `in` parameter should be available").1; match in_value.ruby_type() { diff --git i/artichoke-backend/src/extn/core/time/subsec.rs w/artichoke-backend/src/extn/core/time/subsec.rs index d86f332c46..2bf49f7cfd 100644 --- i/artichoke-backend/src/extn/core/time/subsec.rs +++ w/artichoke-backend/src/extn/core/time/subsec.rs @@ -62,10 +62,10 @@ impl TryConvertMut<Option<Value>, SubsecMultiplier> for Artichoke { } } -/// A struct that represents the adjustment needed to a `Time` based on a -/// the parsing of optional Ruby Values. Seconds can require adjustment as a -/// means for handling overflow of values. e.g. `1_001` millis can be requested -/// which should result in 1 seconds, and `1_000_000` nanoseconds. +/// A struct that represents the adjustment needed to a `Time` based on a the +/// parsing of optional Ruby Values. Seconds can require adjustment as a means +/// for handling overflow of values. e.g. `1_001` millis can be requested which +/// should result in 1 seconds, and `1_000_000` nanoseconds. /// /// Note: Negative nanoseconds are not supported, thus any negative adjustment /// will generally result in at least -1 second, and the relevant positive @@ -103,9 +103,9 @@ impl TryConvertMut<(Option<Value>, Option<Value>), Subsec> for Artichoke { let seconds_base = NANOS_IN_SECOND / multiplier_nanos; if subsec.ruby_type() == Ruby::Float { - // FIXME: The below deviates from the MRI implementation of - // Time. MRI uses `to_r` for subsec calculation on floats - // subsec nanos, and this could result in different values. + // FIXME: The below deviates from the MRI implementation of Time. + //MRI uses `to_r` for subsec calculation on floats subsec nanos, + //and this could result in different values. let subsec: f64 = self.try_convert(subsec)?; @@ -119,9 +119,9 @@ impl TryConvertMut<(Option<Value>, Option<Value>), Subsec> for Artichoke { return Err(FloatDomainError::with_message("Infinity").into()); } - // These conversions are luckily not lossy. `seconds_base` - // and `multiplier_nanos` are guaranteed to be represented - // without loss in a f64. + // These conversions are luckily not lossy. `seconds_base` and + //`multiplier_nanos` are guaranteed to be represented without + //loss in a f64. #[allow(clippy::cast_precision_loss)] let seconds_base = seconds_base as f64; #[allow(clippy::cast_precision_loss)] @@ -133,10 +133,10 @@ impl TryConvertMut<(Option<Value>, Option<Value>), Subsec> for Artichoke { // `is_sign_negative()` is not enough here, since this logic // should also be skilled for negative zero. if subsec < -0.0 { - // Nanos always needs to be a positive u32. If subsec - // is negative, we will always need remove one second. - // Nanos can then be adjusted since it will always be - // the inverse of the total nanos in a second. + // Nanos always needs to be a positive u32. If subsec is + //negative, we will always need remove one second. Nanos can + //then be adjusted since it will always be the inverse of + //the total nanos in a second. secs -= 1.0; #[allow(clippy::cast_precision_loss)] @@ -159,18 +159,17 @@ impl TryConvertMut<(Option<Value>, Option<Value>), Subsec> for Artichoke { } else { let subsec: i64 = implicitly_convert_to_int(self, subsec)?; - // The below calculations should always be safe. The - // multiplier is guaranteed to not be 0, the remainder - // should never overflow, and is guaranteed to be less - // than u32::MAX. + // The below calculations should always be safe. The multiplier + //is guaranteed to not be 0, the remainder should never + //overflow, and is guaranteed to be less than u32::MAX. let mut secs = subsec / seconds_base; let mut nanos = (subsec % seconds_base) * multiplier_nanos; if subsec.is_negative() { - // Nanos always needs to be a positive u32. If subsec - // is negative, we will always need remove one second. - // Nanos can then be adjusted since it will always be - // the inverse of the total nanos in a second. + // Nanos always needs to be a positive u32. If subsec is + //negative, we will always need remove one second. Nanos can + //then be adjusted since it will always be the inverse of + //the total nanos in a second. secs = secs .checked_sub(1) .ok_or(ArgumentError::with_message("Time too small"))?; diff --git i/artichoke-backend/src/extn/mod.rs w/artichoke-backend/src/extn/mod.rs index 79ccd7979e..7fbaf5bc1d 100644 --- i/artichoke-backend/src/extn/mod.rs +++ w/artichoke-backend/src/extn/mod.rs @@ -1,5 +1,5 @@ -// This pragma is needed to allow passing `Value` by value in all the mruby -// and Rust trampolines. +// This pragma is needed to allow passing `Value` by value in all the mruby and + //Rust trampolines. #![allow(clippy::needless_pass_by_value)] use crate::release_metadata::ReleaseMetadata; diff --git i/artichoke-backend/src/extn/prelude.rs w/artichoke-backend/src/extn/prelude.rs index e80f1e5cb6..099e160b61 100644 --- i/artichoke-backend/src/extn/prelude.rs +++ w/artichoke-backend/src/extn/prelude.rs @@ -1,5 +1,4 @@ -//! A "prelude" for users of the `extn` module in the `artichoke-backend` -//! crate. +//! A "prelude" for users of the `extn` module in the `artichoke-backend` crate. //! //! This prelude is similar to the standard library's prelude in that you'll //! almost always want to import its entire contents, but unlike the standard diff --git i/artichoke-backend/src/extn/stdlib/json/mod.rs w/artichoke-backend/src/extn/stdlib/json/mod.rs index e6a4fd0cbc..849051d129 100644 --- i/artichoke-backend/src/extn/stdlib/json/mod.rs +++ w/artichoke-backend/src/extn/stdlib/json/mod.rs @@ -14,9 +14,9 @@ static JSON_PURE_PARSER_RUBY_SOURCE: &[u8] = include_bytes!("vendor/json/pure/pa pub fn init(interp: &mut Artichoke) -> InitializeResult<()> { let spec = module::Spec::new(interp, "JSON", JSON_CSTR, None)?; interp.def_module::<Json>(spec)?; - // NOTE(lopopolo): This setup of the JSON gem in the virtual file system does not include - // any of the `json/add` sources for serializing "extra" types like `Time` - // and `BigDecimal`, not all of which Artichoke supports. + // NOTE(lopopolo): This setup of the JSON gem in the virtual file system does + //not include any of the `json/add` sources for serializing "extra" types + //like `Time` and `BigDecimal`, not all of which Artichoke supports. interp.def_rb_source_file("json.rb", JSON_RUBY_SOURCE)?; interp.def_rb_source_file("json/common.rb", JSON_COMMON_RUBY_SOURCE)?; interp.def_rb_source_file("json/generic_object.rb", JSON_GENERIC_OBJECT_RUBY_SOURCE)?; diff --git i/artichoke-backend/src/fmt.rs w/artichoke-backend/src/fmt.rs index d18c01ba02..ed2343cda2 100644 --- i/artichoke-backend/src/fmt.rs +++ w/artichoke-backend/src/fmt.rs @@ -16,7 +16,7 @@ use crate::Artichoke; /// This error type can also be used to convert generic [`fmt::Error`] into an /// [`Error`], such as when formatting integers with [`write!`]. /// -/// This error type wraps a [`fmt::Error`]. +/// This error type wraps a [`fmt::Error`]. /// /// # Examples /// diff --git i/artichoke-backend/src/gc.rs w/artichoke-backend/src/gc.rs index 985f9e46aa..e0c1ca83ca 100644 --- i/artichoke-backend/src/gc.rs +++ w/artichoke-backend/src/gc.rs @@ -10,8 +10,8 @@ use arena::{ArenaIndex, ArenaSavepointError}; pub trait MrbGarbageCollection { /// Create a savepoint in the GC arena. /// - /// Savepoints allow mruby to deallocate all the objects created via the - /// C API. + /// Savepoints allow mruby to deallocate all the objects created via the C + /// API. /// /// Normally objects created via the C API are marked as permanently alive /// ("white" GC color) with a call to [`mrb_gc_protect`]. @@ -251,8 +251,8 @@ mod tests { interp.full_gc().unwrap(); assert_eq!( interp.live_object_count(), - // plus 1 because stack keep is enabled in eval which marks the - // last returned value as live. + // plus 1 because stack keep is enabled in eval which marks the last + //returned value as live. baseline_object_count + 1, "Started with {} live objects, ended with {}. Potential memory leak!", baseline_object_count, diff --git i/artichoke-backend/src/gc/arena.rs w/artichoke-backend/src/gc/arena.rs index 4b187c0292..46932b333a 100644 --- i/artichoke-backend/src/gc/arena.rs +++ w/artichoke-backend/src/gc/arena.rs @@ -70,9 +70,9 @@ impl From<ArenaSavepointError> for Error { /// Arena savepoints ensure mruby objects are reaped even when allocated with /// the C API. /// -/// mruby manages objects created via the C API in a memory construct called -/// the [arena]. The arena is a stack and objects stored there are permanently -/// alive to avoid having to track lifetimes externally to the interpreter. +/// mruby manages objects created via the C API in a memory construct called the +/// [arena]. The arena is a stack and objects stored there are permanently alive +/// to avoid having to track lifetimes externally to the interpreter. /// /// An [`ArenaIndex`] is an index to some position of the stack. When restoring /// an `ArenaIndex`, the stack pointer is moved. All objects beyond the pointer @@ -134,8 +134,8 @@ impl<'a> DerefMut for ArenaIndex<'a> { impl<'a> Drop for ArenaIndex<'a> { fn drop(&mut self) { let idx = self.index; - // We can't panic in a drop impl, so ignore errors when crossing the - // FFI boundary. + // We can't panic in a drop impl, so ignore errors when crossing the FFI + //boundary. let _ignored = unsafe { self.interp .with_ffi_boundary(|mrb| sys::mrb_sys_gc_arena_restore(mrb, idx)) diff --git i/artichoke-backend/src/globals.rs w/artichoke-backend/src/globals.rs index 08de0382ee..21c91b31c7 100644 --- i/artichoke-backend/src/globals.rs +++ w/artichoke-backend/src/globals.rs @@ -6,8 +6,8 @@ use crate::sys; use crate::value::Value; use crate::Artichoke; -// TODO: Handle invalid variable names. For now this is delegated to mruby. -// The parser in `spinoso-symbol` can handle this. +// TODO: Handle invalid variable names. For now this is delegated to mruby. The + //parser in `spinoso-symbol` can handle this. impl Globals for Artichoke { type Value = Value; diff --git i/artichoke-backend/src/interpreter.rs w/artichoke-backend/src/interpreter.rs index 7e5708bfb0..05fa7c5a08 100644 --- i/artichoke-backend/src/interpreter.rs +++ w/artichoke-backend/src/interpreter.rs @@ -63,9 +63,9 @@ pub fn interpreter_with_config(config: ReleaseMetadata<'_>) -> Result<Artichoke, } arena.restore(); - // mruby lazily initializes some core objects like `top_self` and generates - // a lot of garbage on start-up. Eagerly initialize the interpreter to - // provide predictable initialization behavior. + // mruby lazily initializes some core objects like `top_self` and generates a + //lot of garbage on start-up. Eagerly initialize the interpreter to provide + //predictable initialization behavior. interp.create_arena_savepoint()?.interp().eval(&[])?; if let GcState::Enabled = prior_gc_state { diff --git i/artichoke-backend/src/lib.rs w/artichoke-backend/src/lib.rs index 7d861dea63..7a64e5fa48 100644 --- i/artichoke-backend/src/lib.rs +++ w/artichoke-backend/src/lib.rs @@ -2,8 +2,8 @@ #![warn(clippy::pedantic)] #![warn(clippy::cargo)] #![allow(clippy::missing_errors_doc)] -#![allow(clippy::question_mark)] // https://github.com/rust-lang/rust-clippy/issues/8281 -#![allow(clippy::unnecessary_lazy_evaluations)] // https://github.com/rust-lang/rust-clippy/issues/8109 +#![allow(clippy::question_mark)] // //https://github.com/rust-lang/rust-clippy/issues/8281 + //https://github.com/rust-lang/rust-clippy/issues/8109 #![cfg_attr(test, allow(clippy::non_ascii_literal))] #![allow(unknown_lints)] // #![warn(missing_docs)] @@ -28,8 +28,8 @@ //! //! ### Evaling Source Code //! -//! The `artichoke-backend` interpreter implements -//! [`Eval` from `artichoke-core`](crate::core::Eval). +//! The `artichoke-backend` interpreter implements [`Eval` from +//! `artichoke-core`](crate::core::Eval). //! //! ```rust //! use artichoke_backend::prelude::*; @@ -68,8 +68,8 @@ //! //! ## Virtual File System and `Kernel#require` //! -//! The `artichoke-backend` interpreter includes an in-memory virtual -//! file system. The file system stores Ruby sources and Rust extension functions +//! The `artichoke-backend` interpreter includes an in-memory virtual file +//! system. The file system stores Ruby sources and Rust extension functions //! that are similar to MRI C extensions. //! //! The virtual file system enables applications built with `artichoke-backend` diff --git i/artichoke-backend/src/load_path.rs w/artichoke-backend/src/load_path.rs index 895c86c2c8..c3ec1ed071 100644 --- i/artichoke-backend/src/load_path.rs +++ w/artichoke-backend/src/load_path.rs @@ -32,8 +32,8 @@ pub use native::Native; /// Directory at which Ruby sources and extensions are stored in the virtual /// file system. /// -/// `RUBY_LOAD_PATH` is the default current working directory for -/// [`Memory`] file systems. +/// `RUBY_LOAD_PATH` is the default current working directory for [`Memory`] +/// file systems. /// /// [`Hybrid`] file systems locate the path on a [`Memory`] file system. #[cfg(not(windows))] @@ -42,8 +42,8 @@ pub const RUBY_LOAD_PATH: &str = "/artichoke/virtual_root/src/lib"; /// Directory at which Ruby sources and extensions are stored in the virtual /// file system. /// -/// `RUBY_LOAD_PATH` is the default current working directory for -/// [`Memory`] file systems. +/// `RUBY_LOAD_PATH` is the default current working directory for [`Memory`] +/// file systems. /// /// [`Hybrid`] file systems locate the path on a [`Memory`] file system. #[cfg(windows)] diff --git i/artichoke-backend/src/macros.rs w/artichoke-backend/src/macros.rs index d888165fae..c55affb3ef 100644 --- i/artichoke-backend/src/macros.rs +++ w/artichoke-backend/src/macros.rs @@ -19,8 +19,8 @@ macro_rules! emit_fatal_warning { // called when there are foreign C frames in the stack and panics are // either undefined behavior or will result in an abort. // - // Ensure the returned error is dropped so we don't leave anything on - // the stack in the event of a foreign unwind. + // Ensure the returned error is dropped so we don't leave anything on the + //stack in the event of a foreign unwind. let maybe_err = ::std::write!(::std::io::stderr(), "fatal[artichoke-backend]: "); drop(maybe_err); let maybe_err = ::std::writeln!(::std::io::stderr(), $($arg)+); @@ -96,8 +96,8 @@ pub mod argspec { pub const REST: &CStr = qed::const_cstr_from_str!("*\0"); } -/// Extract [`sys::mrb_value`]s from a [`sys::mrb_state`] to adapt a C -/// entry point to a Rust implementation of a Ruby function. +/// Extract [`sys::mrb_value`]s from a [`sys::mrb_state`] to adapt a C entry +/// point to a Rust implementation of a Ruby function. /// /// This macro exists because the mruby VM [does not validate argspecs] attached /// to native functions. diff --git i/artichoke-backend/src/module.rs w/artichoke-backend/src/module.rs index 7958fa7ba6..f9869de06a 100644 --- i/artichoke-backend/src/module.rs +++ w/artichoke-backend/src/module.rs @@ -137,13 +137,13 @@ impl Rclass { let is_defined_under = sys::mrb_const_defined_at(mrb, sys::mrb_sys_obj_value(scope.cast::<c_void>().as_mut()), self.sym); if is_defined_under { - // Enclosing scope exists. - // Module is defined under the enclosing scope. + // Enclosing scope exists. Module is defined under the enclosing + //scope. let module = sys::mrb_module_get_under(mrb, scope.as_mut(), module_name); NonNull::new(module) } else { - // Enclosing scope exists. - // Module is not defined under the enclosing scope. + // Enclosing scope exists. Module is not defined under the + //enclosing scope. None } } else { diff --git i/artichoke-backend/src/module/registry.rs w/artichoke-backend/src/module/registry.rs index 06587e7b0a..ac76beff82 100644 --- i/artichoke-backend/src/module/registry.rs +++ w/artichoke-backend/src/module/registry.rs @@ -233,9 +233,8 @@ where self.0.shrink_to_fit(); } - /// Shrinks the capacity of the registry with a lower bound. - /// The capacity will remain at least as large as both the length and the - /// supplied value. + /// Shrinks the capacity of the registry with a lower bound. The capacity + /// will remain at least as large as both the length and the supplied value. /// /// If the current capacity is less than the lower limit, this is a no-op. pub fn shrink_to(&mut self, min_capacity: usize) { diff --git i/artichoke-backend/src/sys/args.rs w/artichoke-backend/src/sys/args.rs index 0e80b577e4..a17bfc27da 100644 --- i/artichoke-backend/src/sys/args.rs +++ w/artichoke-backend/src/sys/args.rs @@ -259,7 +259,7 @@ pub mod specifiers { /// The following args specified are optional. pub const FOLLOWING_ARGS_OPTIONAL: &str = "|"; - /// Retrieve a Boolean indicating whether the previous optional argument - /// was given. + /// Retrieve a Boolean indicating whether the previous optional argument was + /// given. pub const PREVIOUS_OPTIONAL_ARG_GIVEN: &str = "?"; } diff --git i/artichoke-backend/src/sys/mod.rs w/artichoke-backend/src/sys/mod.rs index ed374e56ed..50234d7ae7 100644 --- i/artichoke-backend/src/sys/mod.rs +++ w/artichoke-backend/src/sys/mod.rs @@ -22,7 +22,8 @@ mod args; #[allow(clippy::all)] #[allow(clippy::pedantic)] #[allow(clippy::restriction)] -#[cfg_attr(test, allow(deref_nullptr))] // See https://github.com/rust-lang/rust-bindgen/issues/1651. +#[cfg_attr(test, allow(deref_nullptr))] // See + //https://github.com/rust-lang/rust-bindgen/issues/1651. mod ffi { include!(concat!(env!("OUT_DIR"), "/ffi.rs")); } diff --git i/artichoke-backend/src/sys/protect.rs w/artichoke-backend/src/sys/protect.rs index 25782f4c1b..262970f6cf 100644 --- i/artichoke-backend/src/sys/protect.rs +++ w/artichoke-backend/src/sys/protect.rs @@ -58,8 +58,8 @@ trait Protect { unsafe extern "C" fn run(mrb: *mut sys::mrb_state, data: sys::mrb_value) -> sys::mrb_value; } -// `Funcall` must be `Copy` because we may unwind past the frames in which -// it is used with `longjmp` which does not allow Rust to run destructors. +// `Funcall` must be `Copy` because we may unwind past the frames in which it is + //used with `longjmp` which does not allow Rust to run destructors. #[derive(Clone, Copy)] struct Funcall<'a> { slf: sys::mrb_value, @@ -76,9 +76,9 @@ impl<'a> Protect for Funcall<'a> { // allow Rust to run destructors. let Self { slf, func, args, block } = *Box::from_raw(ptr.cast::<Self>()); - // This will always unwrap because we've already checked that we - // have fewer than `MRB_FUNCALL_ARGC_MAX` args, which is less than - // `i64` max value. + // This will always unwrap because we've already checked that we have + //fewer than `MRB_FUNCALL_ARGC_MAX` args, which is less than `i64` max + //value. let argslen = if let Ok(argslen) = i64::try_from(args.len()) { argslen } else { @@ -93,8 +93,8 @@ impl<'a> Protect for Funcall<'a> { } } -// `Eval` must be `Copy` because we may unwind past the frames in which -// it is used with `longjmp` which does not allow Rust to run destructors. +// `Eval` must be `Copy` because we may unwind past the frames in which it is + //used with `longjmp` which does not allow Rust to run destructors. #[derive(Clone, Copy)] struct Eval<'a> { context: *mut sys::mrbc_context, @@ -106,8 +106,8 @@ impl<'a> Protect for Eval<'a> { let ptr = sys::mrb_sys_cptr_ptr(data); let Self { context, code } = *Box::from_raw(ptr.cast::<Self>()); - // Execute arbitrary ruby code, which may generate objects with C APIs - // if backed by Rust functions. + // Execute arbitrary ruby code, which may generate objects with C APIs if + //backed by Rust functions. // // `mrb_load_nstring_ctx` sets the "stack keep" field on the context // which means the most recent value returned by eval will always be @@ -116,8 +116,8 @@ impl<'a> Protect for Eval<'a> { } } -// `BlockYield` must be `Copy` because we may unwind past the frames in which -// it is used with `longjmp` which does not allow Rust to run destructors. +// `BlockYield` must be `Copy` because we may unwind past the frames in which it + //is used with `longjmp` which does not allow Rust to run destructors. #[derive(Clone, Copy)] struct BlockYield { block: sys::mrb_value, @@ -154,8 +154,8 @@ pub enum Range { Out, } -// `IsRange` must be `Copy` because we may unwind past the frames in which -// it is used with `longjmp` which does not allow Rust to run destructors. +// `IsRange` must be `Copy` because we may unwind past the frames in which it is + //used with `longjmp` which does not allow Rust to run destructors. #[derive(Default, Debug, Clone, Copy)] struct IsRange { value: sys::mrb_value, diff --git i/artichoke-backend/src/types.rs w/artichoke-backend/src/types.rs index dcd16e8013..620930e706 100644 --- i/artichoke-backend/src/types.rs +++ w/artichoke-backend/src/types.rs @@ -19,14 +19,14 @@ pub fn ruby_from_mrb_value(value: sys::mrb_value) -> Ruby { // in the `sys::mrb_vtype` enum C source. #[allow(clippy::match_same_arms)] match value.tt { - // `nil` is implemented with the `MRB_TT_FALSE` type tag in mruby - // (since both values are falsy). The difference is that Booleans are - // non-zero `Fixnum`s. + // `nil` is implemented with the `MRB_TT_FALSE` type tag in mruby (since + //both values are falsy). The difference is that Booleans are non-zero + //`Fixnum`s. MRB_TT_FALSE if unsafe { sys::mrb_sys_value_is_nil(value) } => Ruby::Nil, MRB_TT_FALSE => Ruby::Bool, - // `MRB_TT_FREE` is a marker type tag that indicates to the mruby - // VM that an object is unreachable and should be deallocated by the - // garbage collector. + // `MRB_TT_FREE` is a marker type tag that indicates to the mruby VM that + //an object is unreachable and should be deallocated by the garbage + //collector. MRB_TT_FREE => Ruby::Unreachable, MRB_TT_TRUE => Ruby::Bool, MRB_TT_INTEGER => Ruby::Fixnum, @@ -39,8 +39,8 @@ pub fn ruby_from_mrb_value(value: sys::mrb_value) -> Ruby { MRB_TT_OBJECT => Ruby::Object, MRB_TT_CLASS => Ruby::Class, MRB_TT_MODULE => Ruby::Module, - // `MRB_TT_ICLASS` is an internal use type tag meant for holding - // mixed in modules. + // `MRB_TT_ICLASS` is an internal use type tag meant for holding mixed in + //modules. MRB_TT_ICLASS => Ruby::Unreachable, // `MRB_TT_SCLASS` represents a singleton class, or a class that is // defined anonymously, e.g. `c1` or `c2` below: @@ -52,8 +52,8 @@ pub fn ruby_from_mrb_value(value: sys::mrb_value) -> Ruby { // c2 = (class <<cls; self; end) // ``` // - // mruby also uses the term singleton method to refer to methods - // defined on an object's eigenclass, e.g. `bar` below: + // mruby also uses the term singleton method to refer to methods defined + //on an object's eigenclass, e.g. `bar` below: // // ```ruby // class Foo; end @@ -70,12 +70,12 @@ pub fn ruby_from_mrb_value(value: sys::mrb_value) -> Ruby { MRB_TT_STRING => Ruby::String, MRB_TT_RANGE => Ruby::Range, MRB_TT_EXCEPTION => Ruby::Exception, - // NOTE(lopopolo): This might be an internal closure symbol table, - // rather than the `ENV` core object. + // NOTE(lopopolo): This might be an internal closure symbol table, rather + //than the `ENV` core object. MRB_TT_ENV => Ruby::Unreachable, - // `MRB_TT_DATA` is a type tag for wrapped C pointers. It is used - // to indicate that an `mrb_value` has an owned pointer to an - // external data structure stored in its `value.p` field. + // `MRB_TT_DATA` is a type tag for wrapped C pointers. It is used to + //indicate that an `mrb_value` has an owned pointer to an external data + //structure stored in its `value.p` field. MRB_TT_DATA => Ruby::Data, // NOTE(lopopolo): `Fiber`s are unimplemented in Artichoke. MRB_TT_FIBER => Ruby::Fiber, diff --git i/artichoke-backend/src/value.rs w/artichoke-backend/src/value.rs index d0b71ce81a..eaf26c7fb9 100644 --- i/artichoke-backend/src/value.rs +++ w/artichoke-backend/src/value.rs @@ -210,8 +210,8 @@ impl ValueCore for Value { } fn respond_to(&self, interp: &mut Self::Artichoke, method: &str) -> Result<bool, Self::Error> { - // Look up a method in the mruby VM's method table for this value's - // class object. + // Look up a method in the mruby VM's method table for this value's class + //object. let method_sym = if let Some(sym) = interp.check_interned_string(method)? { sym } else { diff --git i/artichoke-core/src/class_registry.rs w/artichoke-core/src/class_registry.rs index 56a59ca642..08d7b719a6 100644 --- i/artichoke-core/src/class_registry.rs +++ w/artichoke-core/src/class_registry.rs @@ -10,7 +10,8 @@ pub trait ClassRegistry { /// Concrete value type for boxed Ruby values. type Value; - /// Concrete error type for errors encountered when manipulating the class registry. + /// Concrete error type for errors encountered when manipulating the class + /// registry. type Error; /// Type representing a class specification. @@ -39,7 +40,8 @@ pub trait ClassRegistry { where T: Any; - /// Retrieve whether a class definition exists from the state bound to Rust type `T`. + /// Retrieve whether a class definition exists from the state bound to Rust + /// type `T`. /// /// # Errors /// diff --git i/artichoke-core/src/convert.rs w/artichoke-core/src/convert.rs index feb1e34755..c258a54543 100644 --- i/artichoke-core/src/convert.rs +++ w/artichoke-core/src/convert.rs @@ -4,8 +4,7 @@ /// /// Implementors may not allocate on the interpreter heap. /// -/// See [`core::convert::From`]. -/// See [`ConvertMut`]. +/// See [`core::convert::From`]. See [`ConvertMut`]. pub trait Convert<T, U> { /// Performs the infallible conversion. fn convert(&self, from: T) -> U; @@ -15,8 +14,7 @@ pub trait Convert<T, U> { /// /// Implementors may not allocate on the interpreter heap. /// -/// See [`core::convert::TryFrom`]. -/// See [`TryConvertMut`]. +/// See [`core::convert::TryFrom`]. See [`TryConvertMut`]. #[allow(clippy::module_name_repetitions)] pub trait TryConvert<T, U> { /// Error type for failed conversions. @@ -35,8 +33,7 @@ pub trait TryConvert<T, U> { /// /// Implementors may allocate on the interpreter heap. /// -/// See [`core::convert::From`]. -/// See [`Convert`]. +/// See [`core::convert::From`]. See [`Convert`]. #[allow(clippy::module_name_repetitions)] pub trait ConvertMut<T, U> { /// Performs the infallible conversion. @@ -47,8 +44,7 @@ pub trait ConvertMut<T, U> { /// /// Implementors may allocate on the interpreter heap. /// -/// See [`core::convert::TryFrom`]. -/// See [`TryConvert`]. +/// See [`core::convert::TryFrom`]. See [`TryConvert`]. pub trait TryConvertMut<T, U> { /// Error type for failed conversions. type Error; diff --git i/artichoke-core/src/debug.rs w/artichoke-core/src/debug.rs index c828cd82e2..fcd628ea3b 100644 --- i/artichoke-core/src/debug.rs +++ w/artichoke-core/src/debug.rs @@ -13,7 +13,8 @@ pub trait Debug { /// Some immediate types like `true`, `false`, and `nil` are shown by value /// rather than by class. /// - /// This function suppresses all errors and returns an empty string on error. + /// This function suppresses all errors and returns an empty string on + /// error. fn inspect_type_name_for_value(&mut self, value: Self::Value) -> &str; /// Return the class name for the given value's type. @@ -21,6 +22,7 @@ pub trait Debug { /// Even immediate types will have their class name spelled out. For /// example, calling this function with `nil` will return `"NilClass"`. /// - /// This function suppresses all errors and returns an empty string on error. + /// This function suppresses all errors and returns an empty string on + /// error. fn class_name_for_value(&mut self, value: Self::Value) -> &str; } diff --git i/artichoke-core/src/file.rs w/artichoke-core/src/file.rs index 722b894b1a..9dd45462c7 100644 --- i/artichoke-core/src/file.rs +++ w/artichoke-core/src/file.rs @@ -2,8 +2,8 @@ /// Rust extension hook that can be required. /// -/// `File`s are mounted in the interpreter file system and can modify interpreter -/// state when they are loaded. +/// `File`s are mounted in the interpreter file system and can modify +/// interpreter state when they are loaded. pub trait File { /// Concrete type for interpreter. type Artichoke; diff --git i/artichoke-core/src/globals.rs w/artichoke-core/src/globals.rs index b9217df7b5..c457cc430a 100644 --- i/artichoke-core/src/globals.rs +++ w/artichoke-core/src/globals.rs @@ -48,10 +48,10 @@ pub trait Globals { /// /// # Compatibility Notes /// - /// Getting a global that is currently may return `Ok(None)` even through - /// a non-existent global resolves to `nil` in the Ruby VM. Consult the - /// documentation on implementations of this trait for implementation-defined - /// behavior. + /// Getting a global that is currently may return `Ok(None)` even through a + /// non-existent global resolves to `nil` in the Ruby VM. Consult the + /// documentation on implementations of this trait for + /// implementation-defined behavior. /// /// # Errors /// diff --git i/artichoke-core/src/hash.rs w/artichoke-core/src/hash.rs index 520d8fb595..02323bb494 100644 --- i/artichoke-core/src/hash.rs +++ w/artichoke-core/src/hash.rs @@ -4,10 +4,10 @@ use core::hash::BuildHasher; /// A trait for retrieving an interpreter-global [`BuildHasher`]. /// -/// The [`BuildHasher`] associated with the interpreter is for creating instances -/// of [`Hasher`]. A `BuildHasher` is typically used (e.g., by `HashMap`) to -/// create [`Hasher`]s for each key such that they are hashed independently of -/// one another, since [`Hasher`]s contain state. +/// The [`BuildHasher`] associated with the interpreter is for creating +/// instances of [`Hasher`]. A `BuildHasher` is typically used (e.g., by +/// `HashMap`) to create [`Hasher`]s for each key such that they are hashed +/// independently of one another, since [`Hasher`]s contain state. /// /// By associating one [`BuildHasher`] with the interpreter, identical Ruby /// objects should hash identically, even if the interpreter's [`BuildHasher`] diff --git i/artichoke-core/src/intern.rs w/artichoke-core/src/intern.rs index ca0fa7333b..34f247e5dc 100644 --- i/artichoke-core/src/intern.rs +++ w/artichoke-core/src/intern.rs @@ -10,7 +10,7 @@ use alloc::borrow::Cow; /// Store and retrieve byte strings that have the same lifetime as the /// interpreter. /// -/// See the [Ruby `Symbol` type][symbol]. +/// See the [Ruby `Symbol` `Symbol` type][symbol]. /// /// [symbol]: https://ruby-doc.org/core-3.1.2/Symbol.html pub trait Intern { diff --git i/artichoke-core/src/lib.rs w/artichoke-core/src/lib.rs index 974e342434..a208fb35ff 100644 --- i/artichoke-core/src/lib.rs +++ w/artichoke-core/src/lib.rs @@ -90,11 +90,11 @@ //! //! # Examples //! -//! [`artichoke-backend`](https://artichoke.github.io/artichoke/artichoke_backend/) +//! //! [`artichoke-backend`](https://artichoke.github.io/artichoke/artichoke_backend/) //! is one implementation of the `artichoke-core` traits. //! -//! To use all the APIs defined in Artichoke Core, bring the traits into -//! scope by importing the prelude: +//! To use all the APIs defined in Artichoke Core, bring the traits into scope +//! by importing the prelude: //! //! ``` //! use artichoke_core::prelude::*; diff --git i/artichoke-core/src/load.rs w/artichoke-core/src/load.rs index 1494181b3f..94301cde46 100644 --- i/artichoke-core/src/load.rs +++ w/artichoke-core/src/load.rs @@ -79,10 +79,9 @@ impl From<Required> for bool { /// In Ruby, `load` is stateless. All sources passed to `load` are loaded for /// every method call. /// -/// Each time a file is loaded, it is parsed and executed by the -/// interpreter. If the file executes without raising an error, the file is -/// successfully loaded and Rust callers can expect a [`Loaded::Success`] -/// variant. +/// Each time a file is loaded, it is parsed and executed by the interpreter. If +/// the file executes without raising an error, the file is successfully loaded +/// and Rust callers can expect a [`Loaded::Success`] variant. /// /// If the file raises an exception as it is required, Rust callers can expect /// an `Err` variant. The file is not added to the set of loaded features. @@ -125,14 +124,14 @@ pub trait LoadSources { /// Concrete type for errors returned by `File::require`. type Exception; - /// Add a Rust extension hook to the virtual file system. A stub Ruby file is - /// added to the file system and [`File::require`] will dynamically define - /// Ruby items when invoked via `Kernel#require`. + /// Add a Rust extension hook to the virtual file system. A stub Ruby file + /// is added to the file system and [`File::require`] will dynamically + /// define Ruby items when invoked via `Kernel#require`. /// - /// If `path` is a relative path, the Ruby source is added to the - /// file system relative to `RUBY_LOAD_PATH`. If the path is absolute, the - /// file is placed directly on the file system. Ancestor directories are - /// created automatically. + /// If `path` is a relative path, the Ruby source is added to the file + /// system relative to `RUBY_LOAD_PATH`. If the path is absolute, the file + /// is placed directly on the file system. Ancestor directories are created + /// automatically. /// /// # Errors /// @@ -146,10 +145,10 @@ pub trait LoadSources { /// Add a Ruby source to the virtual file system. /// - /// If `path` is a relative path, the Ruby source is added to the - /// file system relative to `RUBY_LOAD_PATH`. If the path is absolute, the - /// file is placed directly on the file system. Ancestor directories are - /// created automatically. + /// If `path` is a relative path, the Ruby source is added to the file + /// system relative to `RUBY_LOAD_PATH`. If the path is absolute, the file + /// is placed directly on the file system. Ancestor directories are created + /// automatically. /// /// # Errors /// @@ -219,8 +218,8 @@ pub trait LoadSources { /// Require source located at the given path. /// - /// Query the underlying virtual file system for a source file and require it - /// onto the interpreter. This requires files with the following steps: + /// Query the underlying virtual file system for a source file and require + /// it onto the interpreter. This requires files with the following steps: /// /// 1. Retrieve and execute the extension hook, if any. /// 2. Read file contents and [`eval`](crate::eval::Eval) them. diff --git i/artichoke-core/src/module_registry.rs w/artichoke-core/src/module_registry.rs index 0e1c9a478d..9810582513 100644 --- i/artichoke-core/src/module_registry.rs +++ w/artichoke-core/src/module_registry.rs @@ -10,7 +10,8 @@ pub trait ModuleRegistry { /// Concrete value type for boxed Ruby values. type Value; - /// Concrete error type for errors encountered when manipulating the module registry. + /// Concrete error type for errors encountered when manipulating the module + /// registry. type Error; /// Type representing a module specification. @@ -27,7 +28,8 @@ pub trait ModuleRegistry { where T: Any; - /// Retrieve a module definition from the interpreter bound to Rust type `T`. + /// Retrieve a module definition from the interpreter bound to Rust type + /// `T`. /// /// This function returns `None` if type `T` has not had a module spec /// registered for it using [`ModuleRegistry::def_module`]. diff --git i/artichoke-core/src/parser.rs w/artichoke-core/src/parser.rs index ac49985def..b43e9bb2cc 100644 --- i/artichoke-core/src/parser.rs +++ w/artichoke-core/src/parser.rs @@ -66,8 +66,8 @@ pub trait Parser { pub enum IncrementLinenoError { /// An overflow occurred when incrementing the line number. /// - /// This error is reported based on the internal parser storage width - /// and contains the max value the parser can store. + /// This error is reported based on the internal parser storage width and + /// contains the max value the parser can store. Overflow(usize), } diff --git i/artichoke-core/src/regexp.rs w/artichoke-core/src/regexp.rs index 8ccc82cefd..7f2699f678 100644 --- i/artichoke-core/src/regexp.rs +++ w/artichoke-core/src/regexp.rs @@ -18,8 +18,8 @@ pub trait Regexp { /// /// Per the Ruby documentation: /// - /// > `$1`, `$2` and so on contain text matching first, second, etc capture - /// > group. + /// > `$1`, `$2` and so on contain text matching first, second, etc capture > + /// group. /// /// # Errors /// @@ -34,8 +34,8 @@ pub trait Regexp { /// /// Per the Ruby documentation: /// - /// > `$1`, `$2` and so on contain text matching first, second, etc capture - /// > group. + /// > `$1`, `$2` and so on contain text matching first, second, etc capture > + /// group. /// /// # Errors /// diff --git i/artichoke-core/src/types.rs w/artichoke-core/src/types.rs index 24a9f33231..5f6b8df6bd 100644 --- i/artichoke-core/src/types.rs +++ w/artichoke-core/src/types.rs @@ -91,8 +91,8 @@ pub enum Ruby { Object, /// Ruby `Proc` type. /// - /// `Proc` is a callable closure that captures lexical scope. `Proc`s can - /// be arbitrary arity and may or may not enforce this arity when called. + /// `Proc` is a callable closure that captures lexical scope. `Proc`s can be + /// arbitrary arity and may or may not enforce this arity when called. Proc, /// Ruby `Range` type. /// diff --git i/artichoke-load-path/src/rubylib.rs w/artichoke-load-path/src/rubylib.rs index 910fa6c856..6ad4ad3cd4 100644 --- i/artichoke-load-path/src/rubylib.rs +++ w/artichoke-load-path/src/rubylib.rs @@ -83,9 +83,9 @@ impl Rubylib { /// This source loader grants access to the host file system. The `Rubylib` /// loader does not support native extensions. /// - /// This method returns [`None`] if there are errors resolving the - /// `RUBYLIB` environment variable, if the `RUBYLIB` environment variable is - /// not set, if the current working directory cannot be retrieved, or if the + /// This method returns [`None`] if there are errors resolving the `RUBYLIB` + /// environment variable, if the `RUBYLIB` environment variable is not set, + /// if the current working directory cannot be retrieved, or if the /// `RUBYLIB` environment variable does not contain any paths. /// /// [current working directory]: env::current_dir diff --git i/mezzaluna-feature-loader/src/feature/mod.rs w/mezzaluna-feature-loader/src/feature/mod.rs index 7b0e97aa32..c1d1fa6e09 100644 --- i/mezzaluna-feature-loader/src/feature/mod.rs +++ w/mezzaluna-feature-loader/src/feature/mod.rs @@ -80,9 +80,9 @@ impl Feature { /// Get the path associated with this feature. /// - /// The path returned by this method is not guaranteed to be the same as - /// the path returned by [`LoadedFeatures::features`] since features may - /// be deduplicated by their physical location in the underlying loaders. + /// The path returned by this method is not guaranteed to be the same as the + /// path returned by [`LoadedFeatures::features`] since features may be + /// deduplicated by their physical location in the underlying loaders. /// /// # Examples /// diff --git i/mezzaluna-feature-loader/src/lib.rs w/mezzaluna-feature-loader/src/lib.rs index 73b7123a79..a77a5e17b7 100644 --- i/mezzaluna-feature-loader/src/lib.rs +++ w/mezzaluna-feature-loader/src/lib.rs @@ -1,7 +1,7 @@ #![warn(clippy::all)] #![warn(clippy::pedantic)] #![warn(clippy::cargo)] -#![allow(clippy::question_mark)] // https://github.com/rust-lang/rust-clippy/issues/8281 +#![allow(clippy::question_mark)] // //https://github.com/rust-lang/rust-clippy/issues/8281 #![allow(unknown_lints)] #![warn(missing_docs)] #![warn(missing_debug_implementations)] diff --git i/mezzaluna-feature-loader/src/loaders/disk.rs w/mezzaluna-feature-loader/src/loaders/disk.rs index f2f8e6846b..ac7ed49baf 100644 --- i/mezzaluna-feature-loader/src/loaders/disk.rs +++ w/mezzaluna-feature-loader/src/loaders/disk.rs @@ -123,8 +123,8 @@ impl Disk { /// This source loader grants access to the host file system. The `Disk` /// loader does not support native extensions. /// - /// This method returns [`None`] if the given `load_path` does not contain any - /// paths. + /// This method returns [`None`] if the given `load_path` does not contain + /// any paths. /// /// [`load_path`]: Self::load_path /// [`set_load_path`]: Self::set_load_path diff --git i/mezzaluna-feature-loader/src/loaders/memory.rs w/mezzaluna-feature-loader/src/loaders/memory.rs index d1ae68f143..182750e34c 100644 --- i/mezzaluna-feature-loader/src/loaders/memory.rs +++ w/mezzaluna-feature-loader/src/loaders/memory.rs @@ -100,8 +100,9 @@ impl Memory { /// /// # Panics /// - /// If the given path is an absolute path outside of this loader's [load - /// path], this function will panic. + /// If the given path is an absolute path outside of this loader's + /// [load + path], this function will panic. /// /// If the given path has already been inserted into the in-memory file /// system, this function will panic. @@ -150,8 +151,9 @@ impl Memory { /// /// # Panics /// - /// If the given path is an absolute path outside of this loader's [load - /// path], this function will panic. + /// If the given path is an absolute path outside of this loader's + /// [load + path], this function will panic. /// /// If the given path has already been inserted into the in-memory file /// system, this function will panic. diff --git i/mezzaluna-feature-loader/src/loaders/rubylib.rs w/mezzaluna-feature-loader/src/loaders/rubylib.rs index ab1d558de1..f4451a976f 100644 --- i/mezzaluna-feature-loader/src/loaders/rubylib.rs +++ w/mezzaluna-feature-loader/src/loaders/rubylib.rs @@ -79,9 +79,9 @@ impl Rubylib { /// This source loader grants access to the host file system. The `Rubylib` /// loader does not support native extensions. /// - /// This method returns [`None`] if there are errors resolving the - /// `RUBYLIB` environment variable, if the `RUBYLIB` environment variable is - /// not set, if the current working directory cannot be retrieved, or if the + /// This method returns [`None`] if there are errors resolving the `RUBYLIB` + /// environment variable, if the `RUBYLIB` environment variable is not set, + /// if the current working directory cannot be retrieved, or if the /// `RUBYLIB` environment variable does not contain any paths. /// /// [current working directory]: env::current_dir diff --git i/scolapasta-aref/src/lib.rs w/scolapasta-aref/src/lib.rs index d2c7cdbb07..8d517ce743 100644 --- i/scolapasta-aref/src/lib.rs +++ w/scolapasta-aref/src/lib.rs @@ -36,7 +36,8 @@ #![no_std] -/// Convert a signed aref offset to a `usize` index into the underlying container. +/// Convert a signed aref offset to a `usize` index into the underlying +/// container. /// /// Negative indexes are interpreted as indexing from the end of the container /// as long as their magnitude is less than the given length. diff --git i/scolapasta-int-parse/src/error.rs w/scolapasta-int-parse/src/error.rs index cfc3ff4ed3..a8f935e52f 100644 --- i/scolapasta-int-parse/src/error.rs +++ w/scolapasta-int-parse/src/error.rs @@ -170,8 +170,8 @@ pub enum InvalidRadixExceptionKind { /// /// [`ArgumentError`]: https://ruby-doc.org/core-3.1.2/ArgumentError.html ArgumentError, - /// If the given radix falls outside the range of an [`i32`], the error should - /// be mapped to a [`RangeError`]: + /// If the given radix falls outside the range of an [`i32`], the error + /// should be mapped to a [`RangeError`]: /// /// ```console /// [3.1.2] > begin; Integer "123", (2 ** 31 + 1); rescue => e; p e; end diff --git i/scolapasta-int-parse/src/lib.rs w/scolapasta-int-parse/src/lib.rs index 0045e39440..b0ab0e9e09 100644 --- i/scolapasta-int-parse/src/lib.rs +++ w/scolapasta-int-parse/src/lib.rs @@ -20,7 +20,8 @@ //! Parse a given byte string and optional radix into an [`i64`]. //! -//! [`parse`] wraps [`i64::from_str_radix`] by normalizing the input byte string: +//! [`parse`] wraps [`i64::from_str_radix`] by normalizing the input byte +//! string: //! //! - Assert the byte string is ASCII and does not contain NUL bytes. //! - Parse the radix to ensure it is in range and valid for the given input diff --git i/scolapasta-int-parse/src/parser.rs w/scolapasta-int-parse/src/parser.rs index 805c982159..2fe647ec76 100644 --- i/scolapasta-int-parse/src/parser.rs +++ w/scolapasta-int-parse/src/parser.rs @@ -46,9 +46,10 @@ impl<'a> State<'a> { // => 21 // ``` // - // In bases below 10, the string representation for large numbers will - // be longer, but pre-allocating for these uncommon cases seems wasteful. - // The `String` will reallocate if it needs to in these pathological cases. + // In bases below 10, the string representation for large numbers will be + //longer, but pre-allocating for these uncommon cases seems wasteful. + //The `String` will reallocate if it needs to in these pathological + //cases. const PRE_ALLOCATED_DIGIT_CAPACITY: usize = 21; match self { diff --git i/scolapasta-int-parse/src/radix.rs w/scolapasta-int-parse/src/radix.rs index 4a6f70b9ca..5d9b069809 100644 --- i/scolapasta-int-parse/src/radix.rs +++ w/scolapasta-int-parse/src/radix.rs @@ -595,10 +595,8 @@ mod tests { #[test] fn negative_radix_with_inline_base_and_leading_spaces_ignores() { - // [3.1.2] > Integer " 0123", -6 - // => 83 - // [3.1.2] > Integer " 0x123", -6 - // => 291 + // [3.1.2] > Integer " 0123", -6 => 83 [3.1.2] > Integer " 0x123", -6 => + //291 let subject = " 0123".try_into().unwrap(); let radix = Radix::try_base_from_str_and_i64(subject, -6).unwrap(); assert_eq!(radix, None); diff --git i/scolapasta-path/src/paths/windows.rs w/scolapasta-path/src/paths/windows.rs index 3756ddb478..7bc90bb8fc 100644 --- i/scolapasta-path/src/paths/windows.rs +++ w/scolapasta-path/src/paths/windows.rs @@ -196,8 +196,8 @@ mod tests { // ([]uint16=`[0xdcc0 0x2e 0x74 0x78 0x74]`) // ``` // - // and attempt to read it by calling `ioutil.ReadDir` and reading all - // the files that come back. + // and attempt to read it by calling `ioutil.ReadDir` and reading all the + //files that come back. // // See: https://github.com/golang/go/issues/32334#issue-450436484 diff --git i/scolapasta-string-escape/src/string.rs w/scolapasta-string-escape/src/string.rs index 315358bf22..cc40782ded 100644 --- i/scolapasta-string-escape/src/string.rs +++ w/scolapasta-string-escape/src/string.rs @@ -25,8 +25,7 @@ use crate::literal::{ascii_char_with_escape, Literal}; /// /// # Errors /// -/// This method only returns an error when the given writer returns an -/// error. +/// This method only returns an error when the given writer returns an error. pub fn format_debug_escape_into<W, T>(mut dest: W, message: T) -> fmt::Result where W: Write, diff --git i/spinoso-array/src/array/mod.rs w/spinoso-array/src/array/mod.rs index 0382db5e4d..f7818fc6ae 100644 --- i/spinoso-array/src/array/mod.rs +++ w/spinoso-array/src/array/mod.rs @@ -5,8 +5,8 @@ //! in `std`. [`SmallArray`](smallvec::SmallArray) is based on [`SmallVec`]. //! [`TinyArray`](tinyvec::TinyArray) is based on [`TinyVec`]. //! -//! The smallvec backend uses small vector optimization to store -//! [some elements][inline-capacity] inline without spilling to the heap. +//! The smallvec backend uses small vector optimization to store [some +//! elements][inline-capacity] inline without spilling to the heap. //! //! The `SmallArray` backend requires the `small-array` Cargo feature to be //! enabled. diff --git i/spinoso-array/src/array/smallvec/mod.rs w/spinoso-array/src/array/smallvec/mod.rs index b8ddb284cc..4e77346e9f 100644 --- i/spinoso-array/src/array/smallvec/mod.rs +++ w/spinoso-array/src/array/smallvec/mod.rs @@ -481,7 +481,7 @@ impl<T> SmallArray<T> { /// Returns a reference to an element at the index. /// - /// Unlike [`Vec`], this method does not support indexing with a range. See + /// Unlike [`Vec`], this method does not support indexing with a range. See /// the [`slice`](Self::slice) method for retrieving a sub-slice from the /// array. /// diff --git i/spinoso-array/src/array/tinyvec/mod.rs w/spinoso-array/src/array/tinyvec/mod.rs index 8c6aea2e71..3c87deaea0 100644 --- i/spinoso-array/src/array/tinyvec/mod.rs +++ w/spinoso-array/src/array/tinyvec/mod.rs @@ -476,7 +476,7 @@ where /// Returns a reference to an element at the index. /// - /// Unlike [`Vec`], this method does not support indexing with a range. See + /// Unlike [`Vec`], this method does not support indexing with a range. See /// the [`slice`](Self::slice) method for retrieving a sub-slice from the /// array. /// @@ -882,8 +882,8 @@ impl<T> TinyArray<T> where T: Clone + Default, { - /// Construct a new `TinyArray<T>` with length `len` and all elements set - /// to `default`. The `TinyArray` will have capacity at least `len`. + /// Construct a new `TinyArray<T>` with length `len` and all elements set to + /// `default`. The `TinyArray` will have capacity at least `len`. /// /// # Examples /// diff --git i/spinoso-array/src/array/vec/mod.rs w/spinoso-array/src/array/vec/mod.rs index 98385ed9cf..28e37ab67c 100644 --- i/spinoso-array/src/array/vec/mod.rs +++ w/spinoso-array/src/array/vec/mod.rs @@ -501,7 +501,7 @@ impl<T> Array<T> { /// Returns a reference to an element at the index. /// - /// Unlike [`Vec`], this method does not support indexing with a range. See + /// Unlike [`Vec`], this method does not support indexing with a range. See /// the [`slice`](Self::slice) method for retrieving a sub-slice from the /// array. /// diff --git i/spinoso-array/src/lib.rs w/spinoso-array/src/lib.rs index 6e2dc43963..3572bdb945 100644 --- i/spinoso-array/src/lib.rs +++ w/spinoso-array/src/lib.rs @@ -109,8 +109,8 @@ //! //! # Panics //! -//! `Array`s in this crate do not expose panicking slicing operations (except for -//! their [`Index`] and [`IndexMut`] implementations). Instead of panicking, +//! `Array`s in this crate do not expose panicking slicing operations (except +//! for their [`Index`] and [`IndexMut`] implementations). Instead of panicking, //! slicing APIs operate until the end of the vector or return `&[]`. Mutating //! APIs extend `Array`s on out of bounds access. //! diff --git i/spinoso-env/src/env/memory.rs w/spinoso-env/src/env/memory.rs index 7cf4477520..20838817b2 100644 --- i/spinoso-env/src/env/memory.rs +++ w/spinoso-env/src/env/memory.rs @@ -84,9 +84,9 @@ impl Memory { // https://doc.rust-lang.org/std/env/fn.set_var.html // https://doc.rust-lang.org/std/env/fn.remove_var.html // - // This function may panic if key is empty, contains an ASCII equals - // sign '=' or the NUL character '\0', or when the value contains the - // NUL character. + // This function may panic if key is empty, contains an ASCII equals sign + //'=' or the NUL character '\0', or when the value contains the NUL + //character. if name.is_empty() { // MRI accepts empty names on get and should always return `nil` // since empty names are invalid at the OS level. @@ -142,9 +142,9 @@ impl Memory { // https://doc.rust-lang.org/std/env/fn.set_var.html // https://doc.rust-lang.org/std/env/fn.remove_var.html // - // This function may panic if key is empty, contains an ASCII equals - // sign '=' or the NUL character '\0', or when the value contains the - // NUL character. + // This function may panic if key is empty, contains an ASCII equals sign + //'=' or the NUL character '\0', or when the value contains the NUL + //character. if name.find_byte(b'\0').is_some() { let message = "bad environment variable name: contains null byte"; Err(ArgumentError::with_message(message).into()) diff --git i/spinoso-env/src/env/system.rs w/spinoso-env/src/env/system.rs index ac25525d75..9b081534f8 100644 --- i/spinoso-env/src/env/system.rs +++ w/spinoso-env/src/env/system.rs @@ -80,7 +80,8 @@ impl System { /// /// # Implementation notes /// - /// This method accesses the host system's environment using [`env::var_os`]. + /// This method accesses the host system's environment using + /// [`env::var_os`]. /// /// # Examples /// @@ -109,9 +110,9 @@ impl System { // https://doc.rust-lang.org/std/env/fn.set_var.html // https://doc.rust-lang.org/std/env/fn.remove_var.html // - // This function may panic if key is empty, contains an ASCII equals - // sign '=' or the NUL character '\0', or when the value contains the - // NUL character. + // This function may panic if key is empty, contains an ASCII equals sign + //'=' or the NUL character '\0', or when the value contains the NUL + //character. if name.is_empty() { // MRI accepts empty names on get and should always return `nil` // since empty names are invalid at the OS level. @@ -140,8 +141,8 @@ impl System { /// /// # Implementation notes /// - /// This method accesses the host system's environment using [`env::set_var`] - /// and [`env::remove_var`]. + /// This method accesses the host system's environment using + /// [`env::set_var`] and [`env::remove_var`]. /// /// # Examples /// @@ -181,9 +182,9 @@ impl System { // https://doc.rust-lang.org/std/env/fn.set_var.html // https://doc.rust-lang.org/std/env/fn.remove_var.html // - // This function may panic if key is empty, contains an ASCII equals - // sign '=' or the NUL character '\0', or when the value contains the - // NUL character. + // This function may panic if key is empty, contains an ASCII equals sign + //'=' or the NUL character '\0', or when the value contains the NUL + //character. if name.find_byte(b'\0').is_some() { let message = "bad environment variable name: contains null byte"; Err(ArgumentError::with_message(message).into()) @@ -222,7 +223,8 @@ impl System { /// /// # Implementation notes /// - /// This method accesses the host system's environment using [`env::vars_os`]. + /// This method accesses the host system's environment using + /// [`env::vars_os`]. /// /// # Examples /// diff --git i/spinoso-env/src/lib.rs w/spinoso-env/src/lib.rs index 2d629d1212..3beef86c11 100644 --- i/spinoso-env/src/lib.rs +++ w/spinoso-env/src/lib.rs @@ -47,7 +47,8 @@ //! //! # Examples //! -//! Using the in-memory backend allows safely manipulating an emulated environment: +//! Using the in-memory backend allows safely manipulating an emulated +//! environment: //! //! ``` //! # use spinoso_env::Memory; @@ -186,7 +187,8 @@ impl error::Error for Error { /// /// Argument errors have an associated message. /// -/// This error corresponds to the [Ruby `ArgumentError` Exception class]. +/// This error corresponds to the [Ruby `ArgumentError` +/// `ArgumentError` Exception class]. /// /// # Examples /// diff --git i/spinoso-exception/src/core/argumenterror.rs w/spinoso-exception/src/core/argumenterror.rs index 276969db2c..86c5373a17 100644 --- i/spinoso-exception/src/core/argumenterror.rs +++ w/spinoso-exception/src/core/argumenterror.rs @@ -45,15 +45,14 @@ impl ArgumentError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"ArgumentError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `ArgumentError` Ruby exception with the given - /// message. + /// Construct a new, `ArgumentError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/encodingerror.rs w/spinoso-exception/src/core/encodingerror.rs index bce3a2e8a1..04313d98f3 100644 --- i/spinoso-exception/src/core/encodingerror.rs +++ w/spinoso-exception/src/core/encodingerror.rs @@ -45,15 +45,14 @@ impl EncodingError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"EncodingError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `EncodingError` Ruby exception with the given - /// message. + /// Construct a new, `EncodingError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/eoferror.rs w/spinoso-exception/src/core/eoferror.rs index 65fa57f16e..deef23437d 100644 --- i/spinoso-exception/src/core/eoferror.rs +++ w/spinoso-exception/src/core/eoferror.rs @@ -46,15 +46,14 @@ impl EOFError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"EOFError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `EOFError` Ruby exception with the given - /// message. + /// Construct a new, `EOFError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/exception.rs w/spinoso-exception/src/core/exception.rs index add5b17347..0fd23e2a1f 100644 --- i/spinoso-exception/src/core/exception.rs +++ w/spinoso-exception/src/core/exception.rs @@ -45,15 +45,14 @@ impl Exception { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"Exception"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `Exception` Ruby exception with the given - /// message. + /// Construct a new, `Exception` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/fatal.rs w/spinoso-exception/src/core/fatal.rs index ba05401c89..32e2846d41 100644 --- i/spinoso-exception/src/core/fatal.rs +++ w/spinoso-exception/src/core/fatal.rs @@ -45,15 +45,14 @@ impl Fatal { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"fatal"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `fatal` Ruby exception with the given - /// message. + /// Construct a new, `fatal` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/fibererror.rs w/spinoso-exception/src/core/fibererror.rs index 2a57d97961..e51fe9b622 100644 --- i/spinoso-exception/src/core/fibererror.rs +++ w/spinoso-exception/src/core/fibererror.rs @@ -45,15 +45,14 @@ impl FiberError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"FiberError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `FiberError` Ruby exception with the given - /// message. + /// Construct a new, `FiberError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/floatdomainerror.rs w/spinoso-exception/src/core/floatdomainerror.rs index d69464b0ff..92e7023cf4 100644 --- i/spinoso-exception/src/core/floatdomainerror.rs +++ w/spinoso-exception/src/core/floatdomainerror.rs @@ -45,9 +45,9 @@ impl FloatDomainError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"FloatDomainError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/frozenerror.rs w/spinoso-exception/src/core/frozenerror.rs index a68be40b7b..380c7358bf 100644 --- i/spinoso-exception/src/core/frozenerror.rs +++ w/spinoso-exception/src/core/frozenerror.rs @@ -45,15 +45,14 @@ impl FrozenError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"FrozenError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `FrozenError` Ruby exception with the given - /// message. + /// Construct a new, `FrozenError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/indexerror.rs w/spinoso-exception/src/core/indexerror.rs index dd61dcf331..970214c051 100644 --- i/spinoso-exception/src/core/indexerror.rs +++ w/spinoso-exception/src/core/indexerror.rs @@ -45,15 +45,14 @@ impl IndexError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"IndexError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `IndexError` Ruby exception with the given - /// message. + /// Construct a new, `IndexError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/interrupt.rs w/spinoso-exception/src/core/interrupt.rs index f9924ca1ea..3c7fa3cdae 100644 --- i/spinoso-exception/src/core/interrupt.rs +++ w/spinoso-exception/src/core/interrupt.rs @@ -45,15 +45,14 @@ impl Interrupt { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"Interrupt"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `Interrupt` Ruby exception with the given - /// message. + /// Construct a new, `Interrupt` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/ioerror.rs w/spinoso-exception/src/core/ioerror.rs index 0f926d29d9..96c214ece6 100644 --- i/spinoso-exception/src/core/ioerror.rs +++ w/spinoso-exception/src/core/ioerror.rs @@ -46,15 +46,14 @@ impl IOError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"IOError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `IOError` Ruby exception with the given - /// message. + /// Construct a new, `IOError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/keyerror.rs w/spinoso-exception/src/core/keyerror.rs index b5049f24c3..f46ec41427 100644 --- i/spinoso-exception/src/core/keyerror.rs +++ w/spinoso-exception/src/core/keyerror.rs @@ -45,15 +45,14 @@ impl KeyError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"KeyError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `KeyError` Ruby exception with the given - /// message. + /// Construct a new, `KeyError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/loaderror.rs w/spinoso-exception/src/core/loaderror.rs index 93b0ce489f..c281941608 100644 --- i/spinoso-exception/src/core/loaderror.rs +++ w/spinoso-exception/src/core/loaderror.rs @@ -45,15 +45,14 @@ impl LoadError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"LoadError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `LoadError` Ruby exception with the given - /// message. + /// Construct a new, `LoadError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/localjumperror.rs w/spinoso-exception/src/core/localjumperror.rs index 470a9430ed..d6cd4757a2 100644 --- i/spinoso-exception/src/core/localjumperror.rs +++ w/spinoso-exception/src/core/localjumperror.rs @@ -45,15 +45,14 @@ impl LocalJumpError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"LocalJumpError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `LocalJumpError` Ruby exception with the given - /// message. + /// Construct a new, `LocalJumpError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/nameerror.rs w/spinoso-exception/src/core/nameerror.rs index 6a1912d8c5..83c2f72c77 100644 --- i/spinoso-exception/src/core/nameerror.rs +++ w/spinoso-exception/src/core/nameerror.rs @@ -45,15 +45,14 @@ impl NameError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"NameError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `NameError` Ruby exception with the given - /// message. + /// Construct a new, `NameError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/nomemoryerror.rs w/spinoso-exception/src/core/nomemoryerror.rs index c629495ed3..581db5baeb 100644 --- i/spinoso-exception/src/core/nomemoryerror.rs +++ w/spinoso-exception/src/core/nomemoryerror.rs @@ -45,15 +45,14 @@ impl NoMemoryError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"NoMemoryError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `NoMemoryError` Ruby exception with the given - /// message. + /// Construct a new, `NoMemoryError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/nomethoderror.rs w/spinoso-exception/src/core/nomethoderror.rs index 51eb9cc97d..f56c0e8c03 100644 --- i/spinoso-exception/src/core/nomethoderror.rs +++ w/spinoso-exception/src/core/nomethoderror.rs @@ -45,15 +45,14 @@ impl NoMethodError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"NoMethodError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `NoMethodError` Ruby exception with the given - /// message. + /// Construct a new, `NoMethodError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/notimplementederror.rs w/spinoso-exception/src/core/notimplementederror.rs index e736cd650e..08e355513c 100644 --- i/spinoso-exception/src/core/notimplementederror.rs +++ w/spinoso-exception/src/core/notimplementederror.rs @@ -45,9 +45,9 @@ impl NotImplementedError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"NotImplementedError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/rangeerror.rs w/spinoso-exception/src/core/rangeerror.rs index 1559606ff4..eac71799fe 100644 --- i/spinoso-exception/src/core/rangeerror.rs +++ w/spinoso-exception/src/core/rangeerror.rs @@ -45,15 +45,14 @@ impl RangeError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"RangeError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `RangeError` Ruby exception with the given - /// message. + /// Construct a new, `RangeError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/regexperror.rs w/spinoso-exception/src/core/regexperror.rs index 05a44aca00..418358d434 100644 --- i/spinoso-exception/src/core/regexperror.rs +++ w/spinoso-exception/src/core/regexperror.rs @@ -45,15 +45,14 @@ impl RegexpError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"RegexpError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `RegexpError` Ruby exception with the given - /// message. + /// Construct a new, `RegexpError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/runtimeerror.rs w/spinoso-exception/src/core/runtimeerror.rs index 11eb629e7d..116690f327 100644 --- i/spinoso-exception/src/core/runtimeerror.rs +++ w/spinoso-exception/src/core/runtimeerror.rs @@ -45,15 +45,14 @@ impl RuntimeError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"RuntimeError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `RuntimeError` Ruby exception with the given - /// message. + /// Construct a new, `RuntimeError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/scripterror.rs w/spinoso-exception/src/core/scripterror.rs index c632f5a862..0322b08048 100644 --- i/spinoso-exception/src/core/scripterror.rs +++ w/spinoso-exception/src/core/scripterror.rs @@ -45,15 +45,14 @@ impl ScriptError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"ScriptError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `ScriptError` Ruby exception with the given - /// message. + /// Construct a new, `ScriptError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/securityerror.rs w/spinoso-exception/src/core/securityerror.rs index f8706531e6..20b5467c4f 100644 --- i/spinoso-exception/src/core/securityerror.rs +++ w/spinoso-exception/src/core/securityerror.rs @@ -45,15 +45,14 @@ impl SecurityError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SecurityError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `SecurityError` Ruby exception with the given - /// message. + /// Construct a new, `SecurityError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/signalexception.rs w/spinoso-exception/src/core/signalexception.rs index 77e01b511b..28246a0382 100644 --- i/spinoso-exception/src/core/signalexception.rs +++ w/spinoso-exception/src/core/signalexception.rs @@ -45,9 +45,9 @@ impl SignalException { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SignalException"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/standarderror.rs w/spinoso-exception/src/core/standarderror.rs index 310e75db53..c5e8c1de78 100644 --- i/spinoso-exception/src/core/standarderror.rs +++ w/spinoso-exception/src/core/standarderror.rs @@ -45,15 +45,14 @@ impl StandardError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"StandardError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `StandardError` Ruby exception with the given - /// message. + /// Construct a new, `StandardError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/stopiteration.rs w/spinoso-exception/src/core/stopiteration.rs index 9653d851a6..5309cffba2 100644 --- i/spinoso-exception/src/core/stopiteration.rs +++ w/spinoso-exception/src/core/stopiteration.rs @@ -45,15 +45,14 @@ impl StopIteration { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"StopIteration"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `StopIteration` Ruby exception with the given - /// message. + /// Construct a new, `StopIteration` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/syntaxerror.rs w/spinoso-exception/src/core/syntaxerror.rs index 84556aba49..89784c1715 100644 --- i/spinoso-exception/src/core/syntaxerror.rs +++ w/spinoso-exception/src/core/syntaxerror.rs @@ -45,15 +45,14 @@ impl SyntaxError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SyntaxError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `SyntaxError` Ruby exception with the given - /// message. + /// Construct a new, `SyntaxError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/systemcallerror.rs w/spinoso-exception/src/core/systemcallerror.rs index eac05c8bf7..8c1e8f8727 100644 --- i/spinoso-exception/src/core/systemcallerror.rs +++ w/spinoso-exception/src/core/systemcallerror.rs @@ -45,9 +45,9 @@ impl SystemCallError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SystemCallError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/systemexit.rs w/spinoso-exception/src/core/systemexit.rs index 96fcc43e02..cb1e6287cc 100644 --- i/spinoso-exception/src/core/systemexit.rs +++ w/spinoso-exception/src/core/systemexit.rs @@ -45,15 +45,14 @@ impl SystemExit { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SystemExit"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `SystemExit` Ruby exception with the given - /// message. + /// Construct a new, `SystemExit` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/systemstackerror.rs w/spinoso-exception/src/core/systemstackerror.rs index 1d7f73b580..dc767c7539 100644 --- i/spinoso-exception/src/core/systemstackerror.rs +++ w/spinoso-exception/src/core/systemstackerror.rs @@ -45,9 +45,9 @@ impl SystemStackError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"SystemStackError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/threaderror.rs w/spinoso-exception/src/core/threaderror.rs index 9f55fb12e2..90a09a19f5 100644 --- i/spinoso-exception/src/core/threaderror.rs +++ w/spinoso-exception/src/core/threaderror.rs @@ -45,15 +45,14 @@ impl ThreadError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"ThreadError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `ThreadError` Ruby exception with the given - /// message. + /// Construct a new, `ThreadError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/typeerror.rs w/spinoso-exception/src/core/typeerror.rs index f099b21a2d..591e1c9912 100644 --- i/spinoso-exception/src/core/typeerror.rs +++ w/spinoso-exception/src/core/typeerror.rs @@ -45,15 +45,14 @@ impl TypeError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"TypeError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } - /// Construct a new, `TypeError` Ruby exception with the given - /// message. + /// Construct a new, `TypeError` Ruby exception with the given message. /// /// # Examples /// diff --git i/spinoso-exception/src/core/uncaughtthrowerror.rs w/spinoso-exception/src/core/uncaughtthrowerror.rs index 3f7de347a8..9b35f69fba 100644 --- i/spinoso-exception/src/core/uncaughtthrowerror.rs +++ w/spinoso-exception/src/core/uncaughtthrowerror.rs @@ -45,9 +45,9 @@ impl UncaughtThrowError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"UncaughtThrowError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-exception/src/core/zerodivisionerror.rs w/spinoso-exception/src/core/zerodivisionerror.rs index 02d692e0cb..faad1c22a1 100644 --- i/spinoso-exception/src/core/zerodivisionerror.rs +++ w/spinoso-exception/src/core/zerodivisionerror.rs @@ -45,9 +45,9 @@ impl ZeroDivisionError { pub const fn new() -> Self { const DEFAULT_MESSAGE: &[u8] = b"ZeroDivisionError"; - // `Exception` objects initialized via (for example) - // `raise RuntimeError` or `RuntimeError.new` have `message` - // equal to the exception's class name. + // `Exception` objects initialized via (for example) `raise RuntimeError` + //or `RuntimeError.new` have `message` equal to the exception's class + //name. let message = Cow::Borrowed(DEFAULT_MESSAGE); Self { message } } diff --git i/spinoso-math/src/lib.rs w/spinoso-math/src/lib.rs index 7fbc09aea2..32099630a1 100644 --- i/spinoso-math/src/lib.rs +++ w/spinoso-math/src/lib.rs @@ -196,9 +196,9 @@ impl error::Error for Error { /// /// Domain errors have an associated message. /// -/// This error corresponds to the [Ruby `Math::DomainError` Exception class]. It -/// can be used to differentiate between [`NaN`](f64::NAN) inputs and what would -/// be `NaN` outputs. +/// This error corresponds to the [Ruby `Math::DomainError` +/// `Math::DomainError` Exception class]. It can be used to differentiate +/// between [`NaN`](f64::NAN) inputs and what would be `NaN` outputs. /// /// # Examples /// diff --git i/spinoso-math/src/math.rs w/spinoso-math/src/math.rs index cd8610803a..30428d9be8 100644 --- i/spinoso-math/src/math.rs +++ w/spinoso-math/src/math.rs @@ -208,8 +208,8 @@ pub fn atan2(value: f64, other: f64) -> f64 { /// /// # Errors /// -/// If the result of computing the inverse hyperbolic tangent is [`NAN`] -/// a domain error is returned. +/// If the result of computing the inverse hyperbolic tangent is [`NAN`] a +/// domain error is returned. /// /// [`NAN`]: f64::NAN #[inline] @@ -486,29 +486,12 @@ pub fn gamma(value: f64) -> Result<f64, DomainError> { // and might be an approximation so include a lookup table for as many `n` // as can fit in the float mantissa. const FACTORIAL_TABLE: [f64; 23] = [ - 1.0_f64, // fact(0) - 1.0, // fact(1) - 2.0, // fact(2) - 6.0, // fact(3) - 24.0, // fact(4) - 120.0, // fact(5) - 720.0, // fact(6) - 5_040.0, // fact(7) - 40_320.0, // fact(8) - 362_880.0, // fact(9) - 3_628_800.0, // fact(10) - 39_916_800.0, // fact(11) - 479_001_600.0, // fact(12) - 6_227_020_800.0, // fact(13) - 87_178_291_200.0, // fact(14) - 1_307_674_368_000.0, // fact(15) - 20_922_789_888_000.0, // fact(16) - 355_687_428_096_000.0, // fact(17) - 6_402_373_705_728_000.0, // fact(18) - 121_645_100_408_832_000.0, // fact(19) - 2_432_902_008_176_640_000.0, // fact(20) - 51_090_942_171_709_440_000.0, // fact(21) - 1_124_000_727_777_607_680_000.0, // fact(22) + 1.0_f64, // fact(0) fact(1) fact(2) fact(3) + //fact(4) fact(5) fact(6) fact(7) + //fact(8) fact(9) fact(10) fact(11) + //fact(12) fact(13) fact(14) fact(15) + //fact(16) fact(17) fact(18) fact(19) + //fact(20) fact(21) fact(22) ]; match value { value if value.is_infinite() && value.is_sign_negative() => Err(DomainError::with_message( diff --git i/spinoso-random/src/lib.rs w/spinoso-random/src/lib.rs index 710453fdf5..c3088ddd3f 100644 --- i/spinoso-random/src/lib.rs +++ w/spinoso-random/src/lib.rs @@ -265,7 +265,8 @@ impl error::Error for InitializeError {} /// This error is returned by [`urandom()`]. See its documentation for more /// details. /// -/// This error corresponds to the [Ruby `RuntimeError` Exception class]. +/// This error corresponds to the [Ruby `RuntimeError` +/// `RuntimeError` Exception class]. /// /// # Examples /// @@ -332,7 +333,8 @@ impl error::Error for UrandomError {} /// This error is returned by [`new_seed`]. See its documentation for more /// details. /// -/// This error corresponds to the [Ruby `RuntimeError` Exception class]. +/// This error corresponds to the [Ruby `RuntimeError` +/// `RuntimeError` Exception class]. /// /// # Examples /// @@ -397,7 +399,8 @@ impl error::Error for NewSeedError {} /// This error is returned by [`rand()`]. See its documentation for more /// details. /// -/// This error corresponds to the [Ruby `ArgumentError` Exception class]. +/// This error corresponds to the [Ruby `ArgumentError` +/// `ArgumentError` Exception class]. /// /// # Examples /// diff --git i/spinoso-regexp/src/debug.rs w/spinoso-regexp/src/debug.rs index 794adeb4ca..d55f583fc1 100644 --- i/spinoso-regexp/src/debug.rs +++ w/spinoso-regexp/src/debug.rs @@ -52,8 +52,7 @@ impl Delimiters { /// /// # Examples /// -/// UTF-8 regexp patterns and options are formatted in a debug -/// representation: +/// UTF-8 regexp patterns and options are formatted in a debug representation: /// /// ``` /// use spinoso_regexp::Debug; @@ -95,8 +94,9 @@ pub struct Debug<'a> { } impl<'a> Debug<'a> { - /// Construct a new `Debug` iterator with a regexp source, [options - /// modifiers], and [encoding modifiers]. + /// Construct a new `Debug` iterator with a regexp source, + /// [options + modifiers], and [encoding modifiers]. /// /// # Examples /// @@ -199,8 +199,8 @@ impl<'a> Iterator for Debug<'a> { self.source = &self.source[size..]; Some(ch) } - // Otherwise, we've gotten invalid UTF-8, which means this is not a - // printable char. + // Otherwise, we've gotten invalid UTF-8, which means this is not + //a printable char. None => { let (chunk, remainder) = self.source.split_at(size); self.source = remainder; diff --git i/spinoso-regexp/src/encoding.rs w/spinoso-regexp/src/encoding.rs index 879da6a0a7..179bf7d89a 100644 --- i/spinoso-regexp/src/encoding.rs +++ w/spinoso-regexp/src/encoding.rs @@ -32,10 +32,10 @@ impl error::Error for InvalidEncodingError {} /// The encoding of a Regexp literal. /// -/// Regexps are assumed to use the source encoding but literals may override -/// the encoding with a Regexp modifier. +/// Regexps are assumed to use the source encoding but literals may override the +/// encoding with a Regexp modifier. /// -/// See [`Regexp` encoding][regexp-encoding]. +/// See [`Regexp` [`Regexp` encoding][regexp-encoding]. /// /// [regexp-encoding]: https://ruby-doc.org/core-3.1.2/Regexp.html#class-Regexp-label-Encoding #[derive(Debug, Clone, Copy, PartialOrd, Ord)] diff --git i/spinoso-regexp/src/error.rs w/spinoso-regexp/src/error.rs index 00aceb73f6..656a74c08a 100644 --- i/spinoso-regexp/src/error.rs +++ w/spinoso-regexp/src/error.rs @@ -63,7 +63,8 @@ impl error::Error for Error { /// /// Argument errors have an associated message. /// -/// This error corresponds to the [Ruby `ArgumentError` Exception class]. +/// This error corresponds to the [Ruby `ArgumentError` +/// `ArgumentError` Exception class]. /// /// # Examples /// diff --git i/spinoso-regexp/src/lib.rs w/spinoso-regexp/src/lib.rs index d360452d54..71f4242047 100644 --- i/spinoso-regexp/src/lib.rs +++ w/spinoso-regexp/src/lib.rs @@ -3,8 +3,7 @@ #![warn(clippy::cargo)] #![cfg_attr(test, allow(clippy::non_ascii_literal))] #![allow(unknown_lints)] -// TODO: warn on missing docs once crate is API-complete. -// #![warn(missing_docs)] +// TODO: warn on missing docs once crate is API-complete. #![warn(missing_docs)] #![warn(missing_debug_implementations)] #![warn(missing_copy_implementations)] #![warn(rust_2018_idioms)] diff --git i/spinoso-regexp/src/options.rs w/spinoso-regexp/src/options.rs index b3d63ec0d5..72d0826007 100644 --- i/spinoso-regexp/src/options.rs +++ w/spinoso-regexp/src/options.rs @@ -112,8 +112,8 @@ impl From<u8> for Options { impl From<i64> for Options { /// Truncate the given `i64` to one byte and generate flags. /// - /// See `From<u8>`. For a conversion that fails if the given `i64` is - /// larger than [`u8::MAX`], see [`try_from_int`]. + /// See `From<u8>`. For a conversion that fails if the given `i64` is larger + /// than [`u8::MAX`], see [`try_from_int`]. /// /// [`try_from_int`]: Self::try_from_int fn from(flags: i64) -> Self { @@ -487,7 +487,8 @@ mod tests { #[test] fn make_options_all_opts() { - // `ALL_REGEXP_OPTS` is equivalent to `EXTENDED | IGNORECASE | MULTILINE` flags. + // `ALL_REGEXP_OPTS` is equivalent to + //`EXTENDED | IGNORECASE | MULTILINE` flags. let mut opts = Options::new(); opts.flags |= Flags::ALL_REGEXP_OPTS; assert_ne!(Options::from(Flags::EXTENDED), opts); diff --git i/spinoso-regexp/src/regexp/regex/utf8/mod.rs w/spinoso-regexp/src/regexp/regex/utf8/mod.rs index d623fa4c19..8fae30ccb3 100644 --- i/spinoso-regexp/src/regexp/regex/utf8/mod.rs +++ w/spinoso-regexp/src/regexp/regex/utf8/mod.rs @@ -215,7 +215,8 @@ impl Utf8 { Ok(pos) } - /// Check whether this regexp matches the given haystack starting at an offset. + /// Check whether this regexp matches the given haystack starting at an + /// offset. /// /// If the given offset is negative, it counts backward from the end of the /// haystack. @@ -392,9 +393,8 @@ mod tests { (B("xyz"), "xyz"), (B("🦀"), "🦀"), (B("铁锈"), "铁锈"), - // Invalid UTF-8 patterns are not supported 👇 - // (B(b"\xFF\xFE"), r"\xFF\xFE"), - // (B(b"abc \xFF\xFE xyz"), r"abc \xFF\xFE xyz"), + // Invalid UTF-8 patterns are not supported 👇 (B(b"\xFF\xFE"), + //r"\xFF\xFE"), (B(b"abc \xFF\xFE xyz"), r"abc \xFF\xFE xyz"), ]; for (pattern, display) in test_cases { let regexp = make(pattern, None, Encoding::None); @@ -411,7 +411,8 @@ mod tests { (B("\0"), r"/\x00/m", Options::from(Flags::MULTILINE)), (B(b"\x0a"), "/\n/", Options::default()), (B("\x0B"), "/\x0B/", Options::default()), - // NOTE: the control characters, not a raw string, are in the debug output. + // NOTE: the control characters, not a raw string, are in the debug + //output. (B("\n\r\t"), "/\n\r\t/", Options::default()), (B("\n\r\t"), "/\n\r\t/mix", Options::from(Flags::ALL_REGEXP_OPTS)), ( @@ -460,9 +461,9 @@ mod tests { ), (B("铁锈"), "/铁锈/m", Options::from(Flags::MULTILINE)), (B("铁+锈*"), "/铁+锈*/mix", Options::from(Flags::ALL_REGEXP_OPTS)), - // Invalid UTF-8 patterns are not supported 👇 - // (B(b"\xFF\xFE"), r"\xFF\xFE", Options::default()), - // (B(b"abc \xFF\xFE xyz"), r"abc \xFF\xFE xyz", Options::default()), + // Invalid UTF-8 patterns are not supported 👇 (B(b"\xFF\xFE"), + //r"\xFF\xFE", Options::default()), (B(b"abc \xFF\xFE xyz"), r"abc + //\xFF\xFE xyz", Options::default()), ]; for (pattern, debug, options) in test_cases { let regexp = make(pattern, Some(options), Encoding::None); diff --git i/spinoso-securerandom/src/lib.rs w/spinoso-securerandom/src/lib.rs index 2e1c5fc642..7aa7792c4d 100644 --- i/spinoso-securerandom/src/lib.rs +++ w/spinoso-securerandom/src/lib.rs @@ -127,7 +127,8 @@ pub enum Error { /// This may mean that too many random bytes were requested or the system is /// out of memory. /// - /// See [`TryReserveError`] and [`TryReserveErrorKind`] for more information. + /// See [`TryReserveError`] and [`TryReserveErrorKind`] for more + /// information. /// /// [`TryReserveErrorKind`]: std::collections::TryReserveErrorKind Memory(TryReserveError), @@ -182,7 +183,8 @@ impl error::Error for Error { /// /// Argument errors have an associated message. /// -/// This error corresponds to the [Ruby `ArgumentError` Exception class]. +/// This error corresponds to the [Ruby `ArgumentError` +/// `ArgumentError` Exception class]. /// /// # Examples /// @@ -472,15 +474,15 @@ pub fn random_bytes(len: Option<i64>) -> Result<Vec<u8>, Error> { pub enum Max { /// Generate floats in the range `[0, max)`. /// - /// If `max` is less than or equal to zero, the range defaults to floats - /// in `[0.0, 1.0]`. + /// If `max` is less than or equal to zero, the range defaults to floats in + /// `[0.0, 1.0]`. /// /// If `max` is [`NaN`](f64::NAN), an error is returned. Float(f64), /// Generate signed integers in the range `[0, max)`. /// - /// If `max` is less than or equal to zero, the range defaults to floats - /// in `[0.0, 1.0]`. + /// If `max` is less than or equal to zero, the range defaults to floats in + /// `[0.0, 1.0]`. Integer(i64), /// Generate floats in the range `[0.0, 1.0]`. None, @@ -679,8 +681,8 @@ pub fn urlsafe_base64(len: Option<i64>, padding: bool) -> Result<String, Error> /// Generate a random sequence of ASCII alphanumeric bytes. /// -/// If `len` is [`Some`] and non-negative, generate a [`String`] of `len` -/// random ASCII alphanumeric bytes. If `len` is [`None`], generate 16 random +/// If `len` is [`Some`] and non-negative, generate a [`String`] of `len` random +/// ASCII alphanumeric bytes. If `len` is [`None`], generate 16 random /// alphanumeric bytes. /// /// The returned [`Vec<u8>`](Vec) is guaranteed to contain only ASCII bytes. diff --git i/spinoso-securerandom/src/uuid.rs w/spinoso-securerandom/src/uuid.rs index 719128c0cc..0b5fa40974 100644 --- i/spinoso-securerandom/src/uuid.rs +++ w/spinoso-securerandom/src/uuid.rs @@ -17,8 +17,7 @@ use crate::{Error, RandomBytesError}; /// [RFC 4122, Section 4.1]: https://tools.ietf.org/html/rfc4122#section-4.1 const OCTETS: usize = 16; -// See the BNF from JDK 8 that confirms stringified UUIDs are 36 characters -// long: +// See the BNF from JDK 8 that confirms stringified UUIDs are 36 characters long: // // https://docs.oracle.com/javase/8/docs/api/java/util/UUID.html#toString-- const ENCODED_LENGTH: usize = 36; @@ -33,7 +32,8 @@ pub fn v4() -> Result<String, Error> { let mut bytes = [0; OCTETS]; get_random_bytes(OsRng, &mut bytes)?; - // Per RFC 4122, Section 4.4, set bits for version and `clock_seq_hi_and_reserved`. + // Per RFC 4122, Section 4.4, set bits for version and + //`clock_seq_hi_and_reserved`. bytes[6] = (bytes[6] & 0x0f) | 0x40; bytes[8] = (bytes[8] & 0x3f) | 0x80; diff --git i/spinoso-string/src/buf/nul_terminated_vec.rs w/spinoso-string/src/buf/nul_terminated_vec.rs index 8f7abe334a..dbe0afce4d 100644 --- i/spinoso-string/src/buf/nul_terminated_vec.rs +++ w/spinoso-string/src/buf/nul_terminated_vec.rs @@ -15,8 +15,7 @@ fn ensure_nul_terminated(vec: &mut Vec<u8>) { const NUL_BYTE: u8 = 0; let spare_capacity = vec.spare_capacity_mut(); - // If the vec has spare capacity, set the first and last bytes to NUL. - // See: + // If the vec has spare capacity, set the first and last bytes to NUL. See: // // - https://github.com/artichoke/artichoke/pull/1976#discussion_r932782264 // - https://github.com/artichoke/artichoke/blob/16c869a9ad29acfe143bfcc011917ef442ccac54/artichoke-backend/vendor/mruby/src/string.c#L36-L38 @@ -88,8 +87,8 @@ impl Deref for Buf { impl DerefMut for Buf { #[inline] fn deref_mut(&mut self) -> &mut Self::Target { - // SAFETY: the mutable reference given out is a slice, NOT the - // underlying `Vec`, so the allocation cannot change size. + // SAFETY: the mutable reference given out is a slice, NOT the underlying + //`Vec`, so the allocation cannot change size. &mut *self.inner } } diff --git i/spinoso-string/src/chars.rs w/spinoso-string/src/chars.rs index a54dbb5e20..50597bcef7 100644 --- i/spinoso-string/src/chars.rs +++ w/spinoso-string/src/chars.rs @@ -197,7 +197,8 @@ impl<'a> Iterator for ConventionallyUtf8<'a> { Some(ch) } else { let (invalid_utf8_bytes, remainder) = self.bytes.split_at(size); - // Invalid UTF-8 bytes are yielded as byte slices one byte at a time. + // Invalid UTF-8 bytes are yielded as byte slices one byte at a + //time. self.invalid_bytes = InvalidBytes::with_bytes(invalid_utf8_bytes); self.bytes = remainder; self.invalid_bytes.next() diff --git i/spinoso-string/src/codepoints.rs w/spinoso-string/src/codepoints.rs index 21ba542ffd..80d3aadc3b 100644 --- i/spinoso-string/src/codepoints.rs +++ w/spinoso-string/src/codepoints.rs @@ -118,9 +118,9 @@ impl InvalidCodepointError { // formatted as `0x...`. const MESSAGE_MAX_LENGTH: usize = 27 + 2 + mem::size_of::<u32>() * 2; let mut s = alloc::string::String::with_capacity(MESSAGE_MAX_LENGTH); - // In practice, the errors from `write!` below are safe to ignore - // because the `core::fmt::Write` impl for `String` will never panic - // and these `String`s will never approach `isize::MAX` bytes. + // In practice, the errors from `write!` below are safe to ignore because + //the `core::fmt::Write` impl for `String` will never panic and these + //`String`s will never approach `isize::MAX` bytes. // // See the `core::fmt::Display` impl for `InvalidCodepointError`. let _ = write!(s, "{}", self); diff --git i/spinoso-string/src/enc/mod.rs w/spinoso-string/src/enc/mod.rs index f152a7856d..a24896635c 100644 --- i/spinoso-string/src/enc/mod.rs +++ w/spinoso-string/src/enc/mod.rs @@ -93,9 +93,9 @@ impl Ord for EncodedString { // // Per the docs in `std`: // -// > In particular `Eq`, `Ord` and `Hash` must be equivalent for borrowed and -// > owned values: `x.borrow() == y.borrow()` should give the same result as -// > `x == y`. +// > In particular `Eq`, `Ord` and `Hash` must be equivalent for borrowed and > + //owned values: `x.borrow() == y.borrow()` should give the same result as > `x +//== y`. impl Borrow<[u8]> for EncodedString { #[inline] fn borrow(&self) -> &[u8] { diff --git i/spinoso-string/src/enc/utf8/mod.rs w/spinoso-string/src/enc/utf8/mod.rs index 02a633020c..b0b3a5ce66 100644 --- i/spinoso-string/src/enc/utf8/mod.rs +++ w/spinoso-string/src/enc/utf8/mod.rs @@ -208,25 +208,26 @@ impl Utf8String { #[inline] #[must_use] pub fn get_char(&self, index: usize) -> Option<&'_ [u8]> { - // Fast path rejection for indexes beyond bytesize, which is - // cheap to retrieve. + // Fast path rejection for indexes beyond bytesize, which is cheap to + //retrieve. if index >= self.len() { return None; } - // Fast path for trying to treat the conventionally UTF-8 string - // as entirely ASCII. + // Fast path for trying to treat the conventionally UTF-8 string as + //entirely ASCII. // - // If the string is either all ASCII or all ASCII for a prefix - // of the string that contains the range we wish to slice, - // fallback to byte slicing as in the ASCII and binary fast path. + // If the string is either all ASCII or all ASCII for a prefix of the + //string that contains the range we wish to slice, fallback to byte + //slicing as in the ASCII and binary fast path. let consumed = match self.inner.find_non_ascii_byte() { None => return self.inner.get(index..=index), Some(idx) if idx > index => return self.inner.get(index..=index), Some(idx) => idx, }; let mut slice = &self.inner[consumed..]; - // TODO: See if we can use `get_unchecked` as implemented in `fn char_len` - // Count of "characters" remaining until the `index`th character. + // TODO: See if we can use `get_unchecked` as implemented in + //`fn char_len` Count of "characters" remaining until the `index`th + //character. let mut remaining = index - consumed; // This loop will terminate when either: // @@ -237,43 +238,39 @@ impl Utf8String { // The loop will advance by at least one byte every iteration. loop { match bstr::decode_utf8(slice) { - // If we've run out of slice while trying to find the - // `index`th character, the lookup fails and we return `nil`. + // If we've run out of slice while trying to find the `index`th + //character, the lookup fails and we return `nil`. (_, 0) => return None, - // The next two arms mean we've reached the `index`th - // character. Either return the next valid UTF-8 - // character byte slice or, if the next bytes are an - // invalid UTF-8 sequence, the next byte. + // The next two arms mean we've reached the `index`th character. + //Either return the next valid UTF-8 character byte slice or, if + //the next bytes are an invalid UTF-8 sequence, the next byte. (Some(_), size) if remaining == 0 => return Some(&slice[..size]), - // Size is guaranteed to be positive per the first arm - // which means this slice operation will not panic. + // Size is guaranteed to be positive per the first arm which + //means this slice operation will not panic. (None, _) if remaining == 0 => return Some(&slice[..1]), - // We found a single UTF-8 encoded character keep track - // of the count and advance the substring to continue - // decoding. + // We found a single UTF-8 encoded character keep track of the + //count and advance the substring to continue decoding. (Some(_), size) => { slice = &slice[size..]; remaining -= 1; } - // The next two arms handle the case where we have - // encountered an invalid UTF-8 byte sequence. + // The next two arms handle the case where we have encountered an + //invalid UTF-8 byte sequence. // - // In this case, `decode_utf8` will return slices whose - // length is `1..=3`. The length of this slice is the - // number of "characters" we can advance the loop by. + // In this case, `decode_utf8` will return slices whose length is + //`1..=3`. The length of this slice is the number of + //"characters" we can advance the loop by. // - // If the invalid UTF-8 sequence contains more bytes - // than we have remaining to get to the `index`th char, - // then the target character is inside the invalid UTF-8 - // sequence. + // If the invalid UTF-8 sequence contains more bytes than we have + //remaining to get to the `index`th char, then the target + //character is inside the invalid UTF-8 sequence. (None, size) if remaining < size => return Some(&slice[remaining..=remaining]), - // If there are more characters remaining than the number - // of bytes yielded in the invalid UTF-8 byte sequence, - // count `size` bytes and advance the slice to continue - // decoding. + // If there are more characters remaining than the number of + //bytes yielded in the invalid UTF-8 byte sequence, count `size` + //bytes and advance the slice to continue decoding. (None, size) => { slice = &slice[size..]; remaining -= size; @@ -328,8 +325,8 @@ impl Utf8String { return Some(&[]); } - // If the start of the range is beyond the character count of the - // string, the whole lookup must fail. + // If the start of the range is beyond the character count of the string, + //the whole lookup must fail. // // Slice lookups where the start is just beyond the last character index // always return an empty slice. @@ -395,24 +392,23 @@ impl Utf8String { _ => {} } - // Fast path for trying to treat the conventionally UTF-8 string - // as entirely ASCII. + // Fast path for trying to treat the conventionally UTF-8 string as + //entirely ASCII. // - // If the string is either all ASCII or all ASCII for the subset - // of the string we wish to slice, fallback to byte slicing as in - // the ASCII and binary fast path. + // If the string is either all ASCII or all ASCII for the subset of the + //string we wish to slice, fallback to byte slicing as in the ASCII and + //binary fast path. // - // Perform the same saturate-to-end slicing mechanism if `end` - // is beyond the character length of the string. + // Perform the same saturate-to-end slicing mechanism if `end` is beyond + //the character length of the string. let consumed = match self.inner.find_non_ascii_byte() { - // The entire string is ASCII, so byte indexing <=> char - // indexing. + // The entire string is ASCII, so byte indexing <=> char indexing. None => return self.inner.get(start..end).or_else(|| self.inner.get(start..)), - // The whole substring we are interested in is ASCII, so - // byte indexing is still valid. + // The whole substring we are interested in is ASCII, so byte + //indexing is still valid. Some(non_ascii_byte_offset) if non_ascii_byte_offset > end => return self.get(start..end), - // We turn non-ASCII somewhere inside before the substring - // we're interested in, so consume that much. + // We turn non-ASCII somewhere inside before the substring we're + //interested in, so consume that much. Some(non_ascii_byte_offset) if non_ascii_byte_offset <= start => non_ascii_byte_offset, // This means we turn non-ASCII somewhere inside the substring. // Consume up to start. @@ -436,12 +432,10 @@ impl Utf8String { // `start`th character, the lookup fails and we return `nil`. (_, 0) => return None, - // We found a single UTF-8 encoded character. keep track - // of the count and advance the substring to continue - // decoding. + // We found a single UTF-8 encoded character. keep track of + //the count and advance the substring to continue decoding. // - // If there's only one more to go, advance and stop the - // loop. + // If there's only one more to go, advance and stop the loop. (Some(_), size) if remaining == 1 => break &slice[size..], // Otherwise, keep track of the character we observed and // advance the slice to continue decoding. @@ -457,14 +451,13 @@ impl Utf8String { // length is `1..=3`. The length of this slice is the // number of "characters" we can advance the loop by. // - // If the invalid UTF-8 sequence contains more bytes - // than we have remaining to get to the `start`th char, - // then we can break the loop directly. + // If the invalid UTF-8 sequence contains more bytes than we + //have remaining to get to the `start`th char, then we can + //break the loop directly. (None, size) if remaining <= size => break &slice[remaining..], - // If there are more characters remaining than the number - // of bytes yielded in the invalid UTF-8 byte sequence, - // count `size` bytes and advance the slice to continue - // decoding. + // If there are more characters remaining than the number of + //bytes yielded in the invalid UTF-8 byte sequence, count + //`size` bytes and advance the slice to continue decoding. (None, size) => { slice = &slice[size..]; remaining -= size; @@ -475,12 +468,11 @@ impl Utf8String { // Scan the slice for the span of characters we want to return. remaining = end - start; - // We know `remaining` is not zero because we fast-pathed that - // case above. + // We know `remaining` is not zero because we fast-pathed that case + //above. debug_assert!(remaining > 0); - // keep track of the start of the substring from the `start`th - // character. + // keep track of the start of the substring from the `start`th character. let substr = slice; // This loop will terminate when either: @@ -496,38 +488,36 @@ impl Utf8String { // character, saturate the slice to the end of the string. (_, 0) => return Some(substr), - // We found a single UTF-8 encoded character. keep track - // of the count and advance the substring to continue - // decoding. + // We found a single UTF-8 encoded character. keep track of the + //count and advance the substring to continue decoding. // - // If there's only one more to go, advance and stop the - // loop. + // If there's only one more to go, advance and stop the loop. (Some(_), size) if remaining == 1 => { - // Push `endth` more positive because this match has - // the effect of shrinking `slice`. + // Push `endth` more positive because this match has the + //effect of shrinking `slice`. let endth = substr.len() - slice.len() + size; return Some(&substr[..endth]); } - // Otherwise, keep track of the character we observed and - // advance the slice to continue decoding. + // Otherwise, keep track of the character we observed and advance + //the slice to continue decoding. (Some(_), size) => { slice = &slice[size..]; remaining -= 1; } - // The next two arms handle the case where we have - // encountered an invalid UTF-8 byte sequence. + // The next two arms handle the case where we have encountered an + //invalid UTF-8 byte sequence. // - // In this case, `decode_utf8` will return slices whose - // length is `1..=3`. The length of this slice is the - // number of "characters" we can advance the loop by. + // In this case, `decode_utf8` will return slices whose length is + //`1..=3`. The length of this slice is the number of + //"characters" we can advance the loop by. // - // If the invalid UTF-8 sequence contains more bytes - // than we have remaining to get to the `end`th char, - // then we can break the loop directly. + // If the invalid UTF-8 sequence contains more bytes than we have + //remaining to get to the `end`th char, then we can break the + //loop directly. (None, size) if remaining <= size => { - // For an explanation of this arithmetic: - // If we're trying to slice: + // For an explanation of this arithmetic: If we're trying to + //slice: // // ``` // s = "a\xF0\x9F\x87" @@ -548,10 +538,9 @@ impl Utf8String { let endth = substr.len() - slice.len() + remaining; return Some(&substr[..endth]); } - // If there are more characters remaining than the number - // of bytes yielded in the invalid UTF-8 byte sequence, - // count `size` bytes and advance the slice to continue - // decoding. + // If there are more characters remaining than the number of + //bytes yielded in the invalid UTF-8 byte sequence, count `size` + //bytes and advance the slice to continue decoding. (None, size) => { slice = &slice[size..]; remaining -= size; @@ -657,19 +646,18 @@ impl Utf8String { // Turkic or ASCII-only modes #[inline] pub fn make_capitalized(&mut self) { - // This allocation assumes that in the common case, capitalizing - // and lower-casing `char`s do not change the length of the - // `String`. + // This allocation assumes that in the common case, capitalizing and + //lower-casing `char`s do not change the length of the `String`. // - // Use a `Vec` here instead of a `Buf` to ensure at most one alloc - // fix-up happens instead of alloc fix-ups being O(chars). + // Use a `Vec` here instead of a `Buf` to ensure at most one alloc fix-up + //happens instead of alloc fix-ups being O(chars). let mut replacement = Vec::with_capacity(self.len()); let mut bytes = self.inner.as_slice(); match bstr::decode_utf8(bytes) { (Some(ch), size) => { - // Converting a UTF-8 character to uppercase may yield - // multiple codepoints. + // Converting a UTF-8 character to uppercase may yield multiple + //codepoints. for ch in ch.to_uppercase() { replacement.push_char(ch); } @@ -686,8 +674,8 @@ impl Utf8String { while !bytes.is_empty() { let (ch, size) = bstr::decode_utf8(bytes); if let Some(ch) = ch { - // Converting a UTF-8 character to lowercase may yield - // multiple codepoints. + // Converting a UTF-8 character to lowercase may yield multiple + //codepoints. for ch in ch.to_lowercase() { replacement.push_char(ch); } @@ -703,19 +691,19 @@ impl Utf8String { #[inline] pub fn make_lowercase(&mut self) { - // This allocation assumes that in the common case, lower-casing - // `char`s do not change the length of the `String`. + // This allocation assumes that in the common case, lower-casing `char`s + //do not change the length of the `String`. // - // Use a `Vec` here instead of a `Buf` to ensure at most one alloc - // fix-up happens instead of alloc fix-ups being O(chars). + // Use a `Vec` here instead of a `Buf` to ensure at most one alloc fix-up + //happens instead of alloc fix-ups being O(chars). let mut replacement = Vec::with_capacity(self.len()); let mut bytes = self.inner.as_slice(); while !bytes.is_empty() { let (ch, size) = bstr::decode_utf8(bytes); if let Some(ch) = ch { - // Converting a UTF-8 character to lowercase may yield - // multiple codepoints. + // Converting a UTF-8 character to lowercase may yield multiple + //codepoints. for ch in ch.to_lowercase() { replacement.push_char(ch); } @@ -731,19 +719,19 @@ impl Utf8String { #[inline] pub fn make_uppercase(&mut self) { - // This allocation assumes that in the common case, upper-casing - // `char`s do not change the length of the `String`. + // This allocation assumes that in the common case, upper-casing `char`s + //do not change the length of the `String`. // - // Use a `Vec` here instead of a `Buf` to ensure at most one alloc - // fix-up happens instead of alloc fix-ups being O(chars). + // Use a `Vec` here instead of a `Buf` to ensure at most one alloc fix-up + //happens instead of alloc fix-ups being O(chars). let mut replacement = Vec::with_capacity(self.len()); let mut bytes = self.inner.as_slice(); while !bytes.is_empty() { let (ch, size) = bstr::decode_utf8(bytes); if let Some(ch) = ch { - // Converting a UTF-8 character to lowercase may yield - // multiple codepoints. + // Converting a UTF-8 character to lowercase may yield multiple + //codepoints. for ch in ch.to_uppercase() { replacement.push_char(ch); } @@ -795,8 +783,8 @@ impl Utf8String { // FIXME: this allocation can go away if `ConventionallyUtf8` impls // `DoubleEndedIterator`. let chars = ConventionallyUtf8::from(&self.inner[..]).collect::<Vec<_>>(); - // Use a `Vec` here instead of a `Buf` to ensure at most one alloc - // fix-up happens instead of alloc fix-ups being O(chars). + // Use a `Vec` here instead of a `Buf` to ensure at most one alloc fix-up + //happens instead of alloc fix-ups being O(chars). let mut replacement = Vec::with_capacity(self.inner.len()); for &bytes in chars.iter().rev() { replacement.extend_from_slice(bytes); @@ -949,7 +937,7 @@ mod tests { #[test] fn char_len_utf8() { - // https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L147-L157 + // //https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L147-L157 let s = Utf8String::from("Ω≈ç√∫˜µ≤≥÷"); assert_eq!(s.char_len(), 10); let s = Utf8String::from("åß∂ƒ©˙∆˚¬…æ"); @@ -978,14 +966,14 @@ mod tests { // effectively cause rendering issues or character-length issues to // validate product globalization readiness. // - // https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L202-L224 + // //https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L202-L224 let s = Utf8String::from("表ポあA鷗ŒéB逍Üߪąñ丂㐀𠀀"); assert_eq!(s.char_len(), 17); } #[test] fn char_len_two_byte_chars() { - // https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L188-L196 + // //https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L188-L196 let s = Utf8String::from("田中さんにあげて下さい"); assert_eq!(s.char_len(), 11); let s = Utf8String::from("パーティーへ行かないか"); @@ -1008,19 +996,21 @@ mod tests { #[test] fn char_len_space_chars() { - // Whitespace: all the characters with category `Zs`, `Zl`, or `Zp` (in Unicode - // version 8.0.0), plus `U+0009 (HT)`, `U+000B (VT)`, `U+000C (FF)`, `U+0085 (NEL)`, - // and `U+200B` (ZERO WIDTH SPACE), which are in the C categories but are often - // treated as whitespace in some contexts. + // Whitespace: all the characters with category `Zs`, `Zl`, or `Zp` (in + //Unicode version 8.0.0), plus `U+0009 (HT)`, `U+000B (VT)`, + //`U+000C (FF)`, `U+0085 (NEL)`, and `U+200B` (ZERO WIDTH SPACE), which + //are in the C categories but are often treated as whitespace in some + //contexts. // - // This file unfortunately cannot express strings containing - // `U+0000`, `U+000A`, or `U+000D` (`NUL`, `LF`, `CR`). + // This file unfortunately cannot express strings containing `U+0000`, + //`U+000A`, or `U+000D` (`NUL`, `LF`, `CR`). // // The next line may appear to be blank or mojibake in some viewers. // - // The next line may be flagged for "trailing whitespace" in some viewers. + // The next line may be flagged for "trailing whitespace" in some + //viewers. // - // https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L131 + // //https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L131 let bytes = " …             ​

    "; let s = Utf8String::from(bytes); @@ -1097,7 +1087,8 @@ mod tests { // Changes length when case changes // https://github.com/minimaxir/big-list-of-naughty-strings/blob/894882e7/blns.txt#L226-L232 let varying_length = Utf8String::from("zȺȾ"); - // There doesn't appear to be any RTL scripts that have cases, but might as well make sure + // There doesn't appear to be any RTL scripts that have cases, but might + //as well make sure let rtl = Utf8String::from("مرحبا الخرشوف"); let capitalize: fn(&Utf8String) -> Utf8String = |value: &Utf8String| { @@ -1184,16 +1175,17 @@ mod tests { // // Per `bstr`: // - // The bytes `\xF0\x9F\x87` could lead to a valid UTF-8 sequence, but 3 of them - // on their own are invalid. Only one replacement codepoint is substituted, - // which demonstrates the "substitution of maximal subparts" strategy. + // The bytes `\xF0\x9F\x87` could lead to a valid UTF-8 sequence, but 3 + //of them on their own are invalid. Only one replacement codepoint is + //substituted, which demonstrates the "substitution of maximal subparts" + //strategy. let s = Utf8String::from(b"\xF0\x9F\x87"); assert_eq!(s.chr(), b"\xF0"); } #[test] fn get_char_slice_valid_range() { - let s = Utf8String::from(b"a\xF0\x9F\x92\x8E\xFF".to_vec()); // "a💎\xFF" + let s = Utf8String::from(b"a\xF0\x9F\x92\x8E\xFF".to_vec()); // //"a💎\xFF" assert_eq!(s.get_char_slice(0..0), Some(&b""[..])); assert_eq!(s.get_char_slice(0..1), Some(&b"a"[..])); assert_eq!(s.get_char_slice(0..2), Some("a💎".as_bytes())); @@ -1207,7 +1199,7 @@ mod tests { #[test] #[allow(clippy::reversed_empty_ranges)] fn get_char_slice_invalid_range() { - let s = Utf8String::from(b"a\xF0\x9F\x92\x8E\xFF".to_vec()); // "a💎\xFF" + let s = Utf8String::from(b"a\xF0\x9F\x92\x8E\xFF".to_vec()); // //"a💎\xFF" assert_eq!(s.get_char_slice(4..5), None); assert_eq!(s.get_char_slice(4..1), None); assert_eq!(s.get_char_slice(3..1), Some(&b""[..])); diff --git i/spinoso-string/src/impls.rs w/spinoso-string/src/impls.rs index 4b445c16d8..c232e9dae2 100644 --- i/spinoso-string/src/impls.rs +++ w/spinoso-string/src/impls.rs @@ -206,15 +206,15 @@ impl DerefMut for String { } } -// This impl of `Borrow<[u8]>` is permissible due to the behavior of -// `PartialEq`, `Hash`, and `Ord` impls on `String` which only rely on the byte -// slice contents in the underlying encoded string. +// This impl of `Borrow<[u8]>` is permissible due to the behavior of `PartialEq`, + //`Hash`, and `Ord` impls on `String` which only rely on the byte slice contents +//in the underlying encoded string. // // Per the docs in `std`: // -// > In particular `Eq`, `Ord` and `Hash` must be equivalent for borrowed and -// > owned values: `x.borrow() == y.borrow()` should give the same result as -// > `x == y`. +// > In particular `Eq`, `Ord` and `Hash` must be equivalent for borrowed and > + //owned values: `x.borrow() == y.borrow()` should give the same result as > `x +//== y`. impl Borrow<[u8]> for String { #[inline] fn borrow(&self) -> &[u8] { diff --git i/spinoso-string/src/inspect.rs w/spinoso-string/src/inspect.rs index 5be7b04e2b..1d02b5046c 100644 --- i/spinoso-string/src/inspect.rs +++ w/spinoso-string/src/inspect.rs @@ -87,9 +87,9 @@ impl<'a> Inspect<'a> { /// Write an `Inspect` iterator into the given destination using the debug /// representation of the byte buffer associated with a source `String`. /// - /// This formatter writes content like `"spinoso"` and `"invalid-\xFF-utf8"`. - /// To see example output of the underlying iterator, see the `Inspect` - /// documentation. + /// This formatter writes content like `"spinoso"` and + /// `"invalid-\xFF-utf8"`. To see example output of the underlying iterator, + /// see the `Inspect` documentation. /// /// To write binary output, use [`write_into`], which requires the **std** /// feature to be activated. @@ -134,9 +134,9 @@ impl<'a> Inspect<'a> { /// Write an `Inspect` iterator into the given destination using the debug /// representation of the byte buffer associated with a source `String`. /// - /// This formatter writes content like `"spinoso"` and `"invalid-\xFF-utf8"`. - /// To see example output of the underlying iterator, see the `Inspect` - /// documentation. + /// This formatter writes content like `"spinoso"` and + /// `"invalid-\xFF-utf8"`. To see example output of the underlying iterator, + /// see the `Inspect` documentation. /// /// To write to a [formatter], use [`format_into`]. /// diff --git i/spinoso-string/src/iter.rs w/spinoso-string/src/iter.rs index 67cfce8623..b6b61e4225 100644 --- i/spinoso-string/src/iter.rs +++ w/spinoso-string/src/iter.rs @@ -143,8 +143,8 @@ impl<'a> IterMut<'a> { /// Views the underlying data as a subslice of the original data. /// - /// To avoid creating `&mut` references that alias, this is forced to consume - /// the iterator. + /// To avoid creating `&mut` references that alias, this is forced to + /// consume the iterator. /// /// # Examples /// diff --git i/spinoso-string/src/lib.rs w/spinoso-string/src/lib.rs index 87bb579728..3b8f0be089 100644 --- i/spinoso-string/src/lib.rs +++ w/spinoso-string/src/lib.rs @@ -3,8 +3,7 @@ #![warn(clippy::cargo)] #![cfg_attr(test, allow(clippy::non_ascii_literal))] #![allow(unknown_lints)] -// TODO: warn on missing docs once crate is API-complete. -// #![warn(missing_docs)] +// TODO: warn on missing docs once crate is API-complete. #![warn(missing_docs)] #![warn(missing_debug_implementations)] #![warn(missing_copy_implementations)] #![warn(rust_2018_idioms)] @@ -282,8 +281,8 @@ impl String { /// If `len` is greater than the string's current length, this has no /// effect. /// - /// Note that this method has no effect on the allocated capacity - /// of the string. + /// Note that this method has no effect on the allocated capacity of the + /// string. /// /// # Examples /// @@ -416,7 +415,8 @@ impl String { /// using one of the safe operations instead, such as [`truncate`], /// [`extend`], or [`clear`]. /// - /// This function can change the return value of [`String::is_valid_encoding`]. + /// This function can change the return value of + /// [`String::is_valid_encoding`]. /// /// # Safety /// @@ -805,16 +805,14 @@ impl String { self.inner.reserve_exact(additional); } - /// Tries to reserve the minimum capacity for exactly `additional` - /// elements to be inserted in the `String`. After calling - /// `try_reserve_exact`, capacity will be greater than or equal to - /// `self.len() + additional` if it returns `Ok(())`. Does nothing if the - /// capacity is already sufficient. + /// Tries to reserve the minimum capacity for exactly `additional` elements + /// to be inserted in the `String`. After calling `try_reserve_exact`, + /// capacity will be greater than or equal to `self.len() + additional` if + /// it returns `Ok(())`. Does nothing if the capacity is already sufficient. /// - /// Note that the allocator may give the collection more space than - /// it requests. Therefore, capacity can not be relied upon to be - /// precisely minimal. Prefer [`try_reserve`] if future insertions are - /// expected. + /// Note that the allocator may give the collection more space than it + /// requests. Therefore, capacity can not be relied upon to be precisely + /// minimal. Prefer [`try_reserve`] if future insertions are expected. /// /// # Errors /// @@ -1050,8 +1048,8 @@ impl String { /// /// # Examples /// - /// For [UTF-8] strings, the given codepoint is converted to a Unicode scalar - /// value before appending: + /// For [UTF-8] strings, the given codepoint is converted to a Unicode + /// scalar value before appending: /// /// ``` /// use spinoso_string::String; @@ -1356,9 +1354,9 @@ impl String { pub fn unicode_casecmp(&self, other: &String, options: CaseFold) -> Option<bool> { let left = self.as_slice(); let right = other.as_slice(); - // If both `String`s are conventionally UTF-8, they must be case - // compared using the given case folding strategy. This requires the - // `String`s be well-formed UTF-8. + // If both `String`s are conventionally UTF-8, they must be case compared + //using the given case folding strategy. This requires the `String`s be + //well-formed UTF-8. if let (Encoding::Utf8, Encoding::Utf8) = (self.encoding(), other.encoding()) { if let (Ok(left), Ok(right)) = (str::from_utf8(left), str::from_utf8(right)) { // Both slices are UTF-8, compare with the given Unicode case @@ -1494,8 +1492,8 @@ impl String { #[inline] #[must_use] pub fn chomp<T: AsRef<[u8]>>(&mut self, separator: Option<T>) -> bool { - // convert to a concrete type and delegate to a single `chomp` impl - // to minimize code duplication when monomorphizing. + // convert to a concrete type and delegate to a single `chomp` impl to + //minimize code duplication when monomorphizing. if let Some(sep) = separator { chomp(self, Some(sep.as_ref())) } else { @@ -1505,7 +1503,8 @@ impl String { /// Modifies this `String` in-place and removes the last character. /// - /// This method returns a [`bool`] that indicates if this string was modified. + /// This method returns a [`bool`] that indicates if this string was + /// modified. /// /// If the string ends with `\r\n`, both characters are removed. When /// applying `chop` to an empty string, the string remains empty. @@ -1642,18 +1641,19 @@ impl String { if let Some(offset) = offset { let buf = buf.get(offset..)?; let index = buf.find(needle)?; - // This addition is guaranteed not to overflow because the result is - // a valid index of the underlying `Vec`. + // This addition is guaranteed not to overflow because the result + //is a valid index of the underlying `Vec`. // - // `self.buf.len() < isize::MAX` because `self.buf` is a `Vec` and - // `Vec` documents `isize::MAX` as its maximum allocation size. + // `self.buf.len() < isize::MAX` because `self.buf` is a `Vec` + //and `Vec` documents `isize::MAX` as its maximum allocation + //size. Some(index + offset) } else { buf.find(needle) } } - // convert to a concrete type and delegate to a single `index` impl - // to minimize code duplication when monomorphizing. + // convert to a concrete type and delegate to a single `index` impl to + //minimize code duplication when monomorphizing. let needle = needle.as_ref(); inner(self.inner.as_slice(), needle, offset) } @@ -1670,8 +1670,8 @@ impl String { buf.rfind(needle) } } - // convert to a concrete type and delegate to a single `rindex` impl - // to minimize code duplication when monomorphizing. + // convert to a concrete type and delegate to a single `rindex` impl to + //minimize code duplication when monomorphizing. let needle = needle.as_ref(); inner(self.inner.as_slice(), needle, offset) } @@ -2034,8 +2034,8 @@ fn chomp(string: &mut String, separator: Option<&[u8]>) -> bool { } Some(separator) if string.inner.ends_with(separator) => { let original_len = string.len(); - // This subtraction is guaranteed not to panic because - // `separator` is a substring of `buf`. + // This subtraction is guaranteed not to panic because `separator` is + //a substring of `buf`. let truncate_to_len = original_len - separator.len(); string.inner.truncate(truncate_to_len); // Separator is non-empty and we are always truncating, so this diff --git i/spinoso-symbol/src/casecmp/unicode.rs w/spinoso-symbol/src/casecmp/unicode.rs index 2f0c404344..ed73820655 100644 --- i/spinoso-symbol/src/casecmp/unicode.rs +++ w/spinoso-symbol/src/casecmp/unicode.rs @@ -49,9 +49,9 @@ where // Encoding mismatch, the bytes are not comparable using Unicode case // folding. // - // > `nil` is returned if the two symbols have incompatible encodings, - // > or if `other_symbol` is not a symbol. - // > <https://ruby-doc.org/core-3.1.2/Symbol.html#method-i-casecmp-3F> + // > `nil` is returned if the two symbols have incompatible encodings, > or + //if `other_symbol` is not a symbol. > + //<https://ruby-doc.org/core-3.1.2/Symbol.html#method-i-casecmp-3F> (Ok(_), Err(_)) | (Err(_), Ok(_)) => return Ok(None), }; Ok(Some(cmp)) diff --git i/spinoso-symbol/src/ident.rs w/spinoso-symbol/src/ident.rs index a88a8eec33..087c3147ea 100644 --- i/spinoso-symbol/src/ident.rs +++ w/spinoso-symbol/src/ident.rs @@ -322,7 +322,8 @@ impl TryFrom<&[u8]> for IdentifierType { } } -/// Error type returned from the [`FromStr`] implementation on [`IdentifierType`]. +/// Error type returned from the [`FromStr`] implementation on +/// [`IdentifierType`]. /// /// # Examples /// @@ -504,8 +505,8 @@ fn is_ident_char(ch: char) -> bool { /// Scan the [`char`]s in the input until either invalid UTF-8 or an invalid /// ident is found. See [`is_ident_char`]. /// -/// This method returns `Some(index)` of the start of the first invalid ident -/// or `None` if the whole input is a valid ident. +/// This method returns `Some(index)` of the start of the first invalid ident or +/// `None` if the whole input is a valid ident. /// /// Empty slices are not valid idents. #[inline] diff --git i/spinoso-symbol/src/inspect.rs w/spinoso-symbol/src/inspect.rs index ef056ac623..72a7b3fa6f 100644 --- i/spinoso-symbol/src/inspect.rs +++ w/spinoso-symbol/src/inspect.rs @@ -91,9 +91,9 @@ impl<'a> Inspect<'a> { /// representation of the interned byte slice associated with the symbol in /// the underlying interner. /// - /// This formatter writes content like `:spinoso` and `:"invalid-\xFF-utf8"`. - /// To see example output of the underlying iterator, see the `Inspect` - /// documentation. + /// This formatter writes content like `:spinoso` and + /// `:"invalid-\xFF-utf8"`. To see example output of the underlying + /// iterator, see the `Inspect` documentation. /// /// To write binary output, use [`write_into`], which requires the **std** /// feature to be activated. @@ -135,9 +135,9 @@ impl<'a> Inspect<'a> { /// representation of the interned byte slice associated with the symbol in /// the underlying interner. /// - /// This formatter writes content like `:spinoso` and `:"invalid-\xFF-utf8"`. - /// To see example output of the underlying iterator, see the `Inspect` - /// documentation. + /// This formatter writes content like `:spinoso` and + /// `:"invalid-\xFF-utf8"`. To see example output of the underlying + /// iterator, see the `Inspect` documentation. /// /// To write to a [formatter], use [`format_into`]. /// diff --git i/spinoso-symbol/src/lib.rs w/spinoso-symbol/src/lib.rs index 969f76caa4..18e3731284 100644 --- i/spinoso-symbol/src/lib.rs +++ w/spinoso-symbol/src/lib.rs @@ -157,8 +157,8 @@ impl std::error::Error for SymbolOverflowError {} /// Identifier bound to an interned byte string. /// -/// A `Symbol` allows retrieving a reference to the original interned -/// byte string. Equivalent `Symbol`s will resolve to an identical byte string. +/// A `Symbol` allows retrieving a reference to the original interned byte +/// string. Equivalent `Symbol`s will resolve to an identical byte string. /// /// `Symbol`s are based on a `u32` index. They are cheap to compare and cheap to /// copy. @@ -176,11 +176,11 @@ impl Borrow<u32> for Symbol { impl Symbol { /// Construct a new `Symbol` from the given `u32`. /// - /// `Symbol`s constructed manually may fail to resolve to an underlying - /// byte string. + /// `Symbol`s constructed manually may fail to resolve to an underlying byte + /// string. /// - /// `Symbol`s are not constrained to the interner which created them. - /// No runtime checks ensure that the underlying interner is called with a + /// `Symbol`s are not constrained to the interner which created them. No + /// runtime checks ensure that the underlying interner is called with a /// `Symbol` that the interner itself issued. /// /// # Examples diff --git i/spinoso-time/src/time/tzrs/convert.rs w/spinoso-time/src/time/tzrs/convert.rs index 6874b2cc37..3ef023eed3 100644 --- i/spinoso-time/src/time/tzrs/convert.rs +++ w/spinoso-time/src/time/tzrs/convert.rs @@ -37,8 +37,8 @@ impl fmt::Display for Time { impl Time { /// Formats _time_ according to the directives in the given format string. /// - /// Can be used to implement [`Time#strftime`]. The resulting string should be - /// treated as an ASCII-encoded string. + /// Can be used to implement [`Time#strftime`]. The resulting string should + /// be treated as an ASCII-encoded string. /// /// # Examples /// diff --git i/spinoso-time/src/time/tzrs/error.rs w/spinoso-time/src/time/tzrs/error.rs index 7473f43ccd..3f36c7928d 100644 --- i/spinoso-time/src/time/tzrs/error.rs +++ w/spinoso-time/src/time/tzrs/error.rs @@ -22,8 +22,8 @@ pub enum TimeError { /// Note: [`tz::error::DateTimeError`] is only thrown from `tz-rs` when a /// provided component value is out of range. /// - /// Note: This is different from how MRI ruby is implemented. e.g. Second - /// 60 is valid in MRI, and will just add an additional second instead of + /// Note: This is different from how MRI ruby is implemented. e.g. Second 60 + /// is valid in MRI, and will just add an additional second instead of /// erroring. ComponentOutOfRangeError(DateTimeError), @@ -103,8 +103,8 @@ impl From<TzError> for TimeError { // Allowing matching arms due to documentation #[allow(clippy::match_same_arms)] match error { - // These two are generally recoverable within the usable of `spinoso_time` - // TzError::DateTimeError(error) => Self::from(error), + // These two are generally recoverable within the usable of + //`spinoso_time` TzError::DateTimeError(error) => Self::from(error), TzError::ProjectDateTimeError(error) => Self::from(error), // The rest will bleed through, but are included here for reference diff --git i/spinoso-time/src/time/tzrs/math.rs w/spinoso-time/src/time/tzrs/math.rs index bfc2433c44..76de1f99e4 100644 --- i/spinoso-time/src/time/tzrs/math.rs +++ w/spinoso-time/src/time/tzrs/math.rs @@ -74,7 +74,8 @@ impl Time { new_nanos -= NANOS_IN_SECOND; } - // Rounding should never cause an error generating a new time since it's always a truncation + // Rounding should never cause an error generating a new time + //since it's always a truncation let dt = DateTime::from_timespec_and_local(unix_time, new_nanos, local_time_type) .expect("Could not round the datetime"); Self { @@ -171,8 +172,8 @@ impl Time { // Subtraction impl Time { - /// Subtraction — Subtracts the given duration from _time_ and returns - /// that value as a new `Time` object. + /// Subtraction — Subtracts the given duration from _time_ and returns that + /// value as a new `Time` object. /// /// # Errors /// diff --git i/spinoso-time/src/time/tzrs/mod.rs w/spinoso-time/src/time/tzrs/mod.rs index 44582c9f6c..82f03c6cdd 100644 --- i/spinoso-time/src/time/tzrs/mod.rs +++ w/spinoso-time/src/time/tzrs/mod.rs @@ -141,7 +141,8 @@ impl Time { /// /// # Errors /// - /// Can produce a [`TimeError`], generally when provided values are out of range. + /// Can produce a [`TimeError`], generally when provided values are out of + /// range. /// /// [`Time#new`]: https://ruby-doc.org/core-3.1.2/Time.html#method-c-new /// [`Timezone`]: https://ruby-doc.org/core-3.1.2/Time.html#class-Time-label-Timezone+argument @@ -175,7 +176,8 @@ impl Time { // upstream has provided a test case which means we have a test that // simulates this failure condition and requires us to handle it. // - // See: https://github.com/x-hgg-x/tz-rs/issues/34#issuecomment-1206140198 + // See: + //https://github.com/x-hgg-x/tz-rs/issues/34#issuecomment-1206140198 let dt = found_date_times.latest().ok_or(TimeError::Unknown)?; Ok(Self { inner: dt, offset }) } @@ -197,7 +199,8 @@ impl Time { /// /// # Errors /// - /// Can produce a [`TimeError`], however these should never been seen in regular usage. + /// Can produce a [`TimeError`], however these should never been seen in + /// regular usage. /// /// [`Time#now`]: https://ruby-doc.org/core-3.1.2/Time.html#method-c-now #[inline] @@ -228,7 +231,8 @@ impl Time { /// /// # Errors /// - /// Can produce a [`TimeError`], however these should not be seen during regular usage. + /// Can produce a [`TimeError`], however these should not be seen during + /// regular usage. /// /// [`Time#at`]: https://ruby-doc.org/core-3.1.2/Time.html#method-c-at #[inline] @@ -264,7 +268,8 @@ impl TryFrom<ToA> for Time { /// /// # Errors /// - /// Can produce a [`TimeError`], generally when provided values are out of range. + /// Can produce a [`TimeError`], generally when provided values are out of + /// range. #[inline] fn try_from(to_a: ToA) -> Result<Self> { let offset = Offset::try_from(to_a.zone).unwrap_or_else(|_| Offset::utc()); diff --git i/spinoso-time/src/time/tzrs/offset.rs w/spinoso-time/src/time/tzrs/offset.rs index 13b77bfc30..5290f2497e 100644 --- i/spinoso-time/src/time/tzrs/offset.rs +++ w/spinoso-time/src/time/tzrs/offset.rs @@ -61,8 +61,8 @@ fn local_time_zone() -> TimeZoneRef<'static> { GMT } -/// Generates a [+/-]HHMM timezone format from a given number of seconds -/// Note: the actual seconds element is effectively ignored here +/// Generates a [+/-]HHMM timezone format from a given number of seconds Note: +/// the actual seconds element is effectively ignored here #[inline] #[must_use] fn offset_hhmm_from_seconds(seconds: i32) -> String { @@ -311,8 +311,8 @@ impl TryFrom<&str> for Offset { // includes all sorts of numerals, including Devanagari and // Kannada, which don't parse into an `i32` using `FromStr`. // - // `[[:digit:]]` is documented to be an ASCII character class - // for only digits 0-9. + // `[[:digit:]]` is documented to be an ASCII character class for + //only digits 0-9. // // See: // - https://docs.rs/regex/latest/regex/#perl-character-classes-unicode-friendly diff --git i/spinoso-time/src/time/tzrs/parts.rs w/spinoso-time/src/time/tzrs/parts.rs index e02a8138a7..33a0d515e7 100644 --- i/spinoso-time/src/time/tzrs/parts.rs +++ w/spinoso-time/src/time/tzrs/parts.rs @@ -60,8 +60,9 @@ impl Time { /// Returns the second of the minute (0..60) for _time_. /// - /// Seconds range from zero to 60 to allow the system to inject [leap - /// seconds]. + /// Seconds range from zero to 60 to allow the system to inject + /// [leap + seconds]. /// /// Can be used to implement [`Time#sec`]. /// @@ -316,8 +317,8 @@ impl Time { self.inner.local_time_type().is_dst() } - /// Returns an integer representing the day of the week, `0..=6`, with Sunday - /// == 0. + /// Returns an integer representing the day of the week, `0..=6`, with + /// Sunday == 0. /// /// Can be used to implement [`Time#wday`]. /// diff --git i/src/bin/airb.rs w/src/bin/airb.rs index 47d657c252..f142194010 100644 --- i/src/bin/airb.rs +++ w/src/bin/airb.rs @@ -11,8 +11,8 @@ #![warn(unused_qualifications)] #![warn(variant_size_differences)] -//! `airb` is the Artichoke implementation of `irb` and is an interactive Ruby shell -//! and [REPL][repl]. +//! `airb` is the Artichoke implementation of `irb` and is an interactive Ruby +//! shell and [REPL][repl]. //! //! `airb` is a readline enabled shell, although it does not persist history. //! diff --git i/src/bin/artichoke.rs w/src/bin/artichoke.rs index 5aab468a74..2be5d61deb 100644 --- i/src/bin/artichoke.rs +++ w/src/bin/artichoke.rs @@ -173,7 +173,7 @@ fn command() -> Command<'static> { // // `ripgrep` is licensed with the MIT License Copyright (c) 2015 Andrew Gallant. // -// https://github.com/BurntSushi/ripgrep/blob/9f924ee187d4c62aa6ebe4903d0cfc6507a5adb5/LICENSE-MIT +// //https://github.com/BurntSushi/ripgrep/blob/9f924ee187d4c62aa6ebe4903d0cfc6507a5adb5/LICENSE-MIT // // See https://github.com/artichoke/artichoke/issues/1301. @@ -195,12 +195,12 @@ where if err.use_stderr() { return Err(err.into()); } - // Explicitly ignore any error returned by write!. The most likely error - // at this point is a broken pipe error, in which case, we want to ignore - // it and exit quietly. + // Explicitly ignore any error returned by write!. The most likely error at + //this point is a broken pipe error, in which case, we want to ignore it and + //exit quietly. // - // (This is the point of this helper function. clap's functionality for - // doing this will panic on a broken pipe error.) + // (This is the point of this helper function. clap's functionality for doing + //this will panic on a broken pipe error.) let _ignored = write!(io::stdout(), "{}", err); process::exit(0); } diff --git i/src/lib.rs w/src/lib.rs index 1c574a3a75..27657f0b4a 100644 --- i/src/lib.rs +++ w/src/lib.rs @@ -13,9 +13,11 @@ //! Artichoke Ruby //! -//! This crate is a Rust and Ruby implementation of the [Ruby programming -//! language][rubylang]. Artichoke is not production-ready, but intends to be a -//! [MRI-compliant][rubyspec] implementation of [recent MRI Ruby][mri-target]. +//! This crate is a Rust and Ruby implementation of the +//! [Ruby programming + language][rubylang]. Artichoke is not production-ready, +//! but intends to be a [MRI-compliant][rubyspec] implementation of +//! [recent MRI Ruby][mri-target]. //! //! [mri-target]: https://github.com/artichoke/artichoke/blob/trunk/RUBYSPEC.md#mri-target //! diff --git i/src/parser.rs w/src/parser.rs index c25cc27df2..f9eb79b410 100644 --- i/src/parser.rs +++ w/src/parser.rs @@ -193,8 +193,8 @@ impl<'a> Parser<'a> { EXPR_ENDFN => false, // jump keyword like break, return, ... EXPR_MID => false, - // this token is unreachable and is used to do integer math on the - // values of `mrb_lex_state_enum`. + // this token is unreachable and is used to do integer math on + //the values of `mrb_lex_state_enum`. EXPR_MAX_STATE => false, }; if code_has_unterminated_expression { @@ -216,7 +216,7 @@ impl<'a> Drop for Parser<'a> { sys::mrb_parser_free(parser.as_mut()); }); } - // There is no need to free `context` since it is owned by the - // Artichoke state. + // There is no need to free `context` since it is owned by the Artichoke + //state. } } diff --git i/src/ruby.rs w/src/ruby.rs index a5710565d5..41a9d67279 100644 --- i/src/ruby.rs +++ w/src/ruby.rs @@ -219,9 +219,9 @@ fn load_error<P: AsRef<OsStr>>(file: P, message: &str) -> Result<String, Error> // This function exists to provide a workaround for Artichoke not being able to // read from the local file system. // -// By passing the `--with-fixture PATH` argument, this function loads the file -// at `PATH` into memory and stores it in the interpreter bound to the -// `$fixture` global. +// By passing the `--with-fixture PATH` argument, this function loads the file at + //`PATH` into memory and stores it in the interpreter bound to the `$fixture` +//global. #[inline] fn setup_fixture_hack<P: AsRef<Path>>(interp: &mut Artichoke, fixture: P) -> Result<(), Error> { let data = if let Ok(data) = fs::read(fixture.as_ref()) { as an aside it looks like grapheme cluster emojis are reflowed differently than prettier does. Maybe cargo-spellcheck is counting these as multiple characters? this reflow is also broken: diff --git i/spinoso-time/src/time/tzrs/parts.rs w/spinoso-time/src/time/tzrs/parts.rs index e02a8138a7..33a0d515e7 100644 --- i/spinoso-time/src/time/tzrs/parts.rs +++ w/spinoso-time/src/time/tzrs/parts.rs @@ -60,8 +60,9 @@ impl Time { /// Returns the second of the minute (0..60) for _time_. /// - /// Seconds range from zero to 60 to allow the system to inject [leap - /// seconds]. + /// Seconds range from zero to 60 to allow the system to inject + /// [leap + seconds]. /// /// Can be used to implement [`Time#sec`]. /// @@ -316,8 +317,8 @@ impl Time { self.inner.local_time_type().is_dst() } - /// Returns an integer representing the day of the week, `0..=6`, with Sunday - /// == 0. + /// Returns an integer representing the day of the week, `0..=6`, with + /// Sunday == 0. /// /// Can be used to implement [`Time#wday`]. /// block quotes in // comments appear to be reflowed incorrectly as well Graphemes are not handled, I didn't figure out a way to reliably count their length. See #143
gharchive/issue
2022-09-14T00:11:13
2025-04-01T06:38:27.360714
{ "authors": [ "drahnr", "lopopolo" ], "repo": "drahnr/cargo-spellcheck", "url": "https://github.com/drahnr/cargo-spellcheck/issues/277", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
238768691
问下大神,我想只更新某个section下的列表数据,其他section下数据不变怎么弄啊 问下大神,我想只更新某个section下的列表数据,其他section下数据不变怎么弄啊 参见:#150
gharchive/issue
2017-06-27T08:07:26
2025-04-01T06:38:27.383033
{ "authors": [ "Greathfs", "drakeet" ], "repo": "drakeet/MultiType", "url": "https://github.com/drakeet/MultiType/issues/151", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
2086015889
Windows binary Dear @drand, Windows 7 x64 user here. Could you be so kind to generate .exe for the rest of us who are ordinary users without compilers? Sounds like a good idea. I'll release a v1.1.0 soon and will include binaries for Windows, Linux and Macos. You can now find a Windows binary in our latest release! https://github.com/drand/tlock/releases I'll have to tweak the releaser a bit to avoid packaging binaries in .tar.gz archives, but these can be extracted using 7Zip and probably many other Archive tools on Windows. @AnomalRoil, I am a Windows 7 x64 user (mentioned in the first message), but it seems you built an executable using the latest version of a compiler, from which the ability to build for that OS was treacherously removed. This is what I see on the screen: $ tle.exe Exception 0xc0000005 0x8 0x0 0x0 PC=0x0 runtime.asmstdcall() $GOROOT/src/runtime/sys_windows_amd64.s:65 +0x75 fp=0x22fca0 sp=0x22fc80 pc=0x46da75 rax 0x0 rbx 0xfe2c80 rcx 0x1036dc0 rdi 0x7fffffde000 rsi 0x22fea0 rbp 0x22fde0 rsp 0x22fc78 r8 0x0 r9 0x22fee0 r10 0x1008818 r11 0x21 r12 0x22fec0 r13 0x1 r14 0xfe2620 r15 0x0 rip 0x0 rflags 0x10293 cs 0x33 fs 0x53 gs 0x2b I guess this might be caused by issues in our goreleaser script. Let me re-open the issue. Could you try with these 2 binaries and let me know if either work for you? Just running ./releasetle.exe -v and ./localtle.exe -v should be enough to check if they work: tle-win.zip @AnomalRoil, none of them work, alas. It seems Windows 7 might not be supported anymore since Go 1.21 (https://github.com/golang/go/issues/57003), that's an unfortunate decision given the current market penetration of Windows 7 and seems a bit quick given how Windows XP is currently still supported by MS for paid customers, AFAIK. Could you try with this binary compiled with Go1.20 which should still support Windows 7 ? tle-1.20.zip @AnomalRoil, this binary works as expected, thank you. That's why I used the word 'treacherously' in relation to what @golang did. Those folks from California lost touch with reality and stopped taking into account which OS people like me outside the golden billion use and will use for a foreseeable future. It's especially ridiculous when they cite OS distribution statistics taken from a site like StatCounter, given that it has been distorted for many years by the fact that tracker scripts are blocked at the browser level, even without extensions like uBlock. I do not remember such betrayal, for example, by the C++ devs. And now we are forced to negotiate with app developers like you to compile, let’s say, legacy versions. Recently, @shenwei356 and I were working on improving bRename, a file renaming app, and faced the same situation. Indeed, the word 'segregation' fits here, not 'progress'. This should now be fixed in https://github.com/drand/tlock/releases/tag/v1.1.1
gharchive/issue
2024-01-17T11:44:37
2025-04-01T06:38:27.394521
{ "authors": [ "AnomalRoil", "sergeevabc" ], "repo": "drand/tlock", "url": "https://github.com/drand/tlock/issues/78", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
542690298
Ruby 2.7.0 support Seeing this warning... lib/ruby/gems/2.7.0/gems/draper-3.1.0/lib/draper/delegation.rb:10: warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call @olleolleolle @codebycliff 🤝 Fixed by: https://github.com/drapergem/draper/pull/870
gharchive/issue
2019-12-26T23:05:46
2025-04-01T06:38:27.396572
{ "authors": [ "codebycliff", "kapso", "pedrofurtado" ], "repo": "drapergem/draper", "url": "https://github.com/drapergem/draper/issues/869", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2081440364
公式结果支持数字格式 初始清单 [X] 这真的是个问题吗? [X] 我已经在 Github Issues 中搜索过了,但没有找到类似的问题。 受影响的包和版本 dev 复现步骤 公式值有数字格式 预期行为 excel,google,保留第一个值的数字格式 实际行为 未保留数字格式 运行环境 No response 操作系统 No response 构建工具 No response Maybe already?
gharchive/issue
2024-01-15T07:50:18
2025-04-01T06:38:27.415933
{ "authors": [ "Gggpound", "zhaolixin7" ], "repo": "dream-num/univer", "url": "https://github.com/dream-num/univer/issues/1142", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2328008382
[Bug] After inserting a column, the data validation bound cells did not follow the translation 在您提交此问题之前,您是否检查了以下内容? [X] 这真的是个问题吗? [X] 我已经在 Github Issues 中搜索过了,但没有找到类似的问题。 受影响的包和版本 @univerjs/sheets-data-validation 复现链接 After inserting a column, the data validation bound cells did not follow move 预期行为 Sync move 实际行为 value and sytle follow moved,but the data validation stay in place 运行环境 Chrome 系统信息 No response Please upgrade @univerjs/sheets-data-validation to version 0.1.15
gharchive/issue
2024-05-31T14:42:10
2025-04-01T06:38:27.420122
{ "authors": [ "r1k2r3k4", "weird94" ], "repo": "dream-num/univer", "url": "https://github.com/dream-num/univer/issues/2376", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
329971149
[REQUEST] - Support for FontAwesome 5 font-awesome-sass is now at version 5.0.13 but your plugin only support version 4. Could you please update it? I'm not currently using it myself so I haven't had any push to upgrade it, but happy to have a PR. Well turns out there was already a PR so merged that and published an updated gem. Yeah, I've already saw that PR. That was the mainly reason for asking to update. :) By the way, thanks for the update. :D
gharchive/issue
2018-06-06T17:55:04
2025-04-01T06:38:27.451564
{ "authors": [ "drewish", "gvgramazio" ], "repo": "drewish/jekyll-font-awesome-sass", "url": "https://github.com/drewish/jekyll-font-awesome-sass/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
728350236
Powermeter Hey, i found you project over youtube. The last thing I did was to rebuild the powermeter after this project https://github.com/rrrlasse/powerino/wiki. But I would like to get my data directly on my Garmin watch. Is it possible to send the data via ANT with the already existing project? Best regards Hi duffel90, Certain Garmin watches support ANT power meter data, yours might be one of them. So long as it is, the existing project should work fine. Drew
gharchive/issue
2020-10-23T16:21:14
2025-04-01T06:38:27.453350
{ "authors": [ "drewvigne", "duffel90" ], "repo": "drewvigne/arduino_nano_33_ant", "url": "https://github.com/drewvigne/arduino_nano_33_ant/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
145333858
ENOENT spawn error with ionic start From @Bandito11 on April 2, 2016 2:8 Note: for support questions, please use one of these channels: https://forum.ionicframework.com/ http://ionicworldwide.herokuapp.com/ Short description of the problem: I upgraded to beta.23 and started to get an error when created a new project using 'ionic start foo --v2 --ts'. The error in red says "Unable to spawn commandError: spawn npm ENOENT (CLI v2.0.0-beta.23)." I don't even know what it did to the project. What behavior are you expecting? Steps to reproduce: ionic start foo --v2 --ts it builds and creates the proyect insert any relevant code between the above and below backticks Other information: (e.g. stacktraces, related issues, suggestions how to fix, stackoverflow links, forum links, etc) Which Ionic Version? 1.x or 2.x Run ionic info from terminal/cmd prompt: (paste output below) Copied from original issue: driftyco/ionic#6018 Hello! Thanks for opening an issue with us! Since this is an issue related to the ionic cli and not the framework i will be moving this issue to that repo. Feel free to continue the conversation over there! Thanks! I think I solved it. I had python2 and python3 installed so after I deleted that everything was solved. I think this should be added to the documents even though I think this pertains node and not the ionic cli @Bandito11 what version of nodejs are you using? You shouldn't have to uninstall python to be able to use the ionic cli. I have the latest one. I uninstalled and installed everything yesterday because I've been using ASP.Net and Python3 (already had Python 2 installed) for college since January and wanted to have the latest npm and node. When I was 'ionic start'-ing the project it was mentioning that python 3 was in the path (don't remember what else it said) so I deleted python 3 and everything was fixed. I am using the latest ionic@beta too. It isbeta23 Hmm, ok, thanks for the info! I will have to look more into this. This error is happening on my side and I have not installed Python. The npm install task originally used exec to run npm install, which was sometimes caused issues so it was switched to spawn, which doesn't work/works differently on Windows. Switched to using https://github.com/IndigoUnited/node-cross-spawn in the latest beta, so this should be resolved, but let me know if you're still having issues, thanks!
gharchive/issue
2016-04-02T04:22:58
2025-04-01T06:38:27.464215
{ "authors": [ "Bandito11", "dylanvdmerwe", "jgw96", "tlancina" ], "repo": "driftyco/ionic-cli", "url": "https://github.com/driftyco/ionic-cli/issues/892", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
41539782
[Feature request] - Full features demo in the homepage Hi, I'm reviewing http://www.idangero.us/framework7/ and I found very helpful the full features demo in the homepage. Could you review if is possible make some similar in ionic's site? Thanks and regards For demos, we have opted to keep them under our codepen account. http://codepen.io/ionic/public-list/
gharchive/issue
2014-08-29T21:09:31
2025-04-01T06:38:27.469746
{ "authors": [ "gastonbesada", "mhartington" ], "repo": "driftyco/ionic-site", "url": "https://github.com/driftyco/ionic-site/issues/210", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
224459106
$label-ios-text-color missing Ionic version: (check one with "x") [ ] 1.x [ ] 2.x [X ] 3.x I'm submitting a ... (check one with "x") [X ] bug report [ ] feature request [ ] support request => Please do not submit support requests here, use one of these channels: https://forum.ionicframework.com/ or http://ionicworldwide.herokuapp.com/ Current behavior: There is no $label-ios-text-color sass variable. There are equivalents for md and wp. Expected behavior: A $label-ios-text-color variable is needed to be able to override label colors on ios like on other platforms. Ionic info: (run ionic info from a terminal/cmd prompt and paste output below): Cordova CLI: 6.5.0 Ionic Framework Version: 3.0.1 Ionic CLI Version: 2.2.3 Ionic App Lib Version: 2.2.1 Ionic App Scripts Version: 1.3.0 ios-deploy version: Not installed ios-sim version: Not installed OS: Windows 10 Node Version: v6.9.1 Xcode version: Not installed Hello, thanks for opening an issue with us, we will look into this. I was making a repo for a different issue and decided to just add this one as well. Run this project with ionic serve --lab and compare Android and iOS, go to the Sign Up page and you'll see the color difference. The variables are set at the bottom of variables.scss. Here's the commit with the changes: https://github.com/Iyashu5040/ionic-conference-app/commit/810a2574d07f6757db15a24ca546353f934c2c58 The variable doesn't exist because we don't change the styling on the iOS label. We could add one that just styles initial, but if you wanted to match the way material design works it would be the following styling: .item-input .label-ios, .item-select .label-ios, .item-datetime .label-ios { color: color($colors, primary); } Thanks for the clarification and the example, I appreciate it. Regarding adding the variable: I'd like to argue for adding it, for the sake of making customisation easier and more consistent across platforms. @Iyashu5040 I've added it back to master. Could you try out the following nightly and let me know if you have any issues? npm install --save --save-exact ionic-angular@3.1.1-201704282222 Thanks @brandyscarney! I've tested with the nightly version and the style applies correctly.
gharchive/issue
2017-04-26T12:59:03
2025-04-01T06:38:27.476783
{ "authors": [ "Iyashu5040", "brandyscarney", "jgw96" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/issues/11373", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
230388166
viewWillEnter is not triggered on App instance when switching between tabs Ionic version: (check one with "x") [ ] 1.x (For Ionic 1.x issues, please use https://github.com/driftyco/ionic-v1) [x] 2.x [x] 3.x I'm submitting a ... (check one with "x") [x] bug report [ ] feature request [ ] support request => Please do not submit support requests here, use one of these channels: https://forum.ionicframework.com/ or http://ionicworldwide.herokuapp.com/ Current behavior: When switching between tabs in ion-tabs ionic emits value on viewWillEnter observable on App instance only once (when component is created and doesn't emit for subsequent navigations) Expected behavior: When switching between tabs in ion-tabs ionic emits viewWillEnter observable on App instance Steps to reproduce: Create an ionic app with tabs Inject App and subscribe to viewWillEnter Navigate between tabs Other information: issue caused by this line https://github.com/driftyco/ionic/blob/master/src/components/tabs/tabs.ts#L383 I think to fix the issue the line should be changed to: selectedPage && selectedTab._willEnter(selectedPage); The same should be done for other leave/enter events. The fix is quite important for properly and easily tracking analytics in the whole application because it allows to track all navigation events through subscription to Apps viewWillEnter observable. I'm ready to create PR if you think the suggested approach is correct Ionic info: (run ionic info from a terminal/cmd prompt and paste output below): Cordova CLI: 7.0.1 Ionic Framework Version: 2.2.0 Ionic CLI Version: 2.1.18 Ionic App Lib Version: 2.1.9 ios-deploy version: 1.9.0 ios-sim version: 5.0.8 OS: OS X El Capitan Node Version: v6.9.1 Xcode version: Xcode 8.2.1 Build version 8C1002 Hello, thanks for using Ionic! I am going to close this issue as a duplicate of https://github.com/driftyco/ionic/issues/11694 and we can continue the discussion on that issue 😃 .
gharchive/issue
2017-05-22T13:00:56
2025-04-01T06:38:27.485055
{ "authors": [ "jgw96", "stalniy" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/issues/11752", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
79457641
Layout glitches on Android when keyboard is opening or closing. There are short visual distortions on keyboard transitions. However Ionic View app does not have such issue, thus it is seems to be solvable. My device is Nexus 4, Android 5.1. However I noticed similar behaviour on other devices. Link to the video: http://d.pr/v/16ErH Test app : http://d.pr/f/V56v Does anyone has a clue what is the reason for that? Greetings @timuric! My sensors indicate that you need to update your issue through our custom issue form. We are now requiring all issues to be submitted this way, to ensure that we have all of the information necessary to fix them as quickly as possible. Click Here To Update Your Issue I will have no choice but to close this issue if it is not resubmitted through the form. Please fill out the rest of the form, so that I may use my friendly robot powers to assist you. Thank you! Greetings @timuric! I've closed this issue because my sensors indicated it was old and inactive, and may have already been fixed in recent versions of Ionic. However, if you are still experiencing this issue, please feel free to reopen this issue by creating a new one, and include any examples and other necessary information, so that we can look into it further. Thank you for allowing me to assist you.
gharchive/issue
2015-05-22T13:34:54
2025-04-01T06:38:27.490148
{ "authors": [ "Ionitron", "timuric" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/issues/3814", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
108563571
bug: Meteor driftyco:ionic Blank Screen on ios simulator / ios devices Type: bug Platform: ios 7 webview I am building a Meteor hybrid app that using AngularJS and ionic, so I decided to use driftyco:ionic, since meteoric:ionic doesn't support Angular. Everything works fine on web browser, however, it is blank when it runs on ios devices or simulator. I get rid of driftyco:ionic, content is able to show on ios devices. Thus, I believe there is something I did wrong, so I got a blank page. Could you check my repo https://github.com/yumikohey/bringMe ? I deployed to 52.89.149.88(most updated) and bring-me.meteor.com(older version) Thanks for your help. Greetings @yumikohey! My sensors indicate that you need to update your issue through our custom issue form. We are now requiring all issues to be submitted this way, to ensure that we have all of the information necessary to fix them as quickly as possible. Click Here To Update Your Issue I will have no choice but to close this issue if it is not resubmitted through the form. Please fill out the rest of the form, so that I may use my friendly robot powers to assist you. Thank you! Hi @yumikohey. Seems like a better question for @Urigo @yumikohey can you please open an issue on the Angular Meteor repo with your source code? thanks @yumikohey Look at the repo, check pull request that i sent to your project. This is a meteor environment issue, i guess.
gharchive/issue
2015-09-27T22:04:11
2025-04-01T06:38:27.497077
{ "authors": [ "Ionitron", "Urigo", "kaiquewdev", "mlynch", "yumikohey" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/issues/4435", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
172418921
Need a new option for inputs at the alertController component Short description of the problem: We can't add some custom attributes for inputs in alertController. What behavior are you expecting? I think it is necessary that we could add custom attributes like maxlength, id, style and etc to inputs in alertController. I suggest we have a new option in inputs named "attr" that can set custom attributes for inputs. for example like this: let alert = this.alertCtrl.create({ title: 'Login', inputs: [ { name: 'test', placeholder: 'Test name', attr: { maxlength: 6, id: "test", style:"color:red; font-size: 14px;" } } ], buttons: [ { text: 'Cancel', role: 'cancel', handler: data => { console.log('Cancel clicked'); } } ] }); alert.present(); } Which Ionic Version? 1.x or 2.x 2.x Plunker that shows an example of your issue For Ionic 1 issues - http://plnkr.co/edit/Xo1QyAUx35ny1Xf9ODHx?p=preview For Ionic 2 issues - http://plnkr.co/edit/me3Uk0GKWVRhZWU0usad?p=preview Run ionic info from terminal/cmd prompt: (paste output below) Your system information: Cordova CLI: 6.2.0 Gulp version: CLI version 3.8.11 Gulp local: Ionic CLI Version: 2.0.0-beta.31 Ionic App Lib Version: 2.0.0-beta.17 OS: Windows 8 Node Version: v4.4.0 Hello! We are moving our feature requests to a new feature request doc. I have moved this feature request to the doc and because of this I will be closing this issue for now. Thanks for using Ionic! Hello all! While this is an awesome feature request it is not something that we plan on doing anytime soon. Because of this I am going to move this to our internal feature tracking repo for now as it is just "collecting dust" here. Once we decide to implement this I will move it back. Thanks everyone for using Ionic! This issue was moved to driftyco/ionic-feature-requests#54 alertoptions = { title: 'sometitle', inputs: [{name: 'someName', id: 'someID'}] } alert.present().then(v => { // will fire this when modal is completely loaded with animations let id = document.getElementById('someID'); if (id) { // Do your thing } }); Although this is just a workaround, I think Ionic team should include a feature to add custom attributes to handle form validation and other native HTML5 features for inputs.
gharchive/issue
2016-08-22T10:30:24
2025-04-01T06:38:27.503906
{ "authors": [ "jgw96", "navid045", "nkaredia" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/issues/7819", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
156769706
Update docs Sqlstorage Add documentation to clear function Thank you!
gharchive/pull-request
2016-05-25T14:46:34
2025-04-01T06:38:27.504915
{ "authors": [ "brandyscarney", "remithomas" ], "repo": "driftyco/ionic", "url": "https://github.com/driftyco/ionic/pull/6652", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
101775395
IOS pushnotification sound In NgCordova push notification sample code $rootScope.$on('$cordovaPush:notificationReceived', function(event, notification) { if (notification.alert) { navigator.notification.alert(notification.alert); } if (notification.sound) { var snd = new Media(event.sound); snd.play(); } A sound file is being played explicitly, is this mandatory or if i skip it will it play the default sound? And does the sound plays (default or custom) plays in background? I tested with all possibe scenarios but but currently with out the above sound code, sound is not playing Is the above code mandatory Greetings @Alphatiger! I've closed this issue because my sensors indicated it was old and inactive, and may have already been fixed in recent versions of Ionic. However, if you are still experiencing this issue, please feel free to reopen this issue by creating a new one, and include any examples and other necessary information, so that we can look into it further. Thank you for allowing me to assist you.
gharchive/issue
2015-08-18T23:55:26
2025-04-01T06:38:27.508941
{ "authors": [ "Alphatiger", "Ionitron" ], "repo": "driftyco/ng-cordova", "url": "https://github.com/driftyco/ng-cordova/issues/944", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2268197946
[FEATURE]: cloudflare d1 session support Describe what you want https://blog.cloudflare.com/building-d1-a-global-database This should be usable in graphql too. Have any plan for this? This will greatly enhance the user experience.
gharchive/issue
2024-04-29T06:05:51
2025-04-01T06:38:27.512196
{ "authors": [ "MJRT", "janat08" ], "repo": "drizzle-team/drizzle-orm", "url": "https://github.com/drizzle-team/drizzle-orm/issues/2226", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }