id | text | source | created | added | metadata
---|---|---|---|---|---|
445678416
|
DistributionsController#create should be cleaned up
There are currently 3 different branches in that action that result in render :new.
My gut says there is probably some redundancy here that could be reduced.
Working on this.
|
gharchive/issue
| 2019-05-18T04:44:19 |
2025-04-01T06:40:16.907250
|
{
"authors": [
"armahillo",
"jeduardo824"
],
"repo": "rubyforgood/diaper",
"url": "https://github.com/rubyforgood/diaper/issues/1003",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
263718563
|
Developer login
Hello, I've enabled development login as needed in issue #73. Now when you sign in, you should see the new screen (screenshot omitted here).
You can choose which authentication provider to use in development via the environment variable FORCE_AMAZON_LOGIN (true/false).
CSRF protection is disabled only in development mode and only for the developer auth provider.
Thanks for the PR! :tada: I'll take a look at it tomorrow.
Hey @leesharma, did it! :D
Awesome, thanks! 👍
|
gharchive/pull-request
| 2017-10-08T13:39:26 |
2025-04-01T06:40:16.909329
|
{
"authors": [
"gabteles",
"leesharma"
],
"repo": "rubyforgood/playtime",
"url": "https://github.com/rubyforgood/playtime/pull/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
176919006
|
Set up the REST server project
Build the API server using Django REST.
Build it using express.js + an ORM instead of Django REST.
|
gharchive/issue
| 2016-09-14T14:15:09 |
2025-04-01T06:40:16.953857
|
{
"authors": [
"ruci06"
],
"repo": "ruci06/dokhudokhu",
"url": "https://github.com/ruci06/dokhudokhu/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1230237338
|
Cannot auth with password only
Redis can be configured with requirepass, which makes the AUTH command accept a password without a username. Currently, there is no option for this: the AUTH command can only be initiated with both a username and a password.
Source: https://redis.io/commands/auth/
https://github.com/rueian/rueidis/blob/a525ad43300a4ea316a8ca1355a46752ee8be8ea/pipe.go#L73-L79
Hi @nerg4l,
Thank you for pointing it out.
In the new v0.0.45, it will use the default username if a password is provided without a username, according to https://redis.io/commands/hello/
Thanks
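For illustration, the fallback the fix describes could be sketched like this (a minimal Rust sketch, not the rueidis source, which is written in Go):

```rust
/// Sketch of the described fallback: a bare password is sent as the
/// ACL "default" user, matching how `requirepass` maps onto the ACL
/// system per the HELLO documentation.
fn auth_pair(username: Option<&str>, password: &str) -> (String, String) {
    (username.unwrap_or("default").to_owned(), password.to_owned())
}

fn main() {
    assert_eq!(
        auth_pair(None, "secret"),
        ("default".to_string(), "secret".to_string())
    );
}
```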
|
gharchive/issue
| 2022-05-09T20:50:02 |
2025-04-01T06:40:16.977667
|
{
"authors": [
"nerg4l",
"rueian"
],
"repo": "rueian/rueidis",
"url": "https://github.com/rueian/rueidis/issues/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1264813393
|
Question about the image selected for each individual
Dear author,
Can you explain specifically which cross-sectional image each individual's selection comes from, or was it selected manually? Your article is great, but I would like to know how CT cross-sections are selected for each individual's image. Can you provide an image example?
Hello, thank you for your question. For each individual, the 2D axial slice with the largest tumor area along the z-axis was selected from the 3D CT volume as the input to the diagnostic model.
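For illustration only, that selection rule might be sketched like this (hypothetical code, not from the STIC repository, assuming per-slice tumor areas are precomputed):

```rust
/// Pick the index of the axial slice with the largest tumor area along
/// the z-axis. `areas` holds one precomputed tumor area per slice.
fn largest_tumor_slice(areas: &[f64]) -> Option<usize> {
    areas
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.partial_cmp(b).expect("areas must not be NaN"))
        .map(|(i, _)| i)
}

fn main() {
    // Slice 1 has the largest area, so it would be fed to the model.
    assert_eq!(largest_tumor_slice(&[0.0, 3.5, 2.0]), Some(1));
}
```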
Thanks for your response! Could you send me sample clinical data? I am trying to reproduce your paper. Here is my email address: 3346720994@qq.com
|
gharchive/issue
| 2022-06-08T14:08:07 |
2025-04-01T06:40:17.022235
|
{
"authors": [
"hqhtiantian520",
"ruitian-olivia"
],
"repo": "ruitian-olivia/STIC-model",
"url": "https://github.com/ruitian-olivia/STIC-model/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
193682661
|
Archive site under 2014.rulu.eu
Ship this with rulu-eu/rulu2017/pull/1
👌
|
gharchive/pull-request
| 2016-12-06T04:49:51 |
2025-04-01T06:40:17.024222
|
{
"authors": [
"dmathieu",
"mehlah"
],
"repo": "rulu-eu/rulu2014",
"url": "https://github.com/rulu-eu/rulu2014/pull/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2474097825
|
[Question]: How to proper load a Redis Vector Store
Question Validation
[X] I have searched both the documentation and discord for an answer.
Question
Suppose I create a vector store like:
# Define the embedding model, vector store, and index
embed_model = OpenAIEmbedding(
    embed_batch_size=10,
    model="text-embe",  # model name truncated in the original post
    api_key=API_KEY,
)
vector_store = RedisVectorStore(redis_client=redis_client, schema=index_schema, overwrite=True)
documents = [Document(text=item, metadata={"merchant_id": "test"}) for item in items]
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context, embed_model=embed_model)
If I query it like:
retriever = index.as_retriever(
    similarity_top_k=3,
    filters=MetadataFilters(filters=[ExactMatchFilter(key="merchant_id", value="test")]),
)
query_bundle = QueryBundle("pizza")
retrieved_nodes = retriever.retrieve(query_bundle)
print(retrieved_nodes)
I get results. Nice.
But when I try to load it like:
vector_store = RedisVectorStore(
    index_schema=index_schema,
    redis_client=redis_client,
    overwrite=False,
)
index = VectorStoreIndex.from_vector_store(vector_store, embed_model=embed_model)
retriever = index.as_retriever(
    similarity_top_k=3,
    filters=MetadataFilters(filters=[ExactMatchFilter(key="merchant_id", value="test")]),
)
query_bundle = QueryBundle("Pizza")
retrieved_nodes = retriever.retrieve(query_bundle)
print(retrieved_nodes)
I get:
merchant_id field was not included as part of the index schema, and thus cannot be used as a filter condition.
[]
What is wrong with the way I am loading it?
@luccazifood initially you have
RedisVectorStore(redis_client=redis_client, schema=index_schema, overwrite=True)
But then when loading,
vector_store = RedisVectorStore(index_schema=index_schema, redis_client=redis_client, overwrite=False)
Notice the switch from schema= to index_schema=? The correct kwarg is schema=.
Ohhh, thanks @logan-markewich! I think I need to take a break, lol.
|
gharchive/issue
| 2024-08-19T19:57:47 |
2025-04-01T06:40:17.031955
|
{
"authors": [
"logan-markewich",
"luccazifood"
],
"repo": "run-llama/llama_index",
"url": "https://github.com/run-llama/llama_index/issues/15499",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1953815124
|
[Bug] Wrong Runtime if task failed
Hello, I've run into an interesting issue:
state: RUNNING
createdAt: 2023-10-20T01:19:56.420301Z
scheduledAt: 2023-10-20T01:19:56.424951Z
startedAt: 2023-10-20T07:12:15.879177Z
failedAt: 2023-10-20T01:19:56.252516Z
So if a task failed, we see an incorrect runtime in tork-web.
It would also be nice to see a fail counter near the task state.
When it completes, the time is correct:
@ppcololo this should be fixed. could you confirm?
I don't know what you fixed, but I still see this issue:
coordinator:
Can you try again using the latest release?
I will try, thanks.
But I can't reproduce it right now.
If I hit the issue again, I'll write here.
createdAt: 2023-11-11T04:32:06.689706Z
scheduledAt: 2023-11-11T04:32:06.700287Z
startedAt: 2023-11-11T05:37:18.931082Z
failedAt: 2023-11-11T04:32:06.675842Z
New data: no time at all
for running task
state: RUNNING
createdAt: 2023-11-15T11:46:35.067896Z
scheduledAt: 2023-11-15T11:46:35.091199Z
startedAt: 2023-11-15T11:46:35.214332Z
failedAt: 2023-11-15T11:46:35.0151Z
for completed
state: COMPLETED
createdAt: 2023-11-13T20:33:12.05211Z
scheduledAt: 2023-11-13T20:36:03.056434Z
startedAt: 2023-11-15T03:16:02.248571Z
completedAt: 2023-11-15T03:16:03.156508Z
Can you try again using the latest release?
I didn't see any issues
closing
|
gharchive/issue
| 2023-10-20T08:40:29 |
2025-04-01T06:40:17.039072
|
{
"authors": [
"ppcololo",
"runabol"
],
"repo": "runabol/tork-web",
"url": "https://github.com/runabol/tork-web/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
993913686
|
Hi, could we PLEASE get a larger health bar? The current one is too short
The problem is that I like to PvP and knowing what your health is, as exactly as possible, is important.
This is why I would LOVE to have a longer health bar, so it's easier to see how much hp I have
Numbers over the current health bar would also be amazing
Thank you!
You're welcome
The healthbar above the player is limited, since the values used for it aren't representative of the actual health values; this is a Jagex limitation. A wider health bar would not make it more granular. Status Bars or the hp orb are much better ways of showing this.
|
gharchive/issue
| 2021-09-11T18:55:01 |
2025-04-01T06:40:17.090078
|
{
"authors": [
"Hydrox6",
"MyopicHuman",
"Whowhos"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/issues/14127",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1065385000
|
Add combined option "Europe" to quick-hop region
If you are based in Europe, the UK and Germany worlds usually have very similar ping; being able to use a larger pool of worlds for quick hopping while maintaining a decent ping would be handy.
added in f6a3d222b4986ab7beab54b650797f2821f5852c
|
gharchive/issue
| 2021-11-28T17:08:57 |
2025-04-01T06:40:17.091033
|
{
"authors": [
"Adam-",
"Jin-Jiyunsun"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/issues/14424",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
320122522
|
Hot/Cold Clue Solver
https://twitter.com/bitterkoekjers/status/840219782372839424
I feel this would be hard to implement, but if done would be amazing. If you knew the exact ranges of what each temperature meant for how close you are to the clue (not sure if that is documented anywhere) then you could effectively have a plugin that narrows down the locations for you over time.
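For a sense of what that narrowing could look like, here is a rough sketch (made-up types, coordinates, and distance bands; the real temperature ranges and the game's distance metric are assumptions here):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Tile { x: i32, y: i32 }

/// Keep only the candidate dig spots whose distance from the player
/// falls inside the band implied by the reported temperature.
fn narrow(candidates: &[Tile], player: Tile, band: (i32, i32)) -> Vec<Tile> {
    candidates
        .iter()
        .copied()
        .filter(|c| {
            // Chebyshev tile distance, assumed here as the metric.
            let d = (c.x - player.x).abs().max((c.y - player.y).abs());
            (band.0..=band.1).contains(&d)
        })
        .collect()
}

fn main() {
    let spots = [Tile { x: 0, y: 0 }, Tile { x: 10, y: 3 }];
    // A hypothetical "warm" band of 5..=15 tiles keeps only the second spot.
    assert_eq!(narrow(&spots, Tile { x: 0, y: 0 }, (5, 15)), vec![Tile { x: 10, y: 3 }]);
}
```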
I've got a working solution for this. I need to clean up my code and work on the UI a bit, but I should be able to do a PR soon.
Here's some teasers:
https://streamable.com/370tj
https://streamable.com/hrgrq
https://streamable.com/gb8bn
Done in #2412
|
gharchive/issue
| 2018-05-04T00:26:49 |
2025-04-01T06:40:17.093556
|
{
"authors": [
"Adam-",
"Eadgars-Ruse",
"Jalopyy"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/issues/2318",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
405941443
|
"Reset others" in loot tab
Is your feature request related to a problem? Please describe.
Currently I have to reset everything manually when I want to track a single monster's loot after I have looted a few others, e.g. on Slayer tasks.
Describe the solution you'd like
A "Reset others" option like in the exp tracker.
It's not that hard to simply click reset multiple times.
|
gharchive/issue
| 2019-02-02T03:07:21 |
2025-04-01T06:40:17.095009
|
{
"authors": [
"deathbeam",
"michaelcubel"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/issues/7661",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
312152025
|
player indicators: Add clan caller
Adds a clan caller option to player indicators, which will show the clan caller's name as well as the clan caller's target's name.
Any news on this?
My comments still apply; the options should be renamed to just "highlight" etc. to be more universal.
|
gharchive/pull-request
| 2018-04-07T00:01:20 |
2025-04-01T06:40:17.096061
|
{
"authors": [
"deathbeam",
"mrpker9",
"sethtroll"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/pull/1301",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
466711618
|
events: Fix ItemContainerChanged NPEs
`ItemContainerChanged.getItemContainer()` is nullable and should therefore be null-checked in subscribers. This commit marks the field as nullable and changes its usages to null-check it properly.
Is this necessary? It's doing reference checks everywhere, so null-checking is irrelevant unless both sides can be null, which I don't think can happen. At least, I never saw an NPE from those methods before. Do you have a stack trace or something?
Does it make sense to send item container changed events for a null container?
I'll get some stacktraces later today, but I noticed NPEs from most plugins using ItemContainerChanged on startup with empty inventory/equipment.
FWIW I can't repro on release. This is from master, which is currently ad2cce60b.
Stack traces:
2019-07-11 16:02:02 [Client] WARN n.runelite.client.eventbus.EventBus - Uncaught exception in event subscriber
java.lang.NullPointerException: null
at net.runelite.client.plugins.cluescrolls.ClueScrollPlugin.onItemContainerChanged(ClueScrollPlugin.java:246)
at net.runelite.client.eventbus.EventBus$Subscriber.invoke(EventBus.java:72)
at net.runelite.client.eventbus.EventBus.post(EventBus.java:218)
at net.runelite.client.util.DeferredEventBus.replay(DeferredEventBus.java:69)
at net.runelite.client.callback.Hooks.updateNpcs(Hooks.java:469)
at client.lc(client.java:63774)
at w.hq(w.java)
at client.hg(client.java:6630)
at client.ew(client.java:3324)
at client.au(client.java:1621)
at ba.p(ba.java:393)
at ba.run(ba.java:372)
at java.lang.Thread.run(Thread.java:748)
2019-07-11 16:02:02 [Client] WARN n.runelite.client.eventbus.EventBus - Uncaught exception in event subscriber
java.lang.NullPointerException: null
at net.runelite.client.plugins.runecraft.RunecraftPlugin.onItemContainerChanged(RunecraftPlugin.java:186)
at net.runelite.client.eventbus.EventBus$Subscriber.invoke(EventBus.java:72)
at net.runelite.client.eventbus.EventBus.post(EventBus.java:218)
at net.runelite.client.util.DeferredEventBus.replay(DeferredEventBus.java:69)
at net.runelite.client.callback.Hooks.updateNpcs(Hooks.java:469)
at client.lc(client.java:63774)
at w.hq(w.java)
at client.hg(client.java:6630)
at client.ew(client.java:3324)
at client.au(client.java:1621)
at ba.p(ba.java:393)
at ba.run(ba.java:372)
at java.lang.Thread.run(Thread.java:748)
2019-07-11 16:02:02 [Client] WARN n.runelite.client.eventbus.EventBus - Uncaught exception in event subscriber
java.lang.NullPointerException: null
at net.runelite.client.plugins.timers.TimersPlugin.onItemContainerChanged(TimersPlugin.java:811)
at net.runelite.client.eventbus.EventBus$Subscriber.invoke(EventBus.java:72)
at net.runelite.client.eventbus.EventBus.post(EventBus.java:218)
at net.runelite.client.util.DeferredEventBus.replay(DeferredEventBus.java:69)
at net.runelite.client.callback.Hooks.updateNpcs(Hooks.java:469)
at client.lc(client.java:63774)
at w.hq(w.java)
at client.hg(client.java:6630)
at client.ew(client.java:3324)
at client.au(client.java:1621)
at ba.p(ba.java:393)
at ba.run(ba.java:372)
at java.lang.Thread.run(Thread.java:748)
2019-07-11 16:02:02 [Client] WARN n.runelite.client.eventbus.EventBus - Uncaught exception in event subscriber
java.lang.NullPointerException: null
at net.runelite.client.plugins.roguesden.RoguesDenPlugin.onItemContainerChanged(RoguesDenPlugin.java:99)
at net.runelite.client.eventbus.EventBus$Subscriber.invoke(EventBus.java:72)
at net.runelite.client.eventbus.EventBus.post(EventBus.java:218)
at net.runelite.client.util.DeferredEventBus.replay(DeferredEventBus.java:69)
at net.runelite.client.callback.Hooks.updateNpcs(Hooks.java:469)
at client.lc(client.java:63774)
at w.hq(w.java)
at client.hg(client.java:6630)
at client.ew(client.java:3324)
at client.au(client.java:1621)
at ba.p(ba.java:393)
at ba.run(ba.java:372)
at java.lang.Thread.run(Thread.java:748)
2019-07-11 16:02:02 [Client] WARN n.runelite.client.eventbus.EventBus - Uncaught exception in event subscriber
java.lang.NullPointerException: null
at net.runelite.client.plugins.cannon.CannonPlugin.onItemContainerChanged(CannonPlugin.java:164)
at net.runelite.client.eventbus.EventBus$Subscriber.invoke(EventBus.java:72)
at net.runelite.client.eventbus.EventBus.post(EventBus.java:218)
at net.runelite.client.util.DeferredEventBus.replay(DeferredEventBus.java:69)
at net.runelite.client.callback.Hooks.updateNpcs(Hooks.java:469)
at client.lc(client.java:63774)
at w.hq(w.java)
at client.hg(client.java:6630)
at client.ew(client.java:3324)
at client.au(client.java:1621)
at ba.p(ba.java:393)
at ba.run(ba.java:372)
at java.lang.Thread.run(Thread.java:748)
(the same five stack traces repeat once more in the original log)
This has been addressed internally.
|
gharchive/pull-request
| 2019-07-11T07:25:46 |
2025-04-01T06:40:17.101121
|
{
"authors": [
"Adam-",
"Nightfirecat",
"deathbeam"
],
"repo": "runelite/runelite",
"url": "https://github.com/runelite/runelite/pull/9336",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
255107090
|
Add travis' validation of "Signed-off-by:" line
This adds validation of the "Signed-off-by:" line.
Signed-off-by: Fabio Utzig utzig@apache.org
Regarding the comment suggesting to check the committer as well as the author:
[utzig@inspiron mcuboot]$ git show -s --format="%cn <%ce>" cb1bb48
David Brown <davidb@davidb.org>
[utzig@inspiron mcuboot]$ git show -s --format="%an <%ae>" cb1bb48
David Brown <david.brown@linaro.org>
I'm not sure it would be a great idea at the moment. I would prefer to work on it later in a new patch (if at all).
You should only ever see the davidb.org committer address after GitHub rebases it; as far as GitHub is concerned, it is using my primary account for the rebase.
You should always be OK adding the committer check; when added, it will usually come after the author's Signed-off-by.
Added the check for the committer as well; let's see how it goes.
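The shape of such a check, as a rough sketch (not the actual mcuboot Travis script, which is not shown here):

```rust
/// Require a Signed-off-by line for the given identity: the author,
/// and per the review discussion, the committer as well.
fn has_signoff(message: &str, identity: &str) -> bool {
    let wanted = format!("Signed-off-by: {identity}");
    message.lines().any(|line| line.trim() == wanted)
}

fn main() {
    let msg = "boot: fix image check\n\nSigned-off-by: Fabio Utzig <utzig@apache.org>";
    assert!(has_signoff(msg, "Fabio Utzig <utzig@apache.org>"));
    assert!(!has_signoff(msg, "David Brown <davidb@davidb.org>"));
}
```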
|
gharchive/pull-request
| 2017-09-04T19:36:52 |
2025-04-01T06:40:17.116995
|
{
"authors": [
"d3zd3z",
"utzig"
],
"repo": "runtimeco/mcuboot",
"url": "https://github.com/runtimeco/mcuboot/pull/115",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2316926294
|
fastsdcpu offline?
I want to know if there's a way to use FastSD CPU offline.
How can I use the models I have in the A1111/models/StableDiffusion folder on my PC?
I will convert the models into OpenVINO models following the scripts.
Should I modify stable-diffusion-models.txt or settings.yaml?
The models are located in D:/A1111/models/StableDiffusion.
I'm new to Stable Diffusion and need help and step-by-step instructions.
@bjivanovich Thanks for using FastSD CPU, enable this setting
Checkout this https://github.com/rupeshs/fastsdcpu?tab=readme-ov-file#models
@bjivanovich LCM-LoRA offline tutorial https://www.youtube.com/watch?v=T8kZL5l3K8c
|
gharchive/issue
| 2024-05-25T10:51:56 |
2025-04-01T06:40:17.120571
|
{
"authors": [
"bjivanovich",
"rupeshs"
],
"repo": "rupeshs/fastsdcpu",
"url": "https://github.com/rupeshs/fastsdcpu/issues/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
134275399
|
Setting public headers for Xcode framework target
I have built the 02-library example for Xcode with the following command:
build.py --clear --toolchain xcode --framework
This built a framework, but it does not have any public headers in the framework's Headers folder.
I added a public.h file to the example and added the following entry in the CMakeLists.txt
set_target_properties(
foo PROPERTIES
PUBLIC_HEADER public.h)
When building, the public.h file is copied to '_install/xcode/lib/'.
But the build fails with the error:
Expected only one lib in directory: .../_install/xcode/lib
But found: ['.../_install/xcode/lib/libfood.a', '.../_install/xcode/lib/public.h']
What is the correct way to configure the public headers for a framework build?
Try installing the headers to the include directory. Explanation: this is a workaround script which expects only one file in the lib folder (which should be the *.a library), so when there are other files the script can't decide which file is the library.
Just for your information, there are a few fixes in the latest CMake version that allow creating frameworks without hacks/workarounds (just using CMake). I haven't tried it yet, but if it works okay the --framework option will be removed.
Still using CMake 3.3:
cmake_minimum_required(VERSION 3.3)
I added the FRAMEWORK property
set_target_properties(
foo PROPERTIES
FRAMEWORK TRUE
PUBLIC_HEADER public.h)
Then I built using the following command
./buildtools/polly/bin/build.py --clear --toolchain xcode
This created a Framework with 'public.h' in the Headers folder located at:
_builds/xcode/Debug/foo.framework
But this only appears to be a MAC OS X framework not a fat Framework, I ran the file command:
file _builds/xcode/Debug/foo.framework/foo
result was:
Mach-O 64-bit dynamically linked shared library x86_64
I also tried --toolchain ios-9-2 but the framework still appears to be created only for x86_64
But this only appears to be a MAC OS X framework not a fat Framework
This is by design. Fat library created only for iOS.
I also tried --toolchain ios-9-2 but the framework still appears to be created only for x86_64
I will test latest CMake version with few improvements, may be will create an example.
What is the correct way to configure the public headers for a framework build
Okay, I think I know what the problem is here. The --framework option expects headers to be located in the directory <install-prefix>/include/<frameworkname>. I.e. if you have the library libfoo.dylib installed, then the headers should be located in <install-prefix>/include/foo and will be moved to foo.framework/Headers. The reason for this is that a project which uses foo should do #include <foo/*.hpp> in both variants. I've added a warning to the build.py script, so you should see the following message:
Warning: no headers found for framework (dir: /.../_install/ios-9-1-armv7/include/boo)
which can be fixed with the following CMake code:
install(FILES boo.hpp DESTINATION include/boo)
I also tried --toolchain ios-9-2 but the framework still appears to be created only for x86_64
Are you sure you're using patched CMake version?
https://github.com/ruslo/polly/wiki/Toolchain-list#ios
https://github.com/ruslo/hunter#notes-about-version-of-cmake
Example with framework and native Xcode project updated:
https://github.com/forexample/ios-dynamic-framework#headers
Great! I have successfully built and linked the ios-dynamic-framework project.
I did have to use patched CMake version, I tried with the latest 3.5.0-rc2 CMake but that failed to build the Fat framework bundle.
I'm going to try and replicate the project setup to my own library repo.
but that failed to build the Fat framework bundle
With CMake 3.5+ you have to use CMAKE_IOS_INSTALL_COMBINED=YES and CMAKE_XCODE_ATTRIBUTE_ONLY_ACTIVE_ARCH=NO and run install (no need to use --framework, now it's --install). However the dynamic framework will not be signed properly, so the last time I tried it I failed to run the bundle on a real device.
I have replicated the demo project setup on my own repo, I did not copy over the custom jenkins.py as I am just trying to build a dynamic library component, I am calling build.py directly with:
./buildtools/polly/bin/build.py --clear --toolchain ios-9-2 --framework --config Release
The build is successful and the lib binary now reports as a Fat lib and the public header is copied over to the framework/Headers folder.
I have noticed that if I build with --config Debug, a 'd' gets appended to the project name for all folders and files, and it also causes the public header not to get copied to framework/Headers (unless I install it to an include/<name>d folder).
Is there a way to prevent the 'd' suffix from being added, and maybe create the framework under a debug or release folder depending on the --config value?
Is there a way to prevent the 'd' suffix getting added
Set it in command line -DCMAKE_DEBUG_POSTFIX="", in terms of build.py script it will be --fwd CMAKE_DEBUG_POSTFIX=""
Both iOS and OSX framework builds are working, thanks.
|
gharchive/issue
| 2016-02-17T13:10:32 |
2025-04-01T06:40:17.141138
|
{
"authors": [
"novocodev",
"ruslo"
],
"repo": "ruslo/polly",
"url": "https://github.com/ruslo/polly/issues/66",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
176049513
|
Adding ec2 test autogen
Builds on https://github.com/rusoto/rusoto/pull/373 and will need a rebase after that is merged.
Found a few bugs!
test result: FAILED. 75 passed; 15 failed; 0 ignored; 0 measured
Trying to figure out how we should handle cases where the call output is a shape wrapped in a shape where the top shape is only used for the top level xml tag. Example test_parse_ec_2_request_spot_instances:
<RequestSpotInstancesResponse xmlns="http://ec2.amazonaws.com/doc/2014-06-15/">
<spotInstanceRequestSet>
...
</spotInstanceRequestSet>
</RequestSpotInstancesResponse>
The output shape looks like this:
"RequestSpotInstancesResult": {
    "type": "structure",
    "members": {
        "SpotInstanceRequests": {
            "shape": "SpotInstanceRequestList",
            "documentation": "<p>One or more Spot instance requests.</p>",
            "locationName": "spotInstanceRequestSet"
        }
    }
}
We want to generate code that ensures the top-level element is there and then immediately calls the child shape deserializer.
After working on this more, I see that while the tests pass, nothing is correctly populated in the output object. It needs a bit deeper of a dive than expected.
This is off in the weeds. I'm extracting a better solution of trimming code we generate by not populating serializers and deserializers if we know we don't need them. Then back to extracting the good bits from this into smaller scoped PRs.
Closing this, got the work extracted from this mess.
|
gharchive/pull-request
| 2016-09-09T16:18:48 |
2025-04-01T06:40:17.145015
|
{
"authors": [
"matthewkmayer"
],
"repo": "rusoto/rusoto",
"url": "https://github.com/rusoto/rusoto/pull/375",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
274717120
|
add metadata to S3 put/copy/create_multipart
The metadata field was already available on the S3 request objects, but it was not being serialized into the request headers. This fixes that: it automatically prepends the x-amz-meta- prefix to all keys in the metadata hashmap. I haven't done anything on the get-object side, which appears to properly have the x-amz-meta- prefix stripped from the keys in the GetObjectOutput metadata field.
I believe it is correct, and I've tested with my own setup.
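The core of the serialization described above reduces to prefixing each metadata key; a simplified sketch (the real change writes into rusoto's generated request types, not a plain Vec):

```rust
use std::collections::HashMap;

/// Turn user metadata into request headers by prepending the
/// `x-amz-meta-` prefix to every key, as S3 expects.
fn metadata_headers(metadata: &HashMap<String, String>) -> Vec<(String, String)> {
    metadata
        .iter()
        .map(|(key, value)| (format!("x-amz-meta-{key}"), value.clone()))
        .collect()
}

fn main() {
    let mut meta = HashMap::new();
    meta.insert("owner".to_owned(), "bluejekyll".to_owned());
    let headers = metadata_headers(&meta);
    assert_eq!(headers[0].0, "x-amz-meta-owner");
}
```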
The failure appears to be transient:
Too many open files (os error 24)
Thanks for the PR! I've restarted the one Travis job that failed - hopefully it'll work this time. 😄
@bluejekyll - Have you run the s3 integration tests that we don't run on CI? If not I'd be happy to run them for you, but the code itself looks fine so running them is really just to make sure.
I didn't realize there were tests not run by CI. If you point me to the doc or give me the command, I'll happily post the results.
I can't find the docs on running integration tests, so I'll make sure that gets documented in the repo.
To get going, go into the integration_tests folder and run cargo test --features all to run all integration tests. To run just S3, run cargo test --features s3.
The OSX build has failed for two different reasons so far that appear to be transient so I'm restarting the failing one again.
This looks good to me, thanks!
Could you add an integration test to the S3 tests demonstrating this new behavior? The test goes in this file: https://github.com/rusoto/rusoto/blob/master/integration_tests/tests/s3.rs .
The OSX build issues should be reduced or eliminated from https://github.com/rusoto/rusoto/pull/875 which is now in master. 😄
I'll work on the integration test today. Thanks!
Some things I noticed while testing:
Unicode in the keys is not supported; if anyone wants that, it will need some form of encoding.
Unicode is also a problem in the values, insofar as unicode characters are encoded but not decoded on the Output objects.
Those seem like more general issues with the way headers are treated in the Rusoto library, and I'm not explicitly handling those cases in this PR, but it's something we may want to enhance in the future.
Anyway, I've added GetObject and HeadObject tests for S3 to validate the metadata section. Let me know if there's anything else you'd like to see.
@matthewkmayer let me know if anything else is needed for this change, Thanks!
Planning on getting to it tonight. 👍
All good, thanks!
Thank you!
|
gharchive/pull-request
| 2017-11-17T00:59:26 |
2025-04-01T06:40:17.152998
|
{
"authors": [
"SecurityInsanity",
"bluejekyll",
"matthewkmayer"
],
"repo": "rusoto/rusoto",
"url": "https://github.com/rusoto/rusoto/pull/871",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1993303049
|
Change in float behavior in 0.30.0
Hi there! Thanks for the project, it's awesome.
We saw a change in float behavior from 0.29 to 0.30: https://github.com/PRQL/prql/actions/runs/6864485300/job/18666598362?pr=3797#step:13:101, from https://github.com/PRQL/prql/pull/3797
We test PRQL across a number of databases, and rusqlite==0.29.0 matched other databases on that query, while 0.30.0 doesn't. Over at PRQL we don't have particularly strong preferences about exact float behavior, though avoiding special-casing specific databases is good, and I thought it might be helpful for you to know about the change.
Thanks!
I am not sure but it may be related to this same regression in an Java / JDBC SQLite driver here:
https://github.com/xerial/sqlite-jdbc/commit/8880c338290e6505bc85eec96917552f027a99bd
Since version 3.43.0 the SQLite source doesn't round the value anymore, but returns the value including the 8-byte IEEE floating-point inaccuracies. Somehow, different operating systems behave differently.
According to the SQLite forum, only 32-bit platforms should be impacted?
https://sqlite.org/forum/forumpost/b9511aa180bd1f4875ff20cf7f9953a216f5fe8092e95237321c39a0884ed930
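To make the behavior concrete, here is a minimal rusqlite query that surfaces the raw IEEE double (a sketch; PRQL's failing query is different and not reproduced here):

```rust
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open_in_memory()?;
    // SQLite computes this as an 8-byte IEEE double; since SQLite
    // 3.43.0 the value is returned without the old rounding step,
    // which the thread suspects is what changed between rusqlite
    // 0.29 and 0.30.
    let x: f64 = conn.query_row("SELECT 1.0 / 3.0", [], |row| row.get(0))?;
    println!("{x:.17}");
    Ok(())
}
```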
|
gharchive/issue
| 2023-11-14T18:20:13 |
2025-04-01T06:40:17.157383
|
{
"authors": [
"gwenn",
"max-sixty"
],
"repo": "rusqlite/rusqlite",
"url": "https://github.com/rusqlite/rusqlite/issues/1415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
679734174
|
Hover details show unexpected crate name/path
Hey! I hope that this hasn't been posted before. I searched and could not find anything.
I expect the first line of the hover details to show std::collections::HashMap. Instead, it shows std::collections::hash::map.
std::collections::hash::map::HashMap is the full path of HashMap. std::collections::HashMap is just a re-export.
https://github.com/rust-lang/rust/blob/de32266a1780aa4ef748ce7f6200a1554fad0aca/library/std/src/collections/hash/map.rs#L200
https://github.com/rust-lang/rust/blob/de32266a1780aa4ef748ce7f6200a1554fad0aca/library/std/src/collections/mod.rs#L425
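A self-contained illustration of that distinction (a simplified module layout, not the real std sources):

```rust
mod collections {
    pub mod hash {
        pub mod map {
            pub struct HashMap;
        }
    }
    // The short, familiar path is just a re-export of the item
    // defined at the longer path above.
    pub use self::hash::map::HashMap;
}

fn main() {
    let _defined_at = collections::hash::map::HashMap; // defining path
    let _re_export = collections::HashMap; // re-exported path
}
```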
I see - I figured that something like this would be the case. Is there a way to get the re-exported path instead?
Rustc has the same problem with error messages: https://github.com/rust-lang/rust/issues/21934
I see. I understand that this isn't a priority for many people. I'll keep an eye on it though. Thank you :100:
|
gharchive/issue
| 2020-08-16T11:01:52 |
2025-04-01T06:40:17.165939
|
{
"authors": [
"bjorn3",
"samhedin"
],
"repo": "rust-analyzer/rust-analyzer",
"url": "https://github.com/rust-analyzer/rust-analyzer/issues/5771",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1034555515
|
feat: Add assist for replacing turbofish with explicit type.
Converts ::<_> to an explicit type assignment.
let args = args.collect::<Vec<String>>();
->
let args: Vec<String> = args.collect();
Closes #10285
|
gharchive/pull-request
| 2021-10-25T00:25:51 |
2025-04-01T06:40:17.167601
|
{
"authors": [
"lnicola",
"terrynsun"
],
"repo": "rust-analyzer/rust-analyzer",
"url": "https://github.com/rust-analyzer/rust-analyzer/pull/10629",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
852549806
|
Remove Ty::substs{_mut}
Almost all uses actually only care about ADT substs, so it's better to be explicit. The methods were a bad abstraction anyway since they already didn't include the inner types of e.g. TyKind::Ref anymore.
bors r+
changelog skip
bors r+
|
gharchive/pull-request
| 2021-04-07T15:50:47 |
2025-04-01T06:40:17.168885
|
{
"authors": [
"flodiebold"
],
"repo": "rust-analyzer/rust-analyzer",
"url": "https://github.com/rust-analyzer/rust-analyzer/pull/8402",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
936123499
|
feat: Make inline_function work on methods
Now called inline_call.
There is a lot of improvement potential for the assist here still.
For one, better handling of the self param, which currently always emits ref (mut) for &(mut) self params even when not necessary.
A problem we currently have is that function parameters can have coercions happen, which this assist will lose, as it doesn't emit type ascriptions for the let statements; see the sketch below.
And there is the general potential for just not generating some of the let statements at all and inlining the expressions instead, depending on the expression (I'll work on this next).
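A small example of the coercion problem mentioned above (hypothetical code, not from the assist's test suite):

```rust
use std::fmt::Debug;

fn print_it(x: &dyn Debug) {
    println!("{:?}", x);
}

fn main() {
    // At the call site, `&5` is coerced from `&i32` to `&dyn Debug`.
    print_it(&5);
    // A naive inline that emits `let x = &5;` types `x` as `&i32`;
    // preserving the coercion needs a type ascription on the let:
    let x: &dyn Debug = &5;
    println!("{:?}", x);
}
```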
bors r+
We should start recording adjustments (for coercions and self parameter autoref / deref) in type inference sooner rather than later so we can handle these cases more correctly!
Definitely, I think this is the third issue/feature which wants coercions tracked. I'll open a new issue for this one to track them together.
|
gharchive/pull-request
| 2021-07-03T00:01:19 |
2025-04-01T06:40:17.171460
|
{
"authors": [
"Veykril",
"flodiebold"
],
"repo": "rust-analyzer/rust-analyzer",
"url": "https://github.com/rust-analyzer/rust-analyzer/pull/9468",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1106020901
|
Refactor use map_err
issue: https://github.com/rust-bitcoin/rust-bitcoin/issues/793
Change to using map_err.
I have refactored another place too.
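The shape of the refactor, on invented types rather than the actual rust-bitcoin code:

```rust
use std::num::ParseIntError;

#[derive(Debug)]
enum ParseError {
    Int(ParseIntError),
}

/// Instead of matching on the Result and wrapping the error by hand,
/// `map_err` converts the error type in one call.
fn parse_len(s: &str) -> Result<usize, ParseError> {
    s.parse::<usize>().map_err(ParseError::Int)
}

fn main() {
    assert!(parse_len("42").is_ok());
    assert!(parse_len("x").is_err());
}
```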
I approved the run, but unless I'm confused about something, it'll fail on missing commas.
|
gharchive/pull-request
| 2022-01-17T15:54:32 |
2025-04-01T06:40:17.181133
|
{
"authors": [
"Kixunil",
"wim-web"
],
"repo": "rust-bitcoin/rust-bitcoin",
"url": "https://github.com/rust-bitcoin/rust-bitcoin/pull/794",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
499685699
|
Quickfix: Exclude the template file
This PR excludes the template file to fix the issue (screenshot omitted here).
Follow-up to #21.
rustwasm wg seems to do exactly the same thing: https://github.com/rustwasm/rustwasm.github.io/blob/src/_config.yml
|
gharchive/pull-request
| 2019-09-27T23:07:20 |
2025-04-01T06:40:17.185212
|
{
"authors": [
"ozkriff"
],
"repo": "rust-gamedev/rust-gamedev.github.io",
"url": "https://github.com/rust-gamedev/rust-gamedev.github.io/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
594527737
|
Ash 0.30 release
I'm not the library author but the latest release seems worthy to be included in the newsletter!
Part of #89 ("Newsletter 8: Coordination/Tracking")
Nice, thanks!
|
gharchive/pull-request
| 2020-04-05T15:56:27 |
2025-04-01T06:40:17.186305
|
{
"authors": [
"msiglreith",
"ozkriff"
],
"repo": "rust-gamedev/rust-gamedev.github.io",
"url": "https://github.com/rust-gamedev/rust-gamedev.github.io/pull/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
506903288
|
Fix version issues
This should fix both #270 and #274 as well as an issue I noted with particular versions of backtrace-rs and Rust 1.31.0
I'm not convinced that 0.13 will ever actually be released, but I think it's fair to bump the minimum Rust version to 1.13 and move from try! to ? as in #279.
Wish I had found this before I addressed the failures today :)
OK, I'm closing this out now that #279 is merged. I will open a new PR fixing the backtrace version issue.
|
gharchive/pull-request
| 2019-10-14T22:23:44 |
2025-04-01T06:40:17.200355
|
{
"authors": [
"AndyGauge",
"palfrey"
],
"repo": "rust-lang-nursery/error-chain",
"url": "https://github.com/rust-lang-nursery/error-chain/pull/275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
348453868
|
Check only_v6 mode setting before trying to set it
Not all platforms allow changing a socket's IPV6_V6ONLY mode. OpenBSD, for example, treats this property as read-only: it is always enabled. It returns an error for any attempt to set it, even when you're setting it to what it already was. You are allowed to read out the value, though.
This PR changes the only_v6/set_only_v6 functions so that they first check if the value is already what is desired, before attempting to set it. This allows the user to explicitly enable (or disable) IPV6_V6ONLY mode without having to worry about getting an error on some systems, as long as the default value for the system is the same as what is being requested. You only get an error when the system default is different, and unable to be changed.
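The check-before-set pattern, sketched here against the socket2 crate that the thread itself mentions (the PR actually patches net2's own functions):

```rust
use std::io;
use socket2::{Domain, Socket, Type};

/// Only call setsockopt when the current IPV6_V6ONLY value differs
/// from the desired one, so read-only platforms (e.g. OpenBSD) don't
/// error out when the default already matches the request.
fn ensure_only_v6(sock: &Socket, desired: bool) -> io::Result<()> {
    if sock.only_v6()? == desired {
        return Ok(());
    }
    sock.set_only_v6(desired)
}

fn main() -> io::Result<()> {
    let sock = Socket::new(Domain::IPV6, Type::STREAM, None)?;
    ensure_only_v6(&sock, true)
}
```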
Thanks for the PR! This crate, however, is intended to be pretty low-level and correspond 1:1 with setsockopt operations. In that sense I think this may not necessarily be the best place to put this abstraction for OpenBSD perhaps?
That's understandable, but at the moment there's no way to query the socket options before setting them. In my use case, I want to have separate sockets for IPv4 and IPv6 (with the same port), but to do that I have to ensure that IPV6_V6ONLY is set. How would I go about doing this in a portable way, so that I don't receive an error when trying to set it on some platforms?
Perhaps a method could be added to query the settings? (the socket2 crate may also be useful here)
|
gharchive/pull-request
| 2018-08-07T19:21:12 |
2025-04-01T06:40:17.203325
|
{
"authors": [
"Rua",
"alexcrichton"
],
"repo": "rust-lang-nursery/net2-rs",
"url": "https://github.com/rust-lang-nursery/net2-rs/pull/78",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
253127043
|
Bump version to 0.30.0
Version bump, primarily to get #832 and #892 in now that 1.19 is out and untagged unions are usable in stable!
r? @emilio
Thanks for the pull request, and welcome! The Servo team is excited to review your changes, and you should hear from @emilio (or someone else) soon.
From the test output, the stable Rust side is still emitting BindgenUnion (https://github.com/rust-lang-nursery/rust-bindgen/blob/master/tests/expectations/tests/anon_struct_in_union_1_0.rs)
Those are testing Rust 1.0 output.
Anyway, doing a new release sounds fine to me, @fitzgen, any reason we shouldn't?
Did stylo folks end up making their own bug-fix-only branch yet?
I see no reason not to release a 0.30.0 crate.
@bors-servo r+
Didn't, but no reason we can't in the future if we need to. shrug
:pushpin: Commit 3cbdb98 has been approved by emilio
:hourglass: Testing commit 3cbdb9894c5e90631d7fced86f8adf9868672706 with merge 05caa820010d44b601b340228d4f95d81958a270...
@emilio I'll write release notes and publish to crates.io if you haven't already
I planned to do so when it merges, but feel free to :)
Both release notes and publishing, or just the latter?
Mostly publishing. If you could do relnotes that'd be awesome, because it's taking a bit and I'm about to head out.
Sure thing
:sunny: Test successful - status-travis
Approved by: emilio
Pushing 05caa820010d44b601b340228d4f95d81958a270 to master...
0.30.0 is published
And release notes published on r/rust and u.r-l.o
|
gharchive/pull-request
| 2017-08-26T22:28:09 |
2025-04-01T06:40:17.215619
|
{
"authors": [
"bors-servo",
"bradfier",
"emilio",
"fitzgen",
"highfive",
"photoszzt"
],
"repo": "rust-lang-nursery/rust-bindgen",
"url": "https://github.com/rust-lang-nursery/rust-bindgen/pull/935",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
303318864
|
Fix tyvar_behind_raw_pointer warning
See https://github.com/rust-lang/rust/issues/46906
r? @alexcrichton
Thanks!
|
gharchive/pull-request
| 2018-03-08T01:09:04 |
2025-04-01T06:40:17.217343
|
{
"authors": [
"nrc",
"thibaultdelor"
],
"repo": "rust-lang-nursery/rustup.rs",
"url": "https://github.com/rust-lang-nursery/rustup.rs/pull/1371",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
301988404
|
"cargo +X clean" cleans not only for X
Looks like cargo can cache the results of cargo build for different toolchains:
if I run cargo +stable build && cargo +nightly build && cargo +stable build with an empty target,
it builds only two times; the second run of cargo +stable build does nothing.
This is great.
But if I run cargo +nightly clean, the next run of cargo +stable build also does a full rebuild.
It would be great if cargo +nightly clean cleaned only the cache related to the nightly toolchain,
cargo +stable clean cleaned only the cache related to the stable toolchain, and so on.
Related to #5026 ("cargo target fills with outdated artifacts as toolchains are updated/changed").
How would one go about changing clean so it only cleans artefacts relevant to the selected channel?
cargo sweep is adding this functionality.
I don't think it'd make sense for cargo clean to only clean up for the toolchain being invoked but I could see the possibility of a flag. We'd need better tracking of what files are associated with what toolchain. #12633 is the most likely route for that.
|
gharchive/issue
| 2018-03-03T10:45:59 |
2025-04-01T06:40:17.240098
|
{
"authors": [
"Eh2406",
"davemilter",
"dwijnand",
"epage"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/issues/5113",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
488235434
|
cargo publish and cargo package incorrectly complain about .gitignored Cargo.lock in library
Recently (at some point since mid-July; but I don't know the exact version when it started) in my two library repos Cargo has started failing when attempting to cargo publish, with the error:
$ cargo publish
Updating crates.io index
error: 1 files in the working directory contain changes that were not yet committed into git:
Cargo.lock
to proceed despite this, pass the `--allow-dirty` flag
However, this is incorrect because:
This is a library, so Cargo.lock should not be committed or published.
Cargo.lock is in .gitignore.
Cargo.lock has never been checked in to Git.
For confirmation:
$ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
$ git log Cargo.lock
[no output]
$ git rm --cached Cargo.lock
fatal: pathspec 'Cargo.lock' did not match any files
$ cat .gitignore
Cargo.lock
target/
cargo publish fails in the same way/with the same error.
As a workaround, deleting Cargo.lock entirely allows publishing (and of course since it's a library it can just be safely recreated afterwards).
The two repositories where I have observed this happening are https://github.com/felixc/rexiv2 and https://github.com/felixc/gexiv2-sys. I tried to reproduce it by creating a minimal empty repo but was not immediately able to, so I'm not sure what part of the repo configuration is leading to this. Some discussion on Discord suggested it might have to do with the runnable examples/ targets leading to some mis-detection as a binary crate rather than a library one?
Notes
$ cargo version
cargo 1.37.0 (9edd08916 2019-08-02)
$ rustup --version
rustup 1.18.3 (435397f48 2019-05-22)
$ rustup toolchain list
stable-x86_64-unknown-linux-gnu (default)
beta-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu
1.20.0-x86_64-unknown-linux-gnu
$ uname -a
Linux mir 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux
CC @Eh2406 since you suggested on Discord that I should file this as you had some ideas about which parts of the relevant code might have changed recently.
Thanks, yes I would guess that it is something to do with #7026. CC @ehuss as this may have to do with your lockfile on publish work.
I have this same issue with my repo: boxcars. This was a bug introduced in Cargo 1.37.0. I've downgraded to Cargo 1.36.0 successfully. Deleting a .gitignored Cargo.lock before pushing a library seems too inconvenient of a workaround.
This bug still exists in cargo 1.39.0 (1c6ec66d5 2019-09-30).
$ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
$ bat .gitignore
/target
**/*.rs.bk
Cargo.lock
.idea
$ cargo publish
Updating crates.io index
error: 1 files in the working directory contain changes that were not yet committed into git:
Cargo.lock
to proceed despite this and include the uncommited changes, pass the `--allow-dirty` flag
$ bat Cargo.toml
[package]
name = "log-derive"
version = "0.3.2"
license = "MIT/Apache-2.0"
authors = ["Elichai <elichai.turkel@gmail.com>"]
repository = "https://github.com/elichai/log-derive"
readme = "README.md"
edition = "2018"
description = "Procedural Macros for logging the result and inputs of a function"
categories = ["development-tools::debugging"]
keywords = ["log", "macro", "derive", "logging", "function"]
include = [
    "src/*.rs",
    "Cargo.toml",
]

[dependencies]
darling = "0.10.0"
proc-macro2 = "1.0.3"
#syn = { version = "0.15", features = ["full", "extra-traits"] } # -> For development
syn = { version = "1.0.5", features = ["full"] }
quote = "1.0.2"
log = "0.4"

[dev-dependencies]
simplelog = "0.7"

[badges]
travis-ci = { repository = "elichai/log-derive" }

[lib]
proc-macro = true
@elichai the fix is in 1.40, can you try that?
@elichai the fix is in 1.40, can you try that?
Yep, fixed. sorry :)
|
gharchive/issue
| 2019-09-02T15:11:27 |
2025-04-01T06:40:17.248839
|
{
"authors": [
"Eh2406",
"ehuss",
"elichai",
"felixc",
"nickbabcock"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/issues/7319",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
959380210
|
Please publish new version of cargo-platform to include the license files
The published version of cargo-platform does not include the LICENSE-APACHE or LICENSE-MIT files, which can be seen on https://docs.rs/crate/cargo-platform/0.1.1/source/. Could a new version be published that includes these files? This makes it much easier to audit that the code to make sure it is correctly licensed.
There were also a handful of small clippy fixes since cargo-platform was last published, but as best as I can tell, they don't appear to be significant changes.
Possible Solution(s)
It appears that all we need to do to address this is to:
Bump the cargo-platform to 0.1.2
Verify that cargo publish --list includes LICENSE-APACHE and LICENSE-MIT
Publish cargo-platform 0.1.2
Thanks so much!
Feel free to post a PR to bump the version, and I can publish it.
|
gharchive/issue
| 2021-08-03T18:29:47 |
2025-04-01T06:40:17.252314
|
{
"authors": [
"ehuss",
"erickt"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/issues/9758",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1186887221
|
Suggest cargo install --git when missing registry package looks like a git* URL
What does this PR try to resolve?
Fix #10485
Create a local Error type that wraps existing error handling code.
Giving the three error variants names improves readability and
makes it easier to share duplicate code
Move all string operations to create errors from 3 separate bail!
calls into anyhow::anyhow! calls that are encapsulated in the
From impl
How should we test and review this PR?
I left 2 comments asking for REVIEW
Please suggest other failure scenarios that SelectPackageError may have missed or too eagerly
bundled in the CouldntFindInRegistry variant.
I thought that adding error handling code is something we might be happy to pay a potential perf
penalty for if we can improve error messages, but please correct me if I am wrong
Additional information
I worked on this by adding a failing test case and implementing until it passed. Then I ran all install tests
cargo test --package cargo --test testsuite -- install --nocapture
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @ehuss (or someone else) soon.
Please see the contribution instructions for more information.
Thanks for your review. I have now restructured the PR to consist of 2 commits:
https://github.com/rust-lang/cargo/pull/10522/commits/b87ae285ce86cc5212e200e6862a2d15dccda39b
with the regression test and the implementation that makes it pass without refactoring into new errors types.
The original REVIEW comment is still there for anyone to suggest improvement on.
https://github.com/rust-lang/cargo/pull/10522/commits/174c80ec6fceeed732bd4d3f95190e18bb63efc8
Is the refactor commit that adds the local error type and refactors all error handling there.
I hope this makes it easier to review.
I am happy to revert the PR back to the first commit that implements the actionable fixit with minimal refactoring.
Let me know.
Last but not least, congrats on soon becoming the new maintainer of cargo - it's a great tool and I am happy to see such diligent stewardship.
:umbrella: The latest upstream changes (presumably #11243) made this pull request unmergeable. Please resolve the merge conflicts.
Unfortunately I think this PR has fallen through the cracks (over a year ago 😞), and I'm not sure what the status of it is. Does anyone have an update on where it stands?
Unfortunately I think this PR has fallen through the cracks (over a year ago 😞), and I'm not sure what the status of it is. Does anyone have an update on where it stands?
Let me see if I can resolve the merge conflicts and I would appreciate a review
I settled the merge conflicts - please review
Since this PR was originally opened, gitoxide started getting integrated into cargo https://github.com/rust-lang/cargo/issues/11813, but I am not sure if you want new PRs to use gitoxide already.
Using gix_url, I can minimise the risk of adding a completely new dependency for a Quality-of-Life improvement like an actionable error message.
https://docs.rs/gix-url/0.21.1/gix_url/
Please clarify if this is something you would like me to do in this PR or you prefer to keep the current implementation and port it to gix_url when that is more stable
Using gix_url, I can minimise the risk of adding a completely new dependency for a Quality-of-Life improvement like an actionable error message.
Sounds like a good idea! Not sure if their feature parity is at the same level. For reviewers we have to do the same amount of work either way: auditing the dependency as much as possible. If it is something from @Byron I am more confident to rubber-stamp it (just kidding, we still need to audit it).
Thanks for bringing this up and welcome back!
Thanks for pulling me in. I recommend using gix-url for the reason that it's already part of the dependency tree, and it doesn't use regex or tracing. It's not 100% spec-compatible yet, but that is in the making. I hope that helps.
Thanks for pulling me in. I recommend using gix-url for the reason that it's already part of the dependency tree, and it doesn't use regex or tracing. It's not 100% spec-compatible yet, but that is in the making. I hope that helps.
I moved to use gix::url instead of git-url-parse and the diff is now smaller!
The previously passing test passes. The test that was failing is still failing.
I will leave a comment with the root cause of the failing test to share context and would appreciate your input
Just wanted to note that since this PR only cares about http/s URLs, you could also use url::Url::parse directly. The additional logic present in gitoxide does not provide any benefit for this particular case.
gix-url (especially after the rewrite) would be helpful if you want to additionally detect scp like target urls and local file paths.
@niklaswimmer is mostly right. Thanks for calling that out.
It's my fault for not looking at this pull request more closely. Yes, Cargo doesn't support SCP-like URLs for Git dependencies, so the url crate seems sufficient for this enhancement at the moment. Sorry for giving a wrong pointer.
We may also want to include ssh:// protocol for this check, as Cargo supports that.
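For illustration, a minimal sketch of such a check using url::Url::parse (my sketch, not the PR's actual code; the helper name is hypothetical):
// Returns true if the package spec parses as a URL with one of the
// schemes Cargo accepts for Git dependencies.
fn looks_like_git_url(spec: &str) -> bool {
    match url::Url::parse(spec) {
        Ok(url) => matches!(url.scheme(), "http" | "https" | "ssh"),
        Err(_) => false,
    }
}
A hit would then trigger the hint suggesting --git instead of a registry lookup.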
Thank you @petr-tik for your work on making this error message better. I merged another PR (#12575) which implements a similar hint for suggesting --git when a URL is passed.
Apologies for the confusion of having two different PRs fixing the same issue. I'm glad to see this error improved.
|
gharchive/pull-request
| 2022-03-30T19:20:52 |
2025-04-01T06:40:17.267315
|
{
"authors": [
"Byron",
"arlosi",
"bors",
"ehuss",
"niklaswimmer",
"petr-tik",
"rust-highfive",
"weihanglo"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/pull/10522",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1379969310
|
fix(cli): Forward non-UTF8 arguments to external subcommands
Whether we allow non-UTF-8 arguments or not, we shouldn't preclude external subcommands from deciding to do so.
I noticed this because clap v4 changed the default for external subcommands from String to OsString with the assumption that this would help people to "do the right thing" more often.
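As a hedged sketch of the general pattern (names illustrative, not cargo's actual code), forwarding raw OsString arguments to an external cargo-<cmd> binary looks like this:
use std::ffi::OsString;
use std::process::Command;

// Forward the args untouched, without requiring them to be valid UTF-8.
fn run_external(cmd: &str, args: &[OsString]) -> std::io::Result<std::process::ExitStatus> {
    Command::new(format!("cargo-{}", cmd)).args(args).status()
}
The point is simply that nothing in the forwarding path forces a lossy OsString-to-String conversion.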
r? @weihanglo
(rust-highfive has picked a reviewer for you, use r? to override)
:umbrella: The latest upstream changes (presumably #11119) made this pull request unmergeable. Please resolve the merge conflicts.
Should we add a test to prevent any regression?
I'm trying to weigh this out with updating the infrastructure so it can be tested.
As is, this is something the type system is verifying for us.
To test this, we need a platform-specific test that can generate a project capable of verifying the argument was passed through. The existing echo_subcommand would require platform-specific changes to allow it to sometimes not panic when echoing back out (Windows could still panic), or a custom program that fails if expected values aren't received, which requires updating the project builder to work with bytes instead of strings.
All of this can be done, but weighing it out, I'm unsure how worthwhile it is.
@bors r+
:pushpin: Commit 87fdf7660c21a4129a0bfa2385fe45da33065ff9 has been approved by weihanglo
It is now in the queue for this repository.
:hourglass: Testing commit 87fdf7660c21a4129a0bfa2385fe45da33065ff9 with merge 7c8a5a67d34ff4ae46f2b6b689653509af8b75b6...
:sunny: Test successful - checks-actions
Approved by: weihanglo
Pushing 7c8a5a67d34ff4ae46f2b6b689653509af8b75b6 to master...
|
gharchive/pull-request
| 2022-09-20T20:39:53 |
2025-04-01T06:40:17.273369
|
{
"authors": [
"bors",
"epage",
"rust-highfive",
"weihanglo"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/pull/11118",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1380039828
|
fix(cli): Error trailing args rather than ignore
This warning has been in for a sufficient time, requires a hack from clap to avoid all argument ID validation, and allows users to run the wrong command (imagine cargo -- publish --dry-run).
See also https://rust-lang.zulipchat.com/#narrow/stream/246057-t-cargo/topic/Cargo.20ignoring.20arguments.20with.20.60cargo.20--.20check.20--ignored.60
r? @ehuss
(rust-highfive has picked a reviewer for you, use r? to override)
Thanks!
@bors r+
:pushpin: Commit 8f8a79a5a44a439351e487d532d1c98ac4d225e2 has been approved by ehuss
It is now in the queue for this repository.
:hourglass: Testing commit 8f8a79a5a44a439351e487d532d1c98ac4d225e2 with merge e320c3e545741210bd7d9a30c186e6426cb13546...
:sunny: Test successful - checks-actions
Approved by: ehuss
Pushing e320c3e545741210bd7d9a30c186e6426cb13546 to master...
|
gharchive/pull-request
| 2022-09-20T21:55:31 |
2025-04-01T06:40:17.277872
|
{
"authors": [
"bors",
"ehuss",
"epage",
"rust-highfive"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/pull/11119",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
798055781
|
Flip 'foo' and 'bar' to be consistent
The "Renaming dependencies" section initially uses 'foo' as the crate name and 'bar' as a rename, but then swaps them and uses 'bar' as the example crate name in the context of optional dependencies. Now both examples in this section treat 'foo' as the original crate name.
Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @alexcrichton (or someone else) soon.
If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes.
Please see the contribution instructions for more information.
@bors: r+
Thanks!
:pushpin: Commit 2663d7d3af1674cd669c6f19ae43673b75e85b4b has been approved by alexcrichton
:hourglass: Testing commit 2663d7d3af1674cd669c6f19ae43673b75e85b4b with merge 4e4490f337522e31d9c4bfe3fb8ce0dfa193b5ff...
:sunny: Test successful - checks-actions
Approved by: alexcrichton
Pushing 4e4490f337522e31d9c4bfe3fb8ce0dfa193b5ff to master...
|
gharchive/pull-request
| 2021-02-01T07:28:10 |
2025-04-01T06:40:17.282276
|
{
"authors": [
"alexcrichton",
"bors",
"dimo414",
"rust-highfive"
],
"repo": "rust-lang/cargo",
"url": "https://github.com/rust-lang/cargo/pull/9120",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1822678629
|
No-op shared_flag() and static_flag() methods.
The compiler flags enabled by the methods in question are link flags and are never actually used in the cc-rs context. The shared_flag() is documented as reserved for future support for linking shared libraries, and static_flag() is effectively deprecated.
This is with reference to https://github.com/rust-lang/cc-rs/issues/772#issuecomment-1651365558 and onward.
@dot-asm A lot of your PRs were closed in batch, is it an accident?
No, it's a conscious choice. Just in case, it's not a random "lot," but all open ones. I've also removed myself from the watchers' list.
@dot-asm Is it because the cc-rs maintainers are too slow to review and respond to your PRs, and some of them have been open for a long time?
|
gharchive/pull-request
| 2023-07-26T15:31:29 |
2025-04-01T06:40:17.284840
|
{
"authors": [
"NobodyXu",
"dot-asm"
],
"repo": "rust-lang/cc-rs",
"url": "https://github.com/rust-lang/cc-rs/pull/838",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1744186117
|
Simplify builds.json to return what the clients actually want
AFAIK our primary consumers are:
shields.io
crates.io
They both only care about one thing:
whether going to https://docs.rs/<crate>/<version> will show docs
They emulate this by checking whether the latest build is successful, but this relies on docs.rs also using this as the signal (it currently mostly is for other reasons, but we should show docs in cases where a previous build was successful but a newer rebuild failed, and there were previously bugs around proc-macros).
So, remove all the extraneous data they don't depend on and just return a simple boolean showing whether the docs are good or not, the same one we use to determine whether to redirect to show a build failure warning or not:
https://github.com/rust-lang/docs.rs/blob/f1a7e46a620d763db485a25efb5e510fd3fe0594/src/web/rustdoc.rs#L476
(This helps simplify some changes I'm working on around what and how we store build details).
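As a sketch of the reduced shape (field name hypothetical, serde assumed for serialization):
// One boolean per entry: will docs.rs show rendered docs for this release?
#[derive(serde::Serialize)]
struct BuildStatus {
    build_status: bool,
}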
We're returning the same structure they expect. I was considering adding a new route with a different structure, but at least at the moment I don't think it's worth the migration pain.
We're returning the same structure they expect. I was considering adding a new route with a different structure, but at least at the moment I don't think it's worth the migration pain.
You're right, I missed the [] in the JSON response. So yes, assuming this is the only attribute used, this would be backwards compatible.
I was considering adding a new route with a different structure, but at least at the moment I don't think it's worth the migration pain.
Whether it's worth the migration pain probably also depends on your initial question: whether anyone except crates.io / shields.io uses this endpoint / the result.
I see two ways here:
as you already proposed, try to get data from cloudfront about the user agents for this endpoint. I assume this has to be configured by infra.
or, use a new endpoint, switch over crates.io / shields.io, and see if any requests remain.
From my limited experience I couldn't say if we can just do the breaking change, without the data above.
A thing I just remembered: since the endpoint is uncached, you could also check the access logs on the server itself, to see if they contain what we seek.
I did that, we just get Amazon CloudFront as the user-agent on all requests.
I've been persuaded to go the new endpoint route, I'll open a separate PR adding that and prepare PRs for crates.io/shields.io (and lib.rs assuming my gitlab login still works) using it for once it's been deployed. Then we can re-evaluate whether there's still any traffic to this endpoint.
|
gharchive/pull-request
| 2023-06-06T16:16:26 |
2025-04-01T06:40:17.323500
|
{
"authors": [
"Nemo157",
"syphar"
],
"repo": "rust-lang/docs.rs",
"url": "https://github.com/rust-lang/docs.rs/pull/2144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1748700991
|
>= 0.17.0 issues running cargo semver-checks
Hi! I'm currently running into issues with cargo-semver-checks 0.21.0 (on Arch Linux, but can reproduce building straight from crates.io) and was asked to open a ticket here and link to this one.
Using git2 >= 0.17.0 cargo-semver-checks runs into TLS errors:
cargo semver-checks check-release
Updating index
Error: the server did not provide a certificate; class=Ssl (16)
Caused by:
the server did not provide a certificate; class=Ssl (16)
The current cargo-semver-checks 0.21.0 pulls in git2 0.17.0 and libgit2-sys 0.15.2+1.6.4; only when downgrading git2 to 0.16.1 and libgit2-sys to 0.14.2+1.5.1 does the issue described in the above ticket go away.
To give further background info:
I am packaging libgit2 for Arch Linux. It is currently at 1.6.4 and we do not apply any patches (see PKGBUILD).
Our openssl in the stable repositories is at 3.0.9, soon 3.1.1 (see PKGBUILD).
Can you provide a reproduction that uses this crate directly? Like perhaps running the clone example? It's also important to know which SSL backend it is using, which version, etc.
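For instance, a minimal standalone reproduction in the spirit of the clone example might look like this (sketch only; the remote URL is an arbitrary HTTPS remote):
// Clone over HTTPS with git2 directly, so the TLS error can be observed
// without cargo-semver-checks in the loop.
fn main() {
    match git2::Repository::clone("https://github.com/rust-lang/git2-rs", "/tmp/git2-tls-repro") {
        Ok(_) => println!("clone succeeded"),
        Err(e) => eprintln!("clone failed: {}", e),
    }
}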
|
gharchive/issue
| 2023-06-08T21:52:57 |
2025-04-01T06:40:17.328386
|
{
"authors": [
"dvzrv",
"ehuss"
],
"repo": "rust-lang/git2-rs",
"url": "https://github.com/rust-lang/git2-rs/issues/961",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
164685623
|
struct linger is missing
Hey,
the linger {int l_onoff, int l_linger} struct is missing from this binding. I checked its definition in glibc and musl, but it is not exposed here.
I guess the struct should look like the one in the winapi crate.
Ah yeah should be good to add at any time!
Ok, it's a bad idea to take the struct from winapi, because on Windows you need u_short fields, while on *nix systems you need int fields, so we need to declare them as c_int, not u16.
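A sketch of the *nix-side declaration consistent with that (the Windows definition would use u_short fields instead):
use std::os::raw::c_int;

// SO_LINGER option structure as declared by glibc and musl: both fields are ints.
#[repr(C)]
pub struct linger {
    pub l_onoff: c_int,  // non-zero: linger on close
    pub l_linger: c_int, // linger time, in seconds
}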
Now fixed!
|
gharchive/issue
| 2016-07-09T21:03:30 |
2025-04-01T06:40:17.331535
|
{
"authors": [
"alexcrichton",
"sateffen"
],
"repo": "rust-lang/libc",
"url": "https://github.com/rust-lang/libc/issues/327",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1649583618
|
Passing argument without -- to cargo miri run bypasses miri
Check this out:
% cat src/main.rs
use std::ffi::c_char;
extern "C" {
fn printf(fmt: *const c_char, ...);
}
fn main() {
dbg!(std::env::args().skip(1).take(1).collect::<String>());
let fmt = b"Hello, world!\n\0".as_ptr() as *const c_char;
unsafe {
printf(fmt);
}
}
You'd expect miri run ... to error on the above code.
But:
% cargo +nightly miri run whoa
Preparing a sysroot for Miri (target: x86_64-unknown-linux-gnu)... done
Finished dev [unoptimized + debuginfo] target(s) in 0.00s
Running `target/debug/hello whoa --target-dir /local/home/pnkfelix/Dev/Rust/hello/target/miri --target x86_64-unknown-linux-gnu --config 'target.'\''cfg(all())'\''.runner=["/local/home/pnkfelix/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/cargo-miri", '\''runner'\'']' --`
[src/main.rs:8] std::env::args().skip(1).take(1).collect::<String>() = "whoa"
Hello, world!
Now, if someone is reading the above output properly, they might get a hint as to what's actually happening here: it's building and running a debug binary, rather than running cargo-miri itself.
(I.e., you need to pass inputs with an intervening --, like so: cargo +nightly miri run -- whoa, which does what you expect and issues a error at the point where printf is called.)
But what is the scenario where you want the current behavior when -- is omitted? Should miri run provide an error or warning when -- is omitted and the extra arguments are not recognized by miri's argument processor?
I am confused, probably because I am tired... what is happening here? All these arguments should just be forwarded to cargo run, while a whole bunch of env vars are being set to make binaries be executed by Miri.
The issue is that the actual binary gets executed:
Running `target/debug/hello
And not miri
I don't know how a binary even gets created (maybe left over from a previous non-miri run?).
This is caused by very strange cargo behavior: in cargo run foo --config blah, cargo ignores the --config and probably passes it to the program instead. I would expect all --flags before the -- to be interpreted by cargo.
With this cargo behavior, I don't know if the bug is fixable. It is not possible for Miri to know whether an argument is present before the --; that would require knowing whether some --flag takes a parameter or not.
Cc @ehuss
When we passed the arguments in a different order, that broke cargo nextest :joy: .
But anyway this still doesn't help:
Scan the args for anything that is cargo-specific, but that cargo-miri wants to know about (like --target-dir). If you find these special flags, strip them from the args, and add them to cmd.arg(…). You'll need to collect the args into a temporary Vec to do this.
Here we have to stop at the first argument that is passed to the program. So e.g. with cargo miri run -- foo --target-dir=blah, we need to stop at the --. That already works. But with cargo miri run foo --target-dir=blah we need to stop at the foo. Worse, with cargo miri run --flag foo --target-dir=blah we need to stop at the foo if and only if --flag does not take a 'value'. In other words, we need to copy-paste the entire cargo argument passing logic (at least which flags exist and whether they take an argument or not). That's quite bad. :(
I personally wouldn't worry too much about having cargo miri run perfectly match the behavior of cargo run. If the concern is about these flags colliding with the user's program, I also wouldn't worry too much about that since they can use -- to avoid those collisions (and they seem like they would be unlikely).
An alternative is to use clap in cargo-miri, and just parse the cargo run flags. There aren't very many of them, and they don't change very often. That's what cargo expand does. I realize that isn't ideal, but otherwise there aren't a lot of options.
An alternative is to use clap in cargo-miri, and just parse the cargo run flags. There aren't very many of them, and they don't change very often. That's what cargo expand does. I realize that isn't ideal, but otherwise there aren't a lot of options.
I was living under the illusion that argument parsing worked out in a way that was just modular enough to make this work, but as this issue shows that was just wrong. :(
If we are willing to ignore the issues around cargo miri run foo --target-dir=blah, things are still not entirely trivial: if we add our --config immediately after the 'verb', we end up with cargo nextest --config ... run, which errors. That's why we moved to passing the --config at the end in the first place.
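As a rough sketch of that clap-based direction (the flag set here is illustrative and incomplete, not Miri's actual code):
use clap::Parser;
use std::ffi::OsString;

// Parse only the `cargo run` flags we care about; the first free argument
// and everything after it is forwarded verbatim to the program under Miri.
#[derive(Parser)]
struct MiriRunArgs {
    #[arg(long)]
    target_dir: Option<std::path::PathBuf>,
    #[arg(long)]
    bin: Option<String>,
    #[arg(trailing_var_arg = true, allow_hyphen_values = true)]
    rest: Vec<OsString>,
}
The maintenance cost is exactly the copy-paste problem described above: the flag list has to track cargo's.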
|
gharchive/issue
| 2023-03-31T15:28:53 |
2025-04-01T06:40:17.341664
|
{
"authors": [
"RalfJung",
"ehuss",
"oli-obk",
"pnkfelix"
],
"repo": "rust-lang/miri",
"url": "https://github.com/rust-lang/miri/issues/2829",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
463881483
|
Missing cmath function: ldexp
The libcore test suite calls ldexp via FFI. Unfortunately, libstd does not expose this function in any way. What would be the best way for us to implement this shim?
I found this crate. Alternatively we could try to just define the extern symbol ourselves and see if that works on all our host platforms?^^
I am taking care of this one
Awesome!
Do you need mentoring? The file to edit is this one.
Thank you Ralf. It seems my gut feeling was right this time :D that's the file I modified. I just added a new match arm under // math functions, based on the description (http://www.cplusplus.com/reference/cmath/ldexp/) for ldexp:
"ldexp" => {
let x = f32::from_bits(this.read_scalar(args[0])?.to_u32()?);
let exp = f32::from_bits(this.read_scalar(args[1])?.to_u32()?);
this.write_scalar(Scalar::from_u32((x * 2.0f32.powf(exp)).to_bits()), dest)?;
}
Should that be enough? Or should I use the crate you mentioned?
I also suppose we should remove https://github.com/rust-lang/rust/blob/0f11354a9c1bf0c5ac250c7fa2bafc289a662f42/src/libcore/tests/num/flt2dec/mod.rs#L1, is that right?
I just added a new match under // math functions based on the description for ldexp
Hm. I guess it is a reasonable start. Maybe add a FIXME saying that if we see imprecise results, we should try to use the C math lib ldexp instead.
Your code should also, like the other math functions, have a FIXME because it uses host floats.
And finally, find an appropriate test case to extend and call the function there.
I also suppose we should remove
Once this lands in Miri and Miri has been updated in rustc, that can be removed, yes. One step after the other. :)
I just added a new match under // math functions based on the description for ldexp
Hm. I guess it is a reasonable start. Maybe add a FIXME saying that if we see imprecise results, we should try to use the C math lib ldexp instead.
I can use the C math library directly if you think that's a more definitive solution.
I also suppose we should remove
Once this lands in Miri and Miri has been updated in rustc, that can be removed, yes. One step after the other. :)
Ok then I will do the miri PR first and wait :)
I can use the C math library directly if you think that's a more definitive solution.
That is more in line with what we do with the other functions. I just don't know if it will work on all platforms.
Seems like ldexp is part of the normal C math lib so we can rely on it being present in Miri. So @christianpoveda if you want to switch it to import the function via extern instead, be my guest. :)
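For reference, a sketch of importing it via extern; this matches the standard C signature, double ldexp(double x, int exp):
use std::os::raw::c_int;

extern "C" {
    fn ldexp(x: f64, exp: c_int) -> f64;
}

fn main() {
    let r = unsafe { ldexp(1.5, 4) }; // 1.5 * 2^4
    assert_eq!(r, 24.0);
}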
I am seeing test failures when turning on the libcore tests that use ldexp, so maybe we actually need the higher precision.
I'll fix this ASAP :)
|
gharchive/issue
| 2019-07-03T17:52:58 |
2025-04-01T06:40:17.350470
|
{
"authors": [
"RalfJung",
"christianpoveda"
],
"repo": "rust-lang/miri",
"url": "https://github.com/rust-lang/miri/issues/821",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
506492359
|
add lockfile
Fixes https://github.com/rust-lang/miri/issues/427
@bors r+
:pushpin: Commit 5a025ab273007e3219e888592595bac199b507b4 has been approved by RalfJung
:hourglass: Testing commit 5a025ab273007e3219e888592595bac199b507b4 with merge 420bba081b4b9d0e2d402ce59ef5ff04480da528...
:broken_heart: Test failed - checks-travis
@bors r+
:pushpin: Commit 917effada1e970e964520f56431119b7f6b332c9 has been approved by RalfJung
:hourglass: Testing commit 917effada1e970e964520f56431119b7f6b332c9 with merge 394a9d5d295078f7b846d6833c73b2661704e97e...
:sunny: Test successful - checks-travis, status-appveyor
Approved by: RalfJung
Pushing 394a9d5d295078f7b846d6833c73b2661704e97e to master...
|
gharchive/pull-request
| 2019-10-14T07:40:21 |
2025-04-01T06:40:17.355419
|
{
"authors": [
"RalfJung",
"bors"
],
"repo": "rust-lang/miri",
"url": "https://github.com/rust-lang/miri/pull/995",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
666674898
|
RegexSet replace
Is there a reason for not having replace-like methods on RegexSet? If not, I would be willing to contribute an implementation based on Regex's.
Yes, there is a reason. Replace APIs require finding the offset of matches, and the implementation doesn't support that, as stated in the docs.
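A common workaround (a sketch, not part of the regex API) is to keep the individual Regex values next to the set and run replacements only for the patterns that matched:
use regex::{Regex, RegexSet};

// RegexSet is used only as a fast pre-filter, since it cannot report offsets.
fn replace_matching(text: &str, patterns: &[&str], rep: &str) -> String {
    let set = RegexSet::new(patterns).unwrap();
    let regexes: Vec<Regex> = patterns.iter().map(|p| Regex::new(p).unwrap()).collect();
    let mut out = text.to_string();
    for idx in set.matches(text) {
        out = regexes[idx].replace_all(&out, rep).into_owned();
    }
    out
}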
|
gharchive/issue
| 2020-07-28T00:55:39 |
2025-04-01T06:40:17.358003
|
{
"authors": [
"BurntSushi",
"gahag"
],
"repo": "rust-lang/regex",
"url": "https://github.com/rust-lang/regex/issues/696",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
147202272
|
RFC for a Rust Memory Model
with thanks to Amanieu, huonw, durka42, aatch, acrichto, nmatsakis, and anyone
else I might have missed.
Fixes, or at least starts the process of fixing, #1447
cc @nikomatsakis @RalfJung @arielb1 @pnkfelix @Amanieu
This opens up the possibility of performing TBAA on types which have implementation-defined bytes since these types are only allowed to be read through a pointer of the correct type. I don't really see this as a big issue since this is only possible from unsafe code and the struct layout isn't guaranteed to match any other type.
It's not that they're only allowed to be read through a pointer of a correct type. It's that if they are then used, it's undefined behavior for them to be aliased.
Enthusiasm is commendable, but I'd prefer this particular work to be done by professionals.
I agree with @petrochenkov, there are several shortfallings in this model.
@Ticki
I'm not talking about concrete shortcomings, but about general approach.
I'd expect this to be a published academic paper by @RalfJung or someone else, with level appropriate for such papers, overview of prior art etc.
Anything less, e.g. "obvious" useful guarantees on struct layout or valid values of primitive types, can be specified on a case-by-case basis.
I'm not talking about concrete shortcomings, but about the general approach.
Exactly. It should be formalized (to avoid these ambiguities). It is very vague for a memory model, as is.
@Ticki @petrochenkov Honestly, this has been simplified a great deal from how it would appear originally, in order to fit the RFC format. Being a "professional" also has very little to do with the work here; I've been thinking about this memory model for months, and I'm likely one of the experts on Rust's. People send people to me to explain it. And, I get it; you would be more comfortable if the memory model was written by a professional. But I'm not a professional, and I wrote (part of) a memory model (the important part, the pointer aliasing part). The other things are just a basis for pointer aliasing, which is what really needs to be defined.
@ubsan Don't get me wrong. I don't doubt that you know a lot about this subject, and have put a lot of effort into it. What I am saying is just that this wasn't really what was intended with a memory model. What was intended is a formalized model, instead.
@Ticki well, I am working on formalizing the model now... I just wanted to talk about it first, in public. It's important that we are all able to give feedback on it.
The fact is, anything is better than nothing, and nothing is exactly what Rust has right now. People are clueless as to the precise rules when it comes to pointer aliasing, which aren't currently documented anywhere.
This RFC reads to me as if there are two different operations that are both called ptr::read, but follow different rules (based on whether the argument is a pointer or reference).
In fact, if I see it correctly, replacing ptr::read(int_ref as *const i32) with *int_ref introduces the side-effect of making other pointers derived from that reference non-derived. Which means one can probably build programs where this harmless-looking code change introduces undefined behavior?
I am not gonna weigh in on this RFC very much, but one comment:
The fact is, anything is better than nothing,
When it comes to guarantees that you're making, nothing can often be better. If we were to adopt a poor model, it could inhibit Rust for the rest of its life. You really want something like this to be absolutely foolproof, or as close to it as possible. Adopting something that's not ready because "something is better than nothing" is not a good idea here.
For the moment, then, can we at least have some documentation that pointer aliasing of any form is Undefined Behavior? Because that's what it really is.
@archshift This is, in fact, true currently.
@steveklabnik One thing I found really compelling was something that dikaiosune said in IRC:
"is it just me, or is waiting for 5 or more years for any guarantees about unsafe pointers kinda untenable for rust adoption?"
Currently, this:
let mut x = 0;
let ptr1 = &mut x as *mut i32;
let ptr2 = &mut x as *mut i32;
*ptr1 = 1;
*ptr2 = 2;
*ptr1
is completely undefined. And most unsafe code assumes it is.
I'd expect this to be a published academic paper by @RalfJung or someone else, with level appropriate for such papers, overview of prior art etc.
Just to be clear here, I am not right now working on a memory model for Rust. Nobody should feel blocked by me here. For my Rust formalization I decided to shortcut this discussion, and instead define too many behaviors. This means that at least I can prove the desired libraries to be safe in my model (hopefully), but they may still turn out to rely on behavior that Rust wishes not to define.
@ubsan Even that is unclear, as according to the Rust Doc:
* pointers are allowed to alias, allowing them to be used to write shared-ownership types, and even thread-safe shared memory types
[Raw pointers] have no guarantees about aliasing or mutability other than mutation not being allowed directly through a *const T.
@archshift Exactly. It's completely undefined.
Since @ubsan pinged me here, I'd like to share some of my thoughts on the level of formalism or academic chops present in the RFC.
One of the things that most impresses me about and attracts me to open source in general (and to a greater degree, Rust specifically) is the power that "expert amateurs" can have when they ignore credentials and focus on a substantive problem, emphasizing collaboration and results over social status and signalling. In many cases when one lacks the time, specific knowledge, or some other resource to substantively critique a work, the easiest signal to process can be from a contributor's credentials and prior achievements, or from the format of the presentation. But I think that's a bad trap for an inclusive community like Rust's to fall into.
I have always been impressed by the pragmatism in the other discussions I've read here, and I think it's a distraction to focus (explicitly or implicitly) on ubsan's qualifications or the current level of formalism of the RFC, especially when its author has approached it in this way:
I am working on formalizing the model now... I just wanted to talk about it first, in public. It's important that we are all able to give feedback on it.
I don't see any reason to reject this out of hand. The product of this work will be very important for Rust's community and its future, and I think it deserves concrete feedback and a substantive appraisal.
I have to disagree with @steveklabnik on this one. In this case something is better than nothing, as long as we make sure that the guarantees are not going to get broken in the future.
Personally, I would prefer an access based model over the one presented here. I am using an access based model in my formal verification of Redox. I have outlined my idea below:
(unfortunately, GitHub doesn't support LaTeX, so I have rendered it as images)
[image: outline of the access-based model]
and formalized through:
[image: formalization of the model]
I have to disagree with @steveklabnik on this one. In this case something is better than nothing, as long as we make sure that the guarantees are not going to get broken in the future.
I think we're on the same page. It's not so much that it has to be absolutely, fully, 100% perfect at first blush, just that we're not painted into a corner.
@steveklabnik, so what you are saying is that you think it should rather make one promise less than one promise more?
@Ticki I'm not familiar with that idiom. But I don't mean "something is worse than nothing" in the sense that it's an all-or-nothing enterprise. But I wouldn't want to rush to get something down just to have some kind of guarantee, only to find out that we're not actually happy with it later.
Anyway, I said I wasn't going to say much, and now I've made three posts, so I'll stop :wink:
Clearly we must aspire to the level of clarity and precision provided by the gold standard of standards, C. [1] [2] [3] [4] [5] [6] [7]
let mut x = 0;
let ptr1 = &mut x as *mut i32;
let ptr2 = &mut x as *mut i32;
*ptr1 = 1;
*ptr2 = 2;
*ptr1
This will actually work on every model that we plan to use. It is undefined in the exact same sense as everything else.
What is more dubious, is for example
#[derive(Debug)] struct Foo<T>(T);
fn main() {
let five = 5;
let five_ref = &five;
let mut x = Some(Foo(five_ref));
let x_addr = &mut x as *mut Option<_>;
match x {
x_copy => {
if let Some(ref inner) = x_copy {
unsafe { *x_addr = None; }
println!("{:?}", inner);
}
}
}
}
This generates code that crashes on current rustc, though I don't think it should.
In any case, I don't like the derived pointer model because it is fundamentally based on "confidentiality", while Rust basically tries to guarantee "integrity".
Additionally, LLVM likes attributes on loads, so lvalue-to-rvalue-conversions require the rvalue to be valid.
@arielb1 I don't think I understand what you are saying about confidentiality versus integrity. Or about lvalue-to-rvalue conversion attributes. Which specific attributes are you talking about, which would not be allowed?
@arielb1 I think crashing makes perfect sense though in this case. Moving a value is only semantic, it doesn't necessarily need to be given a new address, and for performance reasons often shouldn't.
Yeah this needs to wait for formal verification. I don't even trust a formal model that doesn't have a proven implementation to go with it. [A formal model is definitely a first step to a proven implementation, but absent the proven implementation demonstrating that the model's assumptions are valid, there's little use in "ratifying" the formal model.]
I'll note that I specifically find the approach of specifying it in plain English unconvincing - even aside from the compelling evidence that doing so is a strategy that consistently fails to provide results that are worth the effort[1][2][3], the simple fact of the matter is that English is a lot easier to mistranslate or misconstrue than a formal spec. "Plain English" is, to be quite honest, anything but - and I say this as a native speaker.
As a result, I have a very strong preference for a canonical formal spec paired with an explanatory document, rather than a canonical English spec that will hopefully one day be formalized.
[1]: Java's concurrency model first being unimplementable and then forbidding CSE
[2]: C11's memory model forbidding common optimizations, amongst other issues
[3]: The heroic efforts needed to forumulate a memory model for x86 given prior plain-English descriptions
@eternaleye I understand that. I'm working on a formalization currently, but again, I wanted to talk about it in the open first to allow people to comment on it. This is a "Request For Comments".
@ubsan To clarify, by a formal spec, I mean one suitable for machine-checking - the borrow checker and type checker are essential to safe Rust because humans cannot reliably model all the concurrency present in any given program; as a memory model must be well-defined for all concurrency possible in any program, the full verification of the model is in my opinion at least as crucial as borrow and type checking in safe Rust.
Would it be possible to have a memory model without Undefined Behaviour?
Sure. "Exactly what Rust does right now" is a fully-defined memory model, albeit a rather useless one. You could also get rid of UB systematically.
But ultimately you need undefined behavior to give the optimizer room to maneuver. Rust protects you from undefined behavior within safe code, so it's not as much a problem as it is in other languages. You just need to worry about it when writing unsafe code or doing FFI.
@chrisvest even without "launch the nukes" undefined behavior, there are unavoidably plenty of non-deterministic machine programs due to the nature of multi processor computing. Yes, languages should be defined independent of implementation, but many features of Rust we care about only because they ensure the compiled image avoids the peskier of those non-deterministic programs. So to prove Rust does what we want, we need to model that non-determinism.
@Manishearth There's some room to debate how much UB-based optimization actually benefits optimization by compilers, especially in the broader context of what changes it forces programmers to make in order to cope.
I am quite grateful to @ubsan for writing this up.
I too think that we will require a machine-formalized model of some form.
But we can only benefit from having a variety of well-written documents to draw from in order to understand the design space and trade-offs here.
It is good to have dialogue about the particulars of a model like this.
And it is much easier to read an RFC than to trudge through the various github issues trying to digest the many threads of conversation regarding memory models. (Issue #1447 could be a central repository of knowledge and ideas on the topic, but we will require reference documents that summarize the information there.)
To summarize some discussion that occurred on #rust-offtopic (@ubsan, @eddyb, correct me if I misconstrue or miss something):
There exist formalized 'weak' CPU memory models that we can use to bound us on one side
Can shove libcore and libstd through miri and permute by model-permitted transformations to bound on the other side by seeing if anything goes splat
References:
[1]: A formal hierarchy of weak memory models
[2]: Other work by the author (http://www0.cs.ucl.ac.uk/staff/J.Alglave/); "herd" and "goto-instrument" are likely to be of particular interest (simulating CPU memory models, and a similar approach to the miri proposal applied to C, respectively)
[3]: Software verification for weak memory (has source for goto-instrument, along with Coq proofs, at the bottom)
@eternaleye The advantage of miri is that with some FFI support, it won't be limited to libcore and libstd but could run even (parts of) Servo (the use of IPC should make that easier).
@eddyb The main reason I'd rather stick to libcore and libstd for now is to try and keep the model small - more guarantees can be added later, but by focusing on something minimally-viable (possibly by also altering libcore and libstd to ask less of the model in tandem) we can have something small and easy to reason about while being able to extend it to be stronger in the future.
@eternaleye
If we have a model, we want to check all unsafe code we can to verify that it follows the model.
@ubsan
Maybe I did not understand LLVM metadata correctly, but we occasionally emit assume/metadata on loads that the loaded rvalues are valid. For example:
pub unsafe fn foo(zero_ref: &usize) -> &'static str {
let do_ub = *(zero_ref as *const usize as *const &());
let do_ub_ref : *const &() = &do_ub;
if *(do_ub_ref as *const usize) == 0 {
"zero"
} else {
"nonzero"
}
}
fn main() {
println!("{}", unsafe { foo(&11) });
println!("{}", unsafe { foo(&0) });
}
@arielb1
Correct. That's a use of an "Invalid" value; when you did do_ub_ref as *const usize, and then loaded it, and then did == 0, you used a value derived from an "Invalid" value, which is undefined behavior.
I need to specify this further. This is all in my head and not on the page.
@eternaleye I'm using coq right now, although I'm not very far.
LLVM's docs are really unclear as to whether loads require that the loaded value is valid. However, they assume that loads of dereferenceable things are pure, so I guess that seals the deal.
So we probably can have poison rvalues like LLVM. I don't want to introduce the poison/undef distinction unless I am forced to, because nobody really understands undef, but I'm fine with just copying poison values around.
On the other hand, I am not sure how useful is that - what you want to do with poison values is either to have direct let foo = mem::uninitialized() or to put them in constructors, but clearly you can't put them in enum constructors. It probably makes sense to treat struct constructors as different from enum constructors here.
The reason I don't like derived-based analysis is that if you want to reason in a modular manner you have to play with "can-be-derived", which is very annoying to deal with.
@arielb1 The idea is that there is no can-be-derived. You know exactly which are derived, and which are not derived. It's strict: at each point in the program, a pointer is either derived from or not derived from another pointer. There is no "can-be-derived", if I understand you.
@ubsan
I understand that at runtime the derived relation is definite. On the other hand, if you want to reason about the validity of optimizations in the presence of external functions, determining whether one pointer is derived from another is non-trivial.
Also, derived pointers are somewhat similar to C++'s consume memory model, which is a thing I want to stay away from (but maybe there is some significant difference?)
@arielb1 Function arguments are one area where derived pointer relations are cut off entirely. A move is counted as a rederivation, and you must move a reference into a function in order to pass the argument; therefore, each new function gets a "clean slate", as it were.
I don't know what you mean by "consume memory model". Could you give an example?
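To illustrate the function-boundary point with a hypothetical example (mine, not from the RFC text):
// Under the model sketched above, derivation relations are cut at the call
// boundary: inside g, p and q are treated as underived from one another, so
// using both to access the same memory would be undefined behavior even if
// the caller derived them from a single reference.
unsafe fn g(p: *mut i32, q: *mut i32) -> i32 {
    *p = 1;
    *q = 2;
    *p // the optimizer may assume this is still 1
}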
This looks like a very good start, but it is important to note that, so far, this RFC is only a memory model for a single thread. I don't necessarily think that concurrency should be a blocking issue, as even just a single-threaded memory model clears up a lot of questions about the correctness of unsafe code. However, the RFC should explicitly state that it leaves the details with respect to concurrency unspecified, and should eventually be extended to support concurrency.
@ubsan
I meant memory_order_consume.
@ubsan
I meant memory_order_consume.
@arielb1 I don't understand what you are saying. Could you explain it out, please?
I would like to reword the "using a value" paragraph to be more in line with the MIR nomenclature.
I am using pseudo-Rust as my language for data declarations because it is the most familiar for the people reading this.
Type Representation Kinds
At run-time, every type is equal to a single unique concrete type, that is, a type without type parameters and projections (FIXME: projections within binders). Concrete types can be divided as follows:
A primitive type is a type that is treated as a set of bits. This is integers, floats, and raw pointers.
A scalar type is a type with an atomic representation - primitive types, plus bool, char, references, function pointers, and FnDef markers
A struct-like type is a struct, tuple, or closure.
An array type is a slice or array
A enum-like type is an enum
A universal type is a trait object
NOTE: should C-like enums be scalars?
Values
MIR Rvalues evaluate to a value, which is defined as follows:
pub enum Value {
Poison,
Scalar(Vec<u8>),
Struct(Vec<Value>),
Array(Vec<Value>),
Enum(Discr, Vec<Value>),
}
impl Value {
    fn contains_poison(&self) -> bool {
        use self::Value::*;
        match self {
            &Poison => true,
            &Scalar(..) => false, // NOTE: don't look through pointers
            &Struct(ref vs) | &Array(ref vs) | &Enum(_, ref vs) => {
                vs.iter().any(|v| v.contains_poison())
            }
        }
    }
}
NOTE: Do we want to track pointers as separate from integers? This is
required for const-eval and derivation-path.
Non-Poison values must conform to their type representation. This means that:
primitives must be a Scalar of the right size
bool and char must be valid Scalars
pointers and function pointers must be non-null. [0]
enums must have a valid discriminant, and must not contain a Poison.
struct-likes, arrays and enums must have the correct number of fields, and they must all conform to their representation.
a NonZero struct must contain a non-zero Scalar
If an operation would create a value that does not conform to its representation, a Poison value is created instead.
When a function returns, its return pointer must not contain a Poison. Also, unless otherwise specified, operations on rvalues invoke undefined behavior if one of their inputs contains a Poison. Notable exceptions are:
StatementKind::Assign: writes poison bytes to the memory ranges covered by poison.
Rvalue::Aggregate and Rvalue::Repeat: these operate normally, creating a Value that contains a Poison but that is of the correct constructor. Note that the rule about values having to conform to their representation applies - an enum that contains a poison will be immediately converted to a Poison.
[0] This prohibits using the !dereferenceable or !align metadata on loads of pointers. We currently don't do so but we may want to in the future. In any case, we want this pattern to be a GEPi with no extra requirements:
fn foo(random: *const Foo) -> *const u8 {
unsafe { &((&*random).1) }
}
or in MIR
let arg0: *const Foo;
let tmp0: &Foo;
let tmp1: &u8;
begin:
tmp0 = &*arg0;
tmp1 = &(*tmp0).1;
return = tmp1 as *const u8 (Misc);
return;
Hello all. I want to weigh in briefly. I am still digesting the RFC text and the comment thread, but I want to make some kind of "meta" comments.
Generalities
First, like @pnkfelix, I certainly would like to thank @ubsan for putting energy and effort into a "first draft" at some sort of memory model for Rust. Whatever model we eventually adopt, it's great to make forward progress of some kind.
I think the best place for us to focus our efforts at this stage is not on determining the precise model we will use, but rather the process by which we will arrive at a model, and the high-level principles that will guide us in this effort.
I think the process should be as top-down as possible. Basically I'd like to start just by agreeing on the kinds of examples we think are interesting and what makes them interesting. (More on this in a bit.) From there maybe we can come up with high-level rules that "summarize" the key ideas, and then drill down into making them more concrete. One nice part about this is that at each stage of the process we are giving more information to end users about what kinds of code we expect to be legal and illegal. Honestly, I hope that the list of examples alone will be sufficient advice for the most part.
An ideal end-point would be to arrive at something testable, for sure. The idea of an instrumented miri-like interpreter is particularly appealing. I would like to be able to "test" our proposed models by doing "crater runs" across crates.io and seeing what kind of code reports violations.
Now, as far as guiding principles go, I would like to see a model that aims to avoid crashes in the code that naive people write. Put another way, I expect that most programmers will either not read the memory model at all, or will read no more than a few paragraphs. I would like that, given that level of inattention, they still mostly do things right when they write unsafe code. Effectively this means the model must be "intuitive"; that is, whatever model we adopt, it has to permit the kinds of tricks that people pull in practice. I think that many models I have seen fail this criterion dramatically. The problem with an "intuitive" (or perhaps "permissive" is a better word) model is that it naturally runs somewhat counter to the ability to optimize. I think that Rust has a leg up here, though, both because of the nature of our type system and because of the safe/unsafe code division.
It is not clear that we can achieve a model that is both permissive and permits a sufficient level of optimization. But let's posit that we can for a second (and I do believe it is possible). In that case, it's interesting to note that the urgency of having a formal memory model is somewhat reduced, in that we expect most things people do in practice to wind up being sound.
Examples and repo
I mentioned that I thought we ought to start by agreeing upon a set of interesting examples and so forth. I've been making an effort to collect such a set, though I think it's still in relatively early stages (and I haven't had much time for this in the last month or so). You can see my current results here.
There are still a number of things I want to do:
Add some examples about when it is safe to "reuse" a slot (as @arielb1's example earlier demonstrates, this is not always clear).
Consolidate and clean up the examples, which I think contain duplication now, since I mostly just copied things I found on various threads.
Go through unsafe code and extract interesting patterns. I was planning to start with libstd.
I'd very much welcome contributions.
I've been hoping that once we have a pretty good set of examples etc, we can (a) consolidate and clean them up and then (b) use them as a basis for comparing proposals.
Specific responses
Sorry this comment doesn't have much in way of specific responses to this RFC or comments in the thread. It's long enough as is, I guess. I've been taking notes on the RFC as I read as well as other comments on the thread, and I'll try to get back and respond to more specific details there as well.
@nikomatsakis
I've been thinking about this a lot since then. I think this is a good start for a memory model, but I didn't understand optimizations well enough at that point. Yeah, I do believe it's a good idea to use Miri to find undefined behavior.
What I've been thinking about is this:
Unfortunately, I think that any use of an uninitialized value should be undefined behavior. On some architectures, even moving from an uninitialized register to another register will be bad news, due to flags. I think we might have to adopt the C standard's idea that u8/i8 are special; they can be undef, and nothing else can be. The only thing you can do with an undef variable is write to it, or take a pointer to it.
Our references are dereferenceable, which is good for optimization purposes, but which also means you can't have a reference to an undef, in case you accidentally read an undef. Perhaps something like: the only thing you can do with a reference to an undef is 1) write through it, or 2) create a pointer from it?
@ubsan: Doesn't that then require the optimizer to be conservative with any reference it cannot prove is to a defined value? I'd much prefer saying that pointer-to-reference coercions are UB if the pointee is an undefined value, and use ptr::write() for the "write through" part.
@eternaleye No, the optimizer assumes that any reference it doesn't know anything about is to a defined value (unless that reference is a reference to a byte, or a byte array). The following are the only times a reference to an undefined is okay, I think:
let mut x = std::mem::uninitialized();
let ptr = &mut x;
write(ptr, ...);
// or
let mut x = std::mem::uninitialized();
let ptr = &mut x as *mut ...;
@ubsan: Hm, I see. Perhaps a combination, by saying that the only valid op on ref-to-undef is immediate conversion to pointer, something trivially lintable. Note that I would also lint returning the ref.
It can't be an error (back-compat), but IMO ref-to-undef is an unequivocal error, and the first form should have been disallowed. :(
@eternaleye Perhaps. This is far easier to deal with, however; just moving a reference to undef is undefined behavior.
@ubsan: If a reference to undef existing any more than ephemerally is UB, all timelines in which such a move occurs already encountered UB at an earlier point.
Consider:
unsafe fn foo(x: &mut T) {
*x = mem::uninitialized();
}
This function should lint, IMO, merely by existing. "Escape of ref to undef" permits that.
@eternaleye It makes sense to lint, because you're moving uninitialized data.
let x = std::mem::uninitialized();
is the only way to get uninitialized data, I think... That would be equivalent to something like
unsafe fn foo(x: &mut T) {
let uninit = mem::uninitialized();
*x = uninit;
}
Or... = std::mem::uninitialized() could be defined as a no-op.
fn foo(x: &mut T) {
*x = mem::uninitialized();
}
could be equivalent to
fn foo(x: &mut T) {
}
Mm, right - forgot the initializer/move distinction there. Still, I think losing the dereferenceable attribute on references is a bad trade - and if moving undef is UB, then proving refs to undef are immediately converted to pointers is an entirely tractable analysis AFAICT.
@eternaleye The point is there is no losing the dereferenceable attribute on references. Once you move a reference to an undef, it's completely dereferenceable.
Mm, I think we may actually be in violent agreement, but were separated by terms. As 'escape' requires a move one way or another (assign or return). Sorry about that.
I do think the non-escaped ref-of-undef should only permit pointer conversion and not writes, though - we have ptr::write() already.
@eternaleye Yeah, I can agree with that.
I do like the idea of = std::mem::uninitialized() being a no-op in the future. Although... we definitely use undefined behavior with our current implementation if there are any trap representations. If we changed std::mem::uninitialized to pub use std::intrinsics::uninit as uninitialized, and just made any use of intrinsics::uninit on the rhs of an = a no-op.
This would make:
<expr> = intrinsics::uninit(); into <expr>;
Does LLVM dereferenceable require that the value behind it is valid? I thought that it only requires that it can be read without triggering a page fault (i.e., it does not even guarantee that the value is not modified by other threads).
The other kinds of load metadata (e.g. nonnull) seem to turn invalid loaded data into undef/poison, as I specified. It would be better if that was documented somewhere. LLVM has no problem moving undefs and even storing them to memory - it even performs optimizations that rely on that.
@arielb1 The issue is that, at least in my opinion, we shouldn't be using LLVM as a benchmark. I'm of the opinion that we should allow implementations almost as much wiggle room for trap representations as C does, perhaps excluding integer types. But, for example, if a computer traps on passing sNaN, that should be allowed behavior, or if a computer traps if you read from a register which has NaT set.
I thought you were for allowing the loading of undef rvalues?
The issue here is that we want box mem::uninitialized() and the like to work, and that desugars into something of the form
tmp0 = Box::new();
(*tmp0) = std::mem::uninitialized::<u32>() -> [return: bb3, unwind: bb2];
Should we say that "RVO" writes of undef are ok?
In any case, I heard that people want to create structs with only some of their fields uninitialized, which currently generates e.g.
start:
tmp0 = Box::<Big>();
tmp2 = std::mem::uninitialized::<[u32; 65536]>() -> [unwind: unwind0]
(*tmp0) = Big { tag: const 42u8, data: tmp2 };
return = tmp0;
return;
unwind0:
tmp1 = alloc::heap::box_free::<Big>(tmp0);
resume; // Scope(0) at <anon>:11:1: 13:2
Source code for ^:
fn make() -> Box<Big> {
unsafe { box Big { tag: 0, data: mem::uninitialized() } }
}
@arielb1 I thought they were. Then I had a big dose of reality with CPUs that weren't x86 or ARM:
https://blogs.msdn.microsoft.com/oldnewthing/20040119-00/?p=41003/
We may have to use (something like) the C/C++ definition of uninitialized:
An uninitialized variable may either be in a valid, but unspecified state, or a trap representation (and it may change whenever you read it). The only thing that's not allowed to have a trap representation is i8/u8. If you do anything except write to a variable with trap representation, UB happens.
So, for rustc on x86 and ARM, for example, the integer and pointer types would have no trap representations.
@arielb1 It doesn't really matter how we translate it, just how we define it; box mem::uninitialized::<T>() should be equivalent to Box::from_raw(allocate(size_of::<T>(), align_of::<T>()) as *mut T).
@ubsan: And placement box with mem::uninitialized::<T>() would simply skip initialization (i.e. noop)?
@eternaleye That's the idea.
@ubsan I'm interested in hearing your response to the comment I wrote almost a month ago. I wonder what your thoughts are?
@ticki 1) you don't account for shared mutable state, which is allowed for with both raw pointers, and with UnsafeCell, afaict, 2) What can you do with uninitialized variables besides write to them? 3) how is dropping a reference defined? I imagine something like "last use".
Seems good otherwise. Sorry, it kinda got missed.
I do. See point 1.:
mutating a non-aliased value [..]
Or do you mean mutating while having other (passive) pointers accesible?
Well, reading an uninitialized value will always be unsafe, writing or aliasing one is valid of course.
See point 3., basically the "hidden counter" is decremented. From a technical point of view, this has the effect of changing the type state, potentially allowing future mutation.
Fair point, but the idea is the same: the program's behavior is well defined.
Sorry, it kinda got missed.
np.
@ticki 1) Yes, but mutating an aliased value is defined, i.e. in the case of Cell. 2) Reading an uninitialized value may be unsafe, but when is it invalid (for example, i8/u8 should always be valid, other types may be valid/invalid). 3) When is a reference dropped, not what happens when a reference is dropped.
Reading an uninitialized u8 is undefined in LLVM. Despite the types having no inconsistent representation, they cannot be read uninitialized, since the behavior would be defined by how the stack frame was laid out.
Reading an uninitialized u8 is undefined in LLVM. Despite the types having no inconsistent representation, they cannot be read uninitialized, since the behavior would be defined by how the stack frame was laid out.
@ticki It results in an uninitialized value. It does not result in undefined behavior (and in fact, this is true of all integer types in LLVM).
On 3), the reference is dropped when it is inaccessible or dead.
@ticki "Inaccessible" and "dead" are two very different things, and could be defined in multiple ways. Please give examples?
Dead means that it won't be used at some later point in execution. Inaccessible means that it is out of scope or otherwise not possible to read or write.
Got it. I believe we want "dead" as the definition, otherwise our new SEME borrowck would be breaking the rules :P
And I'm still not clear on what aliased writes would mean?
An aliased write means that you write while another pointer is active.
That's the definition of an aliased write. What does it mean for the program? How does one define an aliased write in your model? (for example, when using Cell, or using raw pointers)
https://gist.github.com/ubsan/1fb0aa56dbab7ff2cd6713cbb5ae75f0
@nikomatsakis These are a few more litmus tests, and some thoughts on std::mem::uninitialized.
I wrote a blog post discussing my thoughts on a high-level approach to the memory model:
http://smallcultfollowing.com/babysteps/blog/2016/05/27/the-tootsie-pop-model-for-unsafe-code/
The heart of the proposal is the intution that:
when you enter the unsafe boundary, you can rely that the Rust type system invariants hold;
when you exit the unsafe boundary, you must ensure that the Rust type system invariants are restored;
in the interim, you can break a lot of rules (though not all the rules).
Discuss thread: http://internals.rust-lang.org/t/tootsie-pop-model-for-unsafe-code/3522
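As a toy illustration of that boundary discipline (my example, not from the post): inside the unsafe region the Vec's length invariant is temporarily broken, and it is restored before control leaves it:
fn push_twice(v: &mut Vec<u8>, a: u8, b: u8) {
    v.reserve(2);
    unsafe {
        let len = v.len();
        let p = v.as_mut_ptr().add(len);
        p.write(a);           // length is now stale: invariant temporarily broken
        p.add(1).write(b);
        v.set_len(len + 2);   // invariant restored before exiting the unsafe block
    }
}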
@ubsan
These are a few more litmus tests, and some thoughts on std::mem::uninitialized.
Great! I'm looking through now. As always, I apologize for my permanently backlogged state. Most of the initial examples are basically safe code, so it seems indeed pretty hard to imagine them being undefined.
unsafe fn memcpy(dst: *mut u8, mut src: *const u8, len: usize) {
for el in std::slice::from_raw_parts_mut(dst, len) {
*el = *src;
src = src.offset(1);
}
}
I guess the key point here is the std::slice::from_raw_parts_mut, which then produces an &mut [u8]? Or is the key point that this is applied to a struct with padding?
// dereferencing any non-null pointer to a ZST
fn f(x: *const ()) -> () { if x.is_null() { () } else { unsafe { *x } } }
I definitely think that dereferencing a pointer to a ZST is invalid. Put another way, you have the right to read 0 bytes of data through that pointer, which means dereferencing is invalid.
// Creating a raw pointer to an uninitialized value
let x: u32 = std::mem::uninitialized();
let y = &x as *const u32;
What about this example worries you? Is it the transient &x? It seems like a "pointer to an uninitialized value" will happen every time you call free, so the mere existence of such a pointer had better not be a big problem.
let x: u32 = std::mem::uninitialized();
let y = x;
Yeah, this is a good example to chew on. There are also some fine distinctions to be drawn, around questions like padding and so forth. (Like, just what constitutes uninitialized?) I need to read more into that blog post you cited also.
I'm curious what differentiates the u32 above from the u8 case you cite later on:
// C allows this; should we (I believe so)
let x: u8 = std::mem::uninitialized();
let y = x;
And note, I haven't talked about this, but std::mem::uninitialized is a function, and so we're (semantically, at least) moving uninitialized values when we call it. Why is it special?
Yes, a good question, I agree! Seems to argue that we should permit accesses from uninitialized memory, at least in certain ways.
As you can see from my blog post, I disagree that this ought to be undefined:
// Aliasing writes with &mut
fn f(x: &mut i32, y: &i32) -> i32 {
*x = *y + 1;
*y
}
let mut x = 10;
let ptr = &mut x as *mut _;
unsafe { f(&mut *ptr, &*ptr) }
I definitely think that dereferencing a pointer to a ZST is invalid. Put another way, you have the right to read 0 bytes of data through that pointer, which means dereferencing is invalid.
(Emphasis mine.)
By dereferencing it, aren't you... doing exactly that? I would expect dereferencing a pointer to type T to always read size_of::<T>() bytes of memory from the address that it points to. In the case of a zero-sized type, that's 0 bytes. A no-op. Safe as heck.
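Concretely, the operation under debate is something like this (illustrative only; whether it is defined is exactly the open question):
fn main() {
    let p = 0x1usize as *const (); // dangling but non-null
    let _unit: () = unsafe { *p }; // "reads" size_of::<()>() == 0 bytes
}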
@glaebhoerl Despite being zero sized, it can still be unsafe to deref (e.g., segfaults), unless you want to add a zero-check for unsized types.
@nikomatsakis
Yes, the issue is that you're applying it to a struct with padding, which should be okay with u8s.
"you have the right to read 0 bytes of data through that pointer". Which is what you are doing.
The issue I have with it is that &T should always be safe to deref, I guess? They're dereferenceable in LLVM, so... Pointers, I have no problem with, they're not dereferenceable.
What differentiates u32 from u8 is that C says that char must be allowed to be uninitialized. Therefore, we can assume that, at least, char doesn't have any trap representations.
Assume that f is in a different, completely safe module :P
A problem that has come up in the discussion is assignment temporary elimination.
As you know, assignment of a struct without a destructor creates MIR like tmp = RVAL; LVAL = tmp. The question is when we can optimize that to LVAL = RVAL.
That is significant, because the computation of RVAL can depend on LVAL. For example (this compiles today):
fn must_work() {
let mut x = S::new();
x = foo(&x);
}
If we are talking about our optimization, the simplest interesting example is the following (everywhere, Big is a struct without a destructor):
fn example0(b: &mut Big) {
println!("{:?}", b);
*b = Big::new();
}
Here, we would like to optimize out the assignment temporary. The problem is, there is no way we can prove the formatting code is not storing a copy of b inside some global, which Big::new will read after it invalidates it.
If we decide to special-case globals in some way, we are still in some trouble:
fn example1(b: &mut Big) {
let x = foo(&b);
*b = Big::new(x);
}
Here, there are a few principal possibilities.
foo returns a boolean or something. In that case, we would like to avoid generating a temporary if possible.
foo is the identity function. Then the borrow of b will be extended to contain the call to Big::new, and we have safe code that compiles, and we must generate the temporary.
foo is <*const Big>::as_ptr. In unsafe code, we would like that to work just like the previous case. The problem is that we can't distinguish this case from the first one reliably - every return value that contains a raw pointer is possibly suspect. (A sketch of this case follows.)
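For concreteness, here is a sketch of that third case, keeping the thread's convention that Big is some struct without a destructor (Big::from_raw is a hypothetical constructor that may read through the pointer):
fn example2(b: &mut Big) {
    let x: *const Big = &*b; // a raw pointer derived from `b` escapes
    *b = Big::from_raw(x);   // if `from_raw` reads `*x`, the temporary is required
}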
There is one pattern that is commonly used that @ubsan's definition seems to make UB:
fn whatever(a: &mut u32) -> u32 {
// either order of assignments
let ptr0: *const u32 = a;
let ptr1: *mut u32 = a;
let result = access(&*ptr0); // for some `fn access(&u32) -> u32`
*ptr1 = 0;
result
}
fn whatever_mut(a: &mut u32) -> u32 {
// either order of assignments
let ptr0: *const u32 = a;
let ptr1: *mut u32 = a;
manipulate(&mut *ptr1); // for some `fn manipulate(&mut u32)`
*ptr0
}
For example, ptr::write desugars to the second.
@arielb1 Yes, your second example would be UB under my memory model. I have some ideas for solving it though.
Rust aliasing model
Base Definitions
References are either immutable, mutable, or moving.
Access kinds are read or write.
The aliasing rule
If a reference r with lifetime 'r is created, then afterwards, before 'r ends but in no particular order:
* Some memory location l is accessed through a borrow-chain from r - this is the asserting access.
* The same memory location is accessed in any other way - this is the conflicting access.
* One of these accesses is a write.
Then an aliasing access has occurred.
Aliasing accesses are UB, unless there exists a reference s to with lifetime 's, such that
* s was created by a borrow-chain from r
* s has sufficient permissions for the conflicting access.
* 's is alive at the point of access.
* s is potentially alive. This means either:
- s is an immutable reference
- s is the reference used for the conflicting access
- s was converted to a raw pointer before the conflicting access
In that case, s is called the guaranteeing reference.
Reference Creation
All references are created by reborrowing, either of a prior reference or of a raw pointer.
A reference is directly reborrowed from another if it is the borrow of some lvalue that came from the original reference. A borrow chain is the reflexive transitive closure of the directly-reborrowed relation.
NOTE: lvalues can contain dereferences of &mut - these have the borrow chain of their parent. However, dereferences of & have the borrow chain of the borrowed reference itself.
NOTE: raw pointers can't take part in a borrow chain.
References created by transmutation ex nihilo should be handled as if they were created by reborrowing a raw pointer.
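To make the rule concrete, here is my reading of the raw-pointer clause as a small example (an illustrative sketch, not part of the original model):
fn main() {
    let mut x = 0i32;
    let r = &mut x;            // the reference `r`
    let p: *mut i32 = &mut *r; // a reborrow `s`, converted to a raw pointer...
    unsafe { *p = 1; }         // ...before this conflicting write
    *r = 2;                    // asserting access through the borrow chain of `r`
    assert_eq!(x, 2);
}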
unless there exists a reference s to with lifetime 's
Missing word or stray "to"?
Additional
Does anyone have any other interesting examples for me to analyse? @ubsan @nikomatsakis
@arielb1 I have a number of questions about the rules you propose here. I think a lot of this has to do with definitions.
For example, when you say "reference", do you mean specifically a value of type &T or &mut T (as opposed to, say, a raw pointer)?
Similarly, when you say "reborrow", what do you mean? I know of at least two definitions for that term (me and @RalfJung seem to have intuited different meanings, for example). I'm not sure how important it is. My definition is basically just a subset of borrows: that is, borrowing a value via some lvalue that involves dereferencing a reference (and thus reaching into content that was already borrowed). So if x: &i32, then &x is a borrow and &*x is a reborrow of the referent.
Certainly the part of this proposal that I find most worrisome was the fact that it relies on specific lifetimes, and in particular the "conservative" rule around reborrows (casts) from &T to *const T. For example, if I have an argument p: &'a u32, and I cast p to *const u32, then saying that this cast extends for all of 'a seems maybe plausible. But what if I cast &*p to *const u32? In that case, the reference I am casting actually has a very short lifetime, but surely we want the same result?
I've been hoping rather for a model where the legal aliasing doesn't consider the lifetime of the references in question at all. Rather, we can define the memory that a fn has permissions to affect and how -- and lifetimes do come into play here, since if you pass in a &'a i32, you are granting that fn permission to read from the reference until the end of 'a (and you are not granting permission to mutate). (And when you declare let x: u32, you are giving permission to read from that stack slot until the end of the enclosing scope.) But within the fn body itself, the optimizer would not consider lifetimes. (Note that you can have this without going to the full Tootsie Pop model I proposed in my blog post.)
@nikomatsakis
Sure. Reference = TyRef = &,&mut,&move. And in my scheme, all borrows are reborrows because locals are lifted into allocas.
Certainly the part of this proposal that I find most worrisome was the fact that it relies on specific lifetimes, and in particular the "conservative" rule around reborrows (casts) from &T to *const T. For example, if I have an argument p: &'a u32, and I cast p to *const u32, then saying that this cast extends for all of 'a seems maybe plausible. But what if I cast &*p to *const u32? In that case, the reference I am casting actually has a very short lifetime, but surely we want the same result?
And it would have the same result. The &*p borrow must also be treated conservatively for UB purposes, therefore it would also be extended. What I meant by being conservative is that if there exists some way of inferring lifetimes that is not UB, then the code is not UB. Basically, we can't infer lifetimes correctly, so we must assume the worst.
permissions
I am quite sure that my model can be stated equivalently in terms of capabilities. In fact, that was the first form I had in mind, and I switched to the access-based form for clarity.
you are granting that fn permission to read from the reference until the end of 'a (and you are not granting permission to mutate).
See how the lifetimes sneak back in? I don't see any way to have a workable model without lifetimes - for example, the only difference between rvo_safe and no_rvo_safe (which I added now) is the lifetime bounds in as_ptr vs. id, yet the functions are supposed to be optimized differently.
@arielb1
See how the lifetimes sneak back in? I don't see any way to have a workable model without lifetimes - for example, the only difference between rvo_safe and no_rvo_safe (which I added now) is the lifetime bounds in as_ptr vs. id, yet the functions are supposed to be optimized differently.
Named lifetimes in the fn signature are a totally different beast than random lifetimes that result from inference. I agree you can't "excise" them completely in some sense. They are the source of the permissions -- and of course if you continue to use memory you obtained from a reference after the fn returns, you need some way (e.g., mentioning the named lifetime in your type) to show that you are still within that lifetime.
@arielb1 This is such an unimportant nit I almost feel bad bringing it up:
let local = &move *alloca();
I believe you want a scratch pointer since local is uninitialized at lifetime begin, and must be uninitialized at lifetime end.
@Ericson2314 could you explain further?
@ubsan &move is used for moving data out, so at the beginning of the lifetime it must point to an initialized location, and at the end of the lifetime, since the data is moved out, it must point to an uninitialized location.
&uninit or &scratch would be a pointer type where at both the beginning and end of the lifetime, the location pointed to must be uninitialized. The name "scratch" invokes the idea that this pointer gives access to scratch space / empty buffer / playpen etc. It is the proper return type for alloca, and encapsulated in a newtype for Drop to call free, it is the proper return type for malloc.
For more information see https://github.com/rust-lang/rfcs/pull/1617
@Ericson2314 Ah. We've taken to calling it &out.
&move can point to uninitialized memory (i.e., moving in and out, like in a local, would be allowed), and &out's semantics are not well known enough.
mm I thought the name &out referred to the pointer which starts uninitialized and must be initialized before it is dropped / lifetime ends. This one alone of the 4 possible is a linear type, but other than that I think all 4 are pretty understood.
Regardless, all four readily fit the proposal of @arielb1 that I was quoting, hence why I felt bad bringing them up.
@Ericson2314 Ah, got it. Yeah, see, &uninit is definitely not well understood, I didn't understand what you were talking about ;P.
Anyways, &move does fill that hole; it has two states, just like a local variable (at least in my model of &move), "moved out", and "moved in". This would just be starting the reference in the "moved out" state.
@ericson2314
This would be a better approximation:
let local = &move *alloca();
mem::forget(*local); // mark `*local` as uninitialized
However, the presence/absence of the mem::forget does not affect the aliasing semantics, so I omitted it.
it has two states, just like a local variable
Well, there needn't be any extra initialized/uninitialized "state" because four pointer types are enough to make it first-class https://github.com/rust-lang/rfcs/pull/1617#issuecomment-220697500.
This would be a better approximation:
With the above, the forget call wouldn't even be legal, unless we want "rvalue-initialized-ness" and forget to be polymorphic over such a thing.
PS: at the MIR level, &move is just as linear as &out - drops do not insert themselves, after all.
Yes indeed! This symmetry is much nicer, and I love that I can say "well, we already have them!" on the next linear types proposal.
Back on topic, I'm still confused on ZST-deref being unsafe. IMO, *x: () should be safe, *Void: () should not. This is one of the reasons I want Void-iso types to have size -∞.
With the above, the forget call wouldn't even be legal, unless we want "rvalue-initialized-ness" and forget to be polymorphic over such a thing.
I want Box<T> to be implementable as struct Box<T>(&'static move T) up to it not requiring T: 'static due to WF. You can mem::forget out of a box.
Back on topic, I'm still confused on ZST-deref being unsafe. IMO, *x: () should be safe, *Void: () should not. This is one of the reasons I want Void-iso types to have size -∞.
The unsafety rules take effect before monomorphization, so the size of types is not known yet. I think that deref of a unit type should always be a no-op (never UB) on a non-null pointer/reference, and that a deref of an empty type should be an intrinsics::unreachable().
I want Box to be implementable as struct Box(&'static move T) up to it not requiring T: 'static due to WF. You can mem::forget out of a box.
That looks like killing a fly with a nuke. We just need some control over the internal drop-flag - a shallow_drop - that could be unsafe too.
I suppose we can add &move before the others and then that will make sense. Empty boxes are unsafe, but it's a step in the right direction.
The unsafety rules take effect before monomorphization, so the size of types is not known yet. I think that deref of a unit type should always be a no-op (never UB) on a non-null pointer/reference, and that a deref of an empty type should be an intrinsics::unreachable().
I think that's in agreement with everything I've said? There are many (safe) things we can and should do with generic code that assume >0 size, such as allowing match x: Option<A> { Some(x) => ..., ...} even though, if A were Void, we'd have a dead match arm.
@arielb1 @Ericson2314 The point, at least for me, of &move is that it gets rid of Drop::drop dropping inner fields that it derefs to. See the codegen section of https://gist.github.com/ubsan/9d79ad4299870faf8c7b
@ubsan
So have drop glue call DerefMove to get the interior to drop? That... could work.
OTOH, you ought to be able to do partial drops, which would make DerefMove unsafe:
struct Foo(Box<u32>, Box<u32>);
fn main() {
let mut x = Box::new(Foo(Box::new(0), Box::new(1)));
drop(x.1);
// drop glue:
let d = DerefMove::deref_move(&mut x); // unsafe! can access `x.1`!
drop(d.0);
mem::forget(d.1); // `x.1` was already dropped above
Drop::drop(&mut x);
}
@ubsan and I had talked a bit about &move on IRC, but I am wary of extending the language if we are still left with (different) unsafety/magic. So yes, &move alone is still useful (e.g. for drop in place) but we ought to hold off on DerefMove until it can be done right.
My "4 types of unique borrowed pointer" proposal is really less about pointers and more about making "initializedness" first-class: part of the type system instead of some extra static analysis tacked onto borrow checking. Once you have that, the generalized pointer types are fairly inevitable.
@arielb1 well, in that case, it's not unsafe :)
Notice how you're deref_move-ing in drop(x.1). That would turn into
let inner = x.deref_move();
drop(inner.1);
// drop glue
drop(inner.0);
Drop::drop(&mut x);
So there's gotta be some sort of guarantee about only being called once, I'm thinking... something like that.
I have just opened https://github.com/rust-lang/rfcs/pull/1643, which proposes that rather than adopting one RFC, we adopt a "strike-team-based" approach to work these rules out in a more systematic way.
@ubsan
I think we want the DerefPure unsafe trait, and to not allow using DerefMove unless it is implemented.
@arielb1
But why? DerefMove seems fine if we just... don't call DerefMove multiple times. I think it's actually a really nice guarantee that DerefMove doesn't get called multiple times, just like Drop.
This allows for things like dropping in DerefMove, since it's only called once. If, for example, you have a cache that you don't need once the value has been moved out, it would be nice to get rid of it at that point. And I don't see the point of requiring DerefPure; it just makes it more annoying for users to implement DerefMove.
@Ericson2314
Back on topic, I'm still confused on ZST-deref being unsafe. IMO, *x: () should be safe, *Void: () should not. This is one of the reasons I want Void-iso types to have size -∞.
Most of the discussion here is well beyond the scope currently covered by my formalization, but this one part is actually already "in scope" and I have definitions which I think make sense. So I will try to share my formal thoughts on this. Funny enough, the outcome is exactly the opposite if what you suggest.
Let me try to explain the relevant parts of my model of types:
A type is a set of lists of values. Values are, for example, integers, addresses (i.e., memory locations), or booleans. These lists describe the layout of the type in memory. This is a very abstract model that does not distinguish integers of different size, or the fact that integers actually overflow. All these issues can be modeled faithfully, but they are mostly irrelevant for what I am interested in right now, so I am making my life simpler.
For example, the type i32 is the set of singleton lists whose only element is an integer.
The type struct { i32, bool } is the set of two-element lists, the first element being a number and the second being a boolean. The type () is the set consisting only of the empty list. The type Void is the empty set. Every type has a size, which is the length of the lists in the set. (They all have to have the same length -- no unsized types so far.) Clearly, the size of () must be 0. The size of Void can be anything, since there is no list in that set.
The type &mut 'a T is (very roughly speaking) the set of singleton lists whose element is a location l. Furthermore, the locations in [l, l+T.size) (left-inclusive, right-exclusive) have to be valid, allocated addresses that we can access and mutate for the duration of lifetime 'a, and the list of values stored at these addresses must be in T. (I am using mutable borrows here because they are way simpler than shared borrows. I hope to write a blog post at some point on why that is; I "just" need to come up with a nice explanation...)
So, if we consider &mut 'a (), what do we have? We have a location l such that [l, l+0) are valid addresses, and the list of values stored at these locations is in the set representing (). This interval is empty. Hence we do actually not know anything about the location, so in particular, we must not dereference it.
Let's consider &mut 'a Void. We have a location l such that [l, l+Void.size) are valid addresses, and the list of values stored at these locations is in the set representing Void. That's the empty set. There cannot be anything in the empty set, so we have a contradiction. From this, we can derive anything, and hence in particular, we are allowed to dereference the pointer.
What I am saying here, essentially, is that &mut 'a Void is actually the same type as Void in the sense that there is no possible value of that type, and hence we can always assume during compilation that no value of type &mut 'a Void exists. (This is actually assuming that 'a is an active lifetime.) Because of this, you can do literally anything if you have a value of that type around; that's unreachable code.
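As a toy encoding of the above (my own sketch, not part of the formalization): a type is just a predicate over lists of values, and the claims about (), Void, and i32 fall out directly:
enum Value { Int(i64), Bool(bool), Addr(usize) }

fn unit_ty(vs: &[Value]) -> bool { vs.is_empty() }              // (): only the empty list
fn void_ty(_vs: &[Value]) -> bool { false }                     // Void: the empty set
fn i32_ty(vs: &[Value]) -> bool { matches!(vs, [Value::Int(_)]) }

fn main() {
    assert!(unit_ty(&[]));              // () is inhabited (by the empty list)
    assert!(!void_ty(&[]));             // nothing inhabits Void
    assert!(i32_ty(&[Value::Int(42)])); // singleton integer lists inhabit i32
}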
@RalfJung But the topic was *mut Void, not &'a mut Void - does the same reasoning apply? If so, wouldn't that make using it for FFI pointers violently invalid?
@RalfJung I believe there's a distinction to be made here that &mut Void is Void but *mut Void isn't.
Specifically, one can create a *mut Void in safe code, but dereferencing it (in unsafe code) creates a value of type Void, which can make reachable code behave as if it wasn't (literally "invoking" UB).
About ZSTs, I believe you're talking about memory reads, not syntactical dereferences, which do not touch memory at all to "read" a ZST.
Sorry if I sound like a broken record, but this distinction in general can lead to a lot of confusion, because *ptr is a memory no-op in C, C++ and Rust (the latter 2 only if not overloaded) in that it converts a pointer rvalue into an lvalue while performing no memory access, but even if, say, &*ptr == ptr can be considered to be trivially true, AFAIK syntactical dereference can be UB in C if ptr == NULL.
@RalfJung still reading the rest of your post, but btw I totally agree that &mut is way easier to understand deeply. In https://internals.rust-lang.org/t/a-stateful-mir-for-rust/3596/15 (which you might be interested in anyways :)) I have a pretty good idea of what I will write for &mut but much less so for &.
Ah OK, so even besides @eddyb's concerns I consider a load from &mut () to be safe because that load must have size 0, so the load is actually a no-op. Totally agree &mut Void is absurd.
Also I'd like to point out that @RalfJung's model is another good reason why the conflation of ()-isomorphic and Void-isomorphic types as both having size 0 is a dangerous thing to do.
@Ericson2314 To be specific, lvalue->rvalue conversion for a () is a no-op, whereas lvalue->rvalue conversion for a ! is undefined behavior (I believe I can start using ! now as the Void type, it seems that will go through).
@eternaleye
But the topic was *mut Void, not &'a mut Void - does the same reasoning apply? If so, wouldn't that make using it for FFI pointers violently invalid?
Oh, I see. As far as I can tell, Rust attaches absolutely no guarantees to raw pointers, so their dereferenceability depends entirely on the meaning you assign to them. If your invariants ensure that the given *mut T points to X bytes of valid memory, then sure you can make use of that. Of course, if you put the result into a variable of type T, you have to make sure it is a valid T. So, you had better not dereference the *mut Void, and (naturally) there is a manual proof obligation when creating a *mut Void that this pointer actually satisfies the guarantees you'd like to attach to it.
@eddyb
Sorry if I sound like a broken record, but this distinction in general can lead to a lot of confusion, because *ptr is a memory no-op in C, C++ and Rust (the latter 2 only if not overloaded) in that it converts a pointer rvalue into an lvalue while performing no memory access, but even if, say, &*ptr == ptr can be considered to be trivially true, AFAIK syntactical dereference can be UB in C if ptr == NULL.
Oh, the good ol' lvalue-rvalue confusion. I am entirely sidestepping that issue in my formalization by not having lvalues, which is why I paid no attention to it. ;-) Anyway, when I wrote about dereferencing pointers above, I was talking about actual load operations that happen on the machine.
@Ericson2314
Also I'd like to point out that @RalfJung's model is another good reason why the conflation of ()-isomorphic and Void-isomorphic types as both having size 0 is a dangerous thing to do.
I am not sure that is the case. ;-) Actually, I did not even assume any particular size for ! above -- the reasoning works for any size. Truth is, the size of ! could be literally anything, it just doesn't matter as the type is not inhabited. I doubt it is worth complicating the algebra of sizes for this corner-case. It would be nice to have things like Option<!> having the same representation as (), but that won't fall out of any particular size for ! -- instead, that'd be an extension of the optimizations for enum layout that we already have (e.g., Option<&T> having the pointer size, without an added discriminant).
@ubsan
To be specific, lvalue->rvalue conversion for a () is a no-op, whereas lvalue->rvalue conversion for a ! is undefined behavior (I believe I can start using ! now as the Void type, it seems that will go through).
I would say that lvalue -> rvalue conversion for ! assumes that we have a !, so this cannot even happen. That's not UB, that's just impossible -- the UB happened earlier, when we obtained a value of type !. But the consequence is the same, the compiler is free to emit literally any assembly code when anything is done to a !.
@RalfJung Ah, that comment wasn't supposed to go through :)
The issue is raw pointers, where *mut ! is a valid type, despite being an lvalue to an impossible rvalue.
@ubsan Right, as long as you don't dereference it. Whether &*x is already UB, I am not so sure... I would tend to argue it should not be.
Quoting my comment from #1216 with respect to using *mut ! for things like FFI types:
Given that the contract of *const T/*mut T is that it shall only be dereferenced when it actually points to valid/live data (the type system can't know this, which is why it's unsafe; it's entirely up to the user to determine this): I think *mut UninhabitedType, with the understanding that it will never be dereferenced, and *mut UnitType, with the understanding that it can be dereferenced whenever but there's no point in doing so, make equal amounts of sense.
(In other words, this is unlike &mut UninhabitedType, which is logically equivalent to UninhabitedType itself, because *mut doesn't imply liveness.)
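That is also the usual FFI idiom in practice, sketched below (lib_open and lib_close are hypothetical C functions, not from any particular library):
enum Opaque {} // uninhabited: no Opaque value can ever exist

extern "C" {
    fn lib_open() -> *mut Opaque; // fine: the pointer is never dereferenced
    fn lib_close(handle: *mut Opaque);
}

fn main() {} // declarations only; nothing here dereferences an Opaque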
@RalfJung
I am not sure about &*bot, but a use (aka lvalue-to-rvalue conversion) of a Void is UB.
Agreed. I'd argue that &*x is a noop (when translating to an lvalue-free semantics, it compiles to x). But if an actual memory access happens, we get UB.
@RalfJung But having an &! is UB, so &*x is undefined behavior, because it creates an &!.
Here are some of my own thoughts (not an expert, but am working on a garbage collector):
The memory model needs to be formal. Otherwise you can't formally verify unsafe Rust code, which is a use case that (at least) Redox definitely wants.
We need to avoid falling into the trap that C/C++ fell into with strict aliasing, which as I understand it breaks so much code that major compilers implement -fno-strict-aliasing and major projects rely on this (the OCaml runtime, CPython 2.x, and the Linux kernel, as well as the output of an RPC compiler at a minimum), and yet as I understand it has only minimal performance gains in practice for typical code (i.e. not tight numeric loops).
The memory model needs to be easy for people to understand. That means that an equally normative, easy to understand prose description of the memory model needs to be present as well – it is, after all, what most people will be programming to. If the formal model and the prose model diverge, this is a bug in the spec that must be fixed.
It needs to be possible to do what needs to be done without unnecessary copies. In C++, the only standards-conforming way to pass the buffer stored in an std::string to a function that takes an unsigned char * is to copy the entire buffer! This is clearly absurd.
The memory model needs to support writing memory allocators, garbage collectors, etc. without too much boilerplate or pitfalls beyond what C would have. If the Rust version of some code is twice as long as the C version (with -fno-strict-aliasing) and is much harder to read, then that is a problem with Rust.
Aliasing *mut pointers should be permitted. Furthermore, it should be possible for an &mut or & pointer to alias a *const or *mut pointer, so long as the pointee does not change (in the case of a &-pointer) or is only modified through the reference (in the case of an &mut) for as long as the reference is live.
In multithreaded code, a common type of data race is two threads racing to write the same value to the same address, with no racing reads. Such races should be allowed, as the operation of writing a value to an address is idempotent. Similarly, the case where two threads simultaneously read a value from an address, set and/or clear a bit, and then write the new value back should be allowed – again, no reordering that I can imagine would break the code, and I have read about several algorithms (garbage collectors, if I recall correctly) that rely on this to work.
Nominating for discussion at the lang-team meeting - the unsafe guidelines team exists now, so we should consider moving this PR somewhere else, or at least re-tagging it for that team.
I agree with @nrc that, given that we have accepted https://github.com/rust-lang/rfcs/pull/1643, we should close this RFC, since its contents are subsumed by the discussion taking place at the unsafe guidelines repo.
I reviewed the thread and have tried to extract the most notable questions that were debated. It would make sense to move many of these to issues on the rust-memory-model repo, I think:
how to handle uninitialized data and whether some CPUs will require loading of undef to be UB
some discussion of temporary elimination in MIR
an alternative aliasing model jointly proposed by @arielb1 and @ubsan
some thoughts on ZST interactions
The discussion on the RFC has focused on a number of topics, but surprisingly little on the meat of the rules here.
how much and what to formalize
how well-specified LLVMs docs are (not much)
a reframed proposal by @arielb1 and @ubsan
@rust-lang/lang members, please check off your name to signal agreement. Leave a comment with concerns or objections. Others, please leave comments. Thanks!
[x] @nikomatsakis
[x] @nrc (I took the liberty of pre-checking, given the previous comment)
[ ] @aturon
[ ] @eddyb
[x] @pnkfelix (pre-checked, since he is on vacation)
I'm just going to close it now, since I owns it :P
|
gharchive/pull-request
| 2016-04-10T07:17:09 |
2025-04-01T06:40:17.517244
|
{
"authors": [
"Amanieu",
"Connorcpu",
"Ericson2314",
"Manishearth",
"RalfJung",
"Ticki",
"archshift",
"arielb1",
"chrisvest",
"comex",
"dgrunwald",
"dikaiosune",
"drbo",
"eddyb",
"eternaleye",
"gereeter",
"glaebhoerl",
"nikomatsakis",
"nrc",
"petrochenkov",
"pnkfelix",
"steveklabnik",
"ticki",
"ubsan"
],
"repo": "rust-lang/rfcs",
"url": "https://github.com/rust-lang/rfcs/pull/1578",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2175723853
|
Fix rust-mode lazy loading
This PR fixes the following error that happens when opening a .rs
file:
File mode specification error: (void-function rust-mode)
It adds a stub function called rust-mode that selects the actual
rust-mode (prog or treesitter) based on the environment. Once the
actual rust-mode is loaded, the stub function gets redefined.
Fixes #528
Thanks, @jroimartin can you see why the tests are failing?
Sure, I'm working on it.
@psibi PTAL. I think this is the smallest required change. On the flip side, it is a bit tricky. But having two major modes with the same name and loading them conditionally seems problematic. So, I opted for allowing the stub function (called rust-mode) to be redefined when loading the actual mode.
Thanks, LGTM. @condy0919 Can you see if it works in your setup too ? I will test drive this PR too.
Thanks, LGTM. @condy0919 Can you see if it works in your setup too ? I will test drive this PR too.
Sorry, I'm traveling 🥺 and can't help with that right now
No rush. There is a valid workaround which is just adding (require 'rust-mode) to init.el. It is just that it is nice to support lazy loading :slightly_smiling_face: This change also make rust-mode just work for people using plain package.el.
No worries, I'm also going to be on travel for the next two days. I will test this out more properly after that.
@psibi @condy0919 what do you think of db7d086? This option does not require messing up with the rust-mode function definition.
@psibi @condy0919 what do you think of db7d086? This option does not require messing up with the rust-mode function definition.
This solution comes to mind at first, and the trick is used in the evil-exchange package.
I did some testing and this seems to work fine. @jroimartin Thanks for the PR. @condy0919 Thanks for testing and review.
|
gharchive/pull-request
| 2024-03-08T10:20:20 |
2025-04-01T06:40:17.589725
|
{
"authors": [
"condy0919",
"jroimartin",
"psibi"
],
"repo": "rust-lang/rust-mode",
"url": "https://github.com/rust-lang/rust-mode/pull/530",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
410042930
|
[unstable option] ignore
Tracking issue for ignore
request for stabilisation: https://github.com/rust-lang/rustfmt/issues/3243
This will need to be re-done for 1.x
What's blocking stabilizing this feature? I want this feature a lot. I need the ignore option to skip git submodules, which aren't formatted with rustfmt.
#5365 #5367
I know there are several processes involved, but I can't tell where this feature stands now. In addition, it looks like it was stabilized once for 2.0, so I want to know what's missing for 1.x.
@anatawa12 I think the issue is that even the rustfmt team isn't too sure what outstanding issues are blocking stabilization for this one.
I don't know if this is a blocker but I did a quick search through the issue tracker and found #4726. There may be other outstanding issues but I don't have the time to check.
If anyone is interested in moving this option closer to stabilization I'd recommend starting with bullet number 1 listed on https://github.com/rust-lang/rustfmt/discussions/5367#discussioncomment-2874888.
|
gharchive/issue
| 2019-02-13T22:57:18 |
2025-04-01T06:40:18.266837
|
{
"authors": [
"anatawa12",
"calebcartwright",
"karyon",
"scampi",
"ytmimi"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/issues/3395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
110811515
|
Macro use not formatted - braces, item position, trailing comma
When running rustfmt on https://github.com/rust-lang-nursery/rand/blob/master/src/rand_impls.rs
@nagisa Could you paste the long line here please?
I’m not sure which one causes the issue, but removing https://github.com/rust-lang-nursery/rand/blob/master/src/rand_impls.rs#L220 seems to make it stop failing.
That's
array_impl!{32, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T,}
So possibly we're doing something silly with the macro use? Given that it fits on one line already, that seems weird.
It doesn't fit within 100 chars, right? We're not reformatting it yet because:
it's an item level macro use;
it uses braces;
it has a trailing comma.
Oh right, it doesn't, that would explain it!
Removing tags because we don't touch it yet and it's a known issue (also we're not making anything worse which is my usual criteria for p-high).
This may be fixed now. I think we format item-level macro uses and braces. Only the trailing comma may be an issue, since there's no way to know if it's significant. Maybe we could just go ahead and assume it is not?
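As a side note on why that trailing comma can be significant (a toy example, not from rustfmt itself): whether an invocation matches at all can depend on it.
macro_rules! count {
    ($($x:expr),*) => { [$($x),*].len() };
    ($($x:expr,)*) => { count!($($x),*) }; // a separate rule is needed for the trailing comma
}

fn main() {
    assert_eq!(count!(1, 2, 3), 3);
    assert_eq!(count!(1, 2, 3,), 3); // only matches via the second rule
}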
As of 8ec0750bb88dfe84ca2d54ddc29b13f5900641d6 it formats as follows:
fn foo() {
- array_impl!{32, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T,}
+ array_impl! {32, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T,}
}
The line is still over the limit, but an error would be displayed only if the error_on_line_overflow option is enabled:
internal error: line formatted, but exceeded maximum width (maximum: 100 (see `max_width` option), found: 117)
--> /home/yfful/documents/code/rustfmt/438.rs:2
|
2 | array_impl! {32, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T, T,}
| ^^^^^^^^^^^^^^^^^
warning: rustfmt may have failed to format. See previous 1 errors.
|
gharchive/issue
| 2015-10-10T19:13:53 |
2025-04-01T06:40:18.272122
|
{
"authors": [
"marcusklaas",
"nagisa",
"nrc",
"scampi"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/issues/438",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
724623056
|
Rustfmt cannot find mod with #[path] on windows
Rustfmt on Windows cannot resolve a module to its file if #[path] is used.
Note, #[path] should contain windows-unfriendly relative path.
To Reproduce
Use #[path = "./some-relative-path.rs"] in code.
The pointed-to file should be valid Rust source or empty.
#[path = "../module_foo.rs"]
mod module_foo;
Error:
> cargo fmt -- --check
error: couldn't read \\?\D:\a\rustfmt-issue-4477\rustfmt-issue-4477\src\..\module_foo.rs: The filename, directory name, or volume label syntax is incorrect. (os error 123)
Error writing files: failed to resolve mod `module_foo`: \\?\D:\a\rustfmt-issue-4477\rustfmt-issue-4477\src\..\module_foo.rs does not exist
Check out the demo for this issue. Also, there are two cases reproduced on GHA:
this error "file not found"
all is ok, #[path] with just filename
Expected behavior
No rustfmt's IO errors.
All modules resolved.
Meta
rustfmt versions:
rustfmt 1.4.15-stable (530eadf4 2020-06-02)
rustfmt 1.4.15-nightly (aedff61f 2020-05-19)
From where did you install rustfmt?: rustup, crates.io
How do you run rustfmt: rustfmt, cargo-fmt
rustfmt versions:
* rustfmt 1.4.15-stable ([530eadf](https://github.com/rust-lang/rustfmt/commit/530eadf4b42ddf35b209d4f4acd120f3fcc467ce) 2020-06-02)
* rustfmt 1.4.15-nightly ([aedff61](https://github.com/rust-lang/rustfmt/commit/aedff61f7ac4fc2b287ff76d33f2584e1f63a3af) 2020-05-19)
Could you try with a more recent version? There were a lot of improvements on the module resolution in 1.4.20/1.4.22 if I recall correctly
...
Could you try with a more recent version? There were a lot of improvements on the module resolution in 1.4.20/1.4.22 if I recall correctly
@calebcartwright, I'm sorry, the versions I previously mentioned are my local ones. But the latest rustfmt on CI is rustfmt 1.4.20-stable (48f6c32e 2020-08-09). (proof, see "print versions")
Same problem on rustfmt 1.4.22-nightly (97d03010 2020-10-04).
I built the latest rustfmt from source and found this has been fixed by #4022.
https://github.com/rust-lang/rustfmt/blob/2ce75b114716cf8900640b8f8e5db44438fc25da/src/config/file_lines.rs#L51-L53
The problem is caused by paths that traverse upward, like path\..\..\a.rs.
e.g.:
\\?\F:\Rust\tikv\tests\failpoints\cases\..\..\integrations\import\util.rs does not exist
This is the relevant description in rustc_span:
https://github.com/rust-lang/rust/blob/8cef65fde3f92a84218fc338de6ab967fafd1820/compiler/rustc_span/src/lib.rs#L106-L123
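In other words, verbatim (\\?\) paths are handed to Windows without the usual lexical normalization, so embedded .. components are rejected. A rough sketch of resolving them by hand (a hypothetical helper, not the actual fix in #4022):
use std::path::{Component, Path, PathBuf};

// Lexically resolve `.` and `..` so a verbatim `\\?\` path stays usable.
fn normalize(path: &Path) -> PathBuf {
    let mut out = PathBuf::new();
    for comp in path.components() {
        match comp {
            Component::CurDir => {}                // drop `.`
            Component::ParentDir => { out.pop(); } // resolve `..` against the parent
            other => out.push(other.as_os_str()),  // prefix, root, normal names
        }
    }
    out
}

fn main() {
    assert_eq!(normalize(Path::new("a/b/../c")), Path::new("a/c"));
}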
You can build the latest rustfmt and replace the one in your toolchain.
Windows:
set CFG_RELEASE=1.45.0-nightly <--(Specify yours)
set CFG_RELEASE_CHANNEL=nightly <--(Specify yours)
cargo build --bin rustfmt --features="rustfmt"
Just now I did:
rustup component add rustfmt --toolchain nightly
It says it installed, but Windows couldn't find the path.
|
gharchive/issue
| 2020-10-19T13:43:55 |
2025-04-01T06:40:18.284136
|
{
"authors": [
"calebcartwright",
"chadbrewbaker",
"francis-du",
"fzzr-"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/issues/4477",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
963633125
|
use_field_init_shorthand is listed as default: false but is enabled by default.
On Rust 1.54, when I run cargo fmt -- --help=config, it shows use_field_init_shorthand as default: false.
However running cargo fmt will format as if use_field_init_shorthand is true.
I think use_field_init_shorthand should list the default as true to reflect reality.
However running cargo fmt will format as if use_field_init_shorthand is true.
This does not sound accurate. Can you please provide a minimal example that reproduces the behavior you're describing?
Ah, sorry, the issue is not exactly as I described, but it is still an issue regardless.
It seems use_field_init_shorthand: false functions as an "off" switch, despite the documentation clearly describing it as forcing longhand formatting. https://github.com/rust-lang/rustfmt/blob/master/Configurations.md#use_field_init_shorthand
consider:
struct Foo {
a: i32,
}
fn main() {
let foo = Foo { a };
}
and
struct Foo {
a: i32,
}
fn main() {
let foo = Foo { a: a };
}
which both remain unchanged after running cargo fmt
both remain unchanged after running cargo fmt
Can you please clarify what configuration options you are using, and what your expected behavior is?
It seems like in the first snippet you are expecting use_field_init_shorthand to convert shorthand to longhand which is not what the option does. The example snippets shown in the config docs only highlight formatting style that will remain as-is/pass --check with the corresponding option value. They do not imply any reformatting except in cases where the documentation contains a section/comment/etc. that highlights a "before/pre-formatting".
In the second snippet, I'd imagine you would expect that to be converted to:
fn main() {
let foo = Foo { a };
}
which is precisely what the option is supposed to do, and what it will do so long as you have use_field_init_shorthand overridden with the non-default value of true
I'm not saying to use this exact wording but the documentation would make a lot more sense if we wrote it something like this:
Nope, that wouldn't work either because we actually run automated tests against all the config options and variants to ensure that the formatting of the snippet is idempotent when formatted with the corresponding config. I don't really have much more to offer up that hasn't already been said beyond general advice when viewing the docs to not make assumptions about things not explicitly stated nor shown, and always feel free to ask clarifying questions!
As mentioned before, when there's a feeling that showing a formatting diff is helpful, the actual "before" is shown explicitly, e.g.
https://github.com/rust-lang/rustfmt/blob/master/Configurations.md#always, but otherwise don't assume there was some different/specific input which was modified to produce the formatted snippets.
What if we did it like this?
This way we improve both the idempotency test and the documentation.
I am willing to provide a PR if it's just documentation changes and we find something we both agree on.
|
gharchive/issue
| 2021-08-09T04:31:12 |
2025-04-01T06:40:18.293181
|
{
"authors": [
"calebcartwright",
"rukai"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/issues/4948",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1204750546
|
Lines containing overly long strings are not formatted correctly
As per the title, lines containing overly long strings are "left alone" and not formatted correctly.
In this Rust playground, pressing the rustfmt button shows that line 3 is correctly collapsed while line 4 keeps the weird spacing.
I believe that this should not be the case: Even if the line can't be reduced to a sensible length, it should still be formatted in a "best effort" way.
As for the question of "where would you ever encounter this?", my answer is: Unbroken SQL queries are frequently over the line limit and cause formatting to break in weird ways.
By the way, when this bug happens it seems to "spread" like a virus. Lines near (perhaps inside the same block?) the overly long one are not formatted either, but I'm not sure how to reproduce this in a minimum example.
Thank you all in advance.
Thanks for reaching out but closing as a duplicate of #3863
Ah, sorry about that. The discussion on that issue is surprisingly comprehensive! Sad to see it's been an issue since 2019 however.
|
gharchive/issue
| 2022-04-14T16:37:01 |
2025-04-01T06:40:18.296440
|
{
"authors": [
"PurpleMyst",
"calebcartwright"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/issues/5310",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1095140892
|
Fix lost import with Crate granularity
Currently use a::{c as cc, c}; will be formatted to use a::c as cc; when Crate granularity is set.
The regression was introduced in https://github.com/rust-lang/rustfmt/pull/4973
Thank you for your interest and willingness to submit a PR! My apologies for the delay in review, but I think it'd be best to close this in favor of #5209 which I suspect will cover a broader suite of variants
|
gharchive/pull-request
| 2022-01-06T09:44:18 |
2025-04-01T06:40:18.298287
|
{
"authors": [
"calebcartwright",
"ldm0"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/pull/5166",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1188911040
|
Add test for issue 3937
Closes #3937
It's unclear which change fixed the format_code_in_doc_comments=true
issue brought up in the linked issue; however, I'm unable to reproduce
the error on the current master.
The added test cases should serve to prevent a regression.
As a side note #5171 fixes the first problem listed in the linked issue.
|
gharchive/pull-request
| 2022-03-31T22:19:09 |
2025-04-01T06:40:18.299731
|
{
"authors": [
"ytmimi"
],
"repo": "rust-lang/rustfmt",
"url": "https://github.com/rust-lang/rustfmt/pull/5287",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1471913279
|
Add support for SO_ORIGINAL_DST and IP6T_SO_ORIGINAL_DST.
Those values contain the original destination IPv4/IPv6 address
of the connection redirected using iptables REDIRECT or TPROXY.
Signed-off-by: Piotr Sikora piotr@aviatrix.com
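For context, a usage sketch of what this enables (assuming the accessor this PR adds ends up named original_dst; Linux-only, on a socket accepted from an iptables REDIRECT/TPROXY listener):
use socket2::Socket;

fn print_original_dst(sock: &Socket) -> std::io::Result<()> {
    let addr = sock.original_dst()?; // reads SO_ORIGINAL_DST via getsockopt
    println!("original destination: {:?}", addr.as_socket());
    Ok(())
}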
Could also add a test, even if it's ignored by default.
Could also add a test, even if it's ignored by default.
Done.
Thanks @PiotrSikora.
|
gharchive/pull-request
| 2022-12-01T20:23:38 |
2025-04-01T06:40:18.308977
|
{
"authors": [
"PiotrSikora",
"Thomasdezeeuw"
],
"repo": "rust-lang/socket2",
"url": "https://github.com/rust-lang/socket2/pull/360",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599806094
|
Add tvOS and watchOS
Follow up to https://github.com/rust-lang/socket2/pull/392.
Thanks!
|
gharchive/pull-request
| 2023-02-25T19:24:03 |
2025-04-01T06:40:18.310045
|
{
"authors": [
"Thomasdezeeuw",
"thomcc"
],
"repo": "rust-lang/socket2",
"url": "https://github.com/rust-lang/socket2/pull/395",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
490482757
|
Document the "rand" feature.
When using this crate I had trouble figuring out why I couldn't use the RandomBits structure. Hopefully future users will be less confused!
Doc locations:
README
RandomBits struct
RandBigInt trait
Thanks!
bors r+
|
gharchive/pull-request
| 2019-09-06T19:15:39 |
2025-04-01T06:40:18.315834
|
{
"authors": [
"alex-ozdemir",
"cuviper"
],
"repo": "rust-num/num-bigint",
"url": "https://github.com/rust-num/num-bigint/pull/109",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1193087038
|
Locale device tree address
I wonder if there is a way to get the address of the device tree via UEFI on aarch64? It would be wonderful if the crate could provide an interface to get the address of the device tree.
Hello! You might want to look into navigating the EFI Configuration Table structure, accessible with the .config_table() method on the SystemTable. It provides access to a list of descriptors of well-known, predefined "configuration tables", and according to the UEFI spec (page 104), you should be able to find a config table entry for Device Tree:
//
// Devicetree table, in Flattened Devicetree Blob (DTB) format
//
#define EFI_DTB_TABLE_GUID \
{0xb1b621d5, 0xf19c, 0x41a5, \
{0x83, 0x0b, 0xd9, 0x15, 0x2c, 0x69, 0xaa, 0xe0}}
We have some constant GUIDs defined for common tables, but not for DeviceTree (yet). If you're interested in making it easier to access this information for users of the uefi-rs crate, we'd welcome a PR adding it to the list 🙂
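A sketch of what that lookup could look like with uefi-rs (the exact Guid constructor and config-table types have shifted between crate versions, so treat the signatures as approximate):
use uefi::{table::{Boot, SystemTable}, Guid};

// EFI_DTB_TABLE_GUID: b1b621d5-f19c-41a5-830b-d9152c69aae0
const DTB_TABLE_GUID: Guid =
    Guid::from_values(0xb1b621d5, 0xf19c, 0x41a5, 0x830b, 0xd915_2c69_aae0);

fn find_dtb(st: &SystemTable<Boot>) -> Option<*const core::ffi::c_void> {
    st.config_table()
        .iter()
        .find(|entry| entry.guid == DTB_TABLE_GUID)
        .map(|entry| entry.address)
}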
|
gharchive/issue
| 2022-04-05T12:33:12 |
2025-04-01T06:40:18.319004
|
{
"authors": [
"GabrielMajeri",
"Luchangcheng2333"
],
"repo": "rust-osdev/uefi-rs",
"url": "https://github.com/rust-osdev/uefi-rs/issues/403",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
267948851
|
Some classes are missing constructors, Qt 5.9.1
I used this wonderful tool to generate Rust bindings for Qt 5.9.1. The Qt SDK is installed by brew. First I fixed a link error, which was caused by an incorrect is_framework determination; it works now. Then I found that some classes are missing their new() method. One of those classes is Widget (link to my output source).
You can run the generator with --debug-logging=save and then look at log/debug_parser_skips.log file in the cache directory. It should explain why the methods are missing.
Can you share what changes related to frameworks you needed to make? Does your Qt installation contain frameworks or libraries?
I'm currently changing the implementation to allow supporting all Qt versions at once. Hopefully it will make parser inconsistency issues less critical. I don't know how long it will take, though.
debug_parser_skips.log
...
Method is removed: Qt::Orientations<Qt::Orientation> QSizePolicy::expandingDirections() const: unknown type: Qt::Orientations
Method is removed: QFlags<QSizePolicy::ControlType> operator|(QFlags::enum_type f1, QFlags::enum_type f2): unknown type: QFlags::enum_type
Method is removed: QFlags<QSizePolicy::ControlType> operator|(QFlags::enum_type f1, QFlags<QSizePolicy::ControlType> f2): unknown type: QFlags::enum_type
Method is removed: QIncompatibleFlag operator|(QFlags::enum_type f1, int f2): unknown type: QIncompatibleFlag
Method is removed: private QWidgetPrivate* QWidget::d_func(): unknown type: QWidgetPrivate
Method is removed: private const QWidgetPrivate* QWidget::d_func() const: unknown type: QWidgetPrivate
Method is removed: [constructor] void QWidget::QWidget(QWidget* parent = ?, Qt::WindowFlags<Qt::WindowType> f = ?): unknown type: Qt::WindowFlags
Method is removed: void QWidget::grabGesture(Qt::GestureType type, Qt::GestureFlags<Qt::GestureFlag> flags = ?): unknown type: Qt::GestureFlags
Method is removed: Qt::WindowStates<Qt::WindowState> QWidget::windowState() const: unknown type: Qt::WindowStates
Method is removed: void QWidget::setWindowState(Qt::WindowStates<Qt::WindowState> state): unknown type: Qt::WindowStates
...
It seems like some complex defines failed to parse.
Can you share what changes related to frameworks you needed to make? Does your Qt installation contain frameworks or libraries?
I just adjusted the detection order. Here is my code.
cpp_to_rust/qt_generator/qt_generator_common/src/lib.rs
...
/// Detects properties of current Qt installation using `qmake` command line utility.
pub fn get_installation_data(sublib_name: &str) -> Result<InstallationData> {
let qt_version = run_qmake_string_query("QT_VERSION")?;
log::status(format!("QT_VERSION = \"{}\"", qt_version));
log::status("Detecting Qt directories");
let root_include_path = run_qmake_query("QT_INSTALL_HEADERS")?;
log::status(format!("QT_INSTALL_HEADERS = \"{}\"", root_include_path.display()));
let lib_path = run_qmake_query("QT_INSTALL_LIBS")?;
log::status(format!("QT_INSTALL_LIBS = \"{}\"", lib_path.display()));
let docs_path = run_qmake_query("QT_INSTALL_DOCS")?;
log::status(format!("QT_INSTALL_DOCS = \"{}\"", docs_path.display()));
let folder_name = lib_folder_name(sublib_name);
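// Check the macOS framework layout (lib/<Name>.framework/Headers) first,
// since Homebrew's Qt installs frameworks; fall back to the plain include dir.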
let dir = lib_path.with_added(format!("{}.framework/Headers", folder_name));
if dir.exists() {
Ok(InstallationData {
root_include_path: root_include_path,
lib_path: lib_path,
docs_path: docs_path,
lib_include_path: dir,
is_framework: true,
qt_version: qt_version,
})
} else {
let dir2 = root_include_path.with_added(&folder_name);
if dir2.exists() {
Ok(InstallationData {
root_include_path: root_include_path,
lib_path: lib_path,
docs_path: docs_path,
lib_include_path: dir2,
is_framework: false,
qt_version: qt_version,
})
} else {
Err(format!("extra header dir not found (tried: {}, {})",
dir.display(),
dir2.display())
.into())
}
}
}
...
and the directory structure of the Qt 5.9.1 SDK installed by Homebrew:
/usr/local/Cellar/qt/5.9.1/include
$ ls -l
total 512
lrwxr-xr-x 1 user admin 38 Jun 29 08:06 Qt3DAnimation -> ../lib/Qt3DAnimation.framework/Headers
lrwxr-xr-x 1 user admin 33 Jun 29 08:06 Qt3DCore -> ../lib/Qt3DCore.framework/Headers
lrwxr-xr-x 1 user admin 35 Jun 29 08:06 Qt3DExtras -> ../lib/Qt3DExtras.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 Qt3DInput -> ../lib/Qt3DInput.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 Qt3DLogic -> ../lib/Qt3DLogic.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 Qt3DQuick -> ../lib/Qt3DQuick.framework/Headers
lrwxr-xr-x 1 user admin 43 Jun 29 08:06 Qt3DQuickAnimation -> ../lib/Qt3DQuickAnimation.framework/Headers
lrwxr-xr-x 1 user admin 40 Jun 29 08:06 Qt3DQuickExtras -> ../lib/Qt3DQuickExtras.framework/Headers
lrwxr-xr-x 1 user admin 39 Jun 29 08:06 Qt3DQuickInput -> ../lib/Qt3DQuickInput.framework/Headers
lrwxr-xr-x 1 user admin 40 Jun 29 08:06 Qt3DQuickRender -> ../lib/Qt3DQuickRender.framework/Headers
lrwxr-xr-x 1 user admin 41 Jun 29 08:06 Qt3DQuickScene2D -> ../lib/Qt3DQuickScene2D.framework/Headers
lrwxr-xr-x 1 user admin 35 Jun 29 08:06 Qt3DRender -> ../lib/Qt3DRender.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtAccessibilitySupport
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtBluetooth -> ../lib/QtBluetooth.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtCglSupport
lrwxr-xr-x 1 user admin 33 Jun 29 08:06 QtCharts -> ../lib/QtCharts.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtClipboardSupport
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtConcurrent -> ../lib/QtConcurrent.framework/Headers
lrwxr-xr-x 1 user admin 31 Jun 29 08:06 QtCore -> ../lib/QtCore.framework/Headers
lrwxr-xr-x 1 user admin 31 Jun 29 08:06 QtDBus -> ../lib/QtDBus.framework/Headers
lrwxr-xr-x 1 user admin 44 Jun 29 08:06 QtDataVisualization -> ../lib/QtDataVisualization.framework/Headers
lrwxr-xr-x 1 user admin 35 Jun 29 08:06 QtDesigner -> ../lib/QtDesigner.framework/Headers
lrwxr-xr-x 1 user admin 45 Jun 29 08:06 QtDesignerComponents -> ../lib/QtDesignerComponents.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtDeviceDiscoverySupport
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtEventDispatcherSupport
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtFbSupport
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtFontDatabaseSupport
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 QtGamepad -> ../lib/QtGamepad.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtGraphicsSupport
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtGui -> ../lib/QtGui.framework/Headers
lrwxr-xr-x 1 user admin 31 Jun 29 08:06 QtHelp -> ../lib/QtHelp.framework/Headers
lrwxr-xr-x 1 user admin 35 Jun 29 08:06 QtLocation -> ../lib/QtLocation.framework/Headers
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtMacExtras -> ../lib/QtMacExtras.framework/Headers
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtMultimedia -> ../lib/QtMultimedia.framework/Headers
lrwxr-xr-x 1 user admin 44 Jun 29 08:06 QtMultimediaQuick_p -> ../lib/QtMultimediaQuick_p.framework/Headers
lrwxr-xr-x 1 user admin 44 Jun 29 08:06 QtMultimediaWidgets -> ../lib/QtMultimediaWidgets.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 QtNetwork -> ../lib/QtNetwork.framework/Headers
lrwxr-xr-x 1 user admin 38 Jun 29 08:06 QtNetworkAuth -> ../lib/QtNetworkAuth.framework/Headers
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtNfc -> ../lib/QtNfc.framework/Headers
lrwxr-xr-x 1 user admin 33 Jun 29 08:06 QtOpenGL -> ../lib/QtOpenGL.framework/Headers
drwxr-xr-x 8 user admin 272 Jun 29 08:06 QtOpenGLExtensions
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtPacketProtocol
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtPlatformCompositorSupport
drwxr-xr-x 30 user admin 1020 Jun 29 08:06 QtPlatformHeaders
lrwxr-xr-x 1 user admin 38 Jun 29 08:06 QtPositioning -> ../lib/QtPositioning.framework/Headers
lrwxr-xr-x 1 user admin 39 Jun 29 08:06 QtPrintSupport -> ../lib/QtPrintSupport.framework/Headers
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtPurchasing -> ../lib/QtPurchasing.framework/Headers
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtQml -> ../lib/QtQml.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtQmlDebug
lrwxr-xr-x 1 user admin 32 Jun 29 08:06 QtQuick -> ../lib/QtQuick.framework/Headers
lrwxr-xr-x 1 user admin 41 Jun 29 08:06 QtQuickControls2 -> ../lib/QtQuickControls2.framework/Headers
lrwxr-xr-x 1 user admin 41 Jun 29 08:06 QtQuickParticles -> ../lib/QtQuickParticles.framework/Headers
lrwxr-xr-x 1 user admin 42 Jun 29 08:06 QtQuickTemplates2 -> ../lib/QtQuickTemplates2.framework/Headers
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtQuickTest -> ../lib/QtQuickTest.framework/Headers
lrwxr-xr-x 1 user admin 39 Jun 29 08:06 QtQuickWidgets -> ../lib/QtQuickWidgets.framework/Headers
lrwxr-xr-x 1 user admin 40 Jun 29 08:06 QtRemoteObjects -> ../lib/QtRemoteObjects.framework/Headers
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtRepParser -> ../lib/QtRepParser.framework/Headers
lrwxr-xr-x 1 user admin 33 Jun 29 08:06 QtScript -> ../lib/QtScript.framework/Headers
lrwxr-xr-x 1 user admin 38 Jun 29 08:06 QtScriptTools -> ../lib/QtScriptTools.framework/Headers
lrwxr-xr-x 1 user admin 32 Jun 29 08:06 QtScxml -> ../lib/QtScxml.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 QtSensors -> ../lib/QtSensors.framework/Headers
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtSerialBus -> ../lib/QtSerialBus.framework/Headers
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtSerialPort -> ../lib/QtSerialPort.framework/Headers
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtSql -> ../lib/QtSql.framework/Headers
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtSvg -> ../lib/QtSvg.framework/Headers
lrwxr-xr-x 1 user admin 31 Jun 29 08:06 QtTest -> ../lib/QtTest.framework/Headers
lrwxr-xr-x 1 user admin 39 Jun 29 08:06 QtTextToSpeech -> ../lib/QtTextToSpeech.framework/Headers
drwxr-xr-x 7 user admin 238 Jun 29 08:06 QtThemeSupport
lrwxr-xr-x 1 user admin 35 Jun 29 08:06 QtUiPlugin -> ../lib/QtUiPlugin.framework/Headers
drwxr-xr-x 9 user admin 306 Jun 29 08:06 QtUiTools
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtWebChannel -> ../lib/QtWebChannel.framework/Headers
lrwxr-xr-x 1 user admin 36 Jun 29 08:06 QtWebEngine -> ../lib/QtWebEngine.framework/Headers
lrwxr-xr-x 1 user admin 40 Jun 29 08:06 QtWebEngineCore -> ../lib/QtWebEngineCore.framework/Headers
lrwxr-xr-x 1 user admin 43 Jun 29 08:06 QtWebEngineWidgets -> ../lib/QtWebEngineWidgets.framework/Headers
lrwxr-xr-x 1 user admin 37 Jun 29 08:06 QtWebSockets -> ../lib/QtWebSockets.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 QtWebView -> ../lib/QtWebView.framework/Headers
lrwxr-xr-x 1 user admin 34 Jun 29 08:06 QtWidgets -> ../lib/QtWidgets.framework/Headers
lrwxr-xr-x 1 user admin 30 Jun 29 08:06 QtXml -> ../lib/QtXml.framework/Headers
lrwxr-xr-x 1 user admin 38 Jun 29 08:06 QtXmlPatterns -> ../lib/QtXmlPatterns.framework/Headers
Same for Qt 5.7.1 with clang 3.9.1 on Linux.
Looks like it happens because qt_core fails.
qt_core/log/debug_parser.log https://gist.github.com/o01eg/6064fb33abb6914ba0feb770c77c2bbd
Looks like Qt::WindowFlags type is parsed successfully in qt_core, but when the generator works on qt_widgets, it doesn't have this information available. What are the command line arguments to qt_generator that you use when generating qt_core, qt_gui and qt_widgets?
Not sure if it is successful:
Failed to tokenize method operator| at SourceRange { start: SourceLocation { file: Some(File { path: "/usr/include/qt5/QtCore/qnamespace.h" }), line: 1758, column: 1, offset: 53192 }, end: SourceLocation { file: Some(File { path: "/usr/include/qt5/QtCore/qnamespace.h" }), line: 1758, column: 47, offset: 53238 } }
The code extracted directly from header: "Q_DECLARE_OPERATORS_FOR_FLAGS(Qt::WindowFlags)"
Args are:
CLANG_SYSTEM_INCLUDE_PATH=/usr/lib64/clang/3.9.1/include/ QT_SELECT=5 cargo run --release -- -c /tmp/_qt_cache -C 0 --debug-logging save -l core gui widgets ui_tools -o /tmp/qt-test001/
What are the command line arguments to qt_generator that you use when generating qt_core, qt_gui and qt_widgets?
export PATH=$PATH:/usr/local/Cellar/qt/5.9.1/bin
CLANG_SYSTEM_INCLUDE_PATH=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift-migrator/sdks/MacOSX.sdk/usr/include DYLD_LIBRARY_PATH=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib ../cpp_to_rust/qt_generator/qt_generator/target/release/qt_generator -c cache -o qt_5.8.0 --libs all --debug-logging=save
The framework detection issue should be fixed now.
The source of the main issue is not clear yet.
Still occurs on Kubuntu 18.04.1 with Qt 5.9.5 and qt_generator taken from git.
It's correct that there are types we're not going to support, and things like QIntegerForSizeof and QAtomicTraits are among them. Luckily, Qt doesn't use advanced templates heavily, so the majority of Qt's public API will be available.
Is there a way then to improve Rust-Qt to be usable with recent Qt versions? The generator from master cannot chew through the templated types, and the crates.io version is old and leads to random runtime segfaults.
The master version does support template types. However, its results are quite unstable across platforms, so some APIs may be missing. The new version is still in development, and the plan is to support newer Qt versions for it.
If you experience segfaults in the crates.io version, I would appreciate an issue that shows concrete code that produces it. Unfortunately, the wrapper is really unsafe in Rust's terms, so it's very easy to produce a segfault by incorrectly using C++-based APIs.
While trying to get the simplest code to reproduce the segfaults, I accidentally fixed it. I did not use the create_and_exit method from the tutorial and was dropping args all the time.
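For reference, the fixed pattern looks roughly like this. This is a minimal sketch only: the exact create_and_exit signature, and whether you call exec yourself, may differ between crate versions.
use qt_widgets::application::Application;

fn main() {
    // create_and_exit keeps the QApplication (and the argv storage it
    // borrows) alive for the whole closure, so nothing is dropped early.
    Application::create_and_exit(|_app| {
        // build and show widgets here
        0 // the returned code becomes the process exit status
    })
}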
@Riateche I see that you updated the "install_qt.py" script with Qt 5.13.0.
Does ritual support Qt 5.13.0? On docs.rs it says that the crate was generated against Qt 5.8.0 (if it supports Qt 5.13.0, could you please regenerate it against Qt 5.13.0?).
Would really like to use that seamless ETC support in QtQuick :)
Qt 5.13 is currently supported, but new versions are not published on crates.io yet. I'm working on it.
New version supporting Qt 5.13 is published on crates.io.
In the new version, the recommended way is to use generated crates that support a variety of Qt versions and platforms, so lack of some random functions shouldn't be a thing anymore.
|
gharchive/issue
| 2017-10-24T08:53:53 |
2025-04-01T06:40:18.336095
|
{
"authors": [
"Riateche",
"aristotle9",
"lilianmoraru",
"o01eg",
"plyhun",
"snuk182"
],
"repo": "rust-qt/cpp_to_rust",
"url": "https://github.com/rust-qt/cpp_to_rust/issues/63",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
381901587
|
Strip the result body when creating default HEAD responses
If you don't specify a HEAD behavior but you do provide GET, Tide will automatically use the GET behavior. But it retains the body, which should be stripped.
@aturon did a little digging into this; not sure if I am reading this right either, but it seems like hyper handles this for us. In the current code on master, here are some example responses from curl.
I'm running the named_path example:
curl --get --url http://127.0.0.1:8000/add_two/2 --header 'content-type: application/json' -v
Returns
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> GET /add_two/2 HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
> Accept: */*
> Referer:
> content-type: application/json
>
< HTTP/1.1 200 OK
< content-type: text/plain
< content-length: 15
< date: Mon, 19 Nov 2018 04:53:55 GMT
<
* Connection #0 to host 127.0.0.1 left intact
2 plus two is 4
and the head request:
curl --head --url http://127.0.0.1:8000/add_two/2 --header 'content-type: application/json' -v
returns
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8000 (#0)
> HEAD /add_two/2 HTTP/1.1
> Host: 127.0.0.1:8000
> User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
> Accept: */*
> Referer:
> content-type: application/json
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< content-type: text/plain
content-type: text/plain
< content-length: 15
content-length: 15
< date: Mon, 19 Nov 2018 04:53:33 GMT
date: Mon, 19 Nov 2018 04:53:33 GMT
<
* Connection #0 to host 127.0.0.1 left intact
Perhaps I am missing something though. Thoughts?
@tzilist
I had observed the same behavior while testing #31.
Guess I missed something too :)
That's very interesting. I haven't been able to track down where this is happening in Hyper...
@seanmonstar maybe you can provide a quick answer: does Hyper have built-in treatment for HEAD requests and if so, what is it?
hyper will try to prevent illegal HTTP semantics, like sending a body in response to HEAD requests, or in 204/304 status codes.
It may still be useful to recognize a HEAD request is different from a GET in cases where preparing the body may have been expensive.
@aturon if this is the case, should we just write a test case to ensure that HEAD requests always strip the body? We can just rely on hyper to handle this and the test case to make sure it doesn't break.
@tzilist Yep, that seems reasonable for now! I'd also suggest adding a comment to make clear that's what we're doing.
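A sketch of what such a test could look like; make_app and respond below are hypothetical stand-ins for whatever the http-service-mock API actually exposes, and Body is a placeholder for its body type:
#[test]
fn head_responses_have_no_body() {
    let mut app = make_app(); // hypothetical: app with only a GET route
    let req = http::Request::builder()
        .method("HEAD")
        .uri("/add_two/2")
        .body(Body::empty()) // placeholder body type
        .unwrap();
    let res = respond(&mut app, req); // hypothetical mock-service call
    // We rely on hyper to strip the body; this test just pins that behavior.
    assert!(res.body().is_empty()); // placeholder emptiness check
}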
I attempted to write a test for this given the mock service provided through http_service_mock, but I'm running into something strange... There are more details in my PR: https://github.com/rustasync/tide/pull/179
Has this issue been fully resolved?
@WSeegers I think it may have been in https://github.com/rustasync/tide/pull/179. Going to go ahead and close this!
|
gharchive/issue
| 2018-11-17T21:13:12 |
2025-04-01T06:40:18.367458
|
{
"authors": [
"DeltaManiac",
"WSeegers",
"aturon",
"fairingrey",
"seanmonstar",
"tzilist",
"yoshuawuyts"
],
"repo": "rustasync/tide",
"url": "https://github.com/rustasync/tide/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2393249633
|
feat(textToSpeech): add text-to-speech feature
Change
Add TextToSpeech (only reads title, description, alt, and text for now; click the icon button to play and stop)
Fix the data-cy conversion in actions/index.tsx
Add tests and story
I would use "Start reading aloud" and "Stop reading aloud".
|
gharchive/pull-request
| 2024-07-06T00:00:28 |
2025-04-01T06:40:18.371386
|
{
"authors": [
"Laurendragonscale",
"lyjeileen"
],
"repo": "rustic-ai/ui-components",
"url": "https://github.com/rustic-ai/ui-components/pull/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
766223154
|
Does struct field renaming support private fields?
I am trying to map across the stripe.CreatePaymentMethod function, but I am running into a bit of a blocker.
Because type is a keyword it cannot be used as a field name; thankfully #2360 was merged and I've ended up with the following:
#[wasm_bindgen]
pub struct PaymentMethodData{
#[wasm_bindgen(js_name = type)]
foo: String,
card: Card,
billing_details: BillingDetails,
}
card is a JS type and BillingDetails is a struct.
However, this results in the following compiler error:
error: expected an inert attribute, found an attribute macro
I have also tried:
#[wasm_bindgen]
pub struct PaymentMethodData{
r#type: String,
card: Arc<Card>,
billing_details: BillingDetails,
}
This does compile, but Stripe throws a runtime error (`IntegrationError: Invalid value for createPaymentMethod: type should be string. You specified: undefined.`).
Any ideas on where to go from here?
I don't think that private fields are exported into JS, so if you're passing this to an API that expects to be able to access a type property then that won't work? You'll need to probably make an impl with methods that work as accessors?
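A minimal sketch of that accessor approach, assuming js_name accepts the keyword on accessors the same way #2360 allows it on fields:
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct PaymentMethodData {
    kind: String, // private on the Rust side, so no reserved-word clash
}

#[wasm_bindgen]
impl PaymentMethodData {
    #[wasm_bindgen(getter, js_name = type)]
    pub fn kind(&self) -> String {
        self.kind.clone()
    }
    #[wasm_bindgen(setter, js_name = type)]
    pub fn set_kind(&mut self, value: String) {
        self.kind = value;
    }
}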
Okay, I've finally figured out what is going on: internally, Stripe Elements accesses fields using bracket notation, which is where the issue crops up:
var i = r[o]
where r is the object and o is a string field name. Is there a way to expose fields for this, or would I need to write a small translation layer?
Ah that makes sense! I'm unfortunately not enough of a JS-wizard to know what the options available to wasm-bindgen are to solve that. I'd probably have a small layer for now to translate.
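One way to write that small layer, sketched here with the serde-wasm-bindgen crate (my assumption, not something the thread settled on): serialize the struct into a plain JS object, whose properties are then reachable with bracket notation like r[o].
use serde::Serialize;
use wasm_bindgen::prelude::*;

#[derive(Serialize)]
pub struct PaymentMethodData {
    #[serde(rename = "type")]
    kind: String,
}

#[wasm_bindgen]
pub fn payment_method_data(kind: String) -> Result<JsValue, JsValue> {
    // to_value builds a plain JS object, not a wasm-bindgen class wrapper.
    serde_wasm_bindgen::to_value(&PaymentMethodData { kind }).map_err(Into::into)
}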
|
gharchive/issue
| 2020-12-14T09:55:44 |
2025-04-01T06:40:18.383639
|
{
"authors": [
"alexcrichton",
"disconsented"
],
"repo": "rustwasm/wasm-bindgen",
"url": "https://github.com/rustwasm/wasm-bindgen/issues/2397",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1551905963
|
Web worker example fails to build
Describe the Bug
The Wasm in web worker examples fails to build when running ./build.sh:
error[E0599]: no method named dyn_ref found for struct Element in the current scope
Steps to Reproduce
git clone http://github.com/rustwasm/wasm-bindgen.git --depth 1
cd wasm-bindgen/examples/wasm-in-web-worker
./build.sh
Expected Behavior
I expected the example to compile.
Actual Behavior
It produced an error, see above.
Additional Context
cargo version: cargo 1.68.0-nightly (8c460b223 2023-01-04)
Arch Linux 6.1.6
I was able to build and run the example wasm_no_modules_js_worker from this repo by Simon B. Gasse.
I can't reproduce this. This sounds like it was caused by #3221, though, since it removed the manual import of JsCast, which would have caused such an error if the example was built with a version of wasm-bindgen before JsCast was added to wasm_bindgen::prelude.
Now that 0.2.84 has been released, though, and the example explicitly requires it and thus JsCast in wasm_bindgen::prelude, I think this should be fixed. Can you confirm whether you still have this problem?
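For anyone stuck on an older wasm-bindgen where JsCast is not yet in the prelude, the manual import looks like this; a minimal sketch, assuming a Document handle from web-sys:
use wasm_bindgen::JsCast; // brings dyn_into/dyn_ref into scope on older versions

fn canvas_from(document: &web_sys::Document) -> web_sys::HtmlCanvasElement {
    document
        .get_element_by_id("canvas")
        .expect("element exists")
        .dyn_into::<web_sys::HtmlCanvasElement>() // provided by JsCast
        .expect("element is a canvas")
}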
I can confirm that the application builds now, thanks.
When serving the website using http or python3 -m http.server, and connecting to localhost:8000 with a browser, I observe that the following files are fetched in a seemingly endless loop:
worker.js
wasm_in_web_worker.js
wasm_in_web_worker_bg.wasm
Do you see that too?
cargo version: cargo 1.68.0-nightly (985d561f0 2023-01-20)
Arch Linux 6.1.8
Firefox 109.0.1
Chromium 109.0.5414.119
When serving the website using http or python3 -m http.server, and connecting to localhost:8000 with a browser, I observe that the following files are fetched in a seemingly endless loop:
worker.js
wasm_in_web_worker.js
wasm_in_web_worker_bg.wasm
Oh, right. Sorry, that was just fixed in #3278. I think I responded to you right before I merged #3248 and caused that.
Thanks for fixing this, Liam! The example works flawlessly now.
|
gharchive/issue
| 2023-01-21T20:35:28 |
2025-04-01T06:40:18.393569
|
{
"authors": [
"Liamolucko",
"mb720"
],
"repo": "rustwasm/wasm-bindgen",
"url": "https://github.com/rustwasm/wasm-bindgen/issues/3256",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2732373983
|
Using exported Rust structs causes detached Buffer issue
Describe the Bug
When calling a function that accepts a vector of exported Rust structs, built with Rust > 1.82, the function returns an error once the input is large enough.
Steps to Reproduce
Use the hello_world example and set wasm-bindgen to version 0.2.99.
Create a function and a struct in rust:
#[wasm_bindgen]
#[derive(Clone, Copy, Debug)]
pub struct PartialTestData {
pub a: u32,
pub b: f64,
pub c: f64,
}
#[wasm_bindgen]
impl PartialTestData {
#[wasm_bindgen(constructor)]
pub fn new(a: u32, b: f64, c: f64) -> Self {
return Self { a, b, c };
}
}
#[wasm_bindgen]
pub fn test_sum(param: Vec<PartialTestData>) -> u64 {
let mut k = 0u64;
for v in param {
k += u64::from(v.a)
}
return k;
}
Replace the index.js file with a call to the function:
import { PartialTestData, test_sum } from "./pkg";
const size = 1_000_000;
const values = new Array(size)
.fill(0)
.map((_, i) => new PartialTestData(i, 0, 0));
alert(test_sum(values));
Run npm install, npm run serve
Open the browser
See error
Expected Behavior
Alert with a result of a computation.
Actual Behavior
An exception in console TypeError: attempting to access detached ArrayBuffer in Firefox or TypeError: Cannot perform DataView.prototype.setUint32 on a detached ArrayBuffer in Chrome
Additional Context
The issue doesn't happen with Rust versions below 1.82. I also didn't test targets other than bundler.
I am facing the same issue. A rollback to wasm-bindgen = "=0.2.92" seems to also fix the issue.
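For reference, the pin is a single Cargo.toml line (assuming wasm-bindgen sits in your [dependencies] table):
[dependencies]
wasm-bindgen = "=0.2.92" # per the comment below, before reference-type transformations were enabled by default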
Note: the bug is specific to the reference type transformations. So it doesn't show up in 0.2.92 because reference type transformations were not enabled by default back then.
|
gharchive/issue
| 2024-12-11T09:35:25 |
2025-04-01T06:40:18.399249
|
{
"authors": [
"Anoromi",
"daxpedda",
"mProjectsCode"
],
"repo": "rustwasm/wasm-bindgen",
"url": "https://github.com/rustwasm/wasm-bindgen/issues/4352",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1309065943
|
Specify cargo build target dir?
💡 Feature description
I want to direct all the intermediate compilation files and the wasm output into a specified directory. I used wasm-pack build <input-dir> --out-dir <wasm-pack-output-dir> -- --target-dir <cargo-target-dir>, but the following error was emitted:
[INFO]: Installing wasm-bindgen...
error: failed reading '<input-dir>\target\wasm32-unknown-unknown\release\<not-important>.wasm'
Caused by:
系统找不到指定的路径。 (os error 3)
^^^^^^^^^^^^^^^
(which means "the system cannot find the specified path")
Seems like wasm-pack always assumes the cargo target output is emitted to <input-dir>/target?
To me it looks like wasm-pack changes directories incorrectly.
I specify CARGO_TARGET_DIR=wasm-target and it ends up compiling everything in the following folder: <workspace_dir>/<crate_dir>/wasm-target (which isn't the correct path).
But then wasm-pack looks for the binary in <workspace_dir>/wasm-target (which is the correct path) and fails.
Fixed in #1331.
|
gharchive/issue
| 2022-07-19T06:53:24 |
2025-04-01T06:40:18.402375
|
{
"authors": [
"drager",
"kaimast",
"shrinktofit"
],
"repo": "rustwasm/wasm-pack",
"url": "https://github.com/rustwasm/wasm-pack/issues/1156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1723803310
|
Add these tutorials to Polkadot.Study
To make these tutorials more visible, it would be neat to have them included in polkadot.study.
Cc @niklasp / niklas@eedee.net
That would be nice! Are you open to it, @tdelabro? I recently included the work-in-progress "Substrate In Bits" (from @abdbee).
With your input I could also create a filterable landing page for rusty crewmates tutorials.
Sure, go ahead!
I wasn't aware of this initiative, but it's great!
I haven't worked with rusty crewmates but it seems this describes it well.
Actually all the things to learn are inside your repository. However I could imagine that an index page would be good. Something similar to this: https://polka.study/tutorials/substrate-in-bits/#table-of-technical-content. With a short description of the exercises and a link to each one. Could you imagine putting something like this together in a markdown file? I would then include your repo as a submodule to make it easier to sync with new content.
Anyway I added your repo to the substrate tag page here: https://polka.study/tutorials/tags/substrate
@tdelabro could you imagine writing an index page and a short description?
What is Rusty Crewmates Substrate Tutorials
How to work with it
Content and description of the different sections
any additional information
So I could publish that as a tutorial itself and direct users to this repo?
Sure. Is it just the text you are expecting, or should it be formatted in a specific way (markdown, web)?
@tdelabro any update here? would be awesome to include it for more visibility. your tutorials are great for new substrate devs!
Hey @niklasp
Do you think this will be good enough?
# Substrate tutorials
Substrate tutorials is a collection of exercises that will teach you the basics of Substrate development and broaden your skill set through real-world use cases.
## Getting started
Go to the tutorial [repository](https://github.com/rusty-crewmates/substrate-tutorials), fork it, clone it and start with the [first exercise](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex00-testing).
Run the crate tests and you will see they fail. Have a look at the `README.md` and code until all tests pass :)
If you want to run your pallet in a real runtime, you can easily edit the `substrate-node-template` and add your pallet to its runtime. It will allow you to interact with your code through tools like [polkadot.js](https://polkadot.js.org/apps/#/explorer).
## Table of content
| | name | objectives |
| ---| ---| ---|
|0| [testing](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex00-testing) | learn how to write simple tests for an existing pallet|
|1| [pallet easy](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex01-pallet-easy)| write a really simple erc20-like pallet |
|2| [runtime](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex02-runtime) | add your pallet to a substrate runtime and launch a node|
|3| [pallet intermediate](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex03-pallet-intermediate)| writing pallets is the bread and butter of Substrate development; let's double down on those basics|
|4| [coupling](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex04-pallet-coupling) | pallets can interact with each other in different, complex ways |
|5| [hooks](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex05-hooks)| Substrate allows you to write hooks that multiply the possibilities of your chain|
|6| [weights](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex06-weights) | in order to incentivize the block consensus authorities, fees are collected on users' transactions |
|7| [imbalances](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex07-imbalances)| the supply of your chain token can vary, but there are some rules to respect when playing with it|
|8| [genesis config](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex08-genesis-config) | you can give your chain an initial state before launching it|
|9| [mock](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex09-mock)| learn how to mock your runtime in order to write handy and powerful tests |
|10| [offchain worker](https://github.com/rusty-crewmates/substrate-tutorials/tree/main/exercises/ex10-offchain-worker) | another hook that allows nodes to do complex async computation in parallel with the chain execution |
## Contribute
This work is open-source, financed by a Web3 Foundation grant, so it really belongs to the community. Feel free to contribute to the repository with anything you think could help others.
Yeah, that's fine. In case you write new tutorials, please ping me so I can update the index.
Sure I will!
|
gharchive/issue
| 2023-05-24T11:50:00 |
2025-04-01T06:40:18.410310
|
{
"authors": [
"niklasp",
"sacha-l",
"tdelabro"
],
"repo": "rusty-crewmates/substrate-tutorials",
"url": "https://github.com/rusty-crewmates/substrate-tutorials/issues/43",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1778258267
|
3.30 updated to 3.31.1
The current version number was shown differently after the firmware update than when it is read over Bluetooth.
Test and see what is shown
Ok leaving it for now
I think we had a similar issue on iOS earlier and we fixed it. If the current version info is shown before actually verifying it from the sensor, it is a totally wrong thing to do and may make the user think that all is OK even if something actually went wrong during the update. I understand that this is minor and that not fixing it is usually the easier solution, but still, let's keep this open in the backlog, test more, and estimate the workload before deciding whether or not to do it.
|
gharchive/issue
| 2023-06-28T05:57:44 |
2025-04-01T06:40:18.415544
|
{
"authors": [
"laurijamsa",
"markoaamunkajo"
],
"repo": "ruuvi/com.ruuvi.station",
"url": "https://github.com/ruuvi/com.ruuvi.station/issues/1043",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
990714648
|
Regenerate man page
Was forgotten in 2714cfce91fb0fa4197f92a20a52dd475c20f70d.
Thanks.
|
gharchive/pull-request
| 2021-09-08T04:56:17 |
2025-04-01T06:40:18.423791
|
{
"authors": [
"eNV25",
"rvaiya"
],
"repo": "rvaiya/keyd",
"url": "https://github.com/rvaiya/keyd/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1827254784
|
How to change the default shortcut on Windows?
How can I replace it?
I can't find any configuration option to change the shortcut.
Let's say I want to change "hint_activation_key" to Win+F.
Editing the config does not work, and I can't find the file to edit.
Config location:
C:\Users\ajf\AppData\Roaming\warpd
#solved
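For anyone landing here, since the fix itself was not shared: a guess at what the config line looks like. The modifier syntax below is an assumption; check man warpd for the exact key names on your platform.
# in the config file under C:\Users\<you>\AppData\Roaming\warpd
# Hypothetical: M- is warpd's meta modifier, which may map to the Win key.
hint_activation_key: M-f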
|
gharchive/issue
| 2023-07-29T01:15:06 |
2025-04-01T06:40:18.425684
|
{
"authors": [
"ajfpay"
],
"repo": "rvaiya/warpd",
"url": "https://github.com/rvaiya/warpd/issues/262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
263915518
|
Example 3 is not working
Hi, when I try to run example 3 I get the following error:
File "/home/paul/Dropbox/SupAero/S5/SDD/Metaheuristiques/ABC/Hive-master/Example3_EvolveAPainting.py", line 170, in compare_images_mse
err = np.sum((source_image.astype("float") - new_image.astype("float")) ** 2)
ValueError: operands could not be broadcast together with shapes (516,370,3) (645,462,3)
Do you know where this could come from?
Really nice code you have here, by the way, and thanks for sharing!
I changed dpi from 80 to 100 and it works perfectly! I think it may be computer dependent.
Best,
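For anyone hitting the same shape mismatch: the ratio of the two shapes (645/516 = 100/80) suggests a figure sized for one dpi but rasterized at another, so the fix presumably lands in the matplotlib figure setup. A hypothetical sketch; the actual variable names in Example 3 differ:
import matplotlib.pyplot as plt

source_height, source_width = 516, 370  # pixel size of the target image

# figsize is in inches, so figsize * dpi must equal the pixel dimensions
# for the rendered canvas to match the source image shape.
fig = plt.figure(figsize=(source_width / 100, source_height / 100), dpi=100)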
|
gharchive/issue
| 2017-10-09T14:29:54 |
2025-04-01T06:40:18.458508
|
{
"authors": [
"PBarde"
],
"repo": "rwuilbercq/Hive",
"url": "https://github.com/rwuilbercq/Hive/issues/1",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1344338868
|
feat(execute): now handling type and enum cases + type-value for enums
Hey, great initiative on doing such a project. At my current company we are making a lot of GQL queries and need to write down mocks quite often, which is getting quite out of control lately.
Your package provides a nice way to tackle that issue.
However, I had to add support for pascalCase, along with having enum values written as Enum.Value and not just Value, because the TypeScript compiler threw a bunch of errors when I assigned the values directly to the enum field.
I also found that your package does not support some other features, like having proper enum values for Union Types; I can open a pull request later on to handle that issue as well.
@gromon42 sorry for the delay, this is great! thanks for the PR. Can you just revert the package-lock.json since I don't see any changes to the package.json and I'll merge and release this. Also thanks for adding a test 🙌
Sure, will do! Thanks for getting back to me!
Also, I have done a lot of modifications to your current codebase to handle several use cases that we are facing at my current company:
We needed to import the types from a type.ts file generated by the typescript-operations package, and we needed several output files instead of just one big file with everything in it. That one big file was breaking the linter and causing a lot of warnings to be triggered in our pipeline, which was really painful when trying to debug a test.
In order to do that, I used graphql-codegen programmatically, along with the visitor pattern they are currently using. I think we could use this pattern as well to simplify the recursive algorithm you use to collect all the fields from the queries.
It would also fix a bug I found in the way you handle interfaces, which are normally generated at runtime. Say I have a query with the following interface implementation and aliases:
query test {
testInterface {
__typename
... on TestInterfaceType1 {
id
foo1: enum
}
... on TestInterfaceType2 {
id
foo2: enum
}
}
}
Normally you would expect this output:
export const testQueryMock = { data: { testInterface: { __typename: 'TestInterfaceType1', id: 'e509e6ea-9fee-442a-9962-587ce7190430', foo1: 'Option1' } } };
But the current codebase is also treating foo2 as a property that need to be added at runtime so we have :
export const testQueryMock = { data: { testInterface: { __typename: 'TestInterfaceType1', id: 'e509e6ea-9fee-442a-9962-587ce7190430', foo1: 'Option1' , foo2: 'Option1' } } };
Which causes a typescript bug if foo1 and foo2 have different types.
Do you think of a way to address such an issue ? I have tried tweaking your code many times to fix it but found myself breaking something in the process
|
gharchive/pull-request
| 2022-08-19T11:51:26 |
2025-04-01T06:40:18.466517
|
{
"authors": [
"gromon42",
"ryan-m-walker"
],
"repo": "ryan-m-walker/graphql-codegen-mock-results",
"url": "https://github.com/ryan-m-walker/graphql-codegen-mock-results/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
548199188
|
Initialize ReadTheDocs Project
Need to claim the namespace for the readthedocs projects.
Create rst file in the docs folder.
Build and post to readthedocs site.
Available in master. Is any work needed on my side to publish the documentation with GitHub Actions?
No, readthedocs has webhooks if you log in with your github account. You gave me sufficient permissions to allow them to pull the content and build the documentation. I just need to troubleshoot the configuration.
The pyartemis project has been deployed on readthedocs. There's some more work to be done in terms of setting up webhooks, but we should first set up a proper deployment strategy before automating publishing documentation. I see no reason to be wasteful of the readthedocs resources.
|
gharchive/issue
| 2020-01-10T17:22:30 |
2025-04-01T06:40:18.484416
|
{
"authors": [
"DominicParent",
"ryanmwhitephd"
],
"repo": "ryanmwhitephd/artemis",
"url": "https://github.com/ryanmwhitephd/artemis/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1174350750
|
Rename Package - Backend Server
Purpose
This is just to outline the changes we had spoken about in naming conventions for the project.
The package containing all of our application code is currently named matrix_backend; we need to change this to app to follow Flask best practices.
Since we are also considering moving away from a bare WSGI setup and instead are favoring gunicorn, which is a WSGI HTTP server for UNIX, we can also rename our wsgi.py file to run.py.
End Product
All code relevant to Matrix Backend should now be contained in the package named app
A developer should be able to boot up our server by running python3 run.py (a minimal sketch follows below).
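A minimal sketch of what run.py could look like, assuming an application factory named create_app in the app package (names are illustrative, not necessarily the repo's actual API):
# run.py
from app import create_app

app = create_app()

if __name__ == "__main__":
    # Local development entry point; gunicorn would target "run:app" instead.
    app.run()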
Good work. Approved.
|
gharchive/issue
| 2022-03-19T20:37:20 |
2025-04-01T06:40:18.507766
|
{
"authors": [
"casaltarelli"
],
"repo": "ryanpepe2000/mscs710-backend",
"url": "https://github.com/ryanpepe2000/mscs710-backend/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
723942484
|
Attributes By Default, SSR Improvements, Bug Fixes
Mostly dom-expressions v0.22.0 updates.
Yes, thanks. This is why you shouldn't leave documentation to the last thing at night when you should be sleeping.
|
gharchive/pull-request
| 2020-10-18T06:53:13 |
2025-04-01T06:40:18.510035
|
{
"authors": [
"ryansolid"
],
"repo": "ryansolid/solid",
"url": "https://github.com/ryansolid/solid/pull/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
572679216
|
Update data_science_colleges.csv
Insert new data science programs
Insert new Spanish data science programs
|
gharchive/pull-request
| 2020-02-28T10:52:48 |
2025-04-01T06:40:18.510854
|
{
"authors": [
"LinoGonzGar"
],
"repo": "ryanswanstrom/awesome-datascience-colleges",
"url": "https://github.com/ryanswanstrom/awesome-datascience-colleges/pull/64",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
262874618
|
CustomLinkedList
Саша Елыков
Coverage increased (+19.6%) to 67.669% when pulling fa48ddf067c02944308dcb3566fb58aad7b468e1 on Russiancold:lecture03 into 16dad782f5d596ae50d0b39665b32df40da60c19 on rybalkinsd:lecture03.
Coverage increased (+21.7%) to 69.767% when pulling e6ac8f9b9899c188ce24cb3ba70740cc85f8579b on Russiancold:lecture03 into 16dad782f5d596ae50d0b39665b32df40da60c19 on rybalkinsd:lecture03.
Well done
|
gharchive/pull-request
| 2017-10-04T17:50:21 |
2025-04-01T06:40:18.522032
|
{
"authors": [
"IVSivak",
"Russiancold",
"coveralls"
],
"repo": "rybalkinsd/atom",
"url": "https://github.com/rybalkinsd/atom/pull/581",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2512707692
|
Create UI Mockup for Add Item Page
About the Add Item Page:
Users should be able to add a new item through a form that includes input fields based on the item database structure (e.g., item name, description, quantity, price, supplier).
The system should display a confirmation message to verify the user's intention to add the new item.
Task:
[ ] Design a form for users to input item details such as name, description, quantity, price, and supplier. Add a confirmation step before saving.
Tasks Done Today:
Added add item page
Contains:
Item Picture
Item Details
Added confirmation step before saving
|
gharchive/issue
| 2024-09-09T01:33:12 |
2025-04-01T06:40:18.531412
|
{
"authors": [
"rykieldc"
],
"repo": "rykieldc/cc17-3k-uti",
"url": "https://github.com/rykieldc/cc17-3k-uti/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1882188374
|
cmake error ( OpenCVConfig.cmake, opencv-config.cmake)
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:4 (find_package):
Could not find a package configuration file provided by "OpenCV" with any
of the following names:
OpenCVConfig.cmake
opencv-config.cmake
Add the installation prefix of "OpenCV" to CMAKE_PREFIX_PATH or set
"OpenCV_DIR" to a directory containing one of the above files. If "OpenCV"
provides a separate development package or SDK, be sure it has been
installed.
-- Configuring incomplete, errors occurred!
Thanks for writing the issue.
If you are using a macOS device, please install OpenCV.
brew install opencv
tempdeltavalue@Ms-MacBook-Pro sam-cpp-macos % cmake --build build
[ 25%] Building CXX object CMakeFiles/sam_cpp_lib.dir/sam.cpp.o
make[2]: *** No rule to make target `/tempdeltavalue/Desktop/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib', needed by `libsam_cpp_lib.dylib'. Stop.
make[1]: *** [CMakeFiles/sam_cpp_lib.dir/all] Error 2
make: *** [all] Error 2
This is our CMakeLists.txt and the folder structure.
In our environment, cmake --build build works.
cmake_minimum_required(VERSION 3.21)
set(CMAKE_CXX_STANDARD 17)
project(SamCPP)
find_package(OpenCV CONFIG REQUIRED)
add_library(sam_cpp_lib SHARED sam.h sam.cpp)
target_include_directories(
sam_cpp_lib PUBLIC
/Users/ryo/Downloads/onnxruntime-osx-universal2-1.15.1/include
)
target_link_libraries(
sam_cpp_lib PUBLIC
/Users/ryo/Downloads/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib
${OpenCV_LIBS}
)
add_executable(sam_cpp_test test.cpp)
target_link_libraries(
sam_cpp_test PRIVATE
sam_cpp_lib
)
@ryouchinsa hmmm... thanks for your reply
log with invocations
Change Dir: '/Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build'
Run Build Command(s): /usr/local/Cellar/cmake/3.27.4/bin/cmake -E env VERBOSE=1 /usr/bin/make -f Makefile
/usr/local/Cellar/cmake/3.27.4/bin/cmake -S/Users/tempdeltavalue/Desktop/test/sam-cpp-macos -B/Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_progress_start /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build/CMakeFiles /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build//CMakeFiles/progress.marks
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/Makefile2 all
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/sam_cpp_lib.dir/build.make CMakeFiles/sam_cpp_lib.dir/depend
cd /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build && /usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_depends "Unix Makefiles" /Users/tempdeltavalue/Desktop/test/sam-cpp-macos /Users/tempdeltavalue/Desktop/test/sam-cpp-macos /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build/CMakeFiles/sam_cpp_lib.dir/DependInfo.cmake "--color="
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/sam_cpp_lib.dir/build.make CMakeFiles/sam_cpp_lib.dir/build
[ 25%] Building CXX object CMakeFiles/sam_cpp_lib.dir/sam.cpp.o
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -Dsam_cpp_lib_EXPORTS -I/Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/include -isystem /usr/local/Cellar/opencv/4.8.0_5/include/opencv4 -std=gnu++17 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -MD -MT CMakeFiles/sam_cpp_lib.dir/sam.cpp.o -MF CMakeFiles/sam_cpp_lib.dir/sam.cpp.o.d -o CMakeFiles/sam_cpp_lib.dir/sam.cpp.o -c /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/sam.cpp
[ 50%] Linking CXX shared library libsam_cpp_lib.dylib
/usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_link_script CMakeFiles/sam_cpp_lib.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -dynamiclib -Wl,-headerpad_max_install_names -o libsam_cpp_lib.dylib -install_name @rpath/libsam_cpp_lib.dylib CMakeFiles/sam_cpp_lib.dir/sam.cpp.o -Wl,-rpath,/Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib -Wl,-rpath,/usr/local/lib -lPUBLIC/Users /Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib /usr/local/lib/libopencv_gapi.4.8.0.dylib /usr/local/lib/libopencv_stitching.4.8.0.dylib /usr/local/lib/libopencv_alphamat.4.8.0.dylib /usr/local/lib/libopencv_aruco.4.8.0.dylib /usr/local/lib/libopencv_bgsegm.4.8.0.dylib /usr/local/lib/libopencv_bioinspired.4.8.0.dylib /usr/local/lib/libopencv_ccalib.4.8.0.dylib /usr/local/lib/libopencv_dnn_objdetect.4.8.0.dylib /usr/local/lib/libopencv_dnn_superres.4.8.0.dylib /usr/local/lib/libopencv_dpm.4.8.0.dylib /usr/local/lib/libopencv_face.4.8.0.dylib /usr/local/lib/libopencv_freetype.4.8.0.dylib /usr/local/lib/libopencv_fuzzy.4.8.0.dylib /usr/local/lib/libopencv_hfs.4.8.0.dylib /usr/local/lib/libopencv_img_hash.4.8.0.dylib /usr/local/lib/libopencv_intensity_transform.4.8.0.dylib /usr/local/lib/libopencv_line_descriptor.4.8.0.dylib /usr/local/lib/libopencv_mcc.4.8.0.dylib /usr/local/lib/libopencv_quality.4.8.0.dylib /usr/local/lib/libopencv_rapid.4.8.0.dylib /usr/local/lib/libopencv_reg.4.8.0.dylib /usr/local/lib/libopencv_rgbd.4.8.0.dylib /usr/local/lib/libopencv_saliency.4.8.0.dylib /usr/local/lib/libopencv_sfm.4.8.0.dylib /usr/local/lib/libopencv_stereo.4.8.0.dylib /usr/local/lib/libopencv_structured_light.4.8.0.dylib /usr/local/lib/libopencv_superres.4.8.0.dylib /usr/local/lib/libopencv_surface_matching.4.8.0.dylib /usr/local/lib/libopencv_tracking.4.8.0.dylib /usr/local/lib/libopencv_videostab.4.8.0.dylib /usr/local/lib/libopencv_viz.4.8.0.dylib /usr/local/lib/libopencv_wechat_qrcode.4.8.0.dylib /usr/local/lib/libopencv_xfeatures2d.4.8.0.dylib /usr/local/lib/libopencv_xobjdetect.4.8.0.dylib /usr/local/lib/libopencv_xphoto.4.8.0.dylib /usr/local/lib/libopencv_shape.4.8.0.dylib /usr/local/lib/libopencv_highgui.4.8.0.dylib /usr/local/lib/libopencv_datasets.4.8.0.dylib /usr/local/lib/libopencv_plot.4.8.0.dylib /usr/local/lib/libopencv_text.4.8.0.dylib /usr/local/lib/libopencv_ml.4.8.0.dylib /usr/local/lib/libopencv_phase_unwrapping.4.8.0.dylib /usr/local/lib/libopencv_optflow.4.8.0.dylib /usr/local/lib/libopencv_ximgproc.4.8.0.dylib /usr/local/lib/libopencv_video.4.8.0.dylib /usr/local/lib/libopencv_videoio.4.8.0.dylib /usr/local/lib/libopencv_imgcodecs.4.8.0.dylib /usr/local/lib/libopencv_objdetect.4.8.0.dylib /usr/local/lib/libopencv_calib3d.4.8.0.dylib /usr/local/lib/libopencv_dnn.4.8.0.dylib /usr/local/lib/libopencv_features2d.4.8.0.dylib /usr/local/lib/libopencv_flann.4.8.0.dylib /usr/local/lib/libopencv_photo.4.8.0.dylib /usr/local/lib/libopencv_imgproc.4.8.0.dylib /usr/local/lib/libopencv_core.4.8.0.dylib
ld: library not found for -lPUBLIC/Users
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libsam_cpp_lib.dylib] Error 1
make[1]: *** [CMakeFiles/sam_cpp_lib.dir/all] Error 2
make: *** [all] Error 2
@ryouchinsa Thanks for a reply
<img width="357" alt="Screenshot 2023-09-15 at 23 24 15" src="https://github.com/ryouchinsa/sam-cpp-macos/assets/36921178/b8d93b5c-fbe0-457c-9790-cdcc2968512c">
Thanks for a reply
cmake list
cmake_minimum_required(VERSION 3.21)
set(CMAKE_CXX_STANDARD 17)
project(SamCPP)
find_package(OpenCV CONFIG REQUIRED)
add_library(sam_cpp_lib SHARED sam.h sam.cpp)
target_include_directories(
sam_cpp_lib PUBLIC
/Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/include
)
target_link_libraries(
sam_cpp_lib PUBLIC/Users
/Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib
${OpenCV_LIBS}
)
add_executable(sam_cpp_test test.cpp)
target_link_libraries(
sam_cpp_test PRIVATE
sam_cpp_lib
)
logs
Change Dir: '/Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build'
Run Build Command(s): /usr/local/Cellar/cmake/3.27.4/bin/cmake -E env VERBOSE=1 /usr/bin/make -f Makefile
/usr/local/Cellar/cmake/3.27.4/bin/cmake -S/Users/tempdeltavalue/Desktop/test/sam-cpp-macos -B/Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_progress_start /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build/CMakeFiles /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build//CMakeFiles/progress.marks
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/Makefile2 all
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/sam_cpp_lib.dir/build.make CMakeFiles/sam_cpp_lib.dir/depend
cd /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build && /usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_depends "Unix Makefiles" /Users/tempdeltavalue/Desktop/test/sam-cpp-macos /Users/tempdeltavalue/Desktop/test/sam-cpp-macos /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build /Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build/CMakeFiles/sam_cpp_lib.dir/DependInfo.cmake "--color="
Dependencies file "CMakeFiles/sam_cpp_lib.dir/sam.cpp.o.d" is newer than depends file "/Users/tempdeltavalue/Desktop/test/sam-cpp-macos/build/CMakeFiles/sam_cpp_lib.dir/compiler_depend.internal".
Consolidate compiler generated dependencies of target sam_cpp_lib
/Applications/Xcode.app/Contents/Developer/usr/bin/make -f CMakeFiles/sam_cpp_lib.dir/build.make CMakeFiles/sam_cpp_lib.dir/build
[ 25%] Linking CXX shared library libsam_cpp_lib.dylib
/usr/local/Cellar/cmake/3.27.4/bin/cmake -E cmake_link_script CMakeFiles/sam_cpp_lib.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -dynamiclib -Wl,-headerpad_max_install_names -o libsam_cpp_lib.dylib -install_name @rpath/libsam_cpp_lib.dylib CMakeFiles/sam_cpp_lib.dir/sam.cpp.o -Wl,-rpath,/Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib -Wl,-rpath,/usr/local/lib -lPUBLIC/Users /Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib /usr/local/lib/libopencv_gapi.4.8.0.dylib /usr/local/lib/libopencv_stitching.4.8.0.dylib /usr/local/lib/libopencv_alphamat.4.8.0.dylib /usr/local/lib/libopencv_aruco.4.8.0.dylib /usr/local/lib/libopencv_bgsegm.4.8.0.dylib /usr/local/lib/libopencv_bioinspired.4.8.0.dylib /usr/local/lib/libopencv_ccalib.4.8.0.dylib /usr/local/lib/libopencv_dnn_objdetect.4.8.0.dylib /usr/local/lib/libopencv_dnn_superres.4.8.0.dylib /usr/local/lib/libopencv_dpm.4.8.0.dylib /usr/local/lib/libopencv_face.4.8.0.dylib /usr/local/lib/libopencv_freetype.4.8.0.dylib /usr/local/lib/libopencv_fuzzy.4.8.0.dylib /usr/local/lib/libopencv_hfs.4.8.0.dylib /usr/local/lib/libopencv_img_hash.4.8.0.dylib /usr/local/lib/libopencv_intensity_transform.4.8.0.dylib /usr/local/lib/libopencv_line_descriptor.4.8.0.dylib /usr/local/lib/libopencv_mcc.4.8.0.dylib /usr/local/lib/libopencv_quality.4.8.0.dylib /usr/local/lib/libopencv_rapid.4.8.0.dylib /usr/local/lib/libopencv_reg.4.8.0.dylib /usr/local/lib/libopencv_rgbd.4.8.0.dylib /usr/local/lib/libopencv_saliency.4.8.0.dylib /usr/local/lib/libopencv_sfm.4.8.0.dylib /usr/local/lib/libopencv_stereo.4.8.0.dylib /usr/local/lib/libopencv_structured_light.4.8.0.dylib /usr/local/lib/libopencv_superres.4.8.0.dylib /usr/local/lib/libopencv_surface_matching.4.8.0.dylib /usr/local/lib/libopencv_tracking.4.8.0.dylib /usr/local/lib/libopencv_videostab.4.8.0.dylib /usr/local/lib/libopencv_viz.4.8.0.dylib /usr/local/lib/libopencv_wechat_qrcode.4.8.0.dylib /usr/local/lib/libopencv_xfeatures2d.4.8.0.dylib /usr/local/lib/libopencv_xobjdetect.4.8.0.dylib /usr/local/lib/libopencv_xphoto.4.8.0.dylib /usr/local/lib/libopencv_shape.4.8.0.dylib /usr/local/lib/libopencv_highgui.4.8.0.dylib /usr/local/lib/libopencv_datasets.4.8.0.dylib /usr/local/lib/libopencv_plot.4.8.0.dylib /usr/local/lib/libopencv_text.4.8.0.dylib /usr/local/lib/libopencv_ml.4.8.0.dylib /usr/local/lib/libopencv_phase_unwrapping.4.8.0.dylib /usr/local/lib/libopencv_optflow.4.8.0.dylib /usr/local/lib/libopencv_ximgproc.4.8.0.dylib /usr/local/lib/libopencv_video.4.8.0.dylib /usr/local/lib/libopencv_videoio.4.8.0.dylib /usr/local/lib/libopencv_imgcodecs.4.8.0.dylib /usr/local/lib/libopencv_objdetect.4.8.0.dylib /usr/local/lib/libopencv_calib3d.4.8.0.dylib /usr/local/lib/libopencv_dnn.4.8.0.dylib /usr/local/lib/libopencv_features2d.4.8.0.dylib /usr/local/lib/libopencv_flann.4.8.0.dylib /usr/local/lib/libopencv_photo.4.8.0.dylib /usr/local/lib/libopencv_imgproc.4.8.0.dylib /usr/local/lib/libopencv_core.4.8.0.dylib
ld: library not found for -lPUBLIC/Users
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[2]: *** [libsam_cpp_lib.dylib] Error 1
make[1]: *** [CMakeFiles/sam_cpp_lib.dir/all] Error 2
make: *** [all] Error 2
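Note the likely culprit in the quoted CMakeLists.txt above: PUBLIC and the first path are fused into PUBLIC/Users, so CMake treats PUBLIC/Users as a library name and emits -lPUBLIC/Users, which is exactly what the linker complains about. A corrected excerpt (the paths are the reporter's own and will differ on other machines):
target_link_libraries(
  sam_cpp_lib PUBLIC
  /Users/tempdeltavalue/Desktop/test/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib
  ${OpenCV_LIBS}
)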
For the SAM feature, please use the Edit -> Create polygon using SAM menu.
For the build error, check that OpenCV_INCLUDE_DIRS and OpenCV_LIBS are correct.
By adding message() after find_package() in CMakeLists.txt, you can print those paths.
cmake_minimum_required(VERSION 3.21)
set(CMAKE_CXX_STANDARD 17)
project(SamCPP)
find_package(OpenCV CONFIG REQUIRED)
message(STATUS "OpenCV_INCLUDE_DIRS = ${OpenCV_INCLUDE_DIRS}")
message(STATUS "OpenCV_LIBS = ${OpenCV_LIBS}")
add_library(sam_cpp_lib SHARED sam.h sam.cpp)
target_include_directories(
sam_cpp_lib PUBLIC
/Users/ryo/Downloads/onnxruntime-osx-universal2-1.15.1/include
)
target_link_libraries(
sam_cpp_lib PUBLIC
/Users/ryo/Downloads/onnxruntime-osx-universal2-1.15.1/lib/libonnxruntime.dylib
${OpenCV_LIBS}
)
add_executable(sam_cpp_test test.cpp)
target_link_libraries(
sam_cpp_test PRIVATE
sam_cpp_lib
)
These are our terminal messages when running cmake -S . -B build.
-- The C compiler identification is AppleClang 14.0.3.14030022
-- The CXX compiler identification is AppleClang 14.0.3.14030022
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenCV: /opt/homebrew/Cellar/opencv/4.8.0_6 (found version "4.8.0")
-- OpenCV_INCLUDE_DIRS = /opt/homebrew/Cellar/opencv/4.8.0_6/include/opencv4
-- OpenCV_LIBS = opencv_calib3d;opencv_core;opencv_dnn;opencv_features2d;opencv_flann;opencv_gapi;opencv_highgui;opencv_imgcodecs;opencv_imgproc;opencv_ml;opencv_objdetect;opencv_photo;opencv_stitching;opencv_video;opencv_videoio;opencv_alphamat;opencv_aruco;opencv_bgsegm;opencv_bioinspired;opencv_ccalib;opencv_datasets;opencv_dnn_objdetect;opencv_dnn_superres;opencv_dpm;opencv_face;opencv_freetype;opencv_fuzzy;opencv_hfs;opencv_img_hash;opencv_intensity_transform;opencv_line_descriptor;opencv_mcc;opencv_optflow;opencv_phase_unwrapping;opencv_plot;opencv_quality;opencv_rapid;opencv_reg;opencv_rgbd;opencv_saliency;opencv_sfm;opencv_shape;opencv_stereo;opencv_structured_light;opencv_superres;opencv_surface_matching;opencv_text;opencv_tracking;opencv_videostab;opencv_viz;opencv_wechat_qrcode;opencv_xfeatures2d;opencv_ximgproc;opencv_xobjdetect;opencv_xphoto
-- Configuring done (7.9s)
-- Generating done (0.0s)
-- Build files have been written to: /Users/ryo/Downloads/sam-cpp-macos/build
Thanks for your detailed feedback.
In our macOS app RectLabel, we are using the universal OpenCV framework.
https://github.com/opencv/opencv/issues/18049
The purpose of this repository is not to show how to use OpenCV in your macOS or iOS apps.
We assume that you can build and run your image processing code using the OpenCV framework.
Using the sam-cpp-macos code, you can run the Segment Anything Model feature using C++ code.
The usability is the same as running the image processing code using the OpenCV framework in your apps.
In July 2023, we started implementing the Segment Anything Model feature in RectLabel.
If we had had this sam-cpp-macos code at that time, we could have released the update more than 2 weeks earlier.
Please let us know your opinion.
Hi, just want you to know that I took this macOS C++ code and put it inside an iOS app and everything is working (thank you a lot for that)
fyi
https://github.com/tempdeltavalue/SceneKitTest
Thanks for letting us know that you could run Segment Anything Model on your iOS app. Other users will be interested in your iOS code.
|
gharchive/issue
| 2023-09-05T15:17:19 |
2025-04-01T06:40:18.582887
|
{
"authors": [
"ryouchinsa",
"tempdeltavalue"
],
"repo": "ryouchinsa/sam-cpp-macos",
"url": "https://github.com/ryouchinsa/sam-cpp-macos/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1965007937
|
Error on "Request Sending..." Step.
[ INFO ] Request sending...
[ FAILED ] Message: javascript error: Cannot read properties of undefined (reading 'click')
(Session info: chrome=118.0.5993.118)
Stacktrace:
GetHandleVerifier [0x00604DE3+43907]
(No symbol) [0x00590741]
(No symbol) [0x004833ED]
(No symbol) [0x00486F6D]
(No symbol) [0x004889D9]
(No symbol) [0x004E5DEB]
(No symbol) [0x004D2B5C]
(No symbol) [0x004E55CA]
(No symbol) [0x004D2956]
(No symbol) [0x004AE17E]
(No symbol) [0x004AF32D]
GetHandleVerifier [0x008B5AF9+2865305]
GetHandleVerifier [0x008FE78B+3163435]
GetHandleVerifier [0x008F8441+3138017]
GetHandleVerifier [0x0068E0F0+605840]
(No symbol) [0x0059A64C]
(No symbol) [0x00596638]
(No symbol) [0x0059675F]
(No symbol) [0x00588DB7]
BaseThreadInitThunk [0x76F0FA29+25]
RtlGetAppContainerNamedObjectPath [0x77717A9E+286]
RtlGetAppContainerNamedObjectPath [0x77717A6E+238]
Let me know if I can provide any more information to debug this issue.
try expanding the chrome window
and take a screenshot and send it to me
The chrome window is stuck in an infinite reload loop.
Try using a VPN.
Nothing like this is happening to me.
I tried with a VPN and without one.
It's probably only an issue on my end. I'll keep trying, but I don't know what else I can really do.
I don't have any suggestions either :(
It should never hang at this step.
|
gharchive/issue
| 2023-10-27T08:16:05 |
2025-04-01T06:40:18.590442
|
{
"authors": [
"SoggyBurritoVR",
"rzc0d3r"
],
"repo": "rzc0d3r/ESET-KeyGen",
"url": "https://github.com/rzc0d3r/ESET-KeyGen/issues/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
106504385
|
Serial version UID
It would be nice to have an annotation to automatically create a field like
private static final long serialVersionUID = 8310165171840482032L;
for all classes which implement java.io.Serializable
This has been discussed before and deemed not worth the effort; see e.g. https://groups.google.com/forum/?hl=en#!searchin/project-lombok/@serializable/project-lombok/RQ6VQRRMY38/jli-UYPSgfgJ
|
gharchive/issue
| 2015-09-15T08:07:43 |
2025-04-01T06:40:18.592387
|
{
"authors": [
"askoning",
"gualtierotesta"
],
"repo": "rzwitserloot/lombok",
"url": "https://github.com/rzwitserloot/lombok/issues/923",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1603821224
|
Error when rendering
Getting the following error on the last step when rendering:
```
Command failed with exit code 1: /Users/.../Downloads/sonic-annotator-1.6-macos/sonic-annotator -t /private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459 -d vamp:qm-vamp-plugins:qm-segmenter:segmentation -w csv /Users/.../Documents/MusicLibrary/File.wav --csv-force
Have audio source: "/Users/.../Documents/MusicLibrary/File.wav"
Decoding File.wav... 1%
Decoding File.wav... 2%
Decoding File.wav... 3%
Decoding File.wav... 4%
Decoding File.wav... 5%
Decoding File.wav... 6%
Decoding File.wav... 7%
Decoding File.wav... 8%
Decoding File.wav... 9%
Decoding File.wav... 10%
Decoding File.wav... 11%
Decoding File.wav... 12%
Decoding File.wav... 13%
Decoding File.wav... 14%
Decoding File.wav... 15%
Decoding File.wav... 16%
Decoding File.wav... 17%
Decoding File.wav... 18%
Decoding File.wav... 19%
Decoding File.wav... 20%
Decoding File.wav... 21%
Decoding File.wav... 22%
Decoding File.wav... 23%
Decoding File.wav... 24%
Decoding File.wav... 25%
Decoding File.wav... 26%
Decoding File.wav... 27%
Decoding File.wav... 28%
Decoding File.wav... 29%
Decoding File.wav... 30%
Decoding File.wav... 31%
Decoding File.wav... 32%
Decoding File.wav... 33%
Decoding File.wav... 34%
Decoding File.wav... 35%
Decoding File.wav... 36%
Decoding File.wav... 37%
Decoding File.wav... 38%
Decoding File.wav... 39%
Decoding File.wav... 40%
Decoding File.wav... 41%
Decoding File.wav... 42%
Decoding File.wav... 43%
Decoding File.wav... 44%
Decoding File.wav... 45%
Decoding File.wav... 46%
Decoding File.wav... 47%
Decoding File.wav... 48%
Decoding File.wav... 49%
Decoding File.wav... 50%
Decoding File.wav... 51%
Decoding File.wav... 52%
Decoding File.wav... 53%
Decoding File.wav... 54%
Decoding File.wav... 55%
Decoding File.wav... 56%
Decoding File.wav... 57%
Decoding File.wav... 58%
Decoding File.wav... 59%
Decoding File.wav... 60%
Decoding File.wav... 61%
Decoding File.wav... 62%
Decoding File.wav... 63%
Decoding File.wav... 64%
Decoding File.wav... 65%
Decoding File.wav... 66%
Decoding File.wav... 67%
Decoding File.wav... 68%
Decoding File.wav... 69%
Decoding File.wav... 70%
Decoding File.wav... 71%
Decoding File.wav... 72%
Decoding File.wav... 73%
Decoding File.wav... 74%
Decoding File.wav... 75%
Decoding File.wav... 76%
Decoding File.wav... 77%
Decoding File.wav... 78%
Decoding File.wav... 79%
Decoding File.wav... 80%
Decoding File.wav... 81%
Decoding File.wav... 82%
Decoding File.wav... 83%
Decoding File.wav... 84%
Decoding File.wav... 85%
Decoding File.wav... 86%
Decoding File.wav... 87%
Decoding File.wav... 88%
Decoding File.wav... 89%
Decoding File.wav... 90%
Decoding File.wav... 91%
Decoding File.wav... 92%
Decoding File.wav... 93%
Decoding File.wav... 94%
Decoding File.wav... 95%
Decoding File.wav... 96%
Decoding File.wav... 97%
Decoding File.wav... 98%
Decoding File.wav... 99%
Decoding File.wav... 100%
Decoding File.wav... Done
File or URL "/Users/.../Documents/MusicLibrary/File.wav" opened successfully
Taking default channel count of 2 from audio file
Taking default sample rate of 44100Hz from audio file
(Note: Default may be overridden by transforms)
[dataquay] BasicStore::clear
[dataquay] BasicStoreSord::import: QUrl("file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459")
[dataquay] namespace: "xsd" -> "http://www.w3.org/2001/XMLSchema#"
[dataquay] namespace: "vamp" -> "http://purl.org/ontology/vamp/"
[dataquay] namespace: "" -> "file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#"
[dataquay] BasicStore::match: "( [] http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://purl.org/ontology/vamp/Transform )"
[dataquay] BasicStore::match result (size 1 ):
[dataquay] 0 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://purl.org/ontology/vamp/Transform )"
[dataquay] BasicStore::clear
[dataquay] BasicStore::match: "( [] http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://purl.org/ontology/vamp/Plugin )"
[dataquay] BasicStore::match result (size 0 ):
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/plugin [] )"
[dataquay] BasicStoreSord::import: QUrl("file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459")
[dataquay] namespace: "xsd" -> "http://www.w3.org/2001/XMLSchema#"
[dataquay] namespace: "vamp" -> "http://purl.org/ontology/vamp/"
[dataquay] namespace: "" -> "file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#"
[dataquay] BasicStore::match: "( [] http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://purl.org/ontology/vamp/Plugin )"
[dataquay] BasicStore::match result (size 1 ):
[dataquay] 0 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform_plugin> http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://purl.org/ontology/vamp/Plugin )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform_plugin> http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( [] http://purl.org/ontology/vamp/available_plugin <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform_plugin> )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform_library> http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/output [] )"
[dataquay] BasicStore::match: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/parameter_binding [] )"
[dataquay] BasicStore::match result (size 4 ):
[dataquay] 0 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/parameter_binding [blank genid1] )"
[dataquay] 1 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/parameter_binding [blank genid3] )"
[dataquay] 2 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/parameter_binding [blank genid5] )"
[dataquay] 3 . "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/parameter_binding [blank genid7] )"
[dataquay] BasicStore::complete: "( [blank genid1] http://purl.org/ontology/vamp/parameter [] )"
[dataquay] BasicStore::complete: "( [blank genid1] http://purl.org/ontology/vamp/value [] )"
[dataquay] BasicStore::complete: "( [blank genid2] http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( [blank genid3] http://purl.org/ontology/vamp/parameter [] )"
[dataquay] BasicStore::complete: "( [blank genid3] http://purl.org/ontology/vamp/value [] )"
[dataquay] BasicStore::complete: "( [blank genid4] http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( [blank genid5] http://purl.org/ontology/vamp/parameter [] )"
[dataquay] BasicStore::complete: "( [blank genid5] http://purl.org/ontology/vamp/value [] )"
[dataquay] BasicStore::complete: "( [blank genid6] http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( [blank genid7] http://purl.org/ontology/vamp/parameter [] )"
[dataquay] BasicStore::complete: "( [blank genid7] http://purl.org/ontology/vamp/value [] )"
[dataquay] BasicStore::complete: "( [blank genid8] http://purl.org/ontology/vamp/identifier [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/program [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/summary_type [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/step_size [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/block_size [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/window_type [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/sample_rate [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/start [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/duration [] )"
[dataquay] BasicStore::complete: "( <file:///private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459#transform> http://purl.org/ontology/vamp/plugin_version [] )"
RDFTransformFactory: NOTE: Transform is:
NOTE: Transform does not specify a sample rate, using default rate of 44100
ERROR: Failed to load plugin for transform "vamp:qm-vamp-plugins:qm-barbeattracker:"
[dataquay] ~World: About to lower refcount from 2
ERROR: Failed to add feature extractor from transform file "/private/var/folders/rp/j7gp6vy531l71r77wq2j88sr0000gp/T/6cc2007c-ca97-4812-a66c-73d6102cd459"
ERROR: Failed to load plugin for transform "vamp:qm-vamp-plugins:qm-segmenter:segmentation"
ERROR: Failed to add default feature extractor for transform "vamp:qm-vamp-plugins:qm-segmenter:segmentation"
sonic-annotator: no feature extractors added
```
@dinohusejnovic have you found a solution for this?
> @dinohusejnovic have you found a solution for this?
No, closed as duplicate as someone else reported as well.
Yes, seems so, but maybe slightly different. Maybe a better error information log would help. I will reopen unless we know more.
fixed in https://github.com/s-a/sonic-sound-picture/releases/tag/1.0.10
|
gharchive/issue
| 2023-02-28T21:11:55 |
2025-04-01T06:40:18.666739
|
{
"authors": [
"dinohusejnovic",
"s-a"
],
"repo": "s-a/sonic-sound-picture",
"url": "https://github.com/s-a/sonic-sound-picture/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2719206625
|
will it work again?
will toofake ever work again or is it gone?
I unfortunately don't really have the time (nor skills) to keep this up :(
Project is always open to contributors!
oh man, so no more toofake? can i ask how come it's not working?
i don't know much about coding or anything but it looks like macedonga's "Beunblurred" project is still working when run locally. would it be possible to copy whatever it is they're doing to keep the login working and put that into toofake if you get the time to?
|
gharchive/issue
| 2024-12-05T03:12:23 |
2025-04-01T06:40:18.669084
|
{
"authors": [
"n7icoo",
"s-alad"
],
"repo": "s-alad/toofake",
"url": "https://github.com/s-alad/toofake/issues/149",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2178125763
|
fix: number of observations format
Hi Alex,
I am fixing a stupid thing I overlooked in formatting the table. Previously I mistakenly made the number of observations display in scientific notation when the number is large.
After the fix, it will be:
Thanks for fixing this Dave!
|
gharchive/pull-request
| 2024-03-11T03:06:11 |
2025-04-01T06:40:18.702057
|
{
"authors": [
"Wenzhi-Ding",
"s3alfisc"
],
"repo": "s3alfisc/pyfixest",
"url": "https://github.com/s3alfisc/pyfixest/pull/347",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2672317963
|
🛑 Motheo Koffiefontein is down
In 7981fb1, Motheo Koffiefontein ($KOFFIEFONTEIN) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Motheo Koffiefontein is back up in ae41a7f after 32 minutes.
|
gharchive/issue
| 2024-11-19T14:12:53 |
2025-04-01T06:40:18.704304
|
{
"authors": [
"s3ase"
],
"repo": "s3ase/squashed",
"url": "https://github.com/s3ase/squashed/issues/190",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2089001115
|
Fixed error: output file for fuzzed data
At line 5853 in afl-fuzz.c, the current fuzzed data is stored in a file named cur_input.java. This is feasible only if the application being targeted is either a Java compiler or an application that accepts Java file types. It causes a crash when the Polyglot fuzzer is run against, say, gcc, because gcc only accepts C-type files.
In line with upstream AFL conventions, the file saved for the target application is now named simply cur_input. The saving logic has been modified accordingly.
Thanks!
|
gharchive/pull-request
| 2024-01-18T20:27:45 |
2025-04-01T06:40:18.722211
|
{
"authors": [
"Changochen",
"Yeaseen"
],
"repo": "s3team/Polyglot",
"url": "https://github.com/s3team/Polyglot/pull/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
354149853
|
The wiki info about installing IBM i prerequisites should be updated
I suggest updating the wiki to reflect the new RPM-based set of deliveries from IBM.
Here is suggested new content for the "Installing IBM i prerequisites" page.
Installing-IBM-i-prerequisites.zip
Here's Jesse's attachment as a diff, in case it helps. Unfortunately github doesn't support PRs for wikis, but at least git apply is trivial too.
commit d94d7c8778d9c9fe5900dd82eecf692a199e5718
Author: Ken Kuhlman <ken.kuhlman@ftr.com>
Date: Tue Jun 4 12:32:17 2019 -0500
Apply Jesse's suggestions for install documentation from issue #4.
diff --git a/Installing-IBM-i-prerequisites.md b/Installing-IBM-i-prerequisites.md
index 13acd07..0d43f9f 100644
--- a/Installing-IBM-i-prerequisites.md
+++ b/Installing-IBM-i-prerequisites.md
@@ -20,11 +20,8 @@ The build system uses Unix and GNU tools which run inside of PASE, so PASE must
 https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_72/rzalf/rzalfinstall.htm
 ## Install IBM i Open Source Technologies
-IBM provides a licensed product and PTFs that install ported versions of some tools used by Bob, such as Bash, rsync, and (optionally) curl. The installation process is a little confusing, but in general the 5733-OPS licensed product is installed first to install product placeholders, and then 5733-OPS PTFs are applied to install the actual content.
+IBM provides the required software in RPM format. See [the open source package manager documentation on bitbucket](http://ibm.biz/ibmi-rpms) for instructions on how to get started.
-IBM provides [information and installation instructions](https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/IBM%20i%20Technology%20Updates/page/Open%20Source%20Technologies) for installing 5733-OPS. It is simplest to follow IBM's recommendation and install all options, even if you don't plan to use everything, but at the very least install options 6 and 7.
-
-When installing the 5733-OPS PTFs, also install PTF [SI64092](http://www-01.ibm.com/support/docview.wss?uid=nas3SI64092), which applies a pertinent fix to the Bash shell.
 ## Install the OpenSSH daemon
 OpenSSH is an open source implementation of the SSH protocol, and is used by Bob to provide a secure connection between PC and IBM i so that source files can be transferred and remote commands issued. From the [OpenSSH website](https://www.openssh.com):
@@ -41,58 +38,28 @@ After installing, start the server:
 ### Make Bash the default shell
-SSH's default shell on the IBM i is the Bourne shell (_see:_ [_IBM PASE for i shells and utilities V7R2_](https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_72/rzalf/rzalfpase.htm)). We recommend changing this to the [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) shell, because Bash is more user-friendly and feature rich. To do this, add these lines to the `sshd_config` file:
-
-```shell
-# ibm pase for IBM i shell
-ibmpaseforishell /QOpenSys/QIBM/ProdData/OPS/tools/bin/bash
-```
-
-The location of `sshd_config` varies by OS version. On V7R2 and higher it is in directory `/QOpenSys/QIBM/UserData/SC1/OpenSSH/etc`. See [this IBM document](http://www-01.ibm.com/support/docview.wss?uid=nas8N1011555) for details.
-
-The SSH server must be restarted for changes to take effect:
-
-```
-===> ENDTCPSVR SERVER(*SSHD)
-===> STRTCPSVR SERVER(*SSHD)
-```
+SSH's default shell on the IBM i is the Bourne shell (_see:_ [_IBM PASE for i shells and utilities V7R2_](https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_72/rzalf/rzalfpase.htm)). We recommend changing this to the [Bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)) shell, because Bash is more user-friendly and feature rich. For steps on how to do this, see [this documentation](https://bitbucket.org/ibmi/opensource/src/master/docs/troubleshooting/SETTING_BASH.md).
 ### Work around Git certificate verification
-OpenSSH on the IBM i doesn't ship with SSL certificates, which mean that out of the box, Git is unable to get repositories via HTTPS. Either certificates must be installed to `/QOpenSys/QIBM/ProdData/SC1/OpenSSL/certs`, which is beyond the scope of this document, or Git can be told not to verify certificates:
+OpenSSL on the IBM i doesn't ship with SSL certificates, which mean that out of the box, Git is unable to get repositories via HTTPS. One work around is to tell git not to verify certificates:
 ```shell
 git config --global http.sslVerify false
 ```
+Alternatively, if you have access to the Internet from your IBM i system, you can use an IBM-provided script that is hosted on GitHub:
+```shell
+curl -k https://gist.githubusercontent.com/kadler/547bb36ddadb9bfec3ff9c16a164a148/raw/c740a1d425b2006f668467baa77d5ecabb274366/git_ssl_setup.sh | sh
+```
 ## Install GNU tools
-Sed, Gawk, Make, and Grep are required. As of this writing, the standard IBM versions of these tools lack features, so the GNU versions that Bob uses can be installed via a script on the [YIPS site](http://yips.idevcloud.com/wiki/index.php/PASE/OpenSourceBinaries#perzl).
-
-Go to the YIPS site, download the `download-2.0.tar.zip` file, and follow the instructions to unzip it, copy the .tar file to the IBM i, expand the tar archive, and install the list of packages. The instructions on that site are a little vague, but there are two updated files (`setup2.sh` and `wwwperzl.sh`) outside of the .tar.zip file that should be copied into the `/QOpenSys/download` directory before running the installer.
-
-Once this is done, and `setup2.sh` has been run, the script can be used to install the GNU tools:
-
-1. Start a shell (`call qp2term` from the i or use `ssh` from PC).
-2. `cd /QOpenSys/download`
-3. Enter `./wwwperzl.sh` to see its help.
-4. Enter `./wwwperzl.sh aix61 list my_package` to see what versions exist of the desired package:
-   ```shell
-   $ ./wwwperzl.sh aix61 list gawk
-   gawk-4.0.1-1.aix5.1.ppc.deps
-   gawk-4.0.2-1.aix5.1.ppc.deps
-   ```
-5. `./wwwperzl.sh aix61 wget my_package` to download the package and its dependencies:
-   ```shell
-   ./wwwperzl.sh aix61 wget gawk-4.0.2-1
-   ```
-6. `./wwwperzl.sh aix61 rpm my_package` to install the package and its dependencies:
-   ```shell
-   ./wwwperzl.sh aix61 rpm gawk-4.0.2-1
-   ```
-
-Perform the above steps to install Gawk, Grep, Sed, and Make. We have been unable to get the 64 bit version of Make (`Make_64`) to work, but the 32 bit version runs perfectly.
-
-_Note:_ IBM has accepted our request to provide native ports of these tools, so this installation step will soon become much easier. For the time being, however, the Perzl packages are the best way to get GNU tools working on the i.
+Sed, Gawk, Make, and Grep are required. As of this writing, the standard IBM versions of these tools lack features, so the GNU versions are required. Now that these are provided in RPM form, it is quite simple to install them. Once you have [the open source environment](http://ibm.biz/ibmi-rpms) set up, install the following packages:
+- `make-gnu`
+- `sed-gnu`
+- `gawk`
+- `grep-gnu`
+- `git`
+- `openssl`
 ## Set the system path
 The IBM open source and GNU tools have now been installed into directories in the IFS, so those directories need to be added to the shell's path so that the tools can be found. The method to do this varies depending on the shell in use (IBM i offers Bourne, Korn, and Bash shells). Following are instructions to set the path in `/QOpenSys/etc/profile`, which gets loaded for all users by the Korn and Bash shells. Adjust as necessary to work with your default shell.
@@ -103,12 +70,8 @@ The IBM open source and GNU tools have now been installed into directories in th
    ```
 2. Copy the following into `/QOpenSys/etc/profile` (using Unix EOL linefeeds, not Windows' CRLF):
    ```shell
-   # Stop PASE core dumps
-   PASE_SYSCALL_NOSIGILL=ALL:quotactl=EPERM:audit=0
-   export PASE_SYSCALL_NOSIGILL
-
    # Set path to find IBM open source ports as well as Perzl AIX binaries
-   PATH="/QOpenSys/QIBM/ProdData/OPS/tools/bin:/opt/freeware/bin:${PATH}"
+   PATH="/QOpenSys/pkgs/bin:${PATH}"
    export PATH
Closing, since this repo is ultimately superseded by https://github.com/ibm/ibmi-bob
|
gharchive/issue
| 2018-08-27T01:29:06 |
2025-04-01T06:40:18.726812
|
{
"authors": [
"ThePrez",
"kskuhlman"
],
"repo": "s4isystems/Bob",
"url": "https://github.com/s4isystems/Bob/issues/4",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1899703876
|
Is there any way to interrupt the process and save images?
When I kill the process (Ctrl+C) I lose all rendered frames.
There isn't.
|
gharchive/issue
| 2023-09-17T08:30:08 |
2025-04-01T06:40:18.771659
|
{
"authors": [
"ManuelMultiverse",
"s9roll7"
],
"repo": "s9roll7/animatediff-cli-prompt-travel",
"url": "https://github.com/s9roll7/animatediff-cli-prompt-travel/issues/33",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
771564093
|
Missing contract description properties
This broke during one of the recent commits:
When compiling a contract to contract_desc.json and then loading the file and passing the object to buildContractClass, desc2CompileResult throws an error because it expects a sources property.
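For reference, here is a minimal sketch of the failing load path (the file name and contract variable are hypothetical, not taken from the report):

```typescript
import { readFileSync } from 'fs';
import { buildContractClass } from 'scryptlib';

// Load a previously compiled contract description from disk.
const desc = JSON.parse(readFileSync('contract_desc.json', 'utf8'));

// buildContractClass calls desc2CompileResult internally, which
// throws here because the loaded description has no `sources` property.
const Demo = buildContractClass(desc);
```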
The sources property will be auto-generated when you recompile the contract.
Got it. The issue was that I didn't have the lastest VSCode extension.
You can upgrade to 0.4.13, which has nice new features.
Getting yet another error when loading from file:
Error: abi constructor params mismatch with args provided. This doesn't happen when the output from compileContract is used.
I have rechecked that I have the most recent version everywhere.
I think the issue is that the compiler extracts the asm out of the sourceMap property (which is empty in the generated file) and ignores the asm property in the generated file: https://github.com/sCrypt-Inc/scryptlib/blob/af2801be18a187525248d7a3173cdca49363f02e/src/compilerWrapper.ts#L518
OK, I will fix it soon.
> I think the issue is that the compiler extracts the asm out of the sourceMap property (which is empty in the generated file) and ignores the asm property in the generated file: https://github.com/sCrypt-Inc/scryptlib/blob/af2801be18a187525248d7a3173cdca49363f02e/src/compilerWrapper.ts#L518
This leads to asm being empty in the loaded contract.
the bug has been fix at pr https://github.com/sCrypt-Inc/scryptlib/pull/54
you update your boilerplate to bump scryptlib to versoin 0.2.25
Thank you! Works fine.
|
gharchive/issue
| 2020-12-20T11:03:22 |
2025-04-01T06:40:18.777293
|
{
"authors": [
"MerlinB",
"zhfnjust"
],
"repo": "sCrypt-Inc/scryptlib",
"url": "https://github.com/sCrypt-Inc/scryptlib/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1216975716
|
Error when compiling solana-contrib
I'm getting the error below while trying to build a project that has anchor-contrib as a dependency. Any ideas?
node_modules/@saberhq/solana-contrib/dist/cjs/transaction/PendingTransaction.d.ts:7:49 - error TS2312: An interface can only extend an object type or intersection of object types with statically known members.
7 export interface TransactionWaitOptions extends OperationOptions {
~~~~~~~~~~~~~~~~
Found 1 error in node_modules/@saberhq/solana-contrib/dist/cjs/transaction/PendingTransaction.d.ts:7
Feels like it is because OperationOptions is defined this way:
export type OperationOptions = WrapOptions | number[];
So it may not be extensible by TransactionWaitOptions because of number[] - but this is a casual lay opinion.
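The restriction is easy to reproduce in isolation. Below is a self-contained sketch with the relevant types inlined (standing in for @types/retry); the intersection shown at the end is one possible workaround, not necessarily the fix the maintainers chose:

```typescript
// Stand-ins for the @types/retry definitions quoted above.
interface WrapOptions {
  retries?: number;
}
type OperationOptions = WrapOptions | number[];

// Reproduces TS2312: an interface can only extend an object type or an
// intersection of object types with statically known members, and
// OperationOptions is a union that includes number[].
// interface TransactionWaitOptions extends OperationOptions {}

// One workaround: model the extension as a type intersection instead,
// which TypeScript does allow for union types.
type TransactionWaitOptions = OperationOptions & {
  useWebsocket?: boolean; // hypothetical extra field
};
```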
Yeah, this was fixed. You may need to refresh your yarn.lock.
Thanks @macalinao, closing this
|
gharchive/issue
| 2022-04-27T08:14:10 |
2025-04-01T06:40:18.791715
|
{
"authors": [
"macalinao",
"moshthepitt"
],
"repo": "saber-hq/saber-common",
"url": "https://github.com/saber-hq/saber-common/issues/570",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1323520330
|
UB inside dependency lockfree 0.5.1 will cause panic in new versions of rust
Hello! I'm coming to this crate from https://github.com/rust-lang/rust/pull/99389 where we're making the checks stricter for uninitialized data, and this crate fails under those checks, due to the lockfree dependency.
It has already been reported, with no response. Best bet would be to either replace it or fork it.
Hey! Thanks for the heads up. Let me consider the options and revisit this.
|
gharchive/issue
| 2022-07-31T17:01:59 |
2025-04-01T06:40:18.823813
|
{
"authors": [
"5225225",
"sachanganesh"
],
"repo": "sachanganesh/eventador-rs",
"url": "https://github.com/sachanganesh/eventador-rs/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1006125364
|
Support multiple default values
We want to be able to use multiple default values for options such as core's --addr and inputs' --dest.
For example, for inputs' --dest, the current default is unix:autoscaler.sock.
We want to change this so that, when --dest is not specified,
unix:autoscaler.sock
unix:/var/run/autoscaler/autoscaler.sock
and so on are searched in order, and the first usable value found is used (see the sketch below).
This will be implemented only for inputs; core will not be changed.
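A rough TypeScript sketch of the intended lookup behavior (the project itself is written in Go, and the helper below is purely illustrative):

```typescript
import { existsSync } from "fs";

// Candidate default destinations, tried in order (from this issue).
const defaults = [
  "unix:autoscaler.sock",
  "unix:/var/run/autoscaler/autoscaler.sock",
];

// Return the first candidate whose socket file exists,
// or undefined if none of the defaults are usable.
function resolveDefaultDest(candidates: string[]): string | undefined {
  return candidates.find((c) => existsSync(c.replace(/^unix:/, "")));
}

console.log(resolveDefaultDest(defaults) ?? "no usable default found");
```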
|
gharchive/issue
| 2021-09-24T06:25:02 |
2025-04-01T06:40:18.827506
|
{
"authors": [
"yamamoto-febc"
],
"repo": "sacloud/autoscaler",
"url": "https://github.com/sacloud/autoscaler/issues/237",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1806468635
|
🛑 PEAKE CW Control is down
In 62b3e87, PEAKE CW Control (https://help.peakesupport.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PEAKE CW Control is back up in bb5b8ab.
|
gharchive/issue
| 2023-07-16T07:14:01 |
2025-04-01T06:40:18.831676
|
{
"authors": [
"sadams0978"
],
"repo": "sadams0978/sam-upptime",
"url": "https://github.com/sadams0978/sam-upptime/issues/50",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1821402547
|
🛑 PEAKE CW Control is down
In dcbf828, PEAKE CW Control (https://help.peakesupport.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: PEAKE CW Control is back up in 1288dd4.
|
gharchive/issue
| 2023-07-26T01:03:48 |
2025-04-01T06:40:18.834401
|
{
"authors": [
"sadams0978"
],
"repo": "sadams0978/sam-upptime",
"url": "https://github.com/sadams0978/sam-upptime/issues/54",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1516538121
|
[safe-apps-wagmi] Support wagmi 0.8.x
I see the version range is "@wagmi/core": ">= 0.3.8 < 0.8.0" (explicitly upper bounded) but wagmi has been on 0.8.x for about a month. A notable change is that it's pure ESM now https://github.com/wagmi-dev/wagmi/pull/1173 so it might not be a trivial change.
Have you guys already looked into this? I noticed that @mikhailxyz created a branch for 0.8.x but deleted it.
this pr was wrongfully closed: https://github.com/safe-global/safe-apps-sdk/issues/395
+1 on this one.
+1 too, would be great to have this supported!
I have created this pr to support up to wagmi 10.10: https://github.com/safe-global/safe-apps-sdk/pull/441
I have tested this approach and would be happy to show how exactly I was able to resolve the issue
I see that it got updated but it doesn't seem quite right to me. wagmi moved to pure ESM modules while this package is still outputting CommonJS format with require's.
https://github.com/wagmi-dev/wagmi/blob/main/packages/core/package.json
@kasparkallas why would wagmi's module system influence the module system we use? I'm not getting it. It's a library not a bundler
@gnosis.pm/safe-apps-wagmi@2.1.0 now supports wagmi 0.8.x
> @kasparkallas why would wagmi's module system influence the module system we use? I'm not getting it. It's a library, not a bundler
Good question, I'm not an expert on the matter, I'm trying to figure it out.
I'm using Next.js and running into this on next build:
Collecting page data .Error [ERR_REQUIRE_ESM]: require() of ES Module /my-dapp/node_modules/@wagmi/core/dist/index.js from /my-dapp/node_modules/@gnosis.pm/safe-apps-wagmi/dist/index.js not supported.
Instead change the require of /my-dapp/node_modules/@wagmi/core/dist/index.js in /my-dapp/node_modules/@gnosis.pm/safe-apps-wagmi/dist/index.js to a dynamic import() which is available in all CommonJS modules.
next build does pre-rendering while running in Node runtime without any bundling (I think). It gets to @gnosis.pm/safe-apps-wagmi which tries to load a ES module with requirestatement.
But this is what the Node docs state (https://nodejs.org/api/esm.html#interoperability-with-commonjs):
Messing around with next.config.js, trying to transpile @gnosis.pm/safe-apps-wagmi and using experimental.esmExternals: 'loose', I run into the same error in another shape:
Module not found: ESM packages (@wagmi/core) need to be imported. Use 'import' to reference the package instead. https://nextjs.org/docs/messages/import-esm-externals
I could probably theoretically work around this by using Next.js' dynamic import for @gnosis.pm/safe-apps-wagmi but the problem with this is that wagmi & @rainbow-me/rainbowkit wrap the whole React component tree and are set up synchronously with the connectors.
I know this is an annoying matter. Maybe @tmm can give us pointers? :pray:
@kasparkallas thanks for the explanation. good news - Safe connector will be included in wagmi's references repo and will be distributed alongside other wagmi connectors
Just a small follow-up that using the Safe connector from wagmi's references repo worked without any problems! :pray:
|
gharchive/issue
| 2023-01-02T16:09:02 |
2025-04-01T06:40:18.853910
|
{
"authors": [
"bannik",
"cruzdanilo",
"kasparkallas",
"mihoward21",
"mikhailxyz",
"mwawrusch"
],
"repo": "safe-global/safe-apps-sdk",
"url": "https://github.com/safe-global/safe-apps-sdk/issues/432",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2265577155
|
Fix @typescript-eslint/no-base-to-string instances
Summary
When updating to ESLint 9, some rules had to be ignored as their recommendations changed; among them was the @typescript-eslint/no-base-to-string rule.
This removes the instances where values that cannot be meaningfully stringified were being stringified.
Changes
Enable @typescript-eslint/no-base-to-string ESLint rule
Stringify addresses, not objects of signers (fix pattern sketched below), in:
routes/email/guards/email-edit.guard.spec.ts
routes/email/guards/email-retrieval.guard.spec.ts
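An illustrative sketch of the fix pattern (the variable names are hypothetical and not the exact test code):

```typescript
import { Wallet } from 'ethers';

const signer = Wallet.createRandom();

// Before: interpolating the signer object itself trips
// @typescript-eslint/no-base-to-string, because Wallet has no
// meaningful toString().
// const subject = `${signer}`;

// After: stringify the address, not the object.
const subject = signer.address;
```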
Pull Request Test Coverage Report for Build 8847493600
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage decreased (-0.01%) to 91.871%
Totals
Change from base Build 8847215532:
-0.01%
Covered Lines:
6890
Relevant Lines:
7233
💛 - Coveralls
|
gharchive/pull-request
| 2024-04-26T11:23:18 |
2025-04-01T06:40:18.861024
|
{
"authors": [
"coveralls",
"iamacook"
],
"repo": "safe-global/safe-client-gateway",
"url": "https://github.com/safe-global/safe-client-gateway/pull/1465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1298802828
|
Redis: Max Number of Clients Reached
Describe the bug
We are facing a "Redis: Max Number of Clients Reached" error on the web instance of the transaction service
Multiple endpoints seem to be affected (all-transactions, balances)
Workers seem to be ok
Expected behavior
No issues
Proposed solution
Add limit for the Redis Connection Pool and switch to BlockingConnectionPool: https://redis-py.readthedocs.io/en/stable/connections.html#connection-pools
Add timeout for connections on the server and add health-check: https://github.com/redis/redis-py#connections . I don't think this will work.
Increase number of connections on the server
Additional context
Last 8 hours
Last week
We are going to increment redis maxclients and check how it works
@luarx @moisses89 @fmrsabino Do you think we can close this for now?
|
gharchive/issue
| 2022-07-08T10:12:45 |
2025-04-01T06:40:18.868850
|
{
"authors": [
"Uxio0",
"luarx"
],
"repo": "safe-global/safe-transaction-service",
"url": "https://github.com/safe-global/safe-transaction-service/issues/951",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
159607541
|
multiple: false is handled incorrectly
If multiple: false is specified, an error occurs when you try to upload a single file again, because the following condition is triggered:
if ($this._duItemsCount === set.limit) {
$this.trigger(limitEvent);
return false;
}
As a result, re-uploading a single file becomes impossible.
I fixed it on my side like this:
if (set.limit > 1 && $this._duItemsCount === set.limit) {
Thanks for the report. Could you submit a pull request?
I couldn't reproduce the described bug: setting {multiple: false, limit: 1} in the demo, everything works as it should. So either something is configured incorrectly on your side, or the case needs to be described more precisely.
|
gharchive/issue
| 2016-06-10T10:39:23 |
2025-04-01T06:40:18.929549
|
{
"authors": [
"safronizator",
"shulya"
],
"repo": "safronizator/damnUploader",
"url": "https://github.com/safronizator/damnUploader/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1179573864
|
JOLL-47: Frontend: Medical record list data
@kkinggmann @SalamanderHP review t
@SalamanderHP @kkinggmann updated
|
gharchive/pull-request
| 2022-03-24T14:10:19 |
2025-04-01T06:40:18.932357
|
{
"authors": [
"sagara11"
],
"repo": "sagara11/JustOneLife",
"url": "https://github.com/sagara11/JustOneLife/pull/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
421327877
|
LanguageClient-neovim + bingo cannot display error messages
Completion, go-to-definition, and other features work perfectly, but error messages are not displayed. Is there something wrong with my configuration? Thanks.
It works in VSCode, which shows golint being used.
@hujianxin
The configuration you posted is incomplete. From the look of it, could it be that error messages are only shown on save?
Sorry, I accidentally pasted something useless into the screenshot.
It still doesn't work like this.
langserver-go: reading on stdin, writing on stdout
--> request #1: initialize: {"capabilities":{"textDocument":{"colorProvider":null,"completion":{"completionItem":{"snippetSupport":false}},"signatureHelp":{"signatureInformation":{"parameterInformation":{"labelOffsetSupport":true}}}},"workspace":{"applyEdit":true,"didChangeWatchedFiles":{"dynamicRegistration":true}}},"processId":5170,"rootPath":"/Users/hujianxin/Tmp/testgo","rootUri":"file:///Users/hujianxin/Tmp/testgo","trace":"off"}
Passing an initialize rootPath URI ("file:///Users/hujianxin/Tmp/testgo") is deprecated. Use rootUri instead.
<-- notif: window/logMessage: {"type":3,"message":"GOPATH: [/Users/hujianxin/go], import path: "}
<-- notif: window/logMessage: {"type":3,"message":"GO111MODULE=auto, module mode"}
<-- notif: window/logMessage: {"type":3,"message":"/Users/hujianxin/Tmp/testgo/go.mod"}
<-- notif: window/showMessage: {"type":3,"message":"load /Users/hujianxin/Tmp/testgo successfully! elapsed time: 0 seconds, cache: true, go module: true."}
<-- result #1: initialize: {"capabilities":{"textDocumentSync":2,"hoverProvider":true,"completionProvider":{"triggerCharacters":["."]},"signatureHelpProvider":{"triggerCharacters":["(",","]},"definitionProvider":true,"typeDefinitionProvider":true,"referencesProvider":true,"documentSymbolProvider":true,"workspaceSymbolProvider":true,"implementationProvider":true,"documentFormattingProvider":true,"documentRangeFormattingProvider":true,"renameProvider":true,"xworkspaceReferencesProvider":true,"xdefinitionProvider":true,"xworkspaceSymbolByProperties":true}}
--> notif: initialized: {}
--> notif: textDocument/didOpen: {"textDocument":{"languageId":"go","text":"package main\n\nimport \"fmt\"\n\nfunc hello() {\n\tfmt.Println(\"hello\")\n}\n\nfunc main() {\n a := 1\n\tfmt.Println(\"hello\")\n}\n","uri":"file:///Users/hujianxin/Tmp/testgo/test.go","version":0}}
--> notif: textDocument/didSave: {"textDocument":{"uri":"file:///Users/hujianxin/Tmp/testgo/test.go"}}
--> notif: textDocument/didSave: {"textDocument":{"uri":"file:///Users/hujianxin/Tmp/testgo/test.go"}}
--> notif: textDocument/didSave: {"textDocument":{"uri":"file:///Users/hujianxin/Tmp/testgo/test.go"}}
--> notif: exit: null
This is the log output.
I don't understand the rest of the configuration, but after switching the LanguageServer to gopls and reopening the Go file, linting works.
Also, linting works when writing Python.
So it feels like there may be a communication problem between bingo and the client?
I ran into the same problem: https://github.com/neoclide/coc.nvim/issues/570
bingo should be returning error messages correctly; you can set the --trace flag and inspect bingo's responses.
Currently both VSCode and coc.nvim handle this correctly. VSCode handles it best, but every client's handling mechanism is different, so I personally think this is caused by the client-side handling.
|
gharchive/issue
| 2019-03-15T02:40:08 |
2025-04-01T06:40:18.960429
|
{
"authors": [
"hujianxin",
"jackielii",
"saibing"
],
"repo": "saibing/bingo",
"url": "https://github.com/saibing/bingo/issues/152",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|