id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1315781799
|
Add support for -vv flag
Proposed changes
This PR implements the -vv flag as described in https://github.com/projectdiscovery/proxify/issues/139.
Checklist
[x] Pull request is created against the dev branch
[ ] All checks passed (lint, unit/integration/regression tests etc.) with my changes
[ ] I have added tests that prove my fix is effective or that my feature works
[x] I have added necessary documentation (if appropriate)
@mjkim610 Please add the exe file to the .gitignore file.
PR reopened in https://github.com/projectdiscovery/proxify/pull/142
|
gharchive/pull-request
| 2022-07-24T01:27:47 |
2025-04-01T04:35:34.851202
|
{
"authors": [
"gy741",
"mjkim610"
],
"repo": "projectdiscovery/proxify",
"url": "https://github.com/projectdiscovery/proxify/pull/141",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
792456835
|
Added details of new hackerone programs
Adding new H1 programs OpenMage, Cirrus Insight, Panther Labs.
Please let me know if any changes are needed.
Thank you @nikhilgeo for adding new programs to the list.
|
gharchive/pull-request
| 2021-01-23T06:48:55 |
2025-04-01T04:35:34.852475
|
{
"authors": [
"bauthard",
"nikhilgeo"
],
"repo": "projectdiscovery/public-bugbounty-programs",
"url": "https://github.com/projectdiscovery/public-bugbounty-programs/pull/146",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1167427565
|
adding DEVELOPER_README.md
Some quick notes to help folks get started, ref #246
I also updated the Wiki and main README, thanks!
|
gharchive/pull-request
| 2022-03-12T23:29:13 |
2025-04-01T04:35:34.896261
|
{
"authors": [
"LukePrior",
"yaleman"
],
"repo": "projecthorus/sondehub-tracker",
"url": "https://github.com/projecthorus/sondehub-tracker/pull/247",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
282474023
|
Move default value for tag outside of SidecarProperties.java
Would be better for release process not to have to change Java code
good point - we can add application.properties and make it a required property from the code perspective
|
gharchive/issue
| 2017-12-15T16:08:26 |
2025-04-01T04:35:34.903038
|
{
"authors": [
"markfisher",
"trisberg"
],
"repo": "projectriff/function-controller",
"url": "https://github.com/projectriff/function-controller/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
278642143
|
messages sent from KafkaTemplate are missing embedded headers
the FunctionController sends messages to the "function-replicas" topic from within FunctionDeployer like so:
public <T> void publish(String topic, Object event) {
    try {
        byte[] payload = this.mapper.writeValueAsString(event).getBytes(StandardCharsets.UTF_8.name());
        Message<byte[]> message = MessageBuilder.withPayload(payload)
                .setHeader(MessageHeaders.CONTENT_TYPE, "text/plain")
                .build();
        byte[] bytes = EmbeddedHeaderUtils.embedHeaders(new MessageValues(message));
        this.kafkaTemplate.send(topic, bytes);
    }
    catch (Exception e) {
        logger.warn("failed to publish event", e);
    }
}
But when the message is deserialized in the function-sidecar no headers are present:
[redis-writer-1800636090-t9rzc sidecar] 2017/12/02 01:26:00 >>> Message{{"square":1}, map[]}
[redis-writer-1800636090-t9rzc sidecar] 2017/12/02 01:26:00 Wrapper received Message{{"square":1}, map[]}
this was actually an issue in function-controller which needed to pass the header name in the header-embedding method: https://github.com/projectriff/function-controller/commit/4406b4784f81e9e5f111fd5125fac7ff26d6060a
|
gharchive/issue
| 2017-12-02T01:27:30 |
2025-04-01T04:35:34.905434
|
{
"authors": [
"markfisher"
],
"repo": "projectriff/function-sidecar",
"url": "https://github.com/projectriff/function-sidecar/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
191053724
|
Removed hard coded email address
Removed hard coded email address; Made email config (email.yml) server specific.
Reviving this change for CQL in a new pr.
|
gharchive/pull-request
| 2016-11-22T16:23:54 |
2025-04-01T04:35:34.909194
|
{
"authors": [
"holmesie"
],
"repo": "projecttacoma/bonnie",
"url": "https://github.com/projecttacoma/bonnie/pull/593",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
243173855
|
Bonnie 820 cql to elm exception
Bonnie bundler support of cql translation exception reporting.
JIRA test can be found at https://jira.mitre.org/browse/BONNIE-831
we don't know when we can incorporate this fix
|
gharchive/pull-request
| 2017-07-15T12:57:44 |
2025-04-01T04:35:34.910404
|
{
"authors": [
"alexanderelliott121",
"c-monkey"
],
"repo": "projecttacoma/bonnie_bundler",
"url": "https://github.com/projecttacoma/bonnie_bundler/pull/92",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2300047004
|
System Firmware Revision throws exception
System Firmware Revision info is somehow not available on all firmwares, and it throws an exception that also prevents the entire metric collection. It could fall back to Firmware Revision if System Firmware Version is missing from the bmc-info command output. You can see the outputs from Dell and Super Micro machines below for the missing System Firmware Version (a small parsing sketch follows the two outputs).
Dell (Has SFR)
Manufacturer ID : Dell Inc. (674)
Product ID : <redacted>
Auxiliary Firmware Revision Information : <redacted>
Device GUID : <redacted>
System GUID : <redacted>
System Firmware Version : 2.17.1
System Name :
Primary Operating System Name :
Operating System Name :
Present OS Version Number :
BMC URL : https://172.....
Super Micro (Does not have SFR)
Manufacturer ID : Super Micro Computer Inc. (...)
Product ID : <redacted>
Auxiliary Firmware Revision Information : <redacted>
Device GUID : <redacted>
System GUID : <redacted>
Channel Information
Channel Number : 0
Medium Type : IPMB (I2C)
Protocol Type : IPMB-1.0
Active Session Count : 0
Session Support : session-less
Vendor ID : Intelligent Platform Management Interface forum (...)
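A minimal sketch of that fallback, assuming the bmc-info output is parsed into a simple key/value map; the helper functions below are illustrative only, not ipmi_exporter code:
from typing import Optional

def parse_bmc_info(output: str) -> dict:
    """Parse 'Key : Value' lines of bmc-info output into a dict."""
    info = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

def firmware_version(info: dict) -> Optional[str]:
    """Prefer System Firmware Version, fall back to Firmware Revision."""
    return info.get("System Firmware Version") or info.get("Firmware Revision")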
Thanks for bringing this up, but I am a bit confused. The System Firmware Version is already optional exactly for this reason, it just logs a message if absent. Are you sure that this is what brings down the exporter? Or is it maybe some other attribute that is missing? Maybe the Firmware Revision is what's missing? I can't tell from the data you provide, it seems to be cropped at the top...
If in doubt, can you maybe post the exporter log of a scrape attempt?
Hi @bitfehler somehow I was using the outdated version of the collector, I don't even have bmc_collector. I will try this again, and will inform you here about the status.
Recent version just fixed the issue, thanks.
|
gharchive/issue
| 2024-05-16T10:41:50 |
2025-04-01T04:35:34.924103
|
{
"authors": [
"bitfehler",
"huseyinbabal"
],
"repo": "prometheus-community/ipmi_exporter",
"url": "https://github.com/prometheus-community/ipmi_exporter/issues/194",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1104682525
|
Release 0.4.0
[FEATURE] Add support for HTTP POST body content #123
Signed-off-by: SuperQ superq@gmail.com
@rustycl0ck Thanks! I pushed the tag. The build should pop out of CI in a bit.
|
gharchive/pull-request
| 2022-01-15T11:14:05 |
2025-04-01T04:35:34.925595
|
{
"authors": [
"SuperQ"
],
"repo": "prometheus-community/json_exporter",
"url": "https://github.com/prometheus-community/json_exporter/pull/136",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1508014549
|
Emacs Support
When will support for Emacs be available?
In theory it should be possible to use the language server as is with LSP Mode, similar to the configuration example provided in the README for vim.
How to configure new language servers with Emacs LSP Mode is described here. I currently don't have the capacity to implement this myself but would be happy to help if anyone wants to try it.
|
gharchive/issue
| 2022-12-22T14:44:59 |
2025-04-01T04:35:34.926924
|
{
"authors": [
"jwillker",
"slrtbtfs"
],
"repo": "prometheus-community/promql-langserver",
"url": "https://github.com/prometheus-community/promql-langserver/issues/255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1300806263
|
fix device regex adjusted for aks and eks
Signed-off-by: zanhsieh zanhsieh@gmail.com
Description
Fix the dashboard templates used in kube-prometheus-stack.
See:
https://github.com/prometheus-community/helm-charts/pull/2154
Type of change
[X] BUGFIX (non-breaking change which fixes an issue)
Changelog entry
Add (/dev/)? in diskDeviceSelector in
jsonnet/kube-prometheus/components/k8s-control-plane.libsonnet
jsonnet/kube-prometheus/components/node-exporter.libsonnet
Re-generate the dashboard templates
Fix the dashboard templates used in kube-prometheus-stack.
See:
https://github.com/prometheus-community/helm-charts/pull/2154
any idea when this will be merged?
@paulfantom
Would you mind taking a look please? 🙇♂️
|
gharchive/pull-request
| 2022-07-11T14:37:24 |
2025-04-01T04:35:34.930671
|
{
"authors": [
"alexey-boyko",
"zanhsieh"
],
"repo": "prometheus-operator/kube-prometheus",
"url": "https://github.com/prometheus-operator/kube-prometheus/pull/1810",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
519325563
|
Notify for alerts failed because of context deadline exceeded
We upgraded alertmanager from v0.14 to v0.16.
Configuration Changes:
Changed HA configuration (mesh to cluster).
Added proxy_url in http_config with our proxy.
Alertmanager (go1.11.13) is failing to send Slack notifications; we can see the following error in the logs.
System information:
Oracle Linux Server release 7.4
Alertmanager version:
v0.16 ( GoVersion - go1.11.13)
Prometheus version:
v2.2.0 ( GoVersion - go1.10.1)
Log Messages
level=error ts=2019-11-05T09:55:38.377773199Z caller=notify.go:332 component=dispatcher msg="Error on notify" err="Post https://hooks.slack.com/services/XXXX/XXXX/XXXX: context deadline exceeded"
level=error ts=2019-11-05T09:55:38.377861637Z caller=dispatch.go:177 component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="Post https://hooks.slack.com/services/XXXX/XXXX/XXXX: context deadline exceeded"
Do we need to change any other configuration in this upgrade?
Thanks for your report. It looks as if this is actually a question about usage and not development. The context deadline exceeded message means that Alertmanager can't connect to the Slack API. Given that you use a proxy, I would check there.
To make your question, and all replies, easier to find, we suggest you move this over to our user mailing list, which you can also search. If you prefer more interactive help, join our IRC channel, #prometheus on irc.freenode.net. Please be aware that our IRC channel has no logs, is not searchable, and that people might not answer quickly if they are busy or asleep. If in doubt, you should choose the mailing list.
Once your questions have been answered, please add a short line pointing to relevant replies in case anyone stumbles here via a search engine in the future.
|
gharchive/issue
| 2019-11-07T15:09:13 |
2025-04-01T04:35:34.936952
|
{
"authors": [
"simonpasquier",
"vishnu-vardhan-reddy"
],
"repo": "prometheus/alertmanager",
"url": "https://github.com/prometheus/alertmanager/issues/2095",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
476843340
|
CollectorRegistry(auto_describe=True) problem - registry with auto_describe does not process several metrics while sending to pushgateway
Hi, I'm working on a modified version of django-prometheus that is able to send metrics to pushgateway. I've encountered a very strange problem - once the registry is initialized with auto_describe it does not store metrics defined like:
requests_by_view_transport_method = Counter(
'django_http_requests_total_by_view_transport_method',
'Count of requests by view, transport, method.',
['view', 'transport', 'method'],
)
In fact, once metrics are also served like:
from prometheus_client import make_wsgi_app
from wsgiref.simple_server import make_server
app = make_wsgi_app()
httpd = make_server('', 8000, app)
httpd.serve_forever()
those metrics are present, but once doing
def PushMetrics(registry, name="django"):
push_to_gateway(
f"{settings.PUSHGATEWAY_HOST}",
job=name,
grouping_key={
'container': socket.gethostname(),
'service': settings.SERVICE,
},
registry=registry,
)
PushMetrics(prometheus_client.REGISTRY, f"django-info")
they are not visible in pushgateway:port/metrics
Firstly, this is not a recommended use of the pushgateway.
Secondly, have you checked exactly what's going on in the pgw interaction?
Ok, maybe it's not a recommended use of pushgateway, but it works and I haven't found anything at least promising - so I chose pgw.
I've debugged it a little and I've found that creating a custom registry without auto_describe and applying that registry to the metric definitions works perfectly when pushing to pgw (see the sketch below).
btw. the problem that I'm trying to solve requires gathering metrics from swarm services like scaled python apps, so I'm not able to gather data in the 'usual prometheus way' - to scrape every swarm task separately, because the swarm LB routes the traffic to random tasks.
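For reference, a minimal sketch of that working approach: a custom CollectorRegistry without auto_describe, a metric registered against it explicitly, and the registry pushed to the gateway (the gateway address and job name are placeholders):
from prometheus_client import CollectorRegistry, Counter, push_to_gateway

# Custom registry; auto_describe left at its default (False).
registry = CollectorRegistry()

requests_by_view_transport_method = Counter(
    'django_http_requests_total_by_view_transport_method',
    'Count of requests by view, transport, method.',
    ['view', 'transport', 'method'],
    registry=registry,
)

requests_by_view_transport_method.labels('index', 'http', 'GET').inc()

# Push the whole registry to the gateway under a single job name.
push_to_gateway('pushgateway:9091', job='django', registry=registry)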
Sounds like you need a swarm SD.
If you could link me some article that speaks about gathering application metrics from swarm and swarm SD I'd be thankful. I've been researching this topic for 2 months and pgw is the best I've found so far.
PS1: why is the usage of pgw not recommended? 😅
PS2: this is still a bug in pgw - it does not push all metrics while in autodiscovery mode, what do we do about it?
https://prometheus.io/docs/practices/pushing/
Where the bug is here is not determined.
Ok, so I've done my research; now the prom instance is running within swarm and it uses dns_sd_configs to get the container list. Fun fact - it still does not work as supposed. The metric requests_by_view_transport_method (mentioned earlier) is still not correctly gathered by the Swarm Prometheus scrape. However, the metric is present in the container's metrics when manually reaching container:port/metrics. Any ideas?
Swarm prom config
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'PROMETHEUS'
    scrape_interval: 15s
    static_configs:
      - targets:
          - 'localhost:9090'
  - job_name: 'django-some-backend'
    dns_sd_configs:
      - names:
          - 'tasks.XXX-YYY-ZZZ-swarm_some-backend'
        type: 'A'
        port: 10000
Sounds like there's something multi-process going on, or tricky networking.
Yes, it was exactly a multi-process issue, thanks for the help, I've found what I was looking for. I discovered that the prom_client http server was not really working with django middleware metrics 😅 that should be highlighted somewhere (in fact it is, but deep in the code). Closing, thanks for the help.
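For anyone hitting the same multi-process pitfall, a rough sketch of prometheus_client's multiprocess mode (the directory is a placeholder; newer client versions read PROMETHEUS_MULTIPROC_DIR, older ones used prometheus_multiproc_dir):
import os
# Must point to a writable directory and be set before the worker processes start.
os.environ.setdefault('PROMETHEUS_MULTIPROC_DIR', '/tmp/prom_multiproc')

from prometheus_client import CollectorRegistry, make_wsgi_app, multiprocess

# Aggregate samples written by all worker processes into one registry for scraping.
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)
app = make_wsgi_app(registry)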
|
gharchive/issue
| 2019-08-05T12:54:29 |
2025-04-01T04:35:34.944177
|
{
"authors": [
"amadeuszkryze",
"brian-brazil"
],
"repo": "prometheus/client_python",
"url": "https://github.com/prometheus/client_python/issues/444",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1112479558
|
Implement Summary metric
Hi! Thank you for the project :)
I found myself missing an implementation of the Summary metric, so decided to file an issue in case anyone (maybe myself) decides to contribute an implementation.
The Open Metrics spec defines a metric type that computes quantiles locally on the client: Summary.
It's quite useful if you want to learn/discover how a system behaves, especially if you don't have much data a priori. In that sense, Summary is dual to Histogram - both can be used to understand data distribution (e.g. latency data), but with different use-cases and tradeoffs.
A good overview of the differences between Summary and Histogram metrics is given in the Prometheus docs https://prometheus.io/docs/practices/histograms/
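As a rough illustration of what a Summary reports (a naive sketch only, not the CKMS-based approach discussed below): it tracks a count, a sum, and configured quantiles over the observed values.
class NaiveSummary:
    """Toy Summary: exact quantiles over all observations (no window, no CKMS)."""

    def __init__(self, quantiles=(0.5, 0.9, 0.99)):
        self.quantiles = quantiles
        self.values = []

    def observe(self, value: float) -> None:
        self.values.append(value)

    def collect(self) -> dict:
        ordered = sorted(self.values)
        sample = {"count": len(ordered), "sum": sum(ordered)}
        for q in self.quantiles:
            if ordered:
                sample[f"quantile={q}"] = ordered[min(int(q * len(ordered)), len(ordered) - 1)]
        return sample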
Thank you for the project :)
Glad that it is useful. :heart:
I found myself missing an implementation of the Summary metric, so decided to file an issue in case anyone (maybe myself) decides to contribute an implementation.
Would be good to support the Summary metric type. Contributions are most certainly welcome!
Whoever is picking this up, let me know if you need any help.
@mxinden hi, I would like to work on this, could you assign the issue to me?
Done. Thanks @palash25. Let me know in case you need any help.
hi @mxinden, do you have any preference on what crate to use for the underlying quantile algorithm? I found one that implements CKMS, would it be ok to use this? https://github.com/blt/quantiles
@palash25 unfortunately I don't have any experience with quantile algorithms, neither in general nor in Rust. Thus no preference. Sorry.
I found one that implements CKMS would it be ok to use this? https://github.com/blt/quantiles
Looks fine to me.
hi @mxinden, sorry for the multiple pings, but can you please take a look at this https://github.com/prometheus/client_rust/pull/67#issuecomment-1407467095 ? I updated the PR
#249
|
gharchive/issue
| 2022-01-24T10:52:11 |
2025-04-01T04:35:34.950873
|
{
"authors": [
"folex",
"mxinden",
"palash25",
"zhangtianhao"
],
"repo": "prometheus/client_rust",
"url": "https://github.com/prometheus/client_rust/issues/40",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
88062483
|
Table Widget
For certain data nothing beats an old fashioned table view. Examples being "Top 10" and such lists.
Ideally it would take a datasource and allow sorting of values or labels and display the latest value.
:+1: in general for the concept. Not sure if anyone will be able to get to it soon though.
|
gharchive/issue
| 2015-06-13T20:39:44 |
2025-04-01T04:35:34.953868
|
{
"authors": [
"bluecmd",
"juliusv"
],
"repo": "prometheus/promdash",
"url": "https://github.com/prometheus/promdash/issues/416",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1232847230
|
Adding __type__ hidden label to leverage it in metric_relabel_configs
Proposal
I would like to add __type__ as a hidden label, in addition to __name__, to be able to leverage it with metric_relabel_configs.
Use case. Why is this important?
It would be possible to make decisions based on the metric type.
I am currently sending metrics to a Newrelic endpoint.
Since Remote Write does not include the metric type, Newrelic asks to add the additional label "newrelic_metric_type" as a workaround (this is needed if the metric type cannot be inferred from the metric name).
I would like to add such a label to all metrics leveraging metric_relabel_configs, however right now I think it is not possible since we do not have access to metadata at that point.
Is there a different way to attach the metric type as a label to metrics? (I know it is a workaround while waiting for a proper solution)
About the type, very often you should be able to tell by the name:
_count _sum should be histogram / summary
if there is a le label it's histogram / summary
counters should end with _total
others should be gauges
Yes, that is the point, Newrelic tries to infer that from the name, but from time to time the "guessed" one is different from the one specified by the metric. Sadly the metric name, most of the time, cannot be changed since it belongs to a third party exporter.
The current workaround works, but the label is added manually 😢
Having the type available as a hidden label would allow us to automatically add the type as a label to the metric.
Prometheus also sends the full metadata to remote write endpoints, which newrelic could use.
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write
Checking the code, I think something like this would be enough:
*l = append(*l, labels.Label{
    Name:  labels.MetricName,
    Value: s[:p.offsets[0]-p.start],
},
    labels.Label{
        Name:  labels.MetricType,
        Value: string(p.mtype),
    },
)
in the PromParser and the OpenMetricsParser. (adding as well some tests and fixing existing ones 😄 )
That's not sufficient, the parser accepts Prometheus text format in any order plus the OM ingestion parser isn't smart enough to know what metric family it is in (and thus what type it might be). The parser is designed for speed, not to be fully featured.
It's also semantically inappropriate to have type as a label, as it is not a dimension, so it only adds clutter and would break downstream usage by adding an unexpected instrumentation label.
If you need this information, you need a fully-featured parser. There's one in Go for the Prometheus text format, and in Python for OpenMetrics.
Metadata is not always included in the payload but it's literally designed at usecases like yours.
cc @cstyan any opinion here?
Thanks for the feedback I really appreciate the responsiveness 😃
That's not sufficient, the parser accepts Prometheus text format in any order plus the OM ingestion parser isn't smart enough to know what metric family it is in (and thus what type it might be). The parser is designed for speed, not to be fully featured.
I thought that the type was parsed (if available) into the mtype attribute:
here for the OpenMetricsParser
here for the PromParser.
That is why I was thinking I could use mtype MetricType to populate the temporary type label in the Metric(l *labels.Labels) string method of the two parsers.
In case the type is not available I was thinking to avoid adding such temporary label.
It's also semantically inappropriate to have type as a label, as it is not a dimension, so it only adds clutter and would break downstream usage by adding an unexpected instrumentation label.
If specified as __type__ it would be discarded automatically after the relabelling similarly to __name__ ?
__name__ is not discarded, it's the name of the metric.
That is why I was thinking I could use mtype MetricType to populate the temporary type label in the Metric(l *labels.Labels) string method of the two parsers.
That's only when parsing the TYPE line itself, it doesn't apply when parsing any other line.
Is it way more complex to add it?
For OM it shouldn't be too hard to do it efficiently, as the spec is locked down to make that practical. The problem is that the decision was made that the Prometheus text format can take in lines in a random order, supposedly to make things easier. However, that also makes the one-pass, constant-ish memory, linear time algorithms that we'd need to do it efficiently impossible for parsers. So that kinda scuppers us, as we could see the sample before we see the TYPE line.
Regarding the "fully-featured parser" did you mean the https://github.com/prometheus/common/expfmt?
Yes, that's the official parser for the Prometheus text format.
__name__ is not discarded, it's the name of the metric, which is a regular label.
My bad, I read in the docs "Labels starting with __ will be removed from the label set after target relabeling is completed." and I thought it applied to all labels and therefore also to __type__. I imagine that it is not enough if I add it at that point 😕
The problem is that the decision was made that the Prometheus text format can take in lines in a random order, supposedly to make things easier. [...] as we could see the sample before we see the TYPE line.
I was not aware of this and it looks like a stopper 🤯 We are going to check the code again to see if we come up with some ideas to do that efficiently without assuming that the "type line" is parsed before the sample ones.
I was not aware of this and it looks like a stopper
What I'd suggest looking at is seeing if you can leverage the various caches in the scrape code to do something smart. Having a more correct parser would be great, if it can be done efficiently. Particularly for checking that OM input is meeting the spec, as there's a common misconception that if Prometheus accepts input that it is valid. In reality Prometheus will currently accept all valid OM/Prometheus text format input, but also some invalid inputs.
Metadata is not always included in the payload but it's literally designed at usecases like yours.
cc @cstyan any opinion here?
I would prefer to avoid hacking yet another thing into remote write and just to move forward with the metadata in WAL PR. I will take a more active hand in moving that forward starting this coming week.
@paologallinaharbur let me know if you think the metadata WAL pr and including the metadata alongside each sample rather than as separate messages would not solve your use case. Otherwise I think we can close this in favour of that work.
Hello, @cstyan yes it would likely solve our usecase. Thanks for the help! 😄
|
gharchive/issue
| 2022-05-11T15:36:42 |
2025-04-01T04:35:34.969885
|
{
"authors": [
"brian-brazil",
"cstyan",
"paologallinaharbur",
"roidelapluie"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/10684",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
186275747
|
Investigate RAM consumption during crash recovery
We have received occasional reports of servers OOMing during crash recovery.
Obviously, the checkpoint has to be loaded in its entirety, but if more is loaded from disk, it could explain the OOMing as no series maintenance or chunk eviction is running. After a quick check, I could only see chunk descs being loaded. In extreme cases, even the relatively small chunk descs might cause an OOM, so unloading chunk descs will definitely be a way to reduce RAM usage during crash recovery.
But there might be other code paths where chunks might be loaded. This has to be investigated more thoroughly.
Obviously, having #447 in place would come in handy.
@matthiasr as discussed earlier today.
Random observation: A beefy Prometheus server seemed to ramp up its RAM usage during rebuilding the metrics index (xxx metrics queued for indexing).
Wild guess: If LevelDB gets a lot of updates, it might run into trouble cleaning up and hogs too much RAM.
I have decided to not tackle the LevelDB issues. This will be hairy at best, and it is going away in v2.0 anyway.
Evicting chunkdescs is however low hanging fruit. I'll create a PR shortly (for the 1.6 release).
|
gharchive/issue
| 2016-10-31T12:49:39 |
2025-04-01T04:35:34.973669
|
{
"authors": [
"beorn7"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/2139",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
775745883
|
fatal error: index out of range -- Prometheus docker container exits seconds after it started.
What did you do?
run Prometheus docker container
What did you expect to see?
the container is running
What did you see instead? Under which circumstances?
The container ran for a few seconds and exited unexpectedly.
Environment
host: a VM with CentOS 7
Prometheus: prom/prometheus:v2.23.0
System information:
Linux 3.10.0-1160.11.1.el7.x86_64 x86_64
Prometheus version:
prometheus, version 2.23.0 (branch: HEAD, revision: 26d89b4b0776fe4cd5a3656dfa520f119a375273)
build user: root@37609b3a0a21
build date: 20201126-10:56:17
go version: go1.15.5
platform: linux/amd64
Prometheus configuration file:
# https://prometheus.io/docs/prometheus/latest/getting_started/
# https://prometheus.io/docs/guides/node-exporter/
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'
# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 15s
    static_configs:
      - targets: [ '192.168.100.88:9090' ]
  - job_name: 'node_exporter'
    scrape_interval: 15s
    static_configs:
      - targets:
          - '192.168.100.2:9100'
          - '192.168.100.3:9100'
          - '192.168.100.5:9100'
          - '192.168.100.7:9100'
          - '192.168.100.8:9100'
          - '192.168.100.11:9100'
          - '192.168.100.83:9100'
          - '192.168.100.88:9100'
Logs:
prometheus-docker-container-start-failed-log1.txt
Different error log.
More details: it worked yesterday, then the host OS crashed unexpectedly. After rebooting the OS, I tried many times to restart the Prometheus docker container but it always failed. I also removed the container and ran a new one, which still failed. I rebooted the OS, started a new container again, it failed again, and output the following log:
level=info ts=2020-12-29T07:37:41.522Z caller=main.go:322 msg="No time or size retention was set so using the default time retention" duration=15d
level=info ts=2020-12-29T07:37:41.522Z caller=main.go:360 msg="Starting Prometheus" version="(version=2.23.0, branch=HEAD, revision=26d89b4b0776fe4cd5a3656dfa520f119a375273)"
level=info ts=2020-12-29T07:37:41.522Z caller=main.go:365 build_context="(go=go1.15.5, user=root@37609b3a0a21, date=20201126-10:56:17)"
level=info ts=2020-12-29T07:37:41.523Z caller=main.go:366 host_details="(Linux 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 bd55263ca522 (none))"
level=info ts=2020-12-29T07:37:41.523Z caller=main.go:367 fd_limits="(soft=1048576, hard=1048576)"
level=info ts=2020-12-29T07:37:41.523Z caller=main.go:368 vm_limits="(soft=unlimited, hard=unlimited)"
level=info ts=2020-12-29T07:37:41.524Z caller=main.go:722 msg="Starting TSDB ..."
level=info ts=2020-12-29T07:37:41.527Z caller=head.go:645 component=tsdb msg="Replaying on-disk memory mappable chunks if any"
level=info ts=2020-12-29T07:37:41.527Z caller=head.go:659 component=tsdb msg="On-disk memory mappable chunks replay completed" duration=9.505µs
level=info ts=2020-12-29T07:37:41.527Z caller=head.go:665 component=tsdb msg="Replaying WAL, this may take a while"
level=info ts=2020-12-29T07:37:41.528Z caller=web.go:528 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2020-12-29T07:37:41.530Z caller=head.go:717 component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
level=info ts=2020-12-29T07:37:41.530Z caller=head.go:722 component=tsdb msg="WAL replay completed" checkpoint_replay_duration=32.123µs wal_replay_duration=2.406787ms total_replay_duration=2.462151ms
level=info ts=2020-12-29T07:37:41.530Z caller=main.go:742 fs_type=XFS_SUPER_MAGIC
level=info ts=2020-12-29T07:37:41.530Z caller=main.go:745 msg="TSDB started"
level=info ts=2020-12-29T07:37:41.530Z caller=main.go:871 msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
level=info ts=2020-12-29T07:37:41.534Z caller=main.go:902 msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=3.603241ms remote_storage=7.714µs web_handler=222ns query_engine=1.867µs scrape=2.234995ms scrape_sd=50.966µs notify=716ns notify_sd=1.408µs rules=4.728µs
level=info ts=2020-12-29T07:37:41.534Z caller=main.go:694 msg="Server is ready to receive web requests."
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x1fefb5a]
goroutine 117 [running]:
github.com/prometheus/prometheus/tsdb/chunkenc.(*bstream).bytes(...)
/app/tsdb/chunkenc/bstream.go:56
github.com/prometheus/prometheus/tsdb/chunkenc.(*xorAppender).Append(0xc001627e00, 0x176ad6fb87e, 0x4168000000000000)
/app/tsdb/chunkenc/xor.go:152 +0x3a
github.com/prometheus/prometheus/tsdb.(*memSeries).append(0xc0014a7180, 0x176ad6fb87e, 0x4168000000000000, 0x33, 0xc000124840, 0x1)
/app/tsdb/head.go:2148 +0x103
github.com/prometheus/prometheus/tsdb.(*headAppender).Commit(0xc001a14e00, 0x0, 0x0)
/app/tsdb/head.go:1248 +0x265
github.com/prometheus/prometheus/tsdb.dbAppender.Commit(0x312cb60, 0xc001a14e00, 0xc0008ce000, 0x0, 0x0)
/app/tsdb/db.go:774 +0x35
github.com/prometheus/prometheus/storage.(*fanoutAppender).Commit(0xc00151f500, 0xbff2d5272f90e700, 0x1456c14c9e)
/app/storage/fanout.go:174 +0x49
github.com/prometheus/prometheus/scrape.(*scrapeLoop).scrapeAndReport.func1(0xc00107bc18, 0xc00107bc28, 0xc0008cd130)
/app/scrape/scrape.go:1086 +0x49
github.com/prometheus/prometheus/scrape.(*scrapeLoop).scrapeAndReport(0xc0008cd130, 0x37e11d600, 0x2540be400, 0xbff2d5236f92ac1e, 0x10d8b13bb9, 0x4240660, 0xbff2d5272f90e700, 0x1456c14c9e, 0x4240660, 0x0, ...)
/app/scrape/scrape.go:1153 +0xb45
github.com/prometheus/prometheus/scrape.(*scrapeLoop).run(0xc0008cd130, 0x37e11d600, 0x2540be400, 0x0)
/app/scrape/scrape.go:1039 +0x39e
created by github.com/prometheus/prometheus/scrape.(*scrapePool).sync
/app/scrape/scrape.go:510 +0x9ce
It works after I run the following command:
docker system prune -a
After we changed the host's memory, it runs well now!
I'm closing this issue.
|
gharchive/issue
| 2020-12-29T07:21:49 |
2025-04-01T04:35:34.981875
|
{
"authors": [
"iridiumcao"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/issues/8326",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1045220614
|
Reduce Prometheus pull requests builds
Signed-off-by: Julien Pivotto roidelapluie@inuits.eu
Still took 20min to build, but seems slightly better.
Okay, let's first experiment this and see how it's going.
Still took 20min to build, but seems slightly better.
Yes, I did not change the number of builds per thread. I am more interested in reducing CPU usage than speed, and reducing the TB's we use each month (+/- 80 to 100 TB).
|
gharchive/pull-request
| 2021-11-04T21:23:32 |
2025-04-01T04:35:34.984380
|
{
"authors": [
"SuperQ",
"roidelapluie"
],
"repo": "prometheus/prometheus",
"url": "https://github.com/prometheus/prometheus/pull/9666",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
860796289
|
Implement new Image Source component
Although the URI Decode Source can play files of .jpeg format, it's not well suited for displaying a still image for a specified or indefinite amount of time.
Want to implement a new, specialized source component that uses an imagefreeze plugin/element and an async display timer that will send an EOS event on timeout; a hedged usage sketch follows the proposed API below.
/**
* @brief creates a new, uniquely named JPEG Image Source component.
* @param[in] name Unique name for the Image Source
* @param[in] file_path absolute or relative path to the jpeg image file to play
* @param[in] timeout source will send an EOS event on timeout, set to 0 to disable
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_SOURCE_RESULT otherwise.
*/
DslReturnType dsl_source_image_new(const wchar_t* name,
const wchar_t* file_path, uint timeout);
/**
* @brief Gets the current File Path in use by the named JPEG Image Source
* @param[in] name name of the Image Source to query
* @param[out] file_path File Path in use by the Image Source
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_SOURCE_RESULT otherwise.
*/
DslReturnType dsl_source_image_path_get(const wchar_t* name, const wchar_t** file_path);
/**
* @brief Sets the current File Path for the named JPEG Image Source to use
* @param[in] name name of the Image Source to update
* @param[in] file_path new file path to use by the Image Source
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_SOURCE_RESULT otherwise.
*/
DslReturnType dsl_source_image_path_set(const wchar_t* name, const wchar_t* file_path);
/**
* @brief Gets the current Timeout setting for the Image Source
* @param[in] name name of the Image Source to query
* @param[out] timeout current timeout value for the EOS Timer, 0 means the
* timer is disabled
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_SOURCE_RESULT otherwise.
*/
DslReturnType dsl_source_image_timeout_get(const wchar_t* name, uint* timeout);
/**
* @brief Sets the current Timeout setting for the Image Source
* @param[in] name name of the Image Source to update
* @param[in] timeout new timeout value for the EOS Timer (in seconds), 0 to disable.
* @return DSL_RESULT_SUCCESS on success, DSL_RESULT_SOURCE_RESULT otherwise.
*/
DslReturnType dsl_source_image_timeout_set(const wchar_t* name, uint timeout);
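A hedged usage sketch of the proposed calls above, assuming a Python binding named dsl that mirrors the C declarations one-to-one (the binding name, file path, and timeout value here are hypothetical):
import dsl  # hypothetical binding that mirrors the C API declared above

# Create an Image Source showing ./still.jpg that sends EOS after 10 seconds.
retval = dsl.dsl_source_image_new('image-source', './still.jpg', 10)
assert retval == dsl.DSL_RESULT_SUCCESS

# Later, disable the timeout so the image is displayed indefinitely.
retval = dsl.dsl_source_image_timeout_set('image-source', 0)
assert retval == dsl.DSL_RESULT_SUCCESS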
Done. needs docs
|
gharchive/issue
| 2021-04-18T23:46:45 |
2025-04-01T04:35:34.988110
|
{
"authors": [
"rjhowell44"
],
"repo": "prominenceai/deepstream-services-library",
"url": "https://github.com/prominenceai/deepstream-services-library/issues/438",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
55325318
|
Bug fix - Procs returning multiple values
When caching methods that take multiple parameters, you'll have to use a proc to return the params when configuring cache clears. This fixes a bug where the cache key generated from a multiple return value situation didn't match the original cache key.
:+1:
|
gharchive/pull-request
| 2015-01-23T19:58:25 |
2025-04-01T04:35:34.993948
|
{
"authors": [
"jeffdeville",
"justincampbell"
],
"repo": "promptworks/cache_shoe",
"url": "https://github.com/promptworks/cache_shoe/pull/4",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
296759490
|
Fix for syntax error discarding
The result of pronto-rubocop does not include syntactic errors.
The missing syntactic error in the result of pronto-rubocop was solved by mapping this error onto the last element of the patch.
File containing syntax error:
while a < 15
(
print a, " "
if a == 10 then
prin"made it to ten!!"
a = a + 1
end
Variable offences's value in method inspect(patch) in file lib/pronto/rubocop.rb:
https://github.com/prontolabs/pronto-rubocop/blob/master/lib/pronto/rubocop.rb#L38
[42] pry(#<Pronto::Rubocop>)> offences
=> [#<RuboCop::Cop::Offense:0x0000564daa700d20
@cop_name="Lint/Syntax",
@location=#<Parser::Source::Range /root/miq_bot/testrepo/error.rb 99...99>,
@message=
"unexpected token $end\n(Using Ruby 2.3 parser; configure using `TargetRubyVersion` parameter, under `AllCops`)",
@severity=#<RuboCop::Cop::Severity:0x0000564daa7014f0 @name=:error>,
@status=:uncorrected>]
before:
The syntactic error is missing in the result of rubocop.
[168] pry(#<Pronto::Rubocop>)> offences.sort.reject(&:disabled?).map do |offence|
[168] pry(#<Pronto::Rubocop>)* patch.added_lines
[168] pry(#<Pronto::Rubocop>)* .select { |line| line.new_lineno == offence.line }
[168] pry(#<Pronto::Rubocop>)* .map { |line| new_message(offence, line) }
[168] pry(#<Pronto::Rubocop>)* end
=> [[]]
after:
The syntactic error message is assigned to the last element of patch variable.
[174] pry(#<Pronto::Rubocop>)> offences.sort.reject(&:disabled?).map do |offence|
[174] pry(#<Pronto::Rubocop>)* patch.added_lines
[174] pry(#<Pronto::Rubocop>)* .select { |line| line.new_lineno == offence.line }
[174] pry(#<Pronto::Rubocop>)* .map { |line| new_message(offence, line) }
[174] pry(#<Pronto::Rubocop>)* end.concat(
[174] pry(#<Pronto::Rubocop>)* offences.sort.reject(&:disabled?).select do |offence|
[174] pry(#<Pronto::Rubocop>)* offence.cop_name == "Lint/Syntax"
[174] pry(#<Pronto::Rubocop>)* end.map do |offence|
[174] pry(#<Pronto::Rubocop>)* new_message(offence, patch.added_lines.last)
[174] pry(#<Pronto::Rubocop>)* end
[174] pry(#<Pronto::Rubocop>)* )
=> [[],
#<Pronto::Message:0x0000564da7503070
@commit_sha="dfa09b8920c8932aa0454dcffb93966fc1c9b2f5",
@level=:error,
@line=
#<struct Pronto::Git::Line
line=
#<Rugged::Diff::Line:47445775201620 {line_origin: :addition, content: "end\n">,
patch=
#<struct Pronto::Git::Patch
patch=#<Rugged::Patch:47445789709400>,
repo=
#<Pronto::Git::Repository:0x0000564daae22db0
@repo=
#<Rugged::Repository:47445789710020 {path: "/root/miq_bot/testrepo/.git/"}>>>,
hunk=
#<Rugged::Diff::Hunk:47445775202180 {header: "@@ -0,0 +1,8 @@\n", count: 8}>>,
@msg=
"unexpected token $end\n(Using Ruby 2.3 parser; configure using `TargetRubyVersion` parameter, under `AllCops`)",
@path="error.rb",
@runner=Pronto::Rubocop>]
Informations (version, platform, engine)
[175] pry(#<Pronto::Rubocop>)> RUBY_VERSION
=> "2.3.1"
[176] pry(#<Pronto::Rubocop>)> RUBY_PLATFORM
=> "x86_64-linux-gnu"
[177] pry(#<Pronto::Rubocop>)> RUBY_ENGINE
=> "ruby"
[178] pry(#<Pronto::Rubocop>)> Pronto::RubocopVersion::VERSION
=> "0.9.0"
[179] pry(#<Pronto::Rubocop>)> ::RuboCop::Version.version
=> "0.52.1"
/cc
@skateman
@romanblanco
@mmozuras is this something you would want? Maybe this is not The Right Way™ but the syntax errors are not always mappable to patch changes :disappointed:
@europ sorry it's taken someone so long to get back to you. I'm open to accepting this change if you'd be willing to rebase and resolve conflicts 👌
@prontolabs/core any thoughts or objections?
@skateman, looks good. If you would rebase, that would be excellent. (I am not a maintainer, just would like to see this in).
@skateman, looks good. If you would rebase, that would be excellent. (I am not a maintainer, just would like to see this in).
Agreed, there should be a 0.11.1 release soon -- I'd like to include this PR in it 🙂
Agreed, there should be a 0.11.1 release soon -- I'd like to include this PR in it 🙂
@skateman can you fix this? I am currently unavailable.
I have no push rights to your repo...
@skateman you have them, from now
Any luck on rebasing this PR? I'm hoping to make a release soon with RuboCop >= 1.0 support.
Just checking, is this mandatory / the fix for RuboCop 1.x? Or can there be another PR to get us to that version?
@hlascelles the 0.11.1 release with 1.0 support has already happened; I guess this will be present in a followup release (0.11.2?).
Ah, fantastic thank you!
@europ would you be willing to rebase this PR?
Hi @europ... Could you rebase this?
|
gharchive/pull-request
| 2018-02-13T14:38:46 |
2025-04-01T04:35:35.004157
|
{
"authors": [
"ashkulz",
"doomspork",
"europ",
"hlascelles",
"skateman"
],
"repo": "prontolabs/pronto-rubocop",
"url": "https://github.com/prontolabs/pronto-rubocop/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
153612560
|
CSV editor should mark file 'dirty' when col or row is created/removed
Fixes #959
Note that there is one case where it doesn't recognize the change, which I believe is an issue with Handsontable and have reported to them.
@timwis thanks!
|
gharchive/pull-request
| 2016-05-07T19:22:42 |
2025-04-01T04:35:35.039384
|
{
"authors": [
"dereklieu",
"timwis"
],
"repo": "prose/prose",
"url": "https://github.com/prose/prose/pull/960",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2540659415
|
🛑 nftschool.dev is down
In 544e8a0, nftschool.dev (https://nftschool.dev) was down:
HTTP code: 0
Response time: 0 ms
Resolved: nftschool.dev is back up in ebecfb0 after 1 hour, 35 minutes.
|
gharchive/issue
| 2024-09-22T06:10:34 |
2025-04-01T04:35:35.051281
|
{
"authors": [
"mastrwayne"
],
"repo": "protocol/upptime-pln",
"url": "https://github.com/protocol/upptime-pln/issues/2738",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
363796258
|
Improve multiple contracts naming
From this conversation https://github.com/protofire/eth-cli/pull/29#discussion_r219836577
Contracts naming in the REPL context can be improved.
A suggestion I have is to use the abi file name and part of its name, instead of adding a suffix. For instance:
eth lc --mainnet path/to/ERC20.abi 0x another/path/to/ERC20.abi 0x path/to/ERC725.abi 0x
Will result in: ERC20, ERC20_to_path_another, ERC725
Now that I wrote it down, it may be a bit awkward if the path is too long, but we can think of a name length limit and from there start to add suffixes.
Now that we have the erc20 known ABI, this isn't a great experience (if you load two tokens, one of them is called erc20_1), but I think it's good enough. You can always do foo = erc20; bar = erc20_1 manually.
There is room for improvement here, of course. Maybe the contract loading "syntax" can be extended with an optional name. Or maybe, if you are loading erc20 contracts, you can call the "symbol" method and use that as the name (I hate this idea, btw). But I don't think it's necessary right now. If this annoys someone, please open a new issue.
|
gharchive/issue
| 2018-09-25T22:55:15 |
2025-04-01T04:35:35.067365
|
{
"authors": [
"fernandomg",
"fvictorio"
],
"repo": "protofire/eth-cli",
"url": "https://github.com/protofire/eth-cli/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
105786591
|
Test performance time
A timer that checks how long each method takes, and returns the total time it took to run the script. This could also return an average time for each test.
Made a now - then timer in http.send.
Stores to response.time and logs to protocols.log.
|
gharchive/issue
| 2015-09-10T11:19:30 |
2025-04-01T04:35:35.070115
|
{
"authors": [
"bischjer",
"guru3n"
],
"repo": "protojour/aux",
"url": "https://github.com/protojour/aux/issues/6",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1746633868
|
Broken links
Hi, I found some broken links:
[ ] https://github.com/ecmwf-projects/copernicus-training (file: education.md, status code: 404)
[ ] https://carbonmapperdata.org/ (file: README.md, error: ECONNRESET)
[ ] https://github.com/fineprint-global/forbio (file: README.md, status code: 404)
[ ] https://forge.ipsl.jussieu.fr/nemo/wiki/Users (file: README.md, status code: 404)
[ ] https://git.outils-is.ird.fr/grelet/TSG-QC (file: README.md, error: ENOTFOUND)
[ ] https://juliaclimate.github.io/GlobalOceanNotebooks/ (file: README.md, status code: 404)
[ ] https://ts-gitlab.iup.uni-heidelberg.de/dorie/dorie (file: README.md, status code: 404)
[ ] https://bitbucket.org/anyelacamargo/aquacropr/ (file: README.md, status code: 404)
[ ] https://github.com/FZJ-IEK2-VSA/HIM (file: docs/blog/gathering_open_sustainable_technology.md, status code: 404)
[ ] https://protontypes.eu/about_free_innovation/ (file: docs/blog/gathering_open_sustainable_technology.md, status code: 404)
[ ] https://https://www.weforum.org/agenda/2021/12/natural-climate-solutions-carbon-markets-climate-justice/ (file: docs/blog/impact_and_potential_of_open_source_on_climate_technology.md, error: ENOTFOUND)
[ ] https://https://chaoss.community/metrics/ (file: docs/blog/impact_and_potential_of_open_source_on_climate_technology.md, error: ENOTFOUND)
Thank you so much. I will have a deeper look into the links today. What tool did you use to find them?
No problem.
I used https://github.com/tcort/markdown-link-check
Thanks again! I was able to save some of the links.
|
gharchive/issue
| 2023-06-07T20:32:16 |
2025-04-01T04:35:35.079565
|
{
"authors": [
"Ly0n",
"ndsvw"
],
"repo": "protontypes/open-sustainable-technology",
"url": "https://github.com/protontypes/open-sustainable-technology/issues/131",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1512743189
|
Broker: Config: Implement search by the Value
Actual behavior
It is not possible to search by the config Value; searching only works for the Key now.
Expected behavior
Search by the config Value can be useful for maintenance and troubleshooting.
Set up
f4e6afe
Steps to Reproduce
Login to Kafka UI
Navigate to Brokers
Select broker and switch to Configs tab.
Screenshots
Additional context
Discussed with @Haarolean to be created as a separate issue from #2651
@Haarolean I would like to contribute to this issue.
@malavmevada any updates?
|
gharchive/issue
| 2022-12-28T12:21:53 |
2025-04-01T04:35:35.091023
|
{
"authors": [
"BulatKha",
"Haarolean",
"malavmevada"
],
"repo": "provectus/kafka-ui",
"url": "https://github.com/provectus/kafka-ui/issues/3163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
957055816
|
Create a digest for data stream signing.
Adds a message digest in the SmartKey class (output of a hash function) for proper cryptographic signing, adds the necessary padding, and provides out-of-the-box data buffering for streams via update(). SmartKey will now sign the hash of the data instead of the data itself (the proper way of doing a digital signature 🤦 ).
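A rough sketch of the sign-the-digest pattern using Python's cryptography library, for illustration only (SmartKey itself is JVM code; the RSA key and chunk source below are placeholders):
import hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Buffer the stream through update() instead of holding the whole payload.
digest = hashlib.sha256()
for chunk in (b"chunk-1", b"chunk-2"):  # placeholder data stream
    digest.update(chunk)

# Sign the hash of the data (with PKCS#1 v1.5 padding), not the data itself.
signature = private_key.sign(
    digest.digest(),
    padding.PKCS1v15(),
    utils.Prehashed(hashes.SHA256()),
)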
Codecov Report
Merging #81 (74d242f) into main (db7c015) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #81 +/- ##
=========================================
Coverage 11.71% 11.71%
Complexity 35 35
=========================================
Files 65 65
Lines 3381 3381
Branches 359 359
=========================================
Hits 396 396
Misses 2929 2929
Partials 56 56
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update db7c015...74d242f. Read the comment docs.
No, this was only impacting when the bootstrapping key was a SmartKey type, I never converted the bootstrapping key (FigureLending) to SmartKey
|
gharchive/pull-request
| 2021-07-30T20:59:25 |
2025-04-01T04:35:35.097842
|
{
"authors": [
"codecov-commenter",
"rchaing-figure"
],
"repo": "provenance-io/p8e",
"url": "https://github.com/provenance-io/p8e/pull/81",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
77449434
|
Bump attoparsec boundary
attoparsec got updated.
Thanks, released as fb-1.0.10.
Thank you.
|
gharchive/pull-request
| 2015-05-18T01:20:44 |
2025-04-01T04:35:35.099042
|
{
"authors": [
"meteficha",
"tolysz"
],
"repo": "prowdsponsor/fb",
"url": "https://github.com/prowdsponsor/fb/pull/37",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
347810960
|
Could this be made into a web service interface?
When using a download box or a NAS, it would be more convenient if the software were built as a web service running on the NAS.
The server part exists now, but the UI hasn't been built yet.
https://github.com/proxyee-down-org/pdown-rest
For the frontend, building it as an SPA or PWA with Vue would be much better.
@SilverLeaves As for splitting it out separately, it is actually already separated at the moment, just not that thoroughly.
@dream1986 We are working on the UI in 3.0.
|
gharchive/issue
| 2018-08-06T07:34:38 |
2025-04-01T04:35:35.110202
|
{
"authors": [
"BlackHole1",
"SilverLeaves",
"dream1986",
"monkeyWie"
],
"repo": "proxyee-down-org/proxyee-down",
"url": "https://github.com/proxyee-down-org/proxyee-down/issues/570",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
572638675
|
Use testinfo.desc when importing beakerlib tests
Instead of reading the Makefile directly we could use the testinfo.desc file.
It would require an extra system call when converting, which might take more time; however, it would solve the issue with Makefiles that use variables to create the variables we read.
@lukaszachy you mentioned reading testinfo.desc makes more sense to you, do you have another use case than the one I mentioned?
Not sure about this. It would probably bring an additional dependency as Makefiles quite often contain:
include /usr/share/rhts/lib/rhts-make.include
So rhts-test-env which is not in Fedora would need to be installed on the box to make this working.
In the past I've run into examples where people were very good users of Make and modified the 'testinfo.desc' to suit their needs -> a lot of variable expansions.
Keep in mind that only the content of the produced testinfo.desc is the authoritative source for Beaker (make bkradd) and provides initial values for TCMS as well.
There is no additional dependency needed (apart from make). If we made
include /usr/share/rhts/lib/rhts-make.include
optional or didn't include it at all, the only thing we need to declare is the variable
METADATA=testinfo.desc
and it should work.
And it could be done 'on-the-fly' with no changes to Makefile needed from test maintainers.
So you mean to adjust the Makefile (or better its copy) by removing the include, defining the METADATA variable and removing rhts-lint so that you can actually do make testinfo.desc?
Exactly, reading the Makefile as a string, doing the necessary changes and then feeding it to
make testinfo.desc -f -
should work (@lukaszachy thought of it and tested it). A small sketch of the idea follows below.
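A minimal Python sketch of that on-the-fly approach (illustration only, not the actual tmt implementation; the include prefix and file names follow the discussion above):
import subprocess

def make_testinfo(makefile_path: str, source_dir: str) -> None:
    """Generate testinfo.desc from an adjusted copy of the Makefile."""
    with open(makefile_path) as makefile:
        lines = makefile.read().splitlines()
    # Drop the rhts include and the rhts-lint call, then declare the metadata target.
    lines = [line for line in lines
             if not line.startswith('include /usr/share/rhts')
             and 'rhts-lint' not in line]
    adjusted = 'export METADATA=testinfo.desc\n' + '\n'.join(lines) + '\n'
    # Feed the adjusted Makefile to make on stdin.
    subprocess.run(['make', 'testinfo.desc', '-f', '-'],
                   input=adjusted, text=True, check=True, cwd=source_dir)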
I see. That makes sense to me. In this way we should be able to handle more special / custom user setups in the Makefile. Plus one for this implementation.
Support for testinfo.desc parsing merged in 3dff02a.
|
gharchive/issue
| 2020-02-28T09:39:33 |
2025-04-01T04:35:35.332291
|
{
"authors": [
"hegerj",
"lukaszachy",
"psss"
],
"repo": "psss/tmt",
"url": "https://github.com/psss/tmt/issues/131",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
144616758
|
understand use/re-use conditions
As an end user,
I need to be able to understand the use/re-use conditions associated with an object,
So I can be confident that I am using the materials appropriately.
provide proper copyright and use conditions.
@lmballinger I assume you mean something other than a rights statement? Such as having rights statements auto-link to rightstatements.org to clarify?
|
gharchive/issue
| 2016-03-30T14:49:46 |
2025-04-01T04:35:35.341144
|
{
"authors": [
"kestlund",
"lmballinger",
"ntallman"
],
"repo": "psu-libraries/cho-req",
"url": "https://github.com/psu-libraries/cho-req/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
679175711
|
Individual User Statistics page
We need an aggregate user statistics page so that users who get an email about their downloads are given a link to view their overall statistics in ScholarSphere. We need a page to represent these views and downloads.
closing this; see #1210
|
gharchive/issue
| 2020-08-14T13:58:20 |
2025-04-01T04:35:35.346548
|
{
"authors": [
"DanCoughlin",
"srerickson"
],
"repo": "psu-libraries/scholarsphere",
"url": "https://github.com/psu-libraries/scholarsphere/issues/468",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
358497963
|
Outdated Docker file - docker build -t jdart . fails
The babelfish.arc.nasa.gov server is not accessible publicly while the Dockerfile still tries to retrieve code from it, which causes the failure in container initialization. See the extract from the Dockerfile below:
# Install jpf-core
WORKDIR ${JDART_DIR}
RUN hg clone http://babelfish.arc.nasa.gov/hg/jpf/jpf-core
# We know that rev 29 works with jdart
WORKDIR ${JDART_DIR}/jpf-core
RUN hg update -r 29
RUN ant
Should be fixed now. Thanks!
Hi, the docker build now fails with E: Unable to locate package oracle-java8-installer.
Is there any update to the fix?
I think that the project is not maintained, so I fixed the issue here: https://github.com/samsbp/jdart
|
gharchive/issue
| 2018-09-10T07:22:22 |
2025-04-01T04:35:35.359692
|
{
"authors": [
"kfrajtak",
"ksluckow",
"samsbp",
"shafirpl",
"zubair527"
],
"repo": "psycopaths/jdart",
"url": "https://github.com/psycopaths/jdart/issues/32",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1118735660
|
GitHub Jobs is now closed - replacing with DevITjobs
Thank you for taking the time to work on a Pull Request for this project!
To ensure your PR is dealt with swiftly please check the following:
[x] Your submissions are formatted according to the guidelines in the contributing guide
[x] Your additions are ordered alphabetically
[x] Your submission has a useful description
[x] The description does not end with punctuation
[x] Each table column should be padded with one space on either side
[x] You have searched the repository for any relevant issues or pull requests
[x] Any category you are creating has the minimum requirement of 3 items
[x] All changes have been squashed into a single commit
GitHub jobs does not exist anymore: https://github.blog/changelog/2021-04-19-deprecation-notice-github-jobs-site
|
gharchive/pull-request
| 2022-01-30T22:11:11 |
2025-04-01T04:35:35.492789
|
{
"authors": [
"Vrq"
],
"repo": "public-api-lists/public-api-lists",
"url": "https://github.com/public-api-lists/public-api-lists/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
910618915
|
ADD - News Api
[x] My submission is formatted according to the guidelines in the contributing guide
[x] My addition is ordered alphabetically
[x] My submission has a useful description
[x] The description does not end with punctuation
[x] Each table column is padded with one space on either side
[x] I have searched the repository for any relevant issues or pull requests
[x] Any category I am creating has the minimum requirement of 3 items
[x] All changes have been squashed into a single commit
Oh sorry. It seems to me that the Zomato API has been removed before: #1600
I will close this pr. 😅
|
gharchive/pull-request
| 2021-06-03T15:38:28 |
2025-04-01T04:35:35.496005
|
{
"authors": [
"matheusfelipeog",
"rajnish824"
],
"repo": "public-apis/public-apis",
"url": "https://github.com/public-apis/public-apis/pull/1755",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
265671185
|
Travis-CI builds
Already done in public-law/nevada-revised-statutes-parser#19, so why not add this here too?
Resolves #5
Excellent - thank you!
|
gharchive/pull-request
| 2017-10-16T07:29:48 |
2025-04-01T04:35:35.497047
|
{
"authors": [
"dogweather",
"kobim"
],
"repo": "public-law/analyze-oregon-law-haskell",
"url": "https://github.com/public-law/analyze-oregon-law-haskell/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2527992805
|
23SeptDemo : Issues to be fixed
The following is based on the discussions with regards to the new Judge screen, static cause list, etc. @Tahera Bharmal you can confirm if all these are needed for the list for 23rd Sept.
Pending Tasks need to have a status to track expired tasks. Need to know what tasks were assigned, but not completed in time.
Hearing API - update Hearing to remove filingNumber and cnrNumber. Instead add only 1 field - caseReferenceNumber. The way this should work is - when a hearing is created
if(null != case.courtCaseNumber)
hearing.caseReferenceNumber = case.courtCaseNumber;
else
hearing.caseReferenceNumber = case.cmpNumber;
Hearing workflow will need to have a new action "Skip" and associated State "Skipped". Skip can happen after a hearing has started. So from Start, a hearing can go to Heard, Adjourned or Skipped (new state with associated new action 'Skip'). From Skipped, it can go back to "Start" state
It should not be possible to raise an application for reschedule of a hearing if that Hearing is in_progress/heard/adjourned.
In the Judge's screen, it is proposed that the pending tasks for today be called as "Actions due Today" and the rest as "Action due later". @Tahera Bharmal to confirm if this change will happen now or later
An application is to be given a number following the same format as CMP number (isolated per establishment - courtroom). This number however is to be generated and assigned only after an application has been approved or rejected. While the application is in pre-approved state, it will not have a number associated with it.
@suresh12 @subhashini-egov @Taherabharmal
@nileshgupta111 as also shared in slack channel
Application will still get an ApplicationNumber (based on CNR, similar to Order, Hearing etc) -- CNR-AP[X], so
application.applicationNumber = KLKM520000012024-AP01
Once an Application is approved/rejected, its number will update to the CMP Number.
application.applicationNumber = CMP/10/2024
Note that the CMP number sequence and the application number sequence will not be the same. They can be different.
Hi @atulgupta2024 Please confirm the severity
|
gharchive/issue
| 2024-09-16T09:50:52 |
2025-04-01T04:35:35.538191
|
{
"authors": [
"Ramu-kandimalla",
"atulgupta2024"
],
"repo": "pucardotorg/dristi",
"url": "https://github.com/pucardotorg/dristi/issues/1680",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1679689875
|
Setting the WireGuard connection to go through another SOCKS proxy
SOLVED.
I need this too. Some help, please.
|
gharchive/issue
| 2023-04-22T20:12:26 |
2025-04-01T04:35:35.547381
|
{
"authors": [
"0xNeu",
"mu4um"
],
"repo": "pufferffish/wireproxy",
"url": "https://github.com/pufferffish/wireproxy/issues/63",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
}
|
186133960
|
publish for scala 2.12.0
please...
Any ETA on a 2.12 release? Thanks!
I should have time to cut a release this weekend.
Wartremover 1.2.0 has been released for 2.12.
|
gharchive/issue
| 2016-10-30T12:40:05 |
2025-04-01T04:35:35.548671
|
{
"authors": [
"ChrisNeveu",
"mpilquist",
"ritschwumm"
],
"repo": "puffnfresh/wartremover",
"url": "https://github.com/puffnfresh/wartremover/issues/278",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
759009962
|
Integrates support for Lando
Resolves #865 by integrating Lando support and updating the CircleCI config. (also inserts stubs into the request test suite for the BibliographicController actions)
Coverage remained the same at 82.059% when pulling 79c010e261a69b10de554b4da8ecf67c1245450f on issues-865-jrgriffiniii-lando into 6a0227df4a4a0fa3cb3693b9e51fd93a610183a0 on main.
I wanted to refactor one area of the code and ensure that the processing for holding location values can be run even when lando isn't installed. This is ready for another review please.
Thank you very much for reviewing this @christinach , I will merge this and proceed with looking to assist with #940
@jrgriffiniii after @hackartisan mentioned it, I remembered that I got the same errors, but when I ran the tests a second time I had no errors.
@jrgriffiniii thank you again! There are no spec failures now. The only error that I keep seeing, which doesn't break the tests, is :error: Cannot read property 'split' of undefined, followed by the warning [2020-12-14T14:08:42.257310 #18775] WARN -- : Failed to start the container services using Lando: A JSON text must at least contain two octets!
I am sorry for this error; I did reproduce the failing tests. https://github.com/pulibrary/marc_liberation/pull/930/commits/7306bea48814d3e012b3b77cda95d700dc598027 remedied this for me locally.
|
gharchive/pull-request
| 2020-12-08T02:42:35 |
2025-04-01T04:35:35.591759
|
{
"authors": [
"christinach",
"coveralls",
"jrgriffiniii"
],
"repo": "pulibrary/marc_liberation",
"url": "https://github.com/pulibrary/marc_liberation/pull/930",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2257172538
|
Use structs that impl Display for escaping
Stacked on top of #880
As discussed in https://github.com/pulldown-cmark/pulldown-cmark/pull/870#issuecomment-2068144932, instead of
escape_html(&mut writer, text);
write
write!(writer, "{}", EscapedHtml(text));
At the same time we can make StrWrite and friends private and less complicated.
I haven't evaluated the performance implications of this yet.
Looks very good and smart to me! But I think we need to test the performance penalty before merging it. Thanks!
Not so great:
cargo bench
crdt_total time: [131.44 µs 131.50 µs 131.58 µs]
change: [+4.0821% +4.1774% +4.2607%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
crdt_html time: [50.832 µs 50.851 µs 50.870 µs]
change: [+14.604% +14.823% +15.036%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
5 (5.00%) high mild
3 (3.00%) high severe
crdt_all_options_parse time: [107.51 µs 107.55 µs 107.59 µs]
change: [+2.2069% +2.2952% +2.3752%] (p = 0.00 < 0.05)
Performance has regressed.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
crdt_parse time: [88.568 µs 88.690 µs 88.835 µs]
change: [-1.1698% -0.5629% -0.1917%] (p = 0.02 < 0.05)
Change within noise threshold.
Found 5 outliers among 100 measurements (5.00%)
2 (2.00%) high mild
3 (3.00%) high severe
smart_punctuation time: [872.05 ns 872.58 ns 873.11 ns]
change: [-3.6300% -3.5129% -3.4036%] (p = 0.00 < 0.05)
Performance has improved.
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) low mild
2 (2.00%) high mild
links_n_emphasis time: [1.2686 µs 1.2692 µs 1.2698 µs]
change: [-2.4608% -2.3504% -2.2480%] (p = 0.00 < 0.05)
Performance has improved.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
unescapes time: [4.4079 µs 4.4094 µs 4.4111 µs]
change: [+0.7463% +0.8114% +0.8848%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 9 outliers among 100 measurements (9.00%)
6 (6.00%) high mild
3 (3.00%) high severe
autolinks_n_html time: [1.6281 µs 1.6346 µs 1.6491 µs]
change: [-3.6629% -3.2513% -2.5614%] (p = 0.00 < 0.05)
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
2 (2.00%) high mild
3 (3.00%) high severe
cargo bench --features simd
crdt_total time: [109.21 µs 109.27 µs 109.32 µs]
change: [+3.8755% +4.0736% +4.2547%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
4 (4.00%) low mild
2 (2.00%) high mild
2 (2.00%) high severe
crdt_html time: [38.696 µs 39.201 µs 39.622 µs]
change: [+17.250% +18.419% +19.686%] (p = 0.00 < 0.05)
Performance has regressed.
crdt_all_options_parse time: [94.254 µs 94.695 µs 95.282 µs]
change: [+1.3280% +1.5903% +1.9182%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low mild
4 (4.00%) high mild
5 (5.00%) high severe
crdt_parse time: [78.402 µs 78.460 µs 78.520 µs]
change: [-0.7182% -0.5799% -0.4425%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) low mild
5 (5.00%) high mild
smart_punctuation time: [906.87 ns 907.30 ns 907.73 ns]
change: [+2.2129% +2.3546% +2.4982%] (p = 0.00 < 0.05)
Performance has regressed.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
links_n_emphasis time: [1.3646 µs 1.3659 µs 1.3677 µs]
change: [+0.4749% +0.6685% +0.9421%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 11 outliers among 100 measurements (11.00%)
2 (2.00%) low mild
3 (3.00%) high mild
6 (6.00%) high severe
unescapes time: [4.7764 µs 4.7776 µs 4.7788 µs]
change: [+1.6330% +1.6841% +1.7336%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) low severe
2 (2.00%) low mild
5 (5.00%) high mild
1 (1.00%) high severe
autolinks_n_html time: [1.7091 µs 1.7153 µs 1.7273 µs]
change: [-0.9870% -0.7915% -0.4940%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) low mild
2 (2.00%) high mild
6 (6.00%) high severe
crdt_total and crdt_html look too bad... Have those benches been compiled with the optimization flags in Cargo.toml? I don't think so, because they are not in the 0.11 branch.
If this performance downgrade is not due to the compilation flags, I think we should try to improve the performance of this approach before merging it, because the performance regression is significant.
I have run the benchmarks with the Cargo.toml optimizations and the performance difference hurts :disappointed: :
cargo bench
crdt_total time: [267.05 µs 267.42 µs 267.91 µs]
change: [+18.049% +18.136% +18.241%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
2 (2.00%) high mild
7 (7.00%) high severe
crdt_html time: [97.275 µs 97.308 µs 97.345 µs]
change: [+17.381% +17.530% +17.668%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
6 (6.00%) high mild
4 (4.00%) high severe
crdt_all_options_parse time: [225.13 µs 225.31 µs 225.56 µs]
change: [+16.989% +17.244% +17.451%] (p = 0.00 < 0.05)
Performance has regressed.
Found 3 outliers among 100 measurements (3.00%)
3 (3.00%) high mild
crdt_parse time: [198.01 µs 198.43 µs 199.24 µs]
change: [+17.302% +17.700% +18.028%] (p = 0.00 < 0.05)
Performance has regressed.
Found 19 outliers among 100 measurements (19.00%)
4 (4.00%) low severe
5 (5.00%) low mild
3 (3.00%) high mild
7 (7.00%) high severe
smart_punctuation time: [1.6568 µs 1.6607 µs 1.6672 µs]
change: [+4.7981% +5.2080% +5.5433%] (p = 0.00 < 0.05)
Performance has regressed.
Found 6 outliers among 100 measurements (6.00%)
1 (1.00%) high mild
5 (5.00%) high severe
links_n_emphasis time: [2.2786 µs 2.2829 µs 2.2886 µs]
change: [+2.1132% +2.5259% +2.9040%] (p = 0.00 < 0.05)
Performance has regressed.
Found 10 outliers among 100 measurements (10.00%)
1 (1.00%) high mild
9 (9.00%) high severe
unescapes time: [7.3092 µs 7.3848 µs 7.4697 µs]
change: [+2.6863% +3.2635% +3.8276%] (p = 0.00 < 0.05)
Performance has regressed.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) high mild
14 (14.00%) high severe
autolinks_n_html time: [3.0157 µs 3.0189 µs 3.0232 µs]
change: [+5.4275% +5.8034% +6.0782%] (p = 0.00 < 0.05)
Performance has regressed.
Found 11 outliers among 100 measurements (11.00%)
3 (3.00%) high mild
8 (8.00%) high severe
cargo bench --all-features
crdt_total time: [201.01 µs 201.21 µs 201.52 µs]
change: [+6.8731% +6.9886% +7.1038%] (p = 0.00 < 0.05)
Performance has regressed.
Found 7 outliers among 100 measurements (7.00%)
2 (2.00%) high mild
5 (5.00%) high severe
crdt_html time: [76.339 µs 76.357 µs 76.376 µs]
change: [+28.286% +28.717% +29.100%] (p = 0.00 < 0.05)
Performance has regressed.
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe
crdt_all_options_parse time: [177.79 µs 178.14 µs 178.76 µs]
change: [-1.0443% -0.4188% +0.0965%] (p = 0.17 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
6 (6.00%) low severe
1 (1.00%) low mild
2 (2.00%) high mild
5 (5.00%) high severe
crdt_parse time: [146.35 µs 146.76 µs 147.47 µs]
change: [+1.0230% +1.1972% +1.4127%] (p = 0.00 < 0.05)
Performance has regressed.
Found 13 outliers among 100 measurements (13.00%)
6 (6.00%) low severe
1 (1.00%) low mild
1 (1.00%) high mild
5 (5.00%) high severe
smart_punctuation time: [1.6830 µs 1.6834 µs 1.6837 µs]
change: [+1.8682% +2.3031% +2.6376%] (p = 0.00 < 0.05)
Performance has regressed.
Found 8 outliers among 100 measurements (8.00%)
2 (2.00%) high mild
6 (6.00%) high severe
links_n_emphasis time: [2.3000 µs 2.3017 µs 2.3037 µs]
change: [+0.5134% +0.7322% +1.1485%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 15 outliers among 100 measurements (15.00%)
2 (2.00%) high mild
13 (13.00%) high severe
unescapes time: [7.6489 µs 7.6584 µs 7.6717 µs]
change: [+2.0050% +2.2402% +2.5636%] (p = 0.00 < 0.05)
Performance has regressed.
Found 12 outliers among 100 measurements (12.00%)
5 (5.00%) high mild
7 (7.00%) high severe
autolinks_n_html time: [2.7526 µs 2.7533 µs 2.7543 µs]
change: [+0.1459% +0.2366% +0.3171%] (p = 0.00 < 0.05)
Change within noise threshold.
Found 13 outliers among 100 measurements (13.00%)
3 (3.00%) low severe
1 (1.00%) low mild
2 (2.00%) high mild
7 (7.00%) high severe
I guess it's pretty conclusive anyway but crdt_parse shouldn't call the escaping functions at all, right?
I did some profiling on crdt.md, and the overhead is pretty apparent there too. (write_fmt calls not inlined, significant amount of time spent in plumbing such as fmt::Formatter::new)
Abandoning this PR for now. I will preserve the branch in my own fork.
|
gharchive/pull-request
| 2024-04-22T18:10:39 |
2025-04-01T04:35:35.609352
|
{
"authors": [
"Martin1887",
"ollpu"
],
"repo": "pulldown-cmark/pulldown-cmark",
"url": "https://github.com/pulldown-cmark/pulldown-cmark/pull/881",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2471550445
|
Master-Slave transaction on same Chimney
I want to confirm whether a master connected on one chimney can initiate a transaction with a slave on the same chimney.
Theoretically yes. The router will just send the incoming packet in the same direction again to the subordinate. You just have to make sure that loopback is enabled in the router.
|
gharchive/issue
| 2024-08-17T14:41:09 |
2025-04-01T04:35:35.610934
|
{
"authors": [
"fischeti",
"mubashir913"
],
"repo": "pulp-platform/FlooNoC",
"url": "https://github.com/pulp-platform/FlooNoC/issues/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
796369752
|
Broken anchor tags in python SDK docs
Problem description
The python SDK doc's anchor tags don't work. For example, automation API:
The link from the anchor icon is https://www.pulumi.com/docs/reference/pkg/python/pulumi/#automation-api-1 - which doesn't work.
However, linking https://www.pulumi.com/docs/reference/pkg/python/pulumi/#module-pulumi.x.automation actually gets you to the correct spot.
This issue is probably due to how we dump sphinx-generated html into markdown and then Hugo does its thing to the markdown to generate into Pulumi format.
Related to: https://github.com/pulumi/docs/issues/4615
This may require changing the styles we have in https://github.com/pulumi/docs/blob/master/assets/sass/_api-python.scss
|
gharchive/issue
| 2021-01-28T21:03:00 |
2025-04-01T04:35:35.626899
|
{
"authors": [
"justinvp",
"komalali"
],
"repo": "pulumi/docs",
"url": "https://github.com/pulumi/docs/issues/5047",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1816465499
|
Bad formatting of deprecation notices in resource docs
https://www.pulumi.com/registry/packages/azure/api-docs/containerservice/registrywebook/
Looks like this is malformed markup resulting from doc-gen templating:
@interurban Assigning to you, probably needs to go to the Docs board
Duplicate of https://github.com/pulumi/pulumi-hugo/issues/2832.
|
gharchive/issue
| 2023-07-21T23:06:48 |
2025-04-01T04:35:35.629236
|
{
"authors": [
"cnunciato",
"scottslowe"
],
"repo": "pulumi/docs",
"url": "https://github.com/pulumi/docs/issues/9541",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2113074875
|
ast: ranges for accessors in single-line flow strings
These changes add support for reporting ranges for accessors in single-line flow strings (i.e. single-line strings that are not quoted).
Multi-line strings are quite a bit more complicated to support. The string we get from the YAML parser has already been processed to remove indentation, fold newlines, etc., so its bytes do not represent the bytes in the original file. We will want to add support for these strings in the future, but doing so is something of an open problem.
Split this into two commits: one for the code changes + one for the updates to test baselines. The test updates are extremely noisy due to the addition of the range information.
|
gharchive/pull-request
| 2024-02-01T17:44:11 |
2025-04-01T04:35:35.630908
|
{
"authors": [
"pgavlin"
],
"repo": "pulumi/esc",
"url": "https://github.com/pulumi/esc/pull/231",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
255152802
|
How to allow ?
sanitize-html is replacing the src with data-filename even though I added allowedAttributes: false
You need to update the scheme:
allowedSchemes: ['http', 'https', 'ftp', 'mailto', 'data']
|
gharchive/issue
| 2017-09-05T03:46:20 |
2025-04-01T04:35:35.653704
|
{
"authors": [
"kamgasimo",
"michelsalib"
],
"repo": "punkave/sanitize-html",
"url": "https://github.com/punkave/sanitize-html/issues/159",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1855872095
|
[Feature]: How to use SSL websocket with remote debugging on Puppeteer?
Feature description
Hope you are doing well. I am going to use an SSL web socket for remote debugging with Puppeteer. Normally, we use ws://127.0.0.1:9222 for remote debugging.
I will appreciate if you could please help me. Thank you.
ws://localhost:9222/devtools/browser/b0b8a4fb-bb17-4359-9533-a8d9f3908bd8 to wss://localhost:9222/devtools/browser/b0b8a4fb-bb17-4359-9533-a8d9f3908bd8
I am not sure if the browsers support SSL sockets. The recommended way for better security is to use pipes (see the pipe option in https://pptr.dev/api/puppeteer.launchoptions/#properties).
If Puppeteer is used on a remote computer via the puppeteer.connect function, it would be dangerous because the communication is not secured.
Do you have an alternative method to avoid this issue?
@honestydeveloper the secure solution is not to expose the browser's debugging server directly; instead it's better to start the browser with the debugging pipe and have a custom secure web socket server forward commands. Or automate over a secure network.
If I use the pipe option, it could never pass some security companies' checks.
Thus, I have to use remote debugging over the websocket.
I am not familiar with automating over a secure network, so could you please explain it in more detail?
I am not sure either as it depends on your use case, but that goes beyond the scope of Puppeteer.
|
gharchive/issue
| 2023-08-18T00:28:46 |
2025-04-01T04:35:35.673059
|
{
"authors": [
"OrKoN",
"honestydeveloper"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/10747",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
714096565
|
The example search.js uses an invalid selector
Steps to reproduce
Tell us about your environment:
Puppeteer version: N/A
Platform / OS version: N/A
URLs (if applicable): N/A
Node.js version: N/A
What steps will reproduce the problem?
See broken example
Please include code that reproduces the issue.
N/A
What is the expected result?
The correct selector is used
What happens instead?
The selector is invalid
I will raise a PR to fix this issue.
|
gharchive/issue
| 2020-10-03T14:00:23 |
2025-04-01T04:35:35.676781
|
{
"authors": [
"thomaschaplin"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/issues/6470",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1559724086
|
chore: Add launch.json for debugging tests
This makes running or debugging tests easier for VSCode users.
Have to check DevContainer behavior, but should make contributing from there easier as well.
TODO: Add documentation.
Adding launch.json is generally not done because debugging is very use-case specific. This particular JSON doesn't really take advantage of breakpoint debugging either; it's just running the tests from what I see. What may make sense is a launch.tmpl.json that contains an example configuration and also implements debugging as featured on pptr.dev. This would help developers of Puppeteer, but also model current usage.
VSCode attaches a debugger to Node when running from launch.json, so breakpoints put in Puppeteer or the tests do stop. Implementing the thing in the debugging page is for users of Puppeteer, not the contributors this PR is aimed at.
|
gharchive/pull-request
| 2023-01-27T12:53:51 |
2025-04-01T04:35:35.678775
|
{
"authors": [
"Lightning00Blade",
"jrandolf"
],
"repo": "puppeteer/puppeteer",
"url": "https://github.com/puppeteer/puppeteer/pull/9599",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
489885067
|
It's hard to debug dynamic inventory.
Use Case
It can be hard to debug dynamic inventory files. For example, it's easy for a user to accidentally refer to dynamic information outside of a plugin block, and other than writing a specific plan there is no way to see what is going on.
Describe the Solution You Would Like
A command that resolves dynamic information in the inventory file and prints the results.
It probably should support both resolving the entire file or the data for a specific target. Should it show information only for a specific group or unmerged info for a target?
Use case 1: A target I'm trying to run on isn't using the config I expect. I want to see what config it's using.
Use case 2: I want to see the entire inventory resolved.
Use case 3: I also want to be able to know where a target got the config it's using.
I think we should focus on usecase 1 for now.
bolt inventory show -n mynode --verbose seems like a good starting syntax for that, but there are a few complications:
We should try not to resolve all references needed for the target when --verbose is not passed. Do we do this now?
We should try to keep --verbose as an outputter option rather than using it in multiple places in the code.
Show all config/data fully resolved from cli, config, plugins, defaults, groups etc for a target. Try to implement it where given a --detail flag, instead of returning Target strings return an array of fully populated target data.
If --detail cannot be used (or the person who implements it feels strongly about a new command), we will implement a new sub-command: bolt target show <target name>.
|
gharchive/issue
| 2019-09-05T16:58:11 |
2025-04-01T04:35:35.682904
|
{
"authors": [
"adreyer",
"donoghuc",
"lucywyman"
],
"repo": "puppetlabs/bolt",
"url": "https://github.com/puppetlabs/bolt/issues/1200",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1084609876
|
Add GitHub Actions annotations
It is possible to get inline annotations with GitHub Actions. This is based on Rubocop's formatter.
It's implemented as a configuration option that's dynamic by default. This means it will show up in actions, but not in other environments. Overriding is still possible.
This is currently very light on tests. I also wonder about the additional class.
See https://github.com/voxpupuli/puppet-example/pull/19 for an example:
It looks like no CI ran, only the CLA check. I'm not sure why that is.
It looks like no CI ran, only the CLA check. I'm not sure why that is.
That’s weird…
Found it and it's the same thing we suffer from in theforeman: https://github.com/puppetlabs/puppet-lint/actions/workflows/ci.yml shows
This scheduled workflow is disabled because there hasn't been activity in this repository for at least 60 days. Enable this workflow to resume scheduled runs.
This is because there's a schedule and regular PR testing in the same workflow, which is very annoying.
See, it was good that we enabled CI. Looks like I need to please Rubocop.
It looks like there's an interesting interaction I didn't foresee: the CI runs on GitHub actions as well so that's generating the warnings in this test suite, which was not intended. Looks like I need to use the configuration option I implemented.
@ekohl Apologies, I forgot to get back to you. This does indeed break PDK rendering rather spectacularly when running with a GITHUB_ACTION env variable set. Looks like both output formats get used.
output.txt
Fascinating. I wonder what it's doing. Is it by any chance using JSON output?
I believe https://github.com/puppetlabs/puppet-lint/pull/35 should fix it, or at least unbreak JSON mode. Could you verify that?
👍 looks good
|
gharchive/pull-request
| 2021-12-20T10:19:47 |
2025-04-01T04:35:35.701882
|
{
"authors": [
"binford2k",
"ekohl",
"genebean"
],
"repo": "puppetlabs/puppet-lint",
"url": "https://github.com/puppetlabs/puppet-lint/pull/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
107250287
|
(PUP-4932) Deprecate --cfacter option
This commit deprecates the --cfacter setting, as it is no longer
needed with native Facter being shipped in AIO by default.
@joshcooper is this wording reasonable? I wasn't sure what all of the possible scenarios would be that someone could be using this option, so maybe mentioning 'puppet-agent' wouldn't always make sense?
Also, I wanted to add a spec test for this, but we don't have any for features, and I couldn't seem to get it to fire off in spec/unit/default_spec.rb. I've tested it in a puppet-agent build, though, and the deprecation goes off as expected.
I suppose someone could be running puppet not within puppet-agent (from source, maybe?)
yeah, or as a gem, e.g. vagrant+puppet. They'll get ruby facter (2.x), and cfacter doesn't make sense there either.
@nfagerlund thoughts on the deprecation warning wording?
For tests, you could add a test to the agent or apply application to make sure a deprecation is issued, like we do about webrick in spec/unit/application/master_spec.rb? Or not, I could go either way.
For the setting description, I'm fine with what @whopper changed it to.
For the deprecation warning, what do you think about The cfacter setting is deprecated. You can use Facter 3 and higher without this setting.
@nfagerlund Ah yeah, much better - Updated.
@joshcooper Working on the test, but having some annoying 'Facter has already evaluated facts' issues..
I also removed my original deprecated => completely from the setting in defaults.rb, as that causes a second deprecation warning to fire off, and I'm not sure that we commonly use that option anyway.
@whopper sorry I think I just internalized what you said earlier about other use cases. So if someone is running puppet from source/gem, which pulls in ruby facter, but they really do want to use native facter, e.g. it's in their PATH, LOAD_PATH, etc. With this PR, they would get a deprecation warning even though they really do want native facter and the cfacter switch is the only way to tell puppet to use native facter:
0 ~/work/puppet (master) $ be puppet apply -e "notify { 'foo': }" --cfacter
Warning: The cfacter setting is deprecated. You can use Facter 3 and higher without this setting.
(at /Users/josh/work/puppet/lib/puppet/feature/cfacter.rb:5:in `block in <top (required)>')
Error: Could not initialize global default settings: cfacter version 0.2.0 or later is not installed.
But perhaps this isn't even possible anymore? I don't know if native facter still provides the cfacter shim to load native facter. @peterhuene, @kylog?
Facter 3 does not provide the cfacter.rb shim and cfacter as a separate project is abandoned at this point, so this setting doesn't make sense.
We don't have great documentation on how to run puppet from source with facter 3, but I think that should be addressed outside of this ticket (and shouldn't involve the --cfacter setting).
Btw, not the same concern but somewhat related: https://github.com/puppetlabs/puppet-agent/pull/289
@joshcooper or @whopper: while your'e thinking cfacter-is-gone thoughts, I PR'd the file_paths spec also: https://github.com/puppetlabs/puppet-specifications/pull/49
|
gharchive/pull-request
| 2015-09-18T18:11:41 |
2025-04-01T04:35:35.709289
|
{
"authors": [
"joshcooper",
"kylog",
"nfagerlund",
"whopper"
],
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/4265",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
171195921
|
(PUP-6602) Confine pcore loader test to master
This commit updates the pcore loader precedence test to execute
on the master only. Prior to this commit, the test failed when run
on an agent.
CLA signed by all contributors.
|
gharchive/pull-request
| 2016-08-15T15:37:28 |
2025-04-01T04:35:35.710738
|
{
"authors": [
"johnduarte",
"puppetcla"
],
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/5199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
195370249
|
(DO NOT MERGE)(PUP-6481) Remove external_facts feature
The external_facts feature is deprecated, since Puppet now relies on
a version of Facter with that support. This commit is part of the Puppet
5.0 code removal effort.
Note: this PR can't be merged until master is ready for 5.0 work.
CLA signed by all contributors.
|
gharchive/pull-request
| 2016-12-13T21:14:13 |
2025-04-01T04:35:35.711996
|
{
"authors": [
"puppetcla",
"whopper"
],
"repo": "puppetlabs/puppet",
"url": "https://github.com/puppetlabs/puppet/pull/5429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
158547967
|
apt/params: Remove unused LSB facts
These facts aren't used anywhere in the APT module and at least lsbminordistrelease doesn't seem to be set at all anymore in recent versions of Puppet. This causes the following warning to show up on Puppet 4.3:
Warning: Undefined variable 'lsbminordistrelease';
(file & line not available)
On 4.5 we get the slightly more helpful:
Warning: Unknown variable: '::lsbminordistrelease'. at /etc/puppetlabs/code/environments/production/modules/apt/manifests/params.pp:15:32
@bmjen If you're looking at apt PRs anyway, here's another 😄.
Thanks @daenney !
|
gharchive/pull-request
| 2016-06-05T10:30:02 |
2025-04-01T04:35:35.713717
|
{
"authors": [
"bmjen",
"daenney"
],
"repo": "puppetlabs/puppetlabs-apt",
"url": "https://github.com/puppetlabs/puppetlabs-apt/pull/610",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
64106423
|
Fix check_password function and use the standard rabbitmq api call instead
Old one was failing with:
Error: Execution of '/usr/sbin/rabbitmqctl eval rabbit_auth_backend_internal:check_user_login(<<"root">>, [{password, <<"pass">>}]).' returned 2: Error: {undef,
[{rabbit_auth_backend_internal,check_user_login,
[<<"root">>,[{password,<<"pass">>}]],
[]},
{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,569}]},
{rpc,'-handle_call_call/6-fun-0-',5,
[{file,"rpc.erl"},{line,205}]}]}
rabbitmq-server 3.5.0-1
This is a duplicate of #322, though the implementation here uses the suggestions found in code review. I like that the implementation in #322 is backwards compatible with older versions of rabbit. Perhaps collaboration is possible?
I think this is the preferred approach over #322, hoping to get more feedback there. Since tests aren't failing and no new functionality was added I think this PR is complete, just waiting for more community feedback.
I am currently using this fix with no issues on Debian 7.
I have also tested and am using this fix as it seems cleaner.
It is noted not to support <2.3.0 in the other PR, but the module readme states it is tested on >3.0, so it should be no problem.
Wouldn't be hard to add support for <2.3.0, but can't imagine it's required. It would be a regression if anyone is on a 4 year old rabbitmq.
Thank you guys.
Works perfectly on Debian Jessie.
|
gharchive/pull-request
| 2015-03-24T21:37:25 |
2025-04-01T04:35:35.731107
|
{
"authors": [
"cmurphy",
"dalees",
"errygg",
"khaefeli",
"nibalizer",
"tamaskozak"
],
"repo": "puppetlabs/puppetlabs-rabbitmq",
"url": "https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/332",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
168012210
|
Add definition validation and munge for ha-sync-batch-size
With the release of RabbitMQ 3.6 there is an option to set the ha-sync-batch-size.
This option takes an integer >0. This commit adds a validation statement to check
the supplied value is an integer and then converts the string input into an
integer.
I'm not sure how what I changed has caused tests to fail. Would someone with more experience be able to help me figure out why?
@combatdud3 You have a typo in raise ArgumentError.
I fixed that in my PR https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/500 and added a Test for it.
#500
|
gharchive/pull-request
| 2016-07-28T04:36:05 |
2025-04-01T04:35:35.733695
|
{
"authors": [
"combatdud3",
"mxftw"
],
"repo": "puppetlabs/puppetlabs-rabbitmq",
"url": "https://github.com/puppetlabs/puppetlabs-rabbitmq/pull/489",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2002901149
|
Fix primitive assignment effect
We need to return the value in addition to just setting the value of the box. This is not a super nice fix. I think we should refactor the prim effect stuff so that we get more control over what bindings we create; then we can also compile these to plain unboxed assignment.
Fixes #204
I guess we don't need to unbox it right after box-set! and could just return the value that was set.
|
gharchive/pull-request
| 2023-11-20T19:59:43 |
2025-04-01T04:35:35.795280
|
{
"authors": [
"anttih"
],
"repo": "purescm/purescm",
"url": "https://github.com/purescm/purescm/pull/205",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
103826840
|
Unable to initialize client: unable to use endpoint scheme , http/https only
I get this error anytime I try to use the webui.
I was running fleet ui as
/usr/bin/docker run --rm --name fleet_ui --memory="128m" \
-p 3000:3000 \
-e ETCD_PEER=172.17.9.101 \
-v /home/core/.ssh/id_rsa:/root/id_rsa \
purpleworks/fleet-ui
I also tried
/usr/bin/docker run --rm --name fleet_ui --memory="128m" \
-p 3000:3000 \
-v /home/core/.ssh/id_rsa:/root/id_rsa \
purpleworks/fleet-ui
it appears that this ultimately called fleetctl as an example
fleetctl --endpoint 172.17.9.101 start --no-block=true ./new-service.service
which returns the same error
I am now running fleet ui like so:
/usr/bin/docker run --rm --name fleet_ui --memory="128m" \
-p 3000:3000 \
-e ETCD_PEER=http://172.17.9.101:4001 \
-v /home/core/.ssh/id_rsa:/root/id_rsa \
purpleworks/fleet-ui
and it is now working.
So something in the default options is incorrect and so is the readme.
Thanks for reporting! The option has been changed; I'll update the documentation as soon as possible.
|
gharchive/issue
| 2015-08-29T03:58:05 |
2025-04-01T04:35:35.802287
|
{
"authors": [
"lancehudson",
"subicura"
],
"repo": "purpleworks/fleet-ui",
"url": "https://github.com/purpleworks/fleet-ui/issues/31",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2178614126
|
"e3dc/set/interval" not considered
It seems that "e3dc/set/interval" is not considered, while the INTERVAL value of config file is taken into account on startup.
When starting rscp2mqtt, INTERVAL setting in .config file sets the initial cadence of writing data to MQTT broker. When INTERVAL is changed and rscp2mqtt got a restart, new frequency is applied.
README.md reports that the cadence can be changed by e.g. mosquitto_pub -h localhost -p 1883 -t "e3dc/set/interval" -m 2, to change the interval. MQTT Explorer reflects the change in correct topic, but rscp2mqtt's update frequency doesn't change.
Commenting INTERVAL in config file results in default 1s cadence, but also here, newly published interval doesn't change this.
Btw.-1 "set/*" topics are not created by rscp2mqtt on startup, which might or might not be expected, e.g. to avoid feedback loops.:check:
Btw.-2 mosquitto_pub -h localhost -p 1883 -t "e3dc/set/force" -m 1 does result in a full topics refresh (except e3dc/set/*) :+1:
yes, it's a bug. I can reproduce it. Will be fixed by the next release.
fixed by v3.17 today
|
gharchive/issue
| 2024-03-11T09:10:20 |
2025-04-01T04:35:35.811966
|
{
"authors": [
"cwihne",
"pvtom"
],
"repo": "pvtom/rscp2mqtt",
"url": "https://github.com/pvtom/rscp2mqtt/issues/52",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
467074332
|
Unable to jailbreak with newest release on iOS 11.4.1
I know for a fact that the issue is not with machswap. There must be something that was changed that is interfering with exploiting the kernel. I haven't had a single successful attempt, even on iOS 11 with the latest update.
Device (please complete the following information):
iOS Version: 11.4.1
iOS Device: iPhone 5S 6,1
unc0ver Version: 3.0.0-b1
Place an "x" between the brackets if true:
[ ] this is a bug others will be able to reproduce
[ ] this issue is present with all tweaks uninstalled(except for default packages) or disabled
[ ] this issue is present after a rootfs restore
[x] this issue is present on the latest version of unc0ver
Just use u0 v3.2.1 for 11.4.1. I have used u0 3.2.1 on 11.4.1, uptime has reached 10 days, and it is still stable.
I know I can use that; I'm just here to help out with debugging issues, as I don't care about my device since it's a test device anyway.
|
gharchive/issue
| 2019-07-11T19:46:44 |
2025-04-01T04:35:35.823568
|
{
"authors": [
"Merculous",
"coenkcore"
],
"repo": "pwn20wndstuff/Undecimus",
"url": "https://github.com/pwn20wndstuff/Undecimus/issues/1123",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
627686458
|
Jailbreak bypass
I have a simple app called QueQ but cannot get past its jailbreak detection with a bypass.
Any advise? Thank you
This is nothing to do with the jailbreak itself, ask on reddit or something, or maybe try google, because the jailbreak devs aren't gonna do anything about it.
Your question is pointless. You simply cannot ask a dev about bypasses; that's a problem you have to accept when you jailbreak. If you want a bypass, you should ask other devs that already make bypasses, such as kernbypass (not recommended) or Liberty Lite.
If my question is pointless, why did you respond?! Your recommended and unrecommended bypasses still don't work. Try and see.
The only way it works for me is to get an old version of this app (I used AppStore++ to get an old version). Still, the app (with or without bypasses) won't work until I go back to an unjailbroken state (by a reboot). But this is inconvenient and I'm looking for a better way.
I responded because I'm still trying to help, but you shouldn't be asking for help here, this is completely unrelated to the actual jailbreak, you should make a post on reddit,
The issues here are for actual important and relevant problems, jailbreak detection has been a problem for years, and if you are part of the jailbreak community you should know about the problems.
I am simply trying to help the dev by clearing up some useless issues, which he shouldn't have to deal with. And why would you ask why I responded, do you want to be helped or not?
In short, ask on reddit, and close this issue, and stop filing useless questions like this
Bypassing jailbreak detection is not related to the jailbreak?! OK, thank you.
No that’s not what I’m saying, you are essentially blaming the jailbreak dev, ie pwn20wnd that the app you need has jailbreak detection, when you should be asking for help on reddit.
Jailbreak detection is not relevant here in an unc0ver issue report
blaming?! check again what I have said in the first line
I can see you keep promoting Reddit but unc0ver needs to know the feedback and all issues need to be discussed
But unc0ver and pretty much every single jailbreak out there knows about this, this is basic stuff, every single person that jailbreaks knows that some apps can’t be used, and what makes pwn20wnd different. This is completely irrelevant and shouldn’t be here end of
You're not doing the jailbreak community a favour by asking here; in fact, you're just making things harder for the devs.
Nothing is easy. Developers look for challenges.
unc0ver 5.0.1 is better than the previous versions, but we need to report apps whose jailbreak detection cannot be bypassed to let the devs do a better job.
I have reported a simple app that should be easy to bypass.
Apps like "QNB Mobile" are more challenging.
I know what you are trying to say, but this isn't the place. pwn20wnd has nothing to do with any jailbreak bypasses, and I doubt he would want to try. If you had to report an app that doesn't work, this isn't the place to do it. I bet you that the app doesn't work with checkra1n either, which means it isn't anything to do with the actual jailbreak, which means this isn't the place you should be complaining. Instead of arguing, you could help yourself by asking on Discord or Reddit, which is not advertising; they are the two places you are supposed to go to, not the unc0ver issues.
|
gharchive/issue
| 2020-05-30T09:14:23 |
2025-04-01T04:35:35.829747
|
{
"authors": [
"SarKaa",
"the1ulike"
],
"repo": "pwn20wndstuff/Undecimus",
"url": "https://github.com/pwn20wndstuff/Undecimus/issues/2039",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1223886199
|
Reply to comments from pull request view
Issue Description
feature request
Describe what happened (or what feature you want)
It seems that comment add only works in the context of a review. As the creator of a PR, I want to be able to reply to comments from my reviewers without "starting" a review. Currently, if I press <space>ca or run :Octo comment add in the PR overview tab I get this message
Start a new review to reply to a thread
Describe what you expected to happen
pressing <space>ca in pr comment thread should open a comment buffer
How to reproduce it (as minimally and precisely as possible)
Open a PR from :Octo pr list
press <space>ca in a comment thread or run :Octo comment add
see this message Start a new review to reply to a thread
Tell us your environment
MacOS Monterey 12.3
Neovim 0.7
octo.nvim latest (installed with vim-plug)
Anything else we need to know?
Thanks for reporting, this was figured out by @ldelossa here. Will try to implement it soon
@pwntester awesome, thank you 😃
@axkirillov Just implemented this as part of #294 would you mind testing it?
@axkirillov thanks for letting me know; no need to debug though. If you could please send me the error message or a screenshot, I will look into it.
Thanks @alex-popov-tech yep, please cursor on THREAD COMMENT line or any or the comments on the thread and press <space>ca or :Octo comment add
@pwntester it works! But for some reason it duplicates the comment, i.e. I put one comment in the buffer, then do :w to send it, then I do Octo pr reload and then I see 2 identical comments :(
@alex-popov-tech that's weird, I cannot reproduce it. Can you please send a video so I can see the exact line where you run the add comment command?
|
gharchive/issue
| 2022-05-03T09:43:31 |
2025-04-01T04:35:35.836478
|
{
"authors": [
"alex-popov-tech",
"axkirillov",
"pwntester"
],
"repo": "pwntester/octo.nvim",
"url": "https://github.com/pwntester/octo.nvim/issues/288",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1349495360
|
🛑 NARLabs is down
In a67d853, NARLabs (https://www.narlabs.org.tw/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NARLabs is back up in d1cab31.
|
gharchive/issue
| 2022-08-24T13:58:44 |
2025-04-01T04:35:35.850037
|
{
"authors": [
"pwtsai"
],
"repo": "pwtsai/upptime",
"url": "https://github.com/pwtsai/upptime/issues/235",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
680661831
|
[gpuCI] Auto-merge branch-0.2 to branch-0.3 [skip ci]
Auto-merge triggered by push to branch-0.2 that creates a PR to keep branch-0.3 up-to-date. If this PR is unable to be immediately merged due to conflicts, it will remain open for the team to manually merge.
SUCCESS - Auto-merge complete.
|
gharchive/pull-request
| 2020-08-18T03:33:35 |
2025-04-01T04:35:35.854572
|
{
"authors": [
"pxLi"
],
"repo": "pxLi/spark-rapids",
"url": "https://github.com/pxLi/spark-rapids/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
394877914
|
Add searchable addresses as well
As a lot of people aren't reading that the breach site only works with public keys, AroDev has requested that this include addresses as well.
Closing this as I'm not sure this would be used much.
|
gharchive/issue
| 2018-12-30T14:53:31 |
2025-04-01T04:35:35.856203
|
{
"authors": [
"pxgamer"
],
"repo": "pxgamer/arionum-breaches",
"url": "https://github.com/pxgamer/arionum-breaches/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1696330919
|
Unable to use the search function of the PyAEDT home page
Discussed in https://github.com/pyansys/pyaedt/discussions/2963
Originally posted by Dicksky May 4, 2023
Hello, I can't use the search function on PyAEDT's documentation home page. No matter what keywords I enter, I don't get any entries. Can you help me?
Adding @Revathyvenugopal162 and @jorgepiloto for visibility.
|
gharchive/issue
| 2023-05-04T16:37:18 |
2025-04-01T04:35:35.877390
|
{
"authors": [
"MaxJPRey"
],
"repo": "pyansys/pyaedt",
"url": "https://github.com/pyansys/pyaedt/issues/2964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
461519722
|
More input currents
Add some more input currents which are a function of (dimensional) time. Allow reading a drive cycle from a .csv file (e.g. C-rate vs time [seconds]).
@rtimms something like this should work:
import scipy.interpolate as interp
data = ...
# Shape preserving interpolation of current: do the hard work offline
interp_function = interp.PchipInterpolator(data["time"], data["current"])
def current_data(t):
    return interp_function(t)
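For the .csv part of the request, a minimal sketch of reading a drive cycle and feeding it into the same interpolant; the file name and column names here are assumptions, not an agreed format:
import pandas as pd
import scipy.interpolate as interp

# Hypothetical drive-cycle file with "time" (seconds) and "current" (A or C-rate) columns
data = pd.read_csv("drive_cycle.csv")

# Shape-preserving interpolation of current: do the hard work offline
interp_function = interp.PchipInterpolator(data["time"], data["current"])

def current_data(t):
    # Evaluate the interpolated drive-cycle current at dimensional time t
    return interp_function(t)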
|
gharchive/issue
| 2019-06-27T13:08:52 |
2025-04-01T04:35:35.885673
|
{
"authors": [
"rtimms",
"tinosulzer"
],
"repo": "pybamm-team/PyBaMM",
"url": "https://github.com/pybamm-team/PyBaMM/issues/483",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2294039160
|
Automate URL/Slug genration
The program should automatically generate slugs from the title if one is not explicitly specified.
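A minimal sketch of the kind of fallback that could do this; the function and field names are illustrative, not Meetlify's actual API:
import re

def slugify(title: str) -> str:
    # Lowercase, replace runs of non-alphanumerics with a hyphen, trim stray hyphens
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def resolve_slug(meta: dict) -> str:
    # Use the explicit slug when given, otherwise derive it from the title
    return meta.get("slug") or slugify(meta["title"])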
fixed with 1243d34
|
gharchive/issue
| 2024-05-13T23:45:58 |
2025-04-01T04:35:35.926245
|
{
"authors": [
"seowings"
],
"repo": "pybodensee/meetlify",
"url": "https://github.com/pybodensee/meetlify/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124323083
|
Allow dynamic discovery entry_points configuration
We currently allow console_scripts; however, dynamic discovery entry points are not configurable.
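For context, this is the shape of setuptools entry points beyond console_scripts that a build tool would need to let users configure; this is a generic setuptools example, not PyBuilder's actual API:
from setuptools import setup

setup(
    name="my-plugin-host",
    entry_points={
        # Already configurable today
        "console_scripts": ["my-tool = my_pkg.cli:main"],
        # Dynamic discovery groups like this one are what the issue asks to expose
        "my_pkg.plugins": ["csv = my_pkg.plugins.csv:CsvPlugin"],
    },
)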
@mriehl please review when available
|
gharchive/issue
| 2015-12-30T09:12:09 |
2025-04-01T04:35:35.927664
|
{
"authors": [
"arcivanov"
],
"repo": "pybuilder/pybuilder",
"url": "https://github.com/pybuilder/pybuilder/issues/308",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
355452994
|
Environment variables error in base.py and local.py
When I try to run my project using the cookiecutter setup, I get this exception:
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
To solve this, I followed these steps and included the code in manage.py:
i) import django
ii) django.setup()
inside the if block.
After that, a new exception was raised, i.e.
django.core.exceptions.ImproperlyConfigured: Set the DATABASE_URL environment variable
So, it's saying that values are not getting set from the .env files.
To solve this I made changes and set the default value for the variables.
Before setting the default values for my environment variables in my base.py:
DATABASES = {
'default': env.db('DATABASE_URL'),
}
CELERY_BROKER_URL = env('CELERY_BROKER_URL')
After setting the default values for my environment variables in my base.py:
DATABASES = {
'default': env.db('DATABASE_URL',default="postgres://user_name:password@127.0.0.1:5432/db_name"),
}
CELERY_BROKER_URL = env('CELERY_BROKER_URL',default='redis://redis:6379/0')
I can't hard-code values for every service; the values should be taken from the .env files, i.e. .django and .postgres from the .local folder of the .envs directory.
Os: macOS
version : 10.13.3
Are you using docker?
No, Right now we are not using Docker but we have opted it for local and production environment for the future.
How will we run migrations if we are using Docker?
@connectgautam your question seems off-topic, please could you find a more appropriate channel?
First, make sure to examine the docs. If that doesn't help post a question on StackOverflow tagged with cookiecutter-django. Finally, feel free to join Gitter and ask around.
@Chhaya02 if you're not using Docker, I would recommend answering "no" when being asked the question by cookiecutter.
I can't hard code values for every service. it should be taken from the .env files i.e .django and .postgres from .local folder of .envs directory
Docker enables controlling the target environment from the code, by setting all the required environment variables in .envs and telling the docker-compose file what to use. Without Docker, the equivalent is to set things manually on your host machine (i.e. your Mac).
One thing that can help, which is unfortunately not well documented, is to create a single .env file for local dev. It's where you define all the variables for Django: https://cookiecutter-django.readthedocs.io/en/latest/settings.html
Then on your host machine, you should be able to get your app to read it when DJANGO_READ_DOT_ENV_FILE=True.
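A rough sketch of how that flag is typically wired up with django-environ in the settings; the paths and defaults here are assumptions, so check the generated base.py for the exact code:
import environ

ROOT_DIR = environ.Path(__file__) - 3  # assumed repo root, adjust to the project layout
env = environ.Env()

# Only read the .env file when explicitly asked to, so Docker/production
# environments that inject real environment variables are unaffected
READ_DOT_ENV_FILE = env.bool("DJANGO_READ_DOT_ENV_FILE", default=False)
if READ_DOT_ENV_FILE:
    env.read_env(str(ROOT_DIR.path(".env")))

DATABASES = {"default": env.db("DATABASE_URL")}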
If you find this helpful, you are welcome to send us a pull request to update our documentation. Maybe the page to develop locally can be updated and link to the setting page.
Thank you. I will try this.
@browniebroke It seems like one would need to remove .env from the .dockerignore file if they wanted to use them in docker. I might be thinking about this wrong. Can you give your opinion?
|
gharchive/issue
| 2018-08-30T07:10:53 |
2025-04-01T04:35:35.972571
|
{
"authors": [
"Chhaya02",
"browniebroke",
"connectgautam",
"h0h0h0",
"sfdye"
],
"repo": "pydanny/cookiecutter-django",
"url": "https://github.com/pydanny/cookiecutter-django/issues/1775",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
70914746
|
Major refactor to move PYTHON_PATH to top-level repo dir
PYTHON_PATH should now point to root_dir instead of apps_dir
manage.py now lives at the root dir
config is moved out to root dir
We have now a new directory config/settings
root_url is moved to config/urls.py
wsgi.py now resides at config/wsgi.py
New Structure, after the project is generated:
├── config
│ └── settings
├── docs
├── myproject
│ ├── contrib
│ │ └── sites
│ │ └── migrations
│ ├── static
│ │ ├── css
│ │ ├── fonts
│ │ ├── images
│ │ ├── js
│ │ └── sass
│ ├── templates
│ │ ├── account
│ │ ├── avatar
│ │ ├── pages
│ │ └── users
│ └── users
│ └── migrations
└── requirements
├── CONTRIBUTORS.txt
├── Gruntfile.js
├── LICENSE.rst
├── Procfile
├── README.rst
├── Vagrantfile
├── config
│ ├── __init__.py
│ ├── settings
│ │ ├── __init__.py
│ │ ├── common.py
│ │ ├── local.py
│ │ └── production.py
│ ├── urls.py
│ └── wsgi.py
├── docs
│ ├── Makefile
│ ├── ...
│ └── make.bat
├── install_os_dependencies.sh
├── install_python_dependencies.sh
├── manage.py
├── myproject
│ ├── __init__.py
│ ├── contrib
│ │ ├── __init__.py
│ │ └── sites
│ │ ├── __init__.py
│ │ └── migrations/
│ ├── static
│ │ ├── css
│ │ │ └── project.css
│ │ ├── fonts
│ │ ├── images
│ │ │ └── favicon.ico
│ │ ├── js
│ │ │ └── project.js
│ │ └── sass
│ │ └── project.scss
│ ├── templates
│ │ ├── 404.html
│ │ ├── 500.html
│ │ ├── account
│ │ │ ├── base.html
│ │ │ ├── ...
│ │ │ └── verified_email_required.html
│ │ ├── avatar
│ │ │ ├── add.html
│ │ │ ├── ...
│ │ │ └── confirm_delete.html
│ │ ├── base.html
│ │ ├── pages
│ │ │ ├── about.html
│ │ │ └── home.html
│ │ └── users
│ │ ├── user_detail.html
│ │ ├── user_form.html
│ │ └── user_list.html
│ └── users
│ ├── __init__.py
│ ├── admin.py
│ ├── forms.py
│ ├── migrations/
│ ├── models.py
│ ├── urls.py
│ └── views.py
├── package.json
├── requirements
│ ├── base.txt
│ ├── local.txt
│ ├── production.txt
│ └── test.txt
├── requirements.apt
├── requirements.txt
└── setup.cfg
This is second PR for #220
closes #214
(Work in Progress)
Looks great! Amazing work! I've tested it and it appears to work. Want me to merge it in?
I tested it locally, and all seems to work fine too. I was about to test the wsgi.py by deploying to Heroku, then I had to leave for some work. I'll test the heroku deployment and merge it myself.
Btw, if you happened to test it on Heroku, please merge it in! :)
This PR is not tested against Heroku deployment. There were issues[1], unrelated to this PR, which were first resolved in master, and this PR is a rebased version of master.
Merging this PR in!
[1] https://github.com/pydanny/cookiecutter-django/compare/e0e6d8a8ea9c...0cc9958635c
|
gharchive/pull-request
| 2015-04-25T14:06:04 |
2025-04-01T04:35:35.978605
|
{
"authors": [
"pydanny",
"theskumar"
],
"repo": "pydanny/cookiecutter-django",
"url": "https://github.com/pydanny/cookiecutter-django/pull/225",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2093133797
|
iOS support?
pydantic and pydantic-core do not appear to build on iOS. Do you plan to add support for it?
Or if it is supported, where can I find documentation for it?
I am using the Beeware mobile-forge and Briefcase and can't generate a binary wheel for pydantic-core, which means I can't install it on iOS.
Many libraries use Pydantic, which blocks them from being used on iOS. For example, spaCy, Presidio, thinc, and others.
Here's my mobile-forge recipe:
package:
name: pydantic-core
version: 2.14.6
Here's the output from the forge build:
$ forge iphoneos:12.0:arm64 pydantic-core
================================================================================
Building pydantic-core 2.14.6 for ios_12_0_iphoneos_arm64
================================================================================
[venv3.10-ios_12_0_iphoneos_arm64] Unpack sources
Unpacking downloads/pydantic-core-2.14.6.tar.gz...
[venv3.10-ios_12_0_iphoneos_arm64] Apply patches
No patches to apply.
[venv3.10-ios_12_0_iphoneos_arm64] Create clean build environment
Creating venv3.10-ios_12_0_iphoneos_arm64...
Verifying cross-platform environment...
done.
Cross platform-environment venv3.10-ios_12_0_iphoneos_arm64 created.
[venv3.10-ios_12_0_iphoneos_arm64] Install forge host requirements
No host requirements.
[venv3.10-ios_12_0_iphoneos_arm64] Install forge build requirements
No build requirements.
[venv3.10-ios_12_0_iphoneos_arm64] Install pyproject.toml build requirements
Looking in links: /Users/adam/src/mobile-forge/dist
Collecting build
Using cached build-1.0.3-py3-none-any.whl (18 kB)
Collecting wheel
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Processing ./dist/maturin-1.4.0-cp310-cp310-ios_12_0_iphoneos_arm64.whl
Collecting typing-extensions!=4.7.0,>=4.6.0
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Collecting pyproject_hooks
Using cached pyproject_hooks-1.0.0-py3-none-any.whl (9.3 kB)
Collecting tomli>=1.1.0
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting packaging>=19.0
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Installing collected packages: wheel, typing-extensions, tomli, packaging, pyproject_hooks, maturin, build
Successfully installed build-1.0.3 maturin-1.4.0 packaging-23.2 pyproject_hooks-1.0.0 tomli-2.0.1 typing-extensions-4.9.0 wheel-0.42.0
Looking in links: /Users/adam/src/mobile-forge/dist
Collecting build
Using cached build-1.0.3-py3-none-any.whl (18 kB)
Collecting wheel
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Collecting maturin<2,>=1
Downloading maturin-1.4.0-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl (15.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 15.7/15.7 MB 33.5 MB/s eta 0:00:00
Collecting typing-extensions!=4.7.0,>=4.6.0
Using cached typing_extensions-4.9.0-py3-none-any.whl (32 kB)
Collecting pyproject_hooks
Using cached pyproject_hooks-1.0.0-py3-none-any.whl (9.3 kB)
Collecting tomli>=1.1.0
Using cached tomli-2.0.1-py3-none-any.whl (12 kB)
Collecting packaging>=19.0
Using cached packaging-23.2-py3-none-any.whl (53 kB)
Installing collected packages: wheel, typing-extensions, tomli, packaging, pyproject_hooks, maturin, build
Successfully installed build-1.0.3 maturin-1.4.0 packaging-23.2 pyproject_hooks-1.0.0 tomli-2.0.1 typing-extensions-4.9.0 wheel-0.42.0
* Getting build dependencies for wheel...
* Building wheel...
Running `maturin pep517 build-wheel -i /Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/venv3.10-ios_12_0_iphoneos_arm64/venv3.10-ios_12_0_iphoneos_arm64/bin/python --compatibility off`
💥 maturin failed
Caused by: Cargo metadata failed. Do you have cargo in your PATH?
Caused by: No such file or directory (os error 2)
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/venv3.10-ios_12_0_iphoneos_arm64/venv3.10-ios_12_0_iphoneos_arm64/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
ERROR Backend subprocess exited when trying to invoke build_wheel
********************************************************************************
Failed build: pydantic-core 2.14.6 for iphoneos 12.0 on arm64
********************************************************************************
Traceback (most recent call last):
File "/Users/adam/src/mobile-forge/src/forge/build.py", line 267, in build
self._build()
File "/Users/adam/src/mobile-forge/src/forge/build.py", line 518, in _build
self.cross_venv.run(
File "/Users/adam/src/mobile-forge/src/forge/cross.py", line 356, in run
return subprocess.run(logfile, *args, **self.cross_kwargs(kwargs))
File "/Users/adam/src/mobile-forge/src/forge/subprocess.py", line 49, in run
raise stdlib_subprocess.CalledProcessError(return_code, args)
subprocess.CalledProcessError: Command '(['python', '-m', 'build', '--no-isolation', '--wheel', '--outdir', '/Users/adam/src/mobile-forge/dist'],)' returned non-zero exit status 1.
Failed builds for:
* pydantic-core (default version) (ios_12_0_iphoneos_arm64)
It appears that this is because maturin does not support iOS. If I run the command that the bdist_wheel script is trying to run, I get this:
$ maturin pep517 build-wheel -i /Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/venv3.10-ios_12_0_iphoneos_arm64/venv3.10-ios_12_0_iphoneos_arm64/bin/python --compatibility off --target aarch64-apple-ios
📦 Including license file "/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/LICENSE"
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
💥 maturin failed
Caused by: The operating system Ios is not supported
I looked at trying to add iOS support to maturin, but my changes don't seem to work. Here's the branch with my iOS changes:
https://github.com/adamfeuer/maturin/tree/feature/ios-support
When I use my changed maturin, I get the following output:
$ ~/src/maturin/target/release/maturin pep517 build-wheel -i python3.10 --compatibility off --target aarch64-apple-ios
📦 Including license file "/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/LICENSE"
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.10 at /Users/adam/src/mobile-forge/venv3.10/bin/python3.10
📡 Using build options features, bindings from pyproject.toml
Compiling autocfg v1.1.0
Compiling proc-macro2 v1.0.69
Compiling quote v1.0.29
Compiling unicode-ident v1.0.10
Compiling target-lexicon v0.12.9
Compiling python3-dll-a v0.2.9
Compiling once_cell v1.18.0
Compiling libc v0.2.147
Compiling static_assertions v1.1.0
Compiling heck v0.4.1
Compiling version_check v0.9.4
Compiling cfg-if v1.0.0
Compiling lexical-util v0.8.5
Compiling rustversion v1.0.13
Compiling parking_lot_core v0.9.8
Compiling tinyvec_macros v0.1.1
Compiling smallvec v1.11.1
Compiling scopeguard v1.1.0
Compiling tinyvec v1.6.0
Compiling ahash v0.8.6
Compiling num-traits v0.2.16
Compiling lock_api v0.4.10
Compiling num-integer v0.1.45
Compiling memoffset v0.9.0
Compiling num-bigint v0.4.4
Compiling lexical-parse-integer v0.8.6
Compiling lexical-write-integer v0.8.5
Compiling memchr v2.6.3
Compiling pyo3-build-config v0.20.0
Compiling serde v1.0.190
Compiling syn v2.0.38
Compiling aho-corasick v1.0.2
Compiling lexical-write-float v0.8.5
Compiling lexical-parse-float v0.8.5
Compiling unicode-normalization v0.1.22
Compiling getrandom v0.2.10
Compiling parking_lot v0.12.1
Compiling equivalent v1.0.1
Compiling percent-encoding v2.3.0
Compiling serde_json v1.0.108
Compiling hashbrown v0.14.0
Compiling indoc v2.0.4
Compiling unicode-bidi v0.3.13
Compiling regex-syntax v0.8.2
Compiling zerocopy v0.7.20
Compiling unindent v0.2.3
Compiling idna v0.4.0
Compiling indexmap v2.0.0
Compiling form_urlencoded v1.2.0
Compiling lexical-core v0.8.5
Compiling ryu v1.0.14
Compiling itoa v1.0.8
Compiling base64 v0.21.5
Compiling url v2.4.1
Compiling uuid v1.5.0
Compiling pyo3-ffi v0.20.0
Compiling pyo3 v0.20.0
Compiling pydantic-core v2.14.6 (/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6)
Compiling pyo3-macros-backend v0.20.0
error: failed to run custom build command for `pyo3-ffi v0.20.0`
Caused by:
process didn't exit successfully: `/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/target/release/build/pyo3-ffi-b9fbb289a993548c/build-script-build` (exit status: 1)
--- stdout
cargo:rerun-if-env-changed=PYO3_CROSS
cargo:rerun-if-env-changed=PYO3_CROSS_LIB_DIR
cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_VERSION
cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_IMPLEMENTATION
cargo:rerun-if-env-changed=PYO3_NO_PYTHON
--- stderr
error: PYO3_CROSS_PYTHON_VERSION or an abi3-py3* feature must be specified when cross-compiling and PYO3_CROSS_LIB_DIR is not set.
warning: build failed, waiting for other jobs to finish...
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.10-64bit" PYO3_PYTHON="/Users/adam/src/mobile-forge/venv3.10/bin/python3.10" PYTHON_SYS_EXECUTABLE="/Users/adam/src/mobile-forge/venv3.10/bin/python3.10" "cargo" "rustc" "--features" "pyo3/extension-module" "--target" "aarch64-apple-ios" "--message-format" "json-render-diagnostics" "--manifest-path" "/Users/adam/src/mobile-forge/build/cp310/pydantic-core/2.14.6/Cargo.toml" "--release" "--lib" "--crate-type" "cdylib"
Do you know where the problem lies? In maturin, in pydantic-core, or both? I'm happy to work on this, I'm just not sure where to go from here.
If rust/maturin/pyo3 can build for iOS, we would absolutely support it. Until then there's nothing we can do.
When I checked last, there were virtually no installs (10s out of 250m) of pydantic v1 on iOS. So it's very rarely used.
You might try the warm build which should work.
@adamfeuer I think the best course of action is to start with an issue / PR in maturin to understand how it should detect the mobile-forge build configuration, and then we can proceed from there to land changes in PyO3 also if needed.
@davidhewitt Ah! Thank you! That is great! If I need to see the source code, is it on the v1 branch?
Do you know how the other iOS users installed pydantic v1?
v1 is pure python. (It was optimized by Cython at build time, but you can skip that if needed.)
I believe the maintenance branch is at https://github.com/pydantic/pydantic/tree/1.10.X-fixes
Note that we don't intend to support v1 much longer, and we only merge occasional bugfixes now.
V1 is on the 1.10.X-fixes branch. Pydantic-core didn't exist until V2.
I've updated my comment above to say: you might try the wasm build, which should work.
As you'll see here, we already have a preview of running pydantic-core with WebAssembly; if that runs for you on iOS, it might be the easiest solution.
@davidhewitt Pure python, perfect. I just built it for iOS using the mobile-forge, so it looks like that will work. I have to rebuild the other stack of software that uses it now.
Re: the 1.10.x branch not being supported soon, I understand. If I can get this working with 1.10.x, it will solve my immediate problem, and then I can devote some effort to fixing maturin for iOS.
@samuelcolvin I'll try the wasm if the pure python 1.10.x doesn't work.
Thank you both so much for your fast help! I'll post here if I get this working, and let you know what's happening with maturin.
@adamfeuer See https://github.com/PyO3/maturin/issues/1742
@messense Thank you for letting me know about this! Wonderful news!
|
gharchive/issue
| 2024-01-22T05:48:09 |
2025-04-01T04:35:35.992282
|
{
"authors": [
"adamfeuer",
"davidhewitt",
"messense",
"samuelcolvin"
],
"repo": "pydantic/pydantic-core",
"url": "https://github.com/pydantic/pydantic-core/issues/1170",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2678827493
|
2.10: Issue with forward annotations and serializers/computed_field (raises: <model> is not fully defined)
Initial Checks
[X] I confirm that I'm using Pydantic V2
Description
MRE from https://github.com/pydantic/pydantic/issues/10905#issuecomment-2489915307 (thanks @logan-markewich):
base.py
from __future__ import annotations
from typing import Any, Dict
from pydantic import BaseModel, model_serializer
class BaseComponent(BaseModel):
@model_serializer(mode="wrap")
def custom_model_dump(self, handler: Any) -> Dict[str, Any]:
return handler(self)
main.py
from base import BaseComponent
class Reproduce2(BaseComponent):
is_base: bool = False
if __name__ == "__main__":
repro = Reproduce2()
Example Code
No response
Python, Pydantic & OS Version
2.10
Looking into this now, thanks for the report.
Hey folks! We've just released v2.10.1 with a fix for this issue!
Let us know if you're still experiencing any difficulties. Thanks!
|
gharchive/issue
| 2024-11-21T09:59:50 |
2025-04-01T04:35:35.996980
|
{
"authors": [
"Viicos",
"sydney-runkle"
],
"repo": "pydantic/pydantic",
"url": "https://github.com/pydantic/pydantic/issues/10919",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1731023465
|
'color' is not recognized by the IDE and type checkers.
Initial Checks
[X] I confirm that I'm using Pydantic V2 installed directly from the main branch, or equivalent
Description
I encounter the issue in both the IDE and using pyright:
error: "color" is not a known member of module "pydantic" (reportGeneralTypeIssues)
It suggests that color is not declared.
Example Code
No response
Python, Pydantic & OS Version
python 3.11
pydantic 2.0a4
You probably mean Color.
@Kludex can you take a look.
Honestly I think we should deprecate Color and suggest people use pydantic-extra-types.
You probably mean Color.
@Kludex can you take a look.
Honestly I think we should deprecate Color and suggest people use pydantic-extra-types.
No, I import it with from pydantic.color import Color
then something else must be wrong, with your setup, from pydantic.color import Color works fine for me on 2.0a4
then something else must be wrong, with your setup, from pydantic.color import Color works fine for me on 2.0a4
There is nothing wrong with my setup; the issue is that color is not declared in the top-level __all__, and Color is not declared in __all__ inside pydantic.color, and hence both the IDE and type checkers complain.
there is no __all__ defined in pydantic.color, so that shouldn't be a problem.
there is no __all__ defined in pydantic.color, so that shouldn't be a problem.
And yet it is. As you can see from the pyright error - the issue it's complaining about is that color is not a known member of pydantic, because it's not declared in the top-level __all__.
You can deprecate it of course, but that's a different thing.
modules shouldn't be defined in the top level __init__.py::__all__ AFAIK.
If you think there's something wrong that can be fixed, please create a PR.
I'm using strict mode on the IDE, and I'm running pyright 1.1.310 and I can't reproduce with neither. How can I reproduce it with pyright?
I'm using strict mode on the IDE, and I'm running pyright 1.1.310 and I can't reproduce with neither. How can I reproduce it with pyright?
you can see it here: https://github.com/litestar-org/polyfactory/pull/222
Ok, this is my bad - I didn't pin to v2 in my linting setup. Apologies.
No problem. Thanks for coming back and letting us know. 🙏
|
gharchive/issue
| 2023-05-29T16:43:44 |
2025-04-01T04:35:36.005925
|
{
"authors": [
"Goldziher",
"Kludex",
"samuelcolvin"
],
"repo": "pydantic/pydantic",
"url": "https://github.com/pydantic/pydantic/issues/5924",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1918081963
|
Fix Cython build
fixes https://github.com/pydantic/pydantic/issues/7689
Selected Reviewer: @samuelcolvin
Please review
closing as no updates on this. Happy to reconsider if we get more reports and more details.
PiWheels removed the broken wheel builds, and Home Assistant patched their wheel builds (https://github.com/home-assistant/core/pull/101976), so my understanding is that the original reporters have effectively stopped running into this issue. I would think the root of the issue still needs to be fixed, however?
|
gharchive/pull-request
| 2023-09-28T18:46:11 |
2025-04-01T04:35:36.008264
|
{
"authors": [
"cp2004",
"hramezani",
"samuelcolvin"
],
"repo": "pydantic/pydantic",
"url": "https://github.com/pydantic/pydantic/pull/7696",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
55254994
|
Series.str should only be defined for strings, not all Series with dtype=object
#9322 will make Series.str raise an exception if it is accessed on Series instances with non-object dtype. In principle, the exception should really be raised for any non-strictly string-like data, but that's not practical until pandas has a true string dtype to use (currently we abuse np.object_ for this purpose).
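For illustration, a minimal sketch of the requested behaviour (the exact exception type and message are up to the implementation):
import pandas as pd
s = pd.Series([1, 2, 3])  # int64 dtype, clearly not string-like
s.str.upper()  # expected to raise an AttributeError immediately rather than fail later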
closing in favor of #13877
|
gharchive/issue
| 2015-01-23T08:04:35 |
2025-04-01T04:35:36.030040
|
{
"authors": [
"jreback",
"shoyer"
],
"repo": "pydata/pandas",
"url": "https://github.com/pydata/pandas/issues/9343",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
173634787
|
DEPR: Deprecate pandas.core.datetools
Title is self-explanatory. Closes #14094.
Current coverage is 85.27% (diff: 92.45%)
Merging #14105 into master will increase coverage by <.01%
@@ master #14105 diff @@
==========================================
Files 139 139
Lines 50502 50551 +49
Methods 0 0
Messages 0 0
Branches 0 0
==========================================
+ Hits 43063 43107 +44
- Misses 7439 7444 +5
Partials 0 0
Powered by Codecov. Last update a0151a7...1cf89fc
Can you leave out the changes in the benchmarks? (Avoiding conflicts with my PR, I will incorporate the changes there)
@jorisvandenbossche : Ah, I didn't see #14099. Sure thing. Once I get Travis to pass, I'll remove the commit. In the meantime, I'll add a reminder to yours.
@jreback , @jorisvandenbossche : Travis is passing. Ready to merge if there are no other concerns.
need to add __dir__ to _DeprecatedModule to get something reasonable, IOW the list of removals should be there and the namespace imported from the alts.
In [11]: pd.datetools.alts
Out[11]:
frozenset({'pandas.tseries.frequencies',
'pandas.tseries.offsets',
'pandas.tseries.tools'})
In [12]: pd.datetools.removals
Out[12]:
frozenset({'bday',
'bmonthBegin',
'bmonthEnd',
'bquarterEnd',
'businessDay',
'byearEnd',
'cbmonthBegin',
'cbmonthEnd',
'cday',
'customBusinessDay',
'customBusinessMonthBegin',
'customBusinessMonthEnd',
'day',
'monthEnd',
'quarterEnd',
'week',
'yearBegin',
'yearEnd'})
further I am not sure this actually works
In [1]: dir(pandas.datetools)
Out[1]:
['__class__',
'__delattr__',
'__dict__',
'__doc__',
'__format__',
'__getattr__',
'__getattribute__',
'__hash__',
'__init__',
'__module__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'alts',
'deprmod',
'removals',
'self_dir']
In [17]: from pandas.datetools import to_datetime
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-17-014abe5066ec> in <module>()
----> 1 from pandas.datetools import to_datetime
ImportError: No module named datetools
something is wrong
pls add some tests to verify the actual deprecations themselves (e.g. that you can import the previous things, or a sample of them)
@jreback : Have you tried running that command on master? It doesn't work either.
@jreback : I don't want to overload __dir__ as you described because then I can't differentiate methods that are a part of the class itself and methods that are meant to be in removals OR alts. That's the purpose of my first check in __getattr__.
@jreback : Don't fully understand your comments about testing. Aren't I doing that already?
In [2]: pd.__version__
Out[2]: '0.18.1+425.gd26363b'
In [3]: dir(pd.datetools)[0:10]
Out[3]:
['ABCDataFrame',
'ABCIndexClass',
'ABCSeries',
'AmbiguousTimeError',
'BDay',
'BMonthBegin',
'BMonthEnd',
'BQuarterBegin',
'BQuarterEnd',
'BYearBegin']
@gfyoung you can easily override __dir__; you know what's in alts/removals (well, you know as soon as you introspect them), which you can do lazily, e.g. when it's needed
@jreback : Ah, that's a fair point. I can just first check if it's in dir and then introspect.
yep. The ideal thing is to replicate as much as possible the existing behavior (and just show a deprecation warning).
@gfyoung The consequence of using a frozenset for the alts is that they no longer have a preferred order. For example, you now get the message Please use pandas.tseries.frequencies.xxx instead. for many of the offsets, while in our docs / internals we import those from pandas.tseries.offsets
@jorisvandenbossche : That isn't a property of frozenset in particular, but rather of set in general. set is unordered by definition. However, your point about the "wrong" import being used is indicative of namespace pollution within each module.
What would you suggest I do then?
I didn't want to imply it was a consequence of the 'frozen' aspect of the sets :-). But initially you used a list I think? And performance was the reason to change it to set?
Personally, my preference in this case would go for correctness rather than performance (although what is correct is also debatable ... as you correctly noted, the namespaces are polluted between frequencies and offsets). alts is in this case a list/set of 3 elements; the in check for that will not be that critical here, IMO.
@jorisvandenbossche : It was a general performance consideration that @jreback brought up. I originally used a list, but then he pointed out that set is faster.
While I do see your point about the imports and how they should come from specific places to be consistent with documentation, IMO the code is correct as is and shouldn't have to tailor to the namespace pollution that I pointed out.
@gfyoung it's a bit more work, but you can figure out where the imports are actually from, e.g. you can actually do the from pandas.tseries.tools import * (for each of the removals), then create a mapping from the attr to the import. I know it's a pain, but I think it's necessary.
I actually did something like this (for some code I am working on which is old). I know where things are, and it still took some time / trial and error to figure out the correct imports :)
not to mention the monthEnd = MonthEnd() is really odd (though it IS kind of like a singleton)
@jreback : from ... import * won't help since it still will import all other namespaces that are polluting it IINM.
I know it is not generic, but just using a list for alts instead of set works for this case. I don't see the need to make it more complicated than that.
Actually, can't we get the information from the object itself? (of course in this case where we want the full path it will give the right thing (frequencies or offsets), it will also not work generically for all imports where more top-level paths are used).
In [23]: getattr(pd.datetools, 'BDay')
Out[23]: pandas.tseries.offsets.BusinessDay
gives you that it should be 'offsets' and not frequencies
@jorisvandenbossche : How do you extract the full class path from this?
>>> getattr(pd.datetools, 'BDay')
<class 'pandas.tseries.offsets.BusinessDay'>
Perhaps there's an obvious way, but I don't see one.
If there isn't, then we might need to switch back to a list. Or an OrderedSet cookbook here :smile
__module__ gives me the path:
In [40]: obj = getattr(pd.datetools, 'BDay')
In [41]: obj.__module__
Out[41]: 'pandas.tseries.offsets'
@jorisvandenbossche : That was indeed an obvious way. Completely escaped my mind. :smile:
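A rough sketch of that idea, for the record (the helper name and module list are illustrative, not the code that ended up in the PR; module paths reflect the pandas layout at the time):
import importlib

def _find_module(attr_name, alt_modules):
    # resolve a deprecated attribute and report the module it should be
    # imported from, using the resolved object's __module__ as discussed above
    for modname in alt_modules:
        mod = importlib.import_module(modname)
        if hasattr(mod, attr_name):
            obj = getattr(mod, attr_name)
            return getattr(obj, '__module__', modname)
    return None

_find_module('BDay', ['pandas.tseries.offsets', 'pandas.tseries.frequencies'])
# -> 'pandas.tseries.offsets', i.e. the deprecation message can point at offsets rather than frequencies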
I am going to merge this, we can fix the actual depr warnings with correct modules in a follow-up PR, but then at least the deprecations are included in the rc
|
gharchive/pull-request
| 2016-08-28T07:54:54 |
2025-04-01T04:35:36.047185
|
{
"authors": [
"codecov-io",
"gfyoung",
"jorisvandenbossche",
"jreback"
],
"repo": "pydata/pandas",
"url": "https://github.com/pydata/pandas/pull/14105",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2054177363
|
pydata-sphinx-theme yields an error when deploying on gh pages
Describe the bug
Error:
When running the Sphinx build of the generated documentation with GitHub Actions, I get the following error in the workflow, specifically in the build job:
...
generating indices... genindex done
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
Extension error (pydata_sphinx_theme.pygment):
Handler <function overwrite_pygments_css at 0x7f5ce13e1550> for event 'build-finished' threw an exception (exception: 'HtmlFormatter' object has no attribute 'get_linenos_style_defs')
make: *** [Makefile:20: html] Error 2
[sphinx-action] Starting sphinx-action build.
Running:
Building docs in docs/
[sphinx-action] Running: ['make', 'html', '-e']
Traceback (most recent call last):
File "/entrypoint.py", line 22, in <module>
action.build_all_docs(github_env, [os.environ.get("INPUT_DOCS-FOLDER")])
File "/sphinx_action/action.py", line [167](https://github.com/aparmendariz/mcf_docs/actions/runs/7277686492/job/19830251260#step:6:168), in build_all_docs
raise RuntimeError("Build failed")
RuntimeError: Build failed
[sphinx-action] Build failed with 0 warnings
I am not sure how to proceed or what changes to make in the docs to solve the issue.
Any guidance is much appreciated since I am new to sphinx.
How to Reproduce
Minimal method:
The index.rst looks like this:
.. toctree::
:maxdepth: 2
:caption: Contents:
:hidden:
getting_started.rst
user_guide.rst
algorithm_reference.rst
python_api.rst
changelog.rst
and the conf.py looks like this:
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
import os
import sys
sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('../..'))
#sys.path.insert(0, os.path.abspath('../../mcf'))
# -- Project information -----------------------------------------------------
project = 'mcf 0.4.4'
copyright = '2023, ML'
author = 'ML'
# The full version, including alpha/beta/rc tags
#release = '0.4.4'
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.coverage',
'sphinx.ext.intersphinx',
'sphinx.ext.mathjax',
'sphinx.ext.napoleon',
'sphinx_copybutton',
'sphinx.ext.githubpages',
'sphinx.ext.doctest',
]
napoleon_use_ivar = False
#This will allow your docs to import the example code without requiring those modules be installed
autodoc_mock_imports = ['bs4', 'requests']
autoclass_content = 'class'
autosummary_generate = True
# Custom sidebar templates, maps page names to templates.
# html_sidebars = {
# "index": [
# "sidebar_versions.html",
# ]}
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
source_suffix = ['.rst', '.md']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'pydata_sphinx_theme'
html_sidebars = {
"index": ["page-toc"],
}
html_theme_options = {
"show_nav_level": 3,
"navigation_depth": 4,
"show_toc_level": 4,
"show_version_warning_banner": True
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
OR
git clone method:
$ git clone https://aparmendariz.github.io/mcf_docs
$ cd mcf_docs
$ pip install -r requirement.txt
$ cd docs
$ make html
Environment Information
sphinx==5.0.2
pydata-sphinx-theme==0.14.4
Sphinx extensions
I deleted some extensions:
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
]
and got the same error.
Additional context
https://aparmendariz.github.io/mcf_docs/
Do you somehow end up with an older pygments package?
#778 looks related.
Agreed, a quick search on HtmlFormatter' object has no attribute 'get_linenos_style_defs') suggested that you probably have pygments < 2.7.0
If indeed that is the problem, we should make sure our requirements set an appropriate min version for pygments
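A quick way to confirm that diagnosis in the same environment that runs make html (on the pygments < 2.7.0 versions identified above, the attribute is simply missing):
import pygments
from pygments.formatters import HtmlFormatter
print(pygments.__version__)
print(hasattr(HtmlFormatter(), "get_linenos_style_defs"))  # False on the old pygments that triggers this error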
The minimum version came in via #778, but maybe another, older pygments version is found due to e.g. tinkering with sys.path/PYTHONPATH?
ah indeed (sorry I was responding on my mobile, didn't click through to #778)
I'll assume that an old version of pygments is the problem, but please feel free to reopen @aparmendariz if updating the pygments version doesn't fix this for you!
|
gharchive/issue
| 2023-12-22T16:37:45 |
2025-04-01T04:35:36.055410
|
{
"authors": [
"aparmendariz",
"cmarqu",
"drammock"
],
"repo": "pydata/pydata-sphinx-theme",
"url": "https://github.com/pydata/pydata-sphinx-theme/issues/1608",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1411125991
|
FIX: table width of nbsphinx dataframes
Fix #1017
I thought initially that the problem was solved by #954, but it wasn't. nbsphinx is not a very nice neighbour, as it's injecting its CSS within the article tag instead of the header, giving it a super high priority over custom and/or our theme CSS.
Instead of setting an !important statement, I simply copy/pasted the complete selector + html, making it more specific and thus giving it priority.
We cannot test it in our theme documentation as we are using mystNB but @seanlaw confirmed it worked from his side.
If this works for @seanlaw then I say we just merge it in and can keep iterating if folks report more bugs in the future 👍
|
gharchive/pull-request
| 2022-10-17T08:22:20 |
2025-04-01T04:35:36.057814
|
{
"authors": [
"12rambau",
"choldgraf"
],
"repo": "pydata/pydata-sphinx-theme",
"url": "https://github.com/pydata/pydata-sphinx-theme/pull/1018",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2094469827
|
Add "main" role to article in docs_body
Fixes https://github.com/pydata/pydata-sphinx-theme/issues/1676
As mentioned in the above issue, I'm not totally certain that this is the appropriate place to put role="main", but it does seem to help.
Closing in favor of https://github.com/pydata/pydata-sphinx-theme/pull/1678
|
gharchive/pull-request
| 2024-01-22T18:06:59 |
2025-04-01T04:35:36.059635
|
{
"authors": [
"michael-wisely-gravwell"
],
"repo": "pydata/pydata-sphinx-theme",
"url": "https://github.com/pydata/pydata-sphinx-theme/pull/1677",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
237710101
|
Dataset.to_dataframe loads dask arrays into memory
to_dataframe should return a Dask Dataframe, instead of eagerly loading data. This is probably pretty easy to implement (thanks to dask), but will require some care to ensure that no intermediate results (or indices!) are loaded. We should also check the to_series method.
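One possible shape such a lazy implementation could take, sketched with public dask APIs (lazy_to_dataframe is a hypothetical helper; it ignores the index/coords handling that this issue calls out as the tricky part, and assumes all data variables share the same dimensions and a common dtype):
import dask.array as da
import dask.dataframe as dd

def lazy_to_dataframe(ds):
    # flatten each data variable lazily and assemble a dask DataFrame without computing anything
    names = list(ds.data_vars)
    columns = [ds[name].data.reshape(-1) for name in names]  # dask arrays stay lazy
    stacked = da.stack(columns, axis=1)
    return dd.from_dask_array(stacked, columns=names)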
Today, I find myself in need of exactly this functionality. Assuming no one else is working on it, I'll take a shot at fixing this.
Closing this in favor of https://github.com/pydata/xarray/issues/1093
|
gharchive/issue
| 2017-06-22T01:46:30 |
2025-04-01T04:35:36.061908
|
{
"authors": [
"Zac-HD",
"jmunroe",
"shoyer"
],
"repo": "pydata/xarray",
"url": "https://github.com/pydata/xarray/issues/1462",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
454677926
|
Assigning values to a subset of a dataset
Hi, can somebody tell me what is the "correct" way to manipulate a subset of the data contained in a Dataset?
Consider the following example:
import numpy as np
import xarray as xr
shape = (3, 2)
da1 = xr.DataArray(np.zeros(shape), dims=('x', 'y'), coords=dict(x=[1, 2, 3], y=[4, 5]), name='var1')
da2 = xr.DataArray(np.zeros(shape), dims=('x', 'y'), coords=dict(x=[1, 2, 3], y=[4, 5]), name='var2')
I can easily change the value of variable 1 at a given coordinate in the first DataArray using the following syntax:
da1.loc[dict(x=1, y=4)] = 1
However, if I merge both DataArrays into a single Dataset and want to change both variables at the same time, there seems to be no straightforward solution:
ds = xr.merge([da1, da2])
ds.loc[dict(x=1, y=4)] = ... <-- what to write here?
The only solution I could come up with is to modify the two values separately, but this is neither very elegant nor scales with the number of variables:
ds['var1'].loc[dict(x=1, y=4)] = 2
ds['var2'].loc[dict(x=1, y=4)] = 3
All I could find in the docs about this issue is:
Using indexing to assign values to a subset of dataset (e.g., ds[dict(space=0)] = 1) is not yet supported.
If not by indexing, what other (more compact) way exists? A potential solution might be to create a separate Dataset and then use the update method, but this seems overly complicated, too.
One easy work around is to loop over the variables in a Dataset, e.g.,
for da in ds.values():
da.loc[dict(x=1, y=3)] = 1
It's a little ugly but it works.
I don't think there's a more compact way to do this in general. In some cases the where() function/method can be a good option, e.g., ds.where((ds.x == 1) & (ds.y == 4), 1).
"not yet supported" basically means that there's no reason why it isn't supported, other than that nobody has bothered to implement it yet. Xarray tends to get new features when users implement them :).
Thanks for the quick reply! I really like the xarray package and hope that someone will add this functionality in the future since it would significantly improve the usability in certain situations. I am unfortunately not yet experienced enough with the package to take care of it myself - perhaps at a later point in time... For the time being, I will therefore stick with the above solutions ;)
|
gharchive/issue
| 2019-06-11T13:03:16 |
2025-04-01T04:35:36.066379
|
{
"authors": [
"AdrianSosic",
"shoyer"
],
"repo": "pydata/xarray",
"url": "https://github.com/pydata/xarray/issues/3015",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1838282462
|
module 'xarray' has no attribute 'Coordinates'
What happened?
I have made some documentation changes to the user-guide/terminology.rst file of xarray. When doing make html, this is the error message that appears. This problem has occurred in two different folders, while using the same file and command.
WARNING: [autosummary] failed to import xarray.Coordinates.
Possible hints:
ImportError:
ModuleNotFoundError: No module named 'xarray.Coordinates'
AttributeError: module 'xarray' has no attribute 'Coordinates'
What did you expect to happen?
No response
Minimal Complete Verifiable Example
No response
MVCE confirmation
[ ] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
[X] Complete example — the example is self-contained, including all data and the text of any traceback.
[ ] Verifiable example — the example copy & pastes into an IPython prompt or Binder notebook, returning the result.
[ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
Relevant log output
No response
Anything else we need to know?
No response
Environment
from xarray import Coordinates should work on the main branch.
Perhaps you are building the documentation locally using main but still with an older version of Xarray installed in your environment?
My main branch is up to date with origin-main. Do I need to still do this?
AFAIK the make html command builds the documentation but doesn't (re)install xarray in your local Python environment, which is needed to execute the examples in the documentation, etc.
Make sure to follow these steps before running make html.
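A quick sanity check you can run in the same environment that builds the docs (exact values depend on your setup, but the last line should be True once the main branch is actually installed):
import xarray as xr
print(xr.__version__)              # should correspond to main, not an older release
print(xr.__file__)                 # should point into your local checkout
print(hasattr(xr, "Coordinates"))  # True on main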
I have done this, but when I run make html not all the files in doc are built. In the _build folder inside doc I could only see some of the doc files getting built. Let's say there are 10 .rst files in the doc folder, but after running make html only 2 or 3 files are built.
I'm afraid I cannot help you much further as I cannot reproduce it. Try running make clean before make html. You could also check the full logs of the last build on readthedocs (which runs successfully) to see if it helps: https://readthedocs.org/projects/xray/builds/21535646/
I'm going to close this issue as I don't see any bug on the xarray side, but feel free to re-open it or comment if the problem persists.
|
gharchive/issue
| 2023-08-06T17:19:43 |
2025-04-01T04:35:36.074850
|
{
"authors": [
"benbovy",
"harshitha1201"
],
"repo": "pydata/xarray",
"url": "https://github.com/pydata/xarray/issues/8050",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
945560052
|
Fix gen_cluster failures; dask_version tweaks
fixes one of the issues reported in #5600
distributed.utils_test.gen_cluster no longer accepts timeout=None for the sake of robustness
deleted ancient dask backwards compatibility code
clean up code around dask.__version__
Hello @crusaderky! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:
In the file xarray/tests/test_computation.py:
Line 4:1: F401 'distutils.version.LooseVersion' imported but unused
In the file xarray/tests/test_formatting_html.py:
Line 1:1: F401 'distutils.version.LooseVersion' imported but unused
You'll need a commit with [test-upstream] to run the upstream-dev CI
the remaining error is the other one reported in #5600 so this should be ready to merge. Unless you want to add a "internals" whats-new.rst entry?
Unless you want to add a "internals" whats-new.rst entry?
I think it's overkill?
|
gharchive/pull-request
| 2021-07-15T16:26:21 |
2025-04-01T04:35:36.080641
|
{
"authors": [
"crusaderky",
"dcherian",
"keewis",
"pep8speaks"
],
"repo": "pydata/xarray",
"url": "https://github.com/pydata/xarray/pull/5610",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2214644473
|
Engine: Do not apply setter after unmount
This PR avoids applying set effects on a component once it has been deleted. Previously, this could lead to re-renders triggered after components had been unmounted, when set operations happened on cleanup, causing unexpected side effects.
I'm not completely sure we can be sure that once a component has been deleted from the component tree it will never be added to the component tree again. For example, in the _recycle_children method we might recycle an old version of a component when rendering a new one. This could then end up in the situation where the new component basically cannot use the use_state hook.
I don't know off the top of my head how to best solve this issue right now but I think this could be almost as tricky to debug as the current behavior of renders triggered for deleted components.
I was not sure if it could be troublesome as well. We cannot use the element in self._component_tree because sometimes the tree is empty and it is expected to run the setter. This is the case for test_render_view_replacement in the test suite.
So if we cannot check this, I can only think of tagging the element itself as deleted somehow. But if that element can be reused, then we should un-mark it again in that case.
It's already impossible for a setter to be called after a component is deleted, because the setters for a component are deleted too.
https://github.com/pyedifice/pyedifice/blob/599dd4f784c00722659d87d8458fdc4e2296a150/edifice/engine.py#L1483-L1485
Have you seen a case where you believe a setter is called after the component is deleted?
I'm not completely sure we can be sure that once a component has been deleted from the component tree it will never be added to the component tree again. For example, in the _recycle_children method we might recycle an old version of a component when rendering a new one.
To "delete" a component means to call _delete_component on it.
https://github.com/pyedifice/pyedifice/blob/f0684a015d72d9ed693eb5b328315dc0b30a9513/edifice/engine.py#L1446
It is true that after a component has been deleted then it will never again be added to the component tree.
_recycle_children will never call _delete_component on a component which will be recycled.
If anything that I said here is not true then that is a bug.
Thought about this more and I'm inclined to merge it. I think this behavior is good and I think we can explain this in the documentation without making it too complicated.
|
gharchive/pull-request
| 2024-03-29T04:47:23 |
2025-04-01T04:35:36.095715
|
{
"authors": [
"JoanCoCo",
"considerate",
"jamesdbrock"
],
"repo": "pyedifice/pyedifice",
"url": "https://github.com/pyedifice/pyedifice/pull/130",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1070557463
|
Update handling of FFT normalization in pyfar
General
pyfar version: 0.2.3
[ ] Concepts: arithmetic (include, after #243 ),
[ ] FFT normalization (link to arithmetic, after #243 )
[ ] regulated inversion
[ ] arithmetics
I tried not to allow 1 / power-signal in the regulated inversion; however, there is a problem. The regulated inversion is used as part of the deconvolution, where we can pass two power signals (fine with our rules for arithmetics). The second power signal is then passed to the regulated inversion, which would throw a value error, which we do not want. I would thus suggest keeping the regulated inversion as an exception to the 1 / power-signal rule.
|
gharchive/issue
| 2021-12-03T12:48:30 |
2025-04-01T04:35:36.133113
|
{
"authors": [
"f-brinkmann"
],
"repo": "pyfar/pyfar",
"url": "https://github.com/pyfar/pyfar/issues/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
991058365
|
Refactor colorbars
Which issue(s) are closed by this pull request?
Closes #208
Changes proposed in this pull request:
merged private functions plot._line._spectrogram and plot._line._spectrogram_cb
added optional parameter to control if colorbar is plotted
extended existing ax parameter to control where colorbar is plotted
minor structural improvements
added testing for colorbar options
Next steps #210, #214, #215, and #200
Nice improvement! Only minor comments. What do you think about an example on how to use the returned colorbar? Maybe in a configuration with several subplots?
Adding an example is a good idea - I will keep that in mind and would suggest adding it to pyfar.plot.__init__.py. However, we have to wait until we have decided on #214. At the moment, complicated examples like the one you suggested are squashed by always using Matplotlib's tight layout.
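For the record, a sketch of what such an example could look like once #214 is settled (the [ax, cax] convention and the signal used are assumptions based on this PR's description, not final API):
import matplotlib.pyplot as plt
import pyfar as pf

signal = pf.signals.impulse(44100)
fig, (ax, cax) = plt.subplots(1, 2, gridspec_kw={"width_ratios": [20, 1]})
# assumed usage: pass the plot axis and the colorbar axis via the extended ax parameter
pf.plot.spectrogram(signal, ax=[ax, cax])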
|
gharchive/pull-request
| 2021-09-08T12:01:03 |
2025-04-01T04:35:36.136482
|
{
"authors": [
"f-brinkmann"
],
"repo": "pyfar/pyfar",
"url": "https://github.com/pyfar/pyfar/pull/211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2021755088
|
Renamed text_tokenized_cfg into col_to_text_tokenized_cfg
Renamed text_tokenized_cfg into col_to_text_tokenized_cfg
The next step is to edit LinearModelEncoder and replace the current model into col_to_model?
Also please do not forget to update example and tutorial documentation accordingly. Thanks!
|
gharchive/pull-request
| 2023-12-02T00:09:08 |
2025-04-01T04:35:36.138228
|
{
"authors": [
"weihua916",
"zechengz"
],
"repo": "pyg-team/pytorch-frame",
"url": "https://github.com/pyg-team/pytorch-frame/pull/257",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
512013018
|
Use pyganja
Builds on top of #166, pushing now to test it works on readthedocs
Looks cool, but the build is broken? https://external-builds.readthedocs.io/html/clifford/167/Example 1 Interpolating Conformal Objects.html
I think that's an upstream readthedocs bug. It works locally, with the pyganja changes.
Going to just merge this, its doc-only, and the ReadTheDocs CI stuff isn't mature enough to work for these filenames. Let's just see if it works.
Crap:
ModuleNotFoundError: No module named 'pyganja'
|
gharchive/pull-request
| 2019-10-24T15:16:25 |
2025-04-01T04:35:36.140332
|
{
"authors": [
"eric-wieser",
"hugohadfield"
],
"repo": "pygae/clifford",
"url": "https://github.com/pygae/clifford/pull/167",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
487790023
|
make mapfile fails
(Original issue 784 created by icholy on 2012-06-19T04:20:35.804903+00:00)
in pygments/formatters/_mapping.py there is an import from pygments.util on line 16. However, this path is not accessible until after the sys.path.insert call on line 56. Also, all of the generated formatter imports have the same problem. If I set FORMATTERS = {} then I get an error coming from __init__.py on line 21 about fcls not being defined.
The only way I can get it to work is by sticking
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
at the very top of _mapping.py
(Original issue 784 last updated on 2014-05-19T22:58:10.456031+00:00)
(Issue automatically closed due to status in Bitbucket: resolved)
(Original comment by icholy on 2012-06-19T13:29:01.723947+00:00)
I think that the formatter loading should be refactored to work more like the lexers.
(Original comment by tshatch on 2012-08-28T15:48:12.812671+00:00)
In the meantime, this should work if Pygments is otherwise on your path, via setup.py develop, setup.py install, or PYTHONPATH=...
(Original comment by tshatch on 2014-05-17T01:23:12.857961+00:00)
I'll try to get this in before the next version (the trivial fix is trivial)
(Original comment by tshatch on 2014-05-19T22:58:10.449855+00:00)
Make the formatters _mapping.py work like lexers wrt. PYTHONPATH
Resolves #784
→ <<cset 27fed8103aec>>
(Original comment by tshatch on 2014-05-19T22:58:41.432521+00:00)
Issue #992 was marked as a duplicate of this issue.
|
gharchive/issue
| 2019-08-31T16:59:57 |
2025-04-01T04:35:36.148592
|
{
"authors": [
"Anteru"
],
"repo": "pygments/pygments-migration-test",
"url": "https://github.com/pygments/pygments-migration-test/issues/491",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
487789252
|
new lexer request: spip (with patch)
(Original issue 703 created by Maieul on 2011-09-19T15:54:59.339560+00:00)
I made a Lexer for the language of the french CMS SPIP.
I propose it : https://github.com/maieul/spip-pygments.
Must I push it on the hg repository ?
(Original comment by Maieul on 2013-11-16T19:04:16.862957+00:00)
Hi, some news ?
(Original comment by tshatch on 2016-06-01T06:59:54.794065+00:00)
(This issue got overlooked for far too long.) If you're still interested in having it included, I would appreciate a link to the part of SPIP (either code or documentation) that defines this format. Do the files really use .html as the extension?
(Original comment by camilstaps on 2016-06-01T07:14:58.793770+00:00)
Syntax for loops ('boucles'): http://www.spip.net/en_article2042.html
Syntax for tags ('balises'): http://www.spip.net/en_article2055.html
In general, SPIP is ill-documented in English (French docs are somewhat better, still not great) and it can be a pain to work with. (There are also several issues in the code, which has been moved to http://zone.spip.org/trac/spip-zone/browser/contribs/pygments. I can better comment on those when a pull request has been created.) However, in general this looks like a nice addition.
(Original comment by Maieul on 2016-06-01T07:59:09.112736+00:00)
Yes, it really uses .html as the extension. The two links from Camil explain the syntax and cover 90% of the code. But for developing this module, I used http://programmer.spip.net/ and all the fragments of code there were well formatted.
I'm closing all requests for lexers/formatters without a patch/PR.
|
gharchive/issue
| 2019-08-31T16:52:12 |
2025-04-01T04:35:36.154199
|
{
"authors": [
"Anteru",
"birkenfeld"
],
"repo": "pygments/pygments",
"url": "https://github.com/pygments/pygments/issues/410",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
36232959
|
Python 3.4
Right now there are 3 dependencies blocking us from moving the site to Python 3.4:
[x] django-gravatar (issue #30)
[x] django-markitup
[x] Move to Django 1.7 (issue #41)
[x] #38
[x] #41
[x] Port the site to Python 3
All the remaining items are done. Only the deployment work is left.
https://github.com/pyistanbul/website/commit/8f9c1ffc25fed2c768cbb5b071480f83f999d174
|
gharchive/issue
| 2014-06-21T22:06:19 |
2025-04-01T04:35:36.187647
|
{
"authors": [
"berkerpeksag"
],
"repo": "pyistanbul/website",
"url": "https://github.com/pyistanbul/website/issues/31",
"license": "WTFPL",
"license_type": "permissive",
"license_source": "github-api"
}
|
2371520053
|
Got different inference results when using ort
Hi team,
Really nice cargo! It has been amazing since I used it.
However, for now, I am trying to rewrite a python service that takes a user input, use embedding models to vectorize it, then return the vectorized text.
The model I am using is bge-m3-onnx. In my python service, it returns perfect result and the vector search was good. However, the result happens to be different when using ort. The vector search becomes less accurate.
After a quick investigation, I found that the onnxruntime used by ort is older than the python package which is 1.17.3. I had tried disabled the graph optimizations but no avail. For now, the difference between the python code and the rust one is that the python code uses ndarray to manipulate data whereas the rust version uses vec. I am not sure if this could affect the results. Perplexing.
Any idea on this subject? Really great project tho!
Best,
How different are the results? ± $1*10^{-4}$ is typical, but if it's higher that's definitely a cause for concern. Double-check to make sure your preprocessing in Rust is the same as in Python; breaking down the preprocessing into steps and comparing the results at each step can help to identify issues.
How different are the results? ± 1∗10−4 is typical, but if it's higher that's definitely a cause for concern. Double-check to make sure your preprocessing in Rust is the same as in Python; breaking down the preprocessing into steps and comparing the results at each step can help to identify issues.
Great thanks to your information provided.
I tracked it down to the Tokenizers crate provided by Hugging Face. I ended up finding out that the Rust Tokenizers crate needs the "add_special_tokens" option enabled for the tokenization to behave the same as in Python; otherwise it outputs incorrect tokens and, therefore, wrong outputs.
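For anyone hitting the same thing, the difference is easy to see from the Python side of the same tokenizers library (the tokenizer.json path is a placeholder):
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")  # placeholder path to the model's tokenizer file
with_special = tok.encode("hello world", add_special_tokens=True).ids
without_special = tok.encode("hello world", add_special_tokens=False).ids
print(with_special)     # includes the special token ids, matching the Python service
print(without_special)  # what you get if add_special_tokens is left off on the Rust side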
Again, many thanks for your crate. Really amazing!
|
gharchive/issue
| 2024-06-25T02:54:31 |
2025-04-01T04:35:36.191882
|
{
"authors": [
"AspadaX",
"decahedron1"
],
"repo": "pykeio/ort",
"url": "https://github.com/pykeio/ort/issues/219",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1589726464
|
Features for the release announcement blogpost
The last commit of this PR is not supposed to be merged, just included to facilitate reviewing!!!
TODO:
[x] Change CLV section to show BG/NBD model instead and probability customer is still alive
[x] Perhaps combine with Gamma-Gamma model to show full CLV estimation - I think it would be too long
@juanitorduz, @twiecki I addressed the notebook comments. WDYT?
Just added a very tiny comment! Otherwise looks very nice :D
Reminder: The notebook is not supposed to be merged, only PRs that directly support it. Will remove the last commits once we agree the announcement blogpost looks alright.
|
gharchive/pull-request
| 2023-02-17T17:33:16 |
2025-04-01T04:35:36.215180
|
{
"authors": [
"juanitorduz",
"ricardoV94"
],
"repo": "pymc-labs/pymc-marketing",
"url": "https://github.com/pymc-labs/pymc-marketing/pull/163",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
319030900
|
Custom Index No Longer Honored since 11.10.0
Custom private indexes no longer seem to be honored as of version 11.10.0 (and the issue is still present in 11.10.1). Downgrading back to 11.9.0 resolves the issue.
The specific package causing me trouble is grpcio~=1.10.0. Our private repository has 1.10.0 but 1.10.1 from pypi is being preferred.
$ python -m pipenv.help output
Pipenv version: '11.10.1'
Pipenv location: '/usr/lib/python3.6/site-packages/pipenv'
Python location: '/usr/bin/python'
Other Python installations in PATH:
2.7: /usr/bin/python2.7
2.7: /usr/bin/python2.7
2.7: /bin/python2.7
3.4: /usr/bin/python3.4m
3.4: /usr/bin/python3.4
3.4: /bin/python3.4
3.6: /usr/bin/python3.6m
3.6: /usr/bin/python3.6
3.6: /bin/python3.6
3.6.5: /usr/bin/python
3.6.5: /bin/python
2.7.14: /usr/bin/python2
2.7.14: /bin/python2
3.6.5: /usr/bin/python3
3.6.5: /bin/python3
PEP 508 Information:
{'implementation_name': 'cpython',
'implementation_version': '3.6.5',
'os_name': 'posix',
'platform_machine': 'x86_64',
'platform_python_implementation': 'CPython',
'platform_release': '4.16.5-1-ARCH',
'platform_system': 'Linux',
'platform_version': '#1 SMP PREEMPT Thu Apr 26 16:53:40 UTC 2018',
'python_full_version': '3.6.5',
'python_version': '3.6',
'sys_platform': 'linux'}
System environment variables:
LANG
DISPLAY
TERMINAL
XDG_VTNR
XDG_SESSION_ID
XDG_GREETER_DATA_DIR
USER
DESKTOP_SESSION
PWD
HOME
XDG_SESSION_TYPE
XDG_SESSION_DESKTOP
GTK_MODULES
MAIL
SHELL
XDG_SEAT_PATH
XDG_CURRENT_DESKTOP
SHLVL
XDG_SEAT
GDMSESSION
LOGNAME
DBUS_SESSION_BUS_ADDRESS
XDG_RUNTIME_DIR
XAUTHORITY
XDG_SESSION_PATH
PATH
TERMINOLOGY
TERM
XTERM_256_COLORS
OLDPWD
ZSH
PAGER
LESS
LC_CTYPE
LSCOLORS
LS_COLORS
EDITOR
PIPENV_SHELL_FANCY
GOPATH
SSH_AUTH_SOCK
_
PYTHONDONTWRITEBYTECODE
PIP_PYTHON_PATH
Pipenv–specific environment variables:
PIPENV_SHELL_FANCY: true
Debug–specific environment variables:
PATH: /home/matt/.Go/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
SHELL: /bin/zsh
EDITOR: vim
LANG: en_US.UTF-8
PWD: /home/matt/Repositories/Arroyo/Inflow/treasury
Contents of Pipfile ('/home/matt/Repositories/Arroyo/Inflow/treasury/Pipfile'):
[[source]]
url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"
[[source]]
url = "https://<redacted>:<redacted>@pypi.arroyo.io/simple"
verify_ssl = true
name = "arroyo"
[packages]
grpcio = {index="arroyo", version="~=1.10.0"}
arroyo-inflow-firmament = {index="arroyo"}
cerberus = "~=1.2"
treasury = {path=".", editable=true}
[dev-packages]
[requires]
python_version = "3.6"
Contents of Pipfile.lock ('/home/matt/Repositories/Arroyo/Inflow/treasury/Pipfile.lock'):
{
"_meta": {
"hash": {
"sha256": "36a23264bf775173f8a8577b3d4806d6ed6bbf0c03a3c65da2ac9e630c4c4d0e"
},
"pipfile-spec": 6,
"requires": {
"python_version": "3.6"
},
"sources": [
{
"name": "pypi",
"url": "https://pypi.python.org/simple",
"verify_ssl": true
},
{
"name": "arroyo",
"url": "https://<redacted>:<redacted>@pypi.arroyo.io/simple",
"verify_ssl": true
}
]
},
"default": {
"arroyo": {
"hashes": [
"sha256:938c1d53237c14765559dd35fa5ac875c5db926ae932e6f599d4485e2e1a3c89"
],
"version": "==1.3.0.post2"
},
"arroyo-inflow-firmament": {
"hashes": [
"sha256:340335994606ee130c4995993f8a93d2a77d29c9b295692de34e1783d299b73e"
],
"index": "arroyo",
"version": "==0+untagged.61.g4c2cfea"
},
"arroyo-settings": {
"hashes": [
"sha256:c1ef2c686f0d23e0264c6d073fecc070fe2dfe874ff9d1cd4910d1292d88678a"
],
"version": "==0+untagged.9.gd911bdc"
},
"async-timeout": {
"hashes": [
"sha256:00cff4d2dce744607335cba84e9929c3165632da2d27970dbc55802a0c7873d0",
"sha256:9093db5b8ddbe4b8f6885d1a6e0ad84ae3155464cbf6877c387605244c285f3c"
],
"version": "==2.0.1"
},
"cerberus": {
"hashes": [
"sha256:f5c2e048fb15ecb3c088d192164316093fcfa602a74b3386eefb2983aa7e800a"
],
"index": "pypi",
"version": "==1.2"
},
"grpcio": {
"hashes": [
"sha256:022dc2a6d1537a5a16af4ccc3355ad7b512f9c627a1d5d579cd7c18830378bb3",
"sha256:025a591606b0aca13bec3e019d6acec01a39421f01b915b98a3a93ea0a53b412",
"sha256:03265472d39bf26f124c3ef68446f7873c8260893e6ae65b323a5b51ed52e580",
"sha256:10efe2e016c3ca7a370771ffcf1de9baa3456d4bccefde0f4ce3be091d871c8f",
"sha256:14bca481188c3f19135012aaff9abefa4e15529c7e1aca3084183d78094d06bd",
"sha256:17240d672b5c1c9ff22e52236c1870413b7fb5af762b97ce5a747a55e0a57e98",
"sha256:1bc36e512741f82c1d73f42df536aa2ab75d840f0d35c149b5d0bee1aed16862",
"sha256:224c513fbe0c3ca546870e5c21b08a8a56cd25795b76b3192ee9702a3344764b",
"sha256:2894466c499d9752e0d49ee8adc5ee12c676d86211fc1b292bf713cc7cfe9853",
"sha256:435b3bab2e34814666854eec203c77b169df1cd56cf22fe449cf5510af416e7d",
"sha256:4765600467d7cdb8f62a591d4427ddbeefcf4dbbe46e2f1b10af555e815ecbcb",
"sha256:4fa658a7e1ba5727ca066b1c8bb64c6befb98f2b8007f04a16c7c84555bf11b9",
"sha256:575b918e17a611bf1a22782291215cf34fc4b0a4c16316300ee3684a49729918",
"sha256:87e52924a99ac5935a468b3fe49c4b0090bd9b05470b55ed1192308791e6d332",
"sha256:88afda198adb0a9da52a66152062027a57877b46f59ffcf55acc3cbfaff77160",
"sha256:982439a872d41f969724efc139e0416ba45e0d7446e9a41fd2ebe19351adff9a",
"sha256:a1bc37c9910d0fbf4d9e80d5822f92c6e01e28dd1eb01323636ed19666b537cb",
"sha256:aa473b8276de39eeccc4ad6cbb7fd7feab0868180d72c0c93226033c79fa69b7",
"sha256:ae82bf2f7ceac6ba956e816120b4f66bda035571350e46b61bbdde1808aed1dd",
"sha256:b56e4f355c2499bb0bf8f8f4d0362b618b06afdfd2c10722710596dc7e295c6c",
"sha256:d2accc8e354f0ed5b337865260a78b3c6851d2fe3c0e1b025d437122cc15dd31",
"sha256:d410835e7554d064c2d99cfa0dd393ffbb0ccf52145ab51c725a8472ed254a3c",
"sha256:d9e3105f6de6cb759b028702bdd21cb36d27e010227669e43c675b9957a3c180",
"sha256:da306c80d69801a3e4115c448ed4ad481957d723ec1e00b99497c6661573c3e5",
"sha256:e579e4124d2a0931ce39639c60e0711918d6659b933eb97e67f60f84666ea488",
"sha256:e86639989c03831912fd9924beda26f6e9ffcc267656cea035bde9d88cf793b2",
"sha256:ea9564f58144e2f07995d57fb8e636be5efb084cd59c8651391ada2bb75dc0ff",
"sha256:f4a38071dd27f140cfe774f56aecdf0e33de926c21289cc9c7521ce8dd91fc1c"
],
"index": "arroyo",
"version": "~=1.10.0"
},
"grpclib": {
"hashes": [
"sha256:158b5f77037b11bae1fd7775183b96a7f76581b1ba8e3716770abfacfaf79e68"
],
"version": "==0.1.0"
},
"h2": {
"hashes": [
"sha256:4be613e35caad5680dc48f98f3bf4e7338c7c429e6375a5137be7fbe45219981",
"sha256:b2962f883fa392a23cbfcc4ad03c335bcc661be0cf9627657b589f0df2206e64"
],
"version": "==3.0.1"
},
"hpack": {
"hashes": [
"sha256:0edd79eda27a53ba5be2dfabf3b15780928a0dff6eb0c60a3d6767720e970c89",
"sha256:8eec9c1f4bfae3408a3f30500261f7e6a65912dc138526ea054f9ad98892e9d2"
],
"version": "==3.0.0"
},
"hyperframe": {
"hashes": [
"sha256:87567c9eb1540de1e7f48805adf00e87856409342fdebd0cd20cf5d381c38b69",
"sha256:a25944539db36d6a2e47689e7915dcee562b3f8d10c6cdfa0d53c91ed692fb04"
],
"version": "==5.1.0"
},
"multidict": {
"hashes": [
"sha256:0dcf4f2893bf22839c7bd825f688d5fe60c8eb989b4eb817103a71d7f84058e5",
"sha256:1524bb334b605f6c7cf447e19e70d6fb96f68aefefe018bdceb9674572548c45",
"sha256:1b93d1b72b12566a6e238acb4f547cdf6de069c5b555faabfe9071852434b61a",
"sha256:24052724195e46872739faa10c611957bbaceae28eec92e1ce49150b115ec5ed",
"sha256:27643705c5a04cfbf7834b914e5367618f77b2692f920c734b18f476ea328f04",
"sha256:2b07135edc953e6a7e94d8628868715093efb015fc6a0ebf54c5ecd84064e5f8",
"sha256:3876b617228d60d655062f9ddaecb0f770777b8ae753e661de9a7d5eb4ef2933",
"sha256:3f11e31935d20822c977397e9ec868ab2287c82461bc74663c7df1bd8a5b61d0",
"sha256:4e0dbf5a204c462d6b129b9598e5124077244a9b91c255c5341f679472dc54a5",
"sha256:5a56ec27a528fce6a3fdf537929b039b8af01edec761b09f7fcde3915d3fbbe7",
"sha256:5bd46d01a49264f059d4c7b1f26cb5cbbcacb549edb77ad5caa2070ec4bffe47",
"sha256:6e5128658f82cd8d1830f159027c2be0af496b1b6aa710353a6c862a9285bc89",
"sha256:6ef4cf27db3424bcc6e7f9ead3abee53bca2c3a1db5821585cb3e386ae55178f",
"sha256:7aeccfbc7ccc29dcceca11ee295f5a267b093d90cf50808d4649d87e72bdbd89",
"sha256:7c43ca71db568e13d301e3a6153996a3e0da5d4b19c3517d2418fa5775d2d173",
"sha256:9deae50f23511e5639fadf8df68917027535e090b163c5bbb03e26f6a208dbfc",
"sha256:aab8e9063ff623387f72c04b55c43211da2edd4e0db9943057522a995c330877",
"sha256:ae1002b4c793a6b88c8208c8a312038185ffcf76a57fdbe6c5d0f62052737a65",
"sha256:af340843de65c4d678379e8e0b5bcfd2e614da8ed666f1ba006704fe456edbba",
"sha256:d3161a3697f8a332aa86da1402f4020499d50ccedc6eb3d8a40d5e3aca3c2afd",
"sha256:eafe84d62e45b17c82483a6b4b1a9757c91a1d6d9511d8b32ded52936582fdcb",
"sha256:f16c517bb33c8ce75ba5e8d5212e3591bc59f4ab837b5b9906a3ee5868180449"
],
"version": "==4.2.0"
},
"protobuf": {
"hashes": [
"sha256:01ccd6d03449ae75b779fb5bf4ed62177d61afe3c5e6465ccf3f8b2e1a84afbe",
"sha256:1d92cc30b0b46cced33adde5853d920179eb5ea8eecdee9552502a7f29cc3f21",
"sha256:242e4c7ae565267a8bc8b92d707177f915607ea4bd73244bec6cbf4a49b96661",
"sha256:3b60685732bd0cbdc802dfcb6071efbcf5d927ce3127c13c33ea1a8efae3aa76",
"sha256:3f655e1f99c3e14d56ca900af1b9a4715b691319a295cc38939d7f77eabd5e7c",
"sha256:560a38e692a69957a70ba0e5839aa67430efd63072bf91b0539dac19055694cd",
"sha256:5c1c8f6a0a68a874e3beff89255959dd80fad45870e96c88944a1b81a22dd5f5",
"sha256:628a3bf0794a8b3cabb18db11eb67cc10e0cc6e5525d557ae7b682bb73fa2018",
"sha256:7222d6616108b33ad6cbeff8117062a73c43cdc8fa8f64f6a322ebeb663e710e",
"sha256:76ef6ca3c50e4cfd044861586d5f1b352e0fe7f17f883df6c165bad5b4d0e10a",
"sha256:7c193e6964e752bd056735594826c5b03274ceb8f07349d3ae47d9766250ba96",
"sha256:869e12bcfb5759e683f53ec1dd6155b7be034065431da289f0cb4510040a0799",
"sha256:905414e5ea6cdb78d8730f66335755152b46685fcb9fc2f2134024e3ea9e8dcc",
"sha256:ac0067e3c60737865ed72bb7416e02297d229d960902802d874c0e167128c809",
"sha256:adf716a89c9cc1891ead79a861c427071ef59172f0e11967b00565a9547b3bd0",
"sha256:bcfa99f5a82f5eaaf6e5cee5bfdca5a1670f5740aec1d93dae170645ed1a16b0",
"sha256:cc94079ae6cbcea5ae194464a30f3223f075e06a0446f52bca9ddbeb6e9f412a",
"sha256:d5d9edfdc5a3a01d06062d677b121081629782edf0e05ca1be14f15bb947eeee",
"sha256:e269ab7a50bf0fa6fe6a88ea7dcc7a1079ae9450d9ab9b7730ac32916d55508b",
"sha256:e7fd33a3474cbe18fd5b5620784a0fa21fcae3e402b1806e29c6b450c7f61706"
],
"version": "==3.5.2.post1"
},
"pymongo": {
"hashes": [
"sha256:051770590ddbd5fb7db17d3315d4c1b0f18039d830dd18e1bae39451c30d31cd",
"sha256:061085dfe4fbf1d9d6ed2f2e52fe6ab72559e48b4294370b433751638160d10b",
"sha256:07fdee1c5567f237796a8550233e04853785d8dcf95929f96ab519ed91543109",
"sha256:0d98731aaea8cb32b535c376f6785927e4e3d9459ffe1440b8a639827a849350",
"sha256:10f683950f70626ccedf4a662d1c0b3244e8e013c2067872af5633830abd1bfd",
"sha256:192ee5e33821931f4ec6df5fff4361220c0c92bb5b7437c6db52e20a0c9b4d98",
"sha256:2954b99cfeb76776879e9f8a4cae9c5e19d5eff92d0b7b663ceddcf192adb66b",
"sha256:36a992e02fced328de5304145dc3729a8cea12e58ad34b842a6f46d7941c9fc7",
"sha256:419ed5d5b76ef304815f354d9df7f2085acfd6ff7cc1b714ca702e2239b341c2",
"sha256:42ec201fd9a26e7c1e611e3db19324dead51dd4646391492eb238b41749340e8",
"sha256:4400fa92af310bf66b76c313c7ded3bb63f3d63b4f43c3bfbff552cf294dc9fa",
"sha256:44abdc26989600bb03b62d57616ec7c1b9182290720167c39e38c3a2b0d44e44",
"sha256:45fb9f589c0f35436dbe391c53a387ffffa8d086b8521a86fca4f3e1d0edbf71",
"sha256:4807dfbb5cdcfe0224329992dc48b897c780d0ad7553c3799d34f84ba5cab446",
"sha256:54daf67e1e7e7e5a5160c86123bdd39b1d3b25876c2ab38230dc2a764cb3d98f",
"sha256:5f2814a9492a724fd77c90ffc01f810276ef9972ae02587bfaae40835f9b8407",
"sha256:5fd6ce5ed3c6c92d2c94756e6bf041304e5c7c5a5dbea31b8957d52a78bdf01d",
"sha256:601e00fe7fb283f04c95f5dafb787c0862f48ca015a6f1f81b460c74e4303873",
"sha256:63a47a97b5cb4c67c86552b15e08df12ff026a648211120adf5ebe00453e85e9",
"sha256:6c4459d5c2b45ba55e14360e03078426015c1b0881facaec51bd9bd9e2304cec",
"sha256:7fbd9233e8b6741b047c5857e2ad5efb74091f167d7fa8a2a3379217165058f9",
"sha256:7ffac35362c07c103b024b89875e8d7f0625129b65c56fa8a3ecebbd56110405",
"sha256:833bc6cb2ec7058dea9f5840a9314ac74738d2117486a044e88f3976e37ea7a0",
"sha256:92cb26a2a9b38e8df5215803f950b20a6c847d5e00d1dd125eaa84f05f9472d7",
"sha256:97d6a218c4ad4f8fdde0143776d5224e884cbcfe631e7446379fa1790d8cf04f",
"sha256:9e5f0e8967d95a256038817460844a8aab588b9bc9ba6296507a1863960a0e44",
"sha256:9e6db7ff63fb836d56e62216e10e868c23a99f3cb02875411eb2cb787acf58c7",
"sha256:a0a695eef38c15570f6da3b4900e1a1d85fa92c754177d5f05267b49da79c92b",
"sha256:aa46076524471729430afacca3dd8ad4578878eca6fc9e2b593a0b381b5bbeb7",
"sha256:abf83b908e535b1386a7732825994e6e36eff6394c1829f3e7a23888136484fa",
"sha256:adb2dba52c8a2a2d7bcd3b267f7bbf7c822850cf6a7cd15211b9f386c3a670ef",
"sha256:ae7b3479822a03f6f651913de84ba67101f23e051ae88034085e974f472dcfff",
"sha256:c596af57286ef28cae7a48e3070d222f96f5f0eab76ad39d680ae6b9bbc957c7",
"sha256:cc15b30f0ac518e6cbd4b6e6e6162f8aa14edfe255d0841146f146151bd58865",
"sha256:d23498d62063b715078947bef48fa4d34dc354f3b268ed15dc6b46fc809a88e9",
"sha256:dd29bb5bc9068ccc248c8c145efd839421f04363b468b47cfa2d4902ca369afe",
"sha256:e2745dd408a26d4517702d1686afc8e1e1638d2167e857c684f912192cc00dcf",
"sha256:e53ad0cc6c489f83e7f6bb6121aa73bb6f6488410024a3bd77c16af1aa3a1000",
"sha256:ecb11113407d919f8714cc7d0841985044633d0b561ef3d797e1b494a3e73537",
"sha256:ece2c2add66d3ec2720a963bf073ca11fc3b0b58159767fc3bc5ddaad791d481",
"sha256:ef25c8675f5c8c19832f69cd97d728d99bb4ab9c3b200e28a5c8416631afaf3c",
"sha256:f62a818d643776873713c5676f17bd95ac4176220b13dd12c14edd3a450d1ac9",
"sha256:f7ebcb846962ee40374db2d9014a89bea9c983ae63c1877957c3a0a756974796"
],
"version": "==3.6.1"
},
"raven": {
"hashes": [
"sha256:e4edf648829a64234800a10ed94ca08e0b38592f7449fa5e70931db62f5cd851",
"sha256:f908e9b39f02580e7f822030d119ed3b2e8d32300a2fec6373e5827d588bbae7"
],
"version": "==6.7.0"
},
"six": {
"hashes": [
"sha256:70e8a77beed4562e7f14fe23a786b54f6296e34344c23bc42f07b15018ff98e9",
"sha256:832dc0e10feb1aa2c68dcc57dbb658f1c7e65b9b61af69048abc87a2db00a0eb"
],
"version": "==1.11.0"
},
"treasury": {
"editable": true,
"path": "."
}
},
"develop": {}
}
Expected result
grpcio==1.10.0 to be installed from our private index.
Actual result
grpcio==1.10.1 is installed from pypi.
See https://gist.github.com/seglberg/7425c0bee2cb17fefa00397badaf889e for pipenv install --verbose output.
If I had to wager a guess as to what I'm seeing, it's that the newer versions try to install -e . first, and then resolve the newest version of grpcio, thus not adhering to what is set in the Pipfile.
The older versions seem to install -e . last, so the dependencies have already been installed and resolved correctly.
This happens even with version constraints as well, not just indexes.
[packages]
grpcio = {index = "arroyo", version = "==1.10.0"}
arroyo-inflow-firmament = {index = "arroyo"}
cerberus = "~=1.2"
"e1839a8" = {path = ".", editable = true}
Using pipenv 11.10.1, grpcio==1.11.0 is installed, but downgrading to 11.9.0 installs grpcio==1.10.0 as expected. Both tests were conducted with the exact same lockfile.
The same lockfile or the same Pipfile?
Pipenv now uses the specified index and all additional indexes as --extra-index-url, so if you have the same name and an earlier version in the specified index you'll lose. We do have a fix planned that will help you -- see #1921
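As a rough illustration of the precedence described above (a sketch only; the candidate versions listed are assumptions, not what either index actually served at the time):
from packaging.specifiers import SpecifierSet
from packaging.version import Version
spec = SpecifierSet("~=1.10.0")
# With --extra-index-url, candidates from the primary and the extra indexes
# are pooled together, so the origin of a release no longer matters.
candidates = {
    "pypi": [Version("1.10.1"), Version("1.11.0")],
    "arroyo": [Version("1.10.0")],
}
pool = [v for versions in candidates.values() for v in versions if v in spec]
print(max(pool))  # 1.10.1 -- the newer PyPI build beats the pinned private one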
lockfile.
Ah okay, this all makes sense now; it explains why we see a difference in the behavior.
We will continue to use the older version of pipenv for now until #1921 is released.
See #2159, it better articulates the issue.
|
gharchive/issue
| 2018-04-30T20:49:52 |
2025-04-01T04:35:36.349634
|
{
"authors": [
"seglberg",
"techalchemy"
],
"repo": "pypa/pipenv",
"url": "https://github.com/pypa/pipenv/issues/2102",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1123478540
|
ERROR:display_layout.cc
Be sure to check the existing issues (both open and closed!), and make sure you are running the latest version of Pipenv.
Check the diagnose documentation for common issues before posting! We may close your issue if it is very similar to one of them. Please be considerate, or be on your way.
Make sure to mention your debugging experience if the documented solution failed.
Issue description
I am trying to execute a script with python -m pipenv run main.py but it is throwing a display_layout.cc error.
I can't find a solution to this problem anywhere online.
Expected result
The program to execute without throwing any errors.
Actual result
Console output after executing the command: [9464:0203/141858.364:ERROR:display_layout.cc(562)] PlacementList must be sorted by first 8 bits of display_id
The script's contents:
print('Hello, World!')
Pipfile contents:
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
[dev-packages]
[requires]
python_version = "3.10"
Locations:
Pipenv version: `'2021.11.23'`
Pipenv location: `'C:\\Users\\grantwforsythe\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\pipenv'`
Python location: `'C:\\Users\\grantwforsythe\\AppData\\Local\\Programs\\Python\\Python310\\python.exe'`
Steps to replicate
I am unsure of how to replicate the issue since I assume this is a problem with my system.
Please run $ pipenv --support, and paste the results here. Don't put backticks (`) around it! The output already contains Markdown formatting.
If you're on macOS, run the following:
$ pipenv --support | pbcopy
If you're on Windows, run the following:
> pipenv --support | clip
If you're on Linux, run the following:
$ pipenv --support | xclip
Are you using Windows? If so, which terminal are you using?
Yes, I am; Windows 10 Pro v10.0.19044. I am currently using the PowerShell Integrated Console v2021.12.0 in VS Code.
Can you please see if this problem occurs on the normal cmd.exe? Unfortunately, I don't use Windows and I can't help much here. It seems like there is an issue with the console. Also, I am guessing that your script is invoking some browser? It seems related to Chrome.
@grantwforsythe I think the issue is the command you are running; it doesn't work on git shell either and tries to open a new window. Specifically this: python -m pipenv run main.py
pipenv should be used to run python, not the other way around. For example, pipenv run python main.py worked for me:
$ pipenv run python main.py
Hello, World!
Hi, @matteius
Your comment was a solution to my problem.
Thanks, @oz123, for your help as well.
|
gharchive/issue
| 2022-02-03T19:36:14 |
2025-04-01T04:35:36.358393
|
{
"authors": [
"grantwforsythe",
"matteius",
"oz123"
],
"repo": "pypa/pipenv",
"url": "https://github.com/pypa/pipenv/issues/4943",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1462038233
|
pipenv check - Ruamel vendoring isn't working
Issue description
2022.11.23 included the new Safety version, which requires ruamel. The wheel for this pipenv version is missing the ruamel package; therefore, pipenv check fails with the following error:
Checking PEP 508 requirements...
Passed!
Checking installed packages for vulnerabilities...
Traceback (most recent call last):
File "/Users/foo-user/.pyenv/versions/3.11.0/bin/pipenv", line 8, in <module>
sys.exit(cli())
^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/cli/options.py", line 57, in main
return super().main(*args, **kwargs, windows_expand_args=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1053, in main
rv = self.invoke(ctx)
^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/decorators.py", line 84, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/vendor/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/cli/command.py", line 510, in check
do_check(
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/core.py", line 2985, in do_check
from pipenv.patched.safety.cli import cli
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/patched/safety/cli.py", line 12, in <module>
from pipenv.patched.safety import safety
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/patched/safety/safety.py", line 21, in <module>
from .util import RequirementFile, read_requirements, Package, build_telemetry_data, sync_safety_context, SafetyContext, \
File "/Users/foo-user/.pyenv/versions/3.11.0/lib/python3.11/site-packages/pipenv/patched/safety/util.py", line 17, in <module>
from pipenv.vendor.ruamel.yaml import YAML
ModuleNotFoundError: No module named 'pipenv.vendor.ruamel'
Inspecting the wheel, we can see that the ruamel package isn't there.
Expected result
pipenv check works as expected.
It is checked in, though; I wonder if this is another case where the built wheel is excluding these files for some reason.
I merged a small change to setup.py on main that worked locally to include ruamel in the wheel. Feel free to try it out @yeisonvargasf -- I plan to do a follow-up release 2022.11.24 either later tonight or tomorrow.
It is checked in, though; I wonder if this is another case where the built wheel is excluding these files for some reason.
@matteius I reviewed this more in detail. I think the issue is that the ruamel directory under /pipenv/vendor/ruamel/ doesn't have an __init__.py, so in the wheel build process, ruamel is ignored.
I merged a small change to setup.py on main that worked locally to include ruamel in the wheel.
This change also worked for me.
I think adding an __init__.py makes more sense.
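For context, a minimal sketch of why the directory gets dropped and what a setup.py workaround can look like (the exact merged change isn't quoted in this thread, so the names below are assumptions):
from setuptools import find_packages, setup
# find_packages() only returns directories that contain an __init__.py, so a
# vendored tree like pipenv/vendor/ruamel/ (shipped without one) is silently
# left out of the built wheel.
packages = find_packages(exclude=["tests", "tests.*"])
# One workaround is to list the vendored namespace packages explicitly so the
# wheel build picks them up anyway.
packages += ["pipenv.vendor.ruamel", "pipenv.vendor.ruamel.yaml"]
setup(
    name="pipenv",
    packages=packages,
)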
@yeisonvargasf adding an init file would get blown away during revendoring unless it was created via the patch, so I like this way for now.
I got it, it makes sense, and I agree. Thank you!
Ah bummer, the published wheel is still missing ruamel.
It is weird -- when I build locally with the same command that the GitHub Action uses, it does put ruamel in the wheel file, but GitHub Actions does not.
OK, so we will need a more permanent solution for keeping the __init__.py in place in the pipenv/vendor/ruamel directory -- I'll keep this ticket open for that task. However, I just verified that the new release 2022.11.25 does in fact include the required ruamel.
|
gharchive/issue
| 2022-11-23T16:14:45 |
2025-04-01T04:35:36.366834
|
{
"authors": [
"matteius",
"yeisonvargasf"
],
"repo": "pypa/pipenv",
"url": "https://github.com/pypa/pipenv/issues/5493",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|