id (string) | text (string) | source (string, 2 classes) | created (timestamp) | added (timestamp) | metadata (dict)
---|---|---|---|---|---|
33138761
|
Luna County, NM
Contact:
http://www.lunacountynm.us/assessors-office/
Closing as stale - Feel free to open a new issue if someone is actively going after these.
|
gharchive/issue
| 2014-05-09T01:18:19 |
2025-04-01T06:39:52.017002
|
{
"authors": [
"ingalls",
"tlpinney"
],
"repo": "openaddresses/openaddresses",
"url": "https://github.com/openaddresses/openaddresses/issues/226",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
94742522
|
skip us-mn-scott, add kandiyohi
Looks like we might need to wait a bit for this one to settle. There is a new "OpenData" folder on the root of the endpoint, which is encouraging.
The scott file has failed, which is why I want to skip it. What do I need to do to make this pull request happier?
Get one of us to merge it :smile:. Thanks! Hopefully we can get Scott County back.
Looks like they've moved their stuff behind a proxy:
http://gis.co.scott.mn.us/Proxy/proxy.ashx?http://gis.co.scott.mn.us/scgis/rest/services/Property_Info/PUBLIC_PARCEL_APP_RW/MapServer/25
The proxy uses some ESRI API token to gain access to the backing ESRI endpoint. I don't think our code will handle that as-is.
There is a lot of peer pressure in the metro to open up the data, and I hopefully will hear about it if it happens. Thanks.
Yes, the proxy establishes a login, and once you have a session open you can view the REST endpoint. But that session expires and then you're toast. However, still holding out hope that "OpenData" folder might have a service in it someday.
|
gharchive/pull-request
| 2015-07-13T15:05:15 |
2025-04-01T06:39:52.019907
|
{
"authors": [
"iandees",
"mmdolbow"
],
"repo": "openaddresses/openaddresses",
"url": "https://github.com/openaddresses/openaddresses/pull/1074",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1980215236
|
CreateResponse fails to create response with new tools format
CreateResponse is not yet aware of the new tool_calls field in responses and returns null as the content of the response.
Done in https://github.com/openai-php/client/releases/tag/v0.7.7
|
gharchive/issue
| 2023-11-06T22:47:59 |
2025-04-01T06:39:52.021407
|
{
"authors": [
"chekalsky",
"gehrisandro"
],
"repo": "openai-php/client",
"url": "https://github.com/openai-php/client/issues/238",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
371469306
|
Arguments 'network' and 'seed' are not documented in the docstring of the learn function of deepq.py
Neither the network argument nor the seed argument is documented in the docstring of the learn function of deepq.py. Additionally, the function's docstring contains a reference to a nonexistent argument q_func (outdated).
Fixed, thanks!
|
gharchive/issue
| 2018-10-18T10:43:31 |
2025-04-01T06:39:52.023453
|
{
"authors": [
"JulianoLagana",
"pzhokhov"
],
"repo": "openai/baselines",
"url": "https://github.com/openai/baselines/issues/659",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
312063387
|
Don't use system libzip
Issue summary
The build can be made more reliable by not attempting to use the system libzip. The build process already downloads libzip into third_party; rather than trying to ensure that the installed version of libzip has the interface we expect, just always build the third-party library and consume it.
For example, on Ubuntu 14.04, the file zip.h does not contain a definition for zip_t. This results in the build failing.
System information
Operating system: Ubuntu 14.04
Python version: 3.6
Gym Retro version: 06629f649062fb0bf4e81bb217d8bf01f72d83f6
system libzip is now ignored if it's too old.
Cool; thanks for the explanation and fixing the issue! :)
|
gharchive/issue
| 2018-04-06T17:54:25 |
2025-04-01T06:39:52.035910
|
{
"authors": [
"cwgreene",
"endrift"
],
"repo": "openai/retro",
"url": "https://github.com/openai/retro/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1341707624
|
MINIBALL: Memory virtualization
Refer to the vmm-reference project to implement Miniball, and use dragonball-sandbox crates to implement memory virtualization.
Fixes: https://github.com/openanolis/dragonball-sandbox/issues/182
Signed-off-by: yaoyinnan yaoyinnan@foxmail.com
This PR includes https://github.com/openanolis/dragonball-sandbox/pull/178 temporarily.
Hi @yaoyinnan , could you help to rebase this PR?
Hi @yaoyinnan , could you help to rebase this PR?
has been rebased
|
gharchive/pull-request
| 2022-08-17T12:43:13 |
2025-04-01T06:39:52.041404
|
{
"authors": [
"studychao",
"yaoyinnan"
],
"repo": "openanolis/dragonball-sandbox",
"url": "https://github.com/openanolis/dragonball-sandbox/pull/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1389601330
|
Question: Which template file should I work on to tweak the "_parse_response" generated code?
Describe the bug
While the openapi-python-client tool works well for generating code as per the OpenAPI Specification for https://codebeamer.com/cb/v3/swagger/editor.spr , sometimes the server sends an unexpected response that is outside of the OpenAPI Spec contract.
In such cases, take for example HTTP error code 429:
def _parse_response(
    *, response: httpx.Response
) -> Optional[Union[TooManyRequestsException, TrackerItem]]:
    if response.status_code == 403:
        response_403 = TrackerItem.from_dict(response.json())
        return response_403
    if response.status_code == 404:
        response_404 = TrackerItem.from_dict(response.json())
        return response_404
    if response.status_code == 200:
        response_200 = TrackerItem.from_dict(response.json())
        return response_200
    if response.status_code == 429:
        response_429 = TooManyRequestsException.from_dict(response.json())
        return response_429
    return None
The server has to send a JSON object, but it sends a string message. This makes response.json() fail.
I would like to handle such situations via templates and write code like the one below:
if response.status_code == 429:
    try:
        response_429 = TooManyRequestsException.from_dict(response.json())
    except JSONDecodeError as ex:
        ...  # Do error handling / Log it well
    return response_429
I tried to achieve this behavior by changing jinja templates. I looked into templates/endpoint_module.py.jinja but I am lost.
Question:
Which template should I edit to achieve JSON decoder error handling?
To Reproduce
NA
Expected behavior
NA
OpenAPI Spec File
NA
Desktop (please complete the following information):
OS: Ubuntu 22.04.1 LTS
Python Version: 3.10
openapi-python-client version: 0.11.6
Additional context
NA
I think you're looking in the right file; you probably want to edit this block:
{% if parsed_responses %}
def _parse_response(*, response: httpx.Response) -> Optional[{{ return_string }}]:
    {% for response in endpoint.responses %}
    if response.status_code == {{ response.status_code }}:
        {% import "property_templates/" + response.prop.template as prop_template %}
        {% if prop_template.construct %}
        {{ prop_template.construct(response.prop, response.source) | indent(8) }}
        {% else %}
        {{ response.prop.python_name }} = cast({{ response.prop.get_type_string() }}, {{ response.source }})
        {% endif %}
        return {{ response.prop.python_name }}
    {% endfor %}
    return None
{% endif %}
to add your statement around {{ prop_template.construct(response.prop, response.source) | indent(8) }} (and adjust indents appropriately).
To be clear, though, I don't think it's the generated client's job to handle the server misbehaving or the spec being wrong—so this isn't the sort of thing that should be added universally to the generator. Instead, this should be part of custom templates which are unstable—so make sure to pin to an exact version of openapi-python-client if you're going that route.
Thanks @dbanty. Yes, it's clear that it's not the generator's job; that is why I am looking into custom templates. I have already used custom templates to handle a datetime-related issue.
Ex: datetime_property.py.jinja
{% set transformed = source + ".isoformat(timespec='milliseconds')" %}
Awesome, great! Just wanted to make sure I wasn't accidentally setting false expectations 😅
Thanks for letting me know, I will be careful with custom templates :)
|
gharchive/issue
| 2022-09-28T15:57:23 |
2025-04-01T06:39:52.055353
|
{
"authors": [
"bakkiaraj",
"dbanty"
],
"repo": "openapi-generators/openapi-python-client",
"url": "https://github.com/openapi-generators/openapi-python-client/issues/678",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
159869311
|
Bolus via wizard not appearing in Nightscout
Here are the commits from two iter_pump_hours 4 reports, one loop apart. The first commit adds a bolus wizard entry to history, the second adds the bolus itself to history (presumably the record was created after delivery finished?). Because the bolus has the same timestamp as the already-uploaded bolus wizard entry, it is excluded by nightscout cull-latest-openaps-treatments:
First commit: https://gist.github.com/mddub/f64b5ae8cacbaa1b67b308654450ec46
Second commit: https://gist.github.com/mddub/ec20634e5a579a7225655b715b2bf930
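To make the exclusion mechanism above concrete, here is a toy Python illustration (not the actual nightscout code) of how a "strictly newer than the latest uploaded treatment" filter drops a record that shares its timestamp with an already-uploaded one:
# Toy illustration only: timestamps and record shapes are made up for this example.
latest_uploaded = "2016-06-12T08:00:00"  # created_at of the already-uploaded bolus wizard entry

pump_history = [
    {"_type": "BolusWizard", "timestamp": "2016-06-12T08:00:00"},  # already in Nightscout
    {"_type": "Bolus", "timestamp": "2016-06-12T08:00:00"},        # added later, same timestamp
]

# Keeping only entries strictly newer than the latest uploaded treatment
# culls the bolus as well, so it never reaches Nightscout.
to_upload = [entry for entry in pump_history if entry["timestamp"] > latest_uploaded]
print(to_upload)  # -> []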
From my openaps.ini:
latest-ns-treatment-time = ! bash -c "nightscout latest-openaps-treatment $NIGHTSCOUT_HOST | json created_at"
format-latest-nightscout-treatments = ! bash -c "nightscout cull-latest-openaps-treatments monitor/pumphistory-zoned.json settings/model.json $(openaps latest-ns-treatment-time) > upload/latest-treatments.json"
upload-recent-treatments = ! bash -c "openaps format-latest-nightscout-treatments && test $(json -f upload/latest-treatments.json -a created_at eventType | wc -l ) -gt 0 && (ns-upload $NIGHTSCOUT_HOST $API_SECRET treatments.json upload/latest-treatments.json ) || echo \"No recent treatments to upload\""
upload = ! bash -c "openaps upload-recent-treatments 2>/dev/null >/dev/null && echo -n \"Uploaded; most recent treatment event @ \" && openaps latest-ns-treatment-time || echo \"Error; could not upload\""
I set these aliases up a long time ago according to the docs. Is there a newer way to upload treatments which prevents this issue?
(Running openaps 0.1.0, oref0 0.1.4, decocare 0.0.23)
I've seen things like this when the history is read and uploaded before the bolus finishes. The trick for fixing that is checking the bolusing status.
The hack I'm using for that is here: https://github.com/jasoncalabrese/indy-e1b/blob/master/openaps.ini#L3
Sounds like the read history commands (and possibly others?) should check the bolusing (optionally taking a pump-status.json?) and... maybe wait a bit before continuing?
Not sure if it should wait or fail, but it does make sense to have a read history use that takes a status.
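A minimal Python sketch of the kind of check being discussed, assuming the pump status report (e.g. monitor/status.json) exposes a boolean "bolusing" field; the file name and field name are assumptions based on this thread, not a guaranteed openaps interface:
import json
import time

def wait_until_not_bolusing(status_path="monitor/status.json", interval=5, max_wait=300):
    """Poll a pump status report and return True once no bolus is being delivered.

    In a real loop the status report would need to be refreshed between polls
    (for example by re-invoking the report that writes status_path).
    """
    waited = 0
    while waited <= max_wait:
        with open(status_path) as f:
            status = json.load(f)
        if not status.get("bolusing", False):
            return True
        time.sleep(interval)
        waited += interval
    return False  # gave up waiting; the caller can decide whether to fail or continue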
Thanks @jasoncalabrese, I like your bolusing workaround. Sounds like the right way to fix this is a change to history commands in openaps/decocare, so I'll close this.
|
gharchive/issue
| 2016-06-13T03:54:22 |
2025-04-01T06:39:52.065168
|
{
"authors": [
"bewest",
"jasoncalabrese",
"mddub"
],
"repo": "openaps/oref0",
"url": "https://github.com/openaps/oref0/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
260059936
|
wait for bolus to complete and refresh pumphistory after SMB/enact
This makes oref0 refresh pumphistory after an SMB/enact so that it can be displayed more promptly in NS.
only refresh after SMB/enact; wait_for_silence 10 if bolusing - would this not refresh after a temp? That's OK if so, just trying to clarify. Otherwise this one looks good to me!
Yes, that is intentional. It also refreshes any time a >30m low temp is still running, even if not new, which is IMO an acceptable side effect for simplicity's sake. If we wanted to instead do modulus math we could, but my first attempt didn't work so I decided not to.
|
gharchive/pull-request
| 2017-09-24T05:39:22 |
2025-04-01T06:39:52.066939
|
{
"authors": [
"jaylagorio",
"scottleibrand"
],
"repo": "openaps/oref0",
"url": "https://github.com/openaps/oref0/pull/667",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
309480836
|
IPMI test to check channel supported
Reference : https://github.com/openbmc/openbmc/issues/3036
[x] Enumerate network interface and get the active channels
[x] Check the channels are functional or not
In review https://gerrit.openbmc-project.xyz/#/c/10059/
|
gharchive/issue
| 2018-03-28T18:31:21 |
2025-04-01T06:39:52.086544
|
{
"authors": [
"gkeishin"
],
"repo": "openbmc/openbmc-test-automation",
"url": "https://github.com/openbmc/openbmc-test-automation/issues/1313",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2430899887
|
[Bug]: OB2 WEB 0.3.1 CLOSES ON ITS OWN
This issue respects the following points:
[X] This is a bug, not a question or a configuration issue; Please visit our forum first to troubleshoot with volunteers, before creating a report. The links can be found here.
[X] This issue is not already reported on GitHub (I've searched it).
[X] I'm using an up to date version of OpenBullet2. We generally do not support previous older versions. If possible, please update to the latest version before opening an issue.
[X] I agree to follow OpenBullet's Code of Conduct.
[X] This report addresses only a single issue; If you encounter multiple issues, kindly create separate reports for each one.
Description of the bug
Unhandled exception. System.Net.NetworkInformation.NetworkInformationException (0x80004005): An error was encountered while querying information from the operating system.
---> System.AggregateException: One or more errors occurred. (An error was encountered while querying information from the operating system.)
---> System.Net.NetworkInformation.NetworkInformationException (0x80004005): An error was encountered while querying information from the operating system.
at System.Net.NetworkInformation.BsdNetworkInterface..ctor(String name, Int32 index)
at System.Net.NetworkInformation.BsdNetworkInterface.Context.GetOrCreate(Byte* pName, Int32 index)
at System.Net.NetworkInformation.BsdNetworkInterface.ProcessLinkLayerAddress(Void* pContext, Byte* ifaceName, LinkLayerAddressInfo* llAddr)
--- End of inner exception stack trace ---
at System.Net.NetworkInformation.BsdNetworkInterface.GetBsdNetworkInterfaces()
at System.Net.NetworkInformation.NetworkInterfacePal.GetIsNetworkAvailable()
at System.Net.NetworkInformation.NetworkChange.OnAddressChanged(IntPtr store, IntPtr changedKeys, IntPtr info)
zsh: abort dotnet ./OpenBullet2.Web.dll
Reproduction steps
nn
What is the current bug behavior?
OB2 starts working, but after an hour or a few hours it stops working.
What is the expected correct behavior?
nn
Version of the client
0.3.1
Type of client
Web client
Environment
- OS: MacOS 12.7.5 (21H1222)
- Virtualization:
- Browser: Chrome Version 126.0.6478.183
OpenBullet2 logs
No response
Client / Browser logs
No response
Relevant screenshots or videos
No response
Relevant LoliCode if needed
No response
Additional information
No response
It seems like it's an issue with other pieces of software too, possibly related to the .NET runtime on MacOS.
https://emby.media/community/index.php?/topic/126585-arm-version-crashing-every-few-days/
I'm pretty sure the code that causes this is the following
https://github.com/openbullet/OpenBullet2/blob/master/OpenBullet2.Web/Services/PerformanceMonitorService.cs#L143-L159
For now I'll try to add a try/catch statement around it and hope it works; otherwise I will have to disable network I/O stats for macOS as a whole.
|
gharchive/issue
| 2024-07-25T20:08:45 |
2025-04-01T06:39:52.097636
|
{
"authors": [
"chrislefron",
"openbullet"
],
"repo": "openbullet/OpenBullet2",
"url": "https://github.com/openbullet/OpenBullet2/issues/1079",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
340167113
|
MH-12998: Clear conflicts when closing “Edit Scheduled Events” modal
When you have conflicts displayed in the “Edit scheduled events”
modal, then close the modal an reopen it again (possibly with
different events and without changing anything), the previous error
message resides.
This commit clears existing conflicts before the modal is closed.
This work was sponsored by SWITCH.
Travis reports failing tests, but I didn't even change any .java files.
I think Greg was still working on the Travis build at the moment. Let's wait a moment and see what happens.
@pmiddend @doofy just ignore Buildbot for now. If Travis is happy then we're happy. Buildbot is experiencing #189, which has not been solved yet.
Any update on this?
Oh, thanks for the reminder, will look into it today.
Reviewing now…
Seems to work fine now. Thanks for the patch → merged
|
gharchive/pull-request
| 2018-07-11T09:41:06 |
2025-04-01T06:39:52.115821
|
{
"authors": [
"doofy",
"gregorydlogan",
"lkiesow",
"pmiddend"
],
"repo": "opencast/opencast",
"url": "https://github.com/opencast/opencast/pull/346",
"license": "ECL-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2156369842
|
Add Cloud Masking/Detection Algorithm
There is a paper here: https://www.meteoswiss.admin.ch/services-and-publications/publications/scientific-publications/2013/the-heliomont-surface-solar-radiation-processing.html that describes how surface solar radiation is determined for MeteoSwiss, but also includes detecting different types of clouds and creating a cloud mask in SEVIRI imagery.
Detailed Description
Context
This can be quite useful for making our own cloud masks, or cloud-type classifications, from the raw imagery. The paper also includes an interesting way of correcting for orbital maneuvers of the satellite, to realign the imagery, which might be very helpful.
Possible Implementation
The paper is quite detailed, so possibly just going directly off of that into Satip.
Can you assign me this issue and give me a brief on what to do and how, so that I can work on it?
Regards
Hi, the details are in the paper linked to in that website, they have their approach to cloud masking that should work here. For adding it to Satip, you could add a cloud_mask file that has the cloud masking algorithm implementation, and add some tests that run on the public Zarrs to see how well it works?
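If it helps to picture the shape of such a contribution, here is a minimal, hypothetical Python sketch of what a cloud_mask module plus a test could start from. The simple brightness-temperature threshold is only a placeholder for the HelioMont/SPARC algorithm, the test uses synthetic data rather than the public Zarrs, and none of these names are existing Satip APIs:
import numpy as np
import xarray as xr

def cloud_mask(ir_bt: xr.DataArray, threshold_k: float = 280.0) -> xr.DataArray:
    """Return 1 where a pixel is classified as cloud, 0 otherwise.

    Placeholder rule: IR brightness temperatures colder than `threshold_k`
    are treated as cloud. A real implementation would compute a SPARC-style
    score from several channels and clear-sky composites.
    """
    return (ir_bt < threshold_k).astype("uint8").rename("cloud_mask")

def test_cloud_mask_flags_cold_pixels():
    bt = xr.DataArray(np.array([[250.0, 300.0], [275.0, 290.0]]), dims=("y", "x"))
    assert cloud_mask(bt).values.tolist() == [[1, 0], [1, 0]]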
I have read those details, but when I was building the project and downloading the data with the EUMETSAT API I ran into this error. Can you please provide me some information or a solution regarding this error?
File "/home/grewal/Satip/venv/lib/python3.10/site-packages/botocore/auth.py", line 418, in add_auth
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Can I get your inputs on this issue?
Hi, sorry for the delay, it seems that you need to log in to AWS for those credentials. That seems like you are using the app.py, which currently does upload to S3 by default. For this, you should be able to use the public google cloud dataset here to get the raw data to use with the cloud masking algorithm.
Also, I would recommend focusing on a single issue @vikasgrewal16 if possible? There are quite a few potential GSoC contributors who want different good first issues. I've seen you also commented on #231; are you more interested in this one or that one? Or a different one?
Hi @jacobbieker !
I've read the details of the SPARC cloud masking algorithm as mentioned in the reference provided by you. Currently, I'm looking through the properties of the raw data from the shared data bucket and would like to work on the implementation part of the algorithm. I would appreciate it if you could assign this issue to me. Thank you!
Hi @Surya-29, that sounds great!
I'm having some trouble finding attributes necessary for calculating the SPARC score (used for cloud mask). The problem is that these attributes, specifically clear sky/cloud free brightness temperature $T_{cf}$ and background reflectance $\rho_{cf}$ , aren't available in the SEVIRI dataset provided. They can either be calculated (Section 6 Clear Sky Compositing 1) or retrieved from other datasets (All Sky Radiances 2) provided by EUMETSAT. Can I go with the latter option since calculating these attributes might involve fitting a model over the diurnal course? However, the issue with accessing the ASR dataset is that it is only available on the EUMETSAT Data Center (which requires us to order it) and not on the Data Store, so downloading via API is not possible right? @jacobbieker How should I approach this now?
Ah okay, I would have thought that info would have been in the attributes of the native files. Yeah, for a first pass on getting this in, I think getting some data from the data center, and using that is probably the right way to go for now. We can always try to then add calculating the values ourselves later, as the data center can be quite slow to give data. You are right there is no api access to the data center unfortunately. Another, less ideal option, would be to see if we can find an average value, either for the year or per month, that we could use instead? But not sure if there is that published or not somewhere.
Yes, I'll probably go with averaging for background reflectance $\rho_{cf}$. As for brightness temperature $T_{cf}$, I would prefer to implement the model mentioned in the paper, if possible, since the final $sparc_{score}$ requires at least $T_{score}$ to be calculated. Although this aggregate score cloud masking algorithm could compensate for other missing attributes in $sparc_{score}$ calculation.
I've made progress on implementing the cloud masking algorithm and have committed the changes to my remote repository
( changes ), should I raise a PR even though the functionality of the code is partial?
Where should I add the cloud_mask.py file? Would it be appropriate to create a subpackage in Satip, or do I add it directly under Satip? (Better if we could have it as a subpackage, since you've mentioned the possibility of extending the architecture to include other algorithms also.)
How do I handle the data? I've been only using the data values (numpy array) for my convenience, but the final result should be xarray.DataArray type including attribute information, etc., just like what you get from EUMETSAT Cloud Mask Dataset right?
Lastly, is there a specific Area of Interest? Are we focusing only on the European region?
Awesome! I would open a PR as a draft PR even if it's incomplete, and just keep adding to it that way.
Yeah, a subpackage would be really good to have.
Yes, the output should be in an Xarray data format, primarily to keep the coordinates and satellite attribute information, you could probably essentially just swap out the data values in the xarray satellite image with the cloud mask data and it would be good to go.
If it is easier, focusing on the European area of interest is fine for now, but we would want to extend it to work over Africa and with the Indian Ocean imagery as well.
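A minimal sketch of that value swap, assuming satellite_da is one channel of the satellite image as an xarray.DataArray and mask_values is a NumPy array of the same shape produced by the masking code (names here are illustrative):
import numpy as np
import xarray as xr

def to_cloud_mask_dataarray(satellite_da: xr.DataArray, mask_values: np.ndarray) -> xr.DataArray:
    # copy(data=...) keeps the coordinates and attributes of the satellite image
    # while replacing the pixel values with the cloud mask.
    cloud_mask_da = satellite_da.copy(data=mask_values.astype("uint8"))
    cloud_mask_da.name = "cloud_mask"
    cloud_mask_da.attrs["long_name"] = "Binary cloud mask (1 = cloud)"
    return cloud_mask_da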
|
gharchive/issue
| 2024-02-27T11:38:34 |
2025-04-01T06:39:52.161130
|
{
"authors": [
"Surya-29",
"jacobbieker",
"vikasgrewal16"
],
"repo": "openclimatefix/Satip",
"url": "https://github.com/openclimatefix/Satip/issues/227",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
379014865
|
New presentation page with partner platforms to raise money for your collective
It would be great to have a page somewhere with the list of ways one collective can bring money in (and link to it from one of the automated onboarding emails, the FAQ, other?)
I'm thinking of CodeFund (@coderberry), Carbonads (@sayzlim), Threadless, Eventbrite, etc.
For each, we should have a logo, picture (e.g. of webpack t-shirt), and links to some of the transactions to different collectives made by those platforms.
As a step 2, it would be great to keep track of those partners and follow how much money each is bringing across all collectives.
@piamancini other partners we should include here?
@alannallama can you start a copy for this?
@cuiki can you come up with a design (following the same template as other marketing pages you've made recently)
Let me know if you need anything from us.
@xdamman Sorry I'm not totally clear about what you mean. Are you talking about income sources through Open Collective such as...
Small recurring donations
Large one-off sponsorship
Event tickets
Gift cards
Reward tiers (ex: selling t-shirts, VIP support hours)
... or is it something else? I don't know what 'partner platforms' are.
Are you thinking about something in the wiki, or a blog post, or...?
I’m more talking about platforms that collectives can use to make more money for their collectives. Basically services like the ones mentioned above that make it easy to get the earnings to your collective. So it’s not about special tiers. It’s more about what other platforms you can use out there that can help you make money for your collective.
|
gharchive/issue
| 2018-11-09T04:03:21 |
2025-04-01T06:39:52.191759
|
{
"authors": [
"alannallama",
"sayzlim",
"xdamman"
],
"repo": "opencollective/opencollective",
"url": "https://github.com/opencollective/opencollective/issues/1420",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
451579291
|
Collective URL with Slash
Describe the bug
A clear and concise description of what the bug is.
To Reproduce
Steps to reproduce the behavior:
Go to create a new collective
Pick Open Source Project
Click on Get Started
Select the repository you want to create a collective for
Enter the collective name
Enter the collective URL but include a slash in it e.g.: your-org/your-collective
Click on Create Collective
Navigate to http://www.opencollective.com/your-org/your-collective to find that it is not found. No way to edit it either.
Expected behavior
Create an opencollective url that would be sub-directoried to your org.
http://www.opencollective.com/your-org/your-collective
Screenshots
Desktop (please complete the following information):
OS: Windows 10
Browser Chrome
Version 74.0.3729.169
Thank you for reporting that. We're working on this, will be fixed soon. Closing as duplicate of https://github.com/opencollective/opencollective/issues/2077
In the meantime, please contact support if you need to be unblocked.
Anything that we can do in the meantime? Can you delete that collective so I can re-create it?
We can do support here also :-) Should we edit it to be slimphp or slim?
slim is fine thanks!
Done http://opencollective.com/slim
Looks like you also created an Organization and a User account with the same name. If it was by accident, maybe you want to rename them or delete them.
I'm sorry if these are noob questions but I thought an organization could have collectives. I just followed the regular process when signing up. Do I need a collective? An org? Both?
Normally, you just need one Collective "Slim Framework", whatever the number of repositories. It doesn't make sense to split into multiple collectives.
An Organization at Open Collective is different from the GitHub definition. It's mostly used by companies donating to Collectives. It can also be used to host (as in fiscal hosting) Collectives.
Finally, the User account is usually under your real name, to show who is administrating and contributing to the Collective.
More info here https://docs.opencollective.com/ or contact support https://opencollective.com/support.
Okay gotcha, then in this case I will change my user to be myself, delete the organization and can we change the collective to be slimphp instead of Slim?
Thank you.
|
gharchive/issue
| 2019-06-03T16:33:44 |
2025-04-01T06:39:52.202668
|
{
"authors": [
"l0gicgate",
"znarf"
],
"repo": "opencollective/opencollective",
"url": "https://github.com/opencollective/opencollective/issues/2084",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
836782112
|
Expense attachments icons are not respecting their container size on Firefox mobile
On mobile:
See https://opencollective.com/captainfact_io/expenses/35329
It looks like it's fixed.
@snitin315 You're viewing an expense where you don't have permissions over the attachments. The bug only seems to affect the preview icon as seen in the screenshot above. I just tested on Firefox Mobile + Android and it's still there.
@Betree : Quick fix for this done by making sure the attachment holder stays the same size. Hope that looks good now. Let me know. 😄
Facing something similar on Safari:
This is still happening. Context:
Latest Firefox mobile on Android
https://opencollective.com/captainfact_io/expenses/64909
Logged in as a collective admin
Still seeing this issue, I'm adding a bounty on it
:trophy: This issue has been completed by @snitin315
To claim the bounty, you can either:
Submit an invoice to https://opencollective.com/engineering/expenses/new (please mention the issue number)
Or ask for an Open Collective gift card of the same value (to donate to other collectives)
|
gharchive/issue
| 2021-03-20T11:10:58 |
2025-04-01T06:39:52.210520
|
{
"authors": [
"Betree",
"SudharakaP",
"kewitz",
"snitin315"
],
"repo": "opencollective/opencollective",
"url": "https://github.com/opencollective/opencollective/issues/4099",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1810465540
|
Recurring direct debits
Who is your user?
Financial contributors
What are they trying to achieve?
They want to set up a recurring bank transfer when contributing to a collective. They do this via direct debit and provide the reference number from the first transfer so that they can contribute periodically.
How are they currently doing this?
They are currently doing this and are putting the reference number for the first transfer, not knowing that the reference number needs to be unique for each bank transfer.
I know we don’t allow recurring bank transfers from the platform, but people can set them up from the banking side via direct debit.
Each manual bank transfer gets given an individual reference number for allocation purposes. Can we give donors wishing to set up direct debit a reference number that will work every month? https://opencollective.freshdesk.com/a/tickets/222083
How well understood is this problem?
[ ] Surfacing: if you feel this is a seed of an idea that needs to be researched.
[ ] Understanding: if you've done some research and feel it is already well understood and ready for shaping.
[ ] Shaping: if you feel a solution is clearly defined and can be made into a pitch (read more about shaping)
[ ] Pitching: if you feel that it is ready to be presented for prioritisation (read more about pitching)
This was specifically asked for by the Social Change Nest which is based in the UK
@hdiniz can we enable Stripe elements for them?
I enabled the Stripe elements for the-social-change-nest; they now have BACS direct debit as an option for contribution.
@hdiniz thank you! This had been annoying me for a year or so - I can finally remove my (unreliable) self from the donation loop :grin:
@shannondwray : I suppose we can close this given we support Stripe elements now for the collective in question.
|
gharchive/issue
| 2023-07-18T18:20:58 |
2025-04-01T06:39:52.216605
|
{
"authors": [
"Betree",
"SudharakaP",
"hdiniz",
"phlash",
"shannondwray"
],
"repo": "opencollective/opencollective",
"url": "https://github.com/opencollective/opencollective/issues/6873",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2610467213
|
MTU-1.3: Added new testcases
Added new testcases according to the requirement mentioned in MTU-1.3.2
Pull Request Functional Test Report for #3539 / adf1cc275c1398369aff389ce9bda072b2216d7b
Virtual Devices

Device | Test
---|---
Arista cEOS | MTU-1.3: Large IP Packet Transmission
Cisco 8000E | MTU-1.3: Large IP Packet Transmission
Cisco XRd | MTU-1.3: Large IP Packet Transmission
Juniper ncPTX | MTU-1.3: Large IP Packet Transmission
Nokia SR Linux | MTU-1.3: Large IP Packet Transmission
Openconfig Lemming | MTU-1.3: Large IP Packet Transmission

Hardware Devices

Device | Test
---|---
Arista 7808 | MTU-1.3: Large IP Packet Transmission
Cisco 8808 | MTU-1.3: Large IP Packet Transmission
Juniper PTX10008 | MTU-1.3: Large IP Packet Transmission
Nokia 7250 IXR-10e | MTU-1.3: Large IP Packet Transmission
Pull Request Test Coverage Report for Build 11492920714
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 55.268%
Totals
Change from base Build 11492895831: 0.0%
Covered Lines: 1983
Relevant Lines: 3588
💛 - Coveralls
|
gharchive/pull-request
| 2024-10-24T05:01:30 |
2025-04-01T06:39:52.253701
|
{
"authors": [
"OpenConfigBot",
"coveralls",
"hattikals"
],
"repo": "openconfig/featureprofiles",
"url": "https://github.com/openconfig/featureprofiles/pull/3539",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2755985436
|
new test mpls-1.1 script
Adding the new test mpls-1.1 script for ancx
Pull Request Functional Test Report for #3669 / a5492b9e25819824bf0fa7b1811d4a1462a71db1
Virtual Devices

Device | Test
---|---
Arista cEOS | MPLS-1.1: MPLS label blocks using ISIS
Cisco 8000E | MPLS-1.1: MPLS label blocks using ISIS
Cisco XRd | MPLS-1.1: MPLS label blocks using ISIS
Juniper ncPTX | MPLS-1.1: MPLS label blocks using ISIS
Nokia SR Linux | MPLS-1.1: MPLS label blocks using ISIS
Openconfig Lemming | MPLS-1.1: MPLS label blocks using ISIS

Hardware Devices

Device | Test
---|---
Arista 7808 | MPLS-1.1: MPLS label blocks using ISIS
Cisco 8808 | MPLS-1.1: MPLS label blocks using ISIS
Juniper PTX10008 | MPLS-1.1: MPLS label blocks using ISIS
Nokia 7250 IXR-10e | MPLS-1.1: MPLS label blocks using ISIS
Pull Request Test Coverage Report for Build 12466879571
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 55.268%
Totals
Change from base Build 12460989461: 0.0%
Covered Lines: 1983
Relevant Lines: 3588
💛 - Coveralls
|
gharchive/pull-request
| 2024-12-23T12:34:01 |
2025-04-01T06:39:52.277009
|
{
"authors": [
"OpenConfigBot",
"coveralls",
"ram-mac"
],
"repo": "openconfig/featureprofiles",
"url": "https://github.com/openconfig/featureprofiles/pull/3669",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1061064043
|
Connected file share folder doesn't work
My actions before raising this issue
[x] Read/searched the docs
[x] Searched past issues
Problem mounting existing folders as connected
Expected Behaviour
Connected file share folder mounted and working as proposed to the documentation.
Current Behaviour
Hello, newbie here. I installed CVAT release 1.7.0 and also tried 1.6.0, but there is one serious problem.
I cannot access connected file share. I have followed the example of
the manual: https://openvinotoolkit.github.io/cvat/docs/administration/basics/installation/#share-path
and I created the file docker-compose.override.yml for binding an existing folder and also creating a volume and mounting it as external. Both volumes are not empty, but populated with image files. In both cases, when I log into the container using the "docker exec" command, I can see the existing files in both volumes, but CVAT refuses to display them in the tab. It still shows the message: "No Data
Please, be sure you had mounted share before you built CVAT and the shared storage contains files".
I have to say that I've checked the permissions of the folders and the files, and the owner is the superuser of CVAT (default: "django"). I have also searched the issues section on GitHub but found no solution so far.
Steps for reproduction:
I create the docker-compose.override.yml and start the container.
I invoke docker-compose for starting the container with the command:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up
My docker- compose.override.yml file:
#docker-compose.override.yml
version: "3.3"
services:
  cvat:
    environment:
      CVAT_SHARE_URL: "Here we share!"
    volumes:
      - cvat_share:/home/django/my_shared_folder:ro # external
      - cvat_shared:/home/django/sharedfiles:ro # binded folder
volumes:
  cvat_share:
    external: true
  cvat_shared:
    driver_opts:
      type: none
      device: /home/myuser/cvat_shared/
      o: bind
System Information:
My system is Linux Mint 20.2 Cinnamon. My Docker version is Community 20.10.4. I don't use Docker Swarm.
My linux kernel is 5.11.0-40-generic.
The output of git log -1 is:
commit 2bb8643c1aba53b9ee4fd836b0955340f61d80ed (HEAD -> release-1.7.0, origin/release-1.7.0)
There are no error logs concerning volumes in the CVAT container. Docker mounts them correctly.
Possible Solution
Fix the bug or modify the documentation.
Next steps
You may join our Gitter channel for community support.
@jstefanis Hi, it looks like your configuration is wrong:
cvat_shared:/home/django/sharedfiles:ro
The container path must be /home/django/share. If you want to mount 2 volumes, try to use /home/django/share/my_shared_folder and /home/django/share/sharedfiles.
Hello.
You are right. It finally worked!
The connected folders can be mounted under "/home/django/share/" to be available.
I think that it is not really clear in the documentation and a newbie could make mistakes.
Thank you for your time.
Hello, I have the same problem.
docker-compose.yml file:
cvat_server:
  container_name: cvat_server
  image: cvat/server:${CVAT_VERSION:-dev}
  restart: always
  depends_on:
    - cvat_redis
    - cvat_db
    - cvat_opa
  environment:
    DJANGO_MODWSGI_EXTRA_ARGS: ''
    ALLOWED_HOSTS: '*'
    CVAT_REDIS_HOST: 'cvat_redis'
    CVAT_POSTGRES_HOST: 'cvat_db'
    CVAT_SHARE_URL: ''
    ADAPTIVE_AUTO_ANNOTATION: 'false'
    IAM_OPA_BUNDLE: '1'
    no_proxy: elasticsearch,kibana,logstash,nuclio,opa,${no_proxy:-}
    NUMPROCS: 1
    USE_ALLAUTH_SOCIAL_ACCOUNTS:
  command: -c supervisord/server.conf
  volumes:
    - cvat_data:/home/django/data
    - cvat_keys:/home/django/keys
    - cvat_logs:/home/django/logs
    - cvat_share:/home/django/share:ro
cvat_worker_import:
  container_name: cvat_worker_import
  image: cvat/server:${CVAT_VERSION:-dev}
  restart: always
  depends_on:
    - cvat_redis
    - cvat_db
  environment:
    CVAT_REDIS_HOST: 'cvat_redis'
    CVAT_POSTGRES_HOST: 'cvat_db'
    no_proxy: elasticsearch,kibana,logstash,nuclio,opa,${no_proxy:-}
    NUMPROCS: 2
  command: -c supervisord/worker.import.conf
  volumes:
    - cvat_data:/home/django/data
    - cvat_keys:/home/django/keys
    - cvat_logs:/home/django/logs
    - cvat_share:/home/django/share:ro
  networks:
    - cvat
volumes:
  cvat_db:
  cvat_data:
  cvat_keys:
  cvat_logs:
  cvat_share:
    driver_opts:
      type: none
      device: /mnt/share
      o: bind
It doesn't work and shows no error; can you help me?
Hello, I have the same problem.
docker-compose.override.yml file:
services:
  cvat_server:
    labels:
      - traefik.http.routers.cvat.rule=(Host(10.10.10.114) || Host(61.160.97.86)) && PathPrefix(/api/, /git/, /opencv/, /static/, /admin, /documentation/, /django-rq)
    volumes:
      - cvat_share:/home/django/share:ro
  cvat_worker_import:
    volumes:
      - cvat_share:/home/django/share:ro
  cvat_ui:
    labels:
      - traefik.http.routers.cvat-ui.rule=(Host(10.10.10.114)
  traefik:
    ports:
      - 8019:8080
      - 8090:8090
  cvat_vector:
    ports:
      - '8282:80'
  cvat_clickhouse:
    ports:
      - '8123:8123'
volumes:
  cvat_share:
    driver_opts:
      type: none
      device: /data/cvat
      o: bind
|
gharchive/issue
| 2021-11-23T10:19:46 |
2025-04-01T06:39:52.349100
|
{
"authors": [
"KDD2018",
"Monlter",
"azhavoro",
"jstefanis"
],
"repo": "opencv/cvat",
"url": "https://github.com/opencv/cvat/issues/3938",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1407740276
|
Cannot use mounted volume with s3fs (S3 bucket) with cvat
My actions before raising this issue
[X] Read/searched the docs
[X] Searched past issues
Hi, I've been really trying to get this to work, but there is not a lot of documentation and none of the previous issues seem to be the same as mine.
What I'm trying to achieve is to use another data storage for CVAT: an S3 bucket on AWS for the volume cvat_data. I've also tried using cvat_share with the same results.
I've mounted the bucket using the tool s3fs as stated in the documentation, and then I tried modifying the docker-compose to use that volume instead of the named volume that comes by default.
My docker-compose.yml
cvat_server:
  container_name: cvat_server
  image: cvat/server:${CVAT_VERSION:-dev}
  restart: always
  depends_on:
    - cvat_redis
    - cvat_opa
  env_file:
    - .env
  environment:
    DJANGO_MODWSGI_EXTRA_ARGS: ''
    ALLOWED_HOSTS: '*'
    CVAT_REDIS_HOST: 'cvat_redis'
    ADAPTIVE_AUTO_ANNOTATION: 'false'
    no_proxy: elasticsearch,kibana,logstash,nuclio,opa,${no_proxy}
    NUMPROCS: 1
  command: -c supervisord/server.conf
  labels:
    - traefik.enable=true
    - traefik.http.services.cvat.loadbalancer.server.port=8080
    - traefik.http.routers.cvat.rule=Host(`${CVAT_HOST:-localhost}`) && PathPrefix(`/api/`, `/git/`, `/opencv/`, `/static/`, `/admin`, `/documentation/`, `/django-rq`)
    - traefik.http.routers.cvat.entrypoints=web
  volumes:
    - cvat_data:/home/django/data
    - cvat_keys:/home/django/keys
    - cvat_logs:/home/django/logs
  networks:
    - cvat
...
volumes:
  cvat_data:
    driver_opts:
      type: none
      device: /home/ubuntu/mnt/s3-cvat-bucket
      o: bind
  cvat_keys:
  cvat_logs:
The directory "/home/ubuntu/mnt/s3-cvat-bucket" exists.
s3fs is mounted successfully and it works (I've tried creating some files and those are reflected on AWS); it is mounted with the allow_other option to avoid issues with permissions.
The mount output is this: s3fs on /home/ubuntu/mnt/s3-cvat-bucket type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other)
Current Behaviour
When I attempt to run the docker-compose I get the following error:
ERROR: for cvat_server Cannot create container for service cvat_worker_low: failed to chmod on /var/lib/docker/volumes/cvat_cvat_data/_data: chmod /var/lib/docker/volumes/cvat_cvat_data/_data: input/output error
If I use any other folder that I create, it works. It seems to be a problem with this mounted folder.
The error seems to be highly related to privileges, and not directly to CVAT, I guess. This is the output of the "ls -l" command.
drwxrwxrwx 1 ubuntu ubuntu 0 Jan 1 1970 s3-cvat-bucket-unusuals
I've tried so many things that I don't even remember them all. This is as far as I've been able to narrow down the error.
Some issues that were helpful but didn't solve the problem: https://github.com/opencv/cvat/issues/2263, https://github.com/opencv/cvat/issues/3935, https://github.com/opencv/cvat/issues/1472, https://github.com/opencv/cvat/issues/4965
@Marishka17, could you look at this problem?
Hi @omar-ogm, Could you please try to remove previous volumes, create docker-compose.override.yml file with content
version: '3.3'
services:
  cvat_server:
    volumes:
      - cvat_share:/home/django/share:ro
  cvat_worker_default:
    volumes:
      - cvat_share:/home/django/share:ro
volumes:
  cvat_share:
    driver_opts:
      type: none
      device: /home/ubuntu/mnt/s3-cvat-bucket
      o: bind
and then run cvat_server: docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d --build cvat_server
Don't use the cvat_data volume, because we mount it without the ro option and, in case of success, additional CVAT files will be written to your S3 bucket.
I can't reproduce this problem on my side and I have no more ideas about what the problem can be. I see that the bucket mounting is correct; I have the same directory permissions and everything works for me. I guess this problem is not related to CVAT. If it's possible, could you please try to reproduce this problem on another machine? (e.g. use a virtual machine)
@omar-ogm , I will close the issue. Feel free to reopen if you can provide more information about the issue.
Hi @Marishka17, I have the same config, but it doesn't work and shows no error. It's unthinkable!
version: '3.3'
services:
  cvat_server:
    volumes:
      - cvat_share:/home/django/share:ro
  cvat_worker_default:
    volumes:
      - cvat_share:/home/django/share:ro
volumes:
  cvat_share:
    driver_opts:
      type: none
      device: /home/ubuntu/mnt/s3-cvat-bucket
      o: bind
|
gharchive/issue
| 2022-10-13T12:46:19 |
2025-04-01T06:39:52.360436
|
{
"authors": [
"KDD2018",
"Marishka17",
"nmanovic",
"omar-ogm",
"zhiltsov-max"
],
"repo": "opencv/cvat",
"url": "https://github.com/opencv/cvat/issues/5110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
658704905
|
Mounting local files and directories onto the dlstreamer_test container
Hello,
I am trying to mount a directory present on my machine into the Docker container dlstreamer_test, on Windows 10. I referred to the tutorial available at https://www.youtube.com/watch?v=ht4gqt4zSyw to achieve this.
I used the following command:
docker cp C:/Users/ShraddhaM/OneDrive/Desktop/streamlit-docker-master dlstreamer_test:/opt/intel/openvino_2020.3.194/streamlit-docker-master
I also tried this command:
docker cp C:/Users/ShraddhaM/OneDrive/Desktop/streamlit-docker-master dlstreamer_test:/openvino/ubuntu18_data_dev/streamlit-docker-master
I am getting the following errors, respectively, for both commands:
Error: No such container:path: dlstreamer_test:\opt\intel\openvino_2020.3.194
Error: No such container:path: dlstreamer_test:\openvino\ubuntu18_data_dev
@nnshah1 @brmarkus Can you please guide me to resolve this?
Sorry, I don't use Docker under MS-Win (yet), only under Linux environments.
Under Linux I use command lines like this:
docker run -it -v /outside/folder1/:/inside/folder1/ -v /outside/writable/:/inside/writable/:rw mycontainer:1.2.3 /bin/bash
=> "mapping" two volumes, the host folder /outside/folder1/ will be available inside the container as /inside/folder1/, and another one where I specify that I want to write/change/add something from within the container.
@brmarkus Thank you for the solution.
|
gharchive/issue
| 2020-07-17T00:43:28 |
2025-04-01T06:39:52.364977
|
{
"authors": [
"MauryaShraddha",
"brmarkus"
],
"repo": "opencv/gst-video-analytics",
"url": "https://github.com/opencv/gst-video-analytics/issues/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
496932085
|
face_recognition_demo: add options to override the FD input size
Background: the original face detection model has a 300x300 input, so it could not detect small faces in some use cases (e.g. smart classroom). We need to add a function to reshape the FD input shape in the demo app to detect smaller faces. This commit fulfills this requirement.
Jenkins please retry a build
Thanks for Wovchena's good suggestions. I have committed all my changes per your comments.
Jenkins please retry a build
You have made face_detector.py executable. Please revert that; it is not a script, so it should not be executable.
sorry for the mistake. I will correct it.
|
gharchive/pull-request
| 2019-09-23T07:18:46 |
2025-04-01T06:39:52.367398
|
{
"authors": [
"Wovchena",
"maozhong1",
"vladimir-dudnik"
],
"repo": "opencv/open_model_zoo",
"url": "https://github.com/opencv/open_model_zoo/pull/440",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
295190495
|
Add SQL interaction API
This enables the tabular result of a Cypher query to be registered as a named table in the session, and adds a SQL query call to CAPSSession which may refer to such registered tables.
Changed my mind; this is ready for review. The rest will come in a follow-up PR.
Very cool, but I don't think we need the additional CAPSRecords.wrap method.
Changed my mind again, this PR will do on its own.
This depends on the changes made in https://github.com/neo4j/neo4j/pull/10981
I've extracted the dependent bits of this PR to #281 to enable this to be merged. #281 seems like it would need to wait for a while.
|
gharchive/pull-request
| 2018-02-07T15:55:27 |
2025-04-01T06:39:52.422648
|
{
"authors": [
"Mats-SX",
"s1ck"
],
"repo": "opencypher/cypher-for-apache-spark",
"url": "https://github.com/opencypher/cypher-for-apache-spark/pull/279",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1644318867
|
E2E Step 5: Delete distributed workloads kfdef
Check that MCAD pod is gone
Check that InstaScale pod is gone
Check that KubeRay pod is gone
Should notebook imagestream go away?
/assign @jbusche
Yes @anishasthana, I just don't have the ability to assign or close these issues.
|
gharchive/issue
| 2023-03-28T16:42:28 |
2025-04-01T06:39:52.465576
|
{
"authors": [
"Maxusmusti",
"jbusche"
],
"repo": "opendatahub-io/distributed-workloads",
"url": "https://github.com/opendatahub-io/distributed-workloads/issues/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
94136814
|
Aggregate will not Install to App Engine
Migrated to opendatakit/aggregate#75 by spacetelescope/github-issues-import
Originally reported on Google Code with ID 1154
What steps will reproduce the problem?
1. Download ODK Aggregate v1.4.7 windows-installer
2. Insert required information
3. Run
What is the expected output? What do you see instead?
I expect it to load on the Google project I created in Developer
What version of the product are you using? On what operating system?
v1.4.7 on Windows Vista (I know, I'm working on an upgrade; it's a nonprofit, emphasis on "non")
Please provide any additional information below.
Here is the Log:
Unable to update:
com.google.appengine.tools.admin.HttpIoException: Error posting to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=ActiveStreetsSurveys&version=1&
400 Bad Request
Client Error (400)The request is invalid for an unspecified reason.
at com.google.appengine.tools.admin.AbstractServerConnection.send1(AbstractServerConnection.java:336)
at com.google.appengine.tools.admin.AbstractServerConnection.send(AbstractServerConnection.java:287)
at com.google.appengine.tools.admin.AbstractServerConnection.post(AbstractServerConnection.java:266)
at com.google.appengine.tools.admin.LoggingClientDeploySender.send(LoggingClientDeploySender.java:47)
at com.google.appengine.tools.admin.ResourceLimits.remoteRequest(ResourceLimits.java:173)
at com.google.appengine.tools.admin.ResourceLimits.request(ResourceLimits.java:139)
at com.google.appengine.tools.admin.AppAdminImpl.doUpdate(AppAdminImpl.java:505)
at com.google.appengine.tools.admin.AppAdminImpl.updateAllBackends(AppAdminImpl.java:80)
at com.google.appengine.tools.admin.AppCfg$BackendsUpdateAction.execute(AppCfg.java:2068)
at com.google.appengine.tools.admin.AppCfg.executeAction(AppCfg.java:337)
at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:218)
at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:128)
at com.google.appengine.tools.admin.AppCfg.main(AppCfg.java:124)
com.google.appengine.tools.admin.AdminException: Unable to update app: Error posting
to URL: https://appengine.google.com/api/appversion/getresourcelimits?app_id=ActiveStreetsSurveys&version=1&
400 Bad Request
Client Error (400)The request is invalid for an unspecified reason.
at com.google.appengine.tools.admin.AppAdminImpl.doUpdate(AppAdminImpl.java:517)
at com.google.appengine.tools.admin.AppAdminImpl.updateAllBackends(AppAdminImpl.java:80)
at com.google.appengine.tools.admin.AppCfg$BackendsUpdateAction.execute(AppCfg.java:2068)
at com.google.appengine.tools.admin.AppCfg.executeAction(AppCfg.java:337)
at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:218)
at com.google.appengine.tools.admin.AppCfg.<init>(AppCfg.java:128)
at com.google.appengine.tools.admin.AppCfg.main(AppCfg.java:124)
Caused by: com.google.appengine.tools.admin.HttpIoException: Error posting to URL:
https://appengine.google.com/api/appversion/getresourcelimits?app_id=ActiveStreetsSurveys&version=1&
400 Bad Request
Client Error (400)The request is invalid for an unspecified reason.
at com.google.appengine.tools.admin.AbstractServerConnection.send1(AbstractServerConnection.java:336)
at com.google.appengine.tools.admin.AbstractServerConnection.send(AbstractServerConnection.java:287)
at com.google.appengine.tools.admin.AbstractServerConnection.post(AbstractServerConnection.java:266)
at com.google.appengine.tools.admin.LoggingClientDeploySender.send(LoggingClientDeploySender.java:47)
at com.google.appengine.tools.admin.ResourceLimits.remoteRequest(ResourceLimits.java:173)
at com.google.appengine.tools.admin.ResourceLimits.request(ResourceLimits.java:139)
at com.google.appengine.tools.admin.AppAdminImpl.doUpdate(AppAdminImpl.java:505)
... 6 more
Reported by ActiveStreets on 2015-06-29 23:41:49
|
gharchive/issue
| 2015-07-09T19:46:30 |
2025-04-01T06:39:52.494066
|
{
"authors": [
"jbeorse",
"mitchellsundt"
],
"repo": "opendatakit/opendatakit",
"url": "https://github.com/opendatakit/opendatakit/issues/1155",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1766272234
|
FW/child_debug: use multiple sockets
In this case, we're referring to network sockets, not CPU sockets, but triggered by the upcoming changes to better support multiple CPU sockets and massive logical processor counts.
The current implementation used a simple socketpair to communicate between parent and child. This would break if multiple child processes attempted to send their state to the parent and halt at
// now wait for the parent process to be done with us
char c;
IGNORE_RETVAL(read(crashsocket, &c, sizeof(c)));
This PR refactors to use a server socket in the parent application and makes each child open its own client socket to send the state to the parent.
On Linux, we use the Unix autobind feature, which works just like UDP sockets. If that fails, we fall back to actual IP/UDP. Both solutions require no cleanup after closing the sockets, unlike non-abstract Unix sockets.
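For readers unfamiliar with autobind, here is a minimal Python sketch of the pattern (an illustration only, not the project's C implementation; Linux-only, since it relies on autobind):
import os
import socket

parent = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
parent.bind("")                      # empty address triggers Linux autobind (abstract namespace)
parent_addr = parent.getsockname()   # e.g. b'\x00d6894'

if os.fork() == 0:
    # child: its own autobound client socket; send state, then wait to be released
    child = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    child.bind("")
    child.sendto(b"crash-state", parent_addr)
    child.recvfrom(1)                # blocks until the parent is done with us
    os._exit(0)

state, child_addr = parent.recvfrom(4096)   # receive one child's state
parent.sendto(b"\x01", child_addr)          # release that child
os.wait()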
Tested on Linux and on FreeBSD.
Linux:
socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3<socket:[1678669]>
bind(3<socket:[1678669]>, {sa_family=AF_UNIX}, 2) = 0
getsockname(3<socket:[1678669]>, {sa_family=AF_UNIX, sun_path=@"d6894"}, [128 => 8]) = 0
setsockopt(3<socket:[1678669]>, SOL_SOCKET, SO_RCVBUF, [3072], 4) = 0
/bin/strace: Process 109058 attached
[pid 109058] socket(AF_UNIX, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 3<socket:[1681633]>
[pid 109058] bind(3<socket:[1681633]>, {sa_family=AF_UNIX}, 2) = 0
[pid 109058] setsockopt(3<socket:[1681633]>, SOL_SOCKET, SO_SNDBUF, [3072], 4) = 0
/bin/strace: Process 109059 attached
[pid 109059] --- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x8a9} ---
[pid 109059] sendmsg(3<socket:[1681633]>, {msg_name={sa_family=AF_UNIX, sun_path=@"d6894"}, msg_namelen=8, msg_iov=[{iov_base="\2\252\1\0\3\252\1\0\251\10\0\0\0\0\0\0\251\10\0\0\0\0\0\0\0\0\0\0\v\0\0\0"..., iov_len=48}, {iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\223\2\0\0\0\0\0\0"..., iov_len=184}, {iov_base="\177\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\200\37\0\0\377\377\0\0"..., iov_len=2696}], msg_iovlen=3, msg_controllen=0, msg_flags=0}, MSG_NOSIGNAL) = 2928
[pid 109056] recvmsg(3<socket:[1678669]>, {msg_name={sa_family=AF_UNIX, sun_path=@"bc448"}, msg_namelen=110 => 8, msg_iov=[{iov_base="\2\252\1\0\3\252\1\0\251\10\0\0\0\0\0\0\251\10\0\0\0\0\0\0\0\0\0\0\v\0\0\0"..., iov_len=48}, {iov_base="\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\223\2\0\0\0\0\0\0"..., iov_len=184}, {iov_base="\177\3\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\200\37\0\0\377\377\0\0"..., iov_len=2696}], msg_iovlen=3, msg_controllen=0, msg_flags=0}, 0) = 2928
[pid 109056] sendto(3<socket:[1678669]>, "\1", 1, MSG_NOSIGNAL, {sa_family=AF_UNIX, sun_path=@"bc448"}, 8) = 1
FreeBSD:
23199: socket(PF_INET,SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK,0) = 3 (0x3)
23199: bind(3,{ AF_INET 127.0.0.1:0 },16) = 0 (0x0)
23199: getsockname(3,{ AF_INET 127.0.0.1:59640 },0x82102119c) = 0 (0x0)
23199: setsockopt(3,SOL_SOCKET,SO_RCVBUF,0x821021194,4) = 0 (0x0)
...
23200: socket(PF_INET,SOCK_DGRAM|SOCK_CLOEXEC,0) = 3 (0x3)
23200: bind(3,{ AF_INET 127.0.0.1:0 },16) = 0 (0x0)
23200: setsockopt(3,SOL_SOCKET,SO_SNDBUF,0x821020f08,4) = 0 (0x0)
...
23200: SIGNAL 11 (SIGSEGV) code=SEGV_MAPERR trapno=12 addr=0xa84
23200: sendmsg(3,{{ AF_INET 127.0.0.1:59640 },16,[{"\240Z\0\0\0\0\0\0\0n\^D*\b\0\0\0"...,56},{"\0\0\0\0\0\0\0\0\M^D\n\0\0\0\0\0"...,800},{"\M^?\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,2176}],3,{},0,0},MSG_NOSIGNAL) = 3032 (0xbd8)
23199: recvmsg(3,{{ AF_INET 127.0.0.1:30227 },16,[{"\240Z\0\0\0\0\0\0\0n\^D*\b\0\0\0"...,56},{"\0\0\0\0\0\0\0\0\M^D\n\0\0\0\0\0"...,800},{"\M^?\0\0\0\0\0\0\0\0\0\0\0\0\0\0"...,2176}],3,{},0,0},0) = 3032 (0xbd8)
23199: 23199: sendto(3,"\^A",1,MSG_NOSIGNAL,{ AF_INET 127.0.0.1:30227 },16) = 1 (0x1)
...
23199: sendto(3,"\^A",1,MSG_NOSIGNAL,{ AF_INET 127.0.0.1:30227 },16) = 1 (0x1)
I'm thinking of a wholly different approach for this: using a pipe + futex instead.
|
gharchive/pull-request
| 2023-06-20T22:09:31 |
2025-04-01T06:39:52.503154
|
{
"authors": [
"thiagomacieira"
],
"repo": "opendcdiag/opendcdiag",
"url": "https://github.com/opendcdiag/opendcdiag/pull/252",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1823366881
|
FW: rewrite wait_for_child() as wait_for_children()
We only start one child for now, but now we can wait for more than one.
This looks good. Cannot tell for the Windows part though.
"It works" suffices for me. We will need to get back to it when we get more than 64 child processes.
|
gharchive/pull-request
| 2023-07-26T23:45:42 |
2025-04-01T06:39:52.504610
|
{
"authors": [
"thiagomacieira"
],
"repo": "opendcdiag/opendcdiag",
"url": "https://github.com/opendcdiag/opendcdiag/pull/350",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1684871947
|
Remove deprecated npm config definition always-auth
When running the tasks test in GitHub actions we run into the following error when using the npm-node18-toolset:
npm ERR! `always-auth` is not a valid npm option
(more details here: https://github.com/opendevstack/ods-pipeline/actions/runs/4807341283/jobs/8557135297#step:16:2872)
The npm team removed the config definition always-auth with release v7.11.1 (changelog) and the node18 base image seems to now use a higher version than that.
We should remove the following line from the build script:
https://github.com/opendevstack/ods-pipeline/blob/4a8716535d51acb099a81b6619c4c2f747896f77/build/package/scripts/build-npm.sh#L74
The npm team removed the config definition always-auth with release v7.11.1 (changelog) and the node18 base image seems to now use a higher version than that.
Thanks Henning for figuring this out. Breaking change in a patch version ... no comment.
Breaking change in a patch version ... no comment.
It seems to be a common theme with npm ... we might want to sponsor them a training in SemVer 😄
|
gharchive/issue
| 2023-04-26T11:50:36 |
2025-04-01T06:39:52.508809
|
{
"authors": [
"henninggross",
"michaelsauter"
],
"repo": "opendevstack/ods-pipeline",
"url": "https://github.com/opendevstack/ods-pipeline/issues/687",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
485710388
|
Command "export" is deprecated, use the oc get --export
Update the oc CLI commands used in scripts so they do not produce deprecation messages and wrong behaviours.
also related to issue: https://github.com/opendevstack/ods-jenkins-shared-library/issues/109
I think this might get removed completely in 4.1, need to check that.
Then we just keep it as it is for now? I will wait for the result of your check!
I am just bumping this issue since the script does not seem to work 100%; it might still be related to SA and rolebindings...
For now I can just confirm that the deprecated export command behaves differently; it definitely returns less data... and that might be the issue we are facing now
Yes the behaviour is different. I just know that export is using something from upstream Kubernetes, and that is going to be removed so OpenShift is following suit.
I think we need to tackle this issue soon - once there is an easy way to install Openshift 4.1 locally to play around with it and see what breaks.
|
gharchive/issue
| 2019-08-27T09:58:15 |
2025-04-01T06:39:52.511882
|
{
"authors": [
"gerardcl",
"michaelsauter"
],
"repo": "opendevstack/ods-project-quickstarters",
"url": "https://github.com/opendevstack/ods-project-quickstarters/issues/317",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1300550902
|
doc(yf): add distributed_rl_eng.rst
translate distributed_rl_zh.rst
wrong operation
|
gharchive/pull-request
| 2022-07-11T11:04:15 |
2025-04-01T06:39:52.517901
|
{
"authors": [
"VaninaY"
],
"repo": "opendilab/DI-engine-docs",
"url": "https://github.com/opendilab/DI-engine-docs/pull/152",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1726642709
|
Analysis api
Closes #625
This is an initial, crude implementation of the oft-discussed higher-level api.
Expect many API changes, and please provide feedback to help us evolve it!
Current dependencies on/for this PR:
main
PR #750 👈
This comment was auto-generated by Graphite.
|
gharchive/pull-request
| 2023-05-26T00:26:04 |
2025-04-01T06:39:52.520296
|
{
"authors": [
"Shoeboxam"
],
"repo": "opendp/opendp",
"url": "https://github.com/opendp/opendp/pull/750",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2512256315
|
Use sudo hostapd/hostapd/hostapd beacon.conf and Get an error!
Hi, I compiled the program, ran "sudo hostapd/hostapd/hostapd beacon.conf" on a Raspberry Pi 4B, and got the following error:
Configuration file: beacon.conf
rfkill: WLAN soft blocked
wlan0: interface state UNINITIALIZED->COUNTRY_UPDATE
wlan0: Could not connect to kernel driver
Using interface wlan0 with hwaddr d8:3a:dd:da:a1:7b and ssid "DroneIDTest"
Failed to set beacon parameters
wlan0: Could not connect to kernel driver
Interface initialization failed
wlan0: interface state COUNTRY_UPDATE->DISABLED
wlan0: AP-DISABLED
wlan0: Unable to setup interface.
wlan0: interface state DISABLED->DISABLED
wlan0: AP-DISABLED
wlan0: CTRL-EVENT-TERMINATING
hostapd_free_hapd_data: Interface wlan0 wasn't started
nl80211: deinit ifname=wlan0 disabled_11b_rates=0
How can I solve it?
Many thanks!
The "rfkill" line looks suspicious. From the readme of the repo:
"Check that there is nothing preventing the usage of the Wi-Fi HW by running the tool rfkill. Any SW block should be possible to unblock via the same tool."
|
gharchive/issue
| 2024-09-08T08:07:26 |
2025-04-01T06:39:52.523886
|
{
"authors": [
"friissoren",
"shanexia1818"
],
"repo": "opendroneid/transmitter-linux",
"url": "https://github.com/opendroneid/transmitter-linux/issues/17",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
802093199
|
ci: e2e refinements 1
Fix scripts/e2e-cluster-dump.sh for
Readability and maintainability
Generation of logs in directories
Fix scripts/e2e-test.sh to
Support generation of logs in directories
Move tests source from nightly sub directory to top level
Pretty much rewrote scripts/e2e-cluster-dump.sh, so it is probably best reviewed as a new file rather than by comparing with the previous version.
bors try
bore merge
bors merge
|
gharchive/pull-request
| 2021-02-05T11:35:24 |
2025-04-01T06:39:52.526936
|
{
"authors": [
"blaisedias"
],
"repo": "openebs/Mayastor",
"url": "https://github.com/openebs/Mayastor/pull/679",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2225303689
|
fix(rebuild): connect the io log when detaching
When we detach a device, ensure that the io logs are connected. This usually happens on the fault path; however, this change ensures that it happens during the detach itself, hardening it against races.
bors merge
|
gharchive/pull-request
| 2024-04-04T12:03:07 |
2025-04-01T06:39:52.527863
|
{
"authors": [
"tiagolobocastro"
],
"repo": "openebs/mayastor",
"url": "https://github.com/openebs/mayastor/pull/1620",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1726127486
|
Switch off of deprecated pkg_resources library
Background
The pkg_resources library, used by XBlock to load static assets, is deprecated. The removal timeline is unknown.
XBlock uses pkg_resources in a couple of places.
XBlock also recommends pkg_resources in its docs: https://edx.readthedocs.io/projects/xblock-tutorial/en/latest/anatomy/python.html
To do
First, choose a new resource loading interface. Options:
Use importlib.resources. Unfortunately, this will become deprecated in Python 3.11 and replaced with a yet-to-be-determined interface.
Wait until Python 3.11, and then switch to the 3.11 replacement.
Merge xblock-utils into this repository. That library provides a ResourceLoader abstraction; adopt that.
Then:
Update the docs to recommend the new interface.
Update any uses of pkg_resources in this repo.
Reach out to XBlock maintainers (including maintainers of edx-platform, which defines several XBlocks!) to request that they switch from pkg_resources to the new interface.
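As a rough illustration of the first option, a minimal sketch of loading a packaged static asset with importlib.resources inside a package module (the asset path and helper name are hypothetical, not XBlock API):
from importlib.resources import files  # available since Python 3.9

def load_asset(relative_path):
    # Read a static asset bundled with this package as UTF-8 text.
    return files(__package__).joinpath(relative_path).read_text(encoding="utf-8")

# Old, deprecated equivalent for comparison:
# import pkg_resources
# pkg_resources.resource_string(__name__, "static/html/myblock.html")

html = load_asset("static/html/myblock.html")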
Related
https://github.com/openedx/XBlock/issues/675
https://github.com/openedx/XBlock/issues/676
It's a duplicate of the following:
https://github.com/openedx/XBlock/issues/676
|
gharchive/issue
| 2023-05-25T16:20:13 |
2025-04-01T06:39:52.533984
|
{
"authors": [
"farhan",
"kdmccormick"
],
"repo": "openedx/XBlock",
"url": "https://github.com/openedx/XBlock/issues/641",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2049423873
|
Proposal Date
2023-12-19
Target Ticket Acceptance Date
2023-01-10
Earliest Open edX Named Release Without This Functionality
Redwood 2023-04
Rationale
Codecov does not work OOTB for most repositories created using this cookiecutter and most people remove it once their repository has been created. There are known alternatives that do not require additional steps for setup.
Removal
https://github.com/openedx/edx-cookiecutters/blob/8180c89864d9d596e57c0bec8fe4ac8daedd7081/cookiecutter-xblock/{{cookiecutter.repo_name}}/.github/workflows/ci.yml#L39
https://github.com/openedx/edx-cookiecutters/blob/8180c89864d9d596e57c0bec8fe4ac8daedd7081/cookiecutter-django-app/{{cookiecutter.repo_name}}/.github/workflows/ci.yml#L39
Replacement
TBD
Deprecation
Should not be needed
Migration
Should not be needed
Additional Info
No response
I'm fully in favor of removing a dependency on a 3rd party service that doesn't work (and manages to break our builds, too), but enforcing code coverage still strikes me as being as important as it's always been. Rather than totally ripping out the coverage check, could we set it up to enforce some generally-reasonable threshold by checking the output of coverage.py? Teams could still lower the threshold or rip it out, but at least they'd still start out with a reasonable default coverage check.
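A minimal sketch of such a check using coverage.py's Python API (the threshold is an arbitrary example; the same effect is available from the CLI via coverage report --fail-under):
import sys
import coverage

THRESHOLD = 90.0  # example value; teams could lower it or remove the check

cov = coverage.Coverage()
cov.load()            # read the .coverage data file produced by the test run
total = cov.report()  # prints the report and returns the total percent covered
if total < THRESHOLD:
    sys.exit(f"Coverage {total:.1f}% is below the required {THRESHOLD}%")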
Updated description with a possible alternative: https://github.com/py-cov-action/python-coverage-comment-action
I'd really like this to get someone's attention, because the failing uploads that break PRs are a frustration that many people are probably feeling. I guess we have scripts we might be able to use to quantify that. I think this issue deserves a shorter-term solution, which either means someone [Axim? Community? Arch-BOM?] picking up this work so that the long term is now, or implementing a retry or some other purely short-term solution. Thoughts?
As I understand it:
This is an issue for people creating new repos. Existing repos are already working around it, either by using something else or by disabling coverage-checking altogether.
The short-term solution would be to simply remove codecov from the cookiecutter.
The medium-term solution would be to either:
make codecov work OOTB in the cookiecutter,
replace it with something else in the cookiecutter.
The long-term solution would be to go back through the existing repos and apply the medium-term solution, with the end-goal of having the same coverage tool running on all Python repos.
Given that this only affects new repositories, my inclination is that it's not high-priority for Axim. Right now, our most acute challenge is ensuring a state of good maintenance for all Core Product repositories, which is already an overwhelming list.
@kdmccormick: I see. I didn't realize this issue was in edx-cookiecutters, and I was really asking about existing repos, like openedx-events where I saw a recent codecov upload failure.
Existing repos are already working around it, either by using something else or by disabling coverage-checking altogether.
So I guess for openedx-events, the options are to disable the check altogether or to use something else? Is there an example of the something else? This ticket lists a hackathon project as a potential replacement, but I'm not sure how far that got. Ideally we would not disable altogether without a replacement. I imagine there must be more repos with this problem.
We actually added documentation to the developer docs on how to set up and use a CodeCov alternative: https://docs.openedx.org/en/latest/developers/how-tos/use-python-coverage-comment.html
2U uses it in one of our private repos and it seems to be working acceptably.
@robrap Ah, I was confused too. I didn't realize that this was an active issue for existing repos.
Those docs look great @dianakhuang ! Would you both agree that:
As a short-term unblocker for affected repos, let's circulate that document around.
For the medium-term fix, the cookiecutter should be updated to use python-coverage-comment instead of codecov.
For the long-term fix, we should ticket up the work of moving all Python repos off of codecov.
This has passed the acceptance date, and I think we're all in agreement that we should move off CodeCov at least by default.
Note that the ticket https://github.com/edx/edx-arch-experiments/issues/528 got blocked/abandoned due to 2U issues.
openedx-events is attempting to use the replacement here: https://github.com/openedx/openedx-events/pull/323
We don't yet have a good plan for replacing JS coverage, and we need to figure out if we'll use codecov's bundle asset size tooling or if we need a different alternative.
|
gharchive/issue
| 2023-12-19T20:58:23 |
2025-04-01T06:39:52.551191
|
{
"authors": [
"dianakhuang",
"feanil",
"kdmccormick",
"rgraber",
"robrap"
],
"repo": "openedx/edx-cookiecutters",
"url": "https://github.com/openedx/edx-cookiecutters/issues/429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1526206739
|
Tutor plugins
Ralph plugin for Tutor
ClickHouse plugin for Tutor - in progress
SuperSet plugin for Tutor - in progress, Jill
These plugins exist and updates to them are being managed separately now!
|
gharchive/issue
| 2023-01-09T19:43:55 |
2025-04-01T06:39:52.605724
|
{
"authors": [
"bmtcril",
"jmakowski1123"
],
"repo": "openedx/openedx-oars",
"url": "https://github.com/openedx/openedx-oars/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1673794660
|
feat: add dependency management and upgrade Makefile
Description
This PR adds dependency management and upgrades the Makefile accordingly, and adds a CI workflow to publish to PyPI.
This opens the door for a pre-commit hook (for formatting).
Thanks for the pull request, @Ian2012! Please note that it may take us up to several weeks or months to complete a review and merge your PR.
Feel free to add as much of the following information to the ticket as you can:
supporting documentation
Open edX discussion forum threads
timeline information ("this must be merged by XX date", and why that is)
partner information ("this is a course on edx.org")
any other information that can help Product understand the context for the PR
All technical communication about the code itself will be done via the GitHub pull request interface. As a reminder, our process documentation is here.
Please let us know once your PR is ready for our review and all tests are green.
There should be a constraint for tutor>=15
Resolved
Added more workflows for tests and to upgrade python dependencies
@Ian2012 🎉 Your pull request was merged! Please take a moment to answer a two question survey so we can improve your experience in the future.
|
gharchive/pull-request
| 2023-04-18T20:32:50 |
2025-04-01T06:39:52.611513
|
{
"authors": [
"Ian2012",
"openedx-webhooks"
],
"repo": "openedx/tutor-contrib-ralph",
"url": "https://github.com/openedx/tutor-contrib-ralph/pull/10",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
532242598
|
Move to Cobra from go flags
Signed-off-by: Alex Ellis (OpenFaaS Ltd) alexellis2@gmail.com
Description
Moving to Cobra allows for additional commands to be added easily.
How Has This Been Tested?
Tested e2e with k3d
/usr/local/bin/k3d create --server-arg="--no-deploy=traefik"
export KUBECONFIG="$(/usr/local/bin/k3d get-kubeconfig --name='k3s-default')"
go build && ./ofc-bootstrap apply --file example.init.yaml
Whilst using K8s 1.16 I fixed an issue with of-builder.
import-secrets is now patched via the API instead of kubectl patch
For the old syntax, users get a hint on what to use instead:
./ofc-bootstrap --yaml example.init.yaml
Error: a breaking change was introduced, you now need to use ofc-bootstrap apply --file init.yaml
Checklist:
I have:
[x] checked my changes follow the style of the existing code / OpenFaaS repos
[ ] updated the documentation and/or roadmap in README.md
[ ] read the CONTRIBUTION guide
[x] signed-off my commits with git commit -s
[x] added unit tests
cc @csakshaug
CI scripts need updating to the new command syntax
I will test this tomorrow locally
Thanks for letting me know about the unused methods, I am not worried about merging them at present, they can be removed easily later.
How was the functionality? Does TLS / OAuth still work as a setting?
I have tested:
No TLS, TLS, no Oauth, Oauth
I got no errors and the cert requests and endpoints were configured correctly.
this was done in k3d on my laptop so no real certs were generated, but the requests were fired etc.
This looks good. LGTM
Thank you
|
gharchive/pull-request
| 2019-12-03T20:23:07 |
2025-04-01T06:39:52.643817
|
{
"authors": [
"Waterdrips",
"alexellis"
],
"repo": "openfaas-incubator/ofc-bootstrap",
"url": "https://github.com/openfaas-incubator/ofc-bootstrap/pull/162",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
433762212
|
Functions with same name in different stacks collide
Expected Behaviour
It should be possible to deploy functions independently into two separate openfaas stacks in the same swarm while avoiding name collisions.
Current Behaviour
I have two openfaas gateways in two different docker swarm openfaas stacks in the same swarm. When I deploy a function into the gateway of the first stack, a function service gets created for that gateway and works fine. But when I deploy the same function (same name) into the other gateway, then the first gateway loses the connection to that function. Redeploying the function into the first gateway does not help. In the gateway logs I see:
2019/04/16 09:20:10 error with upstream request to: , Post http://my-function:8080: dial tcp: lookup my-function on 127.0.0.11:53: no such host
bemk_gateway.1.2jy84fkyzxz9@m1rrzsbem01t | 2019/04/16 09:20:10 Forwarded [POST] to /function/my-function - [502] - 0.004523 seconds
Ultimately I had to remove all functions and all openfaas stacks and recreate everything from scratch to get it working again.
My understanding is that function services do not become part of a stack. The gateway finds its function services by means of a label. Here is my function in faas-cli describe:
Name: my-function
Status: Ready
Replicas: 1
Available replicas: 1
Invocations: 0
Image: bem-docker-registry.kvnurs.intra:443/my-function:latest
Function process:
URL: http://bemk.kvnurs.intra/function/my-function
Async URL: http://bemk.kvnurs.intra/async-function/my-function
Labels: function : true
com.openfaas.function : my-function
I assume the gateway finds its functions using the com.openfaas.function label. On deployment the gateway apparently attaches the function to its own network.
Now, when the gateway in the second stack deploys the same function, it seems to disconnect the function from its current network and attaches it to its own network (and probably does other things which cause more confusion).
Possible Solution
A solution must allow the function to be a separate entity for Docker if it is deployed into a different stack. The function must become a separate service for each stack, which scales independently etc. The only way to achieve that is a different service name; I am not aware that service labeling alone can help us here.
As long as the service has the same name in both stacks, it is probably not possible to achieve that.
Suggestion:
The openfaas yml format could introduce a name field below each function key whose value supports env variable substitution. The gateway uses the service key to build the service url and the name field to create the service name in docker.
Docker has done similar things with name fields for configs and volumes, so maybe that is not so far-fetched after all.
Workaround:
Create separate stack.yml files with separate service names prefixed for each stack and let callers use the appropriate url for each stack, i.e. http://localhost:8080/functions/develop_my-function.
Steps to Reproduce
deploy the openfaas stack twice under two different stack names. (To make this possible, let the gateway of the second stack run under a different port)
use faas-cli to deploy a function into the gateway of the first stack. Check that it works properly
deploy the same function into the gateway of the second stack
Result:
faas-cli list shows that both stacks know the function; faas-cli describe even claims that the function is ready in both stacks
the function does not work, rather you get a 502
redeploying the function into the gateway of the first stack does not fix it. I had to delete all functions and all stacks and deploy a new stack from scratch to get it working again with a single stack.
Context
In my scenario we have used swarm stacks for staging. Each swarm stack is a different stage (stages as in development - acceptance test etc. where placement constraints are used to distribute the containers to separate nodes depending on the stage).
Your Environment
FaaS-CLI version ( Full output from: faas-cli version ):
commit: 87ca614cfe27c4cf2975f7629992c1351b18c2bc
version: 0.8.8
Docker version docker version (e.g. Docker 17.0.05 ):
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.7
Git commit: e68fc7a215d7
Built: Wed Dec 19 10:23:04 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.7
Git commit: e68fc7a215d7
Built: Tue Aug 21 17:16:31 2018
OS/Arch: linux/amd64
Experimental: false
Are you using Docker Swarm or Kubernetes (FaaS-netes)?
Swarm
Operating System and version (e.g. Linux, Windows, MacOS):
Linux Suse SLE12 sp 4
Link to your project or a code example to reproduce issue:
n/a yet
I think you are right about needing to add a prefix to the service name, but I think I would rather see this as something that is configured in the gateway and applied to all functions that are deployed through it instead of extending the YAML. This seems like something that the function author should not really need to care about and that the gateway can solve. It would be even better if the gateway could infer this from the Docker stack in some way so that it can automatically be created without any additional configuration flags
Hi @dschulten thanks for your interest in faas-swarm.
Can I ask if you are using exactly the same Docker network and network label for both stacks?
Can you give step by step bash commands so that one of us can reproduce what you're seeing?
Thanks
Alex
You may also be interested in OpenFaaS Cloud which scopes or namespaces functions to a user-account or organization/project. This may be more suitable for your needs than deploying OpenFaaS to the same set of nodes or same Swarm cluster multiple times.
@LucasRoesler that was my first thought, too, but I wasn't sure if a service should know in which swarm stack it is running from an architectural point of view. OTOH if an openfaas stack had a name - which could be passed with faas-cli deploy or a top-level name key in stack.yml , there would be no dependency on swarm apis.
@LucasRoesler If we depend on the swarm api anyway, it's fine. Roughly where in the code would that happen?
The handling of the service name would need to be updated in each of the handlers, which are conveniently located in handlers/. We would need to update to add/remove the name prefix as appropriate
@alexellis what do you think? We could update the swarm provider so that the service name in Docker has a prefix/namespace per gateway. This prefix would be completely internal and would not be exposed to the user in URLs or API responses.
@LucasRoesler I just hit this issue and checked out faas-swarm handlers. The suggestion you proposed seems reasonable to me.
It would avoid polluting function names with prefixes at the caller side.
|
gharchive/issue
| 2019-04-16T12:50:22 |
2025-04-01T06:39:52.665117
|
{
"authors": [
"LucasRoesler",
"alexellis",
"davidecavestro",
"dschulten"
],
"repo": "openfaas/faas-swarm",
"url": "https://github.com/openfaas/faas-swarm/issues/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
76419044
|
Failure on /guides/new
Just went to create a guide, but I get this error at page load, so the crop typeahead (I'm presuming it's one) doesn't work, and the 'Choose crop to continue' button never becomes enabled.
Error: [$injector:unpr] http://errors.angularjs.org/1.3.0-beta.19/$injector/unpr?p0=tProvider%20%3C-%20t%20%3C-%20autoFocusDirective
at Error (native)
at https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:3:25891
at https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:7958
at Object.n [as get] (https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:6975)
at https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:8031
at n (https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:6975)
at Object.s [as invoke] (https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:7233)
at https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:12129
at r (https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:3:26273)
at Object.<anonymous> (https://openfarm.cc/assets/application-aca79496e6ca574a833bf1523ba469ec.js:4:12096)
Yep, this one has been fixed in #608, but it's not yet in production.
|
gharchive/issue
| 2015-05-14T16:43:55 |
2025-04-01T06:39:52.671030
|
{
"authors": [
"andru",
"roryaronson"
],
"repo": "openfarmcc/OpenFarm",
"url": "https://github.com/openfarmcc/OpenFarm/issues/635",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2368482074
|
Add security warning for disabled logs
Description
This pull request adds a startup warning when the log level is set to 'none'. This change helps to ensure that developers are aware of the potential risks when logs are turned off.
References
fixes #1703
Review Checklist
[x] I have clicked on "allow edits by maintainers".
[x] I have added documentation for new/changed functionality in this PR or in a PR to openfga.dev [Provide a link to any relevant PRs in the references section above] - https://github.com/openfga/openfga.dev/pull/776
[x] The correct base branch is being used, if not main
[ ] I have added tests to validate that the change in functionality is working as expected
Codecov tests are failing because the codecov token is not found (the PR comes from a fork)
I've also added 2 tests for this in 2b52336
CI is finally green now 😍
What's the remaining action item here? What's stopping us from merging & releasing it?
Bump to this @miparnisari
|
gharchive/pull-request
| 2024-06-23T10:04:16 |
2025-04-01T06:39:52.676271
|
{
"authors": [
"Siddhant-K-code"
],
"repo": "openfga/openfga",
"url": "https://github.com/openfga/openfga/pull/1705",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2505132892
|
g++ compiler
I know this is a newbie question, but please provide the g++ command line (including the -I include and -l link flags) needed to compile a new .cpp file I have written; I'm getting a lot of errors doing it myself after manually including all the packages.
You may be able to find some answers by searching the OpenFHE forum. We encourage OpenFHE users to post their questions there.
For your question: you may find some answers on this documentation page.
|
gharchive/issue
| 2024-09-04T11:55:01 |
2025-04-01T06:39:52.678517
|
{
"authors": [
"AniD-z",
"dsuponitskiy"
],
"repo": "openfheorg/openfhe-development",
"url": "https://github.com/openfheorg/openfhe-development/issues/858",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1184861474
|
Finetuning for onboarding darkmode
What
[ ] The next-page button has too little opacity
[ ] The title is dark on dark on several pages
Note: you can reset onboarding quickly from the dev mode (see README to enable it)
Part of
#684
#561
Screenshot/Mockup/Before-After
I was looking at the same bug and thought of raising an issue; I would be able to work on this
Assign me plz
|
gharchive/issue
| 2022-03-29T13:19:13 |
2025-04-01T06:39:52.693354
|
{
"authors": [
"AshAman999",
"teolemon"
],
"repo": "openfoodfacts/smooth-app",
"url": "https://github.com/openfoodfacts/smooth-app/issues/1386",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1658431395
|
fix: 3854 - fastlane - use "xcodes" syntax instead of "xcversion"
What
Minor fix in order to avoid using deprecated build settings.
Fixes bug(s)
Fixes: #3854
Codecov Report
Merging #3855 (3dc5ac7) into develop (9d15114) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## develop #3855 +/- ##
========================================
Coverage 10.73% 10.73%
========================================
Files 273 273
Lines 13476 13476
========================================
Hits 1447 1447
Misses 12029 12029
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
We'll see if it works...
|
gharchive/pull-request
| 2023-04-07T06:35:27 |
2025-04-01T06:39:52.697421
|
{
"authors": [
"codecov-commenter",
"monsieurtanuki"
],
"repo": "openfoodfacts/smooth-app",
"url": "https://github.com/openfoodfacts/smooth-app/pull/3855",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1731747187
|
feat: 4031 - different layout for "empty" product list page
Impacted files:
app_en.arb: removed the ! from the "start scanning" label
app_fr.arb: removed the ! from the "start scanning" label
product_list_page.dart: FAB instead of custom button; different layout for svg and text
What
Different layout for "empty" product list page
FAB instead of custom button
Different layout for svg and text
Screenshot
Fixes bug(s)
Fixes: #4031
Codecov Report
Merging #4052 (304ab7a) into develop (a007367) will increase coverage by 0.00%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## develop #4052 +/- ##
========================================
Coverage 11.00% 11.00%
========================================
Files 270 270
Lines 13369 13364 -5
========================================
Hits 1471 1471
+ Misses 11898 11893 -5
Impacted Files
Coverage Δ
...pp/lib/pages/product/common/product_list_page.dart
0.00% <0.00%> (ø)
:mega: We’re building smart automated test selection to slash your CI/CD build times. Learn more
|
gharchive/pull-request
| 2023-05-30T08:11:03 |
2025-04-01T06:39:52.705600
|
{
"authors": [
"codecov-commenter",
"monsieurtanuki"
],
"repo": "openfoodfacts/smooth-app",
"url": "https://github.com/openfoodfacts/smooth-app/pull/4052",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
306690590
|
Allowing for one sample per plot in the center
I am trying to classify Landsat pixels and assign just one value (30 meter plots with one sample); however, the samples seem to start from a plot corner. Can we please add functionality to add just one sample per plot in the center on the project design page?
As a temporary way to accomplish this, you can select gridded samples, and set the sample resolution to a value much greater than the size of the plot shape.
Was this ever resolved? The random and gridded sample distribution options should default to placing one point in the center of the plot.
|
gharchive/issue
| 2018-03-20T00:58:54 |
2025-04-01T06:39:52.782485
|
{
"authors": [
"KMarkert",
"lambdatronic",
"snasaio"
],
"repo": "openforis/collect-earth-online",
"url": "https://github.com/openforis/collect-earth-online/issues/129",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1087719620
|
Fix geoObject visibility method
Remove _visibility in the geoObject add method; even non-visible geoObjects get into the handler
Add a shader uniform visibility flag and use it with the opacity or discard cases inside the geoObject shader
Done
|
gharchive/issue
| 2021-12-23T13:30:19 |
2025-04-01T06:39:52.848297
|
{
"authors": [
"Zemledelec",
"pavletto"
],
"repo": "openglobus/openglobus",
"url": "https://github.com/openglobus/openglobus/issues/455",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1478371790
|
Publishing the revision version placeholder to Maven Central makes the dependency unresolvable.
Support question
If my project also defines revision, and hippo4j-spring-boot-starter also defines revision and publishes it to the central repository, the dependency cannot be pulled, because the revision definition resolves to my project's revision value instead of 1.4.3-upgrade.
https://repo1.maven.org/maven2/cn/hippo4j/hippo4j-spring-boot-starter/1.4.3-upgrade/hippo4j-spring-boot-starter-1.4.3-upgrade.pom
I don't quite understand your problem; your project's revision has nothing to do with hippo4j's, right?
It causes the dependency to not be found. @pirme
Sometimes the POM showing errors doesn't mean it fails at runtime; does a problem actually appear when running?
It doesn't even compile, because the dependency cannot be found. For example, my revision is 1.0, but hippo4j-spring-boot-starter has no 1.0, so the dependency cannot be found.
OK, I'll try it when I have time. The current workaround is to redeclare the dependencies marked with revision in your own POM; I think that should be enough. If there really is a problem, it will be fixed in the next version.
OK.
It causes the dependency to not be found. @pirme
Using ${project.version} here would be more appropriate, right?
@baymax55 Yes, would you be interested in submitting a PR to fix it?
|
gharchive/issue
| 2022-12-06T06:40:54 |
2025-04-01T06:39:52.852953
|
{
"authors": [
"HuangDayu",
"baymax55",
"pirme"
],
"repo": "opengoofy/hippo4j",
"url": "https://github.com/opengoofy/hippo4j/issues/1024",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
20419744
|
Provide ol.source.FixedElement
Similar to #1085, but in the case of a fixed element, the renderer does not apply any transform to the element.
Want to back this issue? Place a bounty on it! We accept bounties via Bountysource.
Closing this due to lack of activity.
|
gharchive/issue
| 2013-10-02T20:31:28 |
2025-04-01T06:39:53.271101
|
{
"authors": [
"tschaub"
],
"repo": "openlayers/ol3",
"url": "https://github.com/openlayers/ol3/issues/1086",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
759962905
|
Allow Select interaction to stop event propagation
Is your feature request related to a problem? Please describe.
I think there are multiple use cases for the Select interaction to be able to stop propagation on the click event in some circumstances.
One use case I have in mind is when multiple Select interactions are active on the map. I would like to be able to give one Select interaction priority so that if it handles the click event, then it stops the click event from propagating so it doesn't 'fall through' to the other Select interaction(s).
Describe the solution you'd like
I think the ideal way to support this might be to update PluggableMap to respect MapBrowserEvent.stopPropagation/propagationStopped in handleMapBrowserEvent:
https://github.com/openlayers/openlayers/blob/f3ad86e8e4808da510303aa637d4243a748eb076/src/ol/PluggableMap.js#L1031-L1034
That would allow listeners to "select" on a Select interaction to call stop propagation on the mapBrowserEvent:
let select = new Select();
select.on('select', function(e) {
// stop downstream listeners from handling this event
e.mapBrowserEvent.stopPropagation();
});
map.addInteraction(select);
Of course this would also allow other MapBrowserEvent listeners to also use stopPropagation, which I think would be expected behavior. I think this could make the event handling more consistent throughout.
If this sounds like a nice enhancement, I would be happy to submit a PR for it.
For completeness, I'll mention... I also considered that we could have the Select interaction return the propagation value from dispatchEvent here:
https://github.com/openlayers/openlayers/blob/fb9c239d726cb002c4f97af2aceaf18b291f36da/src/ol/interaction/Select.js#L512-L522
Something like:
if (selected.length > 0 || deselected.length > 0) {
const propagate = this.dispatchEvent(
new SelectEvent(
SelectEventType.SELECT,
selected,
deselected,
mapBrowserEvent
)
);
return propagate !== false;
}
return true;
This option would also fulfill the use case specifically for the Select interaction. I thought the changes in PluggableMap were the better option, since it wasn't clear to me that stopping propagation on the Select event should stop the underlying mapBrowserEvent.
|
gharchive/issue
| 2020-12-09T03:36:46 |
2025-04-01T06:39:53.275791
|
{
"authors": [
"greggian"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/issues/11816",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
797016007
|
A point style with a rotated icon is in the wrong position for VectorTileLayer when pixelRatio > 1
Describe the bug
Rotating the style of a point element causes it to shift from the correct position if the map's pixelRatio is greater than 1
To Reproduce
Steps to reproduce the behavior:
OL version 6.5.0;
6.4.3 works only if constrainResolution is true
Create a map object with pixelRatio and a vectorTileLayer layer
const map = new Map({
target: "map",
pixelRatio: 1.25, // > 1
layers: [
new TileLayer({
source: new OSMSource()
}),
vectorTileLayer
],
view: new View()
});
Create the vectorTileLayer layer like this:
const vectorTiles = new VectorTileLayer({
source: new VectorTileSource({
format: new MVT(),
}),
style(feature) {
const coords = feature.getGeometry().getFlatMidpoint();
const style = new Style({
stroke: new Stroke({
color: "green",
width: 5
})
});
const styleArrow2 = new Style({
geometry: new Point(coords),
image: new Icon({
src: "arrow.svg",
rotateWithView: true,
color: "blue",
scale: 2
})
});
const styleArrow3 = new Style({
geometry: new Point(coords),
image: new Icon({
src: "arrow.svg",
rotateWithView: true,
color: "red",
scale: 2,
rotation: Math.PI * 2 // should be applied after full rotation
})
});
return [style, styleArrow2, styleArrow3];
},
renderMode: 'image',
});
See error
Expected behavior
@mike-000, would you be able to take a look at this? Thanks in advance.
Integer pixel ratios > 1 work correctly. The problem is related to fractional pixels ratios (including < 1) in triple combination with renderMode: 'image' and declutter: false (there is no problem with hybrid or vector render modes or when decluttering).
The problem became more apparent with 6.4.4-dev.1599475256503 which corresponds to PR #11521
Previously icons overlapped as expected when opening at integer zoom levels https://codesandbox.io/s/vector-tile-info-forked-4u2ty but the problem can be seen when using mousewheel zoom.
With 6.4.4-dev.1599475256503 https://codesandbox.io/s/vector-tile-info-forked-c5xlm it can be seen even at integer zoom levels.
Thanks for the investigation, @mike-000. So there are quite a few corner cases. Do you have an idea what the problem could be, and maybe even create a pull request? I think we'll also need more rendering tests, with simple tiles where we know exactly what's in there.
This also affects labels, and when incorrectly placed both labels and images are incorrectly scaled.
The transforms produced by https://github.com/openlayers/openlayers/blob/main/src/ol/render/canvas/Executor.js#L401-L415 are not suitable for renderMode: 'image' as it is currently implemented.
|
gharchive/issue
| 2021-01-29T16:27:47 |
2025-04-01T06:39:53.286735
|
{
"authors": [
"ahocevar",
"mike-000",
"pecet86"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/issues/11962",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
269585796
|
Edge tile stretching for XYZ source
I have created a simple example with XYZ source, which shows a sample image from DeepZoom pyramid: https://jsfiddle.net/q1upru60/1/
It uses the latest version, 4.4.2.
As you can see, the tiles on the edges (right and bottom) are stretched to fill the 256*256 tile size, which is incorrect IMO, because the DeepZoom format has cropped tiles on the edges.
BUT I found that until version 3.19 the behaviour was fine: https://jsfiddle.net/57L2st7d/1/
As you can see, everything is fine, and OL doesn't stretch the tiles on the edges.
Starting from 3.20, tiles are stretched...
If my code is correct, is it a bug or is it an intentional feature?
Thanks!
PS. Original image
PPS. Right now we use the Zoomify source to show DeepZoom images. But with the Zoomify source you can't specify different tile sizes (only 256). That's why we started to try out the XYZ source.
It's not a bug - the XYZ source simply assumes a fixed tile size. You should be using the Zoomify source. A pull request to make the tile grid of the Zoomify source configurable would be appreciated.
#7379
Ok, thanks.
I will reference the links to Custom tile size for Zoomify source:
Issue #6608
Pull request #7379
|
gharchive/issue
| 2017-10-30T12:43:09 |
2025-04-01T06:39:53.291322
|
{
"authors": [
"ahocevar",
"yurykovshov"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/issues/7400",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
437994449
|
BingMaps not working with different projection
I have a set of layers on a map which all use EPSG:32632 as projection. When adding a BingMaps layer I'm getting the following error from a proj4 transformer method:
Uncaught TypeError: Cannot read property 'x' of null
The stack trace is this:
Uncaught TypeError: Cannot read property 'x' of null
at transformer (myMapApp.js:20574)
at inverse (myMapApp.js:20616)
at myMapApp.js:17954
at Triangulation.transformInv_ (myMapApp.js:42776)
at new Triangulation (myMapApp.js:42833)
at new ReprojTile (myMapApp.js:42274)
at BingMaps.getTile (myMapApp.js:41303)
at CanvasTileLayerRenderer.manageTilePyramid (myMapApp.js:67577)
at CanvasTileLayerRenderer.prepareFrame (myMapApp.js:67914)
at CanvasMapRenderer.renderFrame (myMapApp.js:65297)
To reproduce the issue, add a layer using for example EPSG:32632 and add a BingMaps layer.
I'd expect that OpenLayers reprojects the BingMaps layer, is this correct? Otherwise I'd expect a more meaningful error message.
What version of OpenLayers are you using? Did you load the proj4js library first, and did you declare the proj4js projection for EPSG:32632?
Can you provide an example? Without context, it's difficult to have a clue about the origin of the issue, e.g. a real BingMaps issue or some other issue elsewhere.
I tried both OL 5.3.1 and the latest 5.3.2. The projection is set for the view on map creation, and projection works otherwise (using lots of projection code in the project). It happens only for tiles loaded via BingMaps.
|
gharchive/issue
| 2019-04-27T23:36:55 |
2025-04-01T06:39:53.294235
|
{
"authors": [
"ThomasG77",
"benstadin"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/issues/9473",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
544399213
|
Fix for undefined source in Vector layer
Fixes #10464
The documentation implies that leaving a source undefined and setting it later is valid, but that causes an error in OL6 for Vector layers. For a vector layer, a source left or set null or undefined is equivalent to an empty source, so treat it as such to prevent errors in prepareFrame.
Thanks, @mike-000.
|
gharchive/pull-request
| 2020-01-01T21:12:13 |
2025-04-01T06:39:53.295579
|
{
"authors": [
"mike-000",
"tschaub"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/pull/10473",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
674648131
|
Fixes extent calculation in WMTS optionsFromCapabilities when BoundingBox exists
As mentioned in #11350, when creating the extent the WMTS optionsFromCapabilities does not consider the Wgs84BoundingBox. This pull request attempts to fix this by using the Wgs84BoundingBox extent in cases where it exists.
@amir-ba A minimal fix would look something like this:
diff --git a/src/ol/format/WMTSCapabilities.js b/src/ol/format/WMTSCapabilities.js
index 9e72eb213e..9dceae171d 100644
--- a/src/ol/format/WMTSCapabilities.js
+++ b/src/ol/format/WMTSCapabilities.js
@@ -103,7 +103,7 @@ const LAYER_PARSERS = makeStructureNS(
makeStructureNS(OWS_NAMESPACE_URIS, {
'Title': makeObjectPropertySetter(readString),
'Abstract': makeObjectPropertySetter(readString),
- 'WGS84BoundingBox': makeObjectPropertySetter(readWgs84BoundingBox),
+ 'WGS84BoundingBox': makeObjectPropertySetter(readBoundingBox),
'Identifier': makeObjectPropertySetter(readString),
})
);
@@ -196,6 +196,7 @@ const TMS_PARSERS = makeStructureNS(
makeStructureNS(OWS_NAMESPACE_URIS, {
'SupportedCRS': makeObjectPropertySetter(readString),
'Identifier': makeObjectPropertySetter(readString),
+ 'BoundingBox': makeObjectPropertySetter(readBoundingBox),
})
);
@@ -304,9 +305,9 @@ function readResourceUrl(node, objectStack) {
/**
* @param {Element} node Node.
* @param {Array<*>} objectStack Object stack.
- * @return {Object|undefined} WGS84 BBox object.
+ * @return {Object|undefined} BBox object.
*/
-function readWgs84BoundingBox(node, objectStack) {
+function readBoundingBox(node, objectStack) {
const coordinates = pushParseAndPop(
[],
WGS84_BBOX_READERS,
diff --git a/src/ol/source/WMTS.js b/src/ol/source/WMTS.js
index b61af5eee2..b618a1edbc 100644
--- a/src/ol/source/WMTS.js
+++ b/src/ol/source/WMTS.js
@@ -6,6 +6,7 @@ import TileImage from './TileImage.js';
import WMTSRequestEncoding from './WMTSRequestEncoding.js';
import {appendParams} from '../uri.js';
import {assign} from '../obj.js';
+import {containsExtent} from '../extent.js';
import {createFromCapabilitiesMatrixSet} from '../tilegrid/WMTS.js';
import {createFromTileUrlFunctions, expandUrl} from '../tileurlfunction.js';
import {equivalent, get as getProjection} from '../proj.js';
@@ -479,7 +480,9 @@ export function optionsFromCapabilities(wmtsCap, config) {
const tileSpanX = matrix.TileWidth * resolution;
const tileSpanY = matrix.TileHeight * resolution;
- const extent = [
+ const matrixSetExtent = matrixSetObj['BoundingBox'];
+
+ let extent = [
origin[0] + tileSpanX * selectedMatrixLimit.MinTileCol,
// add one to get proper bottom/right coordinate
origin[1] - tileSpanY * (1 + selectedMatrixLimit.MaxTileRow),
@@ -487,6 +490,10 @@ export function optionsFromCapabilities(wmtsCap, config) {
origin[1] - tileSpanY * selectedMatrixLimit.MinTileRow,
];
+ if (!containsExtent(matrixSetExtent, extent)) {
+ extent = matrixSetExtent;
+ }
+
if (projection.getExtent() === null) {
projection.setExtent(extent);
}
I think the wrapX handling you added makes sense, and could be added here too.
Would you be able to modify your pull request like this? If not let me know, then I'll start a new one.
Thank you @ahocevar for your suggestion; I will fix my pull request shortly.
Thank you, @amir-ba
@ahocevar can you tell why the tests are failing? I can't quite figure out what to fix to make them work.
@amir-ba The failing tests are due to a random problem that's unrelated. I restarted all tests. They should all be green in a few minutes.
I can add the tests, but since it's my first time contributing to this repo, I am not clear about the test data structures. Should I extend the existing capabilities_wgs84.xml with a layer that has such a configuration and add the tests, or do you have a better idea?
Thanks for your continued effort on this, @amir-ba! You could save a copy of https://www.basemap.at/wmts/1.0.0/WMTSCapabilities.xml in the test/spec/ol/format/wmts/ folder and use that for the tests.
@amir-ba Thanks for your work on this so far! Do you think you'd be able to add the test as discussed above, or do you need additional guidance?
This issue is still unresolved and should not have been closed.
@ahocevar, sorry for the absence; can I still add my test and push the current develop into this feature branch?
Feel free to continue working on this PR here.
@MoonE can you please restart the failing spec test here.
Thanks, @amir-ba
Thanks for your review🎉🎉
|
gharchive/pull-request
| 2020-08-06T23:00:15 |
2025-04-01T06:39:53.303003
|
{
"authors": [
"FrankyBoy",
"MoonE",
"ahocevar",
"amir-ba"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/pull/11405",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
121075612
|
Rename the GitHub repository to ol2
This pull request should be merged at the same time as openlayers/openlayers.github.io#56. For more information, also see there.
This sounds good to me. GitHub will set up redirects. So URLs for issues and git clone etc will still work (as long as we don't use the name openlayers for anything else).
|
gharchive/pull-request
| 2015-12-08T19:00:40 |
2025-04-01T06:39:53.304737
|
{
"authors": [
"ahocevar",
"tschaub"
],
"repo": "openlayers/openlayers",
"url": "https://github.com/openlayers/openlayers/pull/1476",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1141158662
|
ReShare reports - update report titles output on various report pages
Hi @scwiek,
As promised in our call yesterday, attached here is a list of the title, label changes proposed for the ReShare reports to make them read a bit more user friendly. Please let me know if any of this work is something that Ian Hardy could do independent of you.
ReShare Reports Title_Label Updates.odt
Please let me know if you have any questions.
Debra
Hey @debradenault, I think all of this is controlled within the reports web UI, so I think this might be an Ian thing.
|
gharchive/issue
| 2022-02-17T10:35:00 |
2025-04-01T06:39:53.307829
|
{
"authors": [
"debradenault",
"scwiek"
],
"repo": "openlibraryenvironment/reshare-analytics",
"url": "https://github.com/openlibraryenvironment/reshare-analytics/issues/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
255058948
|
Web templates config 5.4.0
See https://trello.com/c/A8AjAxmj/122-fix-omerowebindextemplate-config-docs
Read doc changes and see if they make sense.
Try setting bin/omero config set omero.web.index_template ... and see if it works (do the docs explain enough what this does)?
Try setting login_redirect (fixed in https://github.com/openmicroscopy/openmicroscopy/pull/5485) and again see if docs explain what this does clearly enough.
Try setting base_include_template from PR: https://github.com/openmicroscopy/openmicroscopy/pull/5463
Looks good. Tried all the steps mentioned in the docs, works fine. 👍
|
gharchive/pull-request
| 2017-09-04T14:32:06 |
2025-04-01T06:39:53.349651
|
{
"authors": [
"dominikl",
"will-moore"
],
"repo": "openmicroscopy/ome-documentation",
"url": "https://github.com/openmicroscopy/ome-documentation/pull/1747",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1251873131
|
Install the latest dev release feature is broken on new OSes.
Test on any recent system.
This is fixed for version 4.0.0 with new SSL.
Ibrahim confirmed fixed.
|
gharchive/issue
| 2022-05-29T12:15:57 |
2025-04-01T06:39:53.432112
|
{
"authors": [
"iabdalkader",
"kwagyeman"
],
"repo": "openmv/openmv-ide",
"url": "https://github.com/openmv/openmv-ide/issues/155",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
420880322
|
The Monitor Class for new poller service is N/A and the service is always marked as running
I added a poller service named MySQL3307 to the poller-configuration.xml file and using the TCPMonitor for it then run the event reloadDaemonConfig for Pollerd service. But after that, in the Web UI, the Monitor Class is N/A, and this service is always marked as running.
I defined the MySQL3307 in the configuration file as below:
<service name="MySQL3307" interval="300000" user-defined="true" status="on">
<parameter key="retry" value="1"/>
<parameter key="timeout" value="3000"/>
<parameter key="port" value="3307"/>
<parameter key="banner" value="*"/>
</service>
<monitor service="MySQL3307" class-name="org.opennms.netmgt.poller.monitors.TcpMonitor"/>
Here is a screenshot when I open the Web UI.
I also tried to modify some parameters and the class-name of the service MySQL (from the TCPMonitor to JDBCMonitor) and send the event reloadDaemonConfig. When I check on the Web UI, the parameter is updated but the Monitor Class of MySQL is still TCP.
How can I fix it? Or if there is any document or link related to this issue, please show me.
I installed OpenNMS using docker with the latest version currently. Just clone the repository and run the command docker-compose up -d.
Thank you
Hello @namnhatdoan, this topic is not related to the docker image itself and seems to be a problem with the application. Would you be so kind as to move it to our Community Support, where the people who answer this type of question are active?
I'll close this issue; further discussions on this topic can be followed up in this topic here.
|
gharchive/issue
| 2019-03-14T08:07:11 |
2025-04-01T06:39:53.438390
|
{
"authors": [
"indigo423",
"namnhatdoan"
],
"repo": "opennms-forge/docker-horizon-core-web",
"url": "https://github.com/opennms-forge/docker-horizon-core-web/issues/55",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1697487682
|
Fix hvg subsetting bug
Changelog
This line in scvi integration component is supposed to subset highly-variable genes:
adata = adata[:,adata.var['var_input']].copy()
However, this will not happen for the following reasons:
Instead of "var_input" par["var_input"] should be used
Even with proper key, adata.var[key] will return a column from adata.var and not the list of selected genes as expected
This PR fixes the issue
Checklist before requesting a review
[x] I have performed a self-review of my code
[ ] Conforms to the Contributor's guide
Check the correct box. Does this PR contain:
[ ] Breaking changes
[ ] New functionality
[ ] Major changes
[ ] Minor changes
[x] Bug fixes
[ ] Proposed changes are described in the CHANGELOG.md
[ ] CI tests succeed!
@VladimirShitov Good catch! Would you be able to implement tests?
Added test and fixed small comments. The function for HVG subsetting is now moved to a separate utils script, as it is used in other components as well. The problem is that the test pipeline fails because there is "No space left on device", so I'm not 100% sure that importing utils works correctly
@VladimirShitov The disk issue seemed like a temporary one (?). I am now getting this:
Yep, this is a problem from my side now 😅 Fixing
Fixed it :) @DriesSchaumont , can you check?
@VladimirShitov Could you adress the comment from Robrecht?
Done, please check :)
LGTM! @rcannood If you would like to have a look
|
gharchive/pull-request
| 2023-05-05T11:50:27 |
2025-04-01T06:39:53.459226
|
{
"authors": [
"DriesSchaumont",
"VladimirShitov"
],
"repo": "openpipelines-bio/openpipeline",
"url": "https://github.com/openpipelines-bio/openpipeline/pull/385",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1235859368
|
RuntimeError of Shape op during Calibration dataset progress and finetune progress
Configuration:
TARGET_PLATFORM = TargetPlatform.NXP_INT8 # choose your target platform
MODEL_TYPE = NetworkFramework.ONNX # or NetworkFramework.CAFFE
INPUT_LAYOUT = 'chw' # input data layout, chw or hwc
NETWORK_INPUTSHAPE = [16, 1, 40, 61] # input shape of your network
CALIBRATION_BATCHSIZE = 16 # batchsize of calibration dataset
EXECUTING_DEVICE = 'cuda' # 'cuda' or 'cpu'.
REQUIRE_ANALYSE = True
DUMP_RESULT = False
SETTING = UnbelievableUserFriendlyQuantizationSetting(
platform = TARGET_PLATFORM, finetune_steps = 2500,
finetune_lr = 1e-3, calibration = 'percentile',
equalization = True, non_quantable_op = None)
dataloader = DataLoader(
dataset=calibration_dataset,
batch_size=32, shuffle=True)
quantized = quantize(
working_directory=WORKING_DIRECTORY, setting=SETTING,
model_type=MODEL_TYPE, executing_device=EXECUTING_DEVICE,
input_shape=NETWORK_INPUTSHAPE, target_platform=TARGET_PLATFORM,
dataloader=dataloader, calib_steps=250)
Problem description:
The Shape op reported the above error at iteration 213. After checking, that iteration's batch size was 19; logging inside the dataloader iterator confirmed that this finetune batch indeed only delivered 19 samples. It turned out that the calibration dataset was exhausted exactly at iteration 213.
After I changed both finetune_steps and calib_steps to 100 and adjusted the calibration dataset to 32*100 samples, everything ran normally.
Here is the model file:
model.zip
The shape on the reshape2 op you exported is hard-coded and requires batchsize=16; in that case you cannot feed in any other batch size.
How did you export this model? Exporting from torch does not usually have this problem.
I converted it via torch.onnx:
input1 = torch.randn(16, 1, 40, 61).cuda()
input_names = [ "input"]
output_names = [ "output" ]
torch.onnx.export(net, input1, model_path, verbose=True, input_names=input_names, output_names=output_names)
The input tensor is indeed fixed here. How should I export a model without a fixed batch size?
You can refer to this: https://blog.csdn.net/ChuiGeDaQiQiu/article/details/119065818
However, if you want inference acceleration, I recommend exporting an ONNX model with fixed sizes.
Note that for tensors of different shapes, different algorithms are chosen at inference time; only fairly "square" shapes reach peak efficiency when computing gemm and conv.
For example, shape [1,3,224,224] is not very square, so it computes rather slowly,
while shape [96, 96, 48, 48] is a square-ish tensor and computes much more efficiently.
Thanks a lot. With your hints my code now runs correctly. Can I understand it this way:
if the quantized model PPQ receives needs finetuning, it is best to export an ONNX model with a dynamic batch size for finetuning;
otherwise, the number of samples in the calibration dataset must be an integer multiple of the fixed batch size?
Yes. We cannot know which dimension of your ONNX model is the batch size, i.e. which dimension is variable. The reshape you exported requires a fixed input size, so all we can do is report an error, because the model has a semantic error.
That said, it helps to understand inference deployment more deeply: in PyTorch we can change the batch size at will, so the model always runs. But for inference, a batchsize=1 model and a batchsize=128 model are not the same and trigger different deployment optimizations.
Understood, thank you very much!
|
gharchive/issue
| 2022-05-14T03:55:25 |
2025-04-01T06:39:53.470335
|
{
"authors": [
"ZhangZhiPku",
"lycfly"
],
"repo": "openppl-public/ppq",
"url": "https://github.com/openppl-public/ppq/issues/122",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
204599387
|
Set the TTL for NS delegation of subdomains on openregister.org
I think the right thing was trying to be achieved here (ie trying to
keep the NS delegation for {test, discovery, ...}.openregister.org
short so it can be changed easily). However, the TTL here is for the
NS record in the wrong zone.
NS records at the point-of-delegation are non-authoritative. It is the
TTL of the NS record in the authoritative zone that is respected:
Defined in RFC 1035. NS RRs appear in two places. Within the zone file,
in which case they are authoritative records for the zone's name
servers. At the point of delegation for either a subdomain of the zone
or in the zone's parent in which case they are non-authoritative. Thus,
the zone example.com's parent zone (.com) will contain
non-authoritative NS RRs for the zone example.com at its point of
delegation (point of delegation is the term frequently used to describe
the NS RRs in the parent that delegate a zone of subdomain) and
subdomain.example.com will have non-authoritative NS RRS in the zone
example.com at its point of delegation. NS RRs at a point of delegation
are never authoritative only NS RRs within the zone are regarded as
authoritative. While this may look a fairly trivial point, is has
important implications for DNSSEC.
Source: http://www.zytrax.com/books/dns/ch8/ns.html
Deploying
The current plan for this in the test environment is:
-/+ module.core.aws_route53_record.zone_delegation
fqdn: "test.openregister.org" => "<computed>"
name: "test.openregister.org" => "test.openregister.org"
records.#: "4" => "4"
records.1217762515: "ns-607.awsdns-11.net" => "ns-607.awsdns-11.net"
records.1440870510: "ns-2041.awsdns-63.co.uk" => "ns-2041.awsdns-63.co.uk"
records.3777510865: "ns-1310.awsdns-35.org" => "ns-1310.awsdns-35.org"
records.4230963353: "ns-201.awsdns-25.com" => "ns-201.awsdns-25.com"
ttl: "300" => "300"
type: "NS" => "NS"
zone_id: "<parent-zone>" => "<delegated-zone>" (forces new resource)
I think we'll need to be careful about applying this change because it appears as though it will try to delete NS records in the parent zone in R53 which should(?) cause an error. It might be sensible to first remove the resource from Terraform's state file (with terraform state rm) before applying.
The TTL on the NS records at the point-of-delegation should probably be reset to their defaults too.
Please don't merge until #252 has been pushed all the way to beta. This should be on Friday 3 Feb.
This is ready to merge.
👍
|
gharchive/pull-request
| 2017-02-01T14:33:13 |
2025-04-01T06:39:53.489952
|
{
"authors": [
"karlbaker02",
"samcrang"
],
"repo": "openregister/deployment",
"url": "https://github.com/openregister/deployment/pull/253",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1586316336
|
Add direct links to recipes
Previously, we linked to the top-level Git repository from recipes. Now, we should link directly to the file that contains the recipe.
This handles both imperative and declarative recipes as well as recipes from nested folders in rewrite.
Now that I've posted this, I realize that I could optimize this a bit by making things less hard-coded: use just the hard-coded list for the core recipes and build up the links from it. I'll make that change later today.
nice!
|
gharchive/pull-request
| 2023-02-15T18:08:38 |
2025-04-01T06:39:53.522060
|
{
"authors": [
"kunli2",
"mike-solomon"
],
"repo": "openrewrite/rewrite-recipe-markdown-generator",
"url": "https://github.com/openrewrite/rewrite-recipe-markdown-generator/pull/43",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2231364536
|
Update 51-ganglia-ce-dashboards.conf to configure reset metrics
Enable reset metrics functionality in the condor_gangliad, which first appears in HTCSS version 23.6.x. This functionality is critical to keeping the CE Dashboard data correct. Note the ospool-container container should ideally not be rebooted until such time as the image contains HTCSS 23.6.x.
LGTM. @brianhlin please review/merge.
Go for it. Thanks!
On Mon, Apr 8, 2024 at 8:49 AM Brian Lin approved this pull request: "Sure. Let me know when it's safe to merge"
|
gharchive/pull-request
| 2024-04-08T14:24:50 |
2025-04-01T06:39:53.534352
|
{
"authors": [
"rynge",
"tannenba"
],
"repo": "opensciencegrid/images",
"url": "https://github.com/opensciencegrid/images/pull/177",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1937360345
|
Restriction of pandas < 2 presents bottleneck
Since scmdata currently restricts pandas to <2 it is no longer compatible with the latest version of pyam-iamc.
This is a problem since for Scenario Explorer processing I would require both.
Is updating to pandas 2 currently on the radar, if so is there a time estimate?
Thanks :)
UPDATE: I just saw that there is already an automatically created PR #235 seems like some things are failing for now.
Yes it is on the radar, but I don't have an ETA and I won't get to it in the next few weeks if it requires any real time. Maybe @znicholls can find the time as we do need to get it done. Happy
IIRC most of the failures were relatively minor changes related to somewhat "cosmetic" changes, but I'm a little worried that it will cause a big mess with lots of if pandas_less_than_2: blocks sprinkled around the tests.
Was the pyam migration relatively painless?
Hmm just rebased #235 and only one test is failing now. Maybe they back-pedalled some things as we didn't change anything on our end
The failing test is also in our (thin) pyam compatibility layer https://github.com/openscm/scmdata/blob/master/src/scmdata/pyam_compat.py. @phackstock Does an IamDataFrame support dates later than 2300? If so we can likely remove this code and the associated tests which would make supporting pandas==2 easy
I can have a look before you @lewisjared. Thanks for rebasing, that's a great start.
Thanks a lot @lewisjared and @znicholls for taking a look so quickly.
@phackstock Does an IamDataFrame support dates later than 2300?
I haven't encountered yet since all of the modeling goes until the end of the century but in principle I don't see why pyam wouldn't support that.
I think if you had datetimes that went beyond 2267 you'd hit the same issue that we hit (and might need our long data frame fix). However, probably a while before you hit that use case.
#235 is now just waiting on a review then we can merge and release
@phackstock 0.15.3 should have the restriction removed
Sweet, thanks a lot
|
gharchive/issue
| 2023-10-11T10:00:03 |
2025-04-01T06:39:53.541904
|
{
"authors": [
"lewisjared",
"phackstock",
"znicholls"
],
"repo": "openscm/scmdata",
"url": "https://github.com/openscm/scmdata/issues/266",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
704025583
|
Database
Pull request
Please confirm that this pull request has done the following:
[x] Tests added
[x] Documentation added (where applicable)
[x] Example added (either to an existing notebook or as a new notebook, where applicable)
[x] Description in CHANGELOG.rst added
Closes #103
If you're happy, merge
|
gharchive/pull-request
| 2020-09-18T02:10:57 |
2025-04-01T06:39:53.544026
|
{
"authors": [
"lewisjared",
"znicholls"
],
"repo": "openscm/scmdata",
"url": "https://github.com/openscm/scmdata/pull/123",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
528032642
|
Support Put, Get acl for bucket and object
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
Anything else we need to know?:
The PR's requirement is from the link https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html. We only need to support Canned ACL.
The most common cases for ACL are used as the following cases:
An object whose ACL is "private" cannot be accessed by other users.
An object whose ACL is "public-read" allows read access by anyone.
The canned ACLs include "private", "public-read", "public-read-write", "authenticated-read", "bucket-owner-read", "bucket-owner-full-control", "aws-exec-read".
Environment:
Gelato(release/branch) version:
OS (e.g. from /etc/os-release):
Kernel (e.g. uname -a):
Install tools:
Others:
This feature is now supported.
|
gharchive/issue
| 2019-11-25T11:43:55 |
2025-04-01T06:39:53.553786
|
{
"authors": [
"sunfch"
],
"repo": "opensds/multi-cloud",
"url": "https://github.com/opensds/multi-cloud/issues/748",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
261812425
|
Modified the ceph driver to adapt the southbound changes
Because the OpenSDS southbound interface has changed, the Ceph driver needs to be modified.
@xxwjj It looks good, but IMO I prefer using GetId() rather than Id when fetching a field in a structure:
name := opt.Name
size := opt.Size
Because usually we use opt.Name to change Name field value of opt, what do you think?
@leonwanghui ok, I will change it. but I am afraid this usage design is too complex.
I think it's a good way to prepare for designing some read-only field, because in this way we just need to change the first letter of field from upper to lower, rather than modifying too much code.
|
gharchive/pull-request
| 2017-09-30T02:54:51 |
2025-04-01T06:39:53.555978
|
{
"authors": [
"leonwanghui",
"xxwjj"
],
"repo": "opensds/opensds",
"url": "https://github.com/opensds/opensds/pull/88",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
55277714
|
navigate a predefined path
Is it possible to configure the viewer so that it moves along a predefined path that takes a user to specific points of interest? My use case is one where I would like to override the mouse-scroll to move along this predefined path?
One way is to use the OpenSeadragonViewerInputHook plugin. Then you can override a viewer's default handling of touch, mouse (and possibly keyboard) input, something like:
var viewer = OpenSeadragon({...});
viewer.addViewerInputHook({hooks: [
{tracker: 'viewer', handler: 'dragHandler', hookHandler: onViewerDrag},
{tracker: 'viewer', handler: 'dragEndHandler', hookHandler: onViewerDragEnd}
]});
function onViewerDrag(event) {
// Do your own custom drag handling here...
event.preventDefaultAction = true;
}
function onViewerDragEnd(event) {
event.preventDefaultAction = true;
}
There's no mechanism in OpenSeadragon to define such a path, but if you have a path, you can move the viewport wherever you want via panTo(point, true) and zoomTo(zoom, null, true).
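To illustrate (a hedged sketch only, not an official API; the stops below are made-up viewport coordinates), such a path could be stored as an array of viewport points and zoom levels and stepped through on scroll, with the default scroll-zoom suppressed either via the input-hook plugin above or via preventDefaultAction on recent versions:

// Hypothetical tour stops, in viewport coordinates.
var path = [
  {point: new OpenSeadragon.Point(0.3, 0.4), zoom: 2},
  {point: new OpenSeadragon.Point(0.6, 0.5), zoom: 4},
  {point: new OpenSeadragon.Point(0.8, 0.2), zoom: 6}
];
var current = 0;

function goTo(index) {
  current = Math.max(0, Math.min(path.length - 1, index));
  viewer.viewport.panTo(path[current].point, true);      // true = jump immediately
  viewer.viewport.zoomTo(path[current].zoom, null, true);
}

viewer.addHandler('canvas-scroll', function (event) {
  event.preventDefaultAction = true;                      // keep the default zoom from firing
  goTo(current + (event.scroll > 0 ? 1 : -1));            // step along the path instead
});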
Thanks. I think panTo might do the trick.
That's how I did it, with jquery json polling.
fitBounds might be an option too.
|
gharchive/issue
| 2015-01-23T12:45:48 |
2025-04-01T06:39:53.558654
|
{
"authors": [
"boskar",
"iangilman",
"msalsbery",
"sharmaashish"
],
"repo": "openseadragon/openseadragon",
"url": "https://github.com/openseadragon/openseadragon/issues/578",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
126179559
|
imageToViewportRectangle data from db
I get a rect using the selection plugin:
selection =viewer.selection({
onSelection: function(rect) {
console.log(rect.x);
console.log(rect.y);
console.log(rect.width);
console.log(rect.height);
var elt = document.createElement("div");
elt.id = "runtime-overlay";
elt.className = "highlight";
viewer.addOverlay({
element: elt,
location: viewer.viewport.imageToViewportRectangle(rect)
});
input = "rect_x="+rect.x+"&rect_y="+rect.y+"&rect_w="+rect.width+"&rect_h="+rect.height+"&plink=<?=$plink?>&case_id=<?=$case_id?>&mode=add";
$.ajax({
url : "handlers/H_AnnotationHandler.php",
data : input,
type : "post",
dataType : "json",
success : function(response) {
if (!response.error)
alert("success");
else
alert("failed");
}
});
}
});
And I store the rect in MySQL via H_AnnotationHandler.php. After that, when I try to use this data like this:
$.ajax({
url : "handlers/H_AnnotationHandler.php",
data : "case_id=&plink=&mode=get",
type : "post",
dataType : "json",
success : function (response) {
if (!response.error) {
for (var i = 0; i < response.annots.length; i++) {
var elt = document.createElement("div");
elt.id = "runtime-overlay" + i;
elt.className = "highlight";
viewer.addOverlay({
element: elt,
location : viewer.viewport.imageToViewportRectangle(parseInt(response.annots[i].rect_x), parseInt(response.annots[i].rect_y), parseInt(response.annots[i].rect_w), parseInt(response.annots[i].rect_h))
});
}
}
}
});
it creates a runtime-overlay division, but left, top, height and width seem very wrong (27094e+04 etc.).
Note: I am sure the SQL data is correct; I compared it before retrieving it.
See my comment there: http://stackoverflow.com/questions/34744842/openseadragon-imagetoviewport
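(The actual resolution is in the Stack Overflow answer linked above. Purely as a hedged illustration of the API, the stored values can be turned back into an OpenSeadragon.Rect before conversion, instead of passing loose parseInt'ed arguments:)

var stored = response.annots[i];                // values saved from the selection callback
var imageRect = new OpenSeadragon.Rect(
  parseFloat(stored.rect_x),
  parseFloat(stored.rect_y),
  parseFloat(stored.rect_w),
  parseFloat(stored.rect_h)
);
viewer.addOverlay({
  element: elt,
  location: viewer.viewport.imageToViewportRectangle(imageRect)
});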
@iangilman This can be closed (resolved on SO).
|
gharchive/issue
| 2016-01-12T13:28:30 |
2025-04-01T06:39:53.566179
|
{
"authors": [
"avandecreme",
"ozgunlu"
],
"repo": "openseadragon/openseadragon",
"url": "https://github.com/openseadragon/openseadragon/issues/817",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
132907794
|
getZoom for fabricjs overlay and zoom synchronization
Hi folks!
I am struggling with this (http://www.c-ev.com/osdragon/dragonfabric1.html) test of mine. What I am trying to do is to sync fabricjs canvas with openseadragon. If you scroll with a mouse you will see how both get zoomed in/out.
When I open browser console and run viewer.viewport.getZoom(), it returns 0.0013750000000000001 when the image is fully zoomed in and I am not understanding that number. In this example (http://msalsbery.github.io/openseadragonimaginghelper/index.html) it shows 15 when fully zoomed in. The reason why I was looking at the getZoom, is because I was hoping to sync both canvas.
Somewhere I read that it is possible to change some z value in order to make one or another element active. Under what class I should look for.
Any suggestions of how to achieve this would be highly + highly appreciated, otherwise it looks like I have to big of a bite.
Kindly
When I open browser console and run viewer.viewport.getZoom(), it returns 0.0013750000000000001 when the image is fully zoomed in and I am not understanding that number. In this example (http://msalsbery.github.io/openseadragonimaginghelper/index.html) it shows 15 when fully zoomed in. The reason why I was looking at the getZoom, is because I was hoping to sync both canvas.
The value is a viewport zoom. To convert to image zoom, see http://openseadragon.github.io/examples/viewport-coordinates/
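A minimal sketch of that conversion, plus one (assumed, not tested) way to keep a fabric.js canvas in step with the viewer; fabricCanvas is a placeholder for your own fabric.Canvas instance:

// Convert viewport zoom to image zoom (1 = image shown at full resolution).
var imageZoom = viewer.viewport.viewportToImageZoom(viewer.viewport.getZoom(true));

viewer.addHandler('zoom', function () {
  var z = viewer.viewport.viewportToImageZoom(viewer.viewport.getZoom(true));
  fabricCanvas.setZoom(z);   // assumes fabricCanvas is the fabric.js canvas being synced
});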
Somewhere I read that it is possible to change some z value in order to make one or another element active. Under what class I should look for.
Are you refering to the z-index CSS property? It does not make an element active but defines which element should be on top.
Thank you avandecreme! You posted on issue #844 and that looks like what I was trying to do myself. I'll research! Thank you sooo much!
@Kampii Are you using https://github.com/altert/OpenseadragonFabricjsOverlay by @altert? If not, it might be helpful...
@iangilman I'll test this https://github.com/altert/OpenseadragonFabricjsOverlay as it allows selecting the element. Thanks.
|
gharchive/issue
| 2016-02-11T07:31:31 |
2025-04-01T06:39:53.573004
|
{
"authors": [
"Kampii",
"avandecreme",
"iangilman"
],
"repo": "openseadragon/openseadragon",
"url": "https://github.com/openseadragon/openseadragon/issues/843",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1607798697
|
[Searchable Snapshot] Cache path not deleted after index is deleted
Describe the bug
A path in the file system is created to store the cached blocks for a searchable snapshot index, such as:
data-2.x/nodes/0/cache/0/8hJW-teFRruFwKyEB4-_aA/0/RemoteLocalStore
When the index with ID 8hJW-teFRruFwKyEB4-_aA is deleted I expect the data-2.x/nodes/0/cache/0/8hJW-teFRruFwKyEB4-_aA path to be removed, but it is not. All the contents of that directory are deleted, but the directory itself is leaked.
To Reproduce
Create a searchable snapshot index, then delete it.
Expected behavior
Assuming there are no searchable snapshot indexes in my cluster, I expect the data-2.x/nodes/0/cache/0 directory to be empty.
@andrross Please assign this issue to me. I have started working on this issue.
@harshjain2 I have assigned the issue to you.
@kotwanikunal Thanks for assigning this issue. I have started working on this issue.
@andrross @kotwanikunal I was able to find the code fix, but I am not able to reproduce the issue for testing. I tried the following steps:
https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/snapshots/searchable_snapshot/ Followed this configuration.
https://opensearch.org/docs/latest/tuning-your-cluster/availability-and-recovery/snapshots/snapshot-restore/#shared-file-system
./gradlew run
After that I tried to test the OpenSearch service on local machine.
Request :
POST localhost:9200/_snapshot/my-fs-repository/
{
"type": "fs",
"settings": {
"location": "/Users/harshjai/Desktop/opensearch/snapshot/snapshot2"
}
}
Response :
{
"error": {
"root_cause": [
{
"type": "repository_exception",
"reason": "[my-fs-repository] location [/Users/harshjai/Desktop/opensearch/snapshot/snapshot2] doesn't match any of the locations specified by path.repo"
}
],
"type": "repository_exception",
"reason": "[my-fs-repository] failed to create repository",
"caused_by": {
"type": "repository_exception",
"reason": "[my-fs-repository] location [/Users/harshjai/Desktop/opensearch/snapshot/snapshot2] doesn't match any of the locations specified by path.repo"
}
},
"status": 500
}
I have tried everything I can, but cannot find a solution to this issue. Can you please help me out with this issue?
@harshjain2 Can you try passing in the repo link as follows -
./gradlew run -Dtests.opensearch.path.repo=/Users/harshjai/Desktop/opensearch/snapshot/snapshot2
@anasalkouz I have the fix ready, but I couldn't test it out. I can attempt fixing it by Sunday, 25th March. In case someone else is interested in picking this up, feel free to re-assign to the interested ones.
@harshjain2 Thanks for testing it out. Can you check if the index is in a green state i.e the shards are assigned?
You can do that by using the following API - GET http://localhost:9200/_cat/shards
I think you might be missing the search role on the node that you are testing.
If you are testing locally, I would suggest using integration tests because of the required config by the nodes.
This class hosts the tests for searchable snapshots and you can extend this function to run your tests - https://github.com/opensearch-project/OpenSearch/blob/879ade758668c81d29476f38afb022dd338a1048/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java#L544-L580
In the meanwhile, feel free to create a draft PR and we can work this out on the PR.
|
gharchive/issue
| 2023-03-03T01:33:15 |
2025-04-01T06:39:53.583042
|
{
"authors": [
"andrross",
"harshjain2",
"kotwanikunal"
],
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/issues/6532",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2038457647
|
Add release notes for 1.3.14
Description
Add release notes for 1.3.14 up to commit 21940d8239b50285ef7f98a1762ef281a5b1c7ee
Check List
[ ] New functionality includes testing.
[ ] All tests pass
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] Failing checks are inspected and point to the corresponding known issue(s) (See: Troubleshooting Failing Builds)
[x] Commits are signed per the DCO using --signoff
[x] Commit changes are listed out in CHANGELOG.md file (See: Changelog)
[ ] Public documentation issue/PR created
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Not a code change - merging before checks
|
gharchive/pull-request
| 2023-12-12T19:55:05 |
2025-04-01T06:39:53.588738
|
{
"authors": [
"mch2"
],
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/11592",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1247220559
|
Support dynamic node role
Signed-off-by: Yaliang Wu ylwu@amazon.com
Description
Currently OpenSearch only supports several built-in node roles, like the data node role. If an unknown node role is specified, the OpenSearch node fails to start. This limits how OpenSearch can be extended to support specific tasks. For example, a user may prefer to run ML tasks on a dedicated node that doesn't serve any built-in node role, so the ML tasks won't impact OpenSearch core functions. This PR removes that limitation: a user can specify any node role and OpenSearch will start the node correctly with that unknown role. This opens the door for plugin developers to run specific tasks on dedicated nodes.
Issues Resolved
https://github.com/opensearch-project/OpenSearch/issues/2877
Check List
[x] New functionality includes testing.
[x] All tests pass
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Is Unknown the right name? With this feature there's nothing unknown about it, so can we rename it? I think NamedRole would be closer to what we're trying to do.
:+1: , suggesting the DynamicRole (to complement builtin roles) but name is up to the debate (NamedRole seems to be general since all roles do have name, imho)
Is Unknown the right name? With this feature there's nothing unknown about it, so can we rename it? I think NamedRole would be closer to what we're trying to do.
👍 , suggesting the DynamicRole (to complement builtin roles) but name is up to the debate (NamedRole seems to be general since all roles do have name, imho)
Thanks, I think DynamicRole or NamedRole is better than UnknownRole, like @dblock said, there will be no unknown role as we are going to support any custom role.
I'm OK with either. Considering the role complements the built-in roles, maybe DynamicRole is more reasonable?
I am good with this! @reta any objections?
LGTM, thanks @ylwu-amzn !
Since you added some lower case node name transform, please also add unit tests for the fact that these are now case-insensitive.
Sure, working on testing now
@reta @dblock Added tests for role name case insensitive change and Github workflow passed. Can you help review?
LGTM!
If no more comments, can anyone help merge this PR? I have no permission to merge.
|
gharchive/pull-request
| 2022-05-24T23:21:47 |
2025-04-01T06:39:53.597898
|
{
"authors": [
"reta",
"ylwu-amzn"
],
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/3436",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1359077934
|
[Backport] [1.3] Update to Netty 4.1.80.Final (#4359)
Backport of https://github.com/opensearch-project/OpenSearch/pull/4359 to 2.x
Codecov Report
Merging #4379 (27dd72a) into 1.3 (1cf7ea2) will increase coverage by 0.02%.
The diff coverage is n/a.
@@ Coverage Diff @@
## 1.3 #4379 +/- ##
============================================
+ Coverage 77.80% 77.83% +0.02%
- Complexity 63340 63371 +31
============================================
Files 4453 4453
Lines 274899 274899
Branches 41165 41165
============================================
+ Hits 213884 213959 +75
+ Misses 44069 44023 -46
+ Partials 16946 16917 -29
Impacted Files (Coverage Δ):
...nsearch/search/dfs/DfsPhaseExecutionException.java: 0.00% <0.00%> (-66.67%) :arrow_down:
...a/org/opensearch/client/cluster/ProxyModeInfo.java: 0.00% <0.00%> (-57.90%) :arrow_down:
...n/indices/upgrade/post/UpgradeSettingsRequest.java: 30.00% <0.00%> (-45.00%) :arrow_down:
...regations/metrics/AbstractHyperLogLogPlusPlus.java: 51.72% <0.00%> (-44.83%) :arrow_down:
...java/org/opensearch/threadpool/ThreadPoolInfo.java: 56.25% <0.00%> (-37.50%) :arrow_down:
.../opensearch/indices/InvalidAliasNameException.java: 62.50% <0.00%> (-25.00%) :arrow_down:
...pensearch/cluster/routing/PlainShardsIterator.java: 75.00% <0.00%> (-25.00%) :arrow_down:
.../opensearch/transport/ProxyConnectionStrategy.java: 62.12% <0.00%> (-23.49%) :arrow_down:
...g/opensearch/action/get/MultiGetShardResponse.java: 76.92% <0.00%> (-23.08%) :arrow_down:
...ain/java/org/opensearch/geometry/MultiPolygon.java: 80.00% <0.00%> (-20.00%) :arrow_down:
... and 484 more
@kotwanikunal mind approving the last one in series please, thank you!
The WhiteSource Security Check is failing consistently on 1.3 :(
|
gharchive/pull-request
| 2022-09-01T16:00:32 |
2025-04-01T06:39:53.614274
|
{
"authors": [
"codecov-commenter",
"reta"
],
"repo": "opensearch-project/OpenSearch",
"url": "https://github.com/opensearch-project/OpenSearch/pull/4379",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
909899360
|
Create shared parent class for HttpInputClient and DestinationHttpClient
Issue by ann3431
Friday Jul 19, 2019 at 22:20 GMT
Originally opened as https://github.com/opendistro-for-elasticsearch/alerting/issues/87
HttpInputClient and DestinationHttpClient have duplicate functionality. We should create a shared parent class for these two.
Closing this since HttpInput eventually became the LocalUriInput feature which is currently in PR and this issue no longer applies there.
|
gharchive/issue
| 2021-06-02T21:27:13 |
2025-04-01T06:39:53.617640
|
{
"authors": [
"aditjind",
"qreshi"
],
"repo": "opensearch-project/alerting",
"url": "https://github.com/opensearch-project/alerting/issues/44",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
939176622
|
Generating CSV doesn’t include seconds and seconds fractions
Is your feature request related to a problem? Please describe.
Currently, for the date field in a CSV report, the code handles the date value in two steps.
During the data query stage, we add the format date_hour_minute as docValue (e.g. yyyy-MM-dd'T'HH:mm).
https://github.com/opensearch-project/dashboards-reports/blob/e5174537800c60b1bf3145a80c4a36ab4227b80b/dashboards-reports/server/routes/utils/savedSearchReportHelper.ts#L270
During the CSV rendering stage, we use moment.js to format it into 'MM/DD/YYYY h:mm:ss a' (e.g. 06/27/2021 9:59:00 pm).
https://github.com/opensearch-project/dashboards-reports/blob/e5174537800c60b1bf3145a80c4a36ab4227b80b/dashboards-reports/server/routes/utils/dataReportHelpers.ts#L173-L175
https://github.com/opensearch-project/dashboards-reports/blob/e5174537800c60b1bf3145a80c4a36ab4227b80b/dashboards-reports/server/routes/utils/constants.ts#L67
Notice in the first step above, we are cutting off the seconds and second fractions, compared with what seems like the default date field format in the advanced UI settings. And in step 2, the seconds field will always be 00, because of the cut-off in step 1.
Describe the solution you'd like
Maybe cutting off seconds for the date field is not a good choice. We have the following formats as available options. Maybe we should use date_hour_minute_second or date_hour_minute_second_fraction.
Describe alternatives you've considered
Retrieve the date format setting from the advanced UI settings and use that. But I feel like this will not only apply to the date format, but also to other settings, such as the CSV separator, timezone, URL prefix, etc. It's better to add it as a complete feature to support advanced UI setting loading.
Additional context
This issue was originally raised from Opensearch forum https://discuss.opendistrocommunity.dev/t/generating-csv-doesnt-include-seconds-on-timestemp-fields/6413/6
We can use uiSettings.get('dateFormat') in frontend to get the date format in advanced settings.
The API request needs to be modified or a new route needs to be added to support all advanced settings (context menu and dashboards server doesn't have direct access to uiSettings). Also the default timeFormat MMM D, YYYY @ HH:mm:ss.SSS doesn't look very common. I'm thinking to just extend current time format to MM/DD/YYYY h:mm:ss.SSS a to fix this.
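For illustration only, a hedged sketch of that extended format with moment (the helper name is made up; only the format string comes from the comment above):

import moment from 'moment';

// Hypothetical CSV cell formatter extending 'MM/DD/YYYY h:mm:ss a' with millisecond precision.
function formatDateField(value) {
  return moment(value).format('MM/DD/YYYY h:mm:ss.SSS a');
}
// e.g. formatDateField('2021-06-27T21:59:00.123Z') -> '06/27/2021 9:59:00.123 pm' (timezone-dependent)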
@joshuali925
uiSettings are available in CoreStart as this PR suggests, is it doable in our case? So overall we want to make sure at least for values, what users see in Discover is the same as what they get in csv report.
we can do this step by step, no need to support everything in advanced settings at once
@zhongnansu Yes this is doable, my concern is that 10/21/2021 6:55:21.748 am should be easier to parse and more standard than Oct 20, 2021 @ 23:55:21.748 in CSV in general. Also this way we don't introduce a big change to time fields in CSV.
But I'm not that familiar with csv processing tools, and if you feel it's ok to make the change then I can update the PR
@joshuali925 Found some reference. https://github.com/elastic/kibana/issues/56153
Instead of using some predefined format, we'll need to move to use UI settings anyway
@kgcreative Hi Kevin, any thoughts?
@zhongnansu Got it, then it's better to let users decide. I updated the PR
Moving the discussion here https://github.com/opensearch-project/dashboards-reports/issues/208
|
gharchive/issue
| 2021-07-07T19:02:51 |
2025-04-01T06:39:53.628479
|
{
"authors": [
"joshuali925",
"zhongnansu"
],
"repo": "opensearch-project/dashboards-reports",
"url": "https://github.com/opensearch-project/dashboards-reports/issues/114",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2139530542
|
Add append option to add_entries processor
Description
Adds an append_if_key_exists option to add_entries processor as described in #4129
Issues Resolved
Resolves #4129
Check List
[x] New functionality includes testing.
[ ] New functionality has a documentation issue. Please link to it in this PR.
[ ] New functionality has javadoc added
[x] Commits are signed with a real name per the DCO
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Would it make sense to instead have something like this
action_when_key_exists: append // can be one of `append`, `overwrite`, `skip`
Not a blocking comment just something to consider
action_when_key_exists: append // can be one of append, overwrite, skip
@graytaylor0 I like it, making the config more concise. I can do the change if other reviewers think the same.
Would it make sense to instead have something like this
action_when_key_exists: append // can be one of `append`, `overwrite`, `skip`
Not a blocking comment just something to consider
@graytaylor0 , I like this idea. But, I also think we should aim for consistency. Right now grok has overwrite_when_key_exists. Do you think grok could also move toward using this approach? I'd really like to see all the processors use similar behavior. And I think this enum could facilitate that.
These processors have similar overwrite_if_key_exists options:
copy_values
rename_keys
parse_json
parse_ion
key_value
grok processor has a keys_to_overwrite option that determines overwrite or append (default) for each grok generated fields.
I will open a follow-up issue for these and we can discuss further there.
|
gharchive/pull-request
| 2024-02-16T21:45:44 |
2025-04-01T06:39:53.635954
|
{
"authors": [
"dlvenable",
"graytaylor0",
"oeyh"
],
"repo": "opensearch-project/data-prepper",
"url": "https://github.com/opensearch-project/data-prepper/pull/4143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2587225185
|
ByFieldRerank Processor (ReRankProcessor enhancement)
Description
{
"response_processors": [
{
"rerank": {
"by_field": {
"target_field": "ml_score",
"remove_target_field": true ## Default false
}
}
}
]
}
The ByFieldRerank Processor applies a second-level re-ranking on a search hit by specifying a targetField in the _source mapping of a search hit. Currently we support both a shallow target and a nested field target such as a.b.c, given that this mapping exists.
When deleting a target field as specified by "remove_target_field": true, the processor will delete any empty maps that result from this action. For example, when the targetField is ml_info.score_report, it will transform
{
"my_field": "value"
"ml_info" : {
"score_report" : 27
}
}
into the following
{
"my_field": "value"
}
It was chosen this way to give users the flexibility to clean up their searchHit, however the default behavior was disabled to let new users adapt to the feature.
Related Issues
Resolves https://github.com/opensearch-project/neural-search/issues/926
Resolves https://github.com/opensearch-project/OpenSearch/issues/15631
Check List
[X] New functionality includes testing.
[ ] New functionality has been documented.
[ ] API changes companion pull request created.
[X] Commits are signed per the DCO using --signoff.
[ ] Public documentation issue/PR created.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
The functionality has been implemented all that is left is more UTs, hence I left the PR as a draft. I can't think of integration testing for it as it doesn't interact with a remote process.
The functionality has been implemented all that is left is more UTs, hence I left the PR as a draft. I can't think of integration testing for it as it doesn't interact with a remote process.
We can interact with a local model; it should be fine for the sake of an integ test, something similar to how it's done in the existing test for the rerank processor https://github.com/opensearch-project/neural-search/blob/main/src/test/java/org/opensearch/neuralsearch/processor/rerank/MLOpenSearchRerankProcessorIT.java
Will do the second round of review tonight
Add changelog in the PR.
@vibrantvarun why are we flipping this request from feature to enhancement? To me that's clearly a feature: we're adding functionality and extending the interface with new parameters.
I thought it is an enhancement of the rerank processor which already exists in the project. @martin-gaievski I am fine with either label.
|
gharchive/pull-request
| 2024-10-14T22:55:19 |
2025-04-01T06:39:53.644709
|
{
"authors": [
"brianf-aws",
"martin-gaievski",
"vibrantvarun"
],
"repo": "opensearch-project/neural-search",
"url": "https://github.com/opensearch-project/neural-search/pull/932",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1519794378
|
Add bwc tests against the distribution
Signed-off-by: Zelin Hao zelinhao@amazon.com
Description
Add an option to run BWC tests at the distribution level, i.e. run the tests against a test cluster with the latest distribution bundle installed and all plugins (included in the latest manifest) present.
The command to run BWC tests at the distribution level would be ./gradlew bwcTestSuite -Dtests.security.manager=false -PcustomDistributionDownloadType=bundle
If the property customDistributionDownloadType is not set, or not set to bundle, the BWC tests default to running at the plugin level as previously configured.
Issues Resolved
Part of https://github.com/opensearch-project/opensearch-build/issues/2870
Check List
[ ] New functionality includes testing.
[ ] All tests pass, including unit test, integration test and doctest
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] New functionality has user manual doc added
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Codecov Report
Merging #1366 (e2a7ac2) into 2.x (d2a17a3) will increase coverage by 28.21%.
The diff coverage is n/a.
@@ Coverage Diff @@
## 2.x #1366 +/- ##
=============================================
+ Coverage 41.89% 70.10% +28.21%
Complexity 315 315
=============================================
Files 302 46 -256
Lines 17849 2482 -15367
Branches 4332 253 -4079
=============================================
- Hits 7477 1740 -5737
+ Misses 10199 600 -9599
+ Partials 173 142 -31
Flags (Coverage Δ):
dashboards-observability: ?
opensearch-observability: 70.10% <ø> (ø)
Flags with carried forward coverage won't be shown.
Impacted Files:
...nalytics/redux/slices/viualization_config_slice.ts
...config_panes/config_controls/config_text_input.tsx
...izations/config_panel/config_panes/json_editor.tsx
...rds-observability/common/constants/autocomplete.ts
...nfig_panes/config_controls/config_style_slider.tsx
...ublic/components/visualizations/charts/bar/bar.tsx
...tebooks/components/helpers/legacy_route_helpers.ts
...ponents/visualizations/charts/maps/treemap_type.ts
.../common/query_manager/ast/builder/stats_builder.ts
...isualizations/shared_components/toolbar_button.tsx
... and 246 more
@joshuali925 Please help re-review this PR as I just rebased the branch to resolve conflicts. Thanks!
|
gharchive/pull-request
| 2023-01-05T00:41:03 |
2025-04-01T06:39:53.665413
|
{
"authors": [
"codecov-commenter",
"zelinh"
],
"repo": "opensearch-project/observability",
"url": "https://github.com/opensearch-project/observability/pull/1366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1083329036
|
Created 1.2.3 manifest.
Signed-off-by: dblock dblock@dblock.org
Description
Created 1.2.3 manifest.
Check List
[x] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Didn't want to use the automation?
Current automation doesn't add to the cron, that's still a TODO :(
|
gharchive/pull-request
| 2021-12-17T14:45:11 |
2025-04-01T06:39:53.668691
|
{
"authors": [
"dblock"
],
"repo": "opensearch-project/opensearch-build",
"url": "https://github.com/opensearch-project/opensearch-build/pull/1369",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2375351808
|
Field, Function and test fixes, adding missing struct to node stats
Description
Fix FLS role type
Fix json compare when response is null
Fix function where the wrong var is given to parse the response
Add missing node stats cache field
Issues Resolved
Closes https://github.com/opensearch-project/opensearch-go/issues/560
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Want to write some tests for this one? Looks like lots of tests failed.
Tests are failing because the omitempty tag causes the JSON compare to omit those fields, resulting in missing fields when comparing. I am currently thinking about a solution.
I am not totally happy with this solution, as I would have liked not to duplicate the structs, but there is no lib that compares a struct against a JSON blob and respects the omitempty tag. Or at least I did not find such a lib.
Therefore duplicating the structs, but without omitempty, is the simplest and easiest way of solving this.
|
gharchive/pull-request
| 2024-06-26T13:25:56 |
2025-04-01T06:39:53.672340
|
{
"authors": [
"Jakob3xD"
],
"repo": "opensearch-project/opensearch-go",
"url": "https://github.com/opensearch-project/opensearch-go/pull/572",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1607733673
|
Task-able long running index operation enhancement and admin UI panel for composite index template.
Describe the blog post your would like to write
A trackable index operation mechanism applied to Split / Shrink / Force merge / Close operations.
A centralized notification management page to help manage notifications for default / specific operations.
Admin UI panel for composite index template.
What is the title of the blog post?
Enhancement on long running index operation and admin UI panel for composite index template.
Who are the authors?
lxuesong, suzhou, gbinlong, ihailong, zhichaog.
What is the proposed posting date?
2023.04.28
Synced with SMEs to establish workflow and writing schedule.
As the Notification Management won't be released in v2.7, the blog release date will slip to the release date of v2.8. Sorry for the inconvenience.
@SuZhou-Joe Please provide updated review and publish dates for this blog.
@SuZhou-Joe Please provide updated review and publish dates for this blog since 2.8 has been released.
Thanks for reminding, the estimated date for review will be 06.30 and the expected publish date will be 07.07
|
gharchive/issue
| 2023-03-03T00:11:19 |
2025-04-01T06:39:53.676652
|
{
"authors": [
"SuZhou-Joe",
"pajuric",
"vagimeli"
],
"repo": "opensearch-project/project-website",
"url": "https://github.com/opensearch-project/project-website/issues/1397",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2094785713
|
Text Queries in OpenSearch
Describe the blog post your would like to write
Text Queries in OpenSearch
What is the title of the blog post?
Text Queries in OpenSearch - match_only text field optimization. Tied to OpenSearch 2.12 release on 2/20.
Who are the authors?
Saurabh Singh, Rishabh Kumar Maurya
What is the proposed posting date?
2/20/24
Closing this request. Please feel free to reopen if you plan to move this forward.
@getsaurabh02 - I am going to close this issue for now. When you are ready to move it forward, we'll reopen it.
|
gharchive/issue
| 2024-01-22T21:29:20 |
2025-04-01T06:39:53.679200
|
{
"authors": [
"pajuric"
],
"repo": "opensearch-project/project-website",
"url": "https://github.com/opensearch-project/project-website/issues/2538",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1901414241
|
Cms/blogs/try again what in the ml is going on around here
Description
Blog PR, Draft 1
Issues Resolved
#2001
Check List
[X] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the BSD-3-Clause License.
Thanks for your help Nate. I'll make a few changes and then let you have at it.
@nateynateynate @dtaivpp - This is ready to post once the date and meta is updated. Let's get this shipped today, please.
|
gharchive/pull-request
| 2023-09-18T17:21:58 |
2025-04-01T06:39:53.681505
|
{
"authors": [
"nateynateynate",
"pajuric"
],
"repo": "opensearch-project/project-website",
"url": "https://github.com/opensearch-project/project-website/pull/2007",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1874652478
|
[CVE-2023-2976] Fix google-java-format-1.17.0.jar: 1 vulnerabilities
Description
Fix google-java-format-1.17.0.jar: 1 vulnerabilities
Issues Resolved
Resolves https://github.com/opensearch-project/security-analytics/issues/511
CVE: https://www.mend.io/vulnerability-database/CVE-2023-2976
Check List
[ ] New functionality includes testing.
[ ] All tests pass
[ ] New functionality has been documented.
[ ] New functionality has javadoc added
[ ] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Only the main (fix merged) and 2.x branches have this vulnerability. Please merge the backport https://github.com/opensearch-project/security-analytics/pull/527
Older branches 2.4-2.9 do not have this vulnerability. We can safely remove those backport tags since they are not required.
|
gharchive/pull-request
| 2023-08-31T02:08:09 |
2025-04-01T06:39:53.686451
|
{
"authors": [
"sandeshkr419"
],
"repo": "opensearch-project/security-analytics",
"url": "https://github.com/opensearch-project/security-analytics/pull/526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
452243267
|
JWT authentication: kibana rejects valid tokens after session expiry
Symptoms:
user successfully logs in using jwt access token. Token based user, roles all OK
access token refreshed on a 10 minute basis
new access token delivered to kibana, usually kibana responds without an issue.
However, sometimes:
4-a) kibana responds with 302 /kibana/customerror?type=sessionExpired#?_g=()
4-b) I can test the token delivered in step 3 with elastic and it authenticates fine.
clearing all cookies, starting in incognito session, trying in different browser, all produce same 302 error
Workaround:
Only restarting kibana seems to work. :-(
After restarting only kibana, and using same token, authentication works.
Comments:
Suspect, but can't validate that kibana server is somehow maintaining state associated with token or it's cookie ?
kibana log in verbose mode
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:11Z","tags":["plugin","debug"],"pid":1,"message":"Checking Elasticsearch version"}
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:13Z","tags":["debug","legacy-proxy"],"pid":1,"message":"Event is being forwarded: connection"}
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:13Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy POST:/elasticsearch/_msearch?rest_total_hits_as_int=true&ignore_throttled=true."}
kibana-service | {"type":"response","@timestamp":"2019-06-04T22:24:13Z","tags":[],"pid":1,"method":"post","statusCode":302,"req":{"url":"/elasticsearch/_msearch?rest_total_hits_as_int=true&ignore_throttled=true","method":"post","headers":{"connection":"upgrade","host":"localhost","content-length":"1383","accept":"application/json, text/plain, */*","origin":"http://localhost","kbn-version":"6.7.1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36","dnt":"1","content-type":"application/x-ndjson","referer":"http://localhost/kibana/app/kibana","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9","securitytenant":"__user__"},"remoteAddress":"172.19.0.11","userAgent":"172.19.0.11","referer":"http://localhost/kibana/app/kibana"},"res":{"statusCode":302,"responseTime":8,"contentLength":9},"message":"POST /elasticsearch/_msearch?rest_total_hits_as_int=true&ignore_throttled=true 302 8ms - 9.0B"}
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:13Z","tags":["debug","legacy-proxy"],"pid":1,"message":"Event is being forwarded: connection"}
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:13Z","tags":["debug","legacy-service"],"pid":1,"message":"Request will be handled by proxy GET:/customerror?type=sessionExpired."}
kibana-service | {"type":"response","@timestamp":"2019-06-04T22:24:13Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/customerror?type=sessionExpired","method":"get","headers":{"connection":"upgrade","host":"localhost","accept":"application/json, text/plain, */*","kbn-version":"6.7.1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36","dnt":"1","referer":"http://localhost/kibana/app/kibana","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9"},"remoteAddress":"172.19.0.11","userAgent":"172.19.0.11","referer":"http://localhost/kibana/app/kibana"},"res":{"statusCode":200,"responseTime":8,"contentLength":9},"message":"GET /customerror?type=sessionExpired 200 8ms - 9.0B"}
kibana-service | {"type":"log","@timestamp":"2019-06-04T22:24:13Z","tags":["plugin","debug"],"pid":1,"message":"Checking Elasticsearch version"}
I've verified that a new token was sent. However, kibana does not respond with its own security_authentication cookie.
@hardik-k-shah
https://github.com/opendistro-for-elasticsearch/security-kibana-plugin/blame/v0.9.0.0/lib/auth/types/AuthType.js#L149-L155
I'm suspecting this section. My tokens always have the same header:
{
"alg": "RS256",
"typ": "JWT",
"kid": "XXXX_key_2019-03-08T05:28:18Z"
}
Would that cause the "no need to return new credentials" test to ignore the new token?
I suspect, but cannot prove, that the security_authentication cookie is out of step with the Authorization: Bearer token.
My brute force workaround seems to confirm this.
In nginx:
# remove kibana's security_authentication cookie.
set $sans_security_authentication $http_cookie;
if ($http_cookie ~ "(.*)(?:^|;)\s*security_authentication=[^;]+(.*)") {
set $sans_security_authentication $1$2;
}
proxy_set_header Cookie $sans_security_authentication;
...
location /kibana {
proxy_pass_request_headers off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
rewrite /kibana/(.*) /$1 break;
proxy_set_header Authorization "$access_token";
proxy_set_header kbn-version "$http_kbn_version";
proxy_pass http://kibana-service:5601/;
}
I'm not sure I'm experiencing exactly the same thing, but I believe there is something wrong with how Kibana refreshes the cookie TTL when receiving requests. In my case I'm using SAML, not JWT, but the user experience seems similar (session expiry is not updated even when the user is actively using Kibana).
I'll create a separate issue for this (currently GitHub is giving me 500 errors =:O), but here's the post I created some time back:
https://discuss.opendistrocommunity.dev/t/kibana-session-keepalive-when-using-saml/724
any updates?
Any updates?
Try setting your cookie TTL (opendistro_security.cookie.ttl) to exactly the same time as your access token expiration.
Any updates?
I managed to get it working by adding the following configurations:
opendistro_security.cookie.ttl: [jwtExpirationTime]
opendistro_security.session.ttl: 2 * [jwtExpirationTime]
opendistro_security.session.keepalive: true
We are doing some "spring cleaning in the fall", and to make sure we focus our energies on the right issues and we get a better picture of the state of the repo, we are closing all issues that we are carrying over from the ODFE era (ODFE is no longer supported/maintained, see post here).
If you believe this issue should still be considered for current versions of OpenSearch, apologies! Please let us know by re-opening it.
Thanks!
|
gharchive/issue
| 2019-06-04T22:56:46 |
2025-04-01T06:39:53.698683
|
{
"authors": [
"bwalsh",
"davidlago",
"horacimacias",
"kakulukia",
"maismail",
"samoilenko",
"shamsalmon"
],
"repo": "opensearch-project/security-dashboards-plugin",
"url": "https://github.com/opensearch-project/security-dashboards-plugin/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1051251237
|
Update integTest gradle scripts to run via remote cluster independently
Is your feature request related to a problem?
With the existing implementation of integration tests, OpenSearch plugins use the same test framework for remote clusters as for local integration tests, which spins up test clusters by itself. Overriding the endpoint of the locally spun-up test clusters (via the opensearch.testclusters gradle plugin) with the remote cluster endpoint is a hacky workaround.
This approach doesn't work when we want to test the OpenSearch bundle as part of the release: the test framework expects the artifact to be available before we can test it. To clean this up, plugins have to create gradle scripts that run against remote endpoints, per https://github.com/opensearch-project/opensearch-build#integration-tests, for the release cycle.
What solution would you like?
Refer to: https://github.com/opensearch-project/anomaly-detection/pull/298 and make similar changes to the plugin.
Also if you have a custom name to invoke these tests, update the integtest.sh script in https://github.com/opensearch-project/opensearch-build/blob/main/scripts/default/integtest.sh
@saratvemulapalli Does this issue apply to frontend plugins?
This request doesn't apply to dashboards plugin. Ref: https://github.com/opensearch-project/opensearch-plugins/issues/103#issuecomment-1064458549
|
gharchive/issue
| 2021-11-11T18:29:13 |
2025-04-01T06:39:53.704158
|
{
"authors": [
"cliu123",
"saratvemulapalli"
],
"repo": "opensearch-project/security-dashboards-plugin",
"url": "https://github.com/opensearch-project/security-dashboards-plugin/issues/858",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1161781194
|
Add IP filtering to OpenSearch security
When securing a cluster, being able to filter connections based on IP addresses is a nice and simple feature.
By defining lists of IPs or ranges via the CIDR notation, it should be possible to restrict access to a node.
OpenSearch security already allows the restriction of connections based on distinguished names (https://opensearch.org/docs/latest/security-plugin/configuration/yaml/#nodes_dnyml). IP filtering would be a complementary feature of the same type.
IntraFind (my employer) has developed a proprietary IP filtering plugin with the features described above for Elasticsearch v1, which we have been upgrading and maintaining since then. We are interested in contributing the functionality to OpenSearch.
[Triage] This is a great feature request, we would welcome a contribution that adds this functionality.
I would like to do this in a more granular way: add an IP whitelist per role in OpenSearch. We have implemented this ourselves so that a JWT with appid X can only access the cluster from specific CIDRs.
Currently you can apply a role based on IP ranges, but this is different from that.
@peternied @cwperks I'm very interested in taking on this issue as a more impactful project, but I'm not sure where to start. Where should I look first to get the basic information on features like this? Tysm!
This would enable us to drop a custom reverse proxy. If you need any input on the functional side: no problem. Happy to schedule a call.
@prabhask5 For where to start in the code: during the request life cycle, checkAndAuthenticateRequest [1] is where the security plugin determines who is making the request; if we don't know who it is, we reject the request. This seems like an 'easy' existing point in the process to allow/reject requests based on a CIDR range.
However, I'd caution that this is a feature of sizable scope, so rather than starting with only a draft pull request, I'd recommend creating an issue with this design template [2], which will help you plan out the direction you are thinking of and get buy-in from the maintainers.
[1] Relevant code path for authentication https://github.com/opensearch-project/security/blob/905c97d4fed185362a12c4b46aa56ad8febabb4d/src/main/java/org/opensearch/security/filter/SecurityRestFilter.java#L254
[2] Design doc template https://github.com/opensearch-project/security/blob/main/.github/ISSUE_TEMPLATE/DESIGN_DOCUMENT_TEMPLATE.md?plain=1
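To make the suggestion above concrete, here is a minimal, hedged sketch of a CIDR allow-list check that a request filter could consult at that point, before any authentication backend is involved. The class and method names are hypothetical (they are not part of the security plugin's API), and a real implementation would also need configuration loading, a policy for proxy headers, and audit logging.
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;

public final class CidrAllowList {

    private final List<String> allowedCidrs; // e.g. ["10.0.0.0/8", "192.168.1.0/24"] -- hypothetical config

    public CidrAllowList(List<String> allowedCidrs) {
        this.allowedCidrs = allowedCidrs;
    }

    // Returns true if the remote address falls inside any configured CIDR range.
    public boolean isAllowed(InetAddress remote) {
        for (String cidr : allowedCidrs) {
            if (matches(cidr, remote)) {
                return true;
            }
        }
        return false;
    }

    private static boolean matches(String cidr, InetAddress remote) {
        try {
            String[] parts = cidr.split("/");
            InetAddress network = InetAddress.getByName(parts[0]);
            int prefixLen = Integer.parseInt(parts[1]);
            byte[] net = network.getAddress();
            byte[] addr = remote.getAddress();
            if (net.length != addr.length) {
                return false; // IPv4 vs IPv6 family mismatch
            }
            int fullBytes = prefixLen / 8;
            int remainingBits = prefixLen % 8;
            for (int i = 0; i < fullBytes; i++) {
                if (net[i] != addr[i]) {
                    return false;
                }
            }
            if (remainingBits == 0) {
                return true;
            }
            int mask = (0xFF << (8 - remainingBits)) & 0xFF;
            return (net[fullBytes] & mask) == (addr[fullBytes] & mask);
        } catch (UnknownHostException | RuntimeException e) {
            return false; // treat malformed entries or addresses as non-matching
        }
    }
}
With something like this, the filter referenced in [1] could reject a request up front when isAllowed(remoteAddress) returns false and only continue to authentication otherwise.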
|
gharchive/issue
| 2022-03-07T18:34:20 |
2025-04-01T06:39:53.710140
|
{
"authors": [
"br3no",
"ict-one-nl",
"peternied",
"prabhask5"
],
"repo": "opensearch-project/security",
"url": "https://github.com/opensearch-project/security/issues/1667",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2181688656
|
[BUG] simple joins on keywords don't seem to work
What is the bug?
Joins on keyword fields do not seem to work.
From the Query Workbench:
select
q.query_id
from
.ubi_log_queries q
join .ubi_log_events e
on q.query_id = e.query_id
Yields this output:
: Bad Request, this query is not runnable.
Both indices have data and the following mapping:
"query_id" : {
"type" : "keyword"
},
The OpenSearch logs have:
[2024-03-12T13:43:44,456][INFO ][o.o.s.l.p.RestSqlAction ] [opensearch] [a77b58fa-bf13-4e8d-b9e7-c517f3793a96] Incoming request /_plugins/_sql
[2024-03-12T13:43:44,459][WARN ][stderr ] [opensearch] line 1:49 mismatched input 'join' expecting {<EOF>, ';'}
[2024-03-12T13:43:44,459][INFO ][o.o.s.l.p.RestSqlAction ] [opensearch] [a77b58fa-bf13-4e8d-b9e7-c517f3793a96] Request SQLQueryRequest(jsonContent={"query":"select q.query_id from .ubi_log_queries q join .ubi_log_events e on q.query_id = e.query_id"}, query=select q.query_id from .ubi_log_queries q join .ubi_log_events e on q.query_id = e.query_id, path=/_plugins/_sql, format=jdbc, params={}, sanitize=true, cursor=Optional.empty) is not supported and falling back to old SQL engine
[2024-03-12T13:43:44,470][WARN ][o.o.s.l.u.QueryDataAnonymizer] [opensearch] Caught an exception when anonymizing sensitive data.
[2024-03-12T13:43:44,470][INFO ][o.o.s.l.p.RestSqlAction ] [opensearch] Request Query: Failed to anonymize data.
[2024-03-12T13:43:44,470][ERROR][o.o.s.l.p.RestSqlAction ] [opensearch] a77b58fa-bf13-4e8d-b9e7-c517f3793a96 Client side error during query execution
com.alibaba.druid.sql.parser.ParserException: Illegal SQL expression : select q.query_id from .ubi_log_queries q join .ubi_log_events e on q.query_id = e.query_id
at org.opensearch.sql.legacy.utils.Util.toSqlExpr(Util.java:279) ~[legacy-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.query.OpenSearchActionFactory.create(OpenSearchActionFactory.java:89) ~[legacy-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.plugin.SearchDao.explain(SearchDao.java:48) ~[legacy-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.plugin.RestSqlAction.explainRequest(RestSqlAction.java:243) [legacy-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.plugin.RestSqlAction.lambda$prepareRequest$1(RestSqlAction.java:172) [legacy-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.plugin.RestSQLQueryAction$1.onFailure(RestSQLQueryAction.java:130) [legacy-2.12.0.0.jar:?]
at org.opensearch.sql.sql.SQLService.execute(SQLService.java:43) [sql-2.12.0.0.jar:?]
at org.opensearch.sql.legacy.plugin.RestSQLQueryAction.lambda$prepareRequest$3(RestSQLQueryAction.java:107) [legacy-2.12.0.0.jar:?]
at org.opensearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:128) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.rest.RestController.dispatchRequest(RestController.java:334) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.rest.RestController.tryAllHandlers(RestController.java:425) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.rest.RestController.dispatchRequest(RestController.java:263) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:387) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:468) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:370) [opensearch-2.12.0.jar:2.12.0]
at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:56) [transport-netty4-client-2.12.0.jar:2.12.0]
at org.opensearch.http.netty4.Netty4HttpRequestHandler.channelRead0(Netty4HttpRequestHandler.java:42) [transport-netty4-client-2.12.0.jar:2.12.0]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at org.opensearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:72) [transport-netty4-client-2.12.0.jar:2.12.0]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) [netty-handler-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:689) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:652) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562) [netty-transport-4.1.106.Final.jar:4.1.106.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) [netty-common-4.1.106.Final.jar:4.1.106.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.106.Final.jar:4.1.106.Final]
at java.base/java.lang.Thread.run(Thread.java:1583) [?:?]
[2024-03-12T13:47:17,192][INFO ][o.o.j.s.JobSweeper ] [opensearch] Running full sweep
What is the expected behavior?
Rows of data should have been returned instead of an error.
What is your host/environment?
OpenSearch and all plugins are version 2.12.
This is a bug in how strings containing '.' are handled: https://github.com/opensearch-project/sql/blob/main/legacy/src/main/java/org/opensearch/sql/legacy/utils/StringUtils.java#L93
To mitigate the issue, is it possible to remove the '.' from the index names?
@penghuo I'm happy to work this issue if you want to assign it to me. It will benefit our OpenSearch UBI work.
I suspect that you don't need it assigned to you to submit a PR ;-)
|
gharchive/issue
| 2024-03-12T13:55:56 |
2025-04-01T06:39:53.717793
|
{
"authors": [
"RasonJ",
"epugh",
"jzonthemtn",
"penghuo"
],
"repo": "opensearch-project/sql",
"url": "https://github.com/opensearch-project/sql/issues/2550",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1619210012
|
Add STR_TO_DATE Function To The SQL Plugin
Description
Adds the str_to_date function to the SQL plugin. The function takes two arguments, an input string and a format string. The format string contains format specifiers which describe how the characters in the input string are parsed to a DATETIME. The behavior is based on MySQL, but differs in some areas due to the limitations of the plugin and Java's built-in DateTimeFormatter. Specifically, the format string must exactly match the string being parsed, and any dates/times with 0 for the year/month/day will return NULL, since these fields must be valid for Java's LocalDate/LocalTime/LocalDateTime. The arguments must contain enough information to build a DATE, TIME, or DATETIME.
Example:
SELECT str_to_date("May 1, 2013", "%M %d, %Y") -> 2013-05-01 00:00:00
SELECT str_to_date("9,23,11", "%h,%i,%s") -> 0001-01-01 09:23:11
Issues Resolved
#722
Check List
[X] New functionality includes testing.
[X] All tests pass, including unit test, integration test and doctest
[X] New functionality has been documented.
[X] New functionality has javadoc added
[X] New functionality has user manual doc added
[X] Commits are signed per the DCO using --signoff
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Codecov Report
Merging #1420 (3532f4c) into main (ef38389) will increase coverage by 0.01%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #1420 +/- ##
============================================
+ Coverage 98.38% 98.40% +0.01%
- Complexity 3698 3723 +25
============================================
Files 343 343
Lines 9121 9211 +90
Branches 586 600 +14
============================================
+ Hits 8974 9064 +90
Misses 142 142
Partials 5 5
Flag | Coverage Δ
sql-engine | 98.40% <100.00%> (+0.01%) :arrow_up:
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ
...c/main/java/org/opensearch/sql/expression/DSL.java | 100.00% <100.00%> (ø)
...sql/expression/datetime/DateTimeFormatterUtil.java | 100.00% <100.00%> (ø)
...arch/sql/expression/datetime/DateTimeFunction.java | 100.00% <100.00%> (ø)
...h/sql/expression/function/BuiltinFunctionName.java | 100.00% <100.00%> (ø)
|
gharchive/pull-request
| 2023-03-10T16:01:17 |
2025-04-01T06:39:53.732991
|
{
"authors": [
"GabeFernandez310",
"codecov-commenter"
],
"repo": "opensearch-project/sql",
"url": "https://github.com/opensearch-project/sql/pull/1420",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
239637235
|
Binding retries in stateless brokers
According to OSB Bindings spec:
200 OK: MUST be returned if the binding already exists and the requested parameters are identical to the existing binding. The expected response body is below.
How can this requirement be achieved in stateless brokers (see https://github.com/openservicebrokerapi/servicebroker/issues/203 and https://github.com/openservicebrokerapi/servicebroker/issues/225#issuecomment-311230413)? This requirement implicitly assumes that:
Broker stores the request payload for each binding somewhere (i.e. broker is stateful).
Broker is able to re-retrieve the credentials for an existing Binding (even though there is no GET endpoint).
I'm pretty sure that existing stateless brokers don't respect this requirement, otherwise they are stateful 😕
Shall we remove this requirement from the spec?
This seems to be a subset of #225
@arschles it's related, but not a subset.
The question for this issue is whether this requirement is always respected in the current world of brokers. If not, we should remove it from the spec.
Also see related issue in Service Catalog: #1062
If this requirement is respected, there is no need for preventing duplicate requests from the Service Catalog side.
related #291
proposal: https://github.com/openservicebrokerapi/servicebroker/issues/291#issuecomment-323344056
I'm not sure what a good solution to this issue is. I understand the reasoning behind a stateless broker not being able to respond with a 200 OK here as it can't remember what parameters were used originally, but those brokers could always just respond with a 409 Conflict:
409 Conflict: MUST be returned if a Service Binding with the same id, for the same Service Instance, already exists or is being created but with different parameters.
What do you think @fmui ?
Looks like we solved that already by accident. See #528
You're right, that does technically solve this, but I still think that the spec doesn't offer much guidance as to what service brokers should do if they don't have state and they can't determine if the configuration parameters were the same as before. Should they return a 200 or 409 in that case?
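Purely to illustrate the trade-off being discussed, a stateless broker's handling of a duplicate binding request might reduce to something like the sketch below. The types and helper are hypothetical and not taken from any OSB library or from the spec itself.
public final class BindingResponseSketch {

    // parametersIdentical is null when the broker has no stored state to compare against.
    static int statusForDuplicateBinding(boolean bindingExists, Boolean parametersIdentical) {
        if (!bindingExists) {
            return 201; // binding created
        }
        if (parametersIdentical == null) {
            // A stateless broker cannot prove the request is identical,
            // so 409 Conflict is the conservative answer.
            return 409;
        }
        return parametersIdentical ? 200 : 409;
    }
}
In other words, when the broker cannot verify that the parameters match, 409 is the response the spec already permits it to fall back on.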
Is this a problem for anyone, or can we close this for now and wait until it becomes a problem?
Closing due to inactivity. Please reopen if this becomes a problem for anyone.
|
gharchive/issue
| 2017-06-29T23:19:31 |
2025-04-01T06:39:53.741377
|
{
"authors": [
"arschles",
"duglin",
"fmui",
"mattmcneeney",
"nilebox"
],
"repo": "openservicebrokerapi/servicebroker",
"url": "https://github.com/openservicebrokerapi/servicebroker/issues/260",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1050144981
|
Planning Milestone v1.1.0
This GitHub Issue is to be used for planning Open Service Mesh release v1.1.0
Please comment with recommendations on GitHub Issues or general areas that we should bundle with the v1.1.0 release.
Proposed tasks for v1.1.0:
#4351
Closing this as we will be planning v1.1.0 differently
|
gharchive/issue
| 2021-11-10T18:08:49 |
2025-04-01T06:39:53.743721
|
{
"authors": [
"draychev",
"snehachhabria"
],
"repo": "openservicemesh/osm",
"url": "https://github.com/openservicemesh/osm/issues/4349",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1288883612
|
Fix MRC status
Signed-off-by: Keith Mattix II keithmattix2@gmail.com
Description:
Correctly sets the preset MRC's status on initial create and fixes a typo on the printer column of the MRC CRD
Testing done:
Ran the demo and checked the MRC manifest
Affected area:
Functional Area
[X] Certificate Management
Please answer the following questions with yes/no.
Does this change contain code from or inspired by another project? no
Did you notify the maintainers and provide attribution? N/A
Is this a breaking change? no
Has documentation corresponding to this change been updated in the osm-docs repo (if applicable)? N/A
In osm-bootstrap_test.go we could add a check to TestCreateMeshRootCertificate to verify the MRC status has been set as expected.
Codecov Report
Merging #4856 (f42a565) into main (d970b24) will decrease coverage by 0.00%.
The diff coverage is 66.66%.
@@ Coverage Diff @@
## main #4856 +/- ##
==========================================
- Coverage 69.51% 69.50% -0.01%
==========================================
Files 219 219
Lines 16033 16041 +8
==========================================
+ Hits 11146 11150 +4
- Misses 4833 4837 +4
Partials 54 54
Flag | Coverage Δ
unittests | 69.50% <66.66%> (-0.01%) :arrow_down:
Flags with carried forward coverage won't be shown.
Impacted Files | Coverage Δ
cmd/osm-bootstrap/osm-bootstrap.go | 47.93% <66.66%> (+0.05%) :arrow_up:
pkg/messaging/workqueue.go | 89.28% <0.00%> (-10.72%) :arrow_down:
pkg/ticker/ticker.go | 87.17% <0.00%> (+3.84%) :arrow_up:
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d970b24...f42a565.
|
gharchive/pull-request
| 2022-06-29T15:16:16 |
2025-04-01T06:39:53.758234
|
{
"authors": [
"codecov-commenter",
"jaellio",
"keithmattix"
],
"repo": "openservicemesh/osm",
"url": "https://github.com/openservicemesh/osm/pull/4856",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2524637773
|
Add ibgu package
Add ibgu package for ImageBasedGroupUpgrades custom resource.
cc @josclark42
@mcornea Can you please split this PR into 2 different PRs: one PR for the sync and scheme update, and one PR for the ibgu package.
Sure, I opened https://github.com/openshift-kni/eco-goinfra/pull/663 for the sync and schema.
I'll rebase once https://github.com/openshift-kni/eco-goinfra/pull/663 gets merged.
|
gharchive/pull-request
| 2024-09-13T11:46:52 |
2025-04-01T06:39:53.779704
|
{
"authors": [
"mcornea"
],
"repo": "openshift-kni/eco-goinfra",
"url": "https://github.com/openshift-kni/eco-goinfra/pull/662",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2621537555
|
Fix RBAC role for ImageBasedUpgrade
This adds the missing RBAC role for provisioning request controller to access and modify IBGUs.
/lgtm
Retested on my system. It works now.
/approve
|
gharchive/pull-request
| 2024-10-29T15:05:02 |
2025-04-01T06:39:53.780957
|
{
"authors": [
"Missxiaoguo",
"alegacy",
"sudomakeinstall2"
],
"repo": "openshift-kni/oran-o2ims",
"url": "https://github.com/openshift-kni/oran-o2ims/pull/282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2728739173
|
Update manager config to use postgresql-16-c9s image
Replaced the postgresql image with one that is publicly available in order to avoid requiring a pull secret in deployment automation being contributed to the ORAN-SC community.
/lgtm
/approve
|
gharchive/pull-request
| 2024-12-10T01:57:19 |
2025-04-01T06:39:53.782034
|
{
"authors": [
"clwheel",
"donpenney",
"mlguerrero12"
],
"repo": "openshift-kni/oran-o2ims",
"url": "https://github.com/openshift-kni/oran-o2ims/pull/391",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|