id (string, 4 to 10 chars) | text (string, 4 to 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
1419530657 | Cannot paste multiple lines into the example program.
Highlight and copy the 10 lines below:
This is line 1.
This is line 2.
This is line 3.
This is line 4.
This is line 5.
This is line 6.
This is line 7.
This is line 8.
This is line 9.
This is line 10.
Now run the example program and paste these 10 lines into it. This is what you get:
$ ./linenoise_example
hello> This is line 1.
echo: 'This is line 1.'
hello>
Only the first line is received. Lines 2 - 9 are ignored. Each line should be printed with a prompt and a response. Pasting short code snippets into a REPL is a very common thing to do.
I think I found the explanation for this problem in the linux termios man page where it explains the TCSAFLUSH flag to tcsetattr:
the change occurs after all output written to the object
referred by fd has been transmitted, and all input that
has been received but not read will be discarded before
the change is made.
That flag is used in disableRawMode. So everything in the input buffer after the first newline gets discarded when disableRawMode is called after processing the first line.
The fix for this appears to be to use TCSADRAIN instead of TCSAFLUSH. That drains the output but does not discard the input.
| gharchive/issue | 2022-10-22T22:20:58 | 2025-04-01T06:37:51.445479 | {
"authors": [
"culler"
],
"repo": "antirez/linenoise",
"url": "https://github.com/antirez/linenoise/issues/208",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2435069227 | Labels in argument lists
When formatting a label in an argument list, the label is attached to the comma following the previous argument or open brace, e.g.,
#set heading(numbering: "1.")
= Heading <label>
#ref( <label>
,
)
Instead, I would expect labels to be treated like other values in argument lists:
#set heading(numbering: "1.")
= Heading <label>
#ref(
<label>,
)
With b8870c069ee1c5b579a08e40ac4dac1c47915b4b the special handling for labels is only used in markup mode. If some issues persist, please reopen the issue.
| gharchive/issue | 2024-07-29T10:41:35 | 2025-04-01T06:37:51.461373 | {
"authors": [
"antonWetzel",
"m-haug"
],
"repo": "antonWetzel/prettypst",
"url": "https://github.com/antonWetzel/prettypst/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
859206493 | Analyse --mode reads error- missing script
Hello,
I am able to run haystac analyse --mode abundances, but haystac analyse --mode reads fails. I've included the snakemake log. My goal is to recover the reads that are being assigned to each category, i.e. dark and grey matter, but it looks like this code may have been refactored.
haystaclog.txt
I installed haystac through conda. Thank you for your help and please let me know if I can provide more information.
Best,
Peter
Hello Peter,
I hope you are doing great and apologies for not getting back to you earlier !
Thank you for using haystac and for raising this issue.
Indeed the code for haystac analyse --mode reads has been recently refactored and optimised as a new version will be released soon on conda.
I went through the log file and I was wondering if you could upload the likelihood matrix that haystac analyse --mode abundances has produced. From your log file it should be here /local/workdir/pk445/haystac/both_notrim_reads_.05/probabilities/both_sample/both_sample_likelihood_ts_tv_matrix.csv.
I will make sure I get back to you ASAP.
Thank you for your help and patience !
Best,
Antony
Hello Antony,
Thanks for the response. In the meantime, is there a workaround for using this feature?
| gharchive/issue | 2021-04-15T20:00:46 | 2025-04-01T06:37:51.472877 | {
"authors": [
"Pkaps25",
"antonisdim"
],
"repo": "antonisdim/haystac",
"url": "https://github.com/antonisdim/haystac/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1370450064 | Creative Development
Quiz using app lab in code.org: https://studio.code.org/projects/applab/jau_YPo5u3K2kjagwJ9Ow50YORprzrWIiwIWRQOfrSw
Quiz plan post on my fast page: https://antonyrc6.github.io/Antony-s-fast-page/markdown/2022/09/11/quiz-plan.html
I bet Anthony deserves a 2.8. His quiz is completely organized. He even put up notifications to figure out if an answer is right or wrong. When the quiz was finished, the message said that all answers were correct. He did really well on the College Board Task as well.
| gharchive/issue | 2022-09-12T20:01:16 | 2025-04-01T06:37:51.478751 | {
"authors": [
"antonyrc6",
"bushku"
],
"repo": "antonyrc6/Antony-s-fast-page",
"url": "https://github.com/antonyrc6/Antony-s-fast-page/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2425813524 | Add logs for the kind tests in jenkins
Closes: #6538
To prevent log files from piling up and consuming too much storage, you can implement a cleanup strategy. This can be done by periodically deleting old log files or by keeping only a limited number of the most recent log files.
Files in the job's workspace will be cleaned at the beginning of the job. The retention period for job history (including saved log files) is configured by Jenkins, so there is no need to build another cleanup script.
@XinShuYang could you take another look and let me know if I can merge this?
Yes we can merge it.
/test-kind-all
it will trigger only ipv4 jobs, lemme confirm all other ipv6 and dual jobs once.
/test-kind-ipv6-conformance
/test-kind-ipv6-all
| gharchive/pull-request | 2024-07-23T18:08:54 | 2025-04-01T06:37:51.482699 | {
"authors": [
"KMAnju-2021",
"XinShuYang",
"antoninbas",
"rajnkamr"
],
"repo": "antrea-io/antrea",
"url": "https://github.com/antrea-io/antrea/pull/6543",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1163139680 | Obj file parser
Do we actually even really need our own file format?
Is there any additional information we put inside the *.csl files, or could we theoretically just write an *.obj parser and not have to use a custom blender plugin?
Yes we really do need our own file format.
.obj is a generic format that supports many things we do not support (for example quads, instead of only triangles like .csl; quads and NGons are made into triangles inside the Blender exporter automatically). Do you want to handle all of that inside the Interpreter, with much higher performance requirements as well?
We would not have flexibility in how we store scenes, materials, shaders and textures.
How would we store animations and rigging.., or properties for the physics engine..., or .obj files linking to other .obj files to make open-world loading straightforward for example? I could list more but this is just off the top of my head. Or even packing textures inside the .csl. The possibilities are endless with our own format.
One would have to translate all the different .obj types to our own engine structure anyways, so why not do it through Blender and let Blender handle all the complexity of different file formats and have the exporter work for all of them? I have successfully imported a random .obj model from the internet into Blender and after a few adjustments of materials exported it to .csl and displayed it correctly in the renderer. Its name is some_car.csl and can be found in the assets.
Every game engine has its own format and for good reason, why not we? :) The closest to a generic 3D game engine format would be COLLADA, but we would still lose flexibility and functionality and we cannot expect to support everything in COLLADA. Blender can still import COLLADA.
Assuming we adopted for example COLLADA, which would in theory be a better fit. What would we do with .obj files? Import them into Blender, export them into COLLADA and interpret them into our own structure? Adopt the COLLADA structure? Write yet another interpreter for .obj? For .fbx as well?
Blender supports custom object properties. We could use those to store various parameters (for example physics, storage or open-world related) and have them picked up by the exporter.
You could still make a limited direct .obj to .csl translator if you like, but I personally would not consider that high priority.
Compressed binary format when/how?
Summary: It is IMHO much simpler to write ONE exporter and ONE fast parser and interpreter for a flexible file format suited to our engine capabilities than limiting ourselves to any specific generic asset format. We would need to convert to our internal format anyways unless we completely rewrite everything yet again.
Do you need any more arguments or are these enough?
Ok ok, I get it. No obj parser :D
| gharchive/issue | 2022-03-08T21:20:18 | 2025-04-01T06:37:51.490397 | {
"authors": [
"antsouchlos",
"philsegeler"
],
"repo": "antsouchlos/OxygenEngine2",
"url": "https://github.com/antsouchlos/OxygenEngine2/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1226479950 | ScrollBar: scroll horizontally to the end
[ ] I have searched the issues of this repository and believe that this is not a duplicate.
What problem does this feature solve?
There is no way to make the ScrollBar scroll horizontally to the end when it is initialized.
What does the proposed API look like?
A boolean option to enable scrolling to the end on initialization
You can do this in 4.x; just set the range
https://f2.antv.vision/zh/examples/line/line#pan
<ScrollBar mode="x" range={[0.8, 1]} />
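For reference, a minimal sketch of where that prop fits in an F2 4.x JSX chart; the data fields, the context variable, and the surrounding components are illustrative placeholders, not taken from this issue:
import { Canvas, Chart, Line, Axis, ScrollBar } from '@antv/f2';
const chart = (
  <Canvas context={context}>
    <Chart data={data}>
      <Axis field="date" />
      <Axis field="value" />
      <Line x="date" y="value" />
      {/* range={[0.8, 1]} makes the initial view show the last 20% of the data, i.e. scrolled to the end */}
      <ScrollBar mode="x" range={[0.8, 1]} />
    </Chart>
  </Canvas>
);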
Thanks, that works now
| gharchive/issue | 2022-05-05T10:18:12 | 2025-04-01T06:37:51.496065 | {
"authors": [
"xienas",
"zengyue"
],
"repo": "antvis/F2",
"url": "https://github.com/antvis/F2/issues/1464",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2661497876 | 「wip」feat(transform): add exponential smoothing data transform methods
Checklist
[ ] npm test passes
[ ] benchmarks are included
[ ] commit message follows commit guidelines
[ ] documents are updated
Description of change
add exponential smoothing data transform methods
Does this algorithm have a reference or source? And what are the use cases?
OK, I'll add that later
| gharchive/pull-request | 2024-11-15T10:07:27 | 2025-04-01T06:37:51.499276 | {
"authors": [
"hustcc",
"lulusir"
],
"repo": "antvis/G2",
"url": "https://github.com/antvis/G2/pull/6522",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
731349140 | When the rendered shape is an image, the image data keeps being requested, which heavily degrades browser performance
G6 Version: 3.2.3
Platform: Chrome 86.0.4240.111 (official build) (x86_64)
Mini Showcase(like screenshots):
CodePen Link: https://codepen.io/fred_zhao/pen/abZExEx
The Promise handling is there so that image errors can be caught and a default image can be used instead.
The broken images are probably not handled inside the rendering engine. For now, please check for broken images before passing them to G6.
....I did handle it, specifically with a Promise... but the endless requests are really terrible. Is there a solution for this?
Could you also handle the join requests for your DingTalk group? It has been a long time and nobody has approved them.
The workaround is to add a preprocessing step when loading images: convert the image to base64 first, and only then hand it to the graph for rendering.
Both DingTalk groups are full..... If you have questions, it's best to raise them in an issue; we will handle them promptly when we see them.
We'll handle this internally in the next minor version.
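For reference, a minimal TypeScript sketch of that base64 preprocessing workaround, assuming v3/v4-style image nodes that carry an img field; the toDataURL helper and the fallback URL are illustrative assumptions, not G6 APIs:
// Hypothetical helper: fetch each image once, convert it to a base64 data URL,
// and fall back to a default image on error, so the graph never re-requests a broken URL.
async function toDataURL(url: string, fallback: string): Promise<string> {
  try {
    const resp = await fetch(url);
    if (!resp.ok) return fallback;
    const blob = await resp.blob();
    return await new Promise<string>((resolve) => {
      const reader = new FileReader();
      reader.onload = () => resolve(reader.result as string);
      reader.onerror = () => resolve(fallback);
      reader.readAsDataURL(blob);
    });
  } catch {
    return fallback;
  }
}
async function renderWithInlinedImages(graph: any, data: any, fallback: string) {
  for (const node of data.nodes) {
    if (node.img) {
      node.img = await toDataURL(node.img, fallback); // inline the image data before the graph sees it
    }
  }
  graph.data(data);
  graph.render();
}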
| gharchive/issue | 2020-10-28T11:17:14 | 2025-04-01T06:37:51.511903 | {
"authors": [
"Yanyan-Wang",
"wustzhaohui"
],
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/2242",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
808739329 | Smooth animation for data updates
[x] I have searched the issues of this repository and believe that this is not a duplicate.
What problem does this feature solve?
Is there a possibility to make a smooth transition between data updates in graph so that there is no "hard cut" but instead an animation? This would make it much easier for users to follow and understand updates to a graph (e.g. when filtering the data)
What does the proposed API look like?
A new option in GraphOptions
It is already possible: set animate: true in the graph configuration and change the data using changeData
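A minimal sketch of that approach, assuming the G6 v4 API; the container id and coordinates are placeholders:
import G6 from '@antv/g6';
// Enable global animation so data updates transition smoothly instead of hard-cutting.
const graph = new G6.Graph({
  container: 'container', // placeholder element id
  width: 800,
  height: 600,
  animate: true,
});
graph.data({ nodes: [{ id: 'a', x: 100, y: 100 }], edges: [] });
graph.render();
// Later, e.g. after filtering: changeData animates items to their new state.
graph.changeData({ nodes: [{ id: 'a', x: 300, y: 200 }], edges: [] });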
| gharchive/issue | 2021-02-15T18:20:08 | 2025-04-01T06:37:51.514224 | {
"authors": [
"konstantinjdobler"
],
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/2657",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
940578848 | Add an API to trigger a node or edge tooltip without a mouse event
[ ] I have searched the issues of this repository and believe that this is not a duplicate.
What problem does this feature solve?
For example, after I add attributes to a node/edge from outside the graph, with no mouse event involved, the user needs to be made aware of them on the graph in the form of a tooltip.
What does the proposed API look like?
For example graph.tooltipFoucs(node/nodeId, offset, )
We won't consider this for the built-in tooltip; you can add your own DOM to implement it, or simulate the mouse trigger event: graph.emit('node:mouseenter', {item, target});
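A short sketch of that simulation, assuming graph is an initialized v4 instance with the built-in tooltip plugin enabled; the node id is a placeholder:
// Look up the item by id, then fire the same event the tooltip listens for.
const item = graph.findById('node-1'); // placeholder id
if (item) {
  graph.emit('node:mouseenter', {
    item,
    target: item.get('keyShape'), // the node's main shape
  });
  // Hide it again later by simulating the leave event:
  // graph.emit('node:mouseleave', { item });
}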
| gharchive/issue | 2021-07-09T09:14:15 | 2025-04-01T06:37:51.516192 | {
"authors": [
"Yanyan-Wang",
"unicode6674"
],
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/3031",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2035864721 | G6 5.0.0-beta.27: tooltip cannot display properly when using a predefined DOM element
Problem description
In G6 5.0.0-beta.27, when the tooltip's getContent callback uses a predefined DOM element as its return value, the tooltip cannot display properly and crowds out the graph canvas, whereas in G6 4.8.23 the tooltip can display the predefined DOM element at the correct node/edge position. The images below show how the tooltip renders when hovering over a node in the different versions:
Version 5.0.0-beta.27:
Version 4.8.23:
Reproduction links
Reproduction link for 5.0.0-beta.27: https://codesandbox.io/p/sandbox/stupefied-feather-354gpk?file=%2Findex.ts%3A23%2C4 Reproduction link for 4.8.23: https://codesandbox.io/p/sandbox/vibrant-matan-5skrtw?file=%2Findex.js%3A74%2C20
Steps to reproduce
As shown in the reproduction links
Expected behavior
The 5.0.0-beta.27 tooltip should be able to use a predefined DOM element as the displayed content
Platform
Operating system: [Windows]
Web browser: [edge]
G6 version: [5.0.0-beta.27]
Screenshots or video (optional)
No response
Additional notes (optional)
No response
In v5 you need to return outDiv.outerHTML
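A sketch of that fix inside the getContent callback, assuming the v5 beta tooltip plugin configuration; the div content is a placeholder:
const tooltip = {
  type: 'tooltip',
  getContent: (evt: any, items: any[]) => {
    const outDiv = document.createElement('div');
    outDiv.style.width = '180px';
    outDiv.innerHTML = `<h4>Custom tooltip</h4><p>id: ${items?.[0]?.id ?? ''}</p>`;
    // v5 expects an HTML string here, so return the element's outerHTML
    // instead of the DOM node itself (which worked in v4).
    return outDiv.outerHTML;
  },
};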
| gharchive/issue | 2023-12-11T14:55:32 | 2025-04-01T06:37:51.521735 | {
"authors": [
"l-besiege-l",
"uhobnil"
],
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/5251",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2112000553 | Why does G6's element instance method item.hide() work fine, but item.changeVisibility() throw an error??
Problem description
Why does G6's element instance method item.hide() work fine, but item.changeVisibility() throw an error??
Reproduction link
.
Steps to reproduce
.
Expected behavior
.
Platform
Operating system: [macOS, Windows, Linux, React Native ...]
Web browser: [Google Chrome, Safari, Firefox]
G6 version: [4.5.1 ... ]
Screenshots or video (optional)
Additional notes (optional)
No response
From your code, buttons and buttonParents appear to be shapes (Shape), not items (item, e.g. node, combo, edge).
Showing/hiding a shape can only be done via shape.hide() and shape.show().
See the TypeScript type of Shape: IShape extends IElement; IElement defines hide/show, but there is no changeVisibility.
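A short sketch contrasting the two levels, assuming the v4 API and that graph is an initialized instance; the ids and the shape name are placeholders:
const item = graph.findById('node-1'); // an item: node / edge / combo
item.hide(); // item-level API: hide() / show() / changeVisibility(visible)
// Shapes inside the item's graphics group only expose hide()/show():
const group = item.getContainer();
const button = group.find((el: any) => el.get('name') === 'button-shape'); // placeholder shape name
if (button) {
  button.hide(); // IShape has hide()/show(), but no changeVisibility()
}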
| gharchive/issue | 2024-02-01T09:27:44 | 2025-04-01T06:37:51.527117 | {
"authors": [
"Fzt1120",
"ravengao"
],
"repo": "antvis/G6",
"url": "https://github.com/antvis/G6/issues/5409",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2608148400 | 🐛 Stack overflow in the large-data performance demo: Maximum call stack size exceeded
🏷 Version
Package | Version
---|---
@antv/s2 | @antv/s2-v2.0.0-next.27
@antv/s2-react |
@antv/s2-vue |
Sheet Type
[ ] PivotSheet
[x] TableSheet
[ ] GridAnalysisSheet
[ ] StrategySheet
[ ] EditableSheet
🖋 Description
In the official site's "1,000,000 rows performance - table sheet" example, a stack overflow occurs while scrolling down; it reproduces consistently with fewer than 10,000 rows https://s2.antv.antgroup.com/zh/examples/case/performance-compare#table
⌨️ Code Snapshots
🔗 Reproduce Link
https://s2.antv.antgroup.com/zh/examples/case/performance-compare#table
🤔 Steps to Reproduce
😊 Expected Behavior
😅 Current Behavior
💻 System information
Environment | Info
---|---
System | macOS 14.3
Browser | Chrome 129.0.6668.101 (official build) (arm64)
This is a known issue; waiting for the underlying rendering engine to resolve it https://github.com/antvis/G/issues/1712
https://github.com/antvis/S2/issues/2771
| gharchive/issue | 2024-10-23T10:50:51 | 2025-04-01T06:37:51.535738 | {
"authors": [
"liangyuqi",
"lijinke666"
],
"repo": "antvis/S2",
"url": "https://github.com/antvis/S2/issues/2938",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2616046791 | Notification page added
#3861
New Notification Page added
Fixes: #3861
Description
A new Notification page is added. The page makes it very easy to navigate notifications; it sorts them into 4 categories: All, Unread, Read, and Archived. The interface is easy to use as well as easy to understand.
#3861
Type of PR
[x] Feature enhancement
Screenshots / videos (if applicable)
Checklist:
[x] I have made this change from my own.
[x] I have taken help from some online resources.
[x] My code follows the style guidelines of this project.
[x] I have performed a self-review of my own code.
[x] I have commented my code, particularly in hard-to-understand areas.
[x] My changes generate no new warnings.
[x] I have tested the changes thoroughly before submitting this pull request.
[x] I have provided relevant issue numbers and screenshots after making the changes.
Please record a video from the home page, end to end @Pratikpawar13
Are you working on this issue? The GSSoC extension is about to end.
Work with the updated branch and raise a new PR @Pratikpawar13
| gharchive/pull-request | 2024-10-26T18:17:50 | 2025-04-01T06:37:51.550383 | {
"authors": [
"Pratikpawar13",
"abhi03ruchi",
"sailaja-adapa"
],
"repo": "anuragverma108/SwapReads",
"url": "https://github.com/anuragverma108/SwapReads/pull/4062",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
381708384 | Managed::with operator closes resource too early
Need to pass both the current instance and the transforming function down the chain rather than calling map.
To replicate, assert that the session is still open in HibernateTest (in 10.0.3 this will fail):
@Test
public void hibernate(){
    Try<String, Throwable> res = Managed.of(factory::openSession)
        .with(Session::beginTransaction)
        .map((session, tx) -> {
            try {
                verify(session, never()).close();
            } catch(Exception e) {
                e.printStackTrace();
            }
            return deleteFromMyTable(session)
                .bipeek(success -> tx.commit(), error -> tx.rollback());
        }).foldRun(Try::flatten);
    assertThat(res, equalTo(Try.success("deleted")));
}
Merged
| gharchive/issue | 2018-11-16T18:18:32 | 2025-04-01T06:37:51.580437 | {
"authors": [
"johnmcclean"
],
"repo": "aol/cyclops",
"url": "https://github.com/aol/cyclops/issues/948",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1708444792 | fixing scrolling issue mentioned in Issue #142
I went with the safer solution:
Made a border around the window screen to make it obvious that it's a screen
There were many solutions I could think of, but I chose the safest one, taking into consideration the future development of this feature: I simply drew a border line around that screen to make it obvious to users that it's a screen, not normal white space. Solutions I thought of doing:
I could have set the Height of the screen to auto, and that should have solved the problem FOR NOW, as the feature isn't in use yet (from my understanding)
I could have written simple code checking whether the number of elements is 0 and, if so, removing that white space (screen) until the number of elements becomes at least 1, then showing the screen. But this solution would change the code structure a bit and could go against future plans, as this screen's functionality is still not 100% clear to me.
How it looks after adding the border line:
| gharchive/pull-request | 2023-05-13T05:35:57 | 2025-04-01T06:37:51.585883 | {
"authors": [
"Mghrabi"
],
"repo": "apache/age-viewer",
"url": "https://github.com/apache/age-viewer/pull/143",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
342782337 | AMBARI-24316 Inconsistent Ambari warnings
How was this patch tested?
21825 passing (31s)
48 pending
retest this please
Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/Ambari-Github-PullRequest-Builder/3190/
Test PASSed.
| gharchive/pull-request | 2018-07-19T15:36:46 | 2025-04-01T06:37:51.641819 | {
"authors": [
"asfgit",
"atkach"
],
"repo": "apache/ambari",
"url": "https://github.com/apache/ambari/pull/1813",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1317756878 | proposal: need an interface or function to translate old resource objects
background
At present, for ApisixRoute, Ingress, ... resource updates, we use the TranslateUpstream method to translate old objects. This method requires parsing the latest services to build objects, resulting in data inconsistency.
Because of the wrong construction of upstream objects, upstreams cannot be deleted, resulting in object redundancy.
proposal
Implement an interface or function for related resources
Get route object from cache.
Using route to construct Upstream and PluginConfig.
Compare and write APISIX.
dependencies
[ ] ApisixRoute #1177
[ ] Ingress
[ ] Gateway
[ ] HTTPRoute
[ ] ...
#1050 Or we could implement a function that regularly cleans up redundant objects in upstream.
Using route to construct Upstream and PluginConfig.
why we need this one?
Using route to construct Upstream and PluginConfig.
why we need this one?
Because the route object contains upstream_id and plugin_id. For delete events, we only need these fields
| gharchive/issue | 2022-07-26T06:24:14 | 2025-04-01T06:37:51.679076 | {
"authors": [
"AlinsRan",
"tao12345666333"
],
"repo": "apache/apisix-ingress-controller",
"url": "https://github.com/apache/apisix-ingress-controller/issues/1186",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
893529927 | docs: fix APISIX helm installation
This change was introduced from
https://github.com/apache/apisix-helm-chart/pull/74
Signed-off-by: Jintao Zhang zhangjintao9020@gmail.com
Please answer these questions before submitting a pull request
Why submit this pull request?
[x] Bugfix
[ ] New feature provided
[ ] Improve performance
[ ] Backport patches
Related issues
Bugfix
Description
How to fix?
New feature or improvement
Describe the details and related test reports.
Backport patches
Why need to backport?
Source branch
Related commits and pull requests
Target branch
Codecov Report
Merging #459 (56aefa5) into master (5d479ae) will decrease coverage by 0.01%.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #459 +/- ##
==========================================
- Coverage 37.04% 37.03% -0.02%
==========================================
Files 47 46 -1
Lines 3841 3840 -1
==========================================
- Hits 1423 1422 -1
Misses 2233 2233
Partials 185 185
Impacted Files
Coverage Δ
test/e2e/e2e.go
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4a55307...56aefa5. Read the comment docs.
| gharchive/pull-request | 2021-05-17T17:03:49 | 2025-04-01T06:37:51.691374 | {
"authors": [
"codecov-commenter",
"tao12345666333"
],
"repo": "apache/apisix-ingress-controller",
"url": "https://github.com/apache/apisix-ingress-controller/pull/459",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
787468217 | chore: Fix incorrect links
Signed-off-by: imjoey majunjiev@gmail.com
Changes:
Fix the broken the links for Contributor Guide and Committer Guide in team.md
Add missing link in committer-guide.md
ping @juzhiyuan @liuxiran @nic-chen for reviewing. Thanks.
| gharchive/pull-request | 2021-01-16T13:38:32 | 2025-04-01T06:37:51.694222 | {
"authors": [
"imjoey"
],
"repo": "apache/apisix-website",
"url": "https://github.com/apache/apisix-website/pull/144",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2488951572 | dev/release: publish adbc_core crate on https://crates.io
What would you like help with?
Would it please be possible to publish the latest release of adbc_core on https://crates.io?
[ ] Update release docs
[ ] Update release scripts
[ ] Check if we can do a prerelease
[ ] Update verification script
CC: @alexandreyc
I'm planning a release this week. I'll give it a shot.
Many thanks @lidavidm!
Thanks @lidavidm
If you need some help don't hesitate to reach out!
Thanks!
| gharchive/issue | 2024-08-27T10:38:36 | 2025-04-01T06:37:51.724991 | {
"authors": [
"alexandreyc",
"lidavidm",
"rz-vastdata"
],
"repo": "apache/arrow-adbc",
"url": "https://github.com/apache/arrow-adbc/issues/2104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2057083996 | Cannot find the fields when making an aggregation on a parquet file.
Describe the bug
Hey there. It seems that datafusion cannot recognize that the field name exists when making an aggregation on a parquet file. The code fails to run with the following error:
Error: SchemaError(FieldNotFound { field: Column { relation: None, name: "fl_date" }, valid_fields: [Column { relation: Some(Bare { table: "?table?" }), name: "FL_DATE" }, Column { relation: Some(Bare { table: "?table?" }), name: "DEP_DELAY" }, Column { relation: Some(Bare { table: "?table?" }), name: "FL_DATE" }, Column { relation: Some(Bare { table: "?table?" }), name: "DEP_DELAY" }] })
Maybe I was making some mistakes?
To Reproduce
Download the flights 1m data:
https://www.tablab.app/datasets/sample/parquet
Run the code below:
use datafusion::{
    arrow::datatypes::{DataType, Field, Schema},
    prelude::*,
};

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx: SessionContext = SessionContext::new();
    let schema = Schema::new(vec![
        Field::new("FL_DATE", DataType::Utf8, true),
        Field::new("DEP_DELAY", DataType::Int32, true),
    ]);
    let df = ctx
        .read_parquet(
            "../../dataset/flights.parquet",
            ParquetReadOptions::default().schema(&schema),
        )
        .await?;
    let df = df
        .select_columns(&["FL_DATE", "DEP_DELAY"])?
        .aggregate(vec![col("FL_DATE")], vec![sum(col("DEP_DELAY"))])?;
    df.show().await?;
    Ok(())
}
Expected behavior
The aggregated data is displayed.
Additional context
cargo.toml
[package]
name = "data_engines"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
datafusion = "34"
tokio = { version = "1.35.1", features = ["full"] }
@VungleTienan
You simply need to use raw strings in order to prevent the normalizing of the column names.
let df = df
    .select_columns(&["FL_DATE", "DEP_DELAY"])?
    .aggregate(vec![col(r#""FL_DATE""#)], vec![sum(col(r#""DEP_DELAY""#))])?;
Thanks so much @marvinlanhenke
@VungleTienan I think you can close the issue now? Best regards
| gharchive/issue | 2023-12-27T08:38:31 | 2025-04-01T06:37:51.729766 | {
"authors": [
"VungleTienan",
"marvinlanhenke"
],
"repo": "apache/arrow-datafusion",
"url": "https://github.com/apache/arrow-datafusion/issues/8660",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2022658411 | Update custom-table-providers.md
Fixes a typo in the documentation.
Thanks @nickpoorman
| gharchive/pull-request | 2023-12-03T18:31:01 | 2025-04-01T06:37:51.731041 | {
"authors": [
"Dandandan",
"nickpoorman"
],
"repo": "apache/arrow-datafusion",
"url": "https://github.com/apache/arrow-datafusion/pull/8409",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
867662415 | Array sum result is wrong with remainder fields when simd is on
Note: migrated from original JIRA: https://issues.apache.org/jira/browse/ARROW-11051
Minimal example
use arrow::{array::PrimitiveArray, datatypes::Int64Type};
fn main() {
    let mut s = vec![];
    for _ in 0..32 {
        s.push(Some(1i64));
        s.push(None);
    }
    let v: PrimitiveArray<Int64Type> = s.into();
    dbg!(arrow::compute::sum(&v));
}
dependency
arrow = { version = "2", features = ["simd"] }
The following code in compute::sum is wrong. The bit mask is checked reversed.
remainder.iter().enumerate().for_each(|(i, value)| {
    if remainder_bits & (1 << i) != 0 {
        remainder_sum = remainder_sum + *value;
    }
});
Comment from Jörn Horstmann(jhorstmann) @ 2021-01-01T10:11:26.248+0000:
Hi [~niuzr], could you try the same with the latest master branch? There were some changes and also a bugfix how the vector masking is calculated after the 2.0 release in ARROW-10216. | gharchive/issue | 2021-04-26T12:43:45 | 2025-04-01T06:37:51.736104 | {
"authors": [
"alamb"
],
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/issues/161",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2539221667 | Add a method to return the number of skipped rows in a RowSelection
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
RowSelection has a row_count method that returns the number of selected rows, but is missing a way to count the number of de-selected rows without iterating on the selectors
Describe the solution you'd like
Implement it as RowSelection::skipped_row_count
Describe alternatives you've considered
current state, which is that users have to reimplement it themselves
Additional context
Datafusion had to implement it here:
https://github.com/apache/datafusion/blob/f2159e6cae658a0a3f561ec2d15ea948213fd0f8/datafusion/core/src/datasource/physical_plan/parquet/page_filter.rs#L271-L277
Suggested by @alamb here: https://github.com/apache/datafusion/pull/12545#discussion_r1768748882
label_issue.py automatically added labels {'parquet'} from #6429
| gharchive/issue | 2024-09-20T16:39:43 | 2025-04-01T06:37:51.739760 | {
"authors": [
"alamb",
"progval"
],
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/issues/6428",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2454071672 | fix: Correctly handle take on dense union of a single selected type
Which issue does this PR close?
Closes #6206.
What changes are included in this PR?
At #5873, I naively called filter_primitive instead of filter to avoid arcing and downcasting, but this bypasses the check for when all values match the predicate, which filter_primitive expects to happen, leading to unreachable!() being hit
This PR calls filter instead of filter_primitive, and removes the added pub(crate) from filter_primitive to avoid future misuse.
Are there any user-facing changes?
No
Thank you @gstvg and @tustvold
| gharchive/pull-request | 2024-08-07T18:24:39 | 2025-04-01T06:37:51.742762 | {
"authors": [
"alamb",
"gstvg"
],
"repo": "apache/arrow-rs",
"url": "https://github.com/apache/arrow-rs/pull/6209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1669520000 | write_dataset freezes
Describe the bug, including details regarding any error messages, version, and platform.
An otherwise perfectly functioning arrow dataset does not finish the write_dataset command when passing a hive partitioning structure, and I have to interrupt R. Looking at the folder structure, it seems to be writing files perfectly well until some point after which no new files are written -- but the job isn't finished.
The dataset also writes well into a single file (write_dataset without partitioning or grouping). It also writes well when I create fewer groups than I would like to. I haven't seen anyone complain about this, so I suspect that I am doing something so silly that no one has attempted it before. Am I creating too many groups?
Grouping that works: A, B, C, D, E, where all groups are binary.
Grouping that doesn't work: A, B, C, D, X, where X has 90+ values (and not all values exist for each level of the other variables; so, say, a combination A=1, B=1, C=1, D=1 might not have X=67).
Grouping that crashes: X, A, B, C, D.
In fact, just splitting into X crashes (not freezes).
I am on Garuda Linux (Arch-based) with R version 4.2.3.
Component(s)
R
Are you able to capture a core dump or create a small script that reproduces this? Which version of Arrow are you using?
I can confirm similar behaviour with Python using pyarrow==12.0.1 with both write_dataset and the older write_to_dataset with a large number of partitions (over 5000 in my case). I'll post more details and try to dig in a bit deeper, but for now, this is mostly just to say "you're not alone" :)
Another thing to check is to monitor memory. write_dataset, if it runs long enough, will fill up the OS's disk cache. This can often lead to swapping / etc which can cause the entire system to freeze and run slowly.
Also, if you can create any kind of reproducible example we can take a look further.
| gharchive/issue | 2023-04-15T18:15:30 | 2025-04-01T06:37:51.747682 | {
"authors": [
"joshbode",
"sometimesabird",
"westonpace"
],
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/issues/35156",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2609115906 | Pyarrow.Table.join() breaks on large tables v.18.0.0.dev486
Describe the bug, including details regarding any error messages, version, and platform.
Hi,
In my task I need to join two tables. One of 18m rows and another of 487m rows.
t_18m.join(t_487m, keys=['col1', 'col2', 'col3'], join_type="left outer")
I was using the latest pyarrow version, which is 17 at the moment. While performing a join it breaks with Segmentation fault (core dumped)
I tried to investigate and found the most recent version w/o such behaviour is v13. But instead of a segmentation fault it:
either silently produces a wrong result
or breaks with ! Invalid: There are more than 2^32 bytes of key data. Acero cannot process a join of this magnitude
Next I searched the github issues and found there are many similar user cases around. That's why I didn't include too many details in this report. You probably know these issues well.
There was the enhancement request #43495, which as far as I understand has been included in v18. I installed the v.18.0.0.dev486 package in order to test, but unfortunately it still throws the segmentation fault error in my case.
So if the enhancement is already merged into v18.0.0, it still does not fix the problem.
Component(s)
Python
I also tried to reduce the size of the right table, and the working limit actually varies for me. I was not able to find the exact number. I'm getting either the seg fault or an incorrect join result.
My system has 4 TB of memory in total, so it's not an out-of-memory issue.
Here is the other system specs:
Oracle Linux Server 7.8
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16511255
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Python 3.10.15
import pyarrow as pa
pa.__version__
'18.0.0.dev486'
Hi @kolfild26 , thanks for reporting this.
There are lots of solved issues from v13 to v18 that may cause silent wrong answer or segfault in hash join, and possibly more unrevealed ones as well. So it is not too surprising that different versions behave differently.
Could you please provide us the complete schemas and the estimated sizes of both tables? And better yet, could you give a more-or-less working limit of your case? These are essential informations to investigate this issue.
Also, there might be a workaround that's worth a try: change t_18m.join(t_487m, keys=['col1', 'col2', 'col3'], join_type="left outer") to t_487m.join(t_18m, keys=['col1', 'col2', 'col3'], join_type="right outer"). (I assume t_18m is much smaller than t_487m and this will make our hash join use the small table to build the hash table.)
Thanks.
Hi @kolfild26 , thank you for the feedback and further information. I'll try to reproduce the issue. However it will be helpful if you can supply the following information as well:
Any stacktrace of the segfault;
The join cardinality, or equally, the number of rows of the (left/right) join result.
My first attempt to reproduce the issue using non-null arbitrarily random distributed columns at the same schema and scale, failed (that is, my test passed w/o segfault). So I also need information about the distributions of each key column: null probability, min/max, any high cardinality value. Thank you.
@zanmato1984
Stacktrace:
Dec 16 01:07:44 kernel: python[37938]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f10b09018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37971]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f002b0018 error 4
Dec 16 01:07:44 kernel: python[37961]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f052d0018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37957]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f072d8018 error 4
Dec 16 01:07:44 kernel: python[37940]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3f0fb07018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37974]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3d18f6d018 error 4 in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37966]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f02abf018 error 4
Dec 16 01:07:44 kernel: python[37951]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f0a2ec018 error 4
Dec 16 01:07:44 kernel: python[37973]: segfault at 7f3004626050 ip 00007f3fc25441cd sp 00007f3efb7fe018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: python[37953]: segfault at 7f3004626050 ip 00007f3fc25441db sp 00007f3f092e6018 error 4
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 kernel: in libarrow.so.1801[7f3fc1670000+2269000]
Dec 16 01:07:44 abrt-hook-ccpp: Process 35963 (python3.10) of user 1000 killed by SIGSEGV - dumping core
Here is the tables's statistics:
Script to get stats
import pyarrow as pa
import pyarrow.compute as pc
import pandas as pd
import pyarrow.types as patypes
def get_column_distributions(table):
    distributions = {}
    total_rows = table.num_rows
    for column in table.schema.names:
        col_data = table[column]
        null_count = pc.sum(pc.is_null(col_data)).as_py()
        null_percentage = (null_count / total_rows) * 100 if total_rows > 0 else 0
        # Compute the cardinality (unique count / total count)
        unique_count = pc.count_distinct(col_data.filter(pc.is_valid(col_data))).as_py()
        cardinality_percentage = round((unique_count / total_rows)*100, 3) if total_rows > 0 else 0
        if patypes.is_integer(col_data.type) or patypes.is_floating(col_data.type):
            stats = {
                "count": pc.count(col_data).as_py(),
                "nulls": null_count,
                "null_percentage": null_percentage,
                "cardinality_percentage": cardinality_percentage,
                "min": pc.min(col_data).as_py(),
                "max": pc.max(col_data).as_py(),
            }
        elif patypes.is_string(col_data.type) or patypes.is_binary(col_data.type):
            value_counts = pc.value_counts(col_data.filter(pc.is_valid(col_data)))
            stats = {
                "nulls": null_count,
                "null_percentage": null_percentage,
                "cardinality_percentage": cardinality_percentage,
                "value_counts": value_counts.to_pandas().to_dict("records"),
            }
        else:
            stats = {
                "nulls": null_count,
                "null_percentage": null_percentage,
                "cardinality_percentage": cardinality_percentage,
                "message": f"Statistics not supported for type: {col_data.type}"
            }
        distributions[column] = stats
    return distributions
small
large
Would it be easier if I attached the tables here?
@kolfild26 yeah please, that's even more useful.
The join cardinality, or equally, the number of rows of the (left/right) join result.
And also, do you have this one?
Cardinality can refer to different things. In a database context, cardinality usually refers to the number of unique values in a relational table column relative to the total number of rows in the table. So, if we are both talking about the same thing, cardinality is presented in the report above: cardinality_percentage = (unique_count / total_rows)*100
But "cardinality" can also represent the size of the join result which is what I originally asked about. Do you have that? (You can just run the right join and count the number of rows).
And thank you for the source files. I'll try to reproduce the issue using these files in my local.
Hi @kolfild26 , I've successfully run the case in my local (M1 MBP with 32GB memory, arrow 18.1.0) but didn't reproduce the issue.
My python script:
import pandas
import pickle
import pyarrow
def main():
    print("pandas: {0}, pyarrow: {1}".format(pandas.__version__, pyarrow.__version__))
    with open('small.pkl', 'rb') as f: small = pickle.load(f)
    with open('large.pkl', 'rb') as f: large = pickle.load(f)
    print("small size: {0}, large size: {1}".format(small.num_rows, large.num_rows))
    join = small.join(large, keys=['ID_DEV_STYLECOLOR_SIZE', 'ID_DEPARTMENT', 'ID_COLLECTION'], join_type='left outer')
    print("join size: {0}".format(join.num_rows))

if __name__ == "__main__":
    main()
Result:
python test.py
pandas: 2.2.3, pyarrow: 18.1.0
small size: 18201475, large size: 360449051
join size: 18201475
Did I miss something?
The resulted join size looks correct.
Could you please check:
apply filter ID_DEV_STYLECOLOR_SIZE = 88506230299 and ID_DEPARTMENT = 16556030299. It should return 2 in PL_VALUE column.
Apply sum(PL_VALUE) and it should return 58360744
That's just to eliminate 'false positive'. I mentioned that I tested on different versions and it sometimes caused a silent wrong answer even though there were no seg.fault.
If all above is correct, might the segfault error be caused by any system/os settings?
my setup
Oracle Linux Server 7.8
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16511255
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 4096
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Python 3.10.15
import pyarrow as pa
pa.__version__
'18.1.0'
apply filter ID_DEV_STYLECOLOR_SIZE = 88506230299 and ID_DEPARTMENT = 16556030299. It should return 2 in PL_VALUE column.
Correct:
>>> cond = pc.and_(pc.equal(large['ID_DEV_STYLECOLOR_SIZE'], 88506230299), pc.equal(large['ID_DEPARTMENT'], 16556030299))
>>> filtered = large.filter(cond)
>>> print(filtered)
pyarrow.Table
ID_DEV_STYLECOLOR_SIZE: int64
ID_DEPARTMENT: int64
ID_COLLECTION: int64
PL_VALUE: int64
----
ID_DEV_STYLECOLOR_SIZE: [[88506230299]]
ID_DEPARTMENT: [[16556030299]]
ID_COLLECTION: [[11240299]]
PL_VALUE: [[2]]
>
Apply sum(PL_VALUE) and it should return 58360744
No:
>>> sum = pc.sum(large['PL_VALUE'])
>>> print(sum)
461379027
That's just to eliminate 'false positive'. I mentioned that I tested on different versions and it sometimes caused a silent wrong answer even though there were no seg.fault.
Hmm, I think we should only focus on v18.1.0. As I mentioned, there are a lot of fixes ever since, so the behavior in prior versions will vary for sure, and I think most of the issues (if not all) are already addressed.
If all above is correct, might the segfault error be caused by any system/os settings?
I also verified on my Intel MBP (I just realized that we have x86-specialized SIMD code path for hash join so I wanted to see if the issue was there), but still unable to reproduce. And your setup doesn't seem to have any particular thing to do with this issue.
To proceed with the debugging:
Did you run my python script on your env to see if it runs into segfault? (And in case it doesn't, would you kindly help to fix it to make the segfault happen?) I think this is quite essential, because we need to agree on a minimal reproducible case (at least on either env of us). Then I can ask some other people to help verifying on broader environments.
Would you help to confirm the difference of sum(PL_VALUE) in my run (461379027) against yours (58360744)?
What is your CPU model?
In your original run of segfault (again, on v18.1.0), is it always reproducible or by chance?
Debugging this kind of issue is tricky and takes time and communication. I really appreciate your patience @kolfild26 , thank you!
2️⃣ I meant filter() and sum() to be applied to the resulting table, i.e. join, while you applied them to large.
3️⃣ Intel(R) Xeon(R) Gold 6246 CPU @ 3.30GHz. 4 sockets * 12 cores = 48 logical cpus
1️⃣ 4️⃣ Yes, the segfault occurs always, given the fixed size of the input tables. All recent tests I refer to are on v18.1.0
I can now confirm that the problem does exist.
By applying filter and sum on the join result, I found my previous non-segfault runs were false positive:
join = small.join(large, keys=['ID_DEV_STYLECOLOR_SIZE', 'ID_DEPARTMENT', 'ID_COLLECTION'], join_type='left outer')
print("join size: {0}".format(join.num_rows))
cond = pc.and_(pc.equal(join['ID_DEV_STYLECOLOR_SIZE'], 88506230299), pc.equal(join['ID_DEPARTMENT'], 16556030299))
filtered = join.filter(cond)
print("filtered")
print(filtered)
sum = pc.sum(join['PL_VALUE'])
print("sum")
print(sum)
Result:
filtered: PL_VALUE: [[null]]
...
sum: 33609597 # Another run emits 33609997
And I also happen to have access to an x86 Ubuntu desktop, on which I reproduced the segfault.
I'm now digging into it.
Also, considering the silent wrong answer on some platforms, I'm marking this issue critical.
Thanks alot @kolfild26 for helping me to reproduce the issue!
| gharchive/issue | 2024-10-23T15:59:43 | 2025-04-01T06:37:51.775546 | {
"authors": [
"kolfild26",
"zanmato1984"
],
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/issues/44513",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1992182622 | GH-38699: [C++][FS][Azure] Implement CreateDir()
Rationale for this change
It seems that we can't create a directory explicitly without hierarchical namespace support.
It seems that Azure Blob Storage supports only virtual directories. There are no real directories. If a file (blob) name contains "/", the file (blob) is treated as existing under a virtual directory.
It seems that Azure Data Lake Storage Gen2 supports real directories.
See also:
https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction
What changes are included in this PR?
This change chooses the following behavior:
Container can be created with/without hierarchical namespace support.
Directory can be created with hierarchical namespace support.
Directory can't be created without hierarchical namespace support. So do nothing without hierarchical namespace support. (arrow::Status::OK() is just returned.)
Are these changes tested?
Azurite doesn't support hierarchical namespace yet. So I can't test the implementation for hierarchical namespace yet. Sorry.
Are there any user-facing changes?
Yes.
Closes: #38699
@Tom-Newton What do you think about this behavior?
This change chooses the following behavior:
Container can be created with/without hierarchical namespace support.
Directory can be created with hierarchical namespace support.
Directory can't be created without hierarchical namespace support. (arrow::Status::NotImplemented is returned for this case.)
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
What do you think about this behavior?
The only part I'm questioning is whether to return arrow::Status::NotImplemented on flat blob storage accounts. Possibly it would be better just to return a success status without doing anything.
I think the right choice will depend on how much CreateDir() is used. For example if CreateDir() is used every time arrow writes a partitioned parquet table, then returning an error status could be a bit of a problem.
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
You will need an Azure account. You should be able to create a free account at https://azure.microsoft.com/en-gb/free/. You should then be able to create a storage account through the portal web UI.
I can probably write a more specific doc if needed but this is Azure's doc https://learn.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account
A few suggestions on configuration:
Use Standard general-purpose v2 not premium
Use LRS redundancy
Obviously you will want to enable hierarchical namespace.
Set the default access tier to hot
SFTP, NFS and file shares are not required.
The only part I'm questioning is whether to return arrow::Status::NotImplemented on flat blob storage accounts. Possibly it would be better just to return a success status without doing anything.
I think the right choice will depend on how much CreateDir() is used. For example if CreateDir() is used every time arrow writes a partitioned parquet table, then returning an error status could be a bit of a problem.
Good point! I'll change the behavior to just return arrow::Status::OK().
Do you have any (simple) document how to setup an account for AzureHierarchicalNamespaceFileSystemTest?
You will need an azure account. You should be able to create a free account at https://azure.microsoft.com/en-gb/free/. You should the. Be able to create a storage account through the portal web UI.
I can probably write a more specific doc if needed but this is Azure's doc https://learn.microsoft.com/en-us/azure/storage/blobs/create-data-lake-storage-account
A few suggestions on configuration:
...
Thanks! I could create an account and confirm that the implementation passes the added tests.
I'll add the provided information as a comment for other developers.
I've updated.
I'll merge this.
| gharchive/pull-request | 2023-11-14T07:55:57 | 2025-04-01T06:37:51.789818 | {
"authors": [
"Tom-Newton",
"kou"
],
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/pull/38708",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2072664306 | GH-39537: [Packaging][Python] Add a numpy<2 pin to the install requirements for the 15.x release branch
Rationale for this change
PyArrow wheels for the 15.0.0 release will not be compatible with future numpy 2.0 packages, therefore it is recommended to add this upper pin now for releases. We will keep the more flexible pin on the development branch (by reverting this commit on main, but so it can be cherry-picked in the release branch)
Closes: #39537
@github-actions crossbow submit -g wheel -g python
I think the macOS wheel failures are unrelated and started happening yesterday, probably due to: https://github.com/apache/arrow/pull/39065 being merged. I'll investigate further and open an issue.
We can remove this pin once we build our wheels with NumPy 2, right?
Yes, indeed.
| gharchive/pull-request | 2024-01-09T15:56:58 | 2025-04-01T06:37:51.793298 | {
"authors": [
"jorisvandenbossche",
"raulcd"
],
"repo": "apache/arrow",
"url": "https://github.com/apache/arrow/pull/39538",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1283581668 | What is the version of beam-runners-flink-1.11 which supports JDK 17
What needs to happen?
Need to know the version of beam-runners-flink-1.11 which supports JDK 17
Issue Priority
Priority: 1
Issue Component
Component: beam-community
Like #22041, please ask on user@beam.apache.org.
| gharchive/issue | 2022-06-24T10:34:33 | 2025-04-01T06:37:51.802239 | {
"authors": [
"chiransiriwardhana",
"manuzhang"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/22040",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2174715374 | [Failing Test]: PostCommit Java SingleStoreIO IT failing
What happened?
Since Jan 31, 2024
Fails Install Singlestore cluster
Run kubectl apply -f /runner/_work/beam/beam/.test-infra/kubernetes/singlestore/sdb-cluster.yaml
memsqlcluster.memsql.com/sdb-cluster created
error: timed out waiting for the condition on memsqlclusters/sdb-cluster
Error: Process completed with exit code 1.
Performance test also failing
Issue Failure
Failure: Test is continually failing
Issue Priority
Priority: 2 (backlog / disabled test but we think the product is healthy)
Issue Components
[ ] Component: Python SDK
[ ] Component: Java SDK
[ ] Component: Go SDK
[ ] Component: Typescript SDK
[ ] Component: IO connector
[ ] Component: Beam YAML
[ ] Component: Beam examples
[ ] Component: Beam playground
[ ] Component: Beam katas
[ ] Component: Website
[ ] Component: Spark Runner
[ ] Component: Flink Runner
[ ] Component: Samza Runner
[ ] Component: Twister2 Runner
[ ] Component: Hazelcast Jet Runner
[ ] Component: Google Cloud Dataflow Runner
workload logs:
Status: Pods have warnings. node-sdb-cluster-master
2024-03-07 15:19:31.496 EST
✓ Created node with node ID A40217C2599E6693E3D37C2BCB195DA378E230AA
2024-03-07 15:19:31.908 EST
memsqlctl will perform the following actions:
2024-03-07 15:19:31.908 EST
· Update configuration setting on node with node ID A40217C2599E6693E3D37C2BCB195DA378E230AA on port 3306
2024-03-07 15:19:31.908 EST
- Update node config file with setting minimum_core_count=0
2024-03-07 15:19:31.908 EST
{}
2024-03-07 15:19:31.908 EST
Would you like to continue? [Y/n]:
2024-03-07 15:19:31.908 EST
Automatically selected yes, non-interactive mode enabled
...
2024-03-07 15:19:38.746 EST
2024-03-07 20:19:38.746 INFO: Thread 115121 (ntid 225, conn id -1): memsqld_main: Flavor: 'production'
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.756 EST
2024-03-07 20:19:38.756 ERROR: Thread 115104 (ntid 361, conn id -1): Run: Error getting cluster database
2024-03-07 15:19:38.757 EST
2024-03-07 20:19:38.757 INFO: Thread 115121 (ntid 225, conn id -1): CreateDatabase: CREATE DATABASE `memsql` with sync durability / sync input durability, 0 partitions, 0 sub partitions, 0 logical partitions, log file size 16777216.
...
2024-03-07 15:19:40.691 EST
Started singlestore (199)
2024-03-07 15:19:40.694 EST
Ensuring the root password is setup
2024-03-07 15:19:40.787 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:19:40.845 EST
2024-03-07 20:19:40.845 INFO: Thread 115120 (ntid 344, conn id -1): OnAsyncCompileCompleted: Query information_schema.'SELECT 1' submitted 177 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 160 milliseconds
2024-03-07 15:19:40.847 EST
2024-03-07 20:19:40.847 ERROR: [0 messages suppressed] ProcessHandshakeResponsePacket() failed. Sending back 1045: Access denied for user 'root'@'localhost' (using password: NO)
...
2024-03-07 15:19:41.181 EST
2024-03-07 20:19:41.181 INFO: Thread 115120 (ntid 344, conn id -1): OnAsyncCompileCompleted: Query (null).'SELECT @@MEMSQL_VERSION' submitted 133 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 116 milliseconds
2024-03-07 15:19:50.497 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:00.494 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:10.569 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:20.496 EST
Error 2277: This node is not part of the cluster.
2024-03-07 15:20:30.496 EST
Error 2277: This node is not part of the cluster.
Status: Pods have warnings. node-sdb-cluster-leaf-ag1
2024-03-07 15:19:49.683 EST
Initializing OpenSSL 1.0.2u-fips 20 Dec 2019
2024-03-07 15:19:49.688 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:19:49.704 EST
[2024-03-07 20:19:49 startup-probe] Aborting due to query failure
2024-03-07 15:19:49.808 EST
2024-03-07 20:19:49.808 INFO: Thread 115120 (ntid 388, conn id -1): OnAsyncCompileCompleted: Query (null).'select @@version_comment limit 1' submitted 142 milliseconds ago, queued for 17 milliseconds, compiled asynchronously in 125 milliseconds
2024-03-07 15:19:54.664 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:19:54.669 EST
[2024-03-07 20:19:54 startup-probe] Aborting due to query failure
...
2024-03-07 15:24:34.665 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:24:34.671 EST
[2024-03-07 20:24:34 startup-probe] Aborting due to query failure
2024-03-07 15:24:39.661 EST
ERROR 2277 (HY000) at line 1: This node is not part of the cluster.
2024-03-07 15:24:39.667 EST
[2024-03-07 20:24:39 startup-probe] Aborting due to query failure
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115101 (ntid 408, conn id -1): Run: Error getting cluster database
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115103 (ntid 406, conn id -1): Run: Error getting cluster database
2024-03-07 15:24:42.829 EST
2024-03-07 20:24:42.829 ERROR: Thread 115104 (ntid 405, conn id -1): Run: Error getting cluster database
The k8s configuration has not been changed for months, but the cluster has suddenly been failing to create since Jan 31. CC: @AdalbertMemSQL the author
Hey @Abacn
Is it possible to somehow retrieve full workload logs?
Fixed by #30725
| gharchive/issue | 2024-03-07T20:20:00 | 2025-04-01T06:37:51.809636 | {
"authors": [
"Abacn",
"AdalbertMemSQL"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/30564",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2633874265 | [Task]: Improve Enrichment docs
What needs to happen?
There are a few targeted fixes needed for the Enrichment docs:
https://beam.apache.org/documentation/transforms/python/elementwise/enrichment/ should mention BigQuery
https://beam.apache.org/documentation/transforms/python/elementwise/enrichment/ should describe how we do batching and how we do caching (with_redis_cache)
https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/bigtable_enrichment_transform.ipynb and https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/vertex_ai_feature_store_enrichment.ipynb should elaborate a little more on what a cross-join means in this context (maybe a picture would be nice?)
In https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/bigtable_enrichment_transform.ipynb the example handler with composite row key support link is dead
Issue Priority
Priority: 2 (default / most normal work should be filed as P2)
Issue Components
[X] Component: Python SDK
[ ] Component: Java SDK
[ ] Component: Go SDK
[ ] Component: Typescript SDK
[ ] Component: IO connector
[ ] Component: Beam YAML
[ ] Component: Beam examples
[ ] Component: Beam playground
[ ] Component: Beam katas
[X] Component: Website
[ ] Component: Infrastructure
[ ] Component: Spark Runner
[ ] Component: Flink Runner
[ ] Component: Samza Runner
[ ] Component: Twister2 Runner
[ ] Component: Hazelcast Jet Runner
[ ] Component: Google Cloud Dataflow Runner
@claudevdm this would be good to pick up at some point when you have space (don't drop other things, just when this fits in nicely)
| gharchive/issue | 2024-11-04T21:22:45 | 2025-04-01T06:37:51.817490 | {
"authors": [
"damccorm"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/issues/33012",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
590715267 | [BEAM-9641] Support ZetaSQL DATE type as a Beam LogicalType
This PR adds support of all ZetaSQL (BigQuery Standard SQL) DATE functions to BeamSQL:
CURRENT_DATE
EXTRACT
DATE (constructing DATE from DATETIME not supported)
DATE_ADD
DATE_SUB
DATE_DIFF
DATE_TRUNC
FORMAT_DATE
PARSE_DATE
UNIX_DATE
DATE_FROM_UNIX_DATE
WEEK part not supported
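As a rough, untested sketch of how functions like these are typically exercised through Beam SQL (this is not code from the PR; the FROM-less literal query and the withQueryPlannerClass wiring are assumptions), a pipeline selecting the ZetaSQL planner might look like:

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.sql.SqlTransform;
import org.apache.beam.sdk.extensions.sql.zetasql.ZetaSQLQueryPlanner;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;

public class ZetaSqlDateFunctionsSketch {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline pipeline = Pipeline.create(options);

    // Literal-only query exercising a few of the DATE functions listed above.
    PCollection<Row> result =
        pipeline.apply(
            SqlTransform.query(
                    "SELECT "
                        + "DATE_ADD(DATE '2020-03-31', INTERVAL 1 DAY) AS next_day, "
                        + "DATE_DIFF(DATE '2020-12-25', DATE '2020-03-31', DAY) AS days_between, "
                        + "EXTRACT(YEAR FROM DATE '2020-03-31') AS extracted_year")
                // Assumption: the ZetaSQL planner is selected per-transform; the
                // alternative is setting plannerName in the SQL pipeline options.
                .withQueryPlannerClass(ZetaSQLQueryPlanner.class));

    pipeline.run().waitUntilFinish();
  }
}
```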
r: @apilloud
cc: @TheNeuralBit @kennknowles
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
Post-Commit Tests Status (on master branch): status badges for the Go, Java, Python, and XLang SDKs across the Apex, Dataflow, Flink, Gearpump, Samza, and Spark runners.
Pre-Commit Tests Status (on master branch): status badges for the Java, Python, Go, and Website pre-commit jobs (non-portable and portable).
See .test-infra/jenkins/README for trigger phrase, status and link of all Jenkins jobs.
What do you think about going ahead and defining the date logical type in org.apache.beam.sdk.schemas.logicaltypes? It would be useful in other contexts - for example it would give us something to map Avro's logical date type to (currently it is just overloaded with millis-instant onto DATETIME)
cc: @reuvenlax
What do you think about going ahead and defining the date logical type in org.apache.beam.sdk.schemas.logicaltypes? It would be useful in other contexts - for example it would give us something to map Avro's logical date type to (currently it is just overloaded with millis-instant onto DATETIME)
Done. Thanks for the suggestion. I made the Date type a public logical type in org.apache.beam.sdk.schemas.logicaltypes and added a layer of indirection by letting SqlTypes.DATE reference it.
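To make the effect of that change concrete, here is a hedged sketch (not from the PR; the package location of SqlTypes and the exact builder methods are assumed from the current Beam schema API) of declaring a row with a DATE logical-type field:

```java
import java.time.LocalDate;
import org.apache.beam.sdk.schemas.Schema;
import org.apache.beam.sdk.schemas.logicaltypes.SqlTypes;
import org.apache.beam.sdk.values.Row;

public class DateLogicalTypeSketch {
  public static void main(String[] args) {
    // A schema with a DATE field backed by the public date logical type;
    // SqlTypes.DATE is assumed to expose that LogicalType instance.
    Schema schema =
        Schema.builder()
            .addStringField("name")
            .addLogicalTypeField("birth_date", SqlTypes.DATE)
            .build();

    // The logical type's input type is java.time.LocalDate, while the base
    // representation stays a primitive schema type underneath.
    Row row =
        Row.withSchema(schema)
            .addValues("ada", LocalDate.of(1815, 12, 10))
            .build();

    System.out.println(row);
  }
}
```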
Oops, forgot to include in my comments: ZetaSQL's range is much smaller than the underlying type, can you add a test or two for that? How do out of range values fail? (Also worth asking, do we need any special treatment for boundary conditions (LocalDate.MIN, LocalDate.MAX)? Probably not for now.)
Ah, just realized that the previous comments were not sent out.
Could you help trigger the tests again?
For the comment on range: Thanks for pointing it out. I overlooked this problem. I would like to create a separate PR to address it, along with range testing for other types as well.
retest this please
The failing test SparkPortableExecutionTest.testExecution should be unrelated to this change.
Run Java PreCommit
Run SQL Postcommit
Rebased against master. Please run precommit tests again.
retest this please
retest this please
Java PreCommit failed due to a build failure. Please help run again.
Run Java PreCommit
Run Java PreCommit
Run Java PreCommit
Something just occurred to me - are there any tests that use the DATE Type in an aggregation (e.g. MAX)?
I'd think that would run into the same issue I have in #11456
Interesting question. You should probably add a test for JOIN as well, which will have a similar class of problems.
are there any tests that use the DATE Type in an aggregation (e.g. MAX)?
No. Thanks for bringing this up. I think it is likely to run into the problem.
| gharchive/pull-request | 2020-03-31T01:30:19 | 2025-04-01T06:37:51.855415 | {
"authors": [
"TheNeuralBit",
"apilloud",
"robinyqiu"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/11272",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
664949338 | [BEAM-10572] Eliminate nullability errors from :sdks:java:extensions:sql:datacatalog
Fixing the nullability issues for sub-module sdks:java:extensions:sql:datacatalog
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[x] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
Post-Commit Tests Status (on master branch): status badges for the Go, Java, Python, and XLang SDKs across the Dataflow, Flink, Samza, Spark, and Twister2 runners.
Pre-Commit Tests Status (on master branch): status badges for the Java, Python, Go, and Website pre-commit jobs (non-portable and portable).
See .test-infra/jenkins/README for trigger phrase, status and link of all Jenkins jobs.
Run SQL_Java11 PreCommit
Run SQL_Java11 PreCommit
Hi Jayendra, thanks for volunteering to fix these issues.
The SQL_Java11 precommit failure doesn't look like a flake, it seems like this change introduced an error. Please take a look.
05:09:22 > Task :sdks:java:extensions:sql:datacatalog:compileTestJava FAILED
05:09:22 error: option -Xbootclasspath/p: cannot be used together with --release
05:09:22 Usage: javac <options> <source files>
05:09:22 use --help for a list of possible options
Hi Jayendra, thanks for volunteering to fix these issues.
The SQL_Java11 precommit failure doesn't look like a flake, it seems like this change introduced an error. Please take a look.
05:09:22 > Task :sdks:java:extensions:sql:datacatalog:compileTestJava FAILED
05:09:22 error: option -Xbootclasspath/p: cannot be used together with --release
05:09:22 Usage: javac <options> <source files>
05:09:22 use --help for a list of possible options
When I run that task locally I don't get any error.
jayendra@alienware:~/beam$ ./gradlew :sdks:java:extensions:sql:datacatalog:compileTestJava
To honour the JVM settings for this build a new JVM will be forked. Please consider using the daemon: https://docs.gradle.org/5.2.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Configuration on demand is an incubating feature.
Deprecated Gradle features were used in this build, making it incompatible with Gradle 6.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/5.2.1/userguide/command_line_interface.html#sec:command_line_warnings
BUILD SUCCESSFUL in 42s
73 actionable tasks: 37 executed, 1 from cache, 35 up-to-date
is it something specific to java version ?
Ah, yes. To run the same commands as the Java11 builds you must set some properties. For example on mac:
git checkout github/pr/12366 # I have fetch spec set up like this
./gradlew \
-PcompileAndRunTestsWithJava11 \
-Pjava11home=/Library/Java/JavaVirtualMachines/jdk-11-latest/Contents/Home \
:sdks:java:extensions:sql:datacatalog:compileTestJava
I actually got a different failure: https://gradle.com/s/gegzdydsbu2si
Execution failed for task ':sdks:java:extensions:sql:datacatalog:compileTestJava'.
> release version 11 not supported
That seems a bit odd. I think I did not reproduce the problem properly or there are differences between my JDK11 and the one on Jenkins.
CC @tysonjh as we talked the other day about whether it mattered to make these unique Gradle targets versus just configuring these properties in the Jenkins job.
I expect this is a conflict between checkerframework and Java 11. Java 9+ are supported since 3.x but it seems it might rely on -Xbootclasspath/p option which has been removed. It may be that there is something specific to Gradle and how it uses a forked JVM in order to compile and run with Java 11.
Since checker is only needed for static analysis of the source, it can be disabled for Java 11 builds.
According to https://github.com/kelloggm/checkerframework-gradle-plugin/issues/43#issuecomment-551104939 it should be OK. I will submit a report or ask on the user list.
Ah, yes. To run the same commands as the Java11 builds you must set some properties. For example on mac:
git checkout github/pr/12366 # I have fetch spec set up like this
./gradlew \
-PcompileAndRunTestsWithJava11 \
-Pjava11home=/Library/Java/JavaVirtualMachines/jdk-11-latest/Contents/Home \
:sdks:java:extensions:sql:datacatalog:compileTestJava
I actually got a different failure: https://gradle.com/s/gegzdydsbu2si
Execution failed for task ':sdks:java:extensions:sql:datacatalog:compileTestJava'.
> release version 11 not supported
That seems a bit odd. I think I did not reproduce the problem properly or there are differences between my JDK11 and the one on Jenkins.
CC @tysonjh as we talked the other day about whether it mattered to make these unique Gradle targets versus just configuring these properties in the Jenkins job.
When I run with Java 11 I use the following (example from different task):
./gradlew clean
./gradlew -Dorg.gradle.java.home=/usr/local/buildtools/java/jdk11 :runners:direct-java:validatesRunner --scan
Tyson's command will use JDK11 also for the main gradle task, while still having source and target Java version 8. Since checker does support Java 11 and is aware of the removal of -Xbootclasspath/p perhaps that approach causes the plugin to configure flags appropriately.
Confirmed that Tyson's command worked.
Yup: https://github.com/kelloggm/checkerframework-gradle-plugin/blob/6739a86cf030ab35634a2b0ab6ac8859fe835473/src/main/groovy/org/checkerframework/gradle/plugin/CheckerFrameworkPlugin.groovy#L373
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Disable checker(for java 11) for whole project or for just this package ?
Filed kelloggm/checkerframework-gradle-plugin#117. We don't have to wait for a fix, though. We can just disable checker for Java 11 for now, and we will probably at some point switch to Tyson's invocation for most tests - compiling and running on JRE11 but with Java 8 settings.
Disable checker(for java 11) for whole project or for just this package ?
Disable the checker for the whole project.
Can some one point me how to disable that. I know we have to modify some groovy script under .test-infra/jenkins, but don't know exactly where and how ?
Here: https://github.com/apache/beam/blob/de8ff705145cbbc41bea7750a0a5d3553924ab3a/buildSrc/src/main/groovy/org/apache/beam/gradle/BeamModulePlugin.groovy#L763
This block should be skipped if compileAndRunTestsWithJava11 is set and Gradle's configuration phase runs using JDK8. I think it would be fine to just skip that block whenever compileAndRunTestsWithJava11 is set.
It also looks like -PskipCheckerFramework is supported by kelloggm/checkerframework-gradle-plugin so you can just add that flag to the Jenkins job. That is probably best.
Run SQL_Java11 PreCommit
It also looks like -PskipCheckerFramework is supported by kelloggm/checkerframework-gradle-plugin so you can just add that flag to the Jenkins job. That is probably best.
It's was there so just triggered the job.
| gharchive/pull-request | 2020-07-24T06:32:54 | 2025-04-01T06:37:51.899921 | {
"authors": [
"aromanenko-dev",
"ibzib",
"jayendra13",
"kennknowles",
"tysonjh"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/12366",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
781554351 | Remove redundant or inappropriate pieces of the capability matrix
Each commit is an independent change that removes something that should not be in the capability matrix:
runners that are on branches and not released
features that are not designed and may even require inventing new things, or may be impossible
redundant columns
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[x] Choose reviewer(s) and mention them in a comment (R: @username).
[x] Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
[x] Update CHANGES.md with noteworthy changes.
[x] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
Post-Commit Tests Status (on master branch): status badges for the Go, Java, Python, and XLang SDKs across the Dataflow, Flink, Samza, Spark, and Twister2 runners.
Pre-Commit Tests Status (on master branch): status badges for the Java, Python, Go, Website, Whitespace, and Typescript pre-commit jobs (non-portable and portable).
See .test-infra/jenkins/README for trigger phrase, status and link of all Jenkins jobs.
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
http://apache-beam-website-pull-requests.storage.googleapis.com/13694/documentation/runners/capability-matrix/index.html
http://apache-beam-website-pull-requests.storage.googleapis.com/13694/documentation/runners/capability-matrix/index.html
@griscz wdyt?
@griscz wdyt?
Let me know when it is a good time to rebase and fix this one up.
the website revamp is done, you should be fine to rebase now.
Done. PTAL. Incidentally, the hack to add /index.html to all the URLs seems to be gone (needed to browse seamlessly on GCS staging).
Incidentally, the hack to add /index.html to all the URLs seems to be gone (needed to browse seamlessly on GCS staging).
I think this only happened on the runner detail links for some reason. I filed BEAM-11860 for it
| gharchive/pull-request | 2021-01-07T19:30:21 | 2025-04-01T06:37:51.937759 | {
"authors": [
"TheNeuralBit",
"kennknowles"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/13694",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1390027142 | Add documentation link to the interactive environment
Part of a documentation audit to add relevant gcloud docs links.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
R: @KevinGG
| gharchive/pull-request | 2022-09-28T22:47:49 | 2025-04-01T06:37:51.945543 | {
"authors": [
"rohdesamuel"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/23409",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1411749349 | [Playground] Examples CD
Examples CD optimization
The main feature compared to the old workflow is running the examples on the production backend. This ensures that examples really work after deployment, and saves the resources spent on building a temporary runner.
Introduce the BEAM_USE_WEBGRPC variable in grpc_client.py. It enables the WEBGRPC protocol instead of GRPC, to access the public production endpoints.
Reusable workflow to optimize GH workflows
addresses #23463
addresses #23464
addresses #23465
See example CD run of this job with necessary amendments (private GCP credentials, push GH triggers)
python passes and go doesn't: known issue #23600
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Choose reviewer(s) and mention them in a comment (R: @username).
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI.
lgtm
| gharchive/pull-request | 2022-10-17T15:05:57 | 2025-04-01T06:37:51.955345 | {
"authors": [
"eantyshev",
"olehborysevych"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/23664",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2049051281 | Build and publish multi-arch wheels separately from main wheels
Right now, building wheels is failing because it tries to upload an artifact for the multi-arch wheels with the same name as the regular wheels. Previously this worked silently and one was dropped. This fixes that issue by appending -aarch64 to the multi-arch wheels and makes sure that it gets uploaded to gcs as well.
Example succeeding now - https://github.com/apache/beam/actions/runs/7266076148
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
[ ] Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
[ ] Update CHANGES.md with noteworthy changes.
[ ] If this contribution is large, please file an Apache Individual Contributor License Agreement.
See the Contributor Guide for more tips on how to make review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.
R: @jrmccluskey
| gharchive/pull-request | 2023-12-19T16:49:48 | 2025-04-01T06:37:51.963279 | {
"authors": [
"damccorm"
],
"repo": "apache/beam",
"url": "https://github.com/apache/beam/pull/29821",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2345473725 | BIGTOP-4126. Remove obsolete test resources for Hue.
https://issues.apache.org/jira/browse/BIGTOP-4126
Obsolete test resources for Hue cause a compilation error. This causes an issue with mvn deploy in the release process.
[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy:[40,23] 1. ERROR in /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy (at line 40)
Shell sh = new Shell();
^
Groovy:expecting ']', found ';' @ line 40, column 25.
[ERROR] /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy:[48,1] 2. ERROR in /home/iwasakims/srcs/bigtop/bigtop-tests/test-artifacts/hue/src/main/groovy/org/apache/bigtop/itest/huesmoke/TestHueSmoke.groovy (at line 48)
sh.exec("curl -m 60 --data '${creds}' ${loginURL}");
^
Groovy:unexpected token: sh @ line 48, column 5.
...
+1, I made sure that the mvn commands described in the release process, which run before deploying artifacts, succeed with this PR. Thanks @iwasakims.
cherry-picked this to master branch.
| gharchive/pull-request | 2024-06-11T06:05:34 | 2025-04-01T06:37:51.966283 | {
"authors": [
"iwasakims",
"sekikn"
],
"repo": "apache/bigtop",
"url": "https://github.com/apache/bigtop/pull/1280",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
800762691 | Move source cache to proto based service
See original issue on GitLab
In GitLab by [Gitlab user @raoul].hidalgocharman on May 29, 2019, 13:48
Background
The source cache should move towards using a protocol buffer based format similar to new artifact service architecture (overall plan described in #909). The reference service should be kept for now to allow it to be used with older buildstream clients.
Task description
[x] Design new source protos. This should be similar to artifact protos and may include metadata such as the sources provenance data.
[x] Implement new SourceCacheService that uses this.
[x] Use new SourceCacheService in SourceCache for pulling and pushing sources.
Acceptance Criteria
All old source cache tests pass using new source protos.
In GitLab by [Gitlab user @raoul].hidalgocharman on May 29, 2019, 13:48
mentioned in merge request !1362
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 19, 2019, 12:11
mentioned in merge request !1410
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 21, 2019, 14:13
assigned to [Gitlab user @raoul].hidalgocharman
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 8bbea0cc9d1c07dbcf7bba563082c31694d92220
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 35214d0d87c759788a1be22d5e9b77ddecf9806e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 8e264f093816a63f77e52d58332e8b32d713eb92
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 09:30
mentioned in commit 55bf188cb681ca0683dda3c82202ee21369cb47d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 12:16
marked the task Design new source protos. This should be similar to artifact protos and may include metadata such as the sources provenance data. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 25, 2019, 12:16
marked the task Implement new SourceCacheService that uses this. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit c9f8581531a0d583ef3cb21519aefb9e1ba66bd4
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit ef712320ebd39f2259b90ce7cd3f7d9ffdb28a5c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:08
mentioned in commit 544c02367784e2e401760bf171d8b61ae8d3959a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 8c9df8706969b5346b404c1581299fbcf4e3676e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 4533989e94d68b80ea1dfd7a59dd7177417f91e7
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 0d6a1e9fa891e27883689bd65a8f7b46e991162c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 83352742821de343b499b597c751667fc4d6a763
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 14:15
mentioned in commit 12874bd9a67492a6d58d5aadd9ed8d5b737a1e0a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 26, 2019, 15:50
mentioned in commit 4dc530b338a4e434bd315d5f2d4102b563f49c77
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 4032c81019e8dd69ca541e87f99f484ffde3db52
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 357ab273a01711a592b0ca8de99b501e2c1cb62e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 78d567dddea3c28f4e737db00a092ae8a82c405e
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit 1581ff9cd6f6d8a2ce32d61446de300d85bae4e8
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:12
mentioned in commit a8e04529d62f5c7c9894c45b83330823d4eb6bba
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:42
mentioned in commit 6c540804827d18deb562afb3da19f4039a25c81a
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:42
mentioned in commit 0397b66e9bda61035253fa718eff59538a7d211d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:48
mentioned in merge request !1435
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:55
mentioned in commit c76881a12a4dc10194ea3999f6d7d76181c34244
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 27, 2019, 15:55
marked the task Use new SourceCacheService in SourceCache for pulling and pushing sources. as completed
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:30
mentioned in commit 5ce4cc7499d8eab81f0fa5f9f3e93249d443605f
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:30
mentioned in commit 6b42c2da0cc248ef0d0819c892feba6d10ca19f1
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit abea6490cb6036b1fa9c898879e5e20956007393
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit caba7d3a59ed24f1edf86366b8a7cf3675b9c1fc
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit b5e84029b07e95b281dfcc0da73f274e1b55b1c3
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:31
mentioned in commit 47ace4076c34f43b9c92ebc01ef3f354b501f5d2
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit 4e2c3b28d89ded00f62e226ee5a3db8a96530c73
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit f9ffbfe9af2bb8563af54b4b7ccc98f5292da030
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit ac7a02fbef5fea5f8ace2357111e3e99794fb515
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit b80260e694c9ac88bd45b79319e10e8c82f7c84f
In GitLab by [Gitlab user @raoul].hidalgocharman on Jun 28, 2019, 15:45
mentioned in commit 3f3a29fed5e3dac3634208bf7b724314cfb2896d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 9497b02f9eae40e9dca97c1b325cc6ca6a52fe81
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 2a58f66d09600bdc79d6d25d771274bcc1544434
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 16:59
mentioned in commit 6b13a80e5f4e9792768b04e45266b74944daa643
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit eee1512447206b28d62b7bce38e86d96e6c19a3c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit e41517f305c23d5efce24ae4a91326fc81d93a32
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 82de872b3ec4eba90e8c022476bf760f350b81b2
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 5fc9ba356b967100260bbfc2aedfa19a193fbf86
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 2259e0453aedfa35f1a7539c19a629302fa9fd82
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 3cd1e50a38595da169e10f9c7f6b5912cb745264
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 14be3c3070f06f3d8194daa8474c035b5776fbc3
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 3063d1df3dba3ceab7ffa9fc7227cf4102d13c3d
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit 97858e21acf8497543e94f5338b035b12ff1ca1c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 1, 2019, 17:39
mentioned in commit ec012e5cd21ce3e0840ad54cd0becd2cb21eb889
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 0d69662b5d4711098d0bbf6f15b5c0f7da8098a7
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 5dc76fdb7833fa71d89c2f56a4be098cecf7495b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit a4e8907c60bffcb51c4057327a9300938df87956
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit 5158c9fce3dae4f88f20a1e6bf48a0d426ac2036
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 2, 2019, 16:23
mentioned in commit c12587ce8e1ef32f1b66ff0399d5b76650f57c90
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit ab06c00ded23866ec88ce9132fe3000c9bfe823b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit d06bacd228313b943224abb92435915b3a177d23
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 46418bf79a740fb6c906962e52c243243d2849bb
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 8d7afd7514f701870ed8333b822a2bb2682ef0de
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:06
mentioned in commit 1fb4716df7caae754501717ad519f5f1e5268a7c
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 13:47
mentioned in commit 2d48dc85f2814d6d2b04e48917c09a415cdbd540
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 5, 2019, 15:44
mentioned in commit b15b32376f6abe1735233bd5d85c42fd1ad5a703
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit bb2cf18be0aef7d6e394a0c6ff6d83eac737c60b
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit 6b6e04ddb1c03f683e3e5591f057207d1303e6b6
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit d493682609f8f96ed127f4083bad42fa2fabb250
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit c20eac1e7ac80e1dc36b23b04affacfbe2cca338
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 11:40
mentioned in commit d61e21448953942ce90d457dad7189c0dda61bc7
In GitLab by [Gitlab user @marge-bot123] on Jul 8, 2019, 12:17
closed via merge request !1435
In GitLab by [Gitlab user @marge-bot123] on Jul 8, 2019, 12:17
mentioned in commit cf0516ba92fb4a220ae0086c411314cec4974df5
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit d1be05c771bdbe054f01eaaea977d4b20e401354
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit dc689098d164510eb22820776f9c8cf1cf1fd642
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit 38f7bffd87ffd901e316a7966fefc9d189658e19
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit c02c0170058f36eff722bb341edbed3feae18145
In GitLab by [Gitlab user @raoul].hidalgocharman on Jul 8, 2019, 16:41
mentioned in commit 107bae99f159d22cbedd155c5db9255782fad3c0
| gharchive/issue | 2021-02-03T22:48:12 | 2025-04-01T06:37:52.037348 | {
"authors": [
"BuildStream-Migration-Bot"
],
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1038",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
800801003 | Add API for composing lists in yaml nodes
See original issue on GitLab
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:14
Background
As part of #1061 we need to perform composition between lists like so:
# bar.bst
kind: autotools
dependencies:
(>):
- foo-lib.bst
# project.conf
elements:
autotools:
dependencies:
- autotools.bst
These lists are expected to be composed into this:
- autotools.bst
- foo-lib.bst
This means we need to do something along the lines of:
default_deps = _yaml.node_get(project_conf, list, "dependencies")
deps = _yaml.node_get(element, list, "dependencies")
_yaml.composite(default_deps, deps)
But this will not work, because _yaml.composite() will only deal with _yaml.Nodes.
Task description
We should add some form of an API to allow doing this - I can see either of these things working:
_yaml.composite() learns to deal with plain lists - the problem here is that we'd struggle providing provenance data, and the behavior of composite(list, list) isn't obvious (although !1601 will probably make that "safe append", at least for dependencies).
_yaml.get_node() returns proper _yaml.Nodes when type=_yaml.Node for lists - this feels a bit more reasonable, but I'm likely overlooking something :)
Acceptance Criteria
We should be able to compose lists without creating naughty synthetic nodes.
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:15
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:15
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:16
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 11:18
changed the description
In GitLab by [Gitlab user @tlater] on Jul 3, 2019, 12:02
changed the description
| gharchive/issue | 2021-02-03T23:54:26 | 2025-04-01T06:37:52.045863 | {
"authors": [
"BuildStream-Migration-Bot"
],
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1062",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
777202019 | Checkout needs sandbox even when run with --no--intergration
See original issue on GitLab
In GitLab by [Gitlab user @willsalmon] on Jul 7, 2020, 11:27
Summary
Checkout needs a sandbox even when run with --no--intergration
This means that if a project was built in CI on a different arch, or with RE and the RE bots are busy, artefacts cannot be checked out even if they are just a tar/docker-image/single file.
We could do it with an approach like https://gitlab.com/BuildStream/buildstream/-/merge_requests/1983, or we could tweak how the sandbox is invoked so that it does not need a real sandbox.
In GitLab by [Gitlab user @cs-shadow] on Jul 14, 2020, 22:02
I'm not sure if I follow. This is already possible, and precisely the reason we have the dummy sandbox (SandboxDummy). This should automatically get created when buildbox-run isn't available for whatever reason.
If this is not happening, this would be a bug in BuildStream. If so, please share more details about it, and how to reproduce it.
In GitLab by [Gitlab user @willsalmon] on Jul 15, 2020, 10:51
For this case buildbox-run is available but does not support the target arch.
The element was made by using remote execution or CI with a different arch. By setting the sandbox arch you can get the right cache key and pull down the artefact. But when you try to check it out, the sandbox does not create a dummy but complains that the arch is not supported for a full sandbox.
This makes sense when --no-intergration is not used but does not make sense if --no-intergration is specified.
In GitLab by [Gitlab user @cs-shadow] on Jul 15, 2020, 11:03
Thanks. I think the fix in that case should be to ensure that we do use the dummy sandbox in such code paths, rather than circumventing that in places other than the sandbox module. This keeps all related logic in one place and avoids unnecessary forks in the code.
In GitLab by [Gitlab user @cs-shadow] on Jul 15, 2020, 11:04
mentioned in merge request !1983
In GitLab by [Gitlab user @willsalmon] on Jul 15, 2020, 13:49
The issue that I had was that AFAICT we pick which sandbox to use for the run really early, at the platform level, so you lose the chance to fall back at the point where we actually invoke it. I'm not sure if that's true, but that's what it looked like when I looked. I don't know if [Gitlab user @cs-shadow] or [Gitlab user @juergbi] can point me in the right direction for how to fix this sensibly.
| gharchive/issue | 2021-01-01T05:18:57 | 2025-04-01T06:37:52.059776 | {
"authors": [
"BuildStream-Migration-Bot"
],
"repo": "apache/buildstream",
"url": "https://github.com/apache/buildstream/issues/1351",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1063769227 | removed deprecated version of copy to clipboard method to clipboard.w…
…riteText
Thanks for the copyToClipboard refactoring.
Could you please remove the changes in the render() method from the PR. They make the code cleaner, but they are not related to this PR (removing the deprecated copy-to-clipboard method). It is better to make them in a separate one.
| gharchive/pull-request | 2021-11-25T16:22:44 | 2025-04-01T06:37:52.067836 | {
"authors": [
"httpsOmkar",
"mgubaidullin"
],
"repo": "apache/camel-karavan",
"url": "https://github.com/apache/camel-karavan/pull/129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
726843647 | Camel Avro RPC component native support
This one is for camel-avro-rpc component. We already have the avro dataformat in native https://github.com/apache/camel-quarkus/issues/782
Please, assign to me.
| gharchive/issue | 2020-10-21T20:33:29 | 2025-04-01T06:37:52.069261 | {
"authors": [
"JiriOndrusek",
"ppalaga"
],
"repo": "apache/camel-quarkus",
"url": "https://github.com/apache/camel-quarkus/issues/1941",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1838100706 | (chores) camel-optaplanner: reduce the time spent trying to solve the problems
Signed-off-by: Otavio R. Piske angusyoung@gmail.com
Before: Total time: 14:04 min
After: Total time: 01:14 min
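The diff itself is not shown here, but the usual lever for this kind of test speed-up is a tighter termination limit on the OptaPlanner solver. The sketch below is only a generic illustration of that idea under assumed class and resource names, not the actual change in this PR:

```java
import java.time.Duration;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.solver.SolverConfig;

public class ShortSolveSketch {
  public static void main(String[] args) {
    // Hypothetical solver config resource name; the relevant part is the
    // spent-time limit, which caps how long each test solve may run.
    SolverConfig solverConfig =
        SolverConfig.createFromXmlResource("org/example/exampleSolverConfig.xml")
            .withTerminationSpentLimit(Duration.ofSeconds(5));

    SolverFactory<Object> solverFactory = SolverFactory.create(solverConfig);
    // solverFactory.buildSolver().solve(problem) would now stop after about
    // 5 seconds instead of running until the default termination kicks in.
  }
}
```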
| gharchive/pull-request | 2023-08-06T08:02:36 | 2025-04-01T06:37:52.070487 | {
"authors": [
"orpiske"
],
"repo": "apache/camel",
"url": "https://github.com/apache/camel/pull/11012",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
240187817 | [CARBONDATA-1259] CompareTest improvement
changes:
check query result details, report error if result is not the same
add support for comparison with ORC file
add decimal data type
Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/294/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2878/
retest this please
Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/314/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2900/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2902/
Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/316/
Build Success with Spark 1.6, Please check CI http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/317/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2903/
LGTM
| gharchive/pull-request | 2017-07-03T14:22:55 | 2025-04-01T06:37:52.076687 | {
"authors": [
"CarbonDataQA",
"chenliang613",
"jackylk"
],
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/1129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
283167863 | [CARBONDATA-1903] Fix code issues in carbondata
Be sure to do all of the following checklist to help us incorporate
your contribution quickly and easily:
[x] Any interfaces changed?
No
[x] Any backward compatibility impacted?
No
[x] Document update required?
No
[x] Testing done
Please provide details on
- Whether new unit test cases have been added or why no new tests are required?
No, only fixed code related issues.
- How it is tested? Please attach test report.
Tested in local machine
- Is it a performance related change? Please attach the performance test report.
No, only fixed code related issues.
- Any additional information to help reviewers in testing this change.
No
[x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
Not related
Modification
Remove unused code like FileUtil
Fix/Optimize code issues in carbondata
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2126/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/902/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2412/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2437/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2153/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/924/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2162/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/933/
retest this please
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2241/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1017/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2499/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2242/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1018/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2246/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1023/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2505/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2251/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1028/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2507/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2514/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2262/
retest this please
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1040/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2285/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1069/
retest this please
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2291/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1075/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1078/
retest this please
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2307/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1091/
retest this please
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1110/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2330/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2352/
retest this please
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1135/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2355/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1137/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1139/
SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/2568/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1143/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2373/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2404/
Build Failed with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1181/
retest this please
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/2435/
Build Success with Spark 2.2.0, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/1211/
| gharchive/pull-request | 2017-12-19T10:01:28 | 2025-04-01T06:37:52.109662 | {
"authors": [
"CarbonDataQA",
"ravipesala",
"xuchuanyin"
],
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/1678",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
320571195 | [CARBONDATA-2440] doc updated to set the property for SDK user
[x] Any interfaces changed? NO
[x] Any backward compatibility impacted? No
[x] Document update required? ==> Yes
[x] Testing done ==> All UT and SDV success reports are enough.
[x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. NA
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4745/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5667/
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4507/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4746/
retest this please
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5671/
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4511/
LGTM
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4686/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5842/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4887/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5883/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4918/
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4731/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/4919/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/5885/
@xubo245 review comments resolved .
please import CarbonProperties before using it.
@xubo245 import done
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5016/
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4844/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6003/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6014/
Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4856/
Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6022/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5029/
Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4863/
SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/5036/
Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/4874/
Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/6033/
retest this please
LGTM
| gharchive/pull-request | 2018-05-06T07:51:06 | 2025-04-01T06:37:52.128057 | {
"authors": [
"CarbonDataQA",
"rahulforallp",
"ravipesala",
"sgururajshetty",
"xubo245"
],
"repo": "apache/carbondata",
"url": "https://github.com/apache/carbondata/pull/2274",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
127687962 | CLIMATE-379 - Allows dataset customisation
This patch allows the user to customise the name of the local dataset that is being uploaded in the web-app.
Can one of the admins verify this patch?
hey @lewismc @MJJoyce, please have a look at this patch.
I am +1, any comments @MJJoyce ?
:+1:
Please commit @Omkar20895
I haven't been granted the write access to the repository yet. right??
No you have, the canonical source is here
https://git-wip-us.apache.org/repos/asf/climate.git
The Github code is merely a mirror and is not the canonical source as it is
not hosted at the Apache Software Foundation.
The link above is hosted at the ASF and therefore canonical.
Thanks
Lewis
On Wed, Jan 27, 2016 at 9:38 AM, Omkar notifications@github.com wrote:
I haven't been granted the write access to the repository yet. right??
—
Reply to this email directly or view it on GitHub
https://github.com/apache/climate/pull/276#issuecomment-175762787.
--
Lewis
| gharchive/pull-request | 2016-01-20T13:57:59 | 2025-04-01T06:37:52.146927 | {
"authors": [
"MJJoyce",
"OCWJenkins",
"Omkar20895",
"lewismc"
],
"repo": "apache/climate",
"url": "https://github.com/apache/climate/pull/276",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
731333422 | packaging: enforce new min. CloudStack version 4.15 starting GA/1.0
There are many changes, including API changes in upstream master/4.15
which make it challenging to maintain backward compatibility of Primate
with older versions of CloudStack. Therefore we need to ensure that the
rpm and deb Primate pkgs require CloudStack 4.15 as minimum version.
This would still leave some flexibility for advanced users of archive
builds (which adds risks that some features don't work with 4.14 or
older versions).
Following this we need to update https://github.com/apache/cloudstack-documentation/pull/150 as well wrt the min. version Primate will support and installation instructions. By default, we'll ship primate with every cloudstack repo so users won't need to setup the repo themselves (the other way is for cloudstack-management to install the repo config automatically).
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build primate packages. I'll keep you posted as I make progress.
Packaging result: :heavy_check_mark:centos :heavy_check_mark:debian :heavy_check_mark:archive.
QA: http://primate-qa.cloudstack.cloud:8080/client/pr/841 (JID-3629)
| gharchive/pull-request | 2020-10-28T10:55:38 | 2025-04-01T06:37:52.157650 | {
"authors": [
"blueorangutan",
"rhtyd"
],
"repo": "apache/cloudstack-primate",
"url": "https://github.com/apache/cloudstack-primate/pull/841",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
463346204 | Misuses of cryptographic APIs
Hi
The following lines have cryptographic API misuses.
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
File name => utils/src/main/java/com/cloud/utils/nio/Link.java: Line number => 371: API name => KeyStore: Second parameter should never be of type java.lang.String.
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 30: API name => MessageDigest: Unexpected call to method <java.security.MessageDigest: byte[] digest()> on object of type java.security.MessageDigest. Expect a call to one of the following methods <java.security.MessageDigest: void update(byte[])>, <java.security.MessageDigest: void update(byte[],int,int)>, <java.security.MessageDigest: byte[] digest(byte[])>, <java.security.MessageDigest: void update(java.nio.ByteBuffer)>, <java.security.MessageDigest: void update(byte)>
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 37: API name => MessageDigest:
File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 52: API name => MessageDigest: Unexpected call to method reset on object of type java.security.MessageDigest. Expect a call to one of the following methods digest,update
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 81: API name => Cipher:
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 67: API name => MessageDigest: First parameter (with value "MD5") should be any of {SHA-256, SHA-384, SHA-512}
File name => utils/src/main/java/com/cloud/utils/EncryptionUtil.java: Line number => 63: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/SwiftUtil.java: Line number => 234: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/SwiftUtil.java: Line number => 234: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 72: API name => KeyStore: Unexpected call to method store on object of type java.security.KeyStore. Expect a call to one of the following methods getKey,getEntry
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 117: API name => KeyStore: Unexpected call to method store on object of type java.security.KeyStore. Expect a call to one of the following methods getKey,getEntry
File name => utils/src/main/java/com/cloud/utils/EncryptionUtil.java: Line number => 63: API name => SecretKeySpec:
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 79: API name => Cipher: First parameter (with value "RSA/None/PKCS1Padding") should be any of RSA/{Empty String, ECB}
File name => utils/src/main/java/com/cloud/utils/security/CertificateHelper.java: Line number => 99: API name => KeyStore: Second parameter should never be of type java.lang.String.
File name => utils/src/main/java/com/cloud/utils/crypt/RSAHelper.java: Line number => 81: API name => Cipher:
@mhp0rtal can you give expoits for any of those isses?
Can you also please give a version on which these apply, as the first three do not show code matching the message;
1: File name => utils/src/main/java/com/cloud/utils/ssh/SSHKeysHelper.java: Line number => 75: API name => MessageDigest:
line 71 is an empty line
2: File name => utils/src/main/java/com/cloud/utils/nio/Link.java: Line number => 371: API name => KeyStore:Second parameter should never be of type java.lang.String.
call on line 371 has only one parameter
3: File name => utils/src/main/java/org/apache/cloudstack/utils/security/DigestHelper.java: Line number => 30: API name => MessageDigest:Unexpected call to method <java.security.MessageDigest: byte[] digest()> on object of type java.security.MessageDigest. Expect a call to one of the following methods <java.security.MessageDigest: void update(byte[])>,<java.security.MessageDigest: void update(byte[],int,int)>,<java.security.MessageDigest: byte[] digest(byte[])>,<java.security.MessageDigest: void update(java.nio.ByteBuffer)>,<java.security.MessageDigest: void update(byte)>
line 30 is empty
I stopped checking there but I propose you debug your tool of investigation.
I'm closing this issue but if you feel it is still valid, please add needed extra info and reopen.
| gharchive/issue | 2019-07-02T17:20:43 | 2025-04-01T06:37:52.167024 | {
"authors": [
"DaanHoogland",
"mhp0rtal"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/3459",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1211758533 | Place VM's on the same network as the CloudStack-management server
ISSUE TYPE
Other
COMPONENT NAME
UI, IP
CLOUDSTACK VERSION
4.16.0.0
SUMMARY
I am trying to find a method of placing newly deployed VM's on the same network as my management server, but so far it only allows me to create guest VM's behind a NAT with a private IP address. I need to create multiple VM's hosting services that can be routed to an external DNS. My concern is that the NAT will make routing to these machines impossible.
I have tried changing the network offering on my guest network from "offering for isolated network with NAT service enabled" to "offering for isolated network with no NAT service enabled", but it gives me this error:
"can't upgrade from network offering edad787d-baa0-4ca8-a67b-bd288adc6d37 to 052cee24-272b-44f1-a4df-4b5da7a30744; check logs for more information"
I would choose this during network deployment, but I do not have the option to choose "offering for isolated network with no NAT service enabled" until after deployment.
I would appreciate any advice you can give to help guide me from here as I am new to cloudstack
EXPECTED RESULTS
Place all deployed VM's on same IP range as the management console itself.
ACTUAL RESULTS
Forced to deploy VM's behind NAT per default configuration
@CKrieger2020 this is certainly a new use case. After a quick read I think you might want to investigate going to IPv6.
Another possible way to go is to deploy in a shared network.
@CKrieger2020 - have you tried to create a new Network or create a L2 network (assuming your CS instance is on a separate L2 network than what is available in CS for users to choose from)
In my home lab I have been able to create instance using L2 which talks to my other computers on the home network.
It's possible to do this by deploying VMs on an L2 network whose vlan is vlan://untagged; essentially you'll be on the same network as your host/mgmt network.
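As a rough, hypothetical sketch of the suggestion above (not taken from this thread): with CloudMonkey you would look up an L2 network offering and the zone id, then create the network with an untagged vlan. The parameter names and placeholder values below are assumptions and should be checked against the API documentation for your CloudStack version.
# list L2 network offerings and zones (illustrative)
cmk list networkofferings guestiptype=L2
cmk list zones
# create an L2 network that sits on the untagged host/management VLAN (illustrative)
cmk create network name=mgmt-l2 displaytext=mgmt-l2 zoneid=<zone-uuid> networkofferingid=<l2-offering-uuid> vlan=untagged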
@CKrieger2020 can you check the above suggestion and re-open the issue to discuss more.
To discuss further questions, you can raise them on the users@ and/or dev@ ML https://cloudstack.apache.org/mailing-lists.html
| gharchive/issue | 2022-04-22T03:07:09 | 2025-04-01T06:37:52.173459 | {
"authors": [
"CKrieger2020",
"DaanHoogland",
"nxsbi",
"rohityadavcloud"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/6298",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1575078517 | Issue while setting up CloudStack Advance Zone with security group
ISSUE TYPE
Bug Report
COMPONENT NAME
Advanced Zone with Security Groups setup
CLOUDSTACK VERSION
4.17.2
CONFIGURATION
Zone:
IPV4 DNS: 8.8.8.8
Internal DNS: 10.4.1.1
Pysical Network 1:
Management Traffic: cloudbr0
Guest Traffic: cloudbr1
Pod:
Gateway: 10.4.1.1
Netmask: 255.255.0.0
IP Range: 10.4.2.1 to 10.4.2.255
Guest Traffic:
Gateway: 10.6.1.1
Netmask: 255.255.0.0
IP Range 10.6.2.1 to 10.6.2.255
Host:
IP: 10.4.1.20
User: root
Password: password
Tag: h1
OS / ENVIRONMENT
Ubuntu 22.04 server with two network bridge cloudbr0 and cloudbr1
SUMMARY
Apache CloudStack v4.17.2
I am trying to set up a CloudStack Advanced Zone with security groups.
I have two network bridges cloudbr0 (10.4.1.1/16) and cloudbr1 (10.6.1.1/16). I am using cloudbr0 for Management Network and cloudbr1 for the Guest Network.
However, the zone creation keeps failing when adding the host, with the error message: failed to add host as resource already exists as LibvirtComputingResource.
For some reason it seems like CloudStack is trying to add the same host twice.
STEPS TO REPRODUCE
Configuring a CloudStack Advanced Zone with security group on Ubuntu 22.04 server
EXPECTED RESULTS
Successfully create advance zone with security group.
ACTUAL RESULTS
Host setup fails with the following error:
Could not add host at [http://10.4.1.20] with zone [1], pod [1] and cluster [1] due to: [ can't setup agent, due to com.cloud.utils.exception.CloudRuntimeException: Skipping host 10.4.1.20 because 2f02300b-d9bf-3229-acb8-21054c500f47 is already in the database for resource 2f02300b-d9bf-3229-acb8-21054c500f47-LibvirtComputingResource with ID 86f5dcd2-9d6e-444e-b0df-e0dcb1509699 - Skipping host 10.4.1.20 because 2f02300b-d9bf-3229-acb8-21054c500f47 is already in the database for resource 2f02300b-d9bf-3229-acb8-21054c500f47-LibvirtComputingResource with ID 86f5dcd2-9d6e-444e-b0df-e0dcb1509699].
Was there an issue faced during zone creation after the host addition step, maybe during setting up the stores? I had faced a similar issue in the past, wherein, if the zone creation fails at any point and we are prompted to rectify the issue and then restart the zone creation workflow, it attempts to re-add the host. Can you check the database if an entry already exists in the host table and, if it does, delete it and restart the zone creation process.
@Atiqul-Islam
Can you upload the full management server log ?
@Pearl1594 I am installing Cloud Stack on a fresh Ubuntu Sever, there was no host created before the zone creation.
@weizhouapache
Management Server Log
@Atiqul-Islam
it looks you use a server as both management server and cloudstack agent.
from the log, host was added twice and of course it failed at 2nd attempt. everything else looks good.
@weizhouapache
Why was the host added twice is it because I am using the same server as both management and agent?
I didn't do manually anything to create a host, I just started cloudstack and tried setting up the advanced zone with security group. Thats where I configured the host. During the process of creating the zone it seemed like cloudstack was trying to add the same zone twice.
@weizhouapache
Why was the host added twice is it because I am using the same server as both management and agent?
I didn't do manually anything to create a host, I just started cloudstack and tried setting up the advanced zone with security group. Thats where I configured the host. During the process of creating the zone it seemed like cloudstack was trying to add the same zone twice.
@Atiqul-Islam
I just wanted to confirm your configurations.
I will try to reproduce the issue.
@weizhouapache
Really appreciate the help.
We are testing out CloudStack as it is part of our stack for our next generation of software and systems. So far been stuck in that roadblock for a while. Any help is greatly appreciated.
@weizhouapache
Really appreciate the help.
We are testing out CloudStack as it is part of our stack for our next generation of software and systems. So far been stuck in that roadblock for a while. Any help is greatly appreciated.
@Atiqul-Islam no problem.
it seems like a minor issue for you I think.
The zone has been created successfully, and system vms are Running when you enabed the zone, right ?
@weizhouapache
Really appreciate the help.
We are testing out CloudStack as it is part of our stack for our next generation of software and systems. So far been stuck in that roadblock for a while. Any help is greatly appreciated.
@Atiqul-Islam no problem.
it seems like a minor issue for you I think. The zone has been created successfully, and system vms are Running when you enabed the zone, right ?
@weizhouapache
It seems like there was possibly network issues during the setup process, some component of the Zone could be in a bad state, as there was no Virtual Router created for the guest network.
I am also getting the following error when I am trying to add an Ubuntu 20.04 iso.
Unable to resolve releases.ubuntu.com
@weizhouapache
Really appreciate the help.
We are testing out CloudStack as it is part of our stack for our next generation of software and systems. So far been stuck in that roadblock for a while. Any help is greatly appreciated.
@Atiqul-Islam no problem.
it seems like a minor issue for you I think. The zone has been created successfully, and system vms are Running when you enabed the zone, right ?
@weizhouapache
Systems VMs are up and running after I enabled the Zone. However, it seems like the zone network might not be properly configured. Some component of the Zone could be in a bad state, as there was no Virtual Router created for the guest network.
I am also getting the following error when I am trying to add an Ubuntu 20.04 iso.
Unable to resolve releases.ubuntu.com
I did check the bare metal system running the management server and the host can ping releases.ubuntu.com
@Atiqul-Islam I have checked your log. It seems everything went smoothly, except for the extra step that adds the host again after everything is already done. I think you can ignore the error.
For the issue with DNS, you need to log into the Secondary Storage VM (a.k.a. SSVM) and check if the domain can be resolved. You might need to update the DNS and internal DNS in the zone configuration.
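A quick way to check this, sketched here from the ssh command given later in this thread (the 169.254.x.x link-local address of the SSVM and the DNS server used for comparison are placeholders/assumptions):
# from the KVM host, ssh into the SSVM and test name resolution
ssh -p 3922 -i /root/.ssh/id_rsa.cloud 169.254.x.x
nslookup releases.ubuntu.com
nslookup releases.ubuntu.com 8.8.8.8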
@weizhouapache
I am unable to get into SSVM console. When I try to get into the console using the GUI, it seems to cannot load the page. In addition, where do I find the login credentials to the SSVM.
Also shouldn't there be a virtual router created as well for the gateway of the guest network?
@weizhouapache
I am unable to get into SSVM console. When I try to get into the console using the GUI, it seems to cannot load the page. In addition, where do I find the login credentials to the SSVM.
Also shouldn't there be a virtual router created as well for the gateway of the guest network?
@Atiqul-Islam sorry for late response.
you can ssh into system vms and virtual routers from the kvm host.
ssh -p 3922 -i /root/.ssh/id_rsa.cloud 169.254.x.x
or "virsh console s-xx-VM"
the credential is root/password
The virtual router will be created when a vm is created I think.
@Atiqul-Islam I am closing this issue. please reopen or create a new one if ou think that is invalid.
| gharchive/issue | 2023-02-07T21:53:57 | 2025-04-01T06:37:52.189814 | {
"authors": [
"Atiqul-Islam",
"DaanHoogland",
"Pearl1594",
"weizhouapache"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/7178",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2161994075 | [UI] Storage menu not showing even with API permissions
ISSUE TYPE
Bug Report
COMPONENT NAME
UI
CLOUDSTACK VERSION
Main
SUMMARY
As reported in https://github.com/apache/cloudstack/pull/8713#issuecomment-1969866705, the Storage submenu in the sidebar is not displayed to users when they do not have permission to the API listVolumesMetrics. However, roles can have permissions to other APIs, such as listBackups and listSnapshots, in which case the submenu should be displayed. This scenario is probably not exclusive to the Storage menu.
STEPS TO REPRODUCE
Create a role with permission to allow the APIs listBackups and listSnapshots and deny the API listVolumesMetrics. The UI dashboard will not show the Storage menu in the sidebar.
EXPECTED RESULTS
The UI should show the Storage submenu alongside with its own Backups and Snapshots submenus.
ACTUAL RESULTS
The submenu Storage is not displayed, even though the role has permission to list snapshots and backups.
@DaanHoogland @winterhazel @sureshanaparti, I am probably overthinking this scenario, however, the permission property in the JS component (storage.js) works like an AND operator. Maybe it could function like an OR operator as well; what do you guys think?
@DaanHoogland @winterhazel @sureshanaparti, I am probably overthinking this scenario, however, the permission property in the JS component (storage.js) works like an AND operator. Maybe it could function like an OR operator as well; what do you guys think?
This was exactly my thought when I learned of it @BryanMLima . Let's try.
Hey everyone,
I've implemented @BryanMLima's idea in #8978. The filtering still works as an AND operator for routes that correspond to a page; however, I have changed so that routes corresponding to sections get shown if the user has access to any of its pages.
Closing this as this was addressed in PR #8978.
| gharchive/issue | 2024-02-29T19:53:45 | 2025-04-01T06:37:52.196677 | {
"authors": [
"BryanMLima",
"DaanHoogland",
"winterhazel"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/issues/8730",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
164447805 | Allow CGN (RFC6598) to be used within a VPC
Add the CGN network to the list of allowed netblocks for guest networks. Additionally convert the previous strings to a list to allow easier modification/expansion in the future if needed.
Tested in a 4.8 lab. Verified as per screen shots that all RFC 1918 and RFC 6598 ranges work. Attempting to use a public cidr produces an error as expected. Interface is correctly configured on the VR and the routing table is correct and matches the interface the private gateway was configured on (eth3).
LGTM
LGTM (did not test it, kicked a packaging build)
Packaging result: ✖centos6 ✖centos7 ✖debian repo: http://packages.shapeblue.com/cloudstack/pr/1606
Packaging result: ✔centos6 ✔centos7 ✖debian repo: http://packages.shapeblue.com/cloudstack/pr/1606
@blueorangutan package
@rhtyd a Trillian-Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian repo: http://packages.shapeblue.com/cloudstack/pr/1606
@kiwiflyer are there Marvin tests that verify this behavior?
John, I'll defer to Aaron on this, as he submitted the PR.
@leprechau following up regarding Marvin tests. Unless they are ready, we are likely going to have to push this PR to 4.9.2.0 since I am trying to get a 4.8.2.0 RC cut ASAP (we are week late already).
@jburwell Not really sure what you would want as far as tests. There shouldn't be any change that would alter existing functionality as demonstrated by the above screenshots. If there is something I'm missing please let me know.
@leprechau you have expanded the type of networks blocks supported by the system. Therefore, there should be a Marvin test case that attempts to specify a CGN network block for a guest network and verifies that the system behaves as expected. In particular, the test should verify that the API calls are successful, and that the networks are implemented as expected when a CGN network block is specified.
@jburwell Is there already an existing test for the previous behavior that could be modified? I haven't had a chance to do much with Marvin other than briefly browse the tests folder.
@leprechau You may add a unit test as well for NetUtils.validateGuestCidr(). Check NetUtilsTest.java, there is already an existing test for 192.168.
@leprechau I would like to get this fix into 4.8.2.0, 4.9.1.0, and 4.10.0.0. Have you had a chance to add the unit tests requested? Also, is there a JIRA ticket for this bug?
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-209
LGTM on tests and code review.
@leprechau @kiwiflyer can we have a JIRA id for this and use that in the commit summary.
Ping @leprechau @kiwiflyer
I've added issue Jira CLOUDSTACK-9661 to track this.
https://issues.apache.org/jira/browse/CLOUDSTACK-9661
@leprechau @rhtyd
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-378
What's the current status of this PR? Is this acceptable as is?
Is it appropriate to add a comparatively complex integration test to cover a functional predicate whose cognitive load is so small?
I will expand the associated unit test to include boundary conditions testing of the CGN address range, and that would seem to me to be sufficient. Please advise.
@karuturi
@rossor unittest would be nice to have. Can you also rebase this PR to 4.9?
@rhtyd Can you run tests on this please?
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-678
@borisstoyanov @rhtyd @DaanHoogland Can you run tests on this PR? it only has package build
@leprechau @karuturi I can run tests but there are no added unit or integration tests. This has no value for this fix, only for detecting regression in other code or in the tests due to this change.
@blueorangutan test
@DaanHoogland a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
Trillian test result (tid-1045)
Environment: kvm-centos7 (x2), Advanced Networking with Mgmt server 7
Total time taken: 29262 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr1606-t1045-kvm-centos7.zip
Intermitten failure detected: /marvin/tests/smoke/test_outofbandmanagement.py
Intermitten failure detected: /marvin/tests/smoke/test_privategw_acl.py
Intermitten failure detected: /marvin/tests/smoke/test_snapshots.py
Test completed. 47 look ok, 2 have error(s)
Test | Result | Time (s) | Test File
test_04_rvpc_privategw_static_routes | Failure | 350.99 | test_privategw_acl.py
test_02_list_snapshots_with_removed_data_store | Error | 0.04 | test_snapshots.py
test_01_vpc_site2site_vpn | Success | 180.27 | test_vpc_vpn.py
test_01_vpc_remote_access_vpn | Success | 71.19 | test_vpc_vpn.py
test_01_redundant_vpc_site2site_vpn | Success | 265.66 | test_vpc_vpn.py
test_02_VPC_default_routes | Success | 299.61 | test_vpc_router_nics.py
test_01_VPC_nics_after_destroy | Success | 528.51 | test_vpc_router_nics.py
test_05_rvpc_multi_tiers | Success | 500.14 | test_vpc_redundant.py
test_04_rvpc_network_garbage_collector_nics | Success | 1423.61 | test_vpc_redundant.py
test_03_create_redundant_VPC_1tier_2VMs_2IPs_2PF_ACL_reboot_routers | Success | 635.89 | test_vpc_redundant.py
test_02_redundant_VPC_default_routes | Success | 741.67 | test_vpc_redundant.py
test_01_create_redundant_VPC_2tiers_4VMs_4IPs_4PF_ACL | Success | 1328.48 | test_vpc_redundant.py
test_09_delete_detached_volume | Success | 156.56 | test_volumes.py
test_08_resize_volume | Success | 156.50 | test_volumes.py
test_07_resize_fail | Success | 161.62 | test_volumes.py
test_06_download_detached_volume | Success | 151.32 | test_volumes.py
test_05_detach_volume | Success | 150.75 | test_volumes.py
test_04_delete_attached_volume | Success | 151.32 | test_volumes.py
test_03_download_attached_volume | Success | 156.47 | test_volumes.py
test_02_attach_volume | Success | 84.79 | test_volumes.py
test_01_create_volume | Success | 621.14 | test_volumes.py
test_deploy_vm_multiple | Success | 268.00 | test_vm_life_cycle.py
test_deploy_vm | Success | 0.03 | test_vm_life_cycle.py
test_advZoneVirtualRouter | Success | 0.03 | test_vm_life_cycle.py
test_10_attachAndDetach_iso | Success | 61.86 | test_vm_life_cycle.py
test_09_expunge_vm | Success | 125.22 | test_vm_life_cycle.py
test_08_migrate_vm | Success | 36.00 | test_vm_life_cycle.py
test_07_restore_vm | Success | 0.13 | test_vm_life_cycle.py
test_06_destroy_vm | Success | 125.82 | test_vm_life_cycle.py
test_03_reboot_vm | Success | 125.95 | test_vm_life_cycle.py
test_02_start_vm | Success | 10.23 | test_vm_life_cycle.py
test_01_stop_vm | Success | 30.28 | test_vm_life_cycle.py
test_CreateTemplateWithDuplicateName | Success | 105.88 | test_templates.py
test_08_list_system_templates | Success | 0.06 | test_templates.py
test_07_list_public_templates | Success | 0.04 | test_templates.py
test_05_template_permissions | Success | 0.06 | test_templates.py
test_04_extract_template | Success | 5.14 | test_templates.py
test_03_delete_template | Success | 5.12 | test_templates.py
test_02_edit_template | Success | 90.09 | test_templates.py
test_01_create_template | Success | 55.66 | test_templates.py
test_10_destroy_cpvm | Success | 131.70 | test_ssvm.py
test_09_destroy_ssvm | Success | 168.99 | test_ssvm.py
test_08_reboot_cpvm | Success | 131.52 | test_ssvm.py
test_07_reboot_ssvm | Success | 133.92 | test_ssvm.py
test_06_stop_cpvm | Success | 131.52 | test_ssvm.py
test_05_stop_ssvm | Success | 133.94 | test_ssvm.py
test_04_cpvm_internals | Success | 0.98 | test_ssvm.py
test_03_ssvm_internals | Success | 3.61 | test_ssvm.py
test_02_list_cpvm_vm | Success | 0.13 | test_ssvm.py
test_01_list_sec_storage_vm | Success | 0.14 | test_ssvm.py
test_01_snapshot_root_disk | Success | 11.17 | test_snapshots.py
test_04_change_offering_small | Success | 209.89 | test_service_offerings.py
test_03_delete_service_offering | Success | 0.04 | test_service_offerings.py
test_02_edit_service_offering | Success | 0.08 | test_service_offerings.py
test_01_create_service_offering | Success | 0.11 | test_service_offerings.py
test_02_sys_template_ready | Success | 0.16 | test_secondary_storage.py
test_01_sys_vm_start | Success | 0.19 | test_secondary_storage.py
test_09_reboot_router | Success | 40.44 | test_routers.py
test_08_start_router | Success | 35.45 | test_routers.py
test_07_stop_router | Success | 10.18 | test_routers.py
test_06_router_advanced | Success | 0.07 | test_routers.py
test_05_router_basic | Success | 0.05 | test_routers.py
test_04_restart_network_wo_cleanup | Success | 5.82 | test_routers.py
test_03_restart_network_cleanup | Success | 65.78 | test_routers.py
test_02_router_internal_adv | Success | 1.18 | test_routers.py
test_01_router_internal_basic | Success | 0.64 | test_routers.py
test_router_dns_guestipquery | Success | 74.75 | test_router_dns.py
test_router_dns_externalipquery | Success | 0.08 | test_router_dns.py
test_router_dhcphosts | Success | 236.57 | test_router_dhcphosts.py
test_router_dhcp_opts | Success | 21.78 | test_router_dhcphosts.py
test_01_updatevolumedetail | Success | 5.16 | test_resource_detail.py
test_01_reset_vm_on_reboot | Success | 130.94 | test_reset_vm_on_reboot.py
test_createRegion | Success | 0.04 | test_regions.py
test_create_pvlan_network | Success | 5.28 | test_pvlan.py
test_dedicatePublicIpRange | Success | 0.55 | test_public_ip_range.py
test_03_vpc_privategw_restart_vpc_cleanup | Success | 546.27 | test_privategw_acl.py
test_02_vpc_privategw_static_routes | Success | 416.44 | test_privategw_acl.py
test_01_vpc_privategw_acl | Success | 97.64 | test_privategw_acl.py
test_01_primary_storage_nfs | Success | 35.96 | test_primary_storage.py
test_createPortablePublicIPRange | Success | 15.22 | test_portable_publicip.py
test_createPortablePublicIPAcquire | Success | 15.51 | test_portable_publicip.py
test_isolate_network_password_server | Success | 59.26 | test_password_server.py
test_UpdateStorageOverProvisioningFactor | Success | 0.13 | test_over_provisioning.py
test_oobm_zchange_password | Success | 25.76 | test_outofbandmanagement.py
test_oobm_multiple_mgmt_server_ownership | Success | 16.39 | test_outofbandmanagement.py
test_oobm_issue_power_status | Success | 5.23 | test_outofbandmanagement.py
test_oobm_issue_power_soft | Success | 15.38 | test_outofbandmanagement.py
test_oobm_issue_power_reset | Success | 15.40 | test_outofbandmanagement.py
test_oobm_issue_power_on | Success | 10.47 | test_outofbandmanagement.py
test_oobm_issue_power_off | Success | 15.35 | test_outofbandmanagement.py
test_oobm_issue_power_cycle | Success | 10.34 | test_outofbandmanagement.py
test_oobm_enabledisable_across_clusterzones | Success | 67.78 | test_outofbandmanagement.py
test_oobm_enable_feature_valid | Success | 5.55 | test_outofbandmanagement.py
test_oobm_enable_feature_invalid | Success | 0.28 | test_outofbandmanagement.py
test_oobm_disable_feature_valid | Success | 5.29 | test_outofbandmanagement.py
test_oobm_disable_feature_invalid | Success | 0.21 | test_outofbandmanagement.py
test_oobm_configure_invalid_driver | Success | 0.14 | test_outofbandmanagement.py
test_oobm_configure_default_driver | Success | 0.15 | test_outofbandmanagement.py
test_oobm_background_powerstate_sync | Success | 23.42 | test_outofbandmanagement.py
test_extendPhysicalNetworkVlan | Success | 15.35 | test_non_contigiousvlan.py
test_01_nic | Success | 454.73 | test_nic.py
test_releaseIP | Success | 278.18 | test_network.py
test_reboot_router | Success | 383.49 | test_network.py
test_public_ip_user_account | Success | 10.27 | test_network.py
test_public_ip_admin_account | Success | 40.32 | test_network.py
test_network_rules_acquired_public_ip_3_Load_Balancer_Rule | Success | 66.92 | test_network.py
test_network_rules_acquired_public_ip_2_nat_rule | Success | 61.85 | test_network.py
test_network_rules_acquired_public_ip_1_static_nat_rule | Success | 121.72 | test_network.py
test_delete_account | Success | 303.00 | test_network.py
test_02_port_fwd_on_non_src_nat | Success | 55.80 | test_network.py
test_01_port_fwd_on_src_nat | Success | 111.79 | test_network.py
test_nic_secondaryip_add_remove | Success | 227.76 | test_multipleips_per_nic.py
test_list_zones_metrics | Success | 0.27 | test_metrics_api.py
test_list_volumes_metrics | Success | 5.54 | test_metrics_api.py
test_list_vms_metrics | Success | 222.04 | test_metrics_api.py
test_list_pstorage_metrics | Success | 0.42 | test_metrics_api.py
test_list_infrastructure_metrics | Success | 0.64 | test_metrics_api.py
test_list_hosts_metrics | Success | 0.48 | test_metrics_api.py
test_list_clusters_metrics | Success | 0.47 | test_metrics_api.py
login_test_saml_user | Success | 19.90 | test_login.py
test_assign_and_removal_lb | Success | 133.15 | test_loadbalance.py
test_02_create_lb_rule_non_nat | Success | 187.31 | test_loadbalance.py
test_01_create_lb_rule_src_nat | Success | 207.95 | test_loadbalance.py
test_03_list_snapshots | Success | 0.08 | test_list_ids_parameter.py
test_02_list_templates | Success | 0.05 | test_list_ids_parameter.py
test_01_list_volumes | Success | 0.03 | test_list_ids_parameter.py
test_07_list_default_iso | Success | 0.07 | test_iso.py
test_05_iso_permissions | Success | 0.07 | test_iso.py
test_04_extract_Iso | Success | 5.19 | test_iso.py
test_03_delete_iso | Success | 95.15 | test_iso.py
test_02_edit_iso | Success | 0.06 | test_iso.py
test_01_create_iso | Success | 21.04 | test_iso.py
test_04_rvpc_internallb_haproxy_stats_on_all_interfaces | Success | 184.00 | test_internal_lb.py
test_03_vpc_internallb_haproxy_stats_on_all_interfaces | Success | 153.19 | test_internal_lb.py
test_02_internallb_roundrobin_1RVPC_3VM_HTTP_port80 | Success | 520.03 | test_internal_lb.py
test_01_internallb_roundrobin_1VPC_3VM_HTTP_port80 | Success | 430.26 | test_internal_lb.py
test_dedicateGuestVlanRange | Success | 10.25 | test_guest_vlan_range.py
test_UpdateConfigParamWithScope | Success | 0.13 | test_global_settings.py
test_rolepermission_lifecycle_update | Success | 6.15 | test_dynamicroles.py
test_rolepermission_lifecycle_list | Success | 6.00 | test_dynamicroles.py
test_rolepermission_lifecycle_delete | Success | 5.85 | test_dynamicroles.py
test_rolepermission_lifecycle_create | Success | 5.93 | test_dynamicroles.py
test_rolepermission_lifecycle_concurrent_updates | Success | 6.00 | test_dynamicroles.py
test_role_lifecycle_update_role_inuse | Success | 5.92 | test_dynamicroles.py
test_role_lifecycle_update | Success | 10.96 | test_dynamicroles.py
test_role_lifecycle_list | Success | 5.89 | test_dynamicroles.py
test_role_lifecycle_delete | Success | 10.93 | test_dynamicroles.py
test_role_lifecycle_create | Success | 5.89 | test_dynamicroles.py
test_role_inuse_deletion | Success | 5.89 | test_dynamicroles.py
test_role_account_acls_multiple_mgmt_servers | Success | 8.12 | test_dynamicroles.py
test_role_account_acls | Success | 8.59 | test_dynamicroles.py
test_default_role_deletion | Success | 6.02 | test_dynamicroles.py
test_04_create_fat_type_disk_offering | Success | 0.07 | test_disk_offerings.py
test_03_delete_disk_offering | Success | 0.04 | test_disk_offerings.py
test_02_edit_disk_offering | Success | 0.05 | test_disk_offerings.py
test_02_create_sparse_type_disk_offering | Success | 0.07 | test_disk_offerings.py
test_01_create_disk_offering | Success | 0.11 | test_disk_offerings.py
test_deployvm_userdispersing | Success | 20.57 | test_deploy_vms_with_varied_deploymentplanners.py
test_deployvm_userconcentrated | Success | 20.68 | test_deploy_vms_with_varied_deploymentplanners.py
test_deployvm_firstfit | Success | 116.08 | test_deploy_vms_with_varied_deploymentplanners.py
test_deployvm_userdata_post | Success | 10.40 | test_deploy_vm_with_userdata.py
test_deployvm_userdata | Success | 45.68 | test_deploy_vm_with_userdata.py
test_02_deploy_vm_root_resize | Success | 6.03 | test_deploy_vm_root_resize.py
test_01_deploy_vm_root_resize | Success | 5.99 | test_deploy_vm_root_resize.py
test_00_deploy_vm_root_resize | Success | 247.71 | test_deploy_vm_root_resize.py
test_deploy_vm_from_iso | Success | 202.41 | test_deploy_vm_iso.py
test_DeployVmAntiAffinityGroup | Success | 55.94 | test_affinity_groups.py
test_03_delete_vm_snapshots | Skipped | 0.00 | test_vm_snapshots.py
test_02_revert_vm_snapshots | Skipped | 0.00 | test_vm_snapshots.py
test_01_create_vm_snapshots | Skipped | 0.00 | test_vm_snapshots.py
test_06_copy_template | Skipped | 0.00 | test_templates.py
test_static_role_account_acls | Skipped | 0.02 | test_staticroles.py
test_01_scale_vm | Skipped | 0.00 | test_scale_vm.py
test_01_primary_storage_iscsi | Skipped | 0.05 | test_primary_storage.py
test_06_copy_iso | Skipped | 0.00 | test_iso.py
test_deploy_vgpu_enabled_vm | Skipped | 0.01 | test_deploy_vgpu_enabled_vm.py
@rossor Can you check on the travis and jenkins failures?
looks like there are checkstyle issues in the new code.
@rossor @nathanejohnson Can we get the code formatted correctly so the tests pass?
@kiwiflyer @karuturi Tests passed!
| gharchive/pull-request | 2016-07-08T02:32:52 | 2025-04-01T06:37:52.318340 | {
"authors": [
"DaanHoogland",
"blueorangutan",
"jburwell",
"karuturi",
"kiwiflyer",
"koushik-das",
"leprechau",
"rhtyd",
"rossor"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/pull/1606",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
221792509 | CLOUDSTACK-7958: Add configuration for limit to CIDRs for Admin API calls
The global setting 'management.admin.cidr' is set to 0.0.0.0/0,::/0
by default to preserve the current behavior and thus allow API calls
for Admin accounts from all IPv4 and IPv6 subnets.
Users can set it to a comma-separated list of IPv4/IPv6 subnets to
restrict API calls for Admin accounts to certain parts of their network(s).
This is to improve security. Should an attacker steal the Access/Secret key
of an Admin account, he/she still needs to be in a subnet from which Admin accounts
are allowed to perform API calls.
This is a good security measure for APIs which are connected to the public internet.
This PR also includes a commit to cleanup and improve NetUtils.
No existing methods have been altered. That has been verified by adding additional Unit Tests for this.
@DaanHoogland: I improved the logging as you suggested/requested.
A TRACE for every request and WARN when a request is denied. Tried this locally:
2017-04-14 15:45:58,901 WARN [c.c.a.ApiServlet] (catalina-exec-17:ctx-5955fcab ctx-c572b42e) (logid:7b251506) Request by accountId 2 was denied since 192.168.122.1 does not match 127.0.0.1/8,::1/128
In this case only localhost (IPv4/IPv6) is allowed to perform requests.
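For reference, a minimal CloudMonkey sketch of how an operator might change this global setting; the CIDR values are placeholders and the exact workflow (including whether a management server restart is needed) should be confirmed for your version:
# restrict admin API calls to internal subnets (illustrative values)
update configuration name=management.admin.cidr value=10.0.0.0/8,2001:db8::/32
# revert to the permissive default
update configuration name=management.admin.cidr value=0.0.0.0/0,::/0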
@PaulAngus This is what we talked about in Prague. Mind taking a look?
Nice @wido, will give it a go soon!
Thanks @remibergsma. Added the return statement.
Thinking about it. Does a 401 sound good? Or should we maybe use a 403 Forbidden?
@wido I'm playing a bit with it, also because currently CloudMonkey displays a clear message but the UI will simply freeze and do nothing. Only when you look at the underlying API calls will you see why it isn't working. I'm testing to see if the http status code makes any difference or whether we need to handle it in the UI somewhere.
We should also think about the order: first alert on the CIDR check and then check user/pass (as it is now) or the other way around.
Also noticed it doesn't work with spaces before/after comma's so we might want to add a .replaceAll("\\s","") or similar.
@wido very nice feature.
It would also be nice if this can be added to domain or account setting.
@remibergsma: Yes, I am aware of that UI problem. Not sure how to fix it.
After thinking about it, I went for '403 Forbidden' and also stripped whitespace from the config key.
I think the order is OK right now unless other opinions?
@ustcweizhou: That is a lot more difficult than a global value, isn't it? Since you have to query it every time.
Or should configkey allow this very easily?
@wido I think we need to do the check on two places, also on the login() method. That makes sure we don't issue a session key when user/pass are OK but we still reject it based on the CIDR. In my testing that also fixes the UI issue. There are two ways to authenticate so that makes sense I'd say. It'll then also work with authentication plugins, such as LDAP/AD.
Switching the scope of the config is easy, but indeed you'll be querying it on every API call. That does have the benefit you don't need to restart the mgt server when you make a change, but the downside is also obvious. One way to resolve it, is to make a global config setting that switches the feature on/off (and that config is loaded at bootstrap) so you can opt-in for the more heavy checks.
I'll play a bit more with it tonight.
@remibergsma: I pulled your code, thanks! It now works per account @ustcweizhou
How does this look?
Good one @remibergsma. I reverted that piece and also the baremetal refusal of users.
There were some conflicts after changes were made in master. Fixed those.
As this one is merge ready, can it go into master now?
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-839
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-844
@wido can you check/fix build failures?
@rhtyd: I went through the logs, but there is nothing that I can find that points to this PR.
What I do see:
2017-07-19 15:16:07,845 DEBUG [o.a.c.s.i.BaseImageStoreDriverImpl] (RemoteHostEndPoint-1:ctx-09893152) (logid:21cb76f0) Performing image store createTemplate async callback
2017-07-19 15:16:07,859 WARN [o.a.c.alerts] (RemoteHostEndPoint-1:ctx-09893152) (logid:21cb76f0) alertType:: 28 // dataCenterId:: 1 // podId:: null // clusterId:: null // message:: Failed to register template: d8e68e73-fa32-4309-97e5-436da98c7568 with error: HTTP Server returned 403 (expected 200 OK)
2017-07-19 15:16:07,863 ERROR [o.a.c.s.i.BaseImageStoreDriverImpl] (RemoteHostEndPoint-1:ctx-09893152) (logid:21cb76f0) Failed to register template: d8e68e73-fa32-4309-97e5-436da98c7568 with error: HTTP Server returned 403 (expected 200 OK)
Looking at the logs even further:
2017-07-19 17:07:59,887 DEBUG [c.c.a.ApiServer] (qtp411594792-25:ctx-019cac6d ctx-c78c28dc) (logid:92eb3c37) CIDRs from which account 'Acct[b707c234-6ca4-11e7-849c-00163e174325-admin]' is allowed to perform API calls: 0.0.0.0/0,::/0
2017-07-19 17:07:59,920 DEBUG [c.c.a.ApiServer] (qtp411594792-24:ctx-65cfefc0 ctx-264d5734) (logid:a06700ca) CIDRs from which account 'Acct[b707c234-6ca4-11e7-849c-00163e174325-admin]' is allowed to perform API calls: 0.0.0.0/0,::/0
2017-07-19 17:08:59,944 DEBUG [c.c.a.ApiServer] (qtp411594792-29:ctx-b8e61282 ctx-e1ad15c2) (logid:54c2e421) CIDRs from which account 'Acct[b707c234-6ca4-11e7-849c-00163e174325-admin]' is allowed to perform API calls: 0.0.0.0/0,::/0
2017-07-19 17:08:59,970 DEBUG [c.c.a.ApiServer] (qtp411594792-25:ctx-3901b66d ctx-5861a4b0) (logid:220067e9) CIDRs from which account 'Acct[b707c234-6ca4-11e7-849c-00163e174325-admin]' is allowed to perform API calls: 0.0.0.0/0,::/0
2017-07-19 17:09:00,840 DEBUG [c.c.a.ApiServer] (qtp411594792-24:ctx-b991e50f ctx-df021a2a) (logid:f3c648cf) CIDRs from which account 'Acct[b707c234-6ca4-11e7-849c-00163e174325-admin]' is allowed to perform API calls: 0.0.0.0/0,::/0
Seems like this PR isn't blocking any API-calls or such. Everything seems to pass.
@wido it's a build failure issue, please see travis (job#1) failure:
[[1;34mINFO[m] Compiling 45 source files to /home/travis/build/apache/cloudstack/vmware-base/target/classes
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] COMPILATION ERROR :
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] /home/travis/build/apache/cloudstack/vmware-base/src/com/cloud/hypervisor/vmware/mo/HypervisorHostHelper.java:[1456,20] error: cannot find symbol
[[1;34mINFO[m] 1 error
...
[[1;34mINFO[m] Apache CloudStack VMware Base ...................... [1;31mFAILURE[m [ 4.785 s]
Fixed @rhtyd
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:52 min (Wall Clock)
[INFO] Finished at: 2017-07-24T14:07:11+02:00
[INFO] Final Memory: 118M/1939M
[INFO] ------------------------------------------------------------------------
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✖centos6 ✖centos7 ✖debian. JID-856
@wido still failing, please do a clean rebuild, check Travis?
ACS CI BVT Run
Sumarry:
Build Number 1006
Hypervisor xenserver
NetworkType Advanced
Passed=102
Failed=11
Skipped=12
Link to logs Folder (search by build_no): https://www.dropbox.com/sh/r2si930m8xxzavs/AAAzNrnoF1fC3auFrvsKo_8-a?dl=0
Failed tests:
test_scale_vm.py
ContextSuite context=TestScaleVm>:setup Failing since 33 runs
test_loadbalance.py
test_01_create_lb_rule_src_nat Failed
test_02_create_lb_rule_non_nat Failed
test_non_contigiousvlan.py
test_extendPhysicalNetworkVlan Failed
test_deploy_vm_iso.py
test_deploy_vm_from_iso Failing since 63 runs
test_volumes.py
test_06_download_detached_volume Failing since 3 runs
test_vm_life_cycle.py
test_10_attachAndDetach_iso Failing since 63 runs
test_routers_network_ops.py
test_01_isolate_network_FW_PF_default_routes_egress_true Failing since 96 runs
test_02_isolate_network_FW_PF_default_routes_egress_false Failing since 96 runs
test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failing since 94 runs
test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failing since 94 runs
Skipped tests:
test_vm_nic_adapter_vmxnet3
test_01_verify_libvirt
test_02_verify_libvirt_after_restart
test_03_verify_libvirt_attach_disk
test_04_verify_guest_lspci
test_05_change_vm_ostype_restart
test_06_verify_guest_lspci_again
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm
Passed test suits:
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_vm_snapshots.py
test_over_provisioning.py
test_global_settings.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_login.py
test_list_ids_parameter.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_metrics_api.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_disk_offerings.py
I tried again @rhtyd
wido@wido-laptop:~/repos/cloudstack$ mvn -T2C clean install
This works! All unit tests also pass. No build failure either:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 06:30 min (Wall Clock)
[INFO] Finished at: 2017-07-25T09:27:50+02:00
[INFO] Final Memory: 119M/1556M
[INFO] ------------------------------------------------------------------------
@wido make sure you're not breaking noredist builds as well, see Travis and try to get it green;
[[1;34mINFO[m] [1m--- [0;32mmaven-compiler-plugin:3.2:compile[m [1m(default-compile)[m @ [36mcloud-plugin-hypervisor-vmware[0;1m ---[m
[[1;34mINFO[m] Changes detected - recompiling the module!
[[1;34mINFO[m] Compiling 51 source files to /home/travis/build/apache/cloudstack/plugins/hypervisors/vmware/target/classes
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] COMPILATION ERROR :
[[1;34mINFO[m] -------------------------------------------------------------
[[1;31mERROR[m] /home/travis/build/apache/cloudstack/plugins/hypervisors/vmware/src/com/cloud/hypervisor/guru/VMwareGuru.java:[534,35] error: cannot find symbol
[[1;34mINFO[m] 1 error
Aha @rhtyd!
I've fixed that
ACS CI BVT Run
Sumarry:
Build Number 1014
Hypervisor xenserver
NetworkType Advanced
Passed=102
Failed=9
Skipped=12
Link to logs Folder (search by build_no): https://www.dropbox.com/sh/r2si930m8xxzavs/AAAzNrnoF1fC3auFrvsKo_8-a?dl=0
Failed tests:
test_vm_snapshots.py
test_change_service_offering_for_vm_with_snapshots Failed
test_deploy_vm_iso.py
test_deploy_vm_from_iso Failing since 69 runs
test_list_ids_parameter.py
ContextSuite context=TestListIdsParams>:setup Failing since 45 runs
test_volumes.py
test_06_download_detached_volume Failed
test_vm_life_cycle.py
test_10_attachAndDetach_iso Failing since 69 runs
test_routers_network_ops.py
test_01_isolate_network_FW_PF_default_routes_egress_true Failing since 102 runs
test_02_isolate_network_FW_PF_default_routes_egress_false Failing since 102 runs
test_01_RVR_Network_FW_PF_SSH_default_routes_egress_true Failing since 100 runs
test_02_RVR_Network_FW_PF_SSH_default_routes_egress_false Failing since 100 runs
Skipped tests:
test_vm_nic_adapter_vmxnet3
test_01_verify_libvirt
test_02_verify_libvirt_after_restart
test_03_verify_libvirt_attach_disk
test_04_verify_guest_lspci
test_05_change_vm_ostype_restart
test_06_verify_guest_lspci_again
test_static_role_account_acls
test_11_ss_nfs_version_on_ssvm
test_nested_virtualization_vmware
test_3d_gpu_support
test_deploy_vgpu_enabled_vm
Passed test suits:
test_deploy_vm_with_userdata.py
test_affinity_groups_projects.py
test_portable_publicip.py
test_over_provisioning.py
test_global_settings.py
test_scale_vm.py
test_service_offerings.py
test_routers_iptables_default_policy.py
test_loadbalance.py
test_routers.py
test_reset_vm_on_reboot.py
test_deploy_vms_with_varied_deploymentplanners.py
test_network.py
test_router_dns.py
test_non_contigiousvlan.py
test_login.py
test_public_ip_range.py
test_multipleips_per_nic.py
test_metrics_api.py
test_regions.py
test_affinity_groups.py
test_network_acl.py
test_pvlan.py
test_nic.py
test_deploy_vm_root_resize.py
test_resource_detail.py
test_secondary_storage.py
test_disk_offerings.py
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-867
@blueorangutan test
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
I've tried several times now, the environments fails to come up; perhaps @borisstoyanov can takeover testing...
21:38:41 FAILED - RETRYING: TASK: get wait for state of system VMs to be Running (1 retries left).
21:38:47 fatal: [pr2046-t1277-kvm-centos7-mgmt1]: FAILED! => {"attempts": 200, "changed": true, "cmd": "cloudmonkey list systemvms | jq '.systemvm[]| select(.systemvmtype==\"consoleproxy\")|.state'", "delta": "0:00:00.186658", "end": "2017-07-28 20:38:19.617884", "failed": true, "rc": 0, "start": "2017-07-28 20:38:19.431226", "stderr": "", "stdout": "", "stdout_lines": [], "warnings": []}
21:38:47
21:38:47 cmd: cloudmonkey list systemvms | jq '.systemvm[]| select(.systemvmtype=="consoleproxy")|.state'
21:38:47
21:38:47 start: 2017-07-28 20:38:19.431226
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1018
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1038
@blueorangutan test
@rhtyd a Trillian-Jenkins test job (centos7 mgmt + kvm-centos7) has been kicked to run smoke tests
@wido I've tried 4 times now; the trillian environment fails to come up because the system VMs fail to come up. Do you think there are changes that might affect systemvm agents/patching? /cc @borisstoyanov
FYI, we haven't seen this in other master PRs, so could be related to these changes...
@wido can you fix the conflicts?
@rhtyd Fixed!
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1427
I will fix the conflicts asap
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1500
very good !
@blueorangutan package
@rhtyd a Jenkins job has been kicked to build packages. I'll keep you posted as I make progress.
Packaging result: ✔centos6 ✔centos7 ✔debian. JID-1505
@wido can you resolve the conflicts, thanks.
I will in a few days.
Can we merge this one afterwards? I keep resolving conflicts which happen since other code is merged ;)
Sure, ping me @wido
@wido sorry to bring up the old pr but how can I configure under this account level?
I logged as in a regular user with "Domain admin" role but i dont see any settings tab under the account
@wido sorry to bring up the old pr but how can I configure under this account level? I logged as in a regular user with "Domain admin" role but i dont see any settings tab under the account
@ravening please have a look at #4339
| gharchive/pull-request | 2017-04-14T11:27:52 | 2025-04-01T06:37:52.372583 | {
"authors": [
"blueorangutan",
"borisstoyanov",
"cloudmonger",
"ravening",
"remibergsma",
"rhtyd",
"ustcweizhou",
"weizhouapache",
"wido"
],
"repo": "apache/cloudstack",
"url": "https://github.com/apache/cloudstack/pull/2046",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
378843692 | COLLECTIONS-701 SetUniqueList.add() crashes due to infinite recursion…
… when it receives itself
Hi @drajakumar ,
I'm not sure this patch makes sense. Take a look at org.apache.commons.collections4.list.Collections701Test: For ArrayList and HashSet, adding a collection to itself is fine.
In this patch, the argument is not only silently ignored, but the behavior is not even documented. Whatever we do, we really need to document anything that deviates from the standard JRE List contract.
IMO, the fix should be so that a SetUniqueList behaves like a ArrayList and HashSet, it just works.
You did not have to close the PR, I was hoping you would provide a more complete solution ;-)
@garydgregory can you kindly check the new fix, thank you!
@garydgregory can you kindly check the new fix, thank you!
| gharchive/pull-request | 2018-11-08T17:39:40 | 2025-04-01T06:37:52.377609 | {
"authors": [
"drajakumar",
"garydgregory"
],
"repo": "apache/commons-collections",
"url": "https://github.com/apache/commons-collections/pull/57",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
184626396 | LANG-1258: Add ArrayUtils#toStringArray(Object[]) method
patch supplied by IG
Coverage increased (+0.04%) to 93.581% when pulling bfc8805ac898cb4d7a8ca45ab1d5106bc2b63342 on PascalSchumacher:ArrayUtils#toStringArray into 91d6bd74fa358fdc8d7cb7681c76c509fd9a8e7d on apache:master.
Patch looks good. I wonder if the inline if will raise a warning in checkstyle. Other than that, +1 :D
I added null elements handling in PR to this branch, because in current state NPE would be thrown in case of null in array.
I have updated the pull request with @Xaerxess changes.
Coverage increased (+0.01%) to 93.551% when pulling bf978e7ec7a1bb3d1d671331383619d81bae95ed on PascalSchumacher:ArrayUtils#toStringArray into 91d6bd74fa358fdc8d7cb7681c76c509fd9a8e7d on apache:master.
Sure, consistency in API is the key. But stringIfNull is used when array itself is null, not element, so it wouldn't be consistent with the API.
@Xaerxess Sorry for the confusion, I wasn't talking about toString, but about the toPrimitive methods e.g.
https://github.com/apache/commons-lang/blob/96c8ea2fb3719e2f6e3d7a4d7b46718f26515a86/src/main/java/org/apache/commons/lang3/ArrayUtils.java#L4497
As these also convert the type of the array I think they are the most similar existing methods (compared to the new method in this pull request).
Coverage increased (+0.008%) to 93.563% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.03%) to 93.588% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Coverage increased (+0.04%) to 93.595% when pulling 8f3577b6919c8bbd806cd499b36babd61f9d3bb5 on PascalSchumacher:ArrayUtils#toStringArray into ff4497aff8cc9de4e0b2c6e5e23e5b6550f76f29 on apache:master.
Merged: https://github.com/apache/commons-lang/commit/8d95ae41975a2307501aa0f4a7eba296c59edce9 and https://github.com/apache/commons-lang/commit/8d601ab71228f7c3dff950540e7ee6e4043e9053
Thanks everybody!
| gharchive/pull-request | 2016-10-22T11:58:46 | 2025-04-01T06:37:52.390974 | {
"authors": [
"PascalSchumacher",
"Xaerxess",
"coveralls",
"kinow"
],
"repo": "apache/commons-lang",
"url": "https://github.com/apache/commons-lang/pull/199",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
370416045 | WKWebview Won't Load
I receive the following error when I have wk webview installed:
Failed to load resource: file:///var/containers/Bundle/Application/39F29CF9-4A2F-4A20-A296-0CAE487974B3/my.app/www/plugins/cordova-plugin-wkwebview-engine/src/www/ios/ios-wkwebview.js The requested URL was not found on this server.
Failed to load resource: file:///var/containers/Bundle/Application/39F29CF9-4A2F-4A20-A296-0CAE487974B3/my.app/www/plugins/cordova-plugin-wkwebview-engine/src/www/ios/ios-wkwebview-exec.js The requested URL was not found on this server.
Error: Module cordova-plugin-wkwebview-engine.ios-wkwebview-exec does not exist.
I have the following in my config.xml
<feature name="CDVWKWebViewEngine">
<param name="ios-package" value="CDVWKWebViewEngine" />
</feature>
<preference name="CordovaWebViewEngine" value="CDVWKWebViewEngine" />
I noticed when I run cordova build ios, the 2 js files in the plugins folder get created, then deleted. Not sure why. Every other plugin is working fine.
I had the same issue and solved it by installing: cordova plugin add cordova-plugin-wkwebview-engine. Apparently this plugin needs to be installed for the fix to work.
Don't forget to add ALL this in your config:
..........[other code]
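For completeness, the install-and-rebuild sequence implied above, using standard Cordova CLI commands (treat this as a sketch; plugin behaviour may differ across cordova-ios versions):
cordova plugin add cordova-plugin-wkwebview-engine
cordova prepare ios
cordova build ios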
Closing, the solution is posted above.
| gharchive/issue | 2018-10-16T02:39:15 | 2025-04-01T06:37:52.425978 | {
"authors": [
"adadgio",
"albertleao",
"breautek"
],
"repo": "apache/cordova-plugin-wkwebview-engine",
"url": "https://github.com/apache/cordova-plugin-wkwebview-engine/issues/60",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
996464124 | Microsoft Surface hybrid touch events
Bug Report
Problem
Touch events are not fired correctly
What is expected to happen?
Moving camera in webgl using touch on screen
What does actually happen?
No touch events are fired, only mouse events: a mouse down and a few mouse moves, then nothing, and no mouse up.
If I touch without moving mouse up is fired.
Information
Same site on web work correctly with the same device.
Command or Code
Environment, Platform, Device
Microsoft Surface hybrid computer
Version information
Cordova 10.0.0
cordova-windows 8.0.0-dev
Windows 10
Checklist
[x] I searched for existing GitHub issues
[x] I updated all Cordova tooling to most recent version
[x] I included all the necessary information above
We are archiving this repository following Apache Cordova's Deprecation Policy. We will not continue to work on this repository. Therefore all issues and pull requests are being closed. Thanks for your contribution.
| gharchive/issue | 2021-09-14T21:40:25 | 2025-04-01T06:37:52.430336 | {
"authors": [
"corentin-begne",
"timbru31"
],
"repo": "apache/cordova-windows",
"url": "https://github.com/apache/cordova-windows/issues/394",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
98413689 | Call CouchDB's CSRF validation
COUCHDB-2762
+1
| gharchive/pull-request | 2015-07-31T15:30:24 | 2025-04-01T06:37:52.431186 | {
"authors": [
"kxepal",
"rnewson"
],
"repo": "apache/couchdb-chttpd",
"url": "https://github.com/apache/couchdb-chttpd/pull/52",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
461026797 | Changes feed with selector filter doesn't show deleted docs
Description
Deleted documents are not included in the changes feed when filtering with _selector. It does work while filtering by doc_ids, which is not applicable in my scenario.
Steps to Reproduce
# Create document
curl -H 'Content-Type: application/json' -X PUT "localhost:5984/storage/123456" -d '{"name": "Document Name"}'
# Start changes feed
curl -H 'Content-Type: application/json' -X POST "localhost:5984/storage/_changes?feed=longpoll&filter=_selector&since=now" -d '{"selector": {"name": "Document Name"}}'
# Remove document (in another shell, of course)
curl -H 'Content-Type: application/json' -X DELETE "localhost:5984/storage/123456?rev=<LAST-REV>"
Expected Behaviour
The request to _changes should terminate with a results object for that particular document, including deleted=true.
Your Environment
CouchDB running from (official) docker image:
{
"couchdb": "Welcome",
"features": [
"pluggable-storage-engines",
"scheduler"
],
"git_sha": "c298091a4",
"uuid": "54b4e44520a6fc9996a7eb635783fa96",
"vendor": {
"name": "The Apache Software Foundation"
},
"version": "2.3.1"
}
In your third step you remove the 'name' property of the document, and thus it no longer matches your selector.
The DELETE method performs a PUT preserving only _id, _rev and _deleted (set to true).
Instead do a PUT, keeping the "name" field and adding "_deleted":true
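For reference, a minimal sketch of that kind of "soft delete" PUT using Java's built-in java.net.http.HttpClient (Java 11+). The database, document id and field come from the reproduction above; the revision value is a placeholder you would take from the document's current _rev:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SoftDeleteExample {
    public static void main(String[] args) throws Exception {
        // Keep the fields the selector matches on and add _deleted, instead of using DELETE
        String rev = "CURRENT_REV"; // placeholder: use the document's current _rev
        String body = "{\"name\": \"Document Name\", \"_deleted\": true}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:5984/storage/123456?rev=" + rev))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body()); // expect 201 on success
    }
}

Deleted this way, the document still matches {"selector": {"name": "Document Name"}} and should show up in the filtered changes feed with deleted: true.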
Hi there,
This is not a CouchDB bug. GitHub is for actual CouchDB bugs only.
If you are looking for general support with using CouchDB, please try one of these other options:
The user mailing list. Signup instructions are here
The Slack/IRC chat room. Joining instructions are here
Well, that does make sense. I would suggest mentioning this in the docs even though it is self-explanatory.
Thanks for the quick response and sorry for opening this as a bug.
| gharchive/issue | 2019-06-26T15:04:39 | 2025-04-01T06:37:52.440773 | {
"authors": [
"modul",
"rnewson",
"wohali"
],
"repo": "apache/couchdb",
"url": "https://github.com/apache/couchdb/issues/2061",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
243818280 | Remove get_details replicator job gen_server call
This was used from a test only and it wasn't reliable. Because the replicator
job delays initialization, the State would be either #rep_state{} or #rep{}. If the
replication job hasn't finished initializing, then the state would be #rep{}, and a
call like get_details, which matches the state against #rep_state{}, would fail with
a badmatch error.
As seen in issue #686
So remove get_details call and let the test rely on task polling as all other
tests do.
@nickva: While on it could you fix a compile warning on line 100?
couchdb/src/couch_replicator/test/couch_replicator_compact_tests.erl:100: Warning: variable 'RepId' is unused
| gharchive/pull-request | 2017-07-18T19:01:43 | 2025-04-01T06:37:52.443162 | {
"authors": [
"iilyak",
"nickva"
],
"repo": "apache/couchdb",
"url": "https://github.com/apache/couchdb/pull/694",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2493152583 | Implement native parsing of CSV files
What is the problem the feature request solves?
We can probably accelerate reading of CSV files by continuing to use JVM Spark to read bytes from disk but then parse the CSV in native code.
Describe the potential solution
No response
Additional context
No response
Hello.
I would like to start working on this.
| gharchive/issue | 2024-08-28T23:43:27 | 2025-04-01T06:37:52.444916 | {
"authors": [
"andygrove",
"psvri"
],
"repo": "apache/datafusion-comet",
"url": "https://github.com/apache/datafusion-comet/issues/882",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2303436574 | Using Expr::field panics
Describe the bug
After https://github.com/apache/datafusion/pull/10375 merged, Expr::field now panics if you use it (as we did in influxdb): DataFusion panics when you try to execute the resulting expression.
To Reproduce
Try to evaluate an expression like col("props").field("a")
Here is a full reproducer in the sql_integration test:
(venv) andrewlamb@Andrews-MacBook-Pro:~/Software/datafusion$ git diff
diff --git a/datafusion/core/tests/expr_api/mod.rs b/datafusion/core/tests/expr_api/mod.rs
index d7e839824..d4141a836 100644
--- a/datafusion/core/tests/expr_api/mod.rs
+++ b/datafusion/core/tests/expr_api/mod.rs
@@ -58,6 +58,25 @@ fn test_eq_with_coercion() {
);
}
+
+#[test]
+fn test_expr_field() {
+ // currently panics with
+ // Internal("NamedStructField should be rewritten in OperatorToFunction")
+ evaluate_expr_test(
+ col("props").field("a"),
+ vec![
+ "+------------+",
+ "| expr |",
+ "+------------+",
+ "| 2021-02-01 |",
+ "| 2021-02-02 |",
+ "| 2021-02-03 |",
+ "+------------+",
+ ],
+ );
+}
+
Expected behavior
Ideally the test should pass and Expr::field would continue to work.
We could also potentially remove Expr::field but I think that would be less user friendly
Additional context
I am pretty sure Expr::field is widely used, so I think we should continue to support it if possible.
I wonder if we could have Expr::field call get_field if the core functions feature was enabled and panic otherwise 🤔
That would be easy to use for most people and backwards compatible
I can fix this along with #10374
Thank you @jayzhan211 🙏 -- I will review it now
| gharchive/issue | 2024-05-17T19:32:52 | 2025-04-01T06:37:52.449539 | {
"authors": [
"alamb",
"jayzhan211"
],
"repo": "apache/datafusion",
"url": "https://github.com/apache/datafusion/issues/10565",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1215501455 | [Bug] [Next-UI][V1.0.0-Beta] Local log file cache is not cleared
Search before asking
[X] I had searched in the issues and found no similar issues.
What happened
After viewing the first component's log, when you view the second component's log, the first component's log is displayed instead.
What you expected to happen
After viewing the first component's log, viewing the second component's log should display the second component's log.
How to reproduce
View the first component's log, then view the second component's log; the first component's log is displayed instead.
Anything else
No response
Version
3.0.0-alpha
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
i will fix it
| gharchive/issue | 2022-04-26T07:08:02 | 2025-04-01T06:37:52.453684 | {
"authors": [
"XuXuClassMate",
"labbomb"
],
"repo": "apache/dolphinscheduler",
"url": "https://github.com/apache/dolphinscheduler/issues/9780",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2021842446 | [Feature-15260][dolphinscheduler-datasource-hana] add hana related dependencies
Purpose of the pull request
Brief change log
Verify this pull request
This pull request is code cleanup without any test coverage.
(or)
This pull request is already covered by existing tests, such as (please describe tests).
(or)
This change added tests and can be verified as follows:
(or)
If your pull request contains an incompatible change, you should also add it to docs/docs/en/guide/upgrede/incompatible.md
[Feature][dolphinscheduler-datasource-hana] add hana related dependencies #15259
same with #15127
This feature was developed by me, but not the PR I submitted. I may be clearer on how to modify it. If you need to merge #15127 , please merge it into the dev branch as soon as possible. My #15146 requires this modification item
cc @caishunfeng
test use in the wrong place
Codecov Report
Attention: 4 lines in your changes are missing coverage. Please review.
Comparison is base (0c470ff) 38.19% compared to head (ad073a9) 38.16%.
:exclamation: Current head ad073a9 differs from pull request most recent head 98670be. Consider uploading reports for the commit 98670be to get more accurate results
Files | Patch % | Lines
.../plugin/datasource/hana/HanaDataSourceChannel.java | 0.00% | 2 Missing :warning:
...in/datasource/hana/HanaPooledDataSourceClient.java | 0.00% | 2 Missing :warning:
Additional details and impacted files
@@ Coverage Diff @@
## dev #15260 +/- ##
============================================
- Coverage 38.19% 38.16% -0.03%
+ Complexity 4673 4671 -2
============================================
Files 1278 1285 +7
Lines 44482 44463 -19
Branches 4783 4770 -13
============================================
- Hits 16988 16968 -20
- Misses 25632 25633 +1
Partials 1862 1862
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2023-12-02T03:59:12 | 2025-04-01T06:37:52.466567 | {
"authors": [
"codecov-commenter",
"davidzollo",
"fuchanghai",
"xujiaqiang"
],
"repo": "apache/dolphinscheduler",
"url": "https://github.com/apache/dolphinscheduler/pull/15260",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1352043753 | [Feature] Mysql subtable with binlog model merge one table
Search before asking
[X] I had searched in the issues and found no similar issues.
Description
I want to merge MySQL sub-tables into one table that can be updated by unique key.
Use case
No response
Related issues
No response
Are you willing to submit PR?
[ ] Yes I am willing to submit a PR!
Code of Conduct
[X] I agree to follow this project's Code of Conduct
You can use an Aggregate key table with replace_if_not_null to implement the multi-table merge.
| gharchive/issue | 2022-08-26T09:59:58 | 2025-04-01T06:37:52.470088 | {
"authors": [
"GDragon97",
"stalary"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/issues/12110",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1528711666 | Enhancement delete skiplist for duplicate table in memtable
Proposed changes
Remove the skiplist for duplicate-key tables during data load.
Problem summary
When loading data, rows are inserted into a skiplist in the memtable; for each row the skiplist has to find the position and keep the data sorted, which costs O(log(n)) per row. I don't think that is a good approach.
There are two ways to insert data into the memtable:
1. insert into a skiplist: O(log(n)) per row; when a flush is needed, the data is already sorted.
2. append to a block (append only): O(1) per row; when a flush is needed, sort once.
This PR implements the second way for duplicate-key tables: append data to a block and sort it when a flush is needed (see the sketch below).
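To make the trade-off concrete, here is a tiny conceptual sketch in Java (illustrative only; the real memtable code is C++ in the Doris BE and these types are made up):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.TreeMap;

// Contrasts per-row sorted insertion with append-then-sort-at-flush
public class MemtableSketch {
    // Way 1: a sorted structure (stand-in for the skiplist), O(log n) work per inserted row
    private final TreeMap<Long, String> sortedRows = new TreeMap<>();

    // Way 2: append-only buffer, O(1) per inserted row
    private final List<Long> appendOnlyKeys = new ArrayList<>();

    void insertSorted(long key, String row) {
        sortedRows.put(key, row); // find + insert keeps the data ordered on every row
    }

    void insertAppendOnly(long key) {
        appendOnlyKeys.add(key); // no ordering work during load
    }

    void flushAppendOnly() {
        Collections.sort(appendOnlyKeys); // sort once, right before writing the segment
    }
}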
I don't know yet how good or bad the result is.
This PR needs to use the community's testing framework,
so I'm submitting it first to see the result.
TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 36.64 seconds
load time: 517 seconds
storage size: 17134295852 Bytes
https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/tmp/20230112052016_clickbench_pr_78264.html
| gharchive/pull-request | 2023-01-11T09:33:37 | 2025-04-01T06:37:52.474279 | {
"authors": [
"hello-stephen",
"zbtzbtzbt"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/15824",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1572616887 | Chore make compile option work on C objects && some refactor of cmakelists
Proposed changes
make compile option work on C objects && some refactor of cmakelists
Problem summary
Describe your changes.
Checklist(Required)
Does it affect the original behavior:
[ ] Yes
[ ] No
[ ] I don't know
Has unit tests been added:
[ ] Yes
[ ] No
[ ] No Need
Has document been added or modified:
[ ] Yes
[ ] No
[ ] No Need
Does it need to update dependencies:
[ ] Yes
[ ] No
Are there any changes that cannot be rolled back:
[ ] Yes (If Yes, please explain WHY)
[ ] No
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
TeamCity pipeline, clickbench performance test result:
the sum of best hot time: 33.6 seconds
load time: 467 seconds
storage size: 17170873384 Bytes
https://doris-community-test-1308700295.cos.ap-hongkong.myqcloud.com/tmp/20230206145530_clickbench_pr_90979.html
| gharchive/pull-request | 2023-02-06T13:57:48 | 2025-04-01T06:37:52.480694 | {
"authors": [
"BiteTheDDDDt",
"hello-stephen"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/16451",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1743703918 | Fix add sync after insert into table for nereids_p0
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
| gharchive/pull-request | 2023-06-06T11:53:42 | 2025-04-01T06:37:52.482554 | {
"authors": [
"sohardforaname"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/20516",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1809460622 | Bug fix ScannerContext is done make query failed
Proposed changes
Fix an issue where a ScannerContext that is already done makes the query fail.
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
| gharchive/pull-request | 2023-07-18T08:44:14 | 2025-04-01T06:37:52.484245 | {
"authors": [
"BiteTheDDDDt"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/21923",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1963399845 | [Fix](inverted index) reorder ConjunctionQuery deconstruct order
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
| gharchive/pull-request | 2023-10-26T12:04:53 | 2025-04-01T06:37:52.485871 | {
"authors": [
"airborne12"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/25972",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1977755754 | cases Add backup & restore test case of dup table
Proposed changes
Issue Number: close #xxx
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
run buildall
| gharchive/pull-request | 2023-11-05T11:56:42 | 2025-04-01T06:37:52.489109 | {
"authors": [
"Bears0haunt"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/26433",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2050044875 | enhance Reduce log in tablet meta
Proposed changes
Reduce log in tablet meta
Further comments
If this is a relatively large or complex change, kick off the discussion at dev@doris.apache.org by explaining why you chose the solution you did and what alternatives you considered, etc...
run buildall
run buildall
| gharchive/pull-request | 2023-12-20T08:01:14 | 2025-04-01T06:37:52.491014 | {
"authors": [
"dataroaring",
"platoneko"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/28719",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2439155617 | Fix query not hit partition in original planner
Proposed changes
Fix an issue where a query does not hit the partition in the original planner, which causes severe performance degradation.
This issue seems to have been introduced by https://github.com/apache/doris/pull/21533
run buildall
run p0
run buildall
run p0
run buildall
run buildall
could u submit a pr to master?
| gharchive/pull-request | 2024-07-31T05:23:35 | 2025-04-01T06:37:52.493660 | {
"authors": [
"GoGoWen",
"morrySnow"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/38565",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2441463474 | 2.0upgrade dependencies
Proposed changes
Issue Number: close #xxx
run buildall
| gharchive/pull-request | 2024-08-01T04:48:12 | 2025-04-01T06:37:52.494729 | {
"authors": [
"CalvinKirs"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/38671",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2460848782 | [fix](inverted index)Add exception check when write bkd index
Proposed changes
We are not catching the exception when adding values in bkd_writer; if an error is thrown, the BE will run into a segmentation fault.
So we add an exception check here to avoid the coredump.
run buildall
| gharchive/pull-request | 2024-08-12T12:21:37 | 2025-04-01T06:37:52.495901 | {
"authors": [
"qidaye"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/39248",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2521667620 | fix Fix atomic restore with exists replicas
create replicas with base tablet and schema hash
ignore storage medium when creating replicas with the base tablet
The atomic restore is introduced in #40353.
run buildall
| gharchive/pull-request | 2024-09-12T08:09:23 | 2025-04-01T06:37:52.497232 | {
"authors": [
"w41ter"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/40734",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2631749548 | fix Remove "UNIQUE KEY (k1)" case for test_dynamic_partition_mod_distribution_key (#41002)
Proposed changes
pick: #41002
Remove "UNIQUE KEY (k1)" case, because for unique table hash column must be key column, but for that historical bugs, this case will fail if adding k2 unique key.
Seperate a p0 suite from docker suite because docker suite will not be triggered in community doris p0 CI.
run buildall
| gharchive/pull-request | 2024-11-04T03:39:32 | 2025-04-01T06:37:52.499155 | {
"authors": [
"TangSiyang2001"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/43181",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2759246453 | fix Fix bug of pr 44905
What problem does this PR solve?
The lock object in PasswordPolicy is written to disk; when a user upgrades from an older version, this lock will be null, which causes users to be unable to connect to Doris.
Code causing this issue, in PasswordPolicy:
@SerializedName(value = "lock")
private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
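A sketch of one common way to avoid persisting the lock at all (illustrative only, not necessarily what this PR does; the method names below are made up): stop serializing the field and recreate it when the object is read back from an old image.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class PasswordPolicySketch {
    // transient: Gson skips transient fields, so the lock is never written to the image
    private transient ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Objects read from an old image (or created by Gson without running field
    // initializers) can come back with a null lock, so re-create it defensively.
    // A real fix would do this once in the deserialization/read path rather than lazily.
    private ReentrantReadWriteLock lock() {
        if (lock == null) {
            lock = new ReentrantReadWriteLock();
        }
        return lock;
    }

    public void readSomething() {
        lock().readLock().lock();
        try {
            // ... read policy state ...
        } finally {
            lock().readLock().unlock();
        }
    }
}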
Issue Number: close #xxx
Related PR: #xxx
Problem Summary:
Release note
None
Check List (For Author)
Test
[ ] Regression test
[ ] Unit Test
[ ] Manual test (add detailed scripts or steps below)
[x] No need to test or manual test. Explain why:
[ ] This is a refactor/code format and no logic has been changed.
[ ] Previous test can cover this change.
[ ] No code files have been changed.
[ ] Other reason
Behavior changed:
[ ] No.
[ ] Yes.
Does this need documentation?
[ ] No.
[ ] Yes.
Check List (For Reviewer who merge this PR)
[ ] Confirm the release note
[ ] Confirm test cases
[ ] Confirm document
[ ] Add branch pick label
Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.
Please clearly describe your PR:
What problem was fixed (it's best to include specific error reporting information). How it was fixed.
Which behaviors were modified. What was the previous behavior, what is it now, why was it modified, and what possible impacts might there be.
What features were added. Why was this function added?
Which code was refactored and why was this part of the code refactored?
Which functions were optimized and what is the difference before and after the optimization?
run buildall
| gharchive/pull-request | 2024-12-26T04:05:47 | 2025-04-01T06:37:52.507561 | {
"authors": [
"Jibing-Li",
"Thearas"
],
"repo": "apache/doris",
"url": "https://github.com/apache/doris/pull/45996",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
544990773 | DRILL-7461: Do not pass ClassNotFoundException into SQLNonTransientConnectionException cause when checking that Drill is run in embedded mode
For the problem description, please refer to DRILL-7461.
+1, LGTM
| gharchive/pull-request | 2020-01-03T13:21:14 | 2025-04-01T06:37:52.508976 | {
"authors": [
"arina-ielchiieva",
"vvysotskyi"
],
"repo": "apache/drill",
"url": "https://github.com/apache/drill/pull/1950",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1275630033 | License check fails intermittently
This build and this build both failed during the license check, when writing reports. The build phase completes properly. However, the step to write reports repeatedly fails.
This PR is pretty simple: it is just repackaging of code that was in a previous PR that passed the license checks. It seems that the license check is just flaky.
As a side note: it seems impossible to run the rat check on a development machine? Rat complains about hundreds of Git, Eclipse and derived files when run with the same Maven command line as reported in the build.
Errors from the build:
[INFO] Building druid-s3-extensions 0.24.0-SNAPSHOT [18/69]
[INFO] --------------------------------[ jar ]---------------------------------
...
[INFO] Rat check: Summary over all files. Unapproved: 0, unknown: 0, generated: 0, approved: 48 licenses.
...
[INFO] druid-s3-extensions ................................ SUCCESS [ 0.073 s]
[INFO] druid-kinesis-indexing-service ..................... SUCCESS [ 0.059 s]
[INFO] druid-azure-extensions ............................. SUCCESS [ 0.651 s]
[INFO] druid-google-extensions ............................ SUCCESS [ 0.075 s]
[INFO] druid-hdfs-storage ................................. SUCCESS [ 0.074 s]
...
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
...
Generating dependency reports
Generating report for /home/travis/build/apache/druid
Generating report for /home/travis/build/apache/druid/extensions-core/s3-extensions
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/s3-extensions
Generating report for /home/travis/build/apache/druid/extensions-core/testing-tools
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/testing-tools
Generating report for /home/travis/build/apache/druid/extensions-core/kinesis-indexing-service
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/kinesis-indexing-service
Generating report for /home/travis/build/apache/druid/extensions-core/mysql-metadata-storage
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/mysql-metadata-storage
Generating report for /home/travis/build/apache/druid/extensions-core/simple-client-sslcontext
Encountered error [Command 'mvn -Ddependency.locations.enabled=false -Ddependency.details.enabled=false project-info-reports:dependencies' returned non-zero exit status 1] when generating report for /home/travis/build/apache/druid/extensions-core/simple-client-sslcontext
Rat has to be run on a clean checkout, yes. I have seen this one before. One thing we would like to do first is to surface this encountered error. It will need some changes in the python file.
| gharchive/issue | 2022-06-18T01:42:03 | 2025-04-01T06:37:52.511980 | {
"authors": [
"abhishekagarwal87",
"paul-rogers"
],
"repo": "apache/druid",
"url": "https://github.com/apache/druid/issues/12676",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
862081183 | Create dynamic config that can limit number of non-primary replicants loaded per coordination cycle
Start Release Notes
Adds new Dynamic Coordinator Config maxNonPrimaryReplicantsToLoad with default value of Integer.MAX_VALUE. This configuration can be used to set a hard upper limit on the number of non-primary replicants that will be loaded in a single Druid Coordinator execution cycle. The default value will mimic the behavior that exists today.
Example usage: If you set this configuration to 1000, the Coordinator duty RunRules will load a maximum of 1000 non-primary replicants in each RunRules execution. Meaning if you ingested 2000 segments with a replication factor of 2, the coordinator would load 2000 primary replicants and 1000 non-primary replicants on the first RunRules execution. Then the next RunRules execution, the last 1000 non-primary replicants will be loaded.
End Release Notes
Description
Add a new dynamic configuration to the coordinator that gives an operator the power to set a hard limit for the number of non-primary segment replicas that are loaded during a single execution of RunRules#run. This allows the operator to limit the amount of work loading non-primary replicas that RunRules will execute in a single run. An example of a reason to use a non-default value for this new config is if the operator wants to ensure that major events such as historical service(s) leaving the cluster, large ingestion jobs, etc. do not cause an abnormally long RunRules execution compared to the cluster's baseline runtime.
Example
cluster: 3 historical servers in _default_tier with 18k segments per server. Each segment belongs to a datasource that has the load rule "LoadForever 2 replicas on _default_tier". The cluster load status is 100% loaded.
Event: 1 historical drops out of the cluster.
Today: The coordinator will load all 18k segments that are now under-replicated in a single execution of RunRules (as long as Throttling limits are not hit and there is capacity)
My change: The coordinator can load a limited number of these under-replicated segments IF the operator has tuned the new dynamic config down from its default. For instance, the operator could say that it is 2k. Meaning it would take at least 9 coordination cycles to fully replicate the segments that were on the recently downed host.
Why
Operators need to balance lots of competing needs. Having the cluster fully replicated is great for HA. But if an event causes the coordinator to take 20 minutes to fully replicate because it has to load thousands of replicas, we sacrifice the timeliness of loading newly ingested segments that were inserted into the metastore after this long coordination cycle started. Maybe the operator cares more about that fresh data timeliness than the replication status, so they change the new config to a value that causes RunRules to take less time but require more execution cycles to bring the data back to full replication.
Really what the change aims to do is give an operator more flexibility. As written the default would give the operator the exact same functionality that they see today.
Design
I folded this new configuration and feature into ReplicationThrottler. That is essentially what it is doing, just in a new way compared to the current ReplicationThrottler functionality.
Key changed/added classes in this PR
CoordinatorDynamicConfig
ReplicationThrottler
RunRules
LoadRule
This PR has:
[x] been self-reviewed.
[x] added documentation for new or modified features or behaviors.
[x] added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
[x] added comments explaining the "why" and the intent of the code wherever would not be obvious for an unfamiliar reader.
[x] added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
[x] been tested in a test Druid cluster.
Thanks for the PR! This config should come in handy to reduce coordinator churn in case historicals fall out of the cluster. Have you thought about configuring maxNonPrimaryReplicantsToLoad specific to a tier instead of a global property?
Also could you please add some docs related to this property to the configuration docs?
I added the missing docs.
I had not thought about making this a per-tier setting. I'm coming at it from the angle of an operator not caring if the non-primary replicants are in tier X, Y, or Z, but rather just wanting to make sure the coordinator never spends too much time loading these segments and not doing its other jobs, mainly discovering and loading newly ingested segments.
https://github.com/apache/druid/blob/master/server/src/main/java/org/apache/druid/server/coordinator/CoordinatorDynamicConfig.java#L141
This PR had a similar issue that resulted in this block of code. I think I will apply the same solution for now, but long term it would be cool if this had a more elegant solution.
@a2l007 are you okay with merging this week now that the issue for pursuing a cleaner configuration strategy is created?
@capistrant Yup, LGTM. Thanks!
@capistrant , I was taking a look at the maxNonPrimaryReplicantsToLoad config but I couldn't really distinguish it from replicationThrottleLimit.
I see that you have made a similar observation here:
I folded this new configuration and feature into ReplicationThrottler. That is essentially what it is doing, just in a new way compared to the current ReplicationThrottler functionality.
Could you please help me understand the difference between the two? In which case would we want to tune this config rather than tuning the replicationThrottleLimit itself?
My observation is that maxNonPrimaryReplicantsToLoad is a new way of throttling replication. Not that it is doing the same thing as replicationThrottleLimit
replicationThrottleLimit is a limit on the number of in-progress replica loads at any one time during RunRules. We track the in-progress loads in a list. Items are removed from said list when a LoadQueuePeon issues a callback to remove them on completion of the load.
maxNonPrimaryReplicantsToLoad is a hard limit on the number of replica loads during RunRules. Once it is hit, no more non-primary replicas are created for the rest of RunRules.
You'd want to tune maxNonPrimaryReplicantsToLoad if you want to put an upper bound on the work to load non-primary replicas done by the coordinator per execution of RunRules. The reason we use it at my org is that we want the coordinator to avoid "putting its head in the sand" and loading replicas for an undesirable amount of time instead of finishing its duties and refreshing its metadata. An example of an "undesirable amount of work" is if a Historical drops out of the cluster momentarily while the Coordinator is refreshing its SegmentReplicantLookup. The coordinator all of a sudden thinks X segments are under-replicated. But if the Historical is coming back online (say after a restart to deploy new configs), we don't want the Coordinator to spin and load those X segments when it could just finish its duties and notice that the segments are not under-replicated anymore.
I'm not aware of reasons for using replicationThrottleLimit. It didn't meet my org's needs for throttling replication, which is why I introduced the new config. I guess it is a way to avoid flooding the cluster with replica loads? My clusters have actually tuned that value up to avoid hitting it at the low default that exists. We don't care about the number of in-flight loads, we just care about limiting the total number of replica loads per RunRules execution.
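To make the contrast concrete, here is a rough sketch of the two checks (illustrative Java only, not the actual Druid LoadRule/ReplicationThrottler code; names are simplified):

// Simplified illustration of how the two limits behave differently
class ThrottleSketch {
    int replicationThrottleLimit;      // cap on replica loads currently in flight
    int maxNonPrimaryReplicantsToLoad; // hard cap on replica loads created in one RunRules pass

    int inFlight = 0;        // decremented when a load-completion callback fires
    int createdThisRun = 0;  // never decremented during the run

    boolean canLoadNonPrimaryReplica() {
        if (createdThisRun >= maxNonPrimaryReplicantsToLoad) {
            return false; // hard stop for the rest of this RunRules execution
        }
        if (inFlight >= replicationThrottleLimit) {
            return false; // temporary stop: too many loads in flight right now
        }
        return true;
    }

    void onReplicaAssigned()     { inFlight++; createdThisRun++; }
    void onReplicaLoadFinished() { inFlight--; } // only the in-flight counter goes back down
}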
Let me know if that clarification is still not making sense.
Thanks for the explanation, @capistrant !
I completely agree with your opinion that coordinator should not get stuck in a single run and should always keep moving, thereby refreshing its metadata snapshot. I suppose the other open PR from you is in the same vein.
I also think replicationThrottleLimit should probably have done this in the first place, as it was trying to solve the same problem that you describe. Putting the limit on the number of replica loads "currently in progress" is not a very good safeguard to achieve this.
Thanks for adding this config, as I am sure it must come in handy for proper coordinator management.
| gharchive/pull-request | 2021-04-19T23:10:43 | 2025-04-01T06:37:52.532398 | {
"authors": [
"a2l007",
"capistrant",
"kfaraz"
],
"repo": "apache/druid",
"url": "https://github.com/apache/druid/pull/11135",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
456724475 | What are the future plans for dubbo-php-framework?
I came across dubbo-php-framework on the Apache GitHub through my work and became quite interested in it. Will a stable version be released soon? It currently seems to support only fastjson for data transmission; will other serialization formats be supported in the future? What are your plans for dubbo-php-framework going forward? I would like to get involved in development based on the project's roadmap!
Thanks for reaching out. dubbo-php-framework was donated by 乐信 (Lexin), and recently activity on this project seems to have ceased. I will try to contact the maintainers and see if they are still active.
BTW, if you find anything you can improve, do not hesitate to send a pull request!
+1 for dubbo-php-framework. We are also studying it and gradually starting to use it.
First of all, thank you all very much for your interest in dubbo-php-framework. The version currently on Git has been hardened by the various business systems in Lexin's (乐信) production environment; it is already a stable version and can be used directly in production.
Lexin's technology stack has since moved from PHP to Java, so we cannot invest many more resources in dubbo-php-framework. Going forward it will depend on interested members of the community to keep developing and optimizing this version. A few directions we considered internally, for reference:
1. On the consumer side, replace Redis with shared memory to reduce the dependency on external third-party components.
2. Have the consumer side pull only the provider address information it actually uses, reducing the memory and network overhead of syncing addresses on the consumer side.
3. Support other serialization/deserialization formats, ideally as plugins, just like the Java version.
4. Add a circuit-breaker/degradation mechanism on the consumer side that works together with the overload protection on the provider side, to cut off failure propagation paths.
@0robin0 Could the service and consumer parts be extracted into two separate projects? Some people only care about the consumer side, or only about the service side. Splitting them out would make each part more focused and flexible, and easier to maintain. :)
The consumer and provider are logically two independent modules already; if needed, they can be split into two separate projects.
As far as I can see, services are currently looked up through ZooKeeper and then invoked. Is there a way to call a service directly by passing an IP and port?
| gharchive/issue | 2019-06-17T03:22:40 | 2025-04-01T06:37:52.545289 | {
"authors": [
"0robin0",
"crazyxman",
"keaixiaou",
"ralf0131",
"skylway"
],
"repo": "apache/dubbo-php-framework",
"url": "https://github.com/apache/dubbo-php-framework/issues/21",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1610053772 | java.lang.RuntimeException: publish nacos metadata failed
Versions
dubbo.version 3.1.7
spring-boot.version 2.3.12.RELEASE
spring-cloud.version Hoxton.SR12
spring-cloud-alibaba.version 2.2.10-RC1
nacos-client version 2.2.0
nacos-server version 2.2.0
nacos-all: data source injected via the SPI mechanism: postgresql
Dependencies
spring-cloud-starter-alibaba-nacos-discovery
dubbo-spring-boot-starter
Issue 1: If use-as-config-center: false and use-as-metadata-center: false are not configured, the exception in the title is thrown.
Issue 2: If MySQL or Derby is used instead of PostgreSQL, the application starts normally without configuring use-as-config-center or use-as-metadata-center.
The PostgreSQL data source extension plugin code is not the problem; doubts about the data source plugin code have already been ruled out.
Issue 3: My understanding is that no matter which data source is used (MySQL or PostgreSQL), Dubbo will by default use the registry instance as both the config center and the metadata center, but I don't know why the error occurs with PostgreSQL under the default configuration use-as-config-center: true and use-as-metadata-center: true.
YAML configuration
spring:
application:
name: rpc
server:
port: 9001
dubbo:
application:
name: rpc
registry:
address: nacos://localhost:8848?username=nacos&password=nacos
protocol:
name: dubbo
port: -1
The Nacos configuration is the default configuration
server.servlet.contextPath=/nacos
server.error.include-message=ALWAYS
server.port=8848
spring.datasource.platform=postgresql
db.num=1
db.url.0=jdbc:postgresql://127.0.0.1:5432/nacos_config?reWriteBatchedInserts=true&useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false
db.user.0=postgres
db.password.0=123456
db.pool.config.driverClassName=org.postgresql.Driver
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2
management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false
server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i
server.tomcat.basedir=file:.
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
nacos.core.auth.system.type=nacos
nacos.core.auth.enabled=false
nacos.core.auth.caching.enabled=true
nacos.core.auth.enable.userAgentAuthWhite=false
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security
nacos.core.auth.plugin.nacos.token.expire.seconds=18000
nacos.core.auth.plugin.nacos.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
nacos.istio.mcp.server.enabled=false
Exception log:
:: Dubbo (v3.1.7) : https://github.com/apache/dubbo
:: Discuss group : dev@dubbo.apache.org
, dubbo version: 3.1.7, current host: 172.0.4.184
(Spring Boot ASCII art banner)
:: Spring Boot :: (v2.3.12.RELEASE)
2023-03-05 12:08:42.555 INFO 5284 --- [ main] c.b.b.b.c.m.rpc.BossBootRpcApplication : No active profile set, falling back to default profiles: default
2023-03-05 12:08:43.279 INFO 5284 --- [ main] o.apache.dubbo.rpc.model.FrameworkModel : [DUBBO] Dubbo Framework[1] is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.300 INFO 5284 --- [ main] o.a.d.c.r.GlobalResourcesRepository : [DUBBO] Creating global shared handler ..., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.373 INFO 5284 --- [ main] o.a.dubbo.rpc.model.ApplicationModel : [DUBBO] Dubbo Application1.0 is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.373 INFO 5284 --- [ main] org.apache.dubbo.rpc.model.ScopeModel : [DUBBO] Dubbo Module[1.0.0] is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.430 INFO 5284 --- [ main] o.a.d.c.context.AbstractConfigManager : [DUBBO] Config settings: {dubbo.config.mode=STRICT, dubbo.config.ignore-duplicated-interface=false}, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.431 INFO 5284 --- [ main] o.a.d.c.context.AbstractConfigManager : [DUBBO] Config settings: {dubbo.config.mode=STRICT, dubbo.config.ignore-duplicated-interface=false}, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.447 INFO 5284 --- [ main] o.a.d.c.utils.SerializeSecurityManager : [DUBBO] Serialize check serializable: true, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.448 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.allowlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.475 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize blocked list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.blockedlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.625 INFO 5284 --- [ main] o.a.dubbo.rpc.model.ApplicationModel : [DUBBO] Dubbo Application1.1 is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.626 INFO 5284 --- [ main] org.apache.dubbo.rpc.model.ScopeModel : [DUBBO] Dubbo Module[1.1.0] is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.657 INFO 5284 --- [ main] o.a.d.c.context.AbstractConfigManager : [DUBBO] Config settings: {dubbo.config.mode=STRICT, dubbo.config.ignore-duplicated-interface=false}, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.657 INFO 5284 --- [ main] o.a.d.c.context.AbstractConfigManager : [DUBBO] Config settings: {dubbo.config.mode=STRICT, dubbo.config.ignore-duplicated-interface=false}, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.664 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.allowlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.665 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize blocked list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.blockedlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.683 INFO 5284 --- [ main] o.a.d.c.s.c.DubboSpringInitializer : [DUBBO] Use default application: Dubbo Application1.1, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.684 INFO 5284 --- [ main] org.apache.dubbo.rpc.model.ScopeModel : [DUBBO] Dubbo Module[1.1.1] is created, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.691 INFO 5284 --- [ main] o.a.d.c.context.AbstractConfigManager : [DUBBO] Config settings: {dubbo.config.mode=STRICT, dubbo.config.ignore-duplicated-interface=false}, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.697 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.allowlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.697 INFO 5284 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize blocked list from jar:file:/C:/Users/Administrator/.m2/repository/org/apache/dubbo/dubbo/3.1.7/dubbo-3.1.7.jar!/security/serialize.blockedlist, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.700 INFO 5284 --- [ main] o.a.d.c.s.c.DubboSpringInitializer : [DUBBO] Use default module model of target application: Dubbo Module[1.1.1], dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:43.700 INFO 5284 --- [ main] o.a.d.c.s.c.DubboSpringInitializer : [DUBBO] Bind Dubbo Module[1.1.1] to spring container: org.springframework.beans.factory.support.DefaultListableBeanFactory@149dd36b, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:44.233 WARN 5284 --- [ main] o.s.boot.actuate.endpoint.EndpointId : Endpoint ID 'service-registry' contains invalid characters, please migrate to a valid format.
2023-03-05 12:08:44.301 INFO 5284 --- [ main] c.s.b.f.a.ServiceAnnotationPostProcessor : [DUBBO] BeanNameGenerator bean can't be found in BeanFactory with name [org.springframework.context.annotation.internalConfigurationBeanNameGenerator], dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:44.301 INFO 5284 --- [ main] c.s.b.f.a.ServiceAnnotationPostProcessor : [DUBBO] BeanNameGenerator will be a instance of org.springframework.context.annotation.AnnotationBeanNameGenerator , it maybe a potential problem on bean name generation., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:44.307 INFO 5284 --- [ main] c.s.b.f.a.ServiceAnnotationPostProcessor : [DUBBO] Found 1 classes annotated by Dubbo @Service under package [xxx.rpc]: [xxx.rpc.DemoServiceImpl], dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:44.328 INFO 5284 --- [ main] c.s.b.f.a.ServiceAnnotationPostProcessor : [DUBBO] Register ServiceBean[ServiceBean:xxx.service.DemoService::]: Root bean: class [org.apache.dubbo.config.spring.ServiceBean]; scope=; abstract=false; lazyInit=null; autowireMode=0; dependencyCheck=0; autowireCandidate=true; primary=false; factoryBeanName=null; factoryMethodName=null; initMethodName=null; destroyMethodName=null, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:44.451 INFO 5284 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=af851f6e-7934-3c8d-ab9d-41ac1c2cc380
2023-03-05 12:08:44.934 INFO 5284 --- [ main] f.a.ReferenceAnnotationBeanPostProcessor : [DUBBO] class org.apache.dubbo.config.spring.beans.factory.annotation.ReferenceAnnotationBeanPostProcessor was destroying!, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:45.639 INFO 5284 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 9001 (http)
2023-03-05 12:08:45.652 INFO 5284 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2023-03-05 12:08:45.652 INFO 5284 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.46]
2023-03-05 12:08:45.957 INFO 5284 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2023-03-05 12:08:45.957 INFO 5284 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 3377 ms
2023-03-05 12:08:46.760 INFO 5284 --- [ main] o.a.d.c.s.c.DubboConfigBeanInitializer : [DUBBO] loading dubbo config beans ..., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:46.773 INFO 5284 --- [ main] o.a.d.c.s.c.DubboConfigBeanInitializer : [DUBBO] dubbo config beans are loaded., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:47.059 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] No value is configured in the registry, the DynamicConfigurationFactory extension[name : nacos] supports as the config center, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:47.061 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] The registry[<dubbo:registry address="nacos://localhost:8848?username=nacos&password=nacos" protocol="nacos" port="8848" parameters="org.apache.dubbo.common.url.component.URLParam$URLParamMap@3686435f" />] will be used as the config center, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:47.076 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] use registry as config-center: <dubbo:config-center highestPriority="false" id="config-center-nacos-localhost-8848" address="nacos://localhost:8848?username=nacos&password=nacos" protocol="nacos" port="8848" parameters="{client=null, password=nacos, username=nacos}" />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:47.218 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2023-03-05 12:08:47.218 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2023-03-05 12:08:51.421 WARN 5284 --- [ main] o.a.d.common.config.ConfigurationUtils : [DUBBO] Config center was specified, but no config item found., dubbo version: 3.1.7, current host: 172.0.4.184, error code: 0-12. This may be caused by , go to https://dubbo.apache.org/faq/0/12 to find instructions.
2023-03-05 12:08:51.421 WARN 5284 --- [ main] o.a.d.common.config.ConfigurationUtils : [DUBBO] Config center was specified, but no config item found., dubbo version: 3.1.7, current host: 172.0.4.184, error code: 0-12. This may be caused by , go to https://dubbo.apache.org/faq/0/12 to find instructions.
2023-03-05 12:08:51.447 INFO 5284 --- [ main] o.a.dubbo.config.context.ConfigManager : [DUBBO] The current configurations or effective configurations are as follows:, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.448 INFO 5284 --- [ main] o.a.dubbo.config.context.ConfigManager : [DUBBO] <dubbo:application parameters="{}" name="rpc" qosEnable="true" protocol="dubbo" />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.448 INFO 5284 --- [ main] o.a.dubbo.config.context.ConfigManager : [DUBBO] <dubbo:protocol port="-1" name="dubbo" />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.449 INFO 5284 --- [ main] o.a.dubbo.config.context.ConfigManager : [DUBBO] <dubbo:registry address="nacos://localhost:8848?username=nacos&password=nacos" protocol="nacos" port="8848" parameters="org.apache.dubbo.common.url.component.URLParam$URLParamMap@3686435f" />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.449 INFO 5284 --- [ main] o.a.dubbo.config.context.ConfigManager : [DUBBO] <dubbo:ssl />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.490 INFO 5284 --- [ main] o.a.d.c.deploy.DefaultModuleDeployer : [DUBBO] Dubbo Module[1.1.0] has been initialized!, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.497 INFO 5284 --- [ main] o.a.d.c.deploy.DefaultModuleDeployer : [DUBBO] Dubbo Module[1.1.1] has been initialized!, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.504 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] No value is configured in the registry, the MetadataReportFactory extension[name : nacos] supports as the metadata center, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.504 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] The registry[<dubbo:registry address="nacos://localhost:8848?username=nacos&password=nacos" protocol="nacos" port="8848" parameters="org.apache.dubbo.common.url.component.URLParam$URLParamMap@3686435f" />] will be used as the metadata center, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.509 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] use registry as metadata-center: <dubbo:metadata-report address="nacos://localhost:8848?username=nacos&password=nacos" protocol="nacos" port="8848" parameters="{password=nacos, client=null, username=nacos}" />, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.668 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2023-03-05 12:08:51.668 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2023-03-05 12:08:51.941 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] Dubbo Application1.1 has been initialized!, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:51.992 WARN 5284 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2023-03-05 12:08:51.992 INFO 5284 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2023-03-05 12:08:51.998 WARN 5284 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources.
2023-03-05 12:08:51.998 INFO 5284 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath.
2023-03-05 12:08:52.398 INFO 5284 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2023-03-05 12:08:56.984 INFO 5284 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
2023-03-05 12:08:57.278 INFO 5284 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 9001 (http) with context path ''
2023-03-05 12:08:57.301 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2023-03-05 12:08:57.301 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2023-03-05 12:08:57.445 INFO 5284 --- [ main] c.a.c.n.registry.NacosServiceRegistry : nacos registry, DEFAULT_GROUP rpc 172.0.4.184:9001 register finished
2023-03-05 12:08:58.586 INFO 5284 --- [ main] o.a.d.c.deploy.DefaultModuleDeployer : [DUBBO] Dubbo Module[1.1.1] is starting., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:08:58.587 INFO 5284 --- [ main] o.a.d.c.d.DefaultApplicationDeployer : [DUBBO] Dubbo Application1.1 is starting., dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:00.236 WARN 5284 --- [ main] org.apache.dubbo.config.ServiceConfig : [DUBBO] Use random available port(20880) for protocol dubbo, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 5-8. This may be caused by , go to https://dubbo.apache.org/faq/5/8 to find instructions.
2023-03-05 12:09:02.022 INFO 5284 --- [ main] org.apache.dubbo.qos.server.Server : [DUBBO] qos-server bind localhost:22222, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.026 INFO 5284 --- [ main] org.apache.dubbo.config.ServiceConfig : [DUBBO] Export dubbo service xxx.service.DemoService to local registry url : injvm://127.0.0.1/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&side=provider&timestamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.027 INFO 5284 --- [ main] org.apache.dubbo.config.ServiceConfig : [DUBBO] Register dubbo service xxx.service.DemoService url dubbo://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider&timestamp=1677989338696 to registry localhost:8848, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.227 INFO 5284 --- [ main] o.a.d.remoting.transport.AbstractServer : [DUBBO] Start NettyServer bind /0.0.0.0:20880, export /172.0.4.184:20880, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.317 INFO 5284 --- [ main] o.a.d.r.c.m.store.MetaCacheManager : [DUBBO] Successfully loaded meta cache from file .metadata.rpc.nacos.localhost:8848, entries 0, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.325 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2023-03-05 12:09:02.326 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2023-03-05 12:09:02.601 INFO 5284 --- [ main] o.a.dubbo.metadata.MappingCacheManager : [DUBBO] Successfully loaded mapping cache from file .mapping.rpc, entries 0, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.621 INFO 5284 --- [ main] o.a.d.r.c.m.MigrationRuleListener : [DUBBO] Listening for migration rules on dataId rpc.migration, group DUBBO_SERVICEDISCOVERY_MIGRATION, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.640 INFO 5284 --- [ main] org.apache.dubbo.config.ServiceConfig : [DUBBO] Register dubbo service xxx.service.DemoService url dubbo://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696 to registry localhost:8848, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.658 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.impl.NacosClientAuthServiceImpl success.
2023-03-05 12:09:02.658 INFO 5284 --- [ main] c.a.n.p.a.s.c.ClientAuthPluginManager : [ClientAuthPluginManager] Load ClientAuthService com.alibaba.nacos.client.auth.ram.RamClientAuthServiceImpl success.
2023-03-05 12:09:02.890 INFO 5284 --- [ main] o.a.dubbo.registry.nacos.NacosRegistry : [DUBBO] Register: dubbo://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.905 INFO 5284 --- [ main] o.a.dubbo.registry.nacos.NacosRegistry : [DUBBO] Subscribe: provider://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&category=configurators&check=false&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.930 WARN 5284 --- [ main] o.a.dubbo.registry.nacos.NacosRegistry : [DUBBO] Ignore empty notify urls for subscribe url provider://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&category=configurators&check=false&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 1-4. This may be caused by , go to https://dubbo.apache.org/faq/1/4 to find instructions.
2023-03-05 12:09:02.938 WARN 5284 --- [ main] o.a.dubbo.registry.nacos.NacosRegistry : [DUBBO] Ignore empty notify urls for subscribe url provider://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&category=configurators&check=false&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 1-4. This may be caused by , go to https://dubbo.apache.org/faq/1/4 to find instructions.
2023-03-05 12:09:02.953 INFO 5284 --- [ main] o.a.d.m.d.TypeDefinitionBuilder : [DUBBO] Throw classNotFound (com/google/protobuf/GeneratedMessageV3) in class org.apache.dubbo.metadata.definition.protobuf.ProtobufTypeBuilder, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.956 INFO 5284 --- [ main] org.apache.dubbo.config.ServiceConfig : [DUBBO] Try to register interface application mapping for service xxx.service.DemoService, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.956 INFO 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] store provider metadata. Identifier : org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b; definition: FullServiceDefinition{parameters=org.apache.dubbo.common.url.component.URLParam$URLParamMap@e2721086} ServiceDefinition [canonicalName=xxx.service.DemoService, codeSource=file:/G:/xxx/target/classes/, methods=[MethodDefinition [name=abc, parameterTypes=[], returnType=java.lang.String]]], dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:02.981 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 1. Next retry delay: 98. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.012 ERROR 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 1-37. This may be caused by , go to https://dubbo.apache.org/faq/1/37 to find instructions.
java.lang.RuntimeException: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:387) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.doStoreProviderMetadata(NacosMetadataReport.java:224) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:283) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:271) [dubbo-3.1.7.jar:3.1.7]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_331]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_331]
at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_331]
2023-03-05 12:09:03.013 ERROR 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] Failed to put provider metadata org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b in FullServiceDefinition{parameters=org.apache.dubbo.common.url.component.URLParam$URLParamMap@e2721086} ServiceDefinition [canonicalName=xxx.service.DemoService, codeSource=file:/G:/xxx/target/classes/, methods=[MethodDefinition [name=abc, parameterTypes=[], returnType=java.lang.String]]], cause: Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 3-2. This may be caused by , go to https://dubbo.apache.org/faq/3/2 to find instructions.
java.lang.RuntimeException: Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:391) ~[dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.doStoreProviderMetadata(NacosMetadataReport.java:224) ~[dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:283) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:271) [dubbo-3.1.7.jar:3.1.7]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_331]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_331]
at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_331]
Caused by: java.lang.RuntimeException: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:387) ~[dubbo-3.1.7.jar:3.1.7]
... 6 common frames omitted
2023-03-05 12:09:03.182 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 2. Next retry delay: 77. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.375 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 3. Next retry delay: 78. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.460 WARN 5284 --- [tion-0-thread-1] o.a.dubbo.registry.nacos.NacosRegistry : [DUBBO] Ignore empty notify urls for subscribe url provider://172.0.4.184:20880/xxx.service.DemoService?anyhost=true&application=rpc&background=false&bind.ip=172.0.4.184&bind.port=20880&category=configurators&check=false&deprecated=false&dubbo=2.0.2&dynamic=true&generic=false&interface=xxx.service.DemoService&methods=abc&pid=5284&qos.enable=true&release=3.1.7&service-name-mapping=true&side=provider×tamp=1677989338696, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 1-4. This may be caused by , go to https://dubbo.apache.org/faq/1/4 to find instructions.
2023-03-05 12:09:03.513 INFO 5284 --- [yTimer-thread-1] stractMetadataReport$MetadataReportRetry : [DUBBO] start to retry task for metadata report. retry times:1, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.514 INFO 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] store provider metadata. Identifier : org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b; definition: FullServiceDefinition{parameters=org.apache.dubbo.common.url.component.URLParam$URLParamMap@e2721086} ServiceDefinition [canonicalName=xxx.service.DemoService, codeSource=file:/G:/xxx/target/classes/, methods=[MethodDefinition [name=abc, parameterTypes=[], returnType=java.lang.String]]], dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.523 ERROR 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 1-37. This may be caused by , go to https://dubbo.apache.org/faq/1/37 to find instructions.
java.lang.RuntimeException: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:387) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.doStoreProviderMetadata(NacosMetadataReport.java:224) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:283) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:271) [dubbo-3.1.7.jar:3.1.7]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_331]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_331]
at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_331]
2023-03-05 12:09:03.524 ERROR 5284 --- [Report-thread-1] o.a.d.m.store.nacos.NacosMetadataReport : [DUBBO] Failed to put provider metadata org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b in FullServiceDefinition{parameters=org.apache.dubbo.common.url.component.URLParam$URLParamMap@e2721086} ServiceDefinition [canonicalName=xxx.service.DemoService, codeSource=file:/G:/xxx/target/classes/, methods=[MethodDefinition [name=abc, parameterTypes=[], returnType=java.lang.String]]], cause: Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed, dubbo version: 3.1.7, current host: 172.0.4.184, error code: 3-2. This may be caused by , go to https://dubbo.apache.org/faq/3/2 to find instructions.
java.lang.RuntimeException: Failed to put org.apache.dubbo.metadata.report.identifier.MetadataIdentifier@210f382b to nacos {"annotations":[],"canonicalName":"xxx.service.DemoService","codeSource":"file:/G:/xxx/target/classes/","methods":[{"annotations":[],"name":"abc","parameterTypes":[],"parameters":[],"returnType":"java.lang.String"}],"parameters":{"pid":"5284","anyhost":"true","interface":"xxx.service.DemoService","side":"provider","application":"rpc","dubbo":"2.0.2","release":"3.1.7","bind.ip":"172.0.4.184","methods":"abc","background":"false","deprecated":"false","dynamic":"true","service-name-mapping":"true","qos.enable":"true","generic":"false","bind.port":"20880","timestamp":"1677989338696"},"types":[{"enums":[],"items":[],"properties":{},"type":"java.lang.String"}],"uniqueId":"xxx.service.DemoService@file:/G:/xxx/target/classes/"}, cause: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:391) ~[dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.doStoreProviderMetadata(NacosMetadataReport.java:224) ~[dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.storeProviderMetadataTask(AbstractMetadataReport.java:283) [dubbo-3.1.7.jar:3.1.7]
at org.apache.dubbo.metadata.report.support.AbstractMetadataReport.lambda$storeProviderMetadata$0(AbstractMetadataReport.java:271) [dubbo-3.1.7.jar:3.1.7]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_331]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_331]
at java.lang.Thread.run(Thread.java:750) ~[na:1.8.0_331]
Caused by: java.lang.RuntimeException: publish nacos metadata failed
at org.apache.dubbo.metadata.store.nacos.NacosMetadataReport.storeMetadata(NacosMetadataReport.java:387) ~[dubbo-3.1.7.jar:3.1.7]
... 6 common frames omitted
2023-03-05 12:09:03.575 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 4. Next retry delay: 89. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.799 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 5. Next retry delay: 41. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:03.976 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 6. Next retry delay: 83. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:04.177 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 7. Next retry delay: 92. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:04.373 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 8. Next retry delay: 12. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
2023-03-05 12:09:04.574 INFO 5284 --- [ main] o.a.d.r.c.m.MetadataServiceNameMapping : [DUBBO] Failed to publish service name mapping to metadata center by cas operation. Times: 9. Next retry delay: 21. Service Interface: xxx.service.DemoService. Origin Content: null. Ticket: 0. Excepted context: rpc, dubbo version: 3.1.7, current host: 172.0.4.184
Process finished with exit code -1
In his_config_info, gmt_create is a NOT NULL column; in MySQL its default value is the current timestamp CURRENT_TIMESTAMP.
For PostgreSQL, the gmt_create column of his_config_info must likewise be given CURRENT_TIMESTAMP as its default value (the Nacos insert shown below does not include gmt_create in its column list, so the database default is what fills it).
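A minimal sketch of that change as PostgreSQL DDL; the table and column names come from the discussion above, but the exact statement is my own assumption rather than an official Nacos migration script:
-- Give gmt_create a server-side default so that inserts which omit the column
-- (as the Nacos insert below does) no longer violate its NOT NULL constraint.
ALTER TABLE his_config_info
    ALTER COLUMN gmt_create SET DEFAULT CURRENT_TIMESTAMP;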
Nacos source code:
com.alibaba.nacos.config.server.service.repository.extrnal.ExternalHistoryConfigInfoPersistServiceImpl
@Override
public void insertConfigHistoryAtomic(long id, ConfigInfo configInfo, String srcIp, String srcUser,
final Timestamp time, String ops) {
String appNameTmp = StringUtils.isBlank(configInfo.getAppName()) ? StringUtils.EMPTY : configInfo.getAppName();
String tenantTmp = StringUtils.isBlank(configInfo.getTenant()) ? StringUtils.EMPTY : configInfo.getTenant();
final String md5Tmp = MD5Utils.md5Hex(configInfo.getContent(), Constants.ENCODE);
String encryptedDataKey = StringUtils.isBlank(configInfo.getEncryptedDataKey()) ? StringUtils.EMPTY
: configInfo.getEncryptedDataKey();
try {
HistoryConfigInfoMapper historyConfigInfoMapper = mapperManager.findMapper(
dataSourceService.getDataSourceType(), TableConstant.HIS_CONFIG_INFO);
jt.update(historyConfigInfoMapper.insert(
Arrays.asList("id", "data_id", "group_id", "tenant_id", "app_name", "content", "md5", "src_ip",
"src_user", "gmt_modified", "op_type", "encrypted_data_key")), id, configInfo.getDataId(),
configInfo.getGroup(), tenantTmp, appNameTmp, configInfo.getContent(), md5Tmp, srcIp, srcUser, time,
ops, encryptedDataKey);
} catch (DataAccessException e) {
LogUtil.FATAL_LOG.error("[db-error] " + e, e);
throw e;
}
}
Exception log from config-fatal.log:
org.springframework.dao.DataIntegrityViolationException: PreparedStatementCallback; SQL [INSERT INTO his_config_info(id, data_id, group_id, tenant_id, app_name, content, md5, src_ip, src_user, gmt_modified, op_type, encrypted_data_key) VALUES(?,?,?,?,?,?,?,?,?,?,?,?)]; 错误: 在字段 "gmt_create" 中空值违反了非空约束
详细:失败, 行包含(0, 4591, com.xxx.modules.service.DemoService, mapping, , rpc, da0fb2ac892daab4810fb781272173c1, null, 2023-03-05 15:50:27.42, null, 172.0.4.184, I , , ).; nested exception is org.postgresql.util.PSQLException: 错误: 在字段 "gmt_create" 中空值违反了非空约束
详细:失败, 行包含(0, 4591, com.xxx.modules.service.DemoService, mapping, , rpc, da0fb2ac892daab4810fb781272173c1, null, 2023-03-05 15:50:27.42, null, 172.0.4.184, I , , ).
at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:251)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:70)
at org.springframework.jdbc.core.JdbcTemplate.translateException(JdbcTemplate.java:1541)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:667)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:960)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1015)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:1025)
at com.alibaba.nacos.config.server.service.repository.extrnal.ExternalHistoryConfigInfoPersistServiceImpl.insertConfigHistoryAtomic(ExternalHistoryConfigInfoPersistServiceImpl.java:111)
at com.alibaba.nacos.config.server.service.repository.extrnal.ExternalConfigInfoPersistServiceImpl.lambda$addConfigInfo$0(ExternalConfigInfoPersistServiceImpl.java:156)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at com.alibaba.nacos.config.server.service.repository.extrnal.ExternalConfigInfoPersistServiceImpl.addConfigInfo(ExternalConfigInfoPersistServiceImpl.java:149)
at com.alibaba.nacos.config.server.service.repository.extrnal.ExternalConfigInfoPersistServiceImpl.insertOrUpdateCas(ExternalConfigInfoPersistServiceImpl.java:191)
at com.alibaba.nacos.config.server.remote.ConfigPublishRequestHandler.handle(ConfigPublishRequestHandler.java:119)
at com.alibaba.nacos.config.server.remote.ConfigPublishRequestHandler.handle(ConfigPublishRequestHandler.java:55)
at com.alibaba.nacos.core.remote.RequestHandler.handleRequest(RequestHandler.java:58)
at com.alibaba.nacos.core.remote.RequestHandler$$FastClassBySpringCGLIB$$6a0564cd.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:793)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:763)
at org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:89)
at com.alibaba.nacos.config.server.aspect.RequestLogAspect.logClientRequestRpc(RequestLogAspect.java:215)
at com.alibaba.nacos.config.server.aspect.RequestLogAspect.interfacePublishSingleRpc(RequestLogAspect.java:116)
| gharchive/issue | 2023-03-05T04:42:06 | 2025-04-01T06:37:52.699230 | {
"authors": [
"chenglutao"
],
"repo": "apache/dubbo",
"url": "https://github.com/apache/dubbo/issues/11726",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1653480726 | When an interface call passes an object as a parameter, a serialization allowlist error is raised, telling me to add the class to the allowlist. How do I do that?
I tried looking at other projects online to solve this, but I did not find anything that works. I asked ChatGPT and none of its answers took effect, so I am asking here how I should configure this. The error occurs on the consumer side. Below is my consumer-side Dubbo configuration, and below that is the consumer-side error log.
dubbo:
  application:
    name: cloud-demo-nacos-order-consumer
    qos-enable: false
  registry:
    address: nacos://localhost:8848
2023-04-04 16:59:36.027 ERROR 10204 --- [lientWorker-4-1] o.a.d.c.u.DefaultSerializeClassChecker : [DUBBO] [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it., dubbo version: 3.1.8, current host: 192.168.110.1, error code: 4-21. This may be caused by , go to https://dubbo.apache.org/faq/4/21 to find instructions.
2023-04-04 16:59:36.041 ERROR 10204 --- [p-nio-88-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.dubbo.rpc.RpcException: Failed to invoke the method add in the service com.lc.service.ITestInfoService. Tried 3 times of the providers [192.168.110.1:20880] (1/1) from the registry localhost:8848 on the consumer 192.168.110.1 using the dubbo version 3.1.8. Last error is: Failed to invoke remote method: add, provider: DefaultServiceInstance{serviceName='cloud-provider-service', host='192.168.110.1', port=20880, enabled=true, healthy=true, metadata={dubbo.metadata-service.url-params={"connections":"1","version":"1.0.0","dubbo":"2.0.2","release":"3.1.8","side":"provider","port":"20880","protocol":"dubbo"}, dubbo.endpoints=[{"port":20880,"protocol":"dubbo"}], dubbo.metadata.revision=4d666a0dcbf546a2f9a43c423b9d40fe, dubbo.metadata.storage-type=local, timestamp=1680596517041}}, service{name='com.lc.service.ITestInfoService',group='null',version='null',protocol='dubbo',port='20880',params={check.serializable=true, side=provider, release=3.1.8, methods=add,addBatch,count,count,delete,deleteBack,deleteBatch,deleteBatchBack,get,getBaseMapper,getById,getEntityClass,getMap,getObj,getOne,getOne,ktQuery,ktUpdate,lambdaQuery,lambdaQuery,lambdaUpdate,list,list,listByIds,listByLike,listByMap,listMaps,listMaps,listObjs,listObjs,listObjs,listObjs,page,page,pageByCondition,pageByLike,pageMaps,pageMaps,query,remove,removeBatchByIds,removeBatchByIds,removeBatchByIds,removeBatchByIds,removeById,removeById,removeById,removeByIds,removeByIds,removeByMap,save,saveBatch,saveBatch,saveOrUpdate,saveOrUpdate,saveOrUpdateBatch,saveOrUpdateBatch,update,update,update,update,updateBack,updateBatchById,updateBatchById,updateById, deprecated=false, dubbo=2.0.2, interface=com.lc.service.ITestInfoService, service-name-mapping=true, generic=false, application=cloud-provider-service, background=false, dynamic=true, anyhost=true},}, cause: org.apache.dubbo.remoting.RemotingException: io.netty.handler.codec.EncoderException: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
io.netty.handler.codec.EncoderException: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:125)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:881)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:863)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:304)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:879)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:863)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
at io.netty.channel.ChannelDuplexHandler.write(ChannelDuplexHandler.java:115)
at org.apache.dubbo.remoting.transport.netty4.NettyClientHandler.write(NettyClientHandler.java:88)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:879)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:940)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1247)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
at org.apache.dubbo.common.utils.DefaultSerializeClassChecker.loadClass0(DefaultSerializeClassChecker.java:165)
at org.apache.dubbo.common.utils.DefaultSerializeClassChecker.loadClass(DefaultSerializeClassChecker.java:104)
at org.apache.dubbo.common.serialize.hessian2.Hessian2SerializerFactory.getDefaultSerializer(Hessian2SerializerFactory.java:49)
at com.alibaba.com.caucho.hessian.io.SerializerFactory.getSerializer(SerializerFactory.java:393)
at com.alibaba.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:411)
at org.apache.dubbo.common.serialize.hessian2.Hessian2ObjectOutput.writeObject(Hessian2ObjectOutput.java:99)
at org.apache.dubbo.rpc.protocol.dubbo.DubboCodec.encodeRequestData(DubboCodec.java:208)
at org.apache.dubbo.remoting.exchange.codec.ExchangeCodec.encodeRequest(ExchangeCodec.java:261)
at org.apache.dubbo.remoting.exchange.codec.ExchangeCodec.encode(ExchangeCodec.java:75)
at org.apache.dubbo.rpc.protocol.dubbo.DubboCountCodec.encode(DubboCountCodec.java:47)
at org.apache.dubbo.remoting.transport.netty4.NettyCodecAdapter$InternalEncoder.encode(NettyCodecAdapter.java:69)
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
... 22 more
] with root cause
org.apache.dubbo.remoting.RemotingException: io.netty.handler.codec.EncoderException: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
io.netty.handler.codec.EncoderException: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:125)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:881)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:863)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
at io.netty.handler.timeout.IdleStateHandler.write(IdleStateHandler.java:304)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:879)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:863)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:968)
at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:856)
at io.netty.channel.ChannelDuplexHandler.write(ChannelDuplexHandler.java:115)
at org.apache.dubbo.remoting.transport.netty4.NettyClientHandler.write(NettyClientHandler.java:88)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:879)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:940)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1247)
at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalArgumentException: [Serialization Security] Serialized class org.apache.catalina.connector.RequestFacade is in disallow list. Current mode is WARN, will disallow to deserialize it by default. Please add it into security/serialize.allowlist or follow FAQ to configure it.
at org.apache.dubbo.common.utils.DefaultSerializeClassChecker.loadClass0(DefaultSerializeClassChecker.java:165)
at org.apache.dubbo.common.utils.DefaultSerializeClassChecker.loadClass(DefaultSerializeClassChecker.java:104)
at org.apache.dubbo.common.serialize.hessian2.Hessian2SerializerFactory.getDefaultSerializer(Hessian2SerializerFactory.java:49)
at com.alibaba.com.caucho.hessian.io.SerializerFactory.getSerializer(SerializerFactory.java:393)
at com.alibaba.com.caucho.hessian.io.Hessian2Output.writeObject(Hessian2Output.java:411)
at org.apache.dubbo.common.serialize.hessian2.Hessian2ObjectOutput.writeObject(Hessian2ObjectOutput.java:99)
at org.apache.dubbo.rpc.protocol.dubbo.DubboCodec.encodeRequestData(DubboCodec.java:208)
at org.apache.dubbo.remoting.exchange.codec.ExchangeCodec.encodeRequest(ExchangeCodec.java:261)
at org.apache.dubbo.remoting.exchange.codec.ExchangeCodec.encode(ExchangeCodec.java:75)
at org.apache.dubbo.rpc.protocol.dubbo.DubboCountCodec.encode(DubboCountCodec.java:47)
at org.apache.dubbo.remoting.transport.netty4.NettyCodecAdapter$InternalEncoder.encode(NettyCodecAdapter.java:69)
at io.netty.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:107)
... 22 more
at org.apache.dubbo.remoting.exchange.support.DefaultFuture.doReceived(DefaultFuture.java:224) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.exchange.support.DefaultFuture.received(DefaultFuture.java:186) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.exchange.support.DefaultFuture.received(DefaultFuture.java:174) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.handleResponse(HeaderExchangeHandler.java:62) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.exchange.support.header.HeaderExchangeHandler.received(HeaderExchangeHandler.java:183) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.transport.DecodeHandler.received(DecodeHandler.java:53) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.remoting.transport.dispatcher.ChannelEventRunnable.run(ChannelEventRunnable.java:62) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.common.threadpool.ThreadlessExecutor$RunnableWrapper.run(ThreadlessExecutor.java:184) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.common.threadpool.ThreadlessExecutor.waitAndDrain(ThreadlessExecutor.java:103) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.AsyncRpcResult.get(AsyncRpcResult.java:194) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.protocol.AbstractInvoker.waitForResultIfSync(AbstractInvoker.java:266) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.protocol.AbstractInvoker.invoke(AbstractInvoker.java:186) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.listener.ListenerInvokerWrapper.invoke(ListenerInvokerWrapper.java:71) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.protocol.ReferenceCountInvokerWrapper.invoke(ReferenceCountInvokerWrapper.java:78) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invokeWithContext(AbstractClusterInvoker.java:379) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.support.FailoverClusterInvoker.doInvoke(FailoverClusterInvoker.java:81) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.support.AbstractClusterInvoker.invoke(AbstractClusterInvoker.java:341) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.router.RouterSnapshotFilter.invoke(RouterSnapshotFilter.java:46) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.monitor.support.MonitorFilter.invoke(MonitorFilter.java:100) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.protocol.dubbo.filter.FutureFilter.invoke(FutureFilter.java:52) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.support.ConsumerClassLoaderFilter.invoke(ConsumerClassLoaderFilter.java:40) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.support.ConsumerContextFilter.invoke(ConsumerContextFilter.java:120) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CopyOfFilterChainNode.invoke(FilterChainBuilder.java:327) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.filter.FilterChainBuilder$CallbackRegistrationInvoker.invoke(FilterChainBuilder.java:194) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.support.wrapper.AbstractCluster$ClusterFilterInvoker.invoke(AbstractCluster.java:92) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.cluster.support.wrapper.MockClusterInvoker.invoke(MockClusterInvoker.java:103) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.registry.client.migration.MigrationInvoker.invoke(MigrationInvoker.java:282) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.proxy.InvocationUtil.invoke(InvocationUtil.java:57) ~[dubbo-3.1.8.jar:3.1.8]
at org.apache.dubbo.rpc.proxy.InvokerInvocationHandler.invoke(InvokerInvocationHandler.java:75) ~[dubbo-3.1.8.jar:3.1.8]
at com.lc.service.ITestInfoServiceDubboProxy10.add(ITestInfoServiceDubboProxy10.java) ~[classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_221]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_221]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_221]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_221]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344) ~[spring-aop-5.3.26.jar:5.3.26]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:208) ~[spring-aop-5.3.26.jar:5.3.26]
at com.sun.proxy.$Proxy149.add(Unknown Source) ~[na:na]
at com.lc.controller.TestInfoController.add(TestInfoController.java:52) ~[classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_221]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_221]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_221]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_221]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1072) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:965) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.26.jar:5.3.26]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:909) ~[spring-webmvc-5.3.26.jar:5.3.26]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:528) ~[tomcat-embed-core-9.0.73.jar:4.0.FR]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) ~[spring-webmvc-5.3.26.jar:5.3.26]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:596) ~[tomcat-embed-core-9.0.73.jar:4.0.FR]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:209) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:53) ~[tomcat-embed-websocket-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at com.github.xiaoymin.knife4j.spring.filter.SecurityBasicAuthFilter.doFilter(SecurityBasicAuthFilter.java:87) ~[knife4j-spring-3.0.3.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.springframework.web.filter.CorsFilter.doFilterInternal(CorsFilter.java:91) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.26.jar:5.3.26]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:100) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.26.jar:5.3.26]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.springframework.web.filter.FormContentFilter.doFilterInternal(FormContentFilter.java:93) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.26.jar:5.3.26]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:96) ~[spring-boot-actuator-2.7.10.jar:2.7.10]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.26.jar:5.3.26]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:201) ~[spring-web-5.3.26.jar:5.3.26]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:117) ~[spring-web-5.3.26.jar:5.3.26]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:178) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:153) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:167) ~[tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:90) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:492) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:130) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:93) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:74) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:343) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:389) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:63) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:926) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1791) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.util.threads.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1191) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.util.threads.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:659) [tomcat-embed-core-9.0.73.jar:9.0.73]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-9.0.73.jar:9.0.73]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_221]
I also went through the documentation inside Dubbo, but it only mentions declaring the class names you use in the security/serialize.allowlist resource file so that Dubbo loads them into the allow list automatically. It does not show concretely how to do this, so I am quite confused.
Create a security/serialize.allowlist file under the resources directory and fill in the classes you need to serialize.
I tried creating the file in several ways (screenshots omitted).
After restarting I still get the same error. The value I filled into the file is com.lc.entity.TestInfo.
Please tell me how to set this up; it has me really confused.
Create a security/serialize.allowlist file under the resources directory and fill in the classes you need to serialize.
I added it,
but it still reports the same error. Did I get my dependencies wrong? This thing has me completely confused.
Or is there a demo that has already solved this problem? I would use it as a reference.
resource/security/serialize.allowlist, like this: create a folder named security under resources, then create a serialize.allowlist file inside the security folder.
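For reference, a minimal sketch of the layout being described in this comment; the class name is the one the reporter mentions in this thread and is only an example:
src/main/resources/
    security/
        serialize.allowlist
with the file containing one fully qualified class name per line, for example:
com.lc.entity.TestInfo
Any other concrete types carried inside the parameter object may need their own entries as well.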
I recreated these files. If this is how it should be done, it does not seem to take effect?
Does your log contain lines that begin with this: Read serialize allow list from
Does your log contain lines that begin with this: Read serialize allow list from
Yes. After startup there are six entries in total that begin with Read serialize allow list from,
2023-04-04 22:11:15.528 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from file:/E:/workspace/new/cloud-immigrant/lc-consumer/lc-consumer-mobileProgram/target/classes/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
2023-04-04 22:11:15.529 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/E:/development/Maven/newmaven/org/apache/dubbo/dubbo/3.1.8/dubbo-3.1.8.jar!/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
2023-04-04 22:11:15.588 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from file:/E:/workspace/new/cloud-immigrant/lc-consumer/lc-consumer-mobileProgram/target/classes/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
2023-04-04 22:11:15.588 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/E:/development/Maven/newmaven/org/apache/dubbo/dubbo/3.1.8/dubbo-3.1.8.jar!/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
2023-04-04 22:11:15.598 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from file:/E:/workspace/new/cloud-immigrant/lc-consumer/lc-consumer-mobileProgram/target/classes/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
2023-04-04 22:11:15.598 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from jar:file:/E:/development/Maven/newmaven/org/apache/dubbo/dubbo/3.1.8/dubbo-3.1.8.jar!/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
From this it looks like the allow list did take effect, but I do not understand why the error is still reported.
2023-04-04 22:11:15.528 INFO 19992 --- [ main] o.a.d.c.u.SerializeSecurityConfigurator : [DUBBO] Read serialize allow list from file:/E:/workspace/new/cloud-immigrant/lc-consumer/lc-consumer-mobileProgram/target/classes/security/serialize.allowlist, dubbo version: 3.1.8, current host: 192.168.31.50
The serialization check in the 3.1.x versions does not cause invocation failures. Please paste the full error stack.
@AlbumenJ Same question here. How is this solved? I still hit the problem on 3.2.2. The cause is that I used a parent class and a child class, and the child class then fails to deserialize and is reported as unsafe.
https://cn.dubbo.apache.org/zh-cn/overview/mannual/java-sdk/advanced-features-and-usage/security/class-check/
See https://github.com/apache/dubbo/issues/13381; the suggestion there is to add the Java argument -Ddubbo.application.serialize-check-status=WARN
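A sketch of how that switch might be applied; the jar name is a placeholder, and the Spring Boot property form is my assumption based on how Dubbo ApplicationConfig properties are normally bound:
# as a JVM argument on the consumer launch command (placeholder jar name)
java -Ddubbo.application.serialize-check-status=WARN -jar consumer-app.jar
# or, assuming the property binding is available in your Dubbo version, in application.yml
dubbo:
  application:
    serialize-check-status: WARN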
| gharchive/issue | 2023-04-04T09:04:01 | 2025-04-01T06:37:52.747746 | {
"authors": [
"AlbumenJ",
"CoCoYuYuan",
"liufeiyu1002",
"penghcn",
"pinker-god"
],
"repo": "apache/dubbo",
"url": "https://github.com/apache/dubbo/issues/12014",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
443306099 | @Service.methods does not take effect
With @Service(methods = {@Method(name = "sayHello", retries = 2)}), when the class is loaded through the annotation the methods attribute is not loaded.
In the ServiceAnnotationBeanPostProcessor.buildServiceBeanDefinition method, one statement needs to be added to initialize the annotation object into the ServiceBean instance: builder.addConstructorArgValue(service);
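A rough sketch of where that statement would go; the surrounding method body is abbreviated and paraphrased rather than copied from the Dubbo source:
private AbstractBeanDefinition buildServiceBeanDefinition(Service service, Class<?> interfaceClass,
                                                          String annotatedServiceBeanName) {
    BeanDefinitionBuilder builder = BeanDefinitionBuilder.rootBeanDefinition(ServiceBean.class);
    // ... existing code that populates interface, ref and the other properties ...
    // Proposed addition: pass the @Service annotation itself into ServiceBean's constructor
    // so that attributes such as methods = {@Method(...)} are carried over to the bean.
    builder.addConstructorArgValue(service);
    return builder.getBeanDefinition();
}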
I have tried the @Method annotation and it works for me. Can you provide a demo to reproduce this issue?
Feel free to reopen it if you can still reproduce this with the latest version, 2.7.4-SNAPSHOT.
| gharchive/issue | 2019-05-13T09:51:48 | 2025-04-01T06:37:52.751266 | {
"authors": [
"Mark-WJQ",
"beiwei30",
"kexianjun"
],
"repo": "apache/dubbo",
"url": "https://github.com/apache/dubbo/issues/4043",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1057119697 | try not to instantiate tool classes
What is the purpose of the change
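As a rough illustration of the pattern the title refers to, a hypothetical utility class that is guarded against instantiation; this is not code taken from the PR itself:
public final class ExampleUtils {
    private ExampleUtils() {
        // static helpers only; constructing this class is always a programming error
        throw new UnsupportedOperationException("No instances of ExampleUtils");
    }

    public static boolean isBlank(String value) {
        return value == null || value.trim().isEmpty();
    }
}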
Brief changelog
Verifying this change
Checklist
[x] Make sure there is a GitHub_issue field for the change (usually before you start working on it). Trivial changes like typos do not require a GitHub issue. Your pull request should address just this issue, without pulling in other changes - one PR resolves one issue.
[ ] Each commit in the pull request should have a meaningful subject line and body.
[ ] Write a pull request description that is detailed enough to understand what the pull request does, how, and why.
[ ] Check if is necessary to patch to Dubbo 3 if you are work on Dubbo 2.7
[ ] Write necessary unit-test to verify your logic correction, more mock a little better when cross module dependency exist. If the new feature or significant change is committed, please remember to add sample in dubbo samples project.
[ ] Add some description to dubbo-website project if you are requesting to add a feature.
[ ] GitHub Actions works fine on your own branch.
[ ] If this contribution is large, please follow the Software Donation Guide.
What is the purpose of this PR?
Codecov Report
Merging #9297 (170bef0) into 3.0 (3469842) will decrease coverage by 0.09%.
The diff coverage is 0.00%.
@@ Coverage Diff @@
## 3.0 #9297 +/- ##
============================================
- Coverage 64.69% 64.60% -0.10%
- Complexity 328 329 +1
============================================
Files 1206 1206
Lines 51849 51849
Branches 7717 7692 -25
============================================
- Hits 33544 33497 -47
- Misses 14688 14725 +37
- Partials 3617 3627 +10
Impacted Files | Coverage Δ
...dubbo/config/spring/util/DubboAnnotationUtils.java | 48.27% <0.00%> (ø)
...he/dubbo/config/spring/util/SpringCompatUtils.java | 28.26% <0.00%> (ø)
...dubbo/common/status/support/LoadStatusChecker.java | 46.15% <0.00%> (-15.39%) :arrow_down:
...ache/dubbo/remoting/transport/AbstractChannel.java | 75.00% <0.00%> (-12.50%) :arrow_down:
...ian2/dubbo/AbstractHessian2FactoryInitializer.java | 50.00% <0.00%> (-11.12%) :arrow_down:
.../apache/dubbo/remoting/transport/AbstractPeer.java | 63.04% <0.00%> (-8.70%) :arrow_down:
.../common/threadpool/serial/SerializingExecutor.java | 70.37% <0.00%> (-7.41%) :arrow_down:
...ng/transport/dispatcher/all/AllChannelHandler.java | 62.06% <0.00%> (-6.90%) :arrow_down:
.../org/apache/dubbo/rpc/protocol/tri/WriteQueue.java | 68.75% <0.00%> (-6.25%) :arrow_down:
...pache/dubbo/remoting/transport/AbstractServer.java | 57.14% <0.00%> (-4.29%) :arrow_down:
... and 25 more
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3469842...170bef0. Read the comment docs.
| gharchive/pull-request | 2021-11-18T09:20:20 | 2025-04-01T06:37:52.773785 | {
"authors": [
"AlbumenJ",
"LMDreamFree",
"codecov-commenter"
],
"repo": "apache/dubbo",
"url": "https://github.com/apache/dubbo/pull/9297",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
854345725 | Setting a gradient as the background color of the axisPointer label on the xAxis of a line chart causes an error in the _renderBackground function
Version
5.0.2
Reproduction link
https://jsfiddle.net/cangshudada/gue7wp9j/1/
Steps to reproduce
Move the mouse into the line chart to trigger the tooltip; the error can then be reproduced.
What is expected?
The background color of the axisPointer label can be set to a gradient.
What is actually happening?
In practice the function throws an error.
This problem does not occur in v4, and color formats given as normal strings do not cause any problem.
The main cause of the error is that execution enters both the non-gradient and the gradient branches, and the two key variables used to obtain the style are both undefined: both a and s in the code below are undefined, which leads to the error.
var v = (a || s).style;
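For reference, a minimal option sketch (reconstructed, not the original fiddle) that should exercise this path; the gradient object on xAxis.axisPointer.label.backgroundColor is the assumed trigger:

option = {
  tooltip: { trigger: 'axis' },
  xAxis: {
    type: 'category',
    data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri'],
    axisPointer: {
      label: {
        show: true,
        // Gradient object instead of a plain color string: assumed to hit _renderBackground.
        backgroundColor: {
          type: 'linear',
          x: 0, y: 0, x2: 0, y2: 1,
          colorStops: [
            { offset: 0, color: '#83bff6' },
            { offset: 1, color: '#188df0' }
          ]
        }
      }
    }
  },
  yAxis: { type: 'value' },
  series: [{ type: 'line', data: [120, 200, 150, 80, 70] }]
};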
Reproduction link
https://jsfiddle.net/cangshudada/0r9zy8av/2/
This is the normal, expected result, using the v4 version!
| gharchive/issue | 2021-04-09T09:24:17 | 2025-04-01T06:37:52.777926 | {
"authors": [
"cangshudada"
],
"repo": "apache/echarts",
"url": "https://github.com/apache/echarts/issues/14635",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1195299338 | [Feature] Ability to add icons/images in chart title.
What problem does this feature solve?
Current API doesn't support adding icons/images to chart title.
What does the proposed API look like?
title:{
image:"imagepath"
}
You can add images to title with textStyle.rich. Please refer to documentation and examples
Here's an example I made for you.
Code sample
var ROOT_PATH =
'https://cdn.jsdelivr.net/gh/apache/echarts-website@asf-site/examples';
const Sunny = ROOT_PATH + '/data/asset/img/weather/sunny_128.png';
option = {
xAxis: {
type: 'category',
data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
},
yAxis: {
type: 'value'
},
title: {
text: '{a|}Sunny Day!',
textStyle: {
color: 'black',
rich: {
a: {
backgroundColor: {
image: Sunny
},
height: 40,
width: 50
}
}
}
},
series: [
{
data: [120, 200, 150, 80, 70, 110, 130],
type: 'bar',
showBackground: true,
backgroundStyle: {
color: 'rgba(180, 180, 180, 0.2)'
}
}
]
};
Thank you!
| gharchive/issue | 2022-04-06T23:24:50 | 2025-04-01T06:37:52.780779 | {
"authors": [
"Maneesh43",
"jiawulin001"
],
"repo": "apache/echarts",
"url": "https://github.com/apache/echarts/issues/16843",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1597453222 | [Bug] Overflow truncate breaks axis label click target position
Version
5.4.1
Link to Minimal Reproduction
No response
Steps to Reproduce
Paste the following in the echarts example:
option = {
xAxis: {
axisLabel: {
// BUG: Overflow: "truncate" breaks the click target on axis label
overflow: 'truncate',
width: 80
},
triggerEvent: true,
type: 'category',
data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
},
yAxis: {
axisLabel: {
// BUG: Overflow: "truncate" breaks the click target on axis label
overflow: 'truncate',
width: 80
},
triggerEvent: true,
type: 'value'
},
series: [
{
data: [120, 200, 150, 80, 70, 110, 130],
type: 'bar',
showBackground: true,
backgroundStyle: {
color: 'rgba(180, 180, 180, 0.2)'
}
}
]
};
Current Behavior
Hovering the mouse over an axis label should produce a click target aligned with the axis label text, but it is currently offset; the actual click target is where the mouse pointer is hovering in the screenshot:
Expected Behavior
Click target is aligned with axis label text.
Environment
- OS:macOS Monterey
- Browser: Chrome 109
- Framework:
Any additional comments?
Looks to be related to https://github.com/apache/echarts/issues/17343 possibly?
This seems to be a bug. If you are interested in making a pull request, it can help you fix this problem quicker. Please checkout the wiki to learn more.
Is there any fix planned for this issue? Experiencing the same behavior, click event is not aligned with the text.
I added an empty rich: {} to the axisLabel and everything worked as expected
I added an empty rich: {} to the axisLabel and everything worked as expected
Awesome!
I added an empty rich: {} to the axisLabel and everything worked as expected
This hack worked for me.
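For reference, a hedged sketch of that workaround applied to the reproduction option above; not verified here, and the empty rich object is the only intentional change:

option = {
  xAxis: {
    type: 'category',
    data: ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun'],
    triggerEvent: true,
    axisLabel: {
      overflow: 'truncate',
      width: 80,
      rich: {} // empty rich object: reported to realign the click target with the label text
    }
  },
  yAxis: {
    type: 'value',
    triggerEvent: true,
    axisLabel: { overflow: 'truncate', width: 80, rich: {} }
  },
  series: [{ type: 'bar', data: [120, 200, 150, 80, 70, 110, 130] }]
};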
| gharchive/issue | 2023-02-23T19:59:41 | 2025-04-01T06:37:52.786148 | {
"authors": [
"Ovilia",
"amit-unravel",
"gl260",
"hanshupe007",
"ianschmitz",
"psychopathh"
],
"repo": "apache/echarts",
"url": "https://github.com/apache/echarts/issues/18306",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2025336731 | [ISSUE #4602] when wechat send message api response errcode is not zero, the wechat sink connector does not throw IllegalAccessException
Fixes #4602
Motivation
Fix the bug when the WeChat API response indicates failure.
Modifications
rename org.apache.eventmesh.connector.wechat.sink.connector.TemplateMessageResponse#errocode
to org.apache.eventmesh.connector.wechat.sink.connector.TemplateMessageResponse#errcode
add abnormal test case
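For illustration, a rough sketch of the check the rename above restores; the accessor names are assumptions and the real connector code may differ:

// Sketch only: with the field spelled "errocode", the JSON key "errcode" was never mapped,
// so the value stayed 0 and failed sends were silently accepted.
private void checkTemplateMessageResponse(TemplateMessageResponse response) throws IllegalAccessException {
    if (response.getErrcode() != 0) {
        throw new IllegalAccessException(
                "WeChat send message API failed, errcode=" + response.getErrcode());
    }
}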
You can add this Sink Connector to the list in this document. https://github.com/apache/eventmesh/tree/master/eventmesh-connectors#connector-status
already done, please review
Finally, would you be willing to write a document for your connector (#4601 (comment)) to help users understand and use it? If you are willing, you can do it in this PR or in a new PR in the future.
I prefer to do this in a new PR.
@pandaapo Hello, I found that some commits in this PR used the wrong email, so I want to close this PR and create a new one.
| gharchive/pull-request | 2023-12-05T05:49:25 | 2025-04-01T06:37:52.790725 | {
"authors": [
"wizardzhang"
],
"repo": "apache/eventmesh",
"url": "https://github.com/apache/eventmesh/pull/4603",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2356233712 | [FLINK-35623] Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
Bump mongo-driver version from 4.7.2 to 5.1.1 to support MongoDB 7.0
https://www.mongodb.com/docs/drivers/java/sync/current/compatibility/
Hi @GOODBOY008, could you help review this?
Hi @yux, could you help review this?
| gharchive/pull-request | 2024-06-17T02:54:32 | 2025-04-01T06:37:52.794185 | {
"authors": [
"Jiabao-Sun"
],
"repo": "apache/flink-connector-mongodb",
"url": "https://github.com/apache/flink-connector-mongodb/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
527031733 | [FLINK-14924][table sql / api] CsvTableSource can not config empty column as null
CsvTableSource cannot be configured to treat empty columns as null.
What is the purpose of the change
This pull request adds an option to treat empty columns as null for CsvTableSource.
Brief change log
update file org.apache.flink.table.sources.CsvTableSource.java
Verifying this change
Added ITCases to test CsvTableSource:
org.apache.flink.table.runtime.batch.sql.TableSourceITCase.scala
org.apache.flink.table.runtime.utils.CommonTestData.scala
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): ( no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
I looked up the CSV descriptors: we have two, named Csv and OldCsv.
Only OldCsv uses CsvTableSource, and OldCsv is deprecated; Csv has another mechanism, RuntimeConverter, and supports a null-literal property that interprets a literal string (e.g. "null" or "N/A") as a null value.
So I think we do not need to add this feature now.
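For reference, a hedged sketch of that newer Csv descriptor path (Flink 1.10-era API, since removed); the path, schema, and exact fluent calls are illustrative and assume a TableEnvironment named tableEnv:

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.descriptors.Csv;
import org.apache.flink.table.descriptors.FileSystem;
import org.apache.flink.table.descriptors.Schema;

// Sketch only: register a CSV source whose null literal is the empty string.
tableEnv.connect(new FileSystem().path("/tmp/input.csv"))   // placeholder path
        .withFormat(new Csv()
                .fieldDelimiter(',')
                .nullLiteral(""))                            // treat empty strings as NULL
        .withSchema(new Schema()
                .field("id", DataTypes.INT())
                .field("name", DataTypes.STRING()))
        .createTemporaryTable("csv_source");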
@KurtYoung
I updated the PR; could you take another look?
| gharchive/pull-request | 2019-11-22T07:21:22 | 2025-04-01T06:37:52.799983 | {
"authors": [
"leonardBang"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/10289",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
625508259 | [FLINK-17572][runtime] Remove checkpoint alignment buffered metric from webui
What is the purpose of the change
Since we now avoid caching buffers for blocked input channels before barrier alignment, the runtime never caches buffers during checkpoint barrier alignment, so the checkpoint alignment buffered metric would always be 0. It should therefore be removed directly from CheckpointStatistics, CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics.
Brief change log
Remove alignmentBuffered attribute in CheckpointStatistics , CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics
Remove alignment_buffered in Checkpoint Detail from job-checkpoints.component.html.
Remove alignment_buffered column in document of /jobs/:jobid/checkpoints rest interface.
Verifying this change
Modified the test objects created by CheckpointStatistics, CheckpointingStatistics, TaskCheckpointStatistics, TaskCheckpointStatisticsWithSubtaskDetails and SubtaskCheckpointStatistics in CheckpointingStatisticsTest, TaskCheckpointStatisticsTest and TaskCheckpointStatisticsWithSubtaskDetailsTest.
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
This breaks the REST API contracts; see the JIRA for details.
| gharchive/pull-request | 2020-05-27T08:54:07 | 2025-04-01T06:37:52.807431 | {
"authors": [
"SteNicholas",
"zentol"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/12354",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
633444633 | [FLINK-18063][checkpointing] Fix the race condition for aborting current checkpoint in CheckpointBarrierUnaligner
What is the purpose of the change
There are three aborting scenarios which might encounter race condition:
1. CheckpointBarrierUnaligner#processCancellationBarrier
2. CheckpointBarrierUnaligner#processEndOfPartition
3. AlternatingCheckpointBarrierHandler#processBarrier
They only consider aborting a pending checkpoint triggered by #processBarrier from the task thread. However, the checkpoint might also be triggered by #notifyBarrierReceived from the netty thread in a race condition, so we should also handle aborting that case properly.
Brief change log
Fix the process of AlternatingCheckpointBarrierHandler#processBarrier
Fix the process of CheckpointBarrierUnaligner#processEndOfPartition to abort checkpoint properly
Fix the process of CheckpointBarrierUnaligner#processCancellationBarrier to abort checkpoint properly
Verifying this change
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierAfterNotifyBarrierReceived
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierAfterProcessBarrier
Added new unit test CheckpointBarrierUnalignerTest#testProcessCancellationBarrierBeforeProcessAndReceiveBarrier
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (yes / no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (yes / no)
The serializers: (yes / no / don't know)
The runtime per-record code paths (performance sensitive): (yes / no / don't know)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know)
The S3 file system connector: (yes / no / don't know)
Documentation
Does this pull request introduce a new feature? (yes / no)
If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
Cherry-picked to master from #12406, which was reviewed and approved before.
The failing e2e test is the known StreamingKafkaITCase issue, so it can be ignored for merging.
| gharchive/pull-request | 2020-06-07T14:06:38 | 2025-04-01T06:37:52.814486 | {
"authors": [
"zhijiangW"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/12511",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
928988649 | [FLINK-20518][rest] Add decoding characters for MessageQueryParameter
What is the purpose of the change
Add decoding characters for rest service
Brief change log
Add decoding characters for rest service
Verifying this change
no
Does this pull request potentially affect one of the following parts:
Dependencies (does it add or upgrade a dependency): (no)
The public API, i.e., is any changed class annotated with @Public(Evolving): (no)
The serializers: (no)
The runtime per-record code paths (performance sensitive): (no)
Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
The S3 file system connector: (no)
Documentation
Does this pull request introduce a new feature? (no)
If yes, how is the feature documented? (not documented)
@zentol please take a look, if you have time, thanks
As I said in the ticket, the server side decodes characters just fine. The UI isn't encoding special characters; that's the issue.
As I said in the ticket, the server side decodes characters just fine. The UI isn't encoding special characters; that's the issue.
Maybe I didn't understand what you mean correctly?
Do you think that special characters should be encoded on the UI side?
Instead of decoding it on the server again?
Do you think that special characters should be encoded on the UI side? Instead of decoding it on the server again?
yes and yes. I did a manual check when reviewing https://github.com/apache/flink/pull/13514 and confirmed that our REST API handles escaped special characters fine. So this is purely a UI issue.
Do you think that special characters should be encoded on the UI side? Instead of decoding it on the server again?
yes and yes. I did a manual check when reviewing #13514 and confirmed that our REST API handles escaped special characters fine. So this is purely a UI issue.
@zentol
I added some logging on the UI and the REST server, and found:
the UI sends 0.GroupWindowAggregate(window=[TumblingGroupWindow(%27w$__rowtime__60000)]__properti.watermarkLatency
but the REST server receives 0.GroupWindowAggregate(window=%5BTumblingGroupWindow(%252527w$__rowtime__60000)%5D__properti.watermarkLatency in RouterHandler, which has been encoded 3 times,
while QueryStringDecoder decodes only once, so this is what happens.
Is there anything else you would like me to investigate?
All I know is that if you send a request, with encoding applied once, things work fine.
All I know is that if you send a request, with encoding applied once, things work fine.
@zentol I found the root cause! We are using flink on yarn, RM will escape the special characters again!!
So what is your suggestion to solve this problem?
We are using flink on yarn, RM will escape the special characters again!!
YARN making things complicated again... 😢
Earlier you said that the requests are encoded 3 times; The UI does it once (does it?), and the RM does it (I assume) once. Any idea where the third one comes from?
So what is your suggestion to solve this problem?
hmm...it seems a bit arbitrary to stack a fixed number of decode calls; what if yet another middle-layer gets added between the UI and rest API, things could break at any time. Are there any downsides to decoding too often? As in, we loop the decoding until nothing changes anymore (although that also feels just wrong...).
Earlier you said that the requests are encoded 3 times; The UI does it once (does it?), and the RM does it (I assume) once. Any idea where the third one comes from?
I don’t know how many encodes will be done in RM.
The 3 times mentioned before are based on the need to decode 3 times.
hmm...it seems a bit arbitrary to stack a fixed number of decode calls; what if yet another middle-layer gets added between the UI and rest API, things could break at any time. Are there any downsides to decoding too often? As in, we loop the decoding until nothing changes anymore (although that also feels just wrong...).
We are like #13514?
Handle special characters single quotes.
Earlier you said that the requests are encoded 3 times; The UI does it once (does it?), and the RM does it (I assume) once. Any idea where the third one comes from?
I don’t know how many encodes will be done in RM.
The 3 times mentioned before are based on the need to decode 3 times.
hmm...it seems a bit arbitrary to stack a fixed number of decode calls; what if yet another middle-layer gets added between the UI and rest API, things could break at any time. Are there any downsides to decoding too often? As in, we loop the decoding until nothing changes anymore (although that also feels just wrong...).
We are like #13514?
Handle special characters single quotes.
@zentol What do you think of this?
We are like #13514?
Handle special characters single quotes.
I don't understand what you are asking/suggesting, please elaborate.
We are like #13514?
Handle special characters single quotes.
I don't understand what you are asking/suggesting, please elaborate.
@zentol We add the handling of single quotes in MetricQueryService#replaceInvalidChars to avoid single quotes;
Decode multiple times, not the best solution.
Decode multiple times, not the best solution.
I agree, but I have outlined in #13514 why replacing more characters is not a good option as well.
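For reference, a minimal sketch of the "decode until nothing changes" idea discussed above; this is not what the PR implements, and repeated decoding can mangle values that legitimately contain percent signs:

import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;

public final class RepeatedDecodeSketch {
    // Decode until the value stops changing, to undo an unknown number of encoding layers.
    static String decodeFully(String value) {
        String current = value;
        while (true) {
            String decoded = URLDecoder.decode(current, StandardCharsets.UTF_8); // Java 10+ overload
            if (decoded.equals(current)) {
                return decoded; // stable: no further percent-escapes were resolved
            }
            current = decoded;
        }
    }

    public static void main(String[] args) {
        // A metric name percent-encoded twice, e.g. once by the UI and once by an extra proxy layer.
        System.out.println(decodeFully("GroupWindowAggregate(window%253D%255BTumbling%255D)"));
    }
}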
| gharchive/pull-request | 2021-06-24T08:39:15 | 2025-04-01T06:37:52.831737 | {
"authors": [
"Tartarus0zm",
"zentol"
],
"repo": "apache/flink",
"url": "https://github.com/apache/flink/pull/16275",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |