id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
823152629
|
AssertionError: The given network_url 'ganache' ...
Describe the bug
If I use the default config.ini with barge, it complains with the traceback below.
To Reproduce
Do steps 1-3 in developers.md (Install dependencies, Run the services, Set up contracts)
Then open a python console and try the following. The last line will fail.
import os
from ocean_lib.config import Config
from ocean_lib.ocean.ocean import Ocean
from ocean_lib.web3_internal.wallet import Wallet
private_key = os.getenv('TEST_PRIVATE_KEY1')
config = Config('config.ini')
ocean = Ocean(config)
Note: all pytest tests do pass.
Expected behavior
The last line should not fail, while all pytest tests still pass.
Config.ini contents
For reference, here is part of the config.ini file:
[eth-network]
network = 'ganache'
artifacts.path = ...
Traceback
>>> ocean = Ocean(config)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/trentmc/code/ocean.py/ocean_lib/ocean/ocean.py", line 100, in __init__
provider=get_web3_connection_provider(self._config.network_url)
File "/home/trentmc/code/ocean.py/ocean_lib/ocean/util.py", line 79, in get_web3_connection_provider
assert network_url in SUPPORTED_NETWORK_NAMES, (
AssertionError: The given network_url *'ganache'* does not start with either `http` or `wss`, in this case a network name is expected and must be one of the supported networks {'kovan', 'ropsten', 'mainnet', 'rinkeby', 'ganache'}.
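The quotes in the config are the likely culprit: Python's configparser returns values verbatim, so the value read from network = 'ganache' is 'ganache' with the quotes included, which fails the membership test even though ganache is listed as supported. A minimal standalone sketch of that behaviour (not ocean.py's actual code):
import configparser

parser = configparser.ConfigParser()
parser.read_string("[eth-network]\nnetwork = 'ganache'\n")

network = parser["eth-network"]["network"]
print(repr(network))                      # "'ganache'" -- quotes are kept
print(network in {"ganache", "mainnet"})  # False
print(network.strip("'\"") in {"ganache", "mainnet"})  # True once stripped
Dropping the quotes in config.ini (network = ganache) should make the membership check pass.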
I had that issue, but it wasn't a code issue, it was a config issue. Unfortunately, I can't remember how I solved it now.
No worries, I have a quick solution too, I'm just documenting thoroughly. You'll see.
|
gharchive/issue
| 2021-03-05T14:42:32 |
2025-04-01T04:35:16.662841
|
{
"authors": [
"kenbodnar",
"trentmc"
],
"repo": "oceanprotocol/ocean.py",
"url": "https://github.com/oceanprotocol/ocean.py/issues/203",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1173614114
|
Issue 755 Replace Lena
Fixes #755.
Changes proposed in this PR:
Replace lena with peppers
The GitHub checks are expected to fail until https://github.com/oceanprotocol/c2d-examples/pull/2 is merged.
@DMats I cherry-picked your original commit in https://github.com/oceanprotocol/ocean.py/pull/761, which is properly passing. Let's merge that in instead. I don't know why, but somehow merging v4main is broken.
|
gharchive/pull-request
| 2022-03-18T13:49:18 |
2025-04-01T04:35:16.665192
|
{
"authors": [
"DMats",
"calina-c"
],
"repo": "oceanprotocol/ocean.py",
"url": "https://github.com/oceanprotocol/ocean.py/pull/756",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1964475874
|
README suggestions
These are just some suggestions that may help.
Thank you :-)
|
gharchive/pull-request
| 2023-10-26T22:43:34 |
2025-04-01T04:35:16.711395
|
{
"authors": [
"ochorocho",
"rfay"
],
"repo": "ochorocho/ddev-rabbitmq",
"url": "https://github.com/ochorocho/ddev-rabbitmq/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1055176605
|
docs: messaging architecture
Adding architecture docs to describe messaging properties and building blocks
Checks
[ ] All commits in this Pull Request are signed.
[ ] All commits in this Pull Request follow the Ockam commit message convention.
[ ] I accept the Ockam Community Code of Conduct.
[ ] I have accepted the Ockam Contributor Licence Agreement by adding my Git/Github details in a row at the end of the CONTRIBUTORS.csv file in a separate pull request to the ockam-network/contributors repository.
@dvermd Thanks for the review.
|
gharchive/pull-request
| 2021-11-16T18:10:13 |
2025-04-01T04:35:16.714661
|
{
"authors": [
"hairyhum"
],
"repo": "ockam-network/ockam",
"url": "https://github.com/ockam-network/ockam/pull/2216",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
57179562
|
Controller got undefined
Hi, sorry to bother you, and I hope it's not a silly question.
I have the following example:
XController.js
angular.module('XModule', ['oc.lazyLoad'])
  .controller('XController', function ($scope, $window, $ocLazyLoad) {
    $ocLazyLoad.load({
      name: "YModule",
      files: ["/path/to/controller/YController.js"]
    }).then(function () {
      // Then just resolve the promise.
    }, function (e) {
      console.log(e);
    });
    ...
  });
YController.js
angular.module('YModule', ['oc.lazyLoad'])
  .controller('YController', function ($scope, $window, $ocLazyLoad) {
    ...
  });
HTML:
<script src="/path/to/app.js">
<script src="/path/to/XController.js">
<!-- This one works good -->
<div ng-controller="XController">
<!-- This one called by $ocLazyLoad gets undefined -->
<div ng-controller="YController">
</div>
</div>
The YController gets loaded successfully, but gets undefined?
Thank you.
cc @ocombe
If the div <div ng-controller="YController"> is already in the dom before you lazy load the file, you need to either use the directive oc-lazy-load to wrap the bloc, or use an ng-if or ng-include that will create this part of the dom after the controller has been lazy loaded.
@ocombe
I've now included through directive:
<div ng-controller="YController" oc-lazy-load="{name: 'YModule', files: ['/path/to/controller/YController.js']}">...
Now the controller doesn't get undefined, but the problem is that none of the functions defined in that controller work:
angular.module('YModule', ['oc.lazyLoad'])
  .controller('YController', function ($scope, $window, $ocLazyLoad) {
    $scope.test_func = function () {
      alert("test");
    };
  });
<div ng-controller="YController" oc-lazy-load="{name: 'YModule', files: ['/path/to/controller/YController.js']}">
<button ng-click="test_func()">Test</button>
</div>
Sorry for bothering again :D
the oc-lazy-load directive should be defined in a div around the ng-controller one :)
<div oc-lazy-load="{name: 'YModule', files: ['/path/to/controller/YController.js']}">
<div ng-controller="YController">...</div>
</div>
Uhh, sorry, my bad, everything works fine now :D
Thank you so much.
No problem, have fun :)
@ocombe Sorry for bothering again, just one more final question: in this particular case, is there any way of lazy loading the files from a JS file instead?
Yes, you will have to use $compile (check the angular doc) to parse the html that already includes the directives you just lazy loaded.
You can also use ng-include with a partial to only include that part of DOM when the files have been lazy loaded.
Thanks a lot.
|
gharchive/issue
| 2015-02-10T14:15:00 |
2025-04-01T04:35:16.726336
|
{
"authors": [
"dud3",
"ocombe"
],
"repo": "ocombe/ocLazyLoad",
"url": "https://github.com/ocombe/ocLazyLoad/issues/130",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
55971371
|
New release for rust-nightly
0.1.2 fails to build on rust-nightly:
`src/libc.rs` 124:27 error: `cr` does not live long enough
input = from_c_str(&cr);
Fixed in 0.1.3 fba6055938497def4edc0c403668a9f1a24ccf39
|
gharchive/issue
| 2015-01-29T23:08:41 |
2025-04-01T04:35:16.787917
|
{
"authors": [
"NewbiZ",
"octplane"
],
"repo": "octplane/rust-linenoise",
"url": "https://github.com/octplane/rust-linenoise/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1315804457
|
🛑 ISSA (issa.edificarseguros.com.ar) is down
In d328702, ISSA (issa.edificarseguros.com.ar) (https://issa.edificarseguros.com.ar/api/health) was down:
HTTP code: 503
Response time: 1309 ms
Resolved: ISSA (issa.edificarseguros.com.ar) is back up in 99d54ba.
|
gharchive/issue
| 2022-07-24T04:19:00 |
2025-04-01T04:35:16.791198
|
{
"authors": [
"apps-suterh"
],
"repo": "octubre-softlab/octubre-upptime",
"url": "https://github.com/octubre-softlab/octubre-upptime/issues/637",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1428077456
|
🛑 ISSA (issa.edificarseguros.com.ar) is down
In 294e1dd, ISSA (issa.edificarseguros.com.ar) (https://issa.edificarseguros.com.ar/api/health) was down:
HTTP code: 503
Response time: 650 ms
Resolved: ISSA (issa.edificarseguros.com.ar) is back up in c360758.
|
gharchive/issue
| 2022-10-29T04:22:37 |
2025-04-01T04:35:16.794250
|
{
"authors": [
"apps-suterh"
],
"repo": "octubre-softlab/octubre-upptime",
"url": "https://github.com/octubre-softlab/octubre-upptime/issues/892",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1382146574
|
Spike - How to populate the new tables
As described in #531 and #532, we will introduce a new database for the view. This issue is opened to discuss the last step about how we want to populate the database. Basically, we have three options:
1. Start populating the database but don't use it in the view until we have enough data to use it safely. We display a 404 if we don't find the data in it when we deploy the view.
2. Start populating the database and fall back to the old queries on cache and ci_build_index when the first query doesn't succeed (the best option IMO).
3. Fill the database by querying the cache and ci_build_index, if it exists, with the data we already have. This one is the riskiest.
We have populated the new ci_build_summary table at the point where Index.record is called.
|
gharchive/issue
| 2022-09-22T09:39:24 |
2025-04-01T04:35:16.796410
|
{
"authors": [
"maiste",
"novemberkilo"
],
"repo": "ocurrent/ocaml-ci",
"url": "https://github.com/ocurrent/ocaml-ci/issues/533",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
680595636
|
Clear redis on server start
And some incidental upstream updates.
Chore: Add redis FLUSHALL on dev server startup.
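For reference, a minimal sketch of what such a startup hook can look like with the redis-py client (illustrative only, not MetaDeploy's actual code; the REDIS_URL variable name is an assumption):
import os
import redis

def flush_redis_on_startup():
    # Dev-only: wipe all Redis keys so stale state never leaks between runs.
    client = redis.Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379/0"))
    client.flushall()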
|
gharchive/pull-request
| 2020-08-18T00:03:27 |
2025-04-01T04:35:16.797596
|
{
"authors": [
"wlonk"
],
"repo": "oddbird/MetaDeploy",
"url": "https://github.com/oddbird/MetaDeploy/pull/133",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
56811607
|
Update RTC.js
Fix for the screen-sharing local mirror effect: a forgotten parameter in RTC.js, in jitsi-meet commit 6c4a5bd, tag 340.
|
gharchive/pull-request
| 2015-02-06T13:29:11 |
2025-04-01T04:35:16.984528
|
{
"authors": [
"odotom"
],
"repo": "odotom/jitsi-meet",
"url": "https://github.com/odotom/jitsi-meet/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1099901118
|
refactor(columbus): make api version in columbus sink as a config
Is your feature request related to a problem? Please describe.
Columbus is updating its API version from v1 to v1beta1 (ref); we need to update the meteor Columbus sink API to stay in sync with the Columbus changes.
Describe the solution you'd like
Instead of hardcoding the version to v1beta1, it is perhaps better to make the version configurable.
wdyt @ravisuhag @StewartJingga ?
I have nothing against it, but I still think it is still okay to hardcode the version because most times, if api version changes, usually what causes it are changes in API contract, and we might need to update the code anyway to follow the new contract.
hmm yeah, that makes sense. let's use hardcoded version for this.
|
gharchive/issue
| 2022-01-12T05:38:38 |
2025-04-01T04:35:16.986414
|
{
"authors": [
"StewartJingga",
"mabdh"
],
"repo": "odpf/meteor",
"url": "https://github.com/odpf/meteor/issues/296",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1304240615
|
feat: add Optimus dashboard
resolves : #365
Pull Request Test Coverage Report for Build 2667953681
0 of 0 changed or added relevant lines in 0 files are covered.
2 unchanged lines in 1 file lost coverage.
Overall coverage decreased (-0.02%) to 77.291%
Files with Coverage Reduction | New Missed Lines | %
---|---|---
ext/notify/pagerduty/pagerdutynotifier.go | 2 | 93.75%
Totals (change from base Build 2656164503): -0.02% | Covered Lines: 7396 | Relevant Lines: 9569
💛 - Coveralls
grafana spelling is wrong
|
gharchive/pull-request
| 2022-07-14T04:33:19 |
2025-04-01T04:35:16.991088
|
{
"authors": [
"Mryashbhardwaj",
"coveralls",
"sravankorumilli"
],
"repo": "odpf/optimus",
"url": "https://github.com/odpf/optimus/pull/439",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
224477676
|
Add possibility to monitor client nodes
This PR adds a new command line flag -mounts-only which disables all checks except gluster_mount_successful and gluster_volume_writeable. This is useful for the monitoring of nodes that have just the client package of glusterfs installed to mount gluster volumes.
Furthermore, I adapted the documentation for the boolean command line flags. The value for such flags has to be set as -flag-name=true/false (or the value can be omitted, which means true). This is documented behavior of Go's flag package rather than a bug: -flag value would be ambiguous for booleans, so the =value form is required. You can find the relevant code for this here.
Thanks for participating!! I'll have a look in a few days.
Thanks for this pull request; unfortunately I have no time to test it. If you'd like to continue to develop this exporter, please let me know.
Yes I'd like to continue to develop this exporter. We are already using an enhanced version of this exporter in production and I think my adaptions would be also useful for others. So it would be great if we find a way to get my changes back to this repo.
I invited you as a collaborator. are you able to commit / accept pull requests?
Yes, I am able to do this.
@coder-hugo Why didn't you accept your pull request? Are there any issues? May I help?
regards, Oli
@ofesseler I just forgot about doing it. When I wrote my last answer I didn't have time to check how to "become" a reviewer and approve the pull request. Without an approval I can't merge it.
|
gharchive/pull-request
| 2017-04-26T13:58:09 |
2025-04-01T04:35:17.019588
|
{
"authors": [
"coder-hugo",
"ofesseler"
],
"repo": "ofesseler/gluster_exporter",
"url": "https://github.com/ofesseler/gluster_exporter/pull/14",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
132992739
|
Open links in about page release notes in external browser
Branch : Master
Steps to reproduce:
In the Mongotron menu click 'About Mongotron'
In the release notes click on a Github link
You should see that the GitHub page gets loaded in Electron. We should open these types of links externally using https://github.com/atom/electron/blob/master/docs/api/shell.md#shellopenexternalurl.
We could wrap this up in an Angular directive, but for rendering Markdown in the About page we need to somehow parse links from the Markdown and convert them.
Fixed in https://github.com/officert/mongotron/commit/be6b810419a998b7e3f650eb335420046715f5aa
|
gharchive/issue
| 2016-02-11T14:49:28 |
2025-04-01T04:35:17.028445
|
{
"authors": [
"officert"
],
"repo": "officert/mongotron",
"url": "https://github.com/officert/mongotron/issues/112",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1679210968
|
🛑 Class is down
In 36190f7, Class (https://class.ohelit.co/) was down:
HTTP code: 503
Response time: 468 ms
Resolved: Class is back up in 8930802.
|
gharchive/issue
| 2023-04-21T23:23:06 |
2025-04-01T04:35:17.113282
|
{
"authors": [
"ohelitco"
],
"repo": "ohelitco/status",
"url": "https://github.com/ohelitco/status/issues/716",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
278202798
|
tes/list: hitting max response size limits
On a project using boltdb and a relatively small amount of tasks + logs, I'm getting errors and slowness from dashboard.
./funnel task list
Error: [STATUS CODE - 403] {"error":"grpc: trying to send message larger than max (8044641 vs. 4194304)","code":8}
The web dashboard doesn't load. The terminal dashboard works, but is very slow.
Possible solution: https://jbrandhorst.com/post/grpc-binary-blob-stream/
I think it's good that it's failing. An 8MB response is too much. Probably just need to optimize the clients to return smaller pages by default.
Also interesting, the majority of the message weight comes from the OutputFileLog which is part of the basic view.
|
gharchive/issue
| 2017-11-30T17:21:45 |
2025-04-01T04:35:17.152076
|
{
"authors": [
"adamstruck",
"buchanae"
],
"repo": "ohsu-comp-bio/funnel",
"url": "https://github.com/ohsu-comp-bio/funnel/issues/367",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
712806436
|
Move argument parsing to main-function
This PR moves the argument parsing into a main function.
This is needed for creating a bioconda package, as an entry point defined by a function name is required.
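For context, a console-script entry point needs a named zero-argument callable, which is why parsing cannot live at module top level. A minimal sketch of the pattern (the names are illustrative, not debarcer's actual API):
import argparse

def main():
    # Parsing lives inside main() so packaging tools (pip, bioconda)
    # can reference the function, e.g. "debarcer = debarcer.cli:main".
    parser = argparse.ArgumentParser(prog="debarcer")
    parser.add_argument("--config", help="path to a config file")
    args = parser.parse_args()
    print(args)  # stand-in for dispatching into the tool's real logic

if __name__ == "__main__":
    main()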
Thank you for your interest.
We released debarcer 2.1.1, which now includes an entry point and can be installed from PyPI.
|
gharchive/pull-request
| 2020-10-01T12:41:40 |
2025-04-01T04:35:17.159564
|
{
"authors": [
"FelixMoelder",
"rjovelin"
],
"repo": "oicr-gsi/debarcer",
"url": "https://github.com/oicr-gsi/debarcer/pull/220",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1842948089
|
CI config for latest 4-stable OKD release picked wrong base release
Describe the bug
In the release status page for 4.13.0-0.okd-2023-08-04-164726 it shows a "failed" entry for a wrong base release (e.g. scos-based instead of fcos).
Version
4.13.0-0.okd-2023-08-04-164726
How reproducible
N/A
Log bundle
N/A
Oops, accidentally used the wrong GitHub account. Should have come from this one.
The above one is our org admin account.
The same happened again for 4.13.0-0.okd-2023-08-18-135805.
Should be fixed by https://github.com/openshift/release/pull/42843
|
gharchive/issue
| 2023-08-09T10:33:35 |
2025-04-01T04:35:17.183434
|
{
"authors": [
"LorbusChris",
"ars-admin",
"kai-uwe-rommel"
],
"repo": "okd-project/okd",
"url": "https://github.com/okd-project/okd/issues/1696",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
709542833
|
MarkDown minor edit
Changes a quote format to a blockquote instead of a code block because syntactically it makes more sense and also it enhances the UI (no need for horizontal scrolling).
Thanks for your contribution! :)
|
gharchive/pull-request
| 2020-09-26T14:01:27 |
2025-04-01T04:35:17.185982
|
{
"authors": [
"cuducos",
"jvanz"
],
"repo": "okfn-brasil/querido-diario-api",
"url": "https://github.com/okfn-brasil/querido-diario-api/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
303582645
|
Fix Rosie installation via Docker Compose
Not sure why we added the directory rosie/ to the setup script. I tried it today and it didn't work; removing it, it works. Can anyone else test it, please?
It looks like without the rosie/ in the setup script it works in the Docker flow, but not in Travis…
Tests pass locally. Maybe they aren't passing in CI because they are run from outside Docker?
Yep… should we move to a dockerized CI infra?
I think so.
Closed in favor of #343
|
gharchive/pull-request
| 2018-03-08T18:25:21 |
2025-04-01T04:35:17.192800
|
{
"authors": [
"Irio",
"cuducos"
],
"repo": "okfn-brasil/serenata-de-amor",
"url": "https://github.com/okfn-brasil/serenata-de-amor/pull/341",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
545194175
|
[nav] regression in #navbarCollapse
Is this a BUG REPORT or FEATURE REQUEST?:
bug
What happened:
Regression in nav
What you expected to happen:
When you click on the nav hamburger it should show the list
How to reproduce it (as minimally and precisely as possible):
Just click on the demo site with a small window size
Anything else we need to know?:
This is all I could find
0.15
HTML:
<div class="navbar-collapse justify-content-between collapse show in" id="navbarCollapse">
Function:
assets/js/helpers/bootstrap-helper.js
function toggleMenu(node) {
const menu = document.querySelector(node.dataset.target);
menu.classList.toggle('in');
}
0.16
HTML:
<div class="navbar-collapse justify-content-between collapse show" id="navbarCollapse">
Function:
assets/js/collapse.js
const showCollapse = function (el, target) {
$(el).attr('aria-expanded', 'true');
$(el).removeClass('collapsed');
$(target).addClass('show');
};
So the problem seems to be that in 0.16 the new functions don't add the in class.
Environment:
Syna Theme version: 0.16
Hugo version: 0.60
There seems to have been a bigger underlying issue with a code change leading to a full DOM reset resulting in lost event listeners. This should be fixed in #681.
If this is still an issue in master let us know and reopen :)
|
gharchive/issue
| 2020-01-03T22:43:57 |
2025-04-01T04:35:17.204704
|
{
"authors": [
"Marzal",
"stp-ip"
],
"repo": "okkur/syna",
"url": "https://github.com/okkur/syna/issues/677",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1484624899
|
⚖️ Register Validator:
Network Choice
nemeton-1
Who are you?
MetaStack Data GmbH - https://www.metastackdata.com
gentx
{"body":{"messages":[{"@type":"/cosmos.staking.v1beta1.MsgCreateValidator","description":{"moniker":"metastackdata","identity":"","website":"https://www.metastackdata.com","security_contact":"","details":""},"commission":{"rate":"0.050000000000000000","max_rate":"0.200000000000000000","max_change_rate":"0.010000000000000000"},"min_self_delegation":"1","delegator_address":"okp41tsgdw5rd3tvx6m47wskvhzw6tayj0ulwkfjgem","validator_address":"okp4valoper1tsgdw5rd3tvx6m47wskvhzw6tayj0ulwrwzp86","pubkey":{"@type":"/cosmos.crypto.ed25519.PubKey","key":"p3dHzsAlbwyPR/pghbB2dI+i4eNl7MEk7NGaIvWd7NQ="},"value":{"denom":"uknow","amount":"10000000000"}}],"memo":"35830c8715d90c81f79af8eb73ceaae5ad496841@89.58.9.181:26656","timeout_height":"0","extension_options":[],"non_critical_extension_options":[]},"auth_info":{"signer_infos":[{"public_key":{"@type":"/cosmos.crypto.secp256k1.PubKey","key":"A61ir4Lgdm6NyzOnRP4OoN6fL58kYAPG/Dvrn1ak4H31"},"mode_info":{"single":{"mode":"SIGN_MODE_DIRECT"}},"sequence":"0"}],"fee":{"amount":[],"gas_limit":"200000","payer":"","granter":""},"tip":null},"signatures":["QY4qfbf2VFxyEDhP0YvSdhOv5zR4PUDU92FRa2kAJM0l8D3Kxt8BROt5gmX3k86ubPjZ+98gbvRWpaCIl+ymqw=="]}
😉 Here is the corresponding PR: https://github.com/okp4/networks/pull/379
|
gharchive/issue
| 2022-12-08T13:17:59 |
2025-04-01T04:35:17.207687
|
{
"authors": [
"bot-anik",
"pogi01"
],
"repo": "okp4/networks",
"url": "https://github.com/okp4/networks/issues/378",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
285848466
|
:seedling: adds get_current_user()
Adds support for https://developer.okta.com/docs/api/resources/users#get-current-user
Thank you for your PR and sorry it took so long to get back to it. We have recently released a 1.x version of this SDK which completely updates the structure and build process for our SDK.
I believe our changes no longer require this PR to be included so we are going to close it. If you feel the new version does not resolve the issues you were seeing, prompting this PR, please let us know and we will be happy to re-investigate.
|
gharchive/pull-request
| 2018-01-04T00:20:58 |
2025-04-01T04:35:17.231152
|
{
"authors": [
"bretterer",
"djcrabhat"
],
"repo": "okta/okta-sdk-python",
"url": "https://github.com/okta/okta-sdk-python/pull/59",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
487757464
|
Application failed to start at client
I followed your process but am facing an issue with the client application: it fails to start.
APPLICATION FAILED TO START
Description:
Method springSecurityFilterChain in org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration required a bean of type 'org.springframework.security.oauth2.client.registration.ClientRegistrationRepository' that could not be found.
The following candidates were found but could not be injected:
- Bean method 'clientRegistrationRepository' in 'OAuth2ClientRegistrationRepositoryConfiguration' not loaded because OAuth2 Clients Configured Condition registered clients is not available
Action:
Consider revisiting the entries above or defining a bean of type 'org.springframework.security.oauth2.client.registration.ClientRegistrationRepository' in your configuration.
Process finished with exit code 0
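The "OAuth2 Clients Configured Condition" mentioned in the error only passes when at least one client registration exists in the environment. A minimal sketch of the kind of application.properties entries it looks for (the registration id okta and all values are placeholders):
spring.security.oauth2.client.registration.okta.client-id=<your-client-id>
spring.security.oauth2.client.registration.okta.client-secret=<your-client-secret>
spring.security.oauth2.client.provider.okta.issuer-uri=https://<your-okta-domain>/oauth2/default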
@arpanbose Can you please try comparing your code to what this repo has? SmartSynchronize works well for comparing two different directories.
I actually figured it out, thanks by the way.
|
gharchive/issue
| 2019-08-31T10:47:32 |
2025-04-01T04:35:17.243542
|
{
"authors": [
"arpanbose",
"mraible"
],
"repo": "oktadeveloper/okta-spring-boot-authz-server-example",
"url": "https://github.com/oktadeveloper/okta-spring-boot-authz-server-example/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2037185873
|
Add sound mode "DOLBY PRO LOGIC"
see issue: https://github.com/home-assistant/core/issues/105514
Thanks 😄
@ol-iver I saw this log message in my HA instance, should I open a separate PR with a similar change? I am also OK to just let it fall back since it is right...
Logger: denonavr.soundmode
Source: components/denonavr/media_player.py:362
First occurred: January 1, 2024 at 1:23:02 PM (1 occurrences)
Last logged: January 1, 2024 at 1:23:02 PM
Not able to match sound mode: 'DOLBY PLII GAME', assuming 'DOLBY DIGITAL'.
There is also this one, so sounds like yes I should open a PR https://github.com/ol-iver/denonavr/issues/268
|
gharchive/pull-request
| 2023-12-12T07:56:02 |
2025-04-01T04:35:17.248945
|
{
"authors": [
"ol-iver",
"starkillerOG",
"xconverge"
],
"repo": "ol-iver/denonavr",
"url": "https://github.com/ol-iver/denonavr/pull/270",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
310693157
|
Garbled query results
Thanks for the feedback. A new version has been submitted; once the Chrome extension auto-updates, this problem should be resolved.
|
gharchive/issue
| 2018-04-03T04:49:11 |
2025-04-01T04:35:17.258786
|
{
"authors": [
"cztom",
"oldj"
],
"repo": "oldj/SearchIP",
"url": "https://github.com/oldj/SearchIP/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
615068649
|
organizes createables
Description:
Organize createables
Changes:
add ability to charge and uncharge toxic staff and serp helm
make it so you can color the serp helm with mutagens
remove the level req to attach a hilt to a godsword; also make it so you can remove the hilt
fix prayer not counting for the total XP leaderboard
[x] I have tested all my changes thoroughly.
I have updated the targeted branch to the new dev branch. Can I also ask that you split your commits into single logical additions, such as having a new commit for the addition of new items to the createables file, separate from the reordering commit?
|
gharchive/pull-request
| 2020-05-09T02:10:39 |
2025-04-01T04:35:17.273375
|
{
"authors": [
"Alexsuperfly",
"coolbop32"
],
"repo": "oldschoolgg/oldschoolbot",
"url": "https://github.com/oldschoolgg/oldschoolbot/pull/257",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
174355746
|
Grey Pixel Line
Hi, I have a grey line appear on the right of the image when i try to scale
image.scale(ratio);
Original
Scale
Do you guys have an idea?
Thanks
The rightmost pixel of your image is transparent. Here I opened it in a new tab and added background: crimson; to the <img> element so it is easily visible:
The short version is: those transparent pixels are encoded as transparent black (0x00000000 in your case, but it could be anything depending on the encoder), and when the pixel color is averaged over a chunk of pixels during scaling down, that meaningless black gets mixed with the visible white and we get grey where we should get white.
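To make the averaging concrete, here is the arithmetic for one white opaque pixel next to one fully transparent padding pixel (a generic sketch of the two strategies, not jimp's actual resize code):
white = (255, 255, 255, 255)  # opaque white
empty = (0, 0, 0, 0)          # "transparent black" padding pixel

# Naive channel-wise average ignores alpha and produces a grey fringe:
naive = tuple((a + b) // 2 for a, b in zip(white, empty))
print(naive)  # (127, 127, 127, 127): semi-transparent grey

# Alpha-weighted average keeps the visible colour white:
total_alpha = white[3] + empty[3]
weighted_rgb = tuple(
    (white[c] * white[3] + empty[c] * empty[3]) // total_alpha
    for c in range(3)
)
print(weighted_rgb + (total_alpha // 2,))  # (255, 255, 255, 127)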
PR #159, which I submitted two weeks ago, fixes that. You will also find a better explanation there, with some examples.
Indeed you're right, thanks for the answer!
|
gharchive/issue
| 2016-08-31T19:19:04 |
2025-04-01T04:35:17.330990
|
{
"authors": [
"Iwasawafag",
"jeremypiednoel"
],
"repo": "oliver-moran/jimp",
"url": "https://github.com/oliver-moran/jimp/issues/163",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
228283096
|
Create phobos-visualizer build script
We need a script that builds binaries for OS X, Windows and Linux using Unity and compresses the applications so we can attach them to phobos releases.
#171
Looks like setting up a unity build is easy enough:
http://blog.stablekernel.com/continuous-integration-for-unity-5-using-travisci
https://jonathan.porta.codes/2015/04/17/automatically-build-your-unity3d-project-in-the-cloud-using-travisci-for-free/
I don't know anything about copying the built binaries somewhere. Would we create a nightly release and attach the binaries to that? Do you have experience with that?
Thought about it, never written a working solution though.
Looks like GitHub allows something for releases (not branches): https://docs.travis-ci.com/user/deployment/releases/
Uploading attachments to releases on gitlab: https://gitlab.com/gitlab-org/gitlab-ce/issues/18486
Example scripts: http://answers.unity3d.com/questions/9382/build-from-script.html
I've been working on this, but it is very hard to activate a Unity install from the command line. Activation is required for any command, including building. The reason for this is probably that Unity has their own "Cloud building" solution.
Putting more time into this is not worth it. A build script is still useful.
Fixed by https://gitlab.com/bikelab/phobos-visualizer/commit/92a93d971a8c54fea6225ce8f7c425389c4d17e2
Updated release v0.2 with win|lin|mac 32|64 builds.
|
gharchive/issue
| 2017-05-12T12:53:31 |
2025-04-01T04:35:17.353116
|
{
"authors": [
"mickvangelderen",
"oliverlee"
],
"repo": "oliverlee/phobos",
"url": "https://github.com/oliverlee/phobos/issues/184",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2266535771
|
Setting OLLAMA_HOST to 0.0.0.0 could make the API listen on the port using IPv6 only
What is the issue?
Edit2: sorry, if you set BindIPv6Only, 0.0.0.0:11434 should use v4. so this shouldn't be a problem.
Edit: by default, it seems it'll listen on both v4 and v6. If you set BindIPv6Only in systemd.socket, or /proc/sys/net/ipv6/bindv6only is set to 1, it may not listen on v4.
Ollama is only listening on IPv6 with OLLAMA_HOST=0.0.0.0:
# netstat -anp | grep 11434
tcp6 0 0 :::11434 :::* LISTEN 5009/ollama
This seems to be the cause:
https://github.com/golang/go/issues/48723
OS
Linux
GPU
Nvidia
CPU
Other
Ollama version
0.1.32
What's the fix?
I believe this might be the root of all problems related to the Open WebUI docker setup.
when i get into the docker container for open webui:
docker exec -it open-webui bash
then execute:
curl --ipv4 http://host.docker.internal:11434/api/tags
i get a response
but when i do:
curl --ipv6 http://host.docker.internal:11434/api/tags
it fails (setting up IPv6 in docker is a topic by itself: https://docs.docker.com/config/daemon/ipv6/)
I believe the fix for this is to parse OLLAMA_HOST:
if it is "0.0.0.0", listen on the IPv4 interface;
if it is "::", listen on the IPv6 interface;
or, simpler, listen on both the IPv6 and IPv4 interfaces by default,
just like what the OpenSSH server does, for example:
sudo netstat -nutlp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 697/sshd: /usr/sbin
tcp6 0 0 :::22 :::* LISTEN 697/sshd: /usr/sbin
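As a language-agnostic illustration of the knob involved (a Python sketch, not ollama's actual Go code), whether a wildcard IPv6 listener also accepts IPv4 clients is controlled by the IPV6_V6ONLY socket option, which is exactly what bindv6only/BindIPv6Only toggles:
import socket

# Listen on "::" (all interfaces). With IPV6_V6ONLY = 0 the socket is
# dual-stack and also accepts IPv4 clients (as v4-mapped addresses);
# with IPV6_V6ONLY = 1 it is IPv6-only, the situation described above.
s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
s.bind(("::", 11434))
s.listen()
print("listening dual-stack on [::]:11434")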
|
gharchive/issue
| 2024-04-26T21:35:48 |
2025-04-01T04:35:17.361351
|
{
"authors": [
"TadayukiOkada",
"martindale",
"nuaimat"
],
"repo": "ollama/ollama",
"url": "https://github.com/ollama/ollama/issues/3961",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1464751697
|
CP patch cpuplus 359
I'm trying to remove CP on a MIB2 PQ, but the offset for my FW is missing from the patch_cp script. I would add it myself but I've no idea what the offset should be!
If you could help add my FW that would be very much appreciated. File size is 2189726, audiomgr is attached.
It's a cpuplus EU PQ 359.
Thanks,
Toby
tsd.mibstd2.audio.zip
Attached full support dump for other patches.
Thanks
dump.zip
Hi, try to dump this file:
tsd.mibstd2.hmi.ifs
You should find it in the HMI folder; I'll patch it for you.
Added in https://github.com/olli991/mib-std2-pq-zr-toolbox/archive/refs/heads/master.zip
Just unzip to SD and update via Toolbox->Update
|
gharchive/issue
| 2022-11-25T16:39:11 |
2025-04-01T04:35:17.365525
|
{
"authors": [
"Medmimoza",
"TMakins",
"lprot"
],
"repo": "olli991/mib-std2-pq-zr-toolbox",
"url": "https://github.com/olli991/mib-std2-pq-zr-toolbox/issues/186",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1111220518
|
Connection not available. ThinQ platform not ready
Logger: custom_components.smartthinq_sensors
Source: custom_components/smartthinq_sensors/wideq/core_v2.py:1131
Integration: SmartThinQ LGE Sensors (documentation, issues)
First occurred: 23:42:43 (3 occurrences)
Last logged: 23:43:10
Connection not available. ThinQ platform not ready
Traceback (most recent call last):
File "/config/custom_components/smartthinq_sensors/init.py", line 217, in async_setup_entry
lge_devices = await lge_devices_setup(hass, client)
File "/config/custom_components/smartthinq_sensors/init.py", line 442, in lge_devices_setup
if not await dev.init_device():
File "/config/custom_components/smartthinq_sensors/init.py", line 331, in init_device
result = await self._hass.async_add_executor_job(
File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/config/custom_components/smartthinq_sensors/wideq/device.py", line 1283, in init_device_info
self._model_data = self._client.model_url_info(
File "/config/custom_components/smartthinq_sensors/wideq/core_v2.py", line 1158, in model_url_info
self._model_url_info[url] = self._load_json_info(url)
File "/config/custom_components/smartthinq_sensors/wideq/core_v2.py", line 1131, in _load_json_info
return json.loads(enc_resp)
File "/usr/local/lib/python3.9/json/init.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/local/lib/python3.9/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
There is a dedicated template for creating issues. Please open a new issue using the correct template and fill in all the required information. 👍
|
gharchive/issue
| 2022-01-22T03:45:41 |
2025-04-01T04:35:17.377536
|
{
"authors": [
"jgasparelo",
"ollo69"
],
"repo": "ollo69/ha-smartthinq-sensors",
"url": "https://github.com/ollo69/ha-smartthinq-sensors/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2203793112
|
🛑 OME CI server (legacy) is down
In afd2972, OME CI server (legacy) (https://ci.openmicroscopy.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: OME CI server (legacy) is back up in 9f49b51 after 19 minutes.
|
gharchive/issue
| 2024-03-23T09:33:52 |
2025-04-01T04:35:17.402481
|
{
"authors": [
"snoopycrimecop"
],
"repo": "ome/upptime",
"url": "https://github.com/ome/upptime/issues/1776",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1718677896
|
🛑 OME downloads is down
In 2806df4, OME downloads (https://downloads.openmicroscopy.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: OME downloads is back up in 88128c1.
|
gharchive/issue
| 2023-05-21T23:30:52 |
2025-04-01T04:35:17.404655
|
{
"authors": [
"snoopycrimecop"
],
"repo": "ome/upptime",
"url": "https://github.com/ome/upptime/issues/757",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
537899315
|
Unsupported file paths using colons
Currently in the documents folder there exists a collection of files and folders which use ':' (colons) in their names. The Windows operating system has reserved this character for alternate data streams (source). This prevents operations on Windows machines with this repository.
A few files and folders have been detected in the classical/algorithms folder in documents:
bachBWV889Fg/2016DM:SIATECCompressF1
bachBWV889Fg/2016DM:SIATECCompressP.csv
bachBWV889Fg/...
beethovenOp2No1Mvt3/2016DM:SIATECCompressF1
beethovenOp2No1Mvt3/...
2016DM:SIATECCompressF1.csv
2016DM:SIATECCompressP.csv
...
Not all files have been checked, but these have been found so far. Is it possible to change the naming scheme of these files/folders? Thanks for all the work.
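If renaming is acceptable, here is a small sketch of one possible pass over the affected tree (mapping ':' to '-' is an arbitrary choice for illustration, and the root path is assumed from the report):
import os

DOCS_ROOT = "documents/classical/algorithms"  # assumed location

# Walk bottom-up so directories are renamed after their contents.
for dirpath, dirnames, filenames in os.walk(DOCS_ROOT, topdown=False):
    for name in filenames + dirnames:
        if ":" in name:
            os.rename(os.path.join(dirpath, name),
                      os.path.join(dirpath, name.replace(":", "-")))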
Ah yes, it's not a problem on Ubuntu, and we don't have Windows machines, so it never came up.
Feel free to fork and make a pull request Kevin!
New Rendering module should have fixed this.
|
gharchive/issue
| 2019-12-14T10:59:30 |
2025-04-01T04:35:17.419311
|
{
"authors": [
"irisyupingren",
"kevin4998"
],
"repo": "omelkonian/hs-pattrans",
"url": "https://github.com/omelkonian/hs-pattrans/issues/10",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
701696193
|
move dev env deployment job to helm repo
:clipboard: https://github.com/omgnetwork/devops/issues/407
need helm repo PR changes: https://github.com/omgnetwork/helm-development/pull/470
Overview
Instead of overriding with kubernetes commands in elixir-omg CI, it now triggers a job in the helm-development repo to start the deployment.
For master updates, it will trigger an update to helm-development using the git SHA as the app_version and change the dev env version as well. This ends up having the CircleCI jobs inside helm-development deploy it.
For release tag changes, it will trigger an update to helm-development using the semver as the app_version (same as before)
Changes
add increase_chart_version_XXX jobs for master and release
remove old deployment script and jobs
Testing
used the playground repos: https://github.com/omgnetwork/elixir-omg-boolafish-playground, https://github.com/omgnetwork/helm-development-boolafish-playground
release tag job trigger: https://github.com/omgnetwork/helm-development-boolafish-playground/pull/75
master job trigger: https://github.com/omgnetwork/helm-development-boolafish-playground/pull/73
comment first before reviewing, fanzi
|
gharchive/pull-request
| 2020-09-15T07:45:57 |
2025-04-01T04:35:17.424395
|
{
"authors": [
"boolafish",
"jarindr"
],
"repo": "omgnetwork/elixir-omg",
"url": "https://github.com/omgnetwork/elixir-omg/pull/1738",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1053177593
|
add missing bits to Header.Flags
Fixes #8
Thank you for this! There are a few things missing, so I'll co-author a commit with you including them. :)
|
gharchive/pull-request
| 2021-11-15T03:39:41 |
2025-04-01T04:35:17.425373
|
{
"authors": [
"nailuj29gaming",
"ominitay"
],
"repo": "ominitay/zigvale",
"url": "https://github.com/ominitay/zigvale/pull/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
542599958
|
feature: IFE testing
IFE testing in end2end tests
Overview
Support for IFE testing + streamline contracts
Changes
Pointing to docker images without the solc compiler (making the docker containers smaller)
Using the plasma-deployer snapshots (they include contract ABIs and db.json); this way we're able to observe events happening from contracts in end2end tests, making it easier to develop tests.
end2end tests now derive all contract addresses from the initial plasma contract (deposit queues, exit games...)
When bumping contracts the only thing you need to do is change the snapshots.env; everything else happens in make init_test. That was needed because the tx hash of the plasma framework changes when increasing the minimum standard exit time...
hardening the end2end tests (global locks, processing specific exits from queues)
(some part of the IFE is blocked by https://github.com/omisego/plasma-contracts/issues/539 which will be added later)
deleted apps/omg_watcher/test/omg_watcher/integration/standard_exit_test.exs because it has the exact same coverage as standard_exits.feature
Testing
Same tests!
Coverage decreased (-0.9%) to 88.75% when pulling 416b7852090903ad4086cf193ff840d6eebcb78c on inomurko/ife into fcb7088c02f61abf6a0f195bf8752d08904eb489 on master.
[ ] rewrite Account.ex to parse the env file
[ ] ABI + websockex events
[ ] cleanup Eventer
[ ] priority queue and exit IFE
accompanying pr https://github.com/omisego-images/docker-elixir-omg/pull/23
|
gharchive/pull-request
| 2019-12-26T16:36:28 |
2025-04-01T04:35:17.436669
|
{
"authors": [
"InoMurko",
"coveralls"
],
"repo": "omisego/elixir-omg",
"url": "https://github.com/omisego/elixir-omg/pull/1236",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1621560965
|
"Reserved wallet balance invalidated" when creating a channel
code = Unknown desc = reserved wallet balance invalidated: transaction would leave insufficient funds for fee bumping anchor channel closings (see debug log for details)
My wallet has 0.0002 Bitcoin and 200 Dollar in balance.
Waiting for the download.
The balance of btc in the wallet is insufficient
|
gharchive/issue
| 2023-03-13T14:06:31 |
2025-04-01T04:35:17.463323
|
{
"authors": [
"healergyl",
"neocarmack",
"saiful112233"
],
"repo": "omnilaboratory/OBAndroid",
"url": "https://github.com/omnilaboratory/OBAndroid/issues/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1029091951
|
Support full width
I'd like to print out my boxes on the terminal's full width 🙏 .
Awesome, thanks ❤️
|
gharchive/issue
| 2021-10-18T13:07:07 |
2025-04-01T04:35:17.465608
|
{
"authors": [
"sahariko"
],
"repo": "omrilotan/boxt",
"url": "https://github.com/omrilotan/boxt/issues/1",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
108533607
|
Broken for 0.4-RC2
In issue #8 there is an example that was working under RC1, but seems to failing now under RC2. Is it just me?
This may have been a false alarm. Sorry for the noise.
|
gharchive/issue
| 2015-09-27T13:07:56 |
2025-04-01T04:35:17.492541
|
{
"authors": [
"jverzani"
],
"repo": "one-more-minute/Requires.jl",
"url": "https://github.com/one-more-minute/Requires.jl/issues/12",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
456050782
|
oneinstack user: a php7.3 + phalcon API keeps getting banned, documentation wanted
I've been using oneinstack for a long time.
On a newly installed instance, php7.3 + phalcon serves as a REST API server, with the WAF enabled.
Clicks frequently get redirected to /captcha-waf.html?continue=a.....
I've looked through the config file; there is only one setting,
config_cc_rate = "60/60"
that could affect this, but the click rate definitely never reached 60 per minute.
There is currently no documentation for this module,
so please advise how to relax it appropriately and avoid /captcha-waf.html?continue=a.....
If the nginx WAF and PHP are on the same server, there may be internal PHP redirects, which are also counted, so I suggest increasing the value. If the nginx WAF only acts as a reverse proxy, it behaves normally.
They are on the same server: nginx + apache + php-fpm.
|
gharchive/issue
| 2019-06-14T04:05:54 |
2025-04-01T04:35:17.529164
|
{
"authors": [
"cnyyk",
"oneinstack"
],
"repo": "oneinstack/ngx_lua_waf",
"url": "https://github.com/oneinstack/ngx_lua_waf/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2412112897
|
Version 3.0.0 publishing
Is it published or what? I see it merged into master, but both CocoaPods and SPM list 2.4.0 as the latest.
The reason I'm asking is that I want to migrate my RiseTransitSet2 wrapper to either ObjC or C, but I don't know which one.
I would love to see this too, especially the newer RiseTransitSet2 wrapper, but I understand that doing a release takes a lot of work. Perhaps the workload could be cut down by only supporting SPM going forward. I also know that as soon as a new version is posted there will be issue after issue of "my code doesn't work any more", which onekiloparsec may not have the time, or more likely the will, to handle. So I can appreciate holding the current position. I'd help out myself, but I don't know enough about how to release a new version.
especially to see the newer RiseTransitSet2 wrapper
Frankly, I'm somewhat disappointed in the RiseTransitSet2 implementation in AA+. The original RiseTransitSet is based on Meeus's book and is fast and elegant (with a few known caveats), while RiseTransitSet2 uses a brute-force approach, simply iterating over time in 10-minute steps. I even found a bug in it (it's fixed in the latest version).
|
gharchive/issue
| 2024-07-16T22:00:28 |
2025-04-01T04:35:17.531863
|
{
"authors": [
"alex-vasenin",
"tallPete"
],
"repo": "onekiloparsec/SwiftAA",
"url": "https://github.com/onekiloparsec/SwiftAA/issues/123",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
372901376
|
Shibboleth logout issue
I am facing a problem with logout.
Here is a snippet from the code:
relayState = url_for controller: 'saml', action: 'index'
redirect_to(logout_request.create(settings, :RelayState => relayState))
Here is the screenshot of the error.
Is the Single Logout Service value on the "Configuration" tab of the OneLogin connector properly configured?
Yes, here is a screenshot of the configuration tab.
I found where the issue is: the problem is the ruby-saml settings.
As the IdP single logout service value, you may set
https://rohit-india.com-dev.onelogin.com/trust/saml2/http-redirect/slo/846039
Right now you have a wrong value.
Tried. Did not work. The same issue persists.
Can you export the SAML Tracer log and send it to my email, along with screenshots of the SAML settings of the OneLogin connector and the toolkit?
P.S. My email is on my profile.
|
gharchive/issue
| 2018-10-23T09:32:13 |
2025-04-01T04:35:17.539478
|
{
"authors": [
"RohitVenturit",
"pitbulk"
],
"repo": "onelogin/ruby-saml",
"url": "https://github.com/onelogin/ruby-saml/issues/476",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
218363147
|
Best way for $("#foo .bar")
document.querySelector and document.querySelectorAll are quite SLOW.
What is the better way to write $("#foo .bar")?
1 - document.getElementById('foo').getElementsByClassName('bar') ?
or
2 - document.querySelector('#id .bar') ?
I would say #2 is better since getElementsByClassName is buggy. But I would also suggest that you avoid selecting things with descendant selectors, as it's an anti-pattern and under-performant.
E.g. try to do document.querySelector('#id') or something similar instead of document.querySelector('#id .bar').
|
gharchive/issue
| 2017-03-31T00:11:09 |
2025-04-01T04:35:17.549707
|
{
"authors": [
"adrianoresende",
"kaidez"
],
"repo": "oneuijs/You-Dont-Need-jQuery",
"url": "https://github.com/oneuijs/You-Dont-Need-jQuery/issues/141",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
297869668
|
Voice/video call in sleep mode: call session will disconnect in 3 minutes
Steps to reproduce the problem
start call on foreground mode
then peer user joined
press power button to make phone sleep
still can use voice call
in 3 minutes call session will disconnected
I've tried the demo project; this issue happens there as well.
The Xcode console shows
'oniceconnectionstatechange', 'disconnected'
Platform information
React Native version: 0.44.3
Plugin version: 1.58.3
OS: iOS
OS version: 9.0-11.x
remove the leave function from socket
You need to keep your app alive when it goes into the background. That, however, is not this plugin's job.
|
gharchive/issue
| 2018-02-16T17:55:16 |
2025-04-01T04:35:17.553354
|
{
"authors": [
"chindanai",
"saghul",
"shaheem-khanzada"
],
"repo": "oney/react-native-webrtc",
"url": "https://github.com/oney/react-native-webrtc/issues/421",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1329287604
|
Implement new flow.json contract imports syntax
To ensure compatibility with the general ecosystem we need to implement this FLIP.
Note: This issue does NOT encompass the reading of the flow.json file; we are to assume the data structure is passed in as configuration.
Don't we have this already? With the import xxx from 0xSOMEWHERE replacements?
@bluesign we support one of the formats. The other looks something like this:
{
...
"contracts": {
"onflow/NonFungibleToken": "./NonFungibleToken.cdc",
"onflow/FungibleToken": {
"testnet": "github.com/flow/FungibleToken/Fungible.cdc",
"mainnet": "0x2",
"emulator": "./FungibleToken.cdc",
"e2e-testnet": "0x3"
}
}
...
}
And this one we need to support as well.
Ah, I always thought those 0xProfile etc. could be something like onflow/FungibleToken.
Duplicate of #1449
|
gharchive/issue
| 2022-08-04T23:53:40 |
2025-04-01T04:35:17.588421
|
{
"authors": [
"bluesign",
"justinbarry"
],
"repo": "onflow/fcl-js",
"url": "https://github.com/onflow/fcl-js/issues/1345",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
870331609
|
not possible to send transaction with key other then index 0
Instructions
Problem
I want to send a transaction to mainnet where key 0 has been revoked. That index is currently hard-coded, so it is not possible to change it.
Steps to Reproduce
Set up an account with two keys and revoke key 0
Acceptance Criteria
It is possible to specify an optional key index.
Duplicate of #220
|
gharchive/issue
| 2021-04-28T20:17:18 |
2025-04-01T04:35:17.589949
|
{
"authors": [
"bjartek"
],
"repo": "onflow/flow-cli",
"url": "https://github.com/onflow/flow-cli/issues/222",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1467875835
|
OpenTelemetry tracing
Integrate OpenTelemetry tracing inside the GraphQL API, using Prometheus as the exporter. At this point we should just trace requests/responses; no need yet to dive deeper into the codebase with traces, as that will require refactoring the functions to pass context through (which we should do at a later point anyway). Use the Go OpenTelemetry library https://pkg.go.dev/github.com/rot1024/otelgqlgen
Some examples:
https://github.com/open-telemetry/opentelemetry-go/blob/main/example/prometheus/main.go
Added stale project metrics
|
gharchive/issue
| 2022-11-29T11:09:04 |
2025-04-01T04:35:17.621936
|
{
"authors": [
"DylanTinianov",
"sideninja"
],
"repo": "onflow/flow-playground-api",
"url": "https://github.com/onflow/flow-playground-api/issues/134",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
486846388
|
An "untitled" buffer is opened, even if you specify a file as an arg
Oni2 correctly opens the file I ask for as an arg in a buffer, but it always opens an untitled buffer as well.
Going to close this out in favour of https://github.com/onivim/oni2/issues/81.
|
gharchive/issue
| 2019-08-29T09:33:30 |
2025-04-01T04:35:17.628166
|
{
"authors": [
"CrossR",
"awilkins"
],
"repo": "onivim/oni2",
"url": "https://github.com/onivim/oni2/issues/718",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
623282859
|
feat(Windows): custom titlebar
This adds a custom titlebar on Windows, similar to the one on macOS.
Just two comments: It looks like the logo could do with a teeny bit of padding when maximised.
Also the image is pretty low-res, though I don't think there is much you can do about that @zbaylin, so maybe a thing to just note down for now, since there are a few places we need to bump the resolution of the logos, which I think @bryphe has mentioned before, but it's pretty low priority for now.
Looks like the integration tests might need to be updated with the new methods:
File "integration_test/lib/Oni_IntegrationTestLib.re", lines 155-177, characters 4-5:
155 | ....Store.StoreThread.start(
156 | ~showUpdateChangelog=false,
157 | ~getUserSettings,
158 | ~setup,
159 | ~onAfterDispatch,
...
174 | ~window=None,
175 | ~filesToOpen,
176 | (),
177 | ).
Error: This expression has type
close:(unit -> unit) ->
restore:(unit -> unit) ->
(Store.StoreThread.Model.Actions.t -> unit) * (unit -> unit)
but an expression was expected of type 'a * 'b
(You can test this locally with esy '@integrationtest' install & esy '@integrationtest' build)
Awesome, will fix that as soon as my Windows VM finishes building the new dependencies haha 🕐. I also think I saw 1 or 2 esy b dune build @check errors that I should resolve too. Overall though I'm super happy with how this turned out!
I think I fixed all the esy b dune build @check errors (which include the integration test errors) -- CI will let me know if I messed anything up haha.
Ah, looks like it needs an esy format too to pass the hygiene check - but other builds look good!
Also the image is pretty low res, though I don't think there is much you can do about that @zbaylin, so maybe a thing to just note down for now since there is a few places we need to bump the resolution of the logos, which I think @bryphe has mentioned before, but its pretty low priority for now.
Yes, good point! I need to get the icon tweaked a bit to have more solid outline for low-res - it doesn't look right scaled down
I've noticed a few regressions after this, unfortunately (I apparently didn't test this as thoroughly as I thought I had).
The window dimensions are off for me, I expect as part of the many scaling issues we've had.
The maximise behaviour seems to go too far, and fill the whole screen, rather than just up to the toolbar.
There is a slight dead zone in the tool bar where I can't drag, but that could be related to the window scaling issue.
The scaling/window size issue is probably causing #1817 too.
Also, here is an video of what I'm seeing: https://drive.google.com/open?id=1vMrR31pi7HJ-TJKXREOjDctLAZ0JMYGw
(Window handles in the wrong place, then not being able to move the window, then the taskbar being covered on maximise)
|
gharchive/pull-request
| 2020-05-22T15:17:55 |
2025-04-01T04:35:17.635676
|
{
"authors": [
"CrossR",
"bryphe",
"zbaylin"
],
"repo": "onivim/oni2",
"url": "https://github.com/onivim/oni2/pull/1801",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
619085951
|
Status of the support for the ONNX model zoo
This issue is meant as an ongoing discussion about the onnx-mlir coverage of the ONNX model zoo and any other models of interest. Some of the models we have tried and issues found are below.
Supported:
- [x] MNIST
- [x] ResNet
In progress:
- [ ] ShuffleNet: slight result inconsistency being investigated, but all operations are supported
Missing Ops:
- [ ] DenseNet: missing GlobalAveragePool operation
- [ ] AlexNet: missing LRN operation
- [ ] SqueezeNet: missing Dropout operation
- [ ] CaffeNet: missing LRN operation
Errors:
bertsquad8:
“onnx-mlir: /home/gbercea/patch-compiler/llvm-project/mlir/lib/IR/Value.cpp:20: mlir::Value::Value(mlir::Operation *, unsigned int): Assertion `op->getNumResults() > resultNo && "invalid result number"' failed.”
bidaf:
“onnx-mlir: /home/gbercea/onnf-compiler/onnx-mlir/src/Builder/FrontendDialectTransformer.cpp:84: mlir::Type onnx_mlir::{anonymous}::FrontendGenImpl::convertONNXTypeToMLIRType(onnx::TensorProto_DataType): Assertion `false && "Unsupported data type encountered."' failed.”
Unsupported type: onnx::TensorProto_DataType::TensorProto_DataType_STRING
My finding:
MNIST from PyTorch - Missing Flatten, LogSoftmax.
I hit the same assertion as @doru1004 for
bertsquad-8.onnx and bertsquad-10.onnx, both available here
and also for gpt2-10.onnx, available here.
$ ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 848382 abort (core dumped) ./bin/onnx-mlir --EmitLib ./bertsquad-8.onnx
$ ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 856164 abort (core dumped) ./bin/onnx-mlir --EmitLib ./bertsquad-10.onnx
$ ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
./bin/onnx-mlir: /home/xxxxx/miniconda3/lib/libtinfo.so.6: no version information available (required by ./bin/onnx-mlir)
onnx-mlir: /home/xxxxx/llvm-project/mlir/lib/IR/Value.cpp:22: mlir::Value::Value(mlir::Operation*, unsigned int): Assertion `op->getNumResults() > resultNo&& "invalid result number"' failed.
[1] 856207 abort (core dumped) ./bin/onnx-mlir --EmitLib ./gpt2-10.onnx
ResNet breaks on the shape inference pass. I notice that the output in basic MLIR is
func @main_graph(%arg0: tensor<1x3x224x224xf32>) -> tensor<*xf32> {
The output should be -> tensor<1x1000xf32>.
There is a node in the graph called resnetv24_dense0_fwd that is the output.
ResNet50-v1 is also not working:
% ./onnx-mlir --EmitMLIR resnet50-v1-7.onnx
not a ShapedType or not ranked
UNREACHABLE executed at /Users/xatter/code/compiler/llvm-project/mlir/lib/IR/StandardTypes.cpp:253!
zsh: abort ./onnx-mlir --EmitMLIR resnet50-v1-7.onnx
@tjingrant I think we are running a version of ResNet as part of the test suite, is that different from the one above?
@Xatter and @doru1004 , it appears that the ResNet version included in the tests is different from the ones in the onnx zoo and the onnx repo.
The version included by the tests is downloaded from here:
wget https://s3.amazonaws.com/download.onnx/models/opset_9/resnet50.tar.gz
The download location is defined in this file:
onnx-mlir/third_party/onnx/onnx/backend/test/data/real/test_resnet50/data.json
This downloaded model works ok.
But when I try to EmitONNXIR for the implementations of ResNet v1 and v2 available at the onnx repo, I get the following (different) errors:
wget https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v1-7.onnx?raw=true -O resnet50-v1.onnx
./onnx-mlir --EmitONNXIR resnet50-v1.onnx
onnx-mlir: /working_dir/llvm-project/mlir/include/mlir/IR/Types.h:308: U mlir::Type::cast() const [U = mlir::MemRefType]: Assertion `isa<U>()' failed.
Aborted (core dumped)
wget https://github.com/onnx/models/blob/master/vision/classification/resnet/model/resnet50-v2-7.onnx?raw=true -O resnet50-v2.onnx
./onnx-mlir --EmitONNXIR resnet50-v2.onnx
error: unable to infer shape of operation without shape inference interface
error: Input data tensor not ranked
error: shape inference failed
error: Input tensor(s) not ranked
error: shape inference failed
error: Shape inference failed, 3 operations couldn't be inferred
Using the following versions of onnx-mlir, llvm-project, and protobuf:
git clone https://github.com/llvm/llvm-project.git
cd llvm-project && git checkout 91671e13efbc5dbd17b832d7973401350d0a6ee6 && cd ..
git clone --recursive https://github.com/onnx/onnx-mlir.git
cd onnx-mlir && git checkout --recurse-submodules 75930ffbcf14cfbaccd8417c47c3598f56342926 && cd ..
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf && git checkout --recurse-submodules d16bf914bc5ba569d2b70376051d15f68ce4322d && cd ..
I wrote a script to get the ONNX model zoo coverage status, and I executed onnx-mlir twice with different versions of the docker image onnxmlirczar/onnx-mlir-build:x86.
New Version: 2020-10-12T21:37:40.936418611Z
Old Version: 2020-09-23T19:47:15.588547807Z
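For reference, here is a minimal sketch of what such a coverage driver can look like. This is an illustrative reconstruction, not the actual script: it assumes a local clone of onnx/models with the .onnx files pulled (e.g. via Git LFS) and an onnx-mlir binary on the PATH.

```python
# Hypothetical coverage driver: try to compile every .onnx file in the zoo
# with `onnx-mlir --EmitLib` and record which ones succeed.
import pathlib
import subprocess

MODEL_ZOO = pathlib.Path("./models")  # clone of git@github.com:onnx/models.git

results = {}
for model in sorted(MODEL_ZOO.rglob("*.onnx")):
    proc = subprocess.run(
        ["onnx-mlir", "--EmitLib", str(model)],
        capture_output=True, text=True,
    )
    results[str(model)] = (proc.returncode == 0, proc.stderr)

succeeded = sorted(m for m, (ok, _) in results.items() if ok)
print(f"{len(succeeded)} of {len(results)} onnx models compiled successfully")
```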
ONNX MLIR Compiling Target: ONNX Model Zoo
17 of 118 onnx models can be compiled by onnx-mlir successfully.
Field                    | New image (2020-10-12)         | Old image (2020-09-23)
zoo_git_url              | git@github.com:onnx/models.git | git@github.com:onnx/models.git
total_count              | 118                            | 118
success_count            | 17                             | 17
failed_count             | 101                            | 101
onnx_mlir_image_creation | 2020-10-12T21:37:40.936418611Z | 2020-09-23T19:47:15.588547807Z
successed_onnx           | [./models/vision/classification/mnist/model/mn... | [./models/vision/classification/mnist/model/mn...
ONNX Files Compiled Successfully with onnx-mlir:
Image built in 2020-10-12T21:37:40.936418611Z
'./models/vision/classification/mnist/model/mnist-7.onnx',
'./models/vision/classification/mnist/model/mnist-8.onnx',
'./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx',
'./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx',
'./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx',
'./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx',
'./models/vision/classification/shufflenet/model/shufflenet-6.onnx',
'./models/vision/classification/shufflenet/model/shufflenet-7.onnx',
'./models/vision/classification/shufflenet/model/shufflenet-8.onnx',
'./models/vision/classification/shufflenet/model/shufflenet-9.onnx',
'./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx',
'./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx',
'./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx',
'./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx',
'./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx',
'./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx',
'./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx'
Succeeded in the new version but failed in the old version:
{'./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx'}
Succeeded in the old version but failed in the new version:
{'./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx'}
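The two sets above are just set differences between the success lists of the two runs; a sketch (variable names are illustrative, not from the actual script):

```python
# succeeded_new / succeeded_old: success lists gathered with the two images.
new_success = set(succeeded_new)   # image built 2020-10-12
old_success = set(succeeded_old)   # image built 2020-09-23

fixed_in_new = new_success - old_success      # e.g. shufflenet-v2-10.onnx
regressed_in_new = old_success - new_success  # e.g. squeezenet1.1-7.onnx
```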
Failed Reason Groups
I took errors with an "error:" prefix as expected errors, and an "onnx-mlir:" prefix as an MLIR assertion failure.
Reason         | New image | Old image
Expected Error | 64        | 62
Others         | 3         | 3
mlir Failure   | 34        | 36
Failed Sources
I also categorized the errors, in a very rough way, by source.
Source                         | New image | Old image
AffineOps.cpp                  | 1         | 1
Attributes.cpp                 | 4         | 4
CHECK failed                   | 1         | 1
Casting.h                      | 2         | 2
ConstProp.cpp                  | 1         | 1
FrontendDialectHelper.cpp      | 1         | 1
FrontendDialectTransformer.cpp | 3         | 3
Shape inference failed         | 54        | 52
Types.h                        | 24        | 26
op operand must be tensor      | 10        | 10
For more details, see the attached HTML report:
ONNX_MLIR_Model_Zoo_Support 20201014.pdf
From the latest build, it seems 40 models are supported now.
Success compiled models:
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
./models/vision/classification/mnist/model/mnist-7.onnx
./models/vision/classification/mnist/model/mnist-8.onnx
./models/vision/classification/mobilenet/model/mobilenetv2-7.onnx
./models/vision/classification/resnet/model/resnet101-v1-7.onnx
./models/vision/classification/resnet/model/resnet101-v2-7.onnx
./models/vision/classification/resnet/model/resnet152-v1-7.onnx
./models/vision/classification/resnet/model/resnet152-v2-7.onnx
./models/vision/classification/resnet/model/resnet18-v1-7.onnx
./models/vision/classification/resnet/model/resnet18-v2-7.onnx
./models/vision/classification/resnet/model/resnet34-v1-7.onnx
./models/vision/classification/resnet/model/resnet34-v2-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
./models/vision/classification/resnet/model/resnet50-v1-7.onnx
./models/vision/classification/resnet/model/resnet50-v2-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-6.onnx
./models/vision/classification/shufflenet/model/shufflenet-7.onnx
./models/vision/classification/shufflenet/model/shufflenet-8.onnx
./models/vision/classification/shufflenet/model/shufflenet-9.onnx
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-3.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-6.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-7.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-8.onnx
./models/vision/classification/squeezenet/model/squeezenet1.0-9.onnx
./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx
./models/vision/classification/vgg/model/vgg16-7.onnx
./models/vision/classification/vgg/model/vgg16-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-7.onnx
./models/vision/classification/vgg/model/vgg19-bn-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx
Update:
MLIR now has global average pool, dropout for inference (some cases are failing, which we are investigating), and we are working on LRN, which should come shortly.
Update:
As of Feb 20th, 77 models can be compiled.
Model | Compilation Success
./models/text/machine_comprehension/bert-squad/model/bertsquad-10.onnx | FALSE
./models/text/machine_comprehension/bert-squad/model/bertsquad-8.onnx | FALSE
./models/text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.onnx | FALSE
./models/text/machine_comprehension/gpt-2/model/gpt2-10.onnx | FALSE
./models/text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx | FALSE
./models/text/machine_comprehension/roberta/model/roberta-base-11.onnx | FALSE
./models/text/machine_comprehension/roberta/model/roberta-sequence-classification-9.onnx | FALSE
./models/text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.onnx | FALSE
./models/text/machine_comprehension/t5/model/t5-encoder-12.onnx | FALSE
./models/vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx | TRUE
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-2.onnx | FALSE
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx | TRUE
./models/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx | TRUE
./models/vision/classification/alexnet/model/bvlcalexnet-3.onnx | FALSE
./models/vision/classification/alexnet/model/bvlcalexnet-6.onnx | TRUE
./models/vision/classification/alexnet/model/bvlcalexnet-7.onnx | TRUE
./models/vision/classification/alexnet/model/bvlcalexnet-8.onnx | TRUE
./models/vision/classification/alexnet/model/bvlcalexnet-9.onnx | TRUE
./models/vision/classification/caffenet/model/caffenet-3.onnx | TRUE
./models/vision/classification/caffenet/model/caffenet-6.onnx | FALSE
./models/vision/classification/caffenet/model/caffenet-7.onnx | TRUE
./models/vision/classification/caffenet/model/caffenet-8.onnx | TRUE
./models/vision/classification/caffenet/model/caffenet-9.onnx | TRUE
./models/vision/classification/densenet-121/model/densenet-3.onnx | FALSE
./models/vision/classification/densenet-121/model/densenet-6.onnx | TRUE
./models/vision/classification/densenet-121/model/densenet-7.onnx | TRUE
./models/vision/classification/densenet-121/model/densenet-8.onnx | TRUE
./models/vision/classification/densenet-121/model/densenet-9.onnx | TRUE
./models/vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx | TRUE
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-3.onnx | TRUE
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-6.onnx | TRUE
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-7.onnx | TRUE
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-8.onnx | TRUE
./models/vision/classification/inception_and_googlenet/googlenet/model/googlenet-9.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-3.onnx | FALSE
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-6.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-8.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-9.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-3.onnx | FALSE
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-6.onnx | FALSE
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-8.onnx | TRUE
./models/vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-9.onnx | TRUE
./models/vision/classification/mnist/model/mnist-1.onnx | FALSE
./models/vision/classification/mnist/model/mnist-7.onnx | TRUE
./models/vision/classification/mnist/model/mnist-8.onnx | TRUE
./models/vision/classification/mobilenet/model/mobilenetv2-7.onnx | TRUE
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-3.onnx | FALSE
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-6.onnx | TRUE
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-7.onnx | TRUE
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-8.onnx | TRUE
./models/vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-9.onnx | TRUE
./models/vision/classification/resnet/model/resnet101-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet101-v2-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet152-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet152-v2-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet18-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet18-v2-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet34-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet34-v2-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-caffe2-v1-3.onnx | FALSE
./models/vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-v1-7.onnx | TRUE
./models/vision/classification/resnet/model/resnet50-v2-7.onnx | TRUE
./models/vision/classification/shufflenet/model/shufflenet-3.onnx | FALSE
./models/vision/classification/shufflenet/model/shufflenet-6.onnx | TRUE
./models/vision/classification/shufflenet/model/shufflenet-7.onnx | TRUE
./models/vision/classification/shufflenet/model/shufflenet-8.onnx | TRUE
./models/vision/classification/shufflenet/model/shufflenet-9.onnx | TRUE
./models/vision/classification/shufflenet/model/shufflenet-v2-10.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.0-3.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.0-6.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.0-7.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.0-8.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.0-9.onnx | TRUE
./models/vision/classification/squeezenet/model/squeezenet1.1-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg16-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg16-bn-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-bn-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-caffe2-3.onnx | FALSE
./models/vision/classification/vgg/model/vgg19-caffe2-6.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-caffe2-7.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-caffe2-8.onnx | TRUE
./models/vision/classification/vgg/model/vgg19-caffe2-9.onnx | TRUE
./models/vision/classification/zfnet-512/model/zfnet512-3.onnx | FALSE
./models/vision/classification/zfnet-512/model/zfnet512-6.onnx | TRUE
./models/vision/classification/zfnet-512/model/zfnet512-7.onnx | TRUE
./models/vision/classification/zfnet-512/model/zfnet512-8.onnx | TRUE
./models/vision/classification/zfnet-512/model/zfnet512-9.onnx | TRUE
./models/vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx | TRUE
./models/vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx | FALSE
./models/vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-10.onnx | FALSE
./models/vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx | FALSE
./models/vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_10.onnx | FALSE
./models/vision/object_detection_segmentation/ssd/model/ssd-10.onnx | FALSE
./models/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-7.onnx | TRUE
./models/vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx | TRUE
./models/vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx | FALSE
./models/vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx | TRUE
./models/vision/object_detection_segmentation/yolov3/model/yolov3-10.onnx | FALSE
./models/vision/object_detection_segmentation/yolov4/model/yolov4.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/candy-8.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/candy-9.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/mosaic-8.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/mosaic-9.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/pointilism-8.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/pointilism-9.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/rain-princess-8.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/rain-princess-9.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/udnie-8.onnx | FALSE
./models/vision/style_transfer/fast_neural_style/model/udnie-9.onnx | FALSE
./models/vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx | TRUE
any update?
FYI, I wrote a python script to examine the current status and below is the result. Will report the status monthly.
(@AlexandreEichenberger I added error messages when compilation failed)
ONNX-MLIR supports 90 ONNX ops
['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'gru', 'hardsigmoid', 'identity', 'leakyrelu', 'less', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'min', 'mul', 'neg', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'xor']
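The per-model op sets shown further below can be gathered with the onnx Python package; a minimal sketch of the idea (the real script may differ, and this only walks the top-level graph, not the subgraphs of ops like Loop or Scan):

```python
import onnx

def ops_in_model(path):
    """Return the lower-cased set of op types used in a model's top-level graph."""
    model = onnx.load(path)
    return {node.op_type.lower() for node in model.graph.node}

def unsupported_ops(path, supported_ops):
    """Ops used by the model that are not in the supported list printed above."""
    return ops_in_model(path) - set(supported_ops)
```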
There are 128 models in the ONNX model zoo
[1] processing vision/style_transfer/fast_neural_style/model/candy-8.onnx
[2] processing vision/style_transfer/fast_neural_style/model/udnie-9.onnx
[3] processing vision/style_transfer/fast_neural_style/model/mosaic-8.onnx
[4] processing vision/style_transfer/fast_neural_style/model/mosaic-9.onnx
[5] processing vision/style_transfer/fast_neural_style/model/rain-princess-8.onnx
[6] processing vision/style_transfer/fast_neural_style/model/pointilism-9.onnx
[7] processing vision/style_transfer/fast_neural_style/model/pointilism-8.onnx
[8] processing vision/style_transfer/fast_neural_style/model/candy-9.onnx
[9] processing vision/style_transfer/fast_neural_style/model/udnie-8.onnx
[10] processing vision/style_transfer/fast_neural_style/model/rain-princess-9.onnx
[11] processing vision/object_detection_segmentation/fcn/model/fcn-resnet101-11.onnx
[12] processing vision/object_detection_segmentation/fcn/model/fcn-resnet50-11.onnx
[13] processing vision/object_detection_segmentation/yolov4/model/yolov4.onnx
[14] processing vision/object_detection_segmentation/yolov3/model/yolov3-10.onnx
[15] processing vision/object_detection_segmentation/mask-rcnn/model/MaskRCNN-10.onnx
[16] processing vision/object_detection_segmentation/retinanet/model/retinanet-9.onnx
[17] processing vision/object_detection_segmentation/faster-rcnn/model/FasterRCNN-10.onnx
[18] processing vision/object_detection_segmentation/ssd/model/ssd-10.onnx
[19] processing vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-7.onnx
[20] processing vision/object_detection_segmentation/tiny-yolov2/model/tinyyolov2-8.onnx
[21] processing vision/object_detection_segmentation/tiny-yolov3/model/tiny-yolov3-11.onnx
[22] processing vision/object_detection_segmentation/duc/model/ResNet101-DUC-7.onnx
[23] processing vision/object_detection_segmentation/ssd-mobilenetv1/model/ssd_mobilenet_v1_10.onnx
[24] processing vision/object_detection_segmentation/yolov2-coco/model/yolov2-coco-9.onnx
[25] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_age_imdb_wiki.onnx
[26] processing vision/body_analysis/age_gender/models/age_googlenet.onnx
[27] processing vision/body_analysis/age_gender/models/gender_googlenet.onnx
[28] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_gender_imdb_wiki.onnx
[29] processing vision/body_analysis/age_gender/models/vgg_ilsvrc_16_age_chalearn_iccv2015.onnx
[30] processing vision/body_analysis/arcface/model/arcfaceresnet100-8.onnx
[31] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-2.onnx
[32] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.onnx
[33] processing vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
[34] processing vision/body_analysis/ultraface/models/version-RFB-640.onnx
[35] processing vision/body_analysis/ultraface/models/version-RFB-320.onnx
[36] processing vision/classification/vgg/model/vgg19-7.onnx
[37] processing vision/classification/vgg/model/vgg19-caffe2-6.onnx
[38] processing vision/classification/vgg/model/vgg19-bn-7.onnx
[39] processing vision/classification/vgg/model/vgg19-caffe2-7.onnx
[40] processing vision/classification/vgg/model/vgg19-caffe2-3.onnx
[41] processing vision/classification/vgg/model/vgg19-caffe2-8.onnx
[42] processing vision/classification/vgg/model/vgg19-caffe2-9.onnx
[43] processing vision/classification/vgg/model/vgg16-7.onnx
[44] processing vision/classification/vgg/model/vgg16-bn-7.onnx
[45] processing vision/classification/mobilenet/model/mobilenetv2-7.onnx
[46] processing vision/classification/squeezenet/model/squeezenet1.0-3.onnx
[47] processing vision/classification/squeezenet/model/squeezenet1.0-9.onnx
[48] processing vision/classification/squeezenet/model/squeezenet1.0-7.onnx
[49] processing vision/classification/squeezenet/model/squeezenet1.0-6.onnx
[50] processing vision/classification/squeezenet/model/squeezenet1.1-7.onnx
[51] processing vision/classification/squeezenet/model/squeezenet1.0-8.onnx
[52] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-8.onnx
[53] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-3.onnx
[54] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-7.onnx
[55] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-9.onnx
[56] processing vision/classification/rcnn_ilsvrc13/model/rcnn-ilsvrc13-6.onnx
[57] processing vision/classification/caffenet/model/caffenet-7.onnx
[58] processing vision/classification/caffenet/model/caffenet-3.onnx
[59] processing vision/classification/caffenet/model/caffenet-6.onnx
[60] processing vision/classification/caffenet/model/caffenet-9.onnx
[61] processing vision/classification/caffenet/model/caffenet-8.onnx
[62] processing vision/classification/densenet-121/model/densenet-9.onnx
[63] processing vision/classification/densenet-121/model/densenet-7.onnx
[64] processing vision/classification/densenet-121/model/densenet-8.onnx
[65] processing vision/classification/densenet-121/model/densenet-3.onnx
[66] processing vision/classification/densenet-121/model/densenet-6.onnx
[67] processing vision/classification/mnist/model/mnist-7.onnx
[68] processing vision/classification/mnist/model/mnist-1.onnx
[69] processing vision/classification/mnist/model/mnist-8.onnx
[70] processing vision/classification/efficientnet-lite4/model/efficientnet-lite4-11.onnx
[71] processing vision/classification/alexnet/model/bvlcalexnet-3.onnx
[72] processing vision/classification/alexnet/model/bvlcalexnet-9.onnx
[73] processing vision/classification/alexnet/model/bvlcalexnet-8.onnx
[74] processing vision/classification/alexnet/model/bvlcalexnet-6.onnx
[75] processing vision/classification/alexnet/model/bvlcalexnet-7.onnx
[76] processing vision/classification/resnet/model/resnet34-v2-7.onnx
[77] processing vision/classification/resnet/model/resnet18-v2-7.onnx
[78] processing vision/classification/resnet/model/resnet50-caffe2-v1-8.onnx
[79] processing vision/classification/resnet/model/resnet50-v2-7.onnx
[80] processing vision/classification/resnet/model/resnet34-v1-7.onnx
[81] processing vision/classification/resnet/model/resnet101-v1-7.onnx
[82] processing vision/classification/resnet/model/resnet101-v2-7.onnx
[83] processing vision/classification/resnet/model/resnet50-v1-12-int8.onnx
[84] processing vision/classification/resnet/model/resnet50-caffe2-v1-7.onnx
[85] processing vision/classification/resnet/model/resnet50-v1-7.onnx
[86] processing vision/classification/resnet/model/resnet152-v1-7.onnx
[87] processing vision/classification/resnet/model/resnet18-v1-7.onnx
[88] processing vision/classification/resnet/model/resnet50-caffe2-v1-9.onnx
[89] processing vision/classification/resnet/model/resnet50-v1-12.onnx
[90] processing vision/classification/resnet/model/resnet50-caffe2-v1-3.onnx
[91] processing vision/classification/resnet/model/resnet152-v2-7.onnx
[92] processing vision/classification/resnet/model/resnet50-caffe2-v1-6.onnx
[93] processing vision/classification/zfnet-512/model/zfnet512-6.onnx
[94] processing vision/classification/zfnet-512/model/zfnet512-7.onnx
[95] processing vision/classification/zfnet-512/model/zfnet512-8.onnx
[96] processing vision/classification/zfnet-512/model/zfnet512-3.onnx
[97] processing vision/classification/zfnet-512/model/zfnet512-9.onnx
[98] processing vision/classification/shufflenet/model/shufflenet-6.onnx
[99] processing vision/classification/shufflenet/model/shufflenet-7.onnx
[100] processing vision/classification/shufflenet/model/shufflenet-3.onnx
[101] processing vision/classification/shufflenet/model/shufflenet-8.onnx
[102] processing vision/classification/shufflenet/model/shufflenet-v2-10.onnx
[103] processing vision/classification/shufflenet/model/shufflenet-9.onnx
[104] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-9.onnx
[105] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-6.onnx
[106] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-7.onnx
[107] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-8.onnx
[108] processing vision/classification/inception_and_googlenet/googlenet/model/googlenet-3.onnx
[109] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-7.onnx
[110] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-9.onnx
[111] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-6.onnx
[112] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-8.onnx
[113] processing vision/classification/inception_and_googlenet/inception_v1/model/inception-v1-3.onnx
[114] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-8.onnx
[115] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-9.onnx
[116] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-6.onnx
[117] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-7.onnx
[118] processing vision/classification/inception_and_googlenet/inception_v2/model/inception-v2-3.onnx
[119] processing vision/super_resolution/sub_pixel_cnn_2016/model/super-resolution-10.onnx
[120] processing text/machine_comprehension/t5/model/t5-decoder-with-lm-head-12.onnx
[121] processing text/machine_comprehension/t5/model/t5-encoder-12.onnx
[122] processing text/machine_comprehension/roberta/model/roberta-base-11.onnx
[123] processing text/machine_comprehension/roberta/model/roberta-sequence-classification-9.onnx
[124] processing text/machine_comprehension/bidirectional_attention_flow/model/bidaf-9.onnx
[125] processing text/machine_comprehension/gpt-2/model/gpt2-lm-head-10.onnx
[126] processing text/machine_comprehension/gpt-2/model/gpt2-10.onnx
[127] processing text/machine_comprehension/bert-squad/model/bertsquad-10.onnx
[128] processing text/machine_comprehension/bert-squad/model/bertsquad-8.onnx
ONNX models and their ops (each entry below lists: the ONNX model, the ops in the model, the ops not supported in onnx-mlir, and whether it is compilable with onnx-mlir)
age_googlenet.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
arcfaceresnet100-8.onnx
{'flatten', 'add', 'identity', 'mul', 'sub', 'reshape', 'dropout', 'batchnormalization', 'prelu', 'gemm', 'conv'}
{}
succeeded
bertsquad-10.onnx
{'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'reciprocal', 'onehot', 'unsqueeze', 'softmax', 'constantofshape', 'pow', 'identity', 'split', 'reducemean', 'mul', 'reshape', 'slice'}
{'onehot'}
error: onnx.OneHot: inferShapes() not implemented; error: shape inference failed
bertsquad-8.onnx
{'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'reciprocal', 'unsqueeze', 'pow', 'tile', 'identity', 'split', 'reducemean', 'mul', 'reshape', 'slice'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/lib/IR/AttributeDetail.h:115: static mlir::detail::DenseIntOrFPElementsAttrStorage::KeyTy mlir::detail::DenseIntOrFPElementsAttrStorage::getKey(mlir::ShapedType, llvm::ArrayRef, bool): Assertion `numElements != 1 && "splat of 1 element should already be detected"' failed.
bidaf-9.onnx
{'sub', 'squeeze', 'log', 'gather', 'shape', 'transpose', 'concat', 'clip', 'cast', 'add', 'compress', 'categorymapper', 'relu', 'dropout', 'matmul', 'hardmax', 'softmax', 'unsqueeze', 'argmax', 'constantofshape', 'sum', 'scan', 'abs', 'conv', 'mul', 'reshape', 'lstm', 'sigmoid', 'ceil', 'slice', 'reducemax', 'reducesum'}
{'compress', 'hardmax', 'categorymapper'}
onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:126: void onnx_mlir::SymbolMapping::AddMapping(const string&, T) [with T = onnx::TypeProto; std::__cxx11::string = std::__cxx11::basic_string]: Assertion `!_scopes.back().contain(name) && "Tensor already exists."' failed.
bvlcalexnet-3.onnx
{'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
bvlcalexnet-6.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
bvlcalexnet-7.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
bvlcalexnet-8.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
bvlcalexnet-9.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
caffenet-3.onnx
{'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
caffenet-6.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
caffenet-7.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
caffenet-8.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
caffenet-9.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
candy-8.onnx
{'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
candy-9.onnx
{'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
densenet-3.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'conv', 'globalaveragepool', 'concat'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
densenet-6.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
densenet-7.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
densenet-8.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
densenet-9.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'averagepool', 'unsqueeze', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
efficientnet-lite4-11.onnx
{'clip', 'add', 'squeeze', 'matmul', 'batchnormalization', 'softmax', 'averagepool', 'conv', 'transpose'}
{}
succeeded
emotion-ferplus-2.onnx
{'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv', 'constant'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
emotion-ferplus-7.onnx
{'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv'}
{}
succeeded
emotion-ferplus-8.onnx
{'add', 'sub', 'reshape', 'relu', 'dropout', 'matmul', 'maxpool', 'div', 'conv'}
{}
succeeded
fasterrcnn-10.onnx
{'topk', 'sqrt', 'sub', 'squeeze', 'log', 'roialign', 'gather', 'resize', 'shape', 'scatter', 'transpose', 'concat', 'cast', 'clip', 'add', 'greater', 'relu', 'softmax', 'unsqueeze', 'gemm', 'constant', 'exp', 'reducemin', 'constantofshape', 'nonzero', 'equal', 'conv', 'flatten', 'expand', 'mul', 'reshape', 'floor', 'maxpool', 'sigmoid', 'slice', 'div', 'nonmaxsuppression'}
{'topk', 'greater', 'expand', 'roialign', 'nonzero', 'equal', 'scatter', 'nonmaxsuppression'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
fcn-resnet101-11.onnx
{'cast', 'add', 'relu', 'maxpool', 'shape', 'gather', 'slice', 'unsqueeze', 'resize', 'conv', 'constant', 'concat'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet; error: shape inference failed
fcn-resnet50-11.onnx
{'cast', 'add', 'relu', 'maxpool', 'shape', 'gather', 'slice', 'unsqueeze', 'resize', 'conv', 'constant', 'concat'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet; error: shape inference failed
gender_googlenet.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
googlenet-3.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
googlenet-6.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
googlenet-7.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
googlenet-8.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
googlenet-9.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
gpt2-10.onnx
{'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'split', 'nonzero', 'reducemean', 'mul', 'reshape', 'slice', 'div'}
{'nonzero'}
error: onnx.NonZero: inferShapes() not implemented; error: shape inference failed
gpt2-lm-head-10.onnx
{'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'where', 'split', 'nonzero', 'reducemean', 'mul', 'reshape', 'slice', 'div'}
{'where', 'nonzero'}
error: onnx.NonZero: inferShapes() not implemented; error: shape inference failed
inception-v1-3.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
inception-v1-6.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
inception-v1-7.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
inception-v1-8.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
inception-v1-9.onnx
{'relu', 'dropout', 'reshape', 'lrn', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
succeeded
inception-v2-3.onnx
{'add', 'mul', 'relu', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
inception-v2-6.onnx
{'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'conv', 'concat'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
inception-v2-7.onnx
{'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'}
{}
succeeded
inception-v2-8.onnx
{'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'}
{}
succeeded
inception-v2-9.onnx
{'add', 'mul', 'relu', 'reshape', 'batchnormalization', 'maxpool', 'softmax', 'gemm', 'averagepool', 'unsqueeze', 'conv', 'concat'}
{}
succeeded
maskrcnn-10.onnx
{'topk', 'sqrt', 'sub', 'squeeze', 'log', 'not', 'less', 'gather', 'roialign', 'resize', 'shape', 'scatter', 'transpose', 'concat', 'cast', 'clip', 'add', 'greater', 'relu', 'softmax', 'unsqueeze', 'gemm', 'constant', 'and', 'exp', 'reducemin', 'convtranspose', 'constantofshape', 'split', 'nonzero', 'equal', 'conv', 'flatten', 'expand', 'mul', 'reshape', 'floor', 'maxpool', 'sigmoid', 'slice', 'div', 'nonmaxsuppression'}
{'topk', 'greater', 'expand', 'not', 'roialign', 'nonzero', 'equal', 'convtranspose', 'scatter', 'nonmaxsuppression'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
mnist-1.onnx
{'add', 'reshape', 'relu', 'matmul', 'maxpool', 'div', 'conv', 'constant'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
mnist-7.onnx
{'add', 'relu', 'reshape', 'matmul', 'maxpool', 'conv'}
{}
succeeded
mnist-8.onnx
{'add', 'relu', 'reshape', 'matmul', 'maxpool', 'conv'}
{}
succeeded
mobilenetv2-7.onnx
{'clip', 'add', 'reshape', 'constant', 'gather', 'gemm', 'unsqueeze', 'conv', 'shape', 'globalaveragepool', 'concat'}
{}
error: Expected positive number of original loops.
mosaic-8.onnx
{'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
mosaic-9.onnx
{'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
pointilism-8.onnx
{'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
pointilism-9.onnx
{'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
rain-princess-8.onnx
{'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
rain-princess-9.onnx
{'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
rcnn-ilsvrc13-3.onnx
{'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
rcnn-ilsvrc13-6.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'}
{}
succeeded
rcnn-ilsvrc13-7.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'}
{}
succeeded
rcnn-ilsvrc13-8.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'}
{}
succeeded
rcnn-ilsvrc13-9.onnx
{'reshape', 'relu', 'dropout', 'lrn', 'maxpool', 'gemm', 'conv'}
{}
succeeded
resnet101-duc-7.onnx
{'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'conv'}
{}
succeeded
resnet101-v1-7.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet101-v2-7.onnx
{'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet152-v1-7.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet152-v2-7.onnx
{'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet18-v1-7.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet18-v2-7.onnx
{'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet34-v1-7.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet34-v2-7.onnx
{'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet50-caffe2-v1-3.onnx
{'relu', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
resnet50-caffe2-v1-6.onnx
{'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'}
{}
succeeded
resnet50-caffe2-v1-7.onnx
{'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'}
{}
succeeded
resnet50-caffe2-v1-8.onnx
{'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'}
{}
succeeded
resnet50-caffe2-v1-9.onnx
{'relu', 'reshape', 'maxpool', 'sum', 'batchnormalization', 'softmax', 'gemm', 'averagepool', 'conv'}
{}
succeeded
resnet50-v1-12-int8.onnx
{'flatten', 'qlinearglobalaveragepool', 'maxpool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul'}
{'qlinearglobalaveragepool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'qlinearmatmul'}
error: not ranked (the same error repeated dozens of times)
resnet50-v1-12.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet50-v1-7.onnx
{'flatten', 'add', 'relu', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
resnet50-v2-7.onnx
{'add', 'relu', 'reshape', 'maxpool', 'batchnormalization', 'gemm', 'conv', 'globalaveragepool'}
{}
succeeded
retinanet-9.onnx
{'add', 'relu', 'maxpool', 'batchnormalization', 'sigmoid', 'upsample', 'conv'}
{'upsample'}
error: onnx.Upsample: inferShapes() not implemented; error: shape inference failed
roberta-base-11.onnx
{'sqrt', 'sub', 'cumsum', 'not', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'erf', 'equal', 'reducemean', 'mul', 'reshape', 'div'}
{'cumsum', 'equal', 'not'}
error: onnx.Equal: inferShapes() not implemented; error: shape inference failed
roberta-sequence-classification-9.onnx
{'sqrt', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'matmul', 'tanh', 'softmax', 'unsqueeze', 'gemm', 'constant', 'constantofshape', 'pow', 'erf', 'nonzero', 'reducemean', 'expand', 'mul', 'reshape', 'div'}
{'nonzero', 'expand'}
error: onnx.NonZero: inferShapes() not implemented; error: shape inference failed
shufflenet-3.onnx
{'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
shufflenet-6.onnx
{'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'}
{}
succeeded
shufflenet-7.onnx
{'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'}
{}
succeeded
shufflenet-8.onnx
{'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'}
{}
succeeded
shufflenet-9.onnx
{'reshape', 'relu', 'maxpool', 'batchnormalization', 'sum', 'softmax', 'gemm', 'averagepool', 'conv', 'transpose', 'concat'}
{}
succeeded
shufflenet-v2-10.onnx
{'relu', 'reshape', 'reducemean', 'maxpool', 'batchnormalization', 'split', 'gemm', 'conv', 'constant', 'transpose', 'concat'}
{}
succeeded
squeezenet1.0-3.onnx
{'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
squeezenet1.0-6.onnx
{'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
squeezenet1.0-7.onnx
{'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
squeezenet1.0-8.onnx
{'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
squeezenet1.0-9.onnx
{'relu', 'dropout', 'maxpool', 'softmax', 'conv', 'globalaveragepool', 'concat'}
{}
succeeded
squeezenet1.1-7.onnx
{'relu', 'dropout', 'reshape', 'maxpool', 'averagepool', 'conv', 'concat'}
{}
succeeded
ssd-10.onnx
{'topk', 'sub', 'squeeze', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'relu', 'batchnormalization', 'softmax', 'unsqueeze', 'constant', 'exp', 'reducemin', 'constantofshape', 'conv', 'mul', 'reshape', 'maxpool', 'slice', 'nonmaxsuppression'}
{'topk', 'nonmaxsuppression'}
error: onnx.NonMaxSuppression: inferShapes() not implemented; error: shape inference failed
ssd_mobilenet_v1_10.onnx
{'sub', 'squeeze', 'less', 'gather', 'shape', 'loop', 'concat', 'transpose', 'cast', 'clip', 'add', 'unsqueeze', 'exp', 'constantofshape', 'tile', 'split', 'conv', 'mul', 'reshape', 'sigmoid', 'slice', 'div', 'min'}
{}
error: scales() and sizes() can not both None/not None; error: shape inference failed; error: onnx.Equal: inferShapes() not implemented; error: shape inference failed; onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:245: U mlir::Type::cast() const [with U = mlir::MemRefType]: Assertion `isa()' failed.
super-resolution-10.onnx
{'reshape', 'relu', 'conv', 'constant', 'transpose'}
{}
succeeded
t5-decoder-with-lm-head-12.onnx
{'range', 'sqrt', 'sub', 'log', 'less', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'max', 'relu', 'matmul', 'softmax', 'unsqueeze', 'constant', 'constantofshape', 'pow', 'lessorequal', 'tile', 'neg', 'where', 'reducemean', 'mul', 'reshape', 'div', 'min'}
{'where', 'lessorequal'}
error: onnx.LessOrEqual: inferShapes() not implemented; error: shape inference failed
t5-encoder-12.onnx
{'sqrt', 'range', 'sub', 'log', 'less', 'gather', 'shape', 'transpose', 'concat', 'cast', 'add', 'relu', 'matmul', 'softmax', 'unsqueeze', 'constant', 'constantofshape', 'pow', 'neg', 'where', 'abs', 'reducemean', 'mul', 'reshape', 'div', 'min'}
{'where'}
error: onnx.Where: inferShapes() not implemented; error: shape inference failed
tiny-yolov3-11.onnx
{'sub', 'squeeze', 'leakyrelu', 'resize', 'shape', 'transpose', 'concat', 'loop', 'cast', 'add', 'batchnormalization', 'unsqueeze', 'exp', 'reducemin', 'tile', 'identity', 'round', 'conv', 'mul', 'reshape', 'maxpool', 'sigmoid', 'ceil', 'slice', 'div', 'nonmaxsuppression'}
{'round', 'nonmaxsuppression'}
error: onnx.Round: inferShapes() not implemented; error: shape inference failed
tinyyolov2-7.onnx
{'add', 'mul', 'batchnormalization', 'maxpool', 'leakyrelu', 'conv'}
{}
succeeded
tinyyolov2-8.onnx
{'add', 'mul', 'batchnormalization', 'maxpool', 'leakyrelu', 'conv'}
{}
succeeded
udnie-8.onnx
{'add', 'relu', 'instancenormalization', 'upsample', 'pad', 'conv'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
udnie-9.onnx
{'cast', 'add', 'mul', 'relu', 'floor', 'instancenormalization', 'shape', 'gather', 'upsample', 'slice', 'unsqueeze', 'div', 'pad', 'constant', 'conv', 'concat'}
{'instancenormalization', 'upsample'}
error: 'onnx.Pad' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
version-rfb-320.onnx
{'add', 'sub', 'mul', 'relu', 'reshape', 'batchnormalization', 'shape', 'gather', 'slice', 'softmax', 'unsqueeze', 'div', 'conv', 'constant', 'exp', 'transpose', 'concat'}
{}
error: Expected positive number of original loops. (repeated 8 times)
version-rfb-640.onnx
{'add', 'sub', 'mul', 'relu', 'reshape', 'constant', 'batchnormalization', 'gather', 'slice', 'softmax', 'unsqueeze', 'div', 'conv', 'shape', 'exp', 'transpose', 'concat'}
{}
error: Expected positive number of original loops. (repeated 8 times)
vgg16-7.onnx
{'flatten', 'relu', 'dropout', 'maxpool', 'gemm', 'conv'}
{}
succeeded
vgg16-bn-7.onnx
{'flatten', 'relu', 'dropout', 'maxpool', 'batchnormalization', 'gemm', 'conv'}
{}
succeeded
vgg19-7.onnx
{'flatten', 'relu', 'dropout', 'maxpool', 'gemm', 'conv'}
{}
succeeded
vgg19-bn-7.onnx
{'flatten', 'relu', 'dropout', 'maxpool', 'batchnormalization', 'gemm', 'conv'}
{}
succeeded
vgg19-caffe2-3.onnx
{'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
error: Gemm with A should be a 2D tensorerror: Failed to scan onnx.Gemm parameters successfullyerror: shape inference failed
vgg19-caffe2-6.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg19-caffe2-7.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg19-caffe2-8.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg19-caffe2-9.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg_ilsvrc_16_age_imdb_wiki.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
vgg_ilsvrc_16_gender_imdb_wiki.onnx
{'reshape', 'relu', 'dropout', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
yolov2-coco-9.onnx
{'reshape', 'maxpool', 'batchnormalization', 'leakyrelu', 'conv', 'constant', 'transpose', 'concat'}
{}
succeeded
yolov3-10.onnx
{'sub', 'squeeze', 'gather', 'leakyrelu', 'resize', 'shape', 'loop', 'transpose', 'concat', 'cast', 'add', 'batchnormalization', 'unsqueeze', 'exp', 'reducemin', 'tile', 'conv', 'mul', 'reshape', 'sigmoid', 'ceil', 'slice', 'div', 'nonmaxsuppression'}
{'nonmaxsuppression'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
yolov4.onnx
{'cast', 'add', 'mul', 'reshape', 'log', 'maxpool', 'sigmoid', 'tanh', 'gather', 'split', 'slice', 'leakyrelu', 'resize', 'conv', 'shape', 'exp', 'transpose', 'concat'}
{}
succeeded
zfnet512-3.onnx
{'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
error: Gemm with A should be a 2D tensorerror: Failed to scan onnx.Gemm parameters successfullyerror: shape inference failed
zfnet512-6.onnx
{'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
zfnet512-7.onnx
{'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
zfnet512-8.onnx
{'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
zfnet512-9.onnx
{'reshape', 'relu', 'lrn', 'maxpool', 'gemm', 'softmax', 'conv'}
{}
succeeded
Looks like ONNX-MLIR supports 0 models, while 83 models can actually be compiled.
Count the number of models in which an op is used (sorted in the decreasing order):
| Operator name | Count | Supported in onnx-mlir |
| --- | --- | --- |
| conv | 119 | supported |
| relu | 111 | supported |
| maxpool | 101 | supported |
| reshape | 84 | supported |
| gemm | 79 | supported |
| softmax | 71 | supported |
| add | 63 | supported |
| concat | 61 | supported |
| dropout | 50 | supported |
| batchnormalization | 46 | supported |
| mul | 36 | supported |
| averagepool | 34 | supported |
| unsqueeze | 32 | supported |
| lrn | 32 | supported |
| transpose | 27 | supported |
| shape | 26 | supported |
| gather | 25 | supported |
| constant | 24 | supported |
| cast | 23 | supported |
| globalaveragepool | 22 | supported |
| div | 22 | supported |
| sub | 21 | supported |
| slice | 21 | supported |
| matmul | 16 | supported |
| flatten | 14 | supported |
| squeeze | 13 | supported |
| constantofshape | 12 | supported |
| sum | 12 | supported |
| upsample | 11 | not supported |
| sqrt | 10 | supported |
| instancenormalization | 10 | not supported |
| pad | 10 | supported |
| reducemean | 9 | supported |
| exp | 9 | supported |
| pow | 8 | supported |
| split | 8 | supported |
| sigmoid | 8 | supported |
| resize | 7 | supported |
| floor | 7 | supported |
| tanh | 7 | supported |
| log | 6 | supported |
| leakyrelu | 6 | supported |
| clip | 6 | supported |
| reducemin | 5 | supported |
| tile | 5 | supported |
| nonzero | 5 | not supported |
| nonmaxsuppression | 5 | not supported |
| identity | 4 | supported |
| less | 4 | supported |
| topk | 3 | not supported |
| loop | 3 | supported |
| equal | 3 | not supported |
| expand | 3 | not supported |
| where | 3 | not supported |
| ceil | 3 | supported |
| min | 3 | supported |
| roialign | 2 | not supported |
| reciprocal | 2 | supported |
| neg | 2 | supported |
| erf | 2 | supported |
| abs | 2 | supported |
| range | 2 | supported |
| not | 2 | not supported |
| scatter | 2 | not supported |
| greater | 2 | not supported |
| cumsum | 1 | not supported |
| max | 1 | supported |
| categorymapper | 1 | not supported |
| onehot | 1 | not supported |
| and | 1 | supported |
| qlinearconv | 1 | not supported |
| argmax | 1 | supported |
| lessorequal | 1 | not supported |
| qlinearglobalaveragepool | 1 | not supported |
| round | 1 | not supported |
| prelu | 1 | supported |
| scan | 1 | supported |
| lstm | 1 | supported |
| quantizelinear | 1 | not supported |
| reducemax | 1 | supported |
| qlinearadd | 1 | not supported |
| compress | 1 | not supported |
| dequantizelinear | 1 | not supported |
| hardmax | 1 | not supported |
| convtranspose | 1 | not supported |
| qlinearmatmul | 1 | not supported |
| reducesum | 1 | supported |
Big thanks @tungld
Just checked some old models to see why Gemm failed. These models actually seem incorrect: for example, the output of MaxPooling (a 4D tensor) was passed to Gemm, which supports only 2D inputs, so Gemm failed.
Looking at onnx/models, these old models will be removed by this PR: https://github.com/onnx/models/pull/389.
So we probably don't need to care about these old models.
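To make the rank mismatch concrete, here is a minimal NumPy sketch (all shapes below are hypothetical, chosen only for illustration):

```python
# Minimal sketch of the Gemm rank issue; all shapes here are hypothetical.
import numpy as np

pool_out = np.random.rand(1, 256, 6, 6).astype(np.float32)  # 4D MaxPool output
w = np.random.rand(256 * 6 * 6, 1000).astype(np.float32)    # Gemm input B
b = np.zeros(1000, dtype=np.float32)                        # Gemm input C

# Gemm computes Y = alpha * A @ B + beta * C and requires A to be 2D,
# so the 4D pooling output must be flattened first: this is the Reshape
# that the old opset-3 graphs are missing and newer exporters insert.
a = pool_out.reshape(pool_out.shape[0], -1)
y = 1.0 * a @ w + 1.0 * b
print(y.shape)  # (1, 1000)
```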
New update: 101 models can be compiled now (it was 83 in the previous update). Of the 17 models that failed to compile, 12 are deprecated (using opset <= 3).
ONNX-MLIR supports 102 ONNX ops
['abs', 'acos', 'acosh', 'add', 'and', 'argmax', 'asin', 'asinh', 'atan', 'atanh', 'averagepool', 'batchnormalization', 'cast', 'ceil', 'clip', 'concat', 'constant', 'constantofshape', 'conv', 'cos', 'div', 'dropout', 'elu', 'equal', 'erf', 'exp', 'flatten', 'floor', 'gather', 'gemm', 'globalaveragepool', 'globalmaxpool', 'greater', 'greaterorequal', 'gru', 'hardsigmoid', 'identity', 'instancenormalization', 'leakyrelu', 'less', 'lessorequal', 'log', 'logsoftmax', 'loop', 'lrn', 'lstm', 'matmul', 'max', 'maxpool', 'mean', 'min', 'mod', 'mul', 'neg', 'nonzero', 'not', 'or', 'pad', 'pow', 'prelu', 'range', 'reciprocal', 'reducel1', 'reducel2', 'reducelogsum', 'reducelogsumexp', 'reducemax', 'reducemean', 'reducemin', 'reduceprod', 'reducesum', 'reducesumsquare', 'relu', 'reshape', 'resize', 'rnn', 'round', 'scan', 'selu', 'shape', 'sigmoid', 'sign', 'sin', 'sinh', 'size', 'slice', 'softmax', 'softplus', 'softsign', 'split', 'sqrt', 'squeeze', 'sub', 'sum', 'tan', 'tanh', 'tile', 'transpose', 'unsqueeze', 'upsample', 'where', 'xor']
There are 128 models in the ONNX model zoo, of which 12 are deprecated (using very old opsets, e.g. <= 3)
See https://github.com/onnx/models/pull/389 for a list of deprecated models
ONNX models and their ops
ONNX model
Ops in the model
Ops not supported in onnx-mlir
Compilable with onnx-mlir
age_googlenet.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
arcfaceresnet100-8.onnx
{'prelu', 'conv', 'flatten', 'reshape', 'identity', 'mul', 'batchnormalization', 'sub', 'gemm', 'dropout', 'add'}
{}
succeeded
bertsquad-10.onnx
{'unsqueeze', 'split', 'constantofshape', 'onehot', 'sub', 'softmax', 'matmul', 'identity', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'reciprocal', 'add'}
{'onehot'}
error: onnx.OneHot: inferShapes() not implemented; error: shape inference failed
bertsquad-8.onnx ['--repeatOnnxTransform=1']
{'unsqueeze', 'split', 'sub', 'tile', 'softmax', 'matmul', 'identity', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'reciprocal', 'add'}
{}
succeeded
bidaf-9.onnx
{'unsqueeze', 'constantofshape', 'compress', 'sigmoid', 'sub', 'add', 'categorymapper', 'sum', 'softmax', 'matmul', 'mul', 'dropout', 'reducemax', 'gather', 'transpose', 'shape', 'hardmax', 'reshape', 'reducesum', 'squeeze', 'relu', 'scan', 'clip', 'abs', 'conv', 'concat', 'slice', 'argmax', 'cast', 'log', 'ceil', 'lstm'}
{'hardmax', 'categorymapper', 'compress'}
onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:126: void onnx_mlir::SymbolMapping::AddMapping(const string&, T) [with T = onnx::TypeProto; std::__cxx11::string = std::__cxx11::basic_string]: Assertion `!_scopes.back().contain(name) && "Tensor already exists."' failed.
bvlcalexnet-3.onnx (deprecated)
{'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
bvlcalexnet-6.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
bvlcalexnet-7.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
bvlcalexnet-8.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
bvlcalexnet-9.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
caffenet-3.onnx (deprecated)
{'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
caffenet-6.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
caffenet-7.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
caffenet-8.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
caffenet-9.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
candy-8.onnx
{'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'}
{}
succeeded
candy-9.onnx
{'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'}
{}
succeeded
densenet-3.onnx (deprecated)
{'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
densenet-6.onnx
{'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'}
{}
succeeded
densenet-7.onnx
{'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'}
{}
succeeded
densenet-8.onnx
{'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'}
{}
succeeded
densenet-9.onnx
{'unsqueeze', 'averagepool', 'conv', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'globalaveragepool', 'add'}
{}
succeeded
efficientnet-lite4-11.onnx
{'clip', 'averagepool', 'transpose', 'conv', 'softmax', 'squeeze', 'matmul', 'batchnormalization', 'add'}
{}
succeeded
emotion-ferplus-2.onnx (deprecated)
{'conv', 'constant', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
emotion-ferplus-7.onnx
{'conv', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'}
{}
succeeded
emotion-ferplus-8.onnx
{'conv', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'sub', 'dropout', 'add'}
{}
succeeded
fasterrcnn-10.onnx
{'unsqueeze', 'expand', 'constantofshape', 'constant', 'div', 'sigmoid', 'sub', 'roialign', 'exp', 'nonmaxsuppression', 'softmax', 'maxpool', 'mul', 'topk', 'equal', 'floor', 'gather', 'transpose', 'shape', 'flatten', 'reshape', 'squeeze', 'relu', 'sqrt', 'clip', 'scatter', 'conv', 'greater', 'concat', 'slice', 'cast', 'reducemin', 'log', 'gemm', 'resize', 'add', 'nonzero'}
{'expand', 'scatter', 'nonmaxsuppression', 'roialign', 'topk'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
fcn-resnet101-11.onnx
{'unsqueeze', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'maxpool', 'cast', 'relu', 'resize', 'add'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet; error: shape inference failed
fcn-resnet50-11.onnx
{'unsqueeze', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'maxpool', 'cast', 'relu', 'resize', 'add'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet; error: shape inference failed
gender_googlenet.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
googlenet-3.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
googlenet-6.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
googlenet-7.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
googlenet-8.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
googlenet-9.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
gpt2-10.onnx ['--repeatOnnxTransform=1']
{'unsqueeze', 'split', 'constantofshape', 'constant', 'div', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'gemm', 'add', 'nonzero'}
{}
succeeded
gpt2-lm-head-10.onnx ['--repeatOnnxTransform=1']
{'unsqueeze', 'split', 'where', 'constantofshape', 'constant', 'div', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reshape', 'reducemean', 'squeeze', 'sqrt', 'tanh', 'concat', 'slice', 'cast', 'gemm', 'add', 'nonzero'}
{}
loc("onnx.Cast"): error: 'std.trunci' op operand #0 must be signless-integer-like, but got 'ui8'
inception-v1-3.onnx (deprecated)
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
inception-v1-6.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
inception-v1-7.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
inception-v1-8.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
inception-v1-9.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
inception-v2-3.onnx (deprecated)
{'averagepool', 'conv', 'softmax', 'concat', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
inception-v2-6.onnx
{'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:229: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
inception-v2-7.onnx
{'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'}
{}
succeeded
inception-v2-8.onnx
{'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'}
{}
succeeded
inception-v2-9.onnx
{'unsqueeze', 'averagepool', 'conv', 'softmax', 'concat', 'reshape', 'maxpool', 'relu', 'mul', 'batchnormalization', 'gemm', 'add'}
{}
succeeded
maskrcnn-10.onnx
{'unsqueeze', 'expand', 'split', 'constantofshape', 'constant', 'div', 'sigmoid', 'and', 'sub', 'roialign', 'exp', 'nonmaxsuppression', 'softmax', 'less', 'maxpool', 'mul', 'topk', 'equal', 'floor', 'gather', 'transpose', 'shape', 'flatten', 'reshape', 'convtranspose', 'squeeze', 'relu', 'sqrt', 'clip', 'not', 'scatter', 'conv', 'greater', 'concat', 'slice', 'cast', 'reducemin', 'log', 'gemm', 'resize', 'add', 'nonzero'}
{'expand', 'scatter', 'nonmaxsuppression', 'roialign', 'convtranspose', 'topk'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
mnist-1.onnx (deprecated)
{'conv', 'constant', 'reshape', 'maxpool', 'div', 'matmul', 'relu', 'add'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
mnist-7.onnx
{'conv', 'reshape', 'maxpool', 'matmul', 'relu', 'add'}
{}
succeeded
mnist-8.onnx
{'conv', 'reshape', 'maxpool', 'matmul', 'relu', 'add'}
{}
succeeded
mobilenetv2-7.onnx
{'unsqueeze', 'clip', 'gather', 'conv', 'shape', 'concat', 'constant', 'reshape', 'gemm', 'globalaveragepool', 'add'}
{}
succeeded
mosaic-8.onnx
{'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'}
{}
succeeded
mosaic-9.onnx
{'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'}
{}
succeeded
pointilism-8.onnx
{'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'}
{}
succeeded
pointilism-9.onnx
{'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'}
{}
succeeded
rain-princess-8.onnx
{'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'}
{}
succeeded
rain-princess-9.onnx
{'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'}
{}
succeeded
rcnn-ilsvrc13-3.onnx (deprecated)
{'conv', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
rcnn-ilsvrc13-6.onnx
{'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
rcnn-ilsvrc13-7.onnx
{'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
rcnn-ilsvrc13-8.onnx
{'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
rcnn-ilsvrc13-9.onnx
{'conv', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm', 'dropout'}
{}
succeeded
resnet101-duc-7.onnx
{'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization'}
{}
succeeded
resnet101-v1-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet101-v2-7.onnx
{'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet152-v1-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet152-v2-7.onnx
{'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet18-v1-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet18-v2-7.onnx
{'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet34-v1-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet34-v2-7.onnx
{'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet50-caffe2-v1-3.onnx (deprecated)
{'averagepool', 'sum', 'conv', 'softmax', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
resnet50-caffe2-v1-6.onnx
{'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-7.onnx
{'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-8.onnx
{'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-9.onnx
{'averagepool', 'sum', 'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
resnet50-v1-12-int8.onnx
{'qlinearglobalaveragepool', 'qlinearmatmul', 'qlinearadd', 'flatten', 'qlinearconv', 'maxpool', 'dequantizelinear', 'quantizelinear'}
{'qlinearglobalaveragepool', 'qlinearmatmul', 'qlinearadd', 'qlinearconv', 'dequantizelinear', 'quantizelinear'}
error: not ranked (same message repeated ~70 times)
resnet50-v1-12.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet50-v1-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
resnet50-v2-7.onnx
{'conv', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'globalaveragepool', 'gemm', 'add'}
{}
succeeded
retinanet-9.onnx
{'conv', 'maxpool', 'upsample', 'sigmoid', 'relu', 'batchnormalization', 'add'}
{}
succeeded
roberta-base-11.onnx
{'unsqueeze', 'constantofshape', 'constant', 'div', 'erf', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'equal', 'gather', 'transpose', 'shape', 'reducemean', 'reshape', 'sqrt', 'tanh', 'not', 'concat', 'cumsum', 'cast', 'gemm', 'add'}
{'cumsum'}
error: onnx.CumSum: inferShapes() not implemented; error: shape inference failed
roberta-sequence-classification-9.onnx
{'unsqueeze', 'expand', 'constantofshape', 'constant', 'div', 'erf', 'sub', 'softmax', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'reducemean', 'reshape', 'squeeze', 'sqrt', 'tanh', 'concat', 'cast', 'gemm', 'add', 'nonzero'}
{'expand'}
error: not ranked (same message repeated ~36 times)
shufflenet-3.onnx (deprecated)
{'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
error: 'onnx.Reshape' op operand #1 must be tensor of 64-bit signless integer values or memref of any type values, but got 'none'
shufflenet-6.onnx
{'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
shufflenet-7.onnx
{'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
shufflenet-8.onnx
{'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
shufflenet-9.onnx
{'averagepool', 'transpose', 'conv', 'sum', 'concat', 'softmax', 'reshape', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
shufflenet-v2-10.onnx
{'split', 'transpose', 'conv', 'concat', 'constant', 'reshape', 'reducemean', 'maxpool', 'relu', 'batchnormalization', 'gemm'}
{}
succeeded
squeezenet1.0-3.onnx
{'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'}
{}
succeeded
squeezenet1.0-6.onnx
{'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'}
{}
succeeded
squeezenet1.0-7.onnx
{'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'}
{}
succeeded
squeezenet1.0-8.onnx
{'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'}
{}
succeeded
squeezenet1.0-9.onnx
{'conv', 'softmax', 'concat', 'maxpool', 'relu', 'dropout', 'globalaveragepool'}
{}
succeeded
squeezenet1.1-7.onnx
{'averagepool', 'conv', 'concat', 'reshape', 'maxpool', 'relu', 'dropout'}
{}
succeeded
ssd-10.onnx
{'unsqueeze', 'constantofshape', 'constant', 'batchnormalization', 'sub', 'exp', 'softmax', 'nonmaxsuppression', 'maxpool', 'mul', 'topk', 'gather', 'transpose', 'shape', 'reshape', 'squeeze', 'relu', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'add'}
{'nonmaxsuppression', 'topk'}
error: onnx.NonMaxSuppression: inferShapes() not implemented; error: shape inference failed
ssd_mobilenet_v1_10.onnx
{'unsqueeze', 'split', 'constantofshape', 'div', 'sigmoid', 'sub', 'min', 'tile', 'loop', 'exp', 'less', 'mul', 'gather', 'transpose', 'shape', 'reshape', 'squeeze', 'clip', 'conv', 'concat', 'slice', 'cast', 'add'}
{}
error: scales() and sizes() can not both None/not None; error: shape inference failed; error: onnx.NonMaxSuppression: inferShapes() not implemented; error: shape inference failed; onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:245: U mlir::Type::cast() const [with U = mlir::MemRefType]: Assertion `isa<U>()' failed.
super-resolution-10.onnx
{'transpose', 'conv', 'constant', 'reshape', 'relu'}
{}
succeeded
t5-decoder-with-lm-head-12.onnx
{'unsqueeze', 'where', 'constantofshape', 'constant', 'div', 'max', 'sub', 'min', 'tile', 'softmax', 'lessorequal', 'less', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'range', 'reshape', 'reducemean', 'relu', 'sqrt', 'neg', 'concat', 'cast', 'log', 'add'}
{}
succeeded
t5-encoder-12.onnx ['--repeatOnnxTransform=1']
{'unsqueeze', 'where', 'constantofshape', 'constant', 'div', 'sub', 'min', 'softmax', 'less', 'matmul', 'mul', 'pow', 'gather', 'transpose', 'shape', 'range', 'reshape', 'reducemean', 'relu', 'sqrt', 'neg', 'abs', 'concat', 'cast', 'log', 'add'}
{}
succeeded
tiny-yolov3-11.onnx
{'unsqueeze', 'round', 'leakyrelu', 'div', 'sigmoid', 'batchnormalization', 'sub', 'tile', 'loop', 'exp', 'nonmaxsuppression', 'maxpool', 'identity', 'mul', 'transpose', 'shape', 'reshape', 'squeeze', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'ceil', 'resize', 'add'}
{'nonmaxsuppression'}
error: onnx.NonMaxSuppression: inferShapes() not implemented; error: shape inference failed
tinyyolov2-7.onnx
{'conv', 'leakyrelu', 'maxpool', 'mul', 'batchnormalization', 'add'}
{}
succeeded
tinyyolov2-8.onnx
{'conv', 'leakyrelu', 'maxpool', 'mul', 'batchnormalization', 'add'}
{}
succeeded
udnie-8.onnx
{'conv', 'pad', 'upsample', 'relu', 'instancenormalization', 'add'}
{}
succeeded
udnie-9.onnx
{'unsqueeze', 'floor', 'gather', 'conv', 'shape', 'concat', 'constant', 'slice', 'pad', 'cast', 'div', 'relu', 'upsample', 'mul', 'instancenormalization', 'add'}
{}
succeeded
version-rfb-320.onnx
{'unsqueeze', 'exp', 'transpose', 'conv', 'shape', 'concat', 'gather', 'softmax', 'constant', 'reshape', 'slice', 'div', 'relu', 'mul', 'batchnormalization', 'sub', 'add'}
{}
succeeded
version-rfb-640.onnx
{'unsqueeze', 'exp', 'transpose', 'conv', 'shape', 'concat', 'gather', 'softmax', 'constant', 'reshape', 'slice', 'div', 'relu', 'mul', 'batchnormalization', 'sub', 'add'}
{}
succeeded
vgg16-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg16-bn-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'gemm', 'dropout'}
{}
succeeded
vgg19-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg19-bn-7.onnx
{'conv', 'flatten', 'maxpool', 'relu', 'batchnormalization', 'gemm', 'dropout'}
{}
succeeded
vgg19-caffe2-3.onnx (deprecated)
{'conv', 'softmax', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
vgg19-caffe2-6.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg19-caffe2-7.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg19-caffe2-8.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg19-caffe2-9.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg_ilsvrc_16_age_imdb_wiki.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
vgg_ilsvrc_16_gender_imdb_wiki.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'relu', 'gemm', 'dropout'}
{}
succeeded
yolov2-coco-9.onnx
{'transpose', 'conv', 'leakyrelu', 'concat', 'constant', 'reshape', 'maxpool', 'batchnormalization'}
{}
succeeded
yolov3-10.onnx
{'unsqueeze', 'leakyrelu', 'div', 'sigmoid', 'batchnormalization', 'sub', 'tile', 'loop', 'exp', 'nonmaxsuppression', 'mul', 'transpose', 'gather', 'shape', 'reshape', 'squeeze', 'conv', 'concat', 'slice', 'cast', 'reducemin', 'ceil', 'resize', 'add'}
{'nonmaxsuppression'}
error: scales() and sizes() can not both None/not None; error: shape inference failed
yolov4.onnx
{'exp', 'split', 'transpose', 'conv', 'shape', 'concat', 'leakyrelu', 'gather', 'reshape', 'slice', 'maxpool', 'cast', 'sigmoid', 'log', 'mul', 'resize', 'tanh', 'add'}
{}
succeeded
zfnet512-3.onnx (deprecated)
{'conv', 'softmax', 'maxpool', 'lrn', 'relu', 'gemm'}
{}
error: Gemm with A should be a 2D tensor; error: Failed to scan onnx.Gemm parameters successfully; error: shape inference failed
zfnet512-6.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'}
{}
succeeded
zfnet512-7.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'}
{}
succeeded
zfnet512-8.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'}
{}
succeeded
zfnet512-9.onnx
{'conv', 'softmax', 'reshape', 'maxpool', 'lrn', 'relu', 'gemm'}
{}
succeeded
Looks like ONNX-MLIR supports 118 models, of which 101 models can be really compiled and 17 models failed to compile (12 models are deprecated)
Count the number of models in which an op is used (sorted in the decreasing order):
| Operator name | Count | Supported in onnx-mlir |
| --- | --- | --- |
| conv | 119 | supported |
| relu | 111 | supported |
| maxpool | 101 | supported |
| reshape | 84 | supported |
| gemm | 79 | supported |
| softmax | 71 | supported |
| add | 63 | supported |
| concat | 61 | supported |
| dropout | 50 | supported |
| batchnormalization | 46 | supported |
| mul | 36 | supported |
| averagepool | 34 | supported |
| unsqueeze | 32 | supported |
| lrn | 32 | supported |
| transpose | 27 | supported |
| shape | 26 | supported |
| gather | 25 | supported |
| constant | 24 | supported |
| cast | 23 | supported |
| div | 22 | supported |
| globalaveragepool | 22 | supported |
| sub | 21 | supported |
| slice | 21 | supported |
| matmul | 16 | supported |
| flatten | 14 | supported |
| squeeze | 13 | supported |
| constantofshape | 12 | supported |
| sum | 12 | supported |
| upsample | 11 | supported |
| instancenormalization | 10 | supported |
| pad | 10 | supported |
| sqrt | 10 | supported |
| exp | 9 | supported |
| reducemean | 9 | supported |
| split | 8 | supported |
| sigmoid | 8 | supported |
| pow | 8 | supported |
| floor | 7 | supported |
| tanh | 7 | supported |
| resize | 7 | supported |
| log | 6 | supported |
| leakyrelu | 6 | supported |
| clip | 6 | supported |
| tile | 5 | supported |
| nonmaxsuppression | 5 | not supported |
| reducemin | 5 | supported |
| nonzero | 5 | supported |
| identity | 4 | supported |
| less | 4 | supported |
| min | 3 | supported |
| loop | 3 | supported |
| topk | 3 | not supported |
| ceil | 3 | supported |
| expand | 3 | not supported |
| where | 3 | supported |
| equal | 3 | supported |
| erf | 2 | supported |
| roialign | 2 | not supported |
| range | 2 | supported |
| not | 2 | supported |
| scatter | 2 | not supported |
| reciprocal | 2 | supported |
| neg | 2 | supported |
| abs | 2 | supported |
| greater | 2 | supported |
| compress | 1 | not supported |
| and | 1 | supported |
| qlinearglobalaveragepool | 1 | not supported |
| hardmax | 1 | not supported |
| cumsum | 1 | not supported |
| qlinearconv | 1 | not supported |
| argmax | 1 | supported |
| convtranspose | 1 | not supported |
| lstm | 1 | supported |
| round | 1 | supported |
| onehot | 1 | not supported |
| max | 1 | supported |
| categorymapper | 1 | not supported |
| qlinearmatmul | 1 | not supported |
| lessorequal | 1 | supported |
| reducemax | 1 | supported |
| prelu | 1 | supported |
| reducesum | 1 | supported |
| dequantizelinear | 1 | not supported |
| quantizelinear | 1 | not supported |
| scan | 1 | supported |
| qlinearadd | 1 | not supported |
Summary
We examined 116 models out of the 128 models in the ONNX model zoo (12 models are excluded because they use quite old opsets, <= 3). Out of those 116 models:
- 107 models can be compiled.
- 4 models have missing ops:
  - bidaf-9: missing {'CategoryMapper'}
  - fasterrcnn-10: missing {'roialign', 'scatter'}
  - maskrcnn-10: missing {'roialign', 'convtranspose', 'scatter'}
  - resnet50-v1-12-int8: missing quantization ops, not our target at this moment.
- 5 models have supported ops but failed to compile:
  - fcn-resnet101-11, fcn-resnet50-11: it seems related to ResizeOp
  - gpt2-lm-head-10: it seems related to CastOp
  - inception-v2-6: it seems to use an old opset => perhaps consider it an old model.
  - ssd_mobilenet_v1_10: it seems related to ResizeOp and IfOp
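For reference, a minimal sketch of how one of these failures can be re-checked by hand. It assumes the `onnx-mlir` driver is on PATH and that `--EmitONNXIR` is enough to surface the shape-inference errors quoted above; the helper name `try_compile` is made up:

```python
# Sketch only: re-run one model through onnx-mlir and capture its error log.
import subprocess

def try_compile(model_path: str, extra_args=()):
    cmd = ["onnx-mlir", "--EmitONNXIR", *extra_args, model_path]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

ok, log = try_compile("fcn-resnet50-11.onnx")
print("succeeded" if ok else (log.splitlines()[0] if log else "failed"))
```

The `extra_args` hook mirrors the per-model options noted above, e.g. `['--repeatOnnxTransform=1']` for bertsquad-8.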
ONNX models and their ops
ONNX model
Ops in the model
Ops not supported in onnx-mlir
Compilable with onnx-mlir
age_googlenet.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
arcfaceresnet100-8.onnx
{'batchnormalization', 'prelu', 'identity', 'add', 'conv', 'reshape', 'dropout', 'flatten', 'mul', 'gemm', 'sub'}
{}
succeeded
bertsquad-10.onnx
{'gather', 'transpose', 'reshape', 'pow', 'sub', 'slice', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'identity', 'split', 'add', 'reciprocal', 'unsqueeze', 'shape', 'squeeze', 'onehot', 'matmul', 'sqrt', 'concat', 'reducemean'}
{}
succeeded
bertsquad-8.onnx
{'gather', 'transpose', 'reshape', 'pow', 'sub', 'slice', 'tile', 'softmax', 'cast', 'mul', 'tanh', 'identity', 'split', 'add', 'reciprocal', 'unsqueeze', 'shape', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'}
{}
succeeded
bidaf-9.onnx
{'abs', 'gather', 'sigmoid', 'transpose', 'compress', 'conv', 'relu', 'reshape', 'sum', 'sub', 'lstm', 'slice', 'reducemax', 'constantofshape', 'softmax', 'argmax', 'cast', 'mul', 'add', 'unsqueeze', 'clip', 'ceil', 'categorymapper', 'squeeze', 'matmul', 'log', 'scan', 'hardmax', 'reducesum', 'dropout', 'concat', 'shape'}
{'categorymapper'}
onnx-mlir: /home/tungld/dl/onnx-mlir/src/Builder/SymbolTable.hpp:129: void onnx_mlir::SymbolMapping::AddMapping(const string&, T) [with T = onnx::TypeProto; std::__cxx11::string = std::__cxx11::basic_string]: Assertion `!_scopes.back().contain(name) && "Tensor already exists."' failed.
bvlcalexnet-6.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
bvlcalexnet-7.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
bvlcalexnet-8.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
bvlcalexnet-9.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
caffenet-6.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
caffenet-7.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
caffenet-8.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
caffenet-9.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'dropout', 'gemm'}
{}
succeeded
candy-8.onnx
{'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'}
{}
succeeded
candy-9.onnx
{'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'}
{}
succeeded
densenet-6.onnx
{'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'}
{}
succeeded
densenet-7.onnx
{'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'}
{}
succeeded
densenet-8.onnx
{'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'}
{}
succeeded
densenet-9.onnx
{'batchnormalization', 'add', 'maxpool', 'globalaveragepool', 'conv', 'relu', 'unsqueeze', 'averagepool', 'concat', 'mul'}
{}
succeeded
efficientnet-lite4-11.onnx
{'batchnormalization', 'squeeze', 'matmul', 'add', 'transpose', 'conv', 'softmax', 'clip', 'averagepool'}
{}
succeeded
emotion-ferplus-7.onnx
{'matmul', 'add', 'maxpool', 'div', 'conv', 'reshape', 'relu', 'dropout', 'sub'}
{}
succeeded
emotion-ferplus-8.onnx
{'matmul', 'add', 'maxpool', 'div', 'conv', 'reshape', 'relu', 'dropout', 'sub'}
{}
succeeded
fasterrcnn-10.onnx
{'resize', 'gather', 'roialign', 'div', 'transpose', 'sigmoid', 'conv', 'relu', 'reshape', 'flatten', 'constant', 'floor', 'sub', 'nonzero', 'slice', 'maxpool', 'greater', 'constantofshape', 'softmax', 'cast', 'mul', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'clip', 'gemm', 'squeeze', 'log', 'reducemin', 'expand', 'equal', 'sqrt', 'scatter', 'concat', 'shape'}
{'roialign', 'scatter'}
error: onnx.RoiAlign: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed
fcn-resnet101-11.onnx
{'slice', 'resize', 'gather', 'maxpool', 'add', 'relu', 'conv', 'unsqueeze', 'cast', 'concat', 'constant', 'shape'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: pytorch_half_pixel; error: shape inference failed
fcn-resnet50-11.onnx
{'slice', 'resize', 'gather', 'maxpool', 'add', 'relu', 'conv', 'unsqueeze', 'cast', 'concat', 'constant', 'shape'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: pytorch_half_pixel; error: shape inference failed
gender_googlenet.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
googlenet-3.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
googlenet-6.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
googlenet-7.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
googlenet-8.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
googlenet-9.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
gpt2-10.onnx
{'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'slice', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'split', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'}
{}
succeeded
gpt2-lm-head-10.onnx
{'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'slice', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'tanh', 'split', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'sqrt', 'concat', 'reducemean'}
{}
loc("onnx.Cast"): error: 'arith.constant' op integer return type must be signless
inception-v1-6.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
inception-v1-7.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
inception-v1-8.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
inception-v1-9.onnx
{'lrn', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'averagepool', 'concat', 'gemm'}
{}
succeeded
inception-v2-6.onnx
{'batchnormalization', 'add', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'}
{}
onnx-mlir: /home/tungld/dl/llvm-project/mlir/include/mlir/IR/Types.h:235: bool mlir::Type::isa() const [with U = mlir::RankedTensorType]: Assertion `impl && "isa<> used on a null type."' failed.
inception-v2-7.onnx
{'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'}
{}
succeeded
inception-v2-8.onnx
{'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'}
{}
succeeded
inception-v2-9.onnx
{'batchnormalization', 'add', 'maxpool', 'conv', 'relu', 'reshape', 'unsqueeze', 'softmax', 'averagepool', 'concat', 'mul', 'gemm'}
{}
succeeded
maskrcnn-10.onnx
{'resize', 'gather', 'roialign', 'div', 'transpose', 'sigmoid', 'conv', 'relu', 'reshape', 'flatten', 'constant', 'floor', 'sub', 'nonzero', 'slice', 'maxpool', 'and', 'greater', 'constantofshape', 'softmax', 'cast', 'mul', 'split', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'clip', 'gemm', 'convtranspose', 'squeeze', 'log', 'reducemin', 'expand', 'less', 'equal', 'sqrt', 'not', 'scatter', 'concat', 'shape'}
{'roialign', 'convtranspose', 'scatter'}
error: onnx.RoiAlign: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed
mnist-7.onnx
{'matmul', 'add', 'maxpool', 'relu', 'conv', 'reshape'}
{}
succeeded
mnist-8.onnx
{'matmul', 'add', 'maxpool', 'relu', 'conv', 'reshape'}
{}
succeeded
mobilenetv2-7.onnx
{'gemm', 'gather', 'add', 'globalaveragepool', 'conv', 'reshape', 'unsqueeze', 'clip', 'concat', 'constant', 'shape'}
{}
succeeded
mosaic-8.onnx
{'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'}
{}
succeeded
mosaic-9.onnx
{'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'}
{}
succeeded
pointilism-8.onnx
{'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'}
{}
succeeded
pointilism-9.onnx
{'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'}
{}
succeeded
rain-princess-8.onnx
{'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'}
{}
succeeded
rain-princess-9.onnx
{'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'}
{}
succeeded
rcnn-ilsvrc13-6.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'}
{}
succeeded
rcnn-ilsvrc13-7.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'}
{}
succeeded
rcnn-ilsvrc13-8.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'}
{}
succeeded
rcnn-ilsvrc13-9.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'dropout', 'gemm'}
{}
succeeded
resnet101-duc-7.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'sum'}
{}
succeeded
resnet101-v1-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet101-v2-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'}
{}
succeeded
resnet152-v1-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet152-v2-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'}
{}
succeeded
resnet18-v1-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet18-v2-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'}
{}
succeeded
resnet34-v1-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet34-v2-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-6.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-7.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-8.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'}
{}
succeeded
resnet50-caffe2-v1-9.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'sum', 'gemm'}
{}
succeeded
resnet50-v1-12-int8.onnx
{'qlinearmatmul', 'maxpool', 'qlinearglobalaveragepool', 'dequantizelinear', 'quantizelinear', 'qlinearconv', 'qlinearadd', 'flatten'}
{'qlinearmatmul', 'dequantizelinear', 'qlinearglobalaveragepool', 'quantizelinear', 'qlinearconv', 'qlinearadd'}
error: not ranked (same message repeated ~70 times)
resnet50-v1-12.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet50-v1-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'flatten', 'gemm'}
{}
succeeded
resnet50-v2-7.onnx
{'batchnormalization', 'maxpool', 'add', 'globalaveragepool', 'relu', 'conv', 'reshape', 'gemm'}
{}
succeeded
retinanet-9.onnx
{'batchnormalization', 'maxpool', 'add', 'sigmoid', 'upsample', 'relu', 'conv'}
{}
succeeded
roberta-base-11.onnx
{'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'add', 'unsqueeze', 'shape', 'cumsum', 'gemm', 'matmul', 'equal', 'sqrt', 'not', 'erf', 'concat', 'reducemean'}
{}
succeeded
roberta-sequence-classification-9.onnx
{'gather', 'div', 'transpose', 'reshape', 'pow', 'constant', 'sub', 'nonzero', 'constantofshape', 'softmax', 'cast', 'mul', 'tanh', 'add', 'unsqueeze', 'shape', 'gemm', 'squeeze', 'matmul', 'expand', 'sqrt', 'erf', 'concat', 'reducemean'}
{}
succeeded
shufflenet-6.onnx
{'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'}
{}
succeeded
shufflenet-7.onnx
{'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'}
{}
succeeded
shufflenet-8.onnx
{'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'}
{}
succeeded
shufflenet-9.onnx
{'batchnormalization', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'softmax', 'averagepool', 'concat', 'sum', 'gemm'}
{}
succeeded
shufflenet-v2-10.onnx
{'batchnormalization', 'gemm', 'split', 'maxpool', 'transpose', 'relu', 'conv', 'reshape', 'concat', 'constant', 'reducemean'}
{}
succeeded
squeezenet1.0-3.onnx
{'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'}
{}
succeeded
squeezenet1.0-6.onnx
{'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'}
{}
succeeded
squeezenet1.0-7.onnx
{'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'}
{}
succeeded
squeezenet1.0-8.onnx
{'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'}
{}
succeeded
squeezenet1.0-9.onnx
{'maxpool', 'globalaveragepool', 'relu', 'conv', 'softmax', 'dropout', 'concat'}
{}
succeeded
squeezenet1.1-7.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'dropout', 'averagepool', 'concat'}
{}
succeeded
ssd-10.onnx
{'gather', 'transpose', 'relu', 'conv', 'reshape', 'constant', 'sub', 'slice', 'maxpool', 'softmax', 'constantofshape', 'cast', 'mul', 'topk', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'batchnormalization', 'squeeze', 'reducemin', 'concat', 'shape'}
{}
succeeded
ssd_mobilenet_v1_10.onnx
{'loop', 'gather', 'div', 'transpose', 'sigmoid', 'reshape', 'conv', 'sub', 'slice', 'tile', 'constantofshape', 'cast', 'mul', 'split', 'add', 'exp', 'unsqueeze', 'clip', 'min', 'squeeze', 'less', 'concat', 'shape'}
{}
error: these modes() or coordinate_transformation_mode() not implemented yet. mode: linear coordinate_transformation_mode: half_pixel; error: shape inference failed; error: onnx.If: is not supported at this time. Please open an issue on https://github.com/onnx/onnx-mlir and/or consider contribute code. Error encountered in shape inference. error: shape inference failed (this sequence of messages repeated 4 times); Loop op doesn't support dynamic dimensions for scan output. UNREACHABLE executed at /home/tungld/dl/onnx-mlir/src/Conversion/ONNXToKrnl/ControlFlow/Loop.cpp:255!
super-resolution-10.onnx
{'transpose', 'relu', 'conv', 'reshape', 'constant'}
{}
succeeded
t5-decoder-with-lm-head-12.onnx
{'gather', 'div', 'transpose', 'reshape', 'neg', 'range', 'relu', 'lessorequal', 'pow', 'constant', 'sub', 'tile', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'add', 'max', 'unsqueeze', 'shape', 'min', 'matmul', 'log', 'less', 'sqrt', 'concat', 'reducemean'}
{}
succeeded
t5-encoder-12.onnx
{'abs', 'gather', 'div', 'transpose', 'reshape', 'neg', 'range', 'relu', 'pow', 'constant', 'sub', 'constantofshape', 'softmax', 'cast', 'where', 'mul', 'add', 'unsqueeze', 'shape', 'min', 'matmul', 'log', 'less', 'sqrt', 'concat', 'reducemean'}
{}
succeeded
tiny-yolov3-11.onnx
{'resize', 'loop', 'div', 'transpose', 'sigmoid', 'conv', 'reshape', 'sub', 'slice', 'maxpool', 'tile', 'cast', 'mul', 'identity', 'exp', 'add', 'nonmaxsuppression', 'unsqueeze', 'ceil', 'batchnormalization', 'squeeze', 'round', 'reducemin', 'leakyrelu', 'concat', 'shape'}
{}
succeeded
tinyyolov2-7.onnx
{'batchnormalization', 'add', 'maxpool', 'conv', 'leakyrelu', 'mul'}
{}
succeeded
tinyyolov2-8.onnx
{'batchnormalization', 'add', 'maxpool', 'conv', 'leakyrelu', 'mul'}
{}
succeeded
udnie-8.onnx
{'add', 'upsample', 'relu', 'conv', 'pad', 'instancenormalization'}
{}
succeeded
udnie-9.onnx
{'slice', 'gather', 'add', 'mul', 'div', 'cast', 'relu', 'conv', 'floor', 'unsqueeze', 'upsample', 'pad', 'instancenormalization', 'concat', 'constant', 'shape'}
{}
succeeded
version-rfb-320.onnx
{'batchnormalization', 'slice', 'gather', 'add', 'exp', 'mul', 'transpose', 'div', 'relu', 'conv', 'reshape', 'unsqueeze', 'softmax', 'concat', 'constant', 'shape', 'sub'}
{}
succeeded
version-rfb-640.onnx
{'batchnormalization', 'slice', 'gather', 'add', 'exp', 'mul', 'transpose', 'div', 'relu', 'conv', 'reshape', 'unsqueeze', 'softmax', 'concat', 'constant', 'shape', 'sub'}
{}
succeeded
vgg16-7.onnx
{'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'}
{}
succeeded
vgg16-bn-7.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'}
{}
succeeded
vgg19-7.onnx
{'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'}
{}
succeeded
vgg19-bn-7.onnx
{'batchnormalization', 'maxpool', 'relu', 'conv', 'dropout', 'flatten', 'gemm'}
{}
succeeded
vgg19-caffe2-6.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg19-caffe2-7.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg19-caffe2-8.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg19-caffe2-9.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg_ilsvrc_16_age_chalearn_iccv2015.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg_ilsvrc_16_age_imdb_wiki.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
vgg_ilsvrc_16_gender_imdb_wiki.onnx
{'maxpool', 'relu', 'conv', 'reshape', 'softmax', 'dropout', 'gemm'}
{}
succeeded
yolov2-coco-9.onnx
{'batchnormalization', 'maxpool', 'transpose', 'conv', 'reshape', 'leakyrelu', 'concat', 'constant'}
{}
succeeded
yolov3-10.onnx
{'resize', 'loop', 'gather', 'div', 'transpose', 'sigmoid', 'conv', 'reshape', 'sub', 'slice', 'tile', 'cast', 'mul', 'add', 'exp', 'nonmaxsuppression', 'unsqueeze', 'ceil', 'batchnormalization', 'squeeze', 'reducemin', 'leakyrelu', 'concat', 'shape'}
{}
succeeded
yolov4.onnx
{'log', 'slice', 'resize', 'gather', 'split', 'exp', 'add', 'mul', 'transpose', 'maxpool', 'sigmoid', 'conv', 'reshape', 'cast', 'leakyrelu', 'concat', 'tanh', 'shape'}
{}
succeeded
zfnet512-6.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'}
{}
succeeded
zfnet512-7.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'}
{}
succeeded
zfnet512-8.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'}
{}
succeeded
zfnet512-9.onnx
{'lrn', 'maxpool', 'relu', 'reshape', 'conv', 'softmax', 'gemm'}
{}
succeeded
Looks like ONNX-MLIR supports 112 models, of which 107 models can be really compiled and 5 models failed to compile
Count the number of models in which an op is used (sorted in the decreasing order):
| Operator name | Count | Supported in onnx-mlir |
| --- | --- | --- |
| conv | 107 | supported |
| relu | 99 | supported |
| maxpool | 89 | supported |
| reshape | 80 | supported |
| gemm | 70 | supported |
| softmax | 63 | supported |
| add | 59 | supported |
| concat | 57 | supported |
| dropout | 44 | supported |
| batchnormalization | 42 | supported |
| mul | 34 | supported |
| unsqueeze | 32 | supported |
| averagepool | 29 | supported |
| lrn | 27 | supported |
| shape | 26 | supported |
| transpose | 26 | supported |
| gather | 25 | supported |
| cast | 23 | supported |
| constant | 22 | supported |
| slice | 21 | supported |
| globalaveragepool | 21 | supported |
| sub | 20 | supported |
| div | 20 | supported |
| flatten | 14 | supported |
| matmul | 14 | supported |
| squeeze | 13 | supported |
| constantofshape | 12 | supported |
| upsample | 11 | supported |
| sum | 10 | supported |
| instancenormalization | 10 | supported |
| pad | 10 | supported |
| sqrt | 10 | supported |
| exp | 9 | supported |
| reducemean | 9 | supported |
| split | 8 | supported |
| sigmoid | 8 | supported |
| pow | 8 | supported |
| tanh | 7 | supported |
| resize | 7 | supported |
| floor | 7 | supported |
| clip | 6 | supported |
| leakyrelu | 6 | supported |
| log | 6 | supported |
| nonmaxsuppression | 5 | supported |
| reducemin | 5 | supported |
| nonzero | 5 | supported |
| tile | 5 | supported |
| identity | 4 | supported |
| less | 4 | supported |
| loop | 3 | supported |
| topk | 3 | supported |
| expand | 3 | supported |
| equal | 3 | supported |
| where | 3 | supported |
| ceil | 3 | supported |
| min | 3 | supported |
| abs | 2 | supported |
| neg | 2 | supported |
| reciprocal | 2 | supported |
| not | 2 | supported |
| roialign | 2 | not supported |
| range | 2 | supported |
| greater | 2 | supported |
| scatter | 2 | not supported |
| erf | 2 | supported |
| prelu | 1 | supported |
| qlinearmatmul | 1 | not supported |
| compress | 1 | supported |
| and | 1 | supported |
| dequantizelinear | 1 | not supported |
| convtranspose | 1 | not supported |
| categorymapper | 1 | not supported |
| round | 1 | supported |
| quantizelinear | 1 | not supported |
| lessorequal | 1 | supported |
| qlinearadd | 1 | not supported |
| lstm | 1 | supported |
| reducemax | 1 | supported |
| argmax | 1 | supported |
| max | 1 | supported |
| qlinearconv | 1 | not supported |
| cumsum | 1 | supported |
| scan | 1 | supported |
| hardmax | 1 | supported |
| onehot | 1 | supported |
| qlinearglobalaveragepool | 1 | not supported |
| reducesum | 1 | supported |
Updated on April 28, 2022
176 models tested: gpt2-10, gpt2-lm-head-10, bidaf-9, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-12-int8, bertsquad-12, bertsquad-8, bertsquad-10, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-2, emotion-ferplus-7, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12-int8, inception-v1-3, inception-v1-12, inception-v1-8, googlenet-3, googlenet-9, googlenet-12-int8, googlenet-12, googlenet-8, googlenet-6, googlenet-7, inception-v2-8, inception-v2-6, inception-v2-9, inception-v2-3, inception-v2-7, mnist-7, mnist-8, mnist-1, rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-8, rcnn-ilsvrc13-3, rcnn-ilsvrc13-6, zfnet512-6, zfnet512-7, zfnet512-12, zfnet512-3, zfnet512-8, zfnet512-9, zfnet512-12-int8, caffenet-12-int8, caffenet-9, caffenet-12, caffenet-7, caffenet-6, caffenet-8, caffenet-3, mobilenetv2-12, mobilenetv2-7, mobilenetv2-12-int8, squeezenet1.1-7, squeezenet1.0-6, squeezenet1.0-12-int8, squeezenet1.0-7, squeezenet1.0-8, squeezenet1.0-3, squeezenet1.0-9, squeezenet1.0-12, densenet-8, densenet-9, densenet-3, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet50-caffe2-v1-9, resnet50-caffe2-v1-3, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v1-7, resnet18-v2-7, resnet34-v2-7, resnet50-v1-12-int8, resnet50-v2-7, resnet101-v2-7, resnet152-v1-7, resnet50-v1-12, efficientnet-lite4-11-int8, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-12, bvlcalexnet-6, bvlcalexnet-3, bvlcalexnet-12-int8, bvlcalexnet-8, vgg16-12-int8, vgg19-bn-7, vgg19-caffe2-3, vgg19-caffe2-7, vgg19-7, vgg16-7, vgg16-bn-7, vgg19-caffe2-9, vgg16-12, vgg19-caffe2-6, vgg19-caffe2-8, shufflenet-3, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-v2-12-int8, shufflenet-7, shufflenet-8, yolov3-10, FasterRCNN-12-int8, FasterRCNN-12, FasterRCNN-10, fcn-resnet50-12, fcn-resnet50-11, fcn-resnet101-11, fcn-resnet50-12-int8, yolov4, ssd-12, ssd-12-int8, ssd-10, ResNet101-DUC-7, retinanet-9, tinyyolov2-7, tinyyolov2-8, ssd_mobilenet_v1_10, ssd_mobilenet_v1_12, ssd_mobilenet_v1_12-int8, MaskRCNN-10, tiny-yolov3-11, udnie-9, pointilism-9, mosaic-9, udnie-8, candy-8, pointilism-8, rain-princess-8, rain-princess-9, mosaic-8, candy-9
109 models passed: gpt2-10, gpt2-lm-head-10, t5-decoder-with-lm-head-12, t5-encoder-12, bertsquad-8, roberta-sequence-classification-9, roberta-base-11, super-resolution-10, arcfaceresnet100-8, emotion-ferplus-8, emotion-ferplus-2, inception-v1-7, inception-v1-6, inception-v1-9, inception-v1-12-int8, googlenet-3, googlenet-9, googlenet-12-int8, googlenet-12, googlenet-7, inception-v2-8, inception-v2-6, inception-v2-9, inception-v2-3, mnist-7, mnist-1,
rcnn-ilsvrc13-9, rcnn-ilsvrc13-7, rcnn-ilsvrc13-3, rcnn-ilsvrc13-6, zfnet512-6, zfnet512-12, zfnet512-8, zfnet512-9, caffenet-9, caffenet-12, caffenet-8, mobilenetv2-12, mobilenetv2-7, mobilenetv2-12-int8, squeezenet1.1-7, squeezenet1.0-8, squeezenet1.0-12, densenet-8, densenet-9, densenet-3, densenet-6, densenet-7, resnet50-v1-7, resnet101-v1-7, resnet34-v1-7, resnet50-caffe2-v1-6, resnet50-caffe2-v1-7, resnet152-v2-7, resnet50-caffe2-v1-8, resnet18-v2-7, resnet50-v2-7, resnet101-v2-7,
resnet50-v1-12, efficientnet-lite4-11-int8, efficientnet-lite4-11, bvlcalexnet-9, bvlcalexnet-7, bvlcalexnet-6, bvlcalexnet-3, bvlcalexnet-12-int8, bvlcalexnet-8, vgg16-12-int8, vgg19-bn-7, vgg19-caffe2-3, vgg19-caffe2-7, vgg16-7, vgg16-bn-7, vgg19-caffe2-9, vgg19-caffe2-6, shufflenet-v2-10, shufflenet-v2-12, shufflenet-9, shufflenet-6, shufflenet-7, FasterRCNN-12-int8, fcn-resnet50-11, yolov4, ssd-12, ssd-10, ResNet101-DUC-7, retinanet-9, tinyyolov2-8, ssd_mobilenet_v1_10, ssd_mobilenet_v1_12-int8, MaskRCNN-10, udnie-9, pointilism-9, mosaic-9
@tungld can we filter out the models that are too old? Presumably some of the models that don't compile also fail because there are data types we don't handle? Ideally, we would have a way to attach labels to each benchmark (e.g. opset, uses fp16, ...) and then we could pull a set that has/does not have certain characteristics on a per-test-machine-architecture basis.
Results when filtering out old models and int models:
124 models tested: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, MaskRCNN-12, udnie-8, inception-v2-8, shufflenet-7, zfnet512-7, googlenet-7, resnet101-v1-7, ssd_mobilenet_v1_10, densenet-12, arcfaceresnet100-8, MaskRCNN-10, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, ResNet101-DUC-7, caffenet-9, t5-encoder-12, t5-decoder-with-lm-head-12, squeezenet1.0-8, inception-v1-12, fcn-resnet50-12, inception-v1-6, ssd_mobilenet_v1_12,
inception-v1-7, resnet18-v1-7, gpt2-10, zfnet512-6, rain-princess-8, ssd-12, resnet50-v1-7, squeezenet1.0-6, resnet34-v2-7, resnet50-caffe2-v1-7, vgg16-bn-7,
efficientnet-lite4-11, mnist-8, ssd-10, zfnet512-9, bertsquad-10, yolov3-10, vgg16-7, inception-v1-9, shufflenet-v2-12, resnet50-caffe2-v1-8, resnet101-v2-7,
rcnn-ilsvrc13-8, mobilenetv2-12, tinyyolov2-8, resnet152-v1-7, bvlcalexnet-7, inception-v2-6, squeezenet1.0-7, bvlcalexnet-6, resnet34-v1-7, gpt2-lm-head-10,
densenet-8, resnet50-caffe2-v1-9, emotion-ferplus-7, mosaic-8, shufflenet-9, inception-v2-7, vgg19-7, rain-princess-9, googlenet-6, googlenet-8, caffenet-7, resnet50-v1-12, retinanet-9, super-resolution-10, roberta-sequence-classification-9, vgg19-caffe2-8, zfnet512-8, zfnet512-12, udnie-9, googlenet-12, FasterRCNN-12, mobilenetv2-7, squeezenet1.0-9, shufflenet-8, bertsquad-8, fcn-resnet50-11, googlenet-3, yolov4, rcnn-ilsvrc13-9, bidaf-9, fcn-resnet101-11, FasterRCNN-10, densenet-9, vgg19-caffe2-6, resnet50-caffe2-v1-6, vgg19-caffe2-9, squeezenet1.0-3, bvlcalexnet-12, inception-v2-9, caffenet-6, pointilism-8, densenet-6, shufflenet-v2-10, vgg19-caffe2-7, rcnn-ilsvrc13-6, resnet152-v2-7, squeezenet1.1-7, densenet-7, candy-9, vgg19-bn-7, caffenet-12
102 models passed: mnist-7, bvlcalexnet-9, caffenet-8, mosaic-9, yolov3-12, squeezenet1.0-12, vgg16-12, bvlcalexnet-8, bertsquad-12, udnie-8, shufflenet-7, inception-v2-8, zfnet512-7, googlenet-7, resnet101-v1-7, densenet-12, arcfaceresnet100-8, rcnn-ilsvrc13-7, roberta-base-11, candy-8, resnet18-v2-7, emotion-ferplus-8, tiny-yolov3-11, pointilism-9, googlenet-9, resnet50-v2-7, inception-v1-8, shufflenet-6, tinyyolov2-7, caffenet-9, squeezenet1.0-8, inception-v1-12, inception-v1-6, inception-v1-7, resnet18-v1-7, gpt2-10, rain-princess-8, resnet50-v1-7, squeezenet1.0-6, resnet34-v2-7, resnet50-caffe2-v1-7, vgg16-bn-7, efficientnet-lite4-11, mnist-8, zfnet512-9, bertsquad-10, yolov3-10, inception-v1-9, shufflenet-v2-12, resnet50-caffe2-v1-8, resnet101-v2-7, rcnn-ilsvrc13-8, tinyyolov2-8, resnet152-v1-7, bvlcalexnet-7, squeezenet1.0-7, bvlcalexnet-6, resnet34-v1-7, gpt2-lm-head-10, densenet-8, resnet50-caffe2-v1-9, emotion-ferplus-7, mosaic-8, shufflenet-9, inception-v2-7, rain-princess-9, googlenet-6, googlenet-8, caffenet-7, resnet50-v1-12, retinanet-9, super-resolution-10, roberta-sequence-classification-9, vgg19-caffe2-8, zfnet512-8, zfnet512-12, udnie-9, googlenet-12, mobilenetv2-7, squeezenet1.0-9, shufflenet-8, googlenet-3, yolov4, rcnn-ilsvrc13-9, densenet-9, vgg19-caffe2-6, resnet50-caffe2-v1-6, vgg19-caffe2-9, squeezenet1.0-3, bvlcalexnet-12, inception-v2-9, caffenet-6, pointilism-8, densenet-6, shufflenet-v2-10, vgg19-caffe2-7, rcnn-ilsvrc13-6, resnet152-v2-7, squeezenet1.1-7, densenet-7, candy-9, caffenet-12
22 models failed: fcn-resnet50-12, ssd_mobilenet_v1_12, bidaf-9, fcn-resnet101-11, FasterRCNN-10, zfnet512-6, ssd-12, MaskRCNN-12, ssd-10, ssd_mobilenet_v1_10, vgg16-7, MaskRCNN-10, mobilenetv2-12, inception-v2-6, FasterRCNN-12, vgg19-bn-7, ResNet101-DUC-7, bertsquad-8, vgg19-7, fcn-resnet50-11, t5-encoder-12, t5-decoder-with-lm-head-12
For some of the failures like T5, I've got the ability to help us move them over to successful for our users. Once I find some time I'm going to make a data prep script based off the onnxt5 benchmark notebook to give the community good data prepared by onnxruntime.
Great. Thanks!
I am closing this memo because now we can see a live status on the homepage of https://github.com/onnx/onnx-mlir.
|
gharchive/issue
| 2020-05-15T16:11:10 |
2025-04-01T04:35:18.228722
|
{
"authors": [
"AlexandreEichenberger",
"Joejiong",
"Xatter",
"agostini01",
"chenqiny",
"doru1004",
"kernhanda",
"messerb5467",
"tjingrant",
"tungld"
],
"repo": "onnx/onnx-mlir",
"url": "https://github.com/onnx/onnx-mlir/issues/128",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1176278681
|
Optionally return the number of entry points in omQueryEntryPoints
Resolve #1249
This patch changes the function signature of omQueryEntryPoints from **i8() to **i8(*i64) to allow returning the number of entry points via its argument.
This patch also fixes an issue in functions inputSignature(entryPointName) and outputSignature(entryPointName) where NULL will be returned if the entry point name is not found.
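For readers calling this from Python, a minimal ctypes sketch of the new calling convention is below; the model library name is an assumption, and the signature follows the **i8(*i64) shape described above.

# Sketch (assumption: a model compiled by onnx-mlir into ./model.so).
import ctypes

lib = ctypes.CDLL("./model.so")
# New signature: const char **omQueryEntryPoints(int64_t *numOfEntryPoints)
lib.omQueryEntryPoints.restype = ctypes.POINTER(ctypes.c_char_p)
lib.omQueryEntryPoints.argtypes = [ctypes.POINTER(ctypes.c_int64)]

n = ctypes.c_int64(0)
entry_points = lib.omQueryEntryPoints(ctypes.byref(n))
for i in range(n.value):  # n now holds the number of entry points
    print(entry_points[i].decode())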
Signed-off-by: Tung D. Le tung@jp.ibm.com
Also need to update include/onnx-mlir/Runtime/OMEntryPoint.h
Thanks! I forgot it. Now it is updated.
Jenkins Linux s390x Build #4470 [push] Optionally return the nu... started at 12:35
Jenkins Linux amd64 Build #4458 [push] Optionally return the nu... started at 11:35
Jenkins Linux ppc64le Build #3587 [push] Optionally return the nu... started at 12:38
Jenkins Linux s390x Build #4470 [push] Optionally return the nu... passed after 45 min
Jenkins Linux amd64 Build #4458 [push] Optionally return the nu... passed after 1 hr 1 min
|
gharchive/pull-request
| 2022-03-22T05:16:41 |
2025-04-01T04:35:18.381653
|
{
"authors": [
"gongsu832",
"jenkins-droid",
"tungld"
],
"repo": "onnx/onnx-mlir",
"url": "https://github.com/onnx/onnx-mlir/pull/1252",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1358202404
|
Fix model download failure for backend tests
add targets to pre-download ONNX model files for backend tests
add -l|--list option to test.py to list model/node tests used for a backend test configuration
reimplement strtobool since distutils is deprecated in python 3.10 and will be removed in 3.12 (a minimal sketch follows after the sign-off below)
Signed-off-by: Gong Su gong_su@hotmail.com
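For reference, a drop-in reimplementation that mirrors the stdlib's accepted spellings looks roughly like this (a sketch, not necessarily the exact code in this PR):

def strtobool(value: str) -> int:
    """Convert a string representation of truth to 1 (true) or 0 (false)."""
    value = value.lower()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return 1
    if value in ("n", "no", "f", "false", "off", "0"):
        return 0
    raise ValueError(f"invalid truth value {value!r}")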
Jenkins Linux amd64 Build #7408 [push] Fix model download failu... started at 21:54
Jenkins Linux ppc64le Build #6473 [push] Fix model download failu... started at 22:56
Jenkins Linux s390x Build #7424 [push] Fix model download failu... started at 22:54
Jenkins Linux amd64 Build #7408 [push] Fix model download failu... passed after 1 hr 2 min
Jenkins Linux ppc64le Build #6473 [push] Fix model download failu... passed after 1 hr 37 min
Jenkins Linux s390x Build #7424 [push] Fix model download failu... passed after 1 hr 42 min
check-onnx-backend and check-onnx-numerical have been merged into a single check-onnx-backend-numerical to have proper parallel behavior and dependency resolution. See comments in test/CMakeLists.txt for details. Individual check-onnx-backend and check-onnx-numerical still exist.
Should we recommend developers to run make check-onnx-backend-numerical to test everything before submitting a PR?
Individual check-onnx-backend and check-onnx-numerical still exist so it doesn't really matter that much how people want to run their tests locally.
|
gharchive/pull-request
| 2022-09-01T03:39:40 |
2025-04-01T04:35:18.390911
|
{
"authors": [
"gongsu832",
"jenkins-droid"
],
"repo": "onnx/onnx-mlir",
"url": "https://github.com/onnx/onnx-mlir/pull/1662",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1394388148
|
Assign each question mark a unique negative integer value
IndexExpr currently returns a fixed value, -1, for a question-mark dimension at compile time, e.g. when calling shape inference.
This patch assigns question marks unique negative integer values, starting from -2. This is useful when an input dimension is passed to the output dimension without any computation. In such a case, we can check whether the output dimension is the same as the input dimension via the unique values. However, such checking requires storing the IndexExpr for the input inside ShapeHelper, which requires modifying ShapeHelper. Another PR will be created for that if this patch is accepted.
An optional argument, uniqueQuestionMark, is added to IndexExpr::getShape, which returns negative values (-N) for question marks.
static void getShape(llvm::SmallVectorImpl<IndexExpr> &indexExprList,
llvm::SmallVectorImpl<int64_t> &intDimList,
bool uniqueQuestionMark);
By default, uniqueQuestionMark = false is used for backward compatibility.
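As a toy illustration of the idea (plain Python, not the onnx-mlir API):

# -1 marks an unknown ("?") dimension; give each one a unique ID from -2 down.
def assign_unique_question_marks(shape):
    out, next_id = [], -2
    for d in shape:
        out.append(d if d >= 0 else next_id)
        if d < 0:
            next_id -= 1
    return out

print(assign_unique_question_marks([4, -1, -1]))  # [4, -2, -3]
# An output dim copied from input dim 1 would also be -2, so we can tell it is
# the *same* unknown dimension, not just "some" unknown dimension.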
Signed-off-by: Tung D. Le tung@jp.ibm.com
Jenkins Linux amd64 Build #8004 [push] Assign each question mar... started at 10:05
Jenkins Linux ppc64le Build #7068 [push] Assign each question mar... started at 11:08
Jenkins Linux amd64 Build #8004 [push] Assign each question mar... passed after 1 hr 15 min
Jenkins Linux ppc64le Build #7068 [push] Assign each question mar... passed after 1 hr 41 min
Jenkins Linux s390x Build #8021 [push] Assign each question mar... passed after 2 hr 7 min
|
gharchive/pull-request
| 2022-10-03T09:03:23 |
2025-04-01T04:35:18.397278
|
{
"authors": [
"jenkins-droid",
"tungld"
],
"repo": "onnx/onnx-mlir",
"url": "https://github.com/onnx/onnx-mlir/pull/1757",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1801791028
|
Fix the compile and run execution session in Python
The error was due to the fact that the constructor of PyOMCompileExecutionSession used the constructor of OMExecutionSession, which required an already compiled .so. A method has now been added to first compile the model and then initialize the .so in the superclass OMExecutionSession after its constructor is called.
Also cleaned up the code a bit.
As requested, the extended OMCompileExecutionSession interface now first checks whether the file that would be created by the compiler already exists, and if it does not, compiles the model using the flags passed when building the Python object.
Technically, the code parses the flags to detect possible -o filename or -o=filename options to determine whether the user is requesting a custom output name, and also checks the flags to determine whether a custom target is requested.
An optional parameter, reuse_compiled_model=0 can be passed to disable reuse; by default compiled models are reused.
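A hypothetical usage sketch, inferred from this description (the module name and exact arguments are assumptions, not confirmed by this PR):

import numpy as np
from PyCompileAndRuntime import OMCompileExecutionSession  # assumed module

# First use compiles model.onnx (honoring any -o <name> / target flags);
# later uses reuse the existing .so unless reuse_compiled_model=0 is passed.
session = OMCompileExecutionSession("model.onnx", "-O3", reuse_compiled_model=1)
outputs = session.run([np.zeros((1, 10), dtype=np.float32)])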
@tungld , does this work for you now?
Jenkins Linux s390x Build #12050 [push] Fix the compile and run ... started at 08:51
Jenkins Linux amd64 Build #12037 [push] Fix the compile and run ... started at 07:51
Jenkins Linux amd64 Build #12037 [push] Fix the compile and run ... failed after 1 hr 7 min
Jenkins Linux s390x Build #12050 [push] Fix the compile and run ... passed after 1 hr 35 min
Jenkins Linux ppc64le Build #11044 [push] Fix the compile and run ... passed after 1 hr 50 min
|
gharchive/pull-request
| 2023-07-12T21:32:31 |
2025-04-01T04:35:18.404446
|
{
"authors": [
"AlexandreEichenberger",
"jenkins-droid"
],
"repo": "onnx/onnx-mlir",
"url": "https://github.com/onnx/onnx-mlir/pull/2373",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
300879656
|
Couldn't answer the incoming call
Hi,
I have registered my user agent
Made a call from softphone to this UA
I could see the response 180 Ringing on my browser
when I try to answer it using the below code (ref: https://sipjs.com/guides/receive-call/)
options = {
media: {
remote: {
audio: document.getElementById('remoteAudio')
}
},
ua: {
uri: myAuth + "@XX.XXX.XXX.XX",
wsServers: "wss://XX.XXX.XXX.XX:4433",
authorizationUser: '1000',
password: '1000',
rtcpMuxPolicy: 'negotiate'
}
};
simple = new SIP.WebRTC.Simple(options);
simple.on('ringing', function() {
console.log("Call Attended");
simple.answer();
})
I'm getting the below message and the call is not answered:
SIP/2.0 180 Ringing
Record-Route: sip:XX.XXX.XXX.XX:4433;transport=wss;r2=on;lr
Record-Route: sip:XX.XXX.XXX.XX;r2=on;lr
Via: SIP/2.0/WSS XX.XXX.XXX.XX:4433;branch=z9hG4bK42bd.22d1982.6
Via: SIP/2.0/UDP XX.XXX.XXX.XX:5060;branch=z9hG4bK-BroadWorks-MS-990101863
To: sip:1234;trunk=I58@XX.XXX.XXX.XX1:5060;pilot=bk;tag=mug73u14to
From: sip:0370103322@XX.XXX.XXX.XX;tag=2056562219
Call-ID: 2014940806@XX.XXX.XXX.XX
CSeq: 1 INVITE
Contact: sip:hvfnkd87@di31numu9v33.invalid;transport=ws
Supported: outbound
User-Agent: SIP.js/0.9.1
Content-Length: 0
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | configuration parameters after validation:
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · viaHost: "arubq6g3l47p.invalid"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · uri: sip:1234@XX.XXX.XXX.XX
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · wsServers: [{"ws_uri":"wss://10.xx.xxx.xx:xxx","sip_uri":"sip:XX.XXX.XXX.XX:4433;transport=ws;lr","weight":0,"status":0,"scheme":"WSS"}]
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · custom: {}
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · displayName: ""
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · password: NOT SHOWN
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · registerExpires: 600
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · register: true
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · registrarServer: sip:xx.xxx.xx.xx
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · wsServerMaxReconnection: 3
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · wsServerReconnectionTimeout: 4
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · connectionRecoveryMinInterval: 2
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · connectionRecoveryMaxInterval: 30
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · keepAliveInterval: 0
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · extraSupported: []
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · usePreloadedRoute: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · userAgentString: "SIP.js/0.9.1"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · noAnswerTimeout: 60000
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · traceSip: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · hackViaTcp: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · hackIpInContact: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · hackWssInTransport: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · hackAllowUnregisteredOptionTags: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · sessionDescriptionHandlerFactoryOptions: {}
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · contactName: "hpmerqdc"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · contactTransport: "ws"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · forceRport: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · autostart: true
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · autostop: true
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · rel100: "none"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · replaces: "none"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · sessionDescriptionHandlerFactory: function defaultFactory(session, options) {
return new SessionDescriptionHandler(session, options);
}
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · authenticationFactory: undefined
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · allowLegacyNotifications: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · allowOutOfDialogRefers: false
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · authorizationUser: "1000"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · instanceId: "b33e3779-39f4-44d4-a055-8fa19a045896"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · sipjsId: "gfor3"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | · hostportParams: "XX.XXX.XXX.XX"
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | user requested startup...
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.transport | connecting to WebSocket wss://XX.XXX.XXX.XX:4433
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.transport | WebSocket wss://XX.XXX.XXX.XX:4433 connected
sip-0.9.1.js:807 Wed Feb 28 2018 14:27:55 GMT+1100 (AUS Eastern Summer Time) | sip.ua | connection state set to 0
I'm using SIP- 0.9.1.js
I would appreciate any help or suggestions to solve this... Thanks in advance
Please provide full logs for SIP.js in a gist
I am a bit confused as to why the application is restarting when you receive a 180 Ringing. Are you doing something in your application that you have not mentioned here?
I did exactly what I mentioned in the steps above. I'm not sure why the application is getting restarted, but I stopped using Simple in my application since I need to use the rtcpMuxPolicy: 'negotiate' parameter inside the peer connection. I'll update here if I can find the reason for this behavior. Thanks
|
gharchive/issue
| 2018-02-28T03:36:09 |
2025-04-01T04:35:18.454120
|
{
"authors": [
"bharath-inference",
"egreenmachine"
],
"repo": "onsip/SIP.js",
"url": "https://github.com/onsip/SIP.js/issues/524",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
301019077
|
Fix langstring conversion from obda to r2rml format
Correct conversion of langstring.
See a4a9956a .
|
gharchive/issue
| 2018-02-28T13:14:02 |
2025-04-01T04:35:18.470096
|
{
"authors": [
"skomlaebri"
],
"repo": "ontop/ontop",
"url": "https://github.com/ontop/ontop/issues/250",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1289535400
|
🛑 Minecat WS is down
In c0b4a61, Minecat WS ($MINECAT_WS_RAW) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Minecat WS is back up in 1755173.
|
gharchive/issue
| 2022-06-30T03:37:22 |
2025-04-01T04:35:18.477289
|
{
"authors": [
"ooliver1"
],
"repo": "ooliver1/status",
"url": "https://github.com/ooliver1/status/issues/180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1410511330
|
There is a bug
When I copied the ship girl data and imported it into noro6, it reported an error. After comparing it with the JSON copied from Chrome's console, I found that the data exported by this plugin contains an extra area field whose value is undefined; after deleting that field, the import works normally.
Thanks. I was a bit busy with classes and didn't notice this.
|
gharchive/issue
| 2022-10-16T14:46:25 |
2025-04-01T04:35:18.486192
|
{
"authors": [
"frankcwl",
"oooo1111880"
],
"repo": "oooo1111880/poi-plugin-noro6-export",
"url": "https://github.com/oooo1111880/poi-plugin-noro6-export/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2140723034
|
Dopamine auto switching to User registration on TS
I am on M2 with TS installed. I have been noticing, ever since I installed Dopamine, that Dopamine keeps switching over to user registration instead of system. It has been happening about every few hours. I don't mind switching it back to system registration, but it is a bit of an inconvenience. Any fix or solution? Thanks.
this can happen after a jbupdate, besides that it cannot
|
gharchive/issue
| 2024-02-18T03:55:38 |
2025-04-01T04:35:18.494023
|
{
"authors": [
"opa334",
"person1593"
],
"repo": "opa334/Dopamine",
"url": "https://github.com/opa334/Dopamine/issues/365",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2301117391
|
A request
Add an option to import roothide data into a rootless jailbreak; we also want the same option for the roothide one
Not necessary to import the jailbreak files, maybe just import some important information like preferences
not possible
|
gharchive/issue
| 2024-05-16T18:53:35 |
2025-04-01T04:35:18.495160
|
{
"authors": [
"1abd135",
"opa334"
],
"repo": "opa334/Dopamine",
"url": "https://github.com/opa334/Dopamine/issues/575",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
160336924
|
CreateToolhelp32Snapshot issue when building
Hi
I have added the definition to the headers and created a function in process.py,
but when I try to build I get this error:
http://dpaste.com/2XV5JF4
any ideas
thx!
Here is an extract of the process.py file, not documented yet (I know):
http://dpaste.com/1YEGNVF
I found a missing ")",
but it keeps producing the compile error.
Sorry for the delay!
Looks like you're missing an #include statement in pywincffi/core/cdefs/sources/main.c. Here's a branch I made based off of yours which seems to compile for me:
https://github.com/opalmer/pywincffi/compare/debug_CreateToolhelp32Snapshot
So try adding that include statement in your branch and let me know if it works. When you're ready I'll be happy to do a code review if you open up a PR (I assume that's the direction you're heading?).
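The missing piece was the #include in pywincffi/core/cdefs/sources/main.c (CreateToolhelp32Snapshot is declared in tlhelp32.h). For context, a standalone cffi sketch of the call, separate from pywincffi's own plumbing, might look like this (Windows only; the simplified cdef below is an assumption):

import cffi

ffi = cffi.FFI()
ffi.cdef("""
    typedef void *HANDLE;
    typedef unsigned long DWORD;
    HANDLE CreateToolhelp32Snapshot(DWORD dwFlags, DWORD th32ProcessID);
""")
kernel32 = ffi.dlopen("kernel32.dll")

TH32CS_SNAPPROCESS = 0x00000002  # snapshot all processes in the system
snapshot = kernel32.CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0)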
OK, it works, but where is the requirement documented?
Sorry, I'm a newbie :P
I'll document the function and make a PR soon
|
gharchive/issue
| 2016-06-15T04:39:17 |
2025-04-01T04:35:18.507241
|
{
"authors": [
"TurBoss",
"opalmer"
],
"repo": "opalmer/pywincffi",
"url": "https://github.com/opalmer/pywincffi/issues/100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2320060172
|
folder renaming and reorgnization
Description
This PR cleans up the folder naming to be consistent with industry conventions.
Issues
n/a
Type of change
List the type of change like below. Please delete options that are not relevant.
[ ] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds new functionality)
[x] Breaking change (fix or feature that would break existing design and interface)
Dependencies
n/a
Tests
n/a
@chensuyue looks like the folder naming change impacts the preci. please check it
@jfding can you please review this change?
|
gharchive/pull-request
| 2024-05-28T03:42:54 |
2025-04-01T04:35:18.510247
|
{
"authors": [
"ftian1"
],
"repo": "opea-project/GenAIExamples",
"url": "https://github.com/opea-project/GenAIExamples/pull/197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2619445801
|
build(deps): bump angular to 18.2.10
@angular/cdk 18.2.9 -> 18.2.10
@angular/cli 18.2.9 -> 18.2.10
@angular/core 18.2.8 -> 18.2.9
PR Checklist
[ ] Unit Tests have been added for new changes
[ ] API tests have been updated if applicable
[ ] All commented code has been removed
[ ] If you've added a dependency, you've ensured license is compatible with Apache 2.0 and clearly outlined the added dependency.
What are you changing?
Anything the reviewer should know when reviewing this PR?
If there are associated PRs in other repositories, please link them here (i.e. open-amt-cloud-toolkit/repo#365)
:tada: This PR is included in version 8.0.5 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-10-28T20:15:01 |
2025-04-01T04:35:18.515301
|
{
"authors": [
"RosieAMT",
"madhavilosetty-intel"
],
"repo": "open-amt-cloud-toolkit/ui-toolkit-angular",
"url": "https://github.com/open-amt-cloud-toolkit/ui-toolkit-angular/pull/1584",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1203650152
|
Change method and variables to public scope in the addon command.
related to issue #122
Signed-off-by: mgold1234 mgold@redhat.com
/approve
|
gharchive/pull-request
| 2022-04-13T18:35:41 |
2025-04-01T04:35:18.522757
|
{
"authors": [
"mgold1234",
"qiujian16"
],
"repo": "open-cluster-management-io/clusteradm",
"url": "https://github.com/open-cluster-management-io/clusteradm/pull/207",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1006024944
|
trigger build to fix package vulnerability
Ref: https://github.com/open-cluster-management/backlog/issues/16361
Signed-off-by: haoqing0110 qhao@redhat.com
/assign @xuezhaojun
/lgtm
/approve
|
gharchive/pull-request
| 2021-09-24T02:54:21 |
2025-04-01T04:35:18.524655
|
{
"authors": [
"haoqing0110",
"xuezhaojun"
],
"repo": "open-cluster-management/clusterlifecycle-state-metrics",
"url": "https://github.com/open-cluster-management/clusterlifecycle-state-metrics/pull/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
648020827
|
Use FastField where possible to speed up the form
https://jaredpalmer.com/formik/docs/api/fastfield
Finish new case manual entry page
|
gharchive/pull-request
| 2020-06-30T09:29:45 |
2025-04-01T04:35:18.528394
|
{
"authors": [
"allysonjp715"
],
"repo": "open-covid-data/healthmap-gdo-temp",
"url": "https://github.com/open-covid-data/healthmap-gdo-temp/pull/378",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
275259778
|
Responsive "about course" page
This implements https://tasks.opencraft.com/browse/OC-3345 and is made to be as simple as possible: CSS only, no HTML changes, no framework changes.
See screenshots there.
Testing
Note that you need this edx-platform branch!: https://github.com/open-craft/edx-platform/tree/opencraft-release/ginkgo.1-pearson
No migrations needed when you switch branches.
In normal ginkgo.1 this theme wouldn't display correctly.
Instructions:
try http://localhost:8000/courses/course-v1:edX+DemoX+Demo_Course/about
and make your window smaller
no scrollbars should appear
design should be reasonably good, and using the available space
try different sizes
try big screen too
Upstreaming this
The equivalent PR upstream is: https://github.com/edx/edx-platform/pull/16640
Are we also contributing these changes upstream, so we can remove them from the theme in the future?
@bradenmacdonald I was waiting for a response from them to see whether we should do bootstrap or this uglier approach with @media, but ok, I will prepare another version for upstream. Today (I'll work some time).
@bradenmacdonald I did that PR upstream. The code is a bit better there, so I will re-port it into this one. Both have the same effect.
:+1:
[x] I tested this:
I installed this PR's theme with opencraft-release/ginkgo.1-pearson branch of edx-platform.
I opened about course page on Pearson to see what the problem was. After minimizing the browser I've seen the page is not responsive.
I opened the About course page on localhost and minimized the window. The page is responsive. There's no horizontal scrollbar - everything fits well in the small window. I've checked the other LMS pages that are available without logging in - they also look good.
[x] I read through the code
[x] I checked for accessibility issues - I can't see any problems with accessibility that it'd introduce
[x] Includes documentation - the description of this PR and the commit is enough for that.
I will merge this (with the changes @bradenmacdonald asked for) as it is, since it's reviewed and working. I wanted to use the same code as in the PR, but I can't because the PR has lms_main_v1.scss while here it's _pearsonx.css, and the approach is different (here we need to overwrite the defaults, whereas in the upstream PR we set the defaults).
I'll deploy a new appserver.
|
gharchive/pull-request
| 2017-11-20T07:25:32 |
2025-04-01T04:35:18.548072
|
{
"authors": [
"bradenmacdonald",
"clemente",
"tomaszgy"
],
"repo": "open-craft/edx-theme",
"url": "https://github.com/open-craft/edx-theme/pull/23",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
473826989
|
How to parse the iovec array to char array or any other formated data structure?
Hi guys:
In my scenario, I want to use a MySQL DB as the backend storage for a test. So I need to write the handler myself and implement the write() and read() methods. And I ran into trouble when I tried to convert the data in the iovec array to a char array in order to write correct SQL to insert the data into the DB.
I found that there is a method named tcmu_memcpy_from_iovec() in api.c. And I also researched the ceph-rbd handler. rbd.c uses the char array bounce_buffer to store the iovec data temporarily.
static int tcmu_rbd_aio_write(struct tcmu_device *dev, struct rbd_aio_cb *aio_cb,
rbd_completion_t completion, struct iovec *iov,
size_t iov_cnt, size_t length, off_t offset)
{
struct tcmu_rbd_state *state = tcmur_dev_get_private(dev);
int ret;
aio_cb->bounce_buffer = malloc(length);
if (!aio_cb->bounce_buffer) {
tcmu_dev_err(dev, "Failed to allocate bounce buffer.\n");
return -ENOMEM;
}
tcmu_memcpy_from_iovec(aio_cb->bounce_buffer, length, iov, iov_cnt);
ret = rbd_aio_write(state->image, offset, length, aio_cb->bounce_buffer,
completion);
if (ret < 0)
free(aio_cb->bounce_buffer);
return ret;
}
So I guessed I could use the char array to store the data from the iovec and concatenate it into a SQL clause in order to insert the data into the DB. However, I failed: the char array is empty when I use %s to format the data, and strlen also returns zero.
I am confused about how ceph-rbd converts the block IO request into an OSD IO request, especially how to parse the iovec array. Could anyone give me some clues?
Thanks
Best Regards
@zjs1224522500
The IO data to/from the kernel space is formatted and stored in the iovec array, but the rbd APIs use the bounce buffer instead, so the data buffers are gathered/scattered before/after that callout.
After a read is done, it scatters the data from the bounce buffer into one or more smaller buffers in the iovec array.
Before a write starts, it gathers all the small buffers into a bounce buffer.
Maybe you can try to debug it by printing each iovec info, something like:
nbd_info("iovec->iov_len[%zu] iovec->iov_base[%s]\n", iovec->iov_len, iovec->iov_base);
to see whether any data exists?
@lxbsz Thank you firstly!
But as for the iovec array, %s does not seem suitable for void *, so I have to cast the void * to char *.
/* Structure for scatter/gather I/O. */
struct iovec{
void *iov_base; /* Pointer to data. */
size_t iov_len; /* Length of data. */
};
And I temporarily use tcmu_err to print, as in the following code.
void print_iovec(struct iovec *iov, size_t iov_cnt)
{
int i;
for (i = 0; i < iov_cnt; i++)
{
tcmu_err("iovec->iov_len[%zu] iovec->iov_base[%s].\n", iov->iov_len, (char *) (iov->iov_base));
iov++;
}
}
The partial log result is like the following (all iov_base values are empty):
2019-07-29 17:04:06.445 7672 [ERROR] print_iovec:769: iovec->iov_len[4096] iovec->iov_base[].
2019-07-29 17:04:06.445 7672 [ERROR] print_iovec:769: iovec->iov_len[61440] iovec->iov_base[].
Maybe I can't cast the value to char *, and %s may also have changed the data format. So I wonder whether the char array bounce_buffer in rbd.c is just a bridge for memcpy().
Thanks
Best Regards
@zjs1224522500
Yeah, the data in the iovec array is not NUL-terminated like a string, so you cannot just handle it like this. And it's possible that all the data is zeroed, such as what the WRITESAME SCSI command emulator will do. So it is not a good idea to cast it to a string when concatenating the SQL clause.
Thanks.
@lxbsz
Do you have any good ideas for storing the data of the iovec array?
I am trying to find an encoding algorithm to convert the data. I am not sure whether this method makes sense; I will give it a try.
In another way, I may need to research how Ceph-RBD converts block IO to OSD IO. The key point may be how the iovec array is converted into an object storage request.
Thx & BR
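One possible direction (an assumption, not something settled in this thread) is to gather the iovec segments into one buffer and hex-encode it so arbitrary bytes survive the trip into SQL. A small Python sketch of the idea:

segments = [b"\x00\x01\xff", b"hello"]  # stand-ins for iovec entries
bounce = b"".join(segments)             # gather, like the bounce buffer in rbd.c
# MySQL accepts hex literals (X'...'), so no NUL/quote issues from char* casts.
sql = "INSERT INTO blocks (offset, data) VALUES (%d, X'%s')" % (0, bounce.hex())
print(sql)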
|
gharchive/issue
| 2019-07-29T03:02:22 |
2025-04-01T04:35:18.614242
|
{
"authors": [
"lxbsz",
"zjs1224522500"
],
"repo": "open-iscsi/tcmu-runner",
"url": "https://github.com/open-iscsi/tcmu-runner/issues/578",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
647588128
|
Update default-header.html to make previous OLS clickable with the last cohort
Allowing "/OLS-1" to appear when clicked on previous cohort
It isn't clickable at the moment
The link can be updated with the most recently concluded cohort
Do you think it is really needed that we can click on the verry last cohort from the top? My idea there was to list the different cohorts in the dropdown menu. I am afraid it may be disturbing. But if you think it is better, I am happy to merge it
From a user point of you, it is useful to click and get redirected to the last cohort directly. This will continue to expand in long term so drop down should totally exist. I wonder if @yochannah can break the tie.
One point also for you: it is more consistent with the other drop-down menus :)
I merge it
|
gharchive/pull-request
| 2020-06-29T18:46:34 |
2025-04-01T04:35:18.640371
|
{
"authors": [
"bebatut",
"malvikasharan"
],
"repo": "open-life-science/open-life-science.github.io",
"url": "https://github.com/open-life-science/open-life-science.github.io/pull/163",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1396652794
|
Profiler ingestion fails sampling data from MySQL 5.6 compatible databases
Affected module
Ingestion
Describe the bug
Profiler ingestion is not able to sample data from Aurora Serverless because the query being executed used a common table expression (CTE), which is not supported.
Errors similar to the following can be found in the profiler ingestion logs:
[2022-10-03 22:33:58,411] {sqa_interface.py:512} WARNING - Error trying to compute profile for foobars.new_foobar_id: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'only_once AS \n(SELECT count(new_foobar_id) AS count_1 \nFROM foobar_service_rds.foobar' at line 2")
[SQL: /* {"app": "OpenMetadata", "version": "0.12.1.0"} */ WITH only_once AS (SELECT count(new_foobar_id) AS count_1 FROM foobar_service_rds.foobars GROUP BY new_foobar_id HAVING count(new_foobar_id) = %(count_2)s) SELECT count(*) AS uniqueCount FROM only_once LIMIT %(param_1)s]
[parameters: {'count_2': 1, 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/14/f405)
[2022-10-03 22:33:58,427] {sqa_interface.py:512} WARNING - Error trying to compute profile for foobars.foobar_id: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'only_once AS \n(SELECT count(foobar_id) AS count_1 \nFROM foobar_service_rds.foobars G' at line 2")
[SQL: /* {"app": "OpenMetadata", "version": "0.12.1.0"} */ WITH only_once AS (SELECT count(foobar_id) AS count_1 FROM foobar_service_rds.foobars GROUP BY foobar_id HAVING count(foobar_id) = %(count_2)s) SELECT count(*) AS uniqueCount FROM only_once LIMIT %(param_1)s]
[parameters: {'count_2': 1, 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/14/f405)
[2022-10-03 22:33:58,441] {sqa_interface.py:512} WARNING - Error trying to compute profile for foobars.foobar_inspection_id: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'only_once AS \n(SELECT count(foobar_inspection_id) AS count_1 \nFROM foobar_service_rd' at line 2")
[SQL: /* {"app": "OpenMetadata", "version": "0.12.1.0"} */ WITH only_once AS (SELECT count(foobar_inspection_id) AS count_1 FROM foobar_service_rds.foobars GROUP BY foobar_inspection_id HAVING count(foobar_inspection_id) = %(count_2)s) SELECT count(*) AS uniqueCount FROM only_once LIMIT %(param_1)s]
[parameters: {'count_2': 1, 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/14/f405)
[2022-10-03 22:33:58,454] {sqa_interface.py:512} WARNING - Error trying to compute profile for foobars.asset_type: (pymysql.err.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'only_once AS \n(SELECT count(asset_type) AS count_1 \nFROM foobar_service_rds.foobars' at line 2")
[SQL: /* {"app": "OpenMetadata", "version": "0.12.1.0"} */
WITH only_once AS
(SELECT count(asset_type) AS count_1
FROM foobar_service_rds.foobars GROUP BY asset_type
HAVING count(asset_type) = %(count_2)s)
SELECT count(*) AS uniqueCount
FROM only_once
LIMIT %(param_1)s]
[parameters: {'count_2': 1, 'param_1': 1}]
(Background on this error at: https://sqlalche.me/e/14/f405)
In the above example, the CTE is defined as WITH only_once AS (....
I confirmed the versions being used:
SHOW VARIABLES LIKE "%version%";
-- returns
('aurora_version', '2.08.3')
('innodb_version', '5.7.12')
('protocol_version', '10')
('slave_type_conversions', '')
('tls_version', 'TLSv1,TLSv1.1,TLSv1.2')
('version', '5.7.12')
('version_comment', 'MySQL Community Server (GPL)')
('version_compile_machine', 'x86_64')
('version_compile_os', 'Linux')
To Reproduce
Create a MySQL 5.6 database (optionally use Aurora Serverless) with a simple table containing some data
Connect to it with OpenMetadata
Create a metadata ingestion job and run it (it should succeed)
Create a profiler ingestion job and enable the "Ingest Sample Data" option
Run the profiler job and inspect the logs
Confirm that no sample data is being generated by navigating to the table's "Sample Data" tab
Expected behavior
OpenMetadata should be able to sample data from older 5.x versions of MySQL.
Version:
OS: Kubernets/Helm
Python version:
OpenMetadata version: 0.12.1
OpenMetadata Ingestion package version:
Additional context
I confirmed that the profiler works fine for MySQL 8.x. I verified that the CTE appears to be the problem by manually executing the following query against the database:
WITH only_once AS
(SELECT count(asset_type) AS count_1
FROM foobar_service_rds.foobars GROUP BY asset_type
HAVING count(asset_type) = 1)
SELECT count(*) AS uniqueCount
FROM only_once
LIMIT 1
Metadata ingestion works fine, as does profiler ingestion without sample data.
Might be able to do a derived table instead of a CTE...derived table syntax appears to be supported by MySQL 5.x. For example:
SELECT count(1) AS uniqueCount
FROM (SELECT count(asset_type) AS count_1
FROM foobar_service_rds.foobars GROUP BY asset_type
HAVING count(asset_type) = 1) AS only_once
LIMIT 1;
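In SQLAlchemy terms (which the profiler uses to build its queries), the rewrite corresponds to emitting a subquery instead of a CTE. A rough sketch, with table and column names taken from the example above:

from sqlalchemy import Column, MetaData, String, Table, func, select

metadata = MetaData()
foobars = Table(
    "foobars", metadata,
    Column("asset_type", String),
    schema="foobar_service_rds",
)

grouped = (
    select(func.count(foobars.c.asset_type).label("count_1"))
    .group_by(foobars.c.asset_type)
    .having(func.count(foobars.c.asset_type) == 1)
).subquery("only_once")  # derived table; .cte("only_once") is what breaks 5.x

unique_count = select(func.count().label("uniqueCount")).select_from(grouped)
print(unique_count)  # renders SELECT ... FROM (SELECT ...) AS only_once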
Documentation has been updated to explicitly state that MySQL 8.x or later must be used for the connector.
|
gharchive/issue
| 2022-10-04T18:03:53 |
2025-04-01T04:35:18.648545
|
{
"authors": [
"ehausig"
],
"repo": "open-metadata/OpenMetadata",
"url": "https://github.com/open-metadata/OpenMetadata/issues/7944",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1359288994
|
Remove filtering test cases by test suite ID in TestCase list API
Describe your changes :
TestSuite GET API already has a way to get test cases for the given test suite ID. Removing the redundant filter functionality on TestCase list API.
Type of change :
[x] Improvement
Checklist:
[x] I have read the CONTRIBUTING document.
[ ] I have commented on my code, particularly in hard-to-understand areas.
[ ] I have added tests that prove my fix is effective or that my feature works.
[x] All new and existing tests passed.
@sureshms let's hold off on merging this; we may have dependencies
Lets handle this API post release for 0.12.1
@TeddyCr @ayush-shah can we incorporate this for 0.13.1?
Let's make sure we follow up in the next release.
|
gharchive/pull-request
| 2022-09-01T19:09:44 |
2025-04-01T04:35:18.653312
|
{
"authors": [
"harshach",
"sureshms"
],
"repo": "open-metadata/OpenMetadata",
"url": "https://github.com/open-metadata/OpenMetadata/pull/7140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1392407991
|
Some mistakes: pcdet/datasets/kitti/kitti_dataset.py
I ran kitti_dataset.py to debug and found an error: from . import kitti_utils raises ImportError: attempted relative import with no known parent package. Why can't it find this package? How should I modify or ignore this error? In fact, the code can train and run inference normally.
What's your command? It's better to follow the style of generate_kitti_infos to ensure a correct import path.
|
gharchive/issue
| 2022-09-30T13:00:38 |
2025-04-01T04:35:18.667877
|
{
"authors": [
"WesternTrail",
"jihanyang"
],
"repo": "open-mmlab/OpenPCDet",
"url": "https://github.com/open-mmlab/OpenPCDet/issues/1132",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
512740842
|
undefined symbol: _ZN3c1019ComplexCUDATensorIdEv
I've reviewed all the other issues about this problem. It is said that the compile-time CUDA version differs from the runtime CUDA version. However, I don't know how to check this, and the proposed solution doesn't work for me. Many thanks if anybody can give a tutorial.
After updating to the current master branch, the compile.sh file doesn't exist anymore; can you tell me how to recompile easily?
I think my environment matches my PyTorch version.
My system runs in the following environment:
1. nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
2. pytorch 1.3.0 py3.6_cuda10.0.130_cudnn7.6.3_0
I tried PyTorch 1.1, 1.2 and 1.3; none of them helped.
You can remove the whole ops directory, download the directory again to replace it, and recompile.
I actually solved this problem myself. My "base" environment's Python version is 3.7, while my "open-mmlab" environment's Python version was 3.6.
After I changed the Python version to 3.7 in my open-mmlab environment, it works properly. Not sure if that was the problem...
Hope it can help
|
gharchive/issue
| 2019-10-25T22:12:53 |
2025-04-01T04:35:18.842913
|
{
"authors": [
"TWDH",
"jiangwenj02"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/1587",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
558529330
|
How to use the custom optimizer?
I want to customize the optimizer (RAdam etc.), but I find that the optimizers in mmdetection are imported from torch.optim (only a few optimizers are implemented there). Can you provide an official example of how to do this? Thank you
We will introduce an optimizer registry, like the dataset or model registry, to support customizing the optimizer. As a workaround for now, you may just modify this line and use your own optimizer class.
OK, I'll try, thanks
It seems the custom optimizer documentation is out of date. How can one use a custom optimizer after the cleanup of the optimizer code?
I figured it out this way:
from mmcv.runner.optimizer import build_optimizer, OPTIMIZERS
from my_optimizer import MyOptimizer

# Register the custom optimizer class, then refer to it by name in the config.
OPTIMIZERS.register_module()(MyOptimizer)
optim_cfg = dict(type="MyOptimizer", lr=0.001)
optimizer = build_optimizer(model, optim_cfg)
|
gharchive/issue
| 2020-02-01T11:45:46 |
2025-04-01T04:35:18.845302
|
{
"authors": [
"AlanJie",
"hellock",
"hiyyg"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/2041",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
575275617
|
ValueError: need at least one array to concatenate
I have this issue at np.concatenate(indices). I used my own dataset in COCO format.
python tools/train.py configs/pn_test.py
pn_test.py is a copy of mask_rcnn_x101_64x4d_fpn_1x.py
2020-03-04 17:56:10,433 - mmdet - INFO - Distributed training: False
2020-03-04 17:56:10,433 - mmdet - INFO - Config:
model settings
model = dict(
type='MaskRCNN',
pretrained='open-mmlab://resnext101_64x4d',
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHead',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=2,
target_means=[0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
mask_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=14, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
mask_head=dict(
type='FCNMaskHead',
num_convs=4,
in_channels=256,
conv_out_channels=256,
num_classes=2,
loss_mask=dict(
type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)))
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssigner',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
mask_size=28,
pos_weight=-1,
debug=False))
test_cfg = dict(
rpn=dict(
nms_across_levels=False,
nms_pre=1000,
nms_post=1000,
max_num=1000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
score_thr=0.05,
nms=dict(type='nms', iou_thr=0.5),
max_per_img=100,
mask_thr_binary=0.5))
# dataset settings
dataset_type = 'PnDataset'
data_root = 'data/coco/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
dict(type='RandomFlip', flip_ratio=0.5),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='DefaultFormatBundle'),
dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(
type='MultiScaleFlipAug',
img_scale=(1333, 800),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='Collect', keys=['img']),
])
]
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'annotations/pn_train.json',
img_prefix=data_root + 'train2017/',
pipeline=train_pipeline),
val=dict(
type=dataset_type,
ann_file=data_root + 'annotations/pn_val.json',
img_prefix=data_root + 'val2017/',
pipeline=test_pipeline),
test=dict(
type=dataset_type,
ann_file=data_root + 'annotations/pn_test.json',
img_prefix=data_root + 'test2017/',
pipeline=test_pipeline))
evaluation = dict(interval=1, metric=['bbox', 'segm'])
# optimizer: the default here is the learning rate for 8 GPUs, 0.02/8 = 0.0025
optimizer = dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
# dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/mask_rcnn_x101_64x4d_fpn_1x'
work_dir = './checkpoints/pn_mask_rcnn_x101_64x4d_fpn_1x'
load_from = None
resume_from = None
workflow = [('train', 1)]
2020-03-04 17:56:11,498 - mmdet - INFO - load model from: open-mmlab://resnext101_64x4d
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
2020-03-04 17:56:13,547 - mmdet - INFO - Start running, host: xly@xly-Ubuntu, work_dir: /home/xly/mmdetection/checkpoints/pn_mask_rcnn_x101_64x4d_fpn_1x
2020-03-04 17:56:13,548 - mmdet - INFO - workflow: [('train', 1)], max: 12 epochs
Traceback (most recent call last):
File "tools/train.py", line 142, in
main()
File "tools/train.py", line 138, in main
meta=meta)
File "/home/xly/mmdetection/mmdet/apis/train.py", line 111, in train_detector
meta=meta)
File "/home/xly/mmdetection/mmdet/apis/train.py", line 305, in _non_dist_train
runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/mmcv/runner/runner.py", line 371, in run
epoch_runner(data_loaders[i], **kwargs)
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/mmcv/runner/runner.py", line 271, in train
for i, data_batch in enumerate(data_loader):
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 279, in iter
return _MultiProcessingDataLoaderIter(self)
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 746, in init
self._try_put_index()
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 861, in _try_put_index
index = self._next_index()
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 339, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/xly/anaconda3/envs/envtest/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 200, in iter
for idx in self.sampler:
File "/home/xly/mmdetection/mmdet/datasets/loader/sampler.py", line 63, in iter
indices = np.concatenate(indices)
File "<array_function internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
I printed things out: I cannot read the data (len(self.sampler) == 0), but I checked and the config paths are OK.
Sorry, I made a mistake and closed this issue.
I found an answer, but he says: "Ok, I fixed my bugs. I mixed up width and height params in annotations. Thank you guys." I cannot understand the meaning of this sentence.
Check your JSON file.
Your image id would be a string, whereas the parser needs an int.
Change this, and it should be able to work
Sorry, I made a mistake and closed the issue accidentally. You can check my new issue.
I used the following code to check the type of my image id.
print(json1['images'][0]['id'])
tmp = json1['images'][0]['id']
print(type(tmp)) ###
But it shows:
0
<class 'int'> ###
But I think string type for image id would work as well.
Hello, what did you do? I met the same error.
I made two errors: 1. I found the type of image_id was a string. 2. The segmentation type must be list[list[]]. I changed these and it is OK.
So I think you need to check your labels; you can print the type of segmentation and image_id. A quick check is sketched below.
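A quick sanity check for the two pitfalls mentioned above might look like this (the annotation path is taken from the config earlier in this thread):

import json

with open("data/coco/annotations/pn_train.json") as f:
    coco = json.load(f)

# image ids must be ints, not strings
assert all(isinstance(img["id"], int) for img in coco["images"])
# segmentation must be list[list[...]] polygons
assert all(
    isinstance(ann["segmentation"], list)
    and all(isinstance(poly, list) for poly in ann["segmentation"])
    for ann in coco["annotations"]
)
print("annotation file looks OK")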
@CodeXiaoLingYun how do I change it? I have checked this, but image_id is an int.
|
gharchive/issue
| 2020-03-04T10:02:40 |
2025-04-01T04:35:18.886529
|
{
"authors": [
"CodeXiaoLingYun",
"Hemantr05",
"Onkarsus13",
"ZhangChao1993",
"yangshiyu89"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/2197",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
694431434
|
question about res2net with dcn?
When I use res2net with dcn, I found this problem.
My config:
model = dict(
    type='CascadeRCNN',
    pretrained='data/pre/res2net101.pth',
    backbone=dict(
        type='Res2Net',
        depth=101,
        scales=4,
        base_width=26,
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
2020-09-06 23:50:32,051 - mmdet - INFO - load model from: data/pre/res2net101.pth
2020-09-06 23:50:34,500 - mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.0.weight, conv1.1.weight, conv1.1.bias, conv1.1.running_mean, conv1.1.running_var, conv1.1.num_batches_tracked, conv1.3.weight, conv1.4.weight, conv1.4.bias, conv1.4.running_mean, conv1.4.running_var, conv1.4.num_batches_tracked, conv1.6.weight, bn1.weight, bn1.bias, bn1.running_mean, bn1.running_var, bn1.num_batches_tracked, fc.weight, fc.bias
missing keys in source state_dict: stem.0.weight, stem.1.weight, stem.1.bias, stem.1.running_mean, stem.1.running_var, stem.3.weight, stem.4.weight, stem.4.bias, stem.4.running_mean, stem.4.running_var, stem.6.weight, stem.7.weight, stem.7.bias, stem.7.running_mean, stem.7.running_var, layer2.0.convs.0.conv_offset.weight, layer2.0.convs.0.conv_offset.bias, layer2.0.convs.1.conv_offset.weight, layer2.0.convs.1.conv_offset.bias, layer2.0.convs.2.conv_offset.weight, layer2.0.convs.2.conv_offset.bias, layer2.1.convs.0.conv_offset.weight, layer2.1.convs.0.conv_offset.bias, layer2.1.convs.1.conv_offset.weight, layer2.1.convs.1.conv_offset.bias, layer2.1.convs.2.conv_offset.weight, layer2.1.convs.2.conv_offset.bias, layer2.2.convs.0.conv_offset.weight, layer2.2.convs.0.conv_offset.bias, layer2.2.convs.1.conv_offset.weight, layer2.2.convs.1.conv_offset.bias, layer2.2.convs.2.conv_offset.weight, layer2.2.convs.2.conv_offset.bias, layer2.3.convs.0.conv_offset.weight, layer2.3.convs.0.conv_offset.bias, layer2.3.convs.1.conv_offset.weight, layer2.3.convs.1.conv_offset.bias, layer2.3.convs.2.conv_offset.weight, layer2.3.convs.2.conv_offset.bias, layer3.0.convs.0.conv_offset.weight, layer3.0.convs.0.conv_offset.bias, layer3.0.convs.1.conv_offset.weight, layer3.0.convs.1.conv_offset.bias, layer3.0.convs.2.conv_offset.weight, layer3.0.convs.2.conv_offset.bias, layer3.1.convs.0.conv_offset.weight, layer3.1.convs.0.conv_offset.bias, layer3.1.convs.1.conv_offset.weight, layer3.1.convs.1.conv_offset.bias, layer3.1.convs.2.conv_offset.weight, layer3.1.convs.2.conv_offset.bias, layer3.2.convs.0.conv_offset.weight, layer3.2.convs.0.conv_offset.bias, layer3.2.convs.1.conv_offset.weight, layer3.2.convs.1.conv_offset.bias, layer3.2.convs.2.conv_offset.weight, layer3.2.convs.2.conv_offset.bias, layer3.3.convs.0.conv_offset.weight, layer3.3.convs.0.conv_offset.bias, layer3.3.convs.1.conv_offset.weight, layer3.3.convs.1.conv_offset.bias, layer3.3.convs.2.conv_offset.weight, layer3.3.convs.2.conv_offset.bias, layer3.4.convs.0.conv_offset.weight, layer3.4.convs.0.conv_offset.bias, layer3.4.convs.1.conv_offset.weight, layer3.4.convs.1.conv_offset.bias, layer3.4.convs.2.conv_offset.weight, layer3.4.convs.2.conv_offset.bias, layer3.5.convs.0.conv_offset.weight, layer3.5.convs.0.conv_offset.bias, layer3.5.convs.1.conv_offset.weight, layer3.5.convs.1.conv_offset.bias, layer3.5.convs.2.conv_offset.weight, layer3.5.convs.2.conv_offset.bias, layer3.6.convs.0.conv_offset.weight, layer3.6.convs.0.conv_offset.bias, layer3.6.convs.1.conv_offset.weight, layer3.6.convs.1.conv_offset.bias, layer3.6.convs.2.conv_offset.weight, layer3.6.convs.2.conv_offset.bias, layer3.7.convs.0.conv_offset.weight, layer3.7.convs.0.conv_offset.bias, layer3.7.convs.1.conv_offset.weight, layer3.7.convs.1.conv_offset.bias, layer3.7.convs.2.conv_offset.weight, layer3.7.convs.2.conv_offset.bias, layer3.8.convs.0.conv_offset.weight, layer3.8.convs.0.conv_offset.bias, layer3.8.convs.1.conv_offset.weight, layer3.8.convs.1.conv_offset.bias, layer3.8.convs.2.conv_offset.weight, layer3.8.convs.2.conv_offset.bias, layer3.9.convs.0.conv_offset.weight, layer3.9.convs.0.conv_offset.bias, layer3.9.convs.1.conv_offset.weight, layer3.9.convs.1.conv_offset.bias, layer3.9.convs.2.conv_offset.weight, layer3.9.convs.2.conv_offset.bias, layer3.10.convs.0.conv_offset.weight, layer3.10.convs.0.conv_offset.bias, layer3.10.convs.1.conv_offset.weight, layer3.10.convs.1.conv_offset.bias, layer3.10.convs.2.conv_offset.weight, layer3.10.convs.2.conv_offset.bias, 
layer3.11.convs.0.conv_offset.weight, layer3.11.convs.0.conv_offset.bias, layer3.11.convs.1.conv_offset.weight, layer3.11.convs.1.conv_offset.bias, layer3.11.convs.2.conv_offset.weight, layer3.11.convs.2.conv_offset.bias, layer3.12.convs.0.conv_offset.weight, layer3.12.convs.0.conv_offset.bias, layer3.12.convs.1.conv_offset.weight, layer3.12.convs.1.conv_offset.bias, layer3.12.convs.2.conv_offset.weight, layer3.12.convs.2.conv_offset.bias, layer3.13.convs.0.conv_offset.weight, layer3.13.convs.0.conv_offset.bias, layer3.13.convs.1.conv_offset.weight, layer3.13.convs.1.conv_offset.bias, layer3.13.convs.2.conv_offset.weight, layer3.13.convs.2.conv_offset.bias, layer3.14.convs.0.conv_offset.weight, layer3.14.convs.0.conv_offset.bias, layer3.14.convs.1.conv_offset.weight, layer3.14.convs.1.conv_offset.bias, layer3.14.convs.2.conv_offset.weight, layer3.14.convs.2.conv_offset.bias, layer3.15.convs.0.conv_offset.weight, layer3.15.convs.0.conv_offset.bias, layer3.15.convs.1.conv_offset.weight, layer3.15.convs.1.conv_offset.bias, layer3.15.convs.2.conv_offset.weight, layer3.15.convs.2.conv_offset.bias, layer3.16.convs.0.conv_offset.weight, layer3.16.convs.0.conv_offset.bias, layer3.16.convs.1.conv_offset.weight, layer3.16.convs.1.conv_offset.bias, layer3.16.convs.2.conv_offset.weight, layer3.16.convs.2.conv_offset.bias, layer3.17.convs.0.conv_offset.weight, layer3.17.convs.0.conv_offset.bias, layer3.17.convs.1.conv_offset.weight, layer3.17.convs.1.conv_offset.bias, layer3.17.convs.2.conv_offset.weight, layer3.17.convs.2.conv_offset.bias, layer3.18.convs.0.conv_offset.weight, layer3.18.convs.0.conv_offset.bias, layer3.18.convs.1.conv_offset.weight, layer3.18.convs.1.conv_offset.bias, layer3.18.convs.2.conv_offset.weight, layer3.18.convs.2.conv_offset.bias, layer3.19.convs.0.conv_offset.weight, layer3.19.convs.0.conv_offset.bias, layer3.19.convs.1.conv_offset.weight, layer3.19.convs.1.conv_offset.bias, layer3.19.convs.2.conv_offset.weight, layer3.19.convs.2.conv_offset.bias, layer3.20.convs.0.conv_offset.weight, layer3.20.convs.0.conv_offset.bias, layer3.20.convs.1.conv_offset.weight, layer3.20.convs.1.conv_offset.bias, layer3.20.convs.2.conv_offset.weight, layer3.20.convs.2.conv_offset.bias, layer3.21.convs.0.conv_offset.weight, layer3.21.convs.0.conv_offset.bias, layer3.21.convs.1.conv_offset.weight, layer3.21.convs.1.conv_offset.bias, layer3.21.convs.2.conv_offset.weight, layer3.21.convs.2.conv_offset.bias, layer3.22.convs.0.conv_offset.weight, layer3.22.convs.0.conv_offset.bias, layer3.22.convs.1.conv_offset.weight, layer3.22.convs.1.conv_offset.bias, layer3.22.convs.2.conv_offset.weight, layer3.22.convs.2.conv_offset.bias, layer4.0.convs.0.conv_offset.weight, layer4.0.convs.0.conv_offset.bias, layer4.0.convs.1.conv_offset.weight, layer4.0.convs.1.conv_offset.bias, layer4.0.convs.2.conv_offset.weight, layer4.0.convs.2.conv_offset.bias, layer4.1.convs.0.conv_offset.weight, layer4.1.convs.0.conv_offset.bias, layer4.1.convs.1.conv_offset.weight, layer4.1.convs.1.conv_offset.bias, layer4.1.convs.2.conv_offset.weight, layer4.1.convs.2.conv_offset.bias, layer4.2.convs.0.conv_offset.weight, layer4.2.convs.0.conv_offset.bias, layer4.2.convs.1.conv_offset.weight, layer4.2.convs.1.conv_offset.bias, layer4.2.convs.2.conv_offset.weight, layer4.2.convs.2.conv_offset.bias`
Does this problem affect training results?
Well, by epoch 11, the loss suddenly changed to NaN, but the earlier training seemed to be OK.
2020-09-07 09:11:23,801 - mmdet - INFO - Epoch [11][5250/6296] lr: 2.500e-03, eta: 5:16:52, time: 0.475, data_time: 0.003, memory: 4115, loss_rpn_cls: 0.8262, loss_rpn_bbox: 0.0616, s0.loss_cls: nan, s0.acc: 44.7031, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 46.4556, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 46.6877, s2.loss_bbox: nan, loss: nan
2020-09-07 09:11:48,830 - mmdet - INFO - Epoch [11][5300/6296] lr: 2.500e-03, eta: 5:16:28, time: 0.501, data_time: 0.004, memory: 4115, loss_rpn_cls: 0.6458, loss_rpn_bbox: 0.0278, s0.loss_cls: nan, s0.acc: 0.0840, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 1.0774, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 0.9762, s2.loss_bbox: nan, loss: nan
2020-09-07 09:12:13,004 - mmdet - INFO - Epoch [11][5350/6296] lr: 2.500e-03, eta: 5:16:03, time: 0.484, data_time: 0.004, memory: 4115, loss_rpn_cls: 0.6098, loss_rpn_bbox: 0.0303, s0.loss_cls: nan, s0.acc: 0.1230, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 2.4286, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 2.7786, s2.loss_bbox: nan, loss: nan
2020-09-07 09:12:36,075 - mmdet - INFO - Epoch [11][5400/6296] lr: 2.500e-03, eta: 5:15:38, time: 0.461, data_time: 0.003, memory: 4115, loss_rpn_cls: 0.5786, loss_rpn_bbox: 0.0336, s0.loss_cls: nan, s0.acc: 0.1055, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 1.2956, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 2.5167, s2.loss_bbox: nan, loss: nan
2020-09-07 09:12:59,321 - mmdet - INFO - Epoch [11][5450/6296] lr: 2.500e-03, eta: 5:15:13, time: 0.465, data_time: 0.003, memory: 4115, loss_rpn_cls: 0.5501, loss_rpn_bbox: 0.0312, s0.loss_cls: nan, s0.acc: 0.0859, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 0.9762, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 1.5762, s2.loss_bbox: nan, loss: nan
2020-09-07 09:13:22,112 - mmdet - INFO - Epoch [11][5500/6296] lr: 2.500e-03, eta: 5:14:47, time: 0.455, data_time: 0.003, memory: 4115, loss_rpn_cls: 0.5181, loss_rpn_bbox: 0.0251, s0.loss_cls: nan, s0.acc: 0.0977, s0.loss_bbox: nan, s1.loss_cls: nan, s1.acc: 3.1778, s1.loss_bbox: nan, s2.loss_cls: nan, s2.acc: 2.8944, s2.loss_bbox: nan, loss: nan
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
A smaller learning rate may address the NaN problem.
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
I calculated the learning rate according to the official formula, and I didn't have this problem with other backbones.
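(Side note on the NaN itself: one mitigation that appears in many mmdet configs, sketched here as a suggestion rather than anything prescribed in this thread, is gradient clipping in the optimizer config:)
# clip gradients to tame occasional exploding updates (common mmdet setting)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))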
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
They changed the download path. You can download the model from here: https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
they changed the download path.you can download the model from here(https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth)
Thanks, I will try again!
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
they changed the download path.you can download the model from here(https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth)
Hello, I have a question. Does res2net add DCN like this?
model = dict(
    type='CascadeRCNN',
    pretrained='data/pre/res2net101.pth',
    backbone=dict(
        type='Res2Net', depth=101, scales=4, base_width=26,
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
I used your model, but the same warning (unexpected/missing keys, as in my first post) still appears.
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
they changed the download path.you can download the model from here(https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth)
Hello, I have a question. Does res2net add DCN like this?
model = dict( type='CascadeRCNN', pretrained='data/pre/res2net101.pth', backbone=dict( type='Res2Net', depth=101, scales=4, base_width=26, dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), stage_with_dcn=(False, True, True, True)),
I use your model, but the problem also appear
`mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.0.weight, conv1.1.weight, conv1.1.bias, conv1.1.running_mean, conv1.1.running_var, conv1.1.num_batches_tracked, conv1.3.weight, conv1.4.weight, conv1.4.bias, conv1.4.running_mean, conv1.4.running_var, conv1.4.num_batches_tracked, conv1.6.weight, bn1.weight, bn1.bias, bn1.running_mean, bn1.running_var, bn1.num_batches_tracked, fc.weight, fc.bias
missing keys in source state_dict: stem.0.weight, stem.1.weight, stem.1.bias, stem.1.running_mean, stem.1.running_var, stem.3.weight, stem.4.weight, stem.4.bias, stem.4.running_mean, stem.4.running_var, stem.6.weight, stem.7.weight, stem.7.bias, stem.7.running_mean, stem.7.running_var, layer2.0.convs.0.conv_offset.weight, layer2.0.convs.0.conv_offset.bias, layer2.0.convs.1.conv_offset.weight, layer2.0.convs.1.conv_offset.bias, layer2.0.convs.2.conv_offset.weight, layer2.0.convs.2.conv_offset.bias, layer2.1.convs.0.conv_offset.weight, layer2.1.convs.0.conv_offset.bias, layer2.1.convs.1.conv_offset.weight, layer2.1.convs.1.conv_offset.bias, layer2.1.convs.2.conv_offset.weight, layer2.1.convs.2.conv_offset.bias, layer2.2.convs.0.conv_offset.weight, layer2.2.convs.0.conv_offset.bias, layer2.2.convs.1.conv_offset.weight, layer2.2.convs.1.conv_offset.bias, layer2.2.convs.2.conv_offset.weight, layer2.2.convs.2.conv_offset.bias, layer2.3.convs.0.conv_offset.weight, layer2.3.convs.0.conv_offset.bias, layer2.3.convs.1.conv_offset.weight, layer2.3.convs.1.conv_offset.bias, layer2.3.convs.2.conv_offset.weight, layer2.3.convs.2.conv_offset.bias, layer3.0.convs.0.conv_offset.weight, layer3.0.convs.0.conv_offset.bias, layer3.0.convs.1.conv_offset.weight, layer3.0.convs.1.conv_offset.bias, layer3.0.convs.2.conv_offset.weight, layer3.0.convs.2.conv_offset.bias, layer3.1.convs.0.conv_offset.weight, layer3.1.convs.0.conv_offset.bias, layer3.1.convs.1.conv_offset.weight, layer3.1.convs.1.conv_offset.bias, layer3.1.convs.2.conv_offset.weight, layer3.1.convs.2.conv_offset.bias, layer3.2.convs.0.conv_offset.weight, layer3.2.convs.0.conv_offset.bias, layer3.2.convs.1.conv_offset.weight, layer3.2.convs.1.conv_offset.bias, layer3.2.convs.2.conv_offset.weight, layer3.2.convs.2.conv_offset.bias, layer3.3.convs.0.conv_offset.weight, layer3.3.convs.0.conv_offset.bias, layer3.3.convs.1.conv_offset.weight, layer3.3.convs.1.conv_offset.bias, layer3.3.convs.2.conv_offset.weight, layer3.3.convs.2.conv_offset.bias, layer3.4.convs.0.conv_offset.weight, layer3.4.convs.0.conv_offset.bias, layer3.4.convs.1.conv_offset.weight, layer3.4.convs.1.conv_offset.bias, layer3.4.convs.2.conv_offset.weight, layer3.4.convs.2.conv_offset.bias, layer3.5.convs.0.conv_offset.weight, layer3.5.convs.0.conv_offset.bias, layer3.5.convs.1.conv_offset.weight, layer3.5.convs.1.conv_offset.bias, layer3.5.convs.2.conv_offset.weight, layer3.5.convs.2.conv_offset.bias, layer3.6.convs.0.conv_offset.weight, layer3.6.convs.0.conv_offset.bias, layer3.6.convs.1.conv_offset.weight, layer3.6.convs.1.conv_offset.bias, layer3.6.convs.2.conv_offset.weight, layer3.6.convs.2.conv_offset.bias, layer3.7.convs.0.conv_offset.weight, layer3.7.convs.0.conv_offset.bias, layer3.7.convs.1.conv_offset.weight, layer3.7.convs.1.conv_offset.bias, layer3.7.convs.2.conv_offset.weight, layer3.7.convs.2.conv_offset.bias, layer3.8.convs.0.conv_offset.weight, layer3.8.convs.0.conv_offset.bias, layer3.8.convs.1.conv_offset.weight, layer3.8.convs.1.conv_offset.bias, layer3.8.convs.2.conv_offset.weight, layer3.8.convs.2.conv_offset.bias, layer3.9.convs.0.conv_offset.weight, layer3.9.convs.0.conv_offset.bias, layer3.9.convs.1.conv_offset.weight, layer3.9.convs.1.conv_offset.bias, layer3.9.convs.2.conv_offset.weight, layer3.9.convs.2.conv_offset.bias, layer3.10.convs.0.conv_offset.weight, layer3.10.convs.0.conv_offset.bias, layer3.10.convs.1.conv_offset.weight, layer3.10.convs.1.conv_offset.bias, layer3.10.convs.2.conv_offset.weight, layer3.10.convs.2.conv_offset.bias, 
layer3.11.convs.0.conv_offset.weight, layer3.11.convs.0.conv_offset.bias, layer3.11.convs.1.conv_offset.weight, layer3.11.convs.1.conv_offset.bias, layer3.11.convs.2.conv_offset.weight, layer3.11.convs.2.conv_offset.bias, layer3.12.convs.0.conv_offset.weight, layer3.12.convs.0.conv_offset.bias, layer3.12.convs.1.conv_offset.weight, layer3.12.convs.1.conv_offset.bias, layer3.12.convs.2.conv_offset.weight, layer3.12.convs.2.conv_offset.bias, layer3.13.convs.0.conv_offset.weight, layer3.13.convs.0.conv_offset.bias, layer3.13.convs.1.conv_offset.weight, layer3.13.convs.1.conv_offset.bias, layer3.13.convs.2.conv_offset.weight, layer3.13.convs.2.conv_offset.bias, layer3.14.convs.0.conv_offset.weight, layer3.14.convs.0.conv_offset.bias, layer3.14.convs.1.conv_offset.weight, layer3.14.convs.1.conv_offset.bias, layer3.14.convs.2.conv_offset.weight, layer3.14.convs.2.conv_offset.bias, layer3.15.convs.0.conv_offset.weight, layer3.15.convs.0.conv_offset.bias, layer3.15.convs.1.conv_offset.weight, layer3.15.convs.1.conv_offset.bias, layer3.15.convs.2.conv_offset.weight, layer3.15.convs.2.conv_offset.bias, layer3.16.convs.0.conv_offset.weight, layer3.16.convs.0.conv_offset.bias, layer3.16.convs.1.conv_offset.weight, layer3.16.convs.1.conv_offset.bias, layer3.16.convs.2.conv_offset.weight, layer3.16.convs.2.conv_offset.bias, layer3.17.convs.0.conv_offset.weight, layer3.17.convs.0.conv_offset.bias, layer3.17.convs.1.conv_offset.weight, layer3.17.convs.1.conv_offset.bias, layer3.17.convs.2.conv_offset.weight, layer3.17.convs.2.conv_offset.bias, layer3.18.convs.0.conv_offset.weight, layer3.18.convs.0.conv_offset.bias, layer3.18.convs.1.conv_offset.weight, layer3.18.convs.1.conv_offset.bias, layer3.18.convs.2.conv_offset.weight, layer3.18.convs.2.conv_offset.bias, layer3.19.convs.0.conv_offset.weight, layer3.19.convs.0.conv_offset.bias, layer3.19.convs.1.conv_offset.weight, layer3.19.convs.1.conv_offset.bias, layer3.19.convs.2.conv_offset.weight, layer3.19.convs.2.conv_offset.bias, layer3.20.convs.0.conv_offset.weight, layer3.20.convs.0.conv_offset.bias, layer3.20.convs.1.conv_offset.weight, layer3.20.convs.1.conv_offset.bias, layer3.20.convs.2.conv_offset.weight, layer3.20.convs.2.conv_offset.bias, layer3.21.convs.0.conv_offset.weight, layer3.21.convs.0.conv_offset.bias, layer3.21.convs.1.conv_offset.weight, layer3.21.convs.1.conv_offset.bias, layer3.21.convs.2.conv_offset.weight, layer3.21.convs.2.conv_offset.bias, layer3.22.convs.0.conv_offset.weight, layer3.22.convs.0.conv_offset.bias, layer3.22.convs.1.conv_offset.weight, layer3.22.convs.1.conv_offset.bias, layer3.22.convs.2.conv_offset.weight, layer3.22.convs.2.conv_offset.bias, layer4.0.convs.0.conv_offset.weight, layer4.0.convs.0.conv_offset.bias, layer4.0.convs.1.conv_offset.weight, layer4.0.convs.1.conv_offset.bias, layer4.0.convs.2.conv_offset.weight, layer4.0.convs.2.conv_offset.bias, layer4.1.convs.0.conv_offset.weight, layer4.1.convs.0.conv_offset.bias, layer4.1.convs.1.conv_offset.weight, layer4.1.convs.1.conv_offset.bias, layer4.1.convs.2.conv_offset.weight, layer4.1.convs.2.conv_offset.bias, layer4.2.convs.0.conv_offset.weight, layer4.2.convs.0.conv_offset.bias, layer4.2.convs.1.conv_offset.weight, layer4.2.convs.1.conv_offset.bias, layer4.2.convs.2.conv_offset.weight, layer4.2.convs.2.conv_offset.bias``
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
    pretrained='open-mmlab://res2net101_v1d_26w_4s',
    backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26,
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
they changed the download path.you can download the model from here(https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth)
Hello, I have a question. Does res2net add DCN like this?
model = dict( type='CascadeRCNN', pretrained='data/pre/res2net101.pth', backbone=dict( type='Res2Net', depth=101, scales=4, base_width=26, dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), stage_with_dcn=(False, True, True, True)),
I use your model, but the problem also appear
`mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.0.weight, conv1.1.weight, conv1.1.bias, conv1.1.running_mean, conv1.1.running_var, conv1.1.num_batches_tracked, conv1.3.weight, conv1.4.weight, conv1.4.bias, conv1.4.running_mean, conv1.4.running_var, conv1.4.num_batches_tracked, conv1.6.weight, bn1.weight, bn1.bias, bn1.running_mean, bn1.running_var, bn1.num_batches_tracked, fc.weight, fc.bias
missing keys in source state_dict: stem.0.weight, stem.1.weight, stem.1.bias, stem.1.running_mean, stem.1.running_var, stem.3.weight, stem.4.weight, stem.4.bias, stem.4.running_mean, stem.4.running_var, stem.6.weight, stem.7.weight, stem.7.bias, stem.7.running_mean, stem.7.running_var, layer2.0.convs.0.conv_offset.weight, layer2.0.convs.0.conv_offset.bias, layer2.0.convs.1.conv_offset.weight, layer2.0.convs.1.conv_offset.bias, layer2.0.convs.2.conv_offset.weight, layer2.0.convs.2.conv_offset.bias, layer2.1.convs.0.conv_offset.weight, layer2.1.convs.0.conv_offset.bias, layer2.1.convs.1.conv_offset.weight, layer2.1.convs.1.conv_offset.bias, layer2.1.convs.2.conv_offset.weight, layer2.1.convs.2.conv_offset.bias, layer2.2.convs.0.conv_offset.weight, layer2.2.convs.0.conv_offset.bias, layer2.2.convs.1.conv_offset.weight, layer2.2.convs.1.conv_offset.bias, layer2.2.convs.2.conv_offset.weight, layer2.2.convs.2.conv_offset.bias, layer2.3.convs.0.conv_offset.weight, layer2.3.convs.0.conv_offset.bias, layer2.3.convs.1.conv_offset.weight, layer2.3.convs.1.conv_offset.bias, layer2.3.convs.2.conv_offset.weight, layer2.3.convs.2.conv_offset.bias, layer3.0.convs.0.conv_offset.weight, layer3.0.convs.0.conv_offset.bias, layer3.0.convs.1.conv_offset.weight, layer3.0.convs.1.conv_offset.bias, layer3.0.convs.2.conv_offset.weight, layer3.0.convs.2.conv_offset.bias, layer3.1.convs.0.conv_offset.weight, layer3.1.convs.0.conv_offset.bias, layer3.1.convs.1.conv_offset.weight, layer3.1.convs.1.conv_offset.bias, layer3.1.convs.2.conv_offset.weight, layer3.1.convs.2.conv_offset.bias, layer3.2.convs.0.conv_offset.weight, layer3.2.convs.0.conv_offset.bias, layer3.2.convs.1.conv_offset.weight, layer3.2.convs.1.conv_offset.bias, layer3.2.convs.2.conv_offset.weight, layer3.2.convs.2.conv_offset.bias, layer3.3.convs.0.conv_offset.weight, layer3.3.convs.0.conv_offset.bias, layer3.3.convs.1.conv_offset.weight, layer3.3.convs.1.conv_offset.bias, layer3.3.convs.2.conv_offset.weight, layer3.3.convs.2.conv_offset.bias, layer3.4.convs.0.conv_offset.weight, layer3.4.convs.0.conv_offset.bias, layer3.4.convs.1.conv_offset.weight, layer3.4.convs.1.conv_offset.bias, layer3.4.convs.2.conv_offset.weight, layer3.4.convs.2.conv_offset.bias, layer3.5.convs.0.conv_offset.weight, layer3.5.convs.0.conv_offset.bias, layer3.5.convs.1.conv_offset.weight, layer3.5.convs.1.conv_offset.bias, layer3.5.convs.2.conv_offset.weight, layer3.5.convs.2.conv_offset.bias, layer3.6.convs.0.conv_offset.weight, layer3.6.convs.0.conv_offset.bias, layer3.6.convs.1.conv_offset.weight, layer3.6.convs.1.conv_offset.bias, layer3.6.convs.2.conv_offset.weight, layer3.6.convs.2.conv_offset.bias, layer3.7.convs.0.conv_offset.weight, layer3.7.convs.0.conv_offset.bias, layer3.7.convs.1.conv_offset.weight, layer3.7.convs.1.conv_offset.bias, layer3.7.convs.2.conv_offset.weight, layer3.7.convs.2.conv_offset.bias, layer3.8.convs.0.conv_offset.weight, layer3.8.convs.0.conv_offset.bias, layer3.8.convs.1.conv_offset.weight, layer3.8.convs.1.conv_offset.bias, layer3.8.convs.2.conv_offset.weight, layer3.8.convs.2.conv_offset.bias, layer3.9.convs.0.conv_offset.weight, layer3.9.convs.0.conv_offset.bias, layer3.9.convs.1.conv_offset.weight, layer3.9.convs.1.conv_offset.bias, layer3.9.convs.2.conv_offset.weight, layer3.9.convs.2.conv_offset.bias, layer3.10.convs.0.conv_offset.weight, layer3.10.convs.0.conv_offset.bias, layer3.10.convs.1.conv_offset.weight, layer3.10.convs.1.conv_offset.bias, layer3.10.convs.2.conv_offset.weight, layer3.10.convs.2.conv_offset.bias, 
layer3.11.convs.0.conv_offset.weight, layer3.11.convs.0.conv_offset.bias, layer3.11.convs.1.conv_offset.weight, layer3.11.convs.1.conv_offset.bias, layer3.11.convs.2.conv_offset.weight, layer3.11.convs.2.conv_offset.bias, layer3.12.convs.0.conv_offset.weight, layer3.12.convs.0.conv_offset.bias, layer3.12.convs.1.conv_offset.weight, layer3.12.convs.1.conv_offset.bias, layer3.12.convs.2.conv_offset.weight, layer3.12.convs.2.conv_offset.bias, layer3.13.convs.0.conv_offset.weight, layer3.13.convs.0.conv_offset.bias, layer3.13.convs.1.conv_offset.weight, layer3.13.convs.1.conv_offset.bias, layer3.13.convs.2.conv_offset.weight, layer3.13.convs.2.conv_offset.bias, layer3.14.convs.0.conv_offset.weight, layer3.14.convs.0.conv_offset.bias, layer3.14.convs.1.conv_offset.weight, layer3.14.convs.1.conv_offset.bias, layer3.14.convs.2.conv_offset.weight, layer3.14.convs.2.conv_offset.bias, layer3.15.convs.0.conv_offset.weight, layer3.15.convs.0.conv_offset.bias, layer3.15.convs.1.conv_offset.weight, layer3.15.convs.1.conv_offset.bias, layer3.15.convs.2.conv_offset.weight, layer3.15.convs.2.conv_offset.bias, layer3.16.convs.0.conv_offset.weight, layer3.16.convs.0.conv_offset.bias, layer3.16.convs.1.conv_offset.weight, layer3.16.convs.1.conv_offset.bias, layer3.16.convs.2.conv_offset.weight, layer3.16.convs.2.conv_offset.bias, layer3.17.convs.0.conv_offset.weight, layer3.17.convs.0.conv_offset.bias, layer3.17.convs.1.conv_offset.weight, layer3.17.convs.1.conv_offset.bias, layer3.17.convs.2.conv_offset.weight, layer3.17.convs.2.conv_offset.bias, layer3.18.convs.0.conv_offset.weight, layer3.18.convs.0.conv_offset.bias, layer3.18.convs.1.conv_offset.weight, layer3.18.convs.1.conv_offset.bias, layer3.18.convs.2.conv_offset.weight, layer3.18.convs.2.conv_offset.bias, layer3.19.convs.0.conv_offset.weight, layer3.19.convs.0.conv_offset.bias, layer3.19.convs.1.conv_offset.weight, layer3.19.convs.1.conv_offset.bias, layer3.19.convs.2.conv_offset.weight, layer3.19.convs.2.conv_offset.bias, layer3.20.convs.0.conv_offset.weight, layer3.20.convs.0.conv_offset.bias, layer3.20.convs.1.conv_offset.weight, layer3.20.convs.1.conv_offset.bias, layer3.20.convs.2.conv_offset.weight, layer3.20.convs.2.conv_offset.bias, layer3.21.convs.0.conv_offset.weight, layer3.21.convs.0.conv_offset.bias, layer3.21.convs.1.conv_offset.weight, layer3.21.convs.1.conv_offset.bias, layer3.21.convs.2.conv_offset.weight, layer3.21.convs.2.conv_offset.bias, layer3.22.convs.0.conv_offset.weight, layer3.22.convs.0.conv_offset.bias, layer3.22.convs.1.conv_offset.weight, layer3.22.convs.1.conv_offset.bias, layer3.22.convs.2.conv_offset.weight, layer3.22.convs.2.conv_offset.bias, layer4.0.convs.0.conv_offset.weight, layer4.0.convs.0.conv_offset.bias, layer4.0.convs.1.conv_offset.weight, layer4.0.convs.1.conv_offset.bias, layer4.0.convs.2.conv_offset.weight, layer4.0.convs.2.conv_offset.bias, layer4.1.convs.0.conv_offset.weight, layer4.1.convs.0.conv_offset.bias, layer4.1.convs.1.conv_offset.weight, layer4.1.convs.1.conv_offset.bias, layer4.1.convs.2.conv_offset.weight, layer4.1.convs.2.conv_offset.bias, layer4.2.convs.0.conv_offset.weight, layer4.2.convs.0.conv_offset.bias, layer4.2.convs.1.conv_offset.weight, layer4.2.convs.1.conv_offset.bias, layer4.2.convs.2.conv_offset.weight, layer4.2.convs.2.conv_offset.bias``
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
pretrained='open-mmlab://res2net101_v1d_26w_4s',
backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26,
dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
stage_with_dcn=(False, True, True, True)),
It works normally for me
"res2net101_v1d_26w_4s": "https://openmmlab.ossaccelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth"
Sorry, I can't open the link.
they changed the download path.you can download the model from here(https://openmmlab.oss-accelerate.aliyuncs.com/pretrain/third_party/res2net101_v1d_26w_4s_mmdetv2-f0a600f9.pth)
Hello, I have a question. Does res2net add DCN like this?
model = dict( type='CascadeRCNN', pretrained='data/pre/res2net101.pth', backbone=dict( type='Res2Net', depth=101, scales=4, base_width=26, dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False), stage_with_dcn=(False, True, True, True)),
I use your model, but the problem also appear
`mmdet - WARNING - The model and loaded state dict do not match exactly
unexpected key in source state_dict: conv1.0.weight, conv1.1.weight, conv1.1.bias, conv1.1.running_mean, conv1.1.running_var, conv1.1.num_batches_tracked, conv1.3.weight, conv1.4.weight, conv1.4.bias, conv1.4.running_mean, conv1.4.running_var, conv1.4.num_batches_tracked, conv1.6.weight, bn1.weight, bn1.bias, bn1.running_mean, bn1.running_var, bn1.num_batches_tracked, fc.weight, fc.bias
missing keys in source state_dict: stem.0.weight, stem.1.weight, stem.1.bias, stem.1.running_mean, stem.1.running_var, stem.3.weight, stem.4.weight, stem.4.bias, stem.4.running_mean, stem.4.running_var, stem.6.weight, stem.7.weight, stem.7.bias, stem.7.running_mean, stem.7.running_var, layer2.0.convs.0.conv_offset.weight, layer2.0.convs.0.conv_offset.bias, layer2.0.convs.1.conv_offset.weight, layer2.0.convs.1.conv_offset.bias, layer2.0.convs.2.conv_offset.weight, layer2.0.convs.2.conv_offset.bias, layer2.1.convs.0.conv_offset.weight, layer2.1.convs.0.conv_offset.bias, layer2.1.convs.1.conv_offset.weight, layer2.1.convs.1.conv_offset.bias, layer2.1.convs.2.conv_offset.weight, layer2.1.convs.2.conv_offset.bias, layer2.2.convs.0.conv_offset.weight, layer2.2.convs.0.conv_offset.bias, layer2.2.convs.1.conv_offset.weight, layer2.2.convs.1.conv_offset.bias, layer2.2.convs.2.conv_offset.weight, layer2.2.convs.2.conv_offset.bias, layer2.3.convs.0.conv_offset.weight, layer2.3.convs.0.conv_offset.bias, layer2.3.convs.1.conv_offset.weight, layer2.3.convs.1.conv_offset.bias, layer2.3.convs.2.conv_offset.weight, layer2.3.convs.2.conv_offset.bias, layer3.0.convs.0.conv_offset.weight, layer3.0.convs.0.conv_offset.bias, layer3.0.convs.1.conv_offset.weight, layer3.0.convs.1.conv_offset.bias, layer3.0.convs.2.conv_offset.weight, layer3.0.convs.2.conv_offset.bias, layer3.1.convs.0.conv_offset.weight, layer3.1.convs.0.conv_offset.bias, layer3.1.convs.1.conv_offset.weight, layer3.1.convs.1.conv_offset.bias, layer3.1.convs.2.conv_offset.weight, layer3.1.convs.2.conv_offset.bias, layer3.2.convs.0.conv_offset.weight, layer3.2.convs.0.conv_offset.bias, layer3.2.convs.1.conv_offset.weight, layer3.2.convs.1.conv_offset.bias, layer3.2.convs.2.conv_offset.weight, layer3.2.convs.2.conv_offset.bias, layer3.3.convs.0.conv_offset.weight, layer3.3.convs.0.conv_offset.bias, layer3.3.convs.1.conv_offset.weight, layer3.3.convs.1.conv_offset.bias, layer3.3.convs.2.conv_offset.weight, layer3.3.convs.2.conv_offset.bias, layer3.4.convs.0.conv_offset.weight, layer3.4.convs.0.conv_offset.bias, layer3.4.convs.1.conv_offset.weight, layer3.4.convs.1.conv_offset.bias, layer3.4.convs.2.conv_offset.weight, layer3.4.convs.2.conv_offset.bias, layer3.5.convs.0.conv_offset.weight, layer3.5.convs.0.conv_offset.bias, layer3.5.convs.1.conv_offset.weight, layer3.5.convs.1.conv_offset.bias, layer3.5.convs.2.conv_offset.weight, layer3.5.convs.2.conv_offset.bias, layer3.6.convs.0.conv_offset.weight, layer3.6.convs.0.conv_offset.bias, layer3.6.convs.1.conv_offset.weight, layer3.6.convs.1.conv_offset.bias, layer3.6.convs.2.conv_offset.weight, layer3.6.convs.2.conv_offset.bias, layer3.7.convs.0.conv_offset.weight, layer3.7.convs.0.conv_offset.bias, layer3.7.convs.1.conv_offset.weight, layer3.7.convs.1.conv_offset.bias, layer3.7.convs.2.conv_offset.weight, layer3.7.convs.2.conv_offset.bias, layer3.8.convs.0.conv_offset.weight, layer3.8.convs.0.conv_offset.bias, layer3.8.convs.1.conv_offset.weight, layer3.8.convs.1.conv_offset.bias, layer3.8.convs.2.conv_offset.weight, layer3.8.convs.2.conv_offset.bias, layer3.9.convs.0.conv_offset.weight, layer3.9.convs.0.conv_offset.bias, layer3.9.convs.1.conv_offset.weight, layer3.9.convs.1.conv_offset.bias, layer3.9.convs.2.conv_offset.weight, layer3.9.convs.2.conv_offset.bias, layer3.10.convs.0.conv_offset.weight, layer3.10.convs.0.conv_offset.bias, layer3.10.convs.1.conv_offset.weight, layer3.10.convs.1.conv_offset.bias, layer3.10.convs.2.conv_offset.weight, layer3.10.convs.2.conv_offset.bias, 
layer3.11.convs.0.conv_offset.weight, layer3.11.convs.0.conv_offset.bias, layer3.11.convs.1.conv_offset.weight, layer3.11.convs.1.conv_offset.bias, layer3.11.convs.2.conv_offset.weight, layer3.11.convs.2.conv_offset.bias, layer3.12.convs.0.conv_offset.weight, layer3.12.convs.0.conv_offset.bias, layer3.12.convs.1.conv_offset.weight, layer3.12.convs.1.conv_offset.bias, layer3.12.convs.2.conv_offset.weight, layer3.12.convs.2.conv_offset.bias, layer3.13.convs.0.conv_offset.weight, layer3.13.convs.0.conv_offset.bias, layer3.13.convs.1.conv_offset.weight, layer3.13.convs.1.conv_offset.bias, layer3.13.convs.2.conv_offset.weight, layer3.13.convs.2.conv_offset.bias, layer3.14.convs.0.conv_offset.weight, layer3.14.convs.0.conv_offset.bias, layer3.14.convs.1.conv_offset.weight, layer3.14.convs.1.conv_offset.bias, layer3.14.convs.2.conv_offset.weight, layer3.14.convs.2.conv_offset.bias, layer3.15.convs.0.conv_offset.weight, layer3.15.convs.0.conv_offset.bias, layer3.15.convs.1.conv_offset.weight, layer3.15.convs.1.conv_offset.bias, layer3.15.convs.2.conv_offset.weight, layer3.15.convs.2.conv_offset.bias, layer3.16.convs.0.conv_offset.weight, layer3.16.convs.0.conv_offset.bias, layer3.16.convs.1.conv_offset.weight, layer3.16.convs.1.conv_offset.bias, layer3.16.convs.2.conv_offset.weight, layer3.16.convs.2.conv_offset.bias, layer3.17.convs.0.conv_offset.weight, layer3.17.convs.0.conv_offset.bias, layer3.17.convs.1.conv_offset.weight, layer3.17.convs.1.conv_offset.bias, layer3.17.convs.2.conv_offset.weight, layer3.17.convs.2.conv_offset.bias, layer3.18.convs.0.conv_offset.weight, layer3.18.convs.0.conv_offset.bias, layer3.18.convs.1.conv_offset.weight, layer3.18.convs.1.conv_offset.bias, layer3.18.convs.2.conv_offset.weight, layer3.18.convs.2.conv_offset.bias, layer3.19.convs.0.conv_offset.weight, layer3.19.convs.0.conv_offset.bias, layer3.19.convs.1.conv_offset.weight, layer3.19.convs.1.conv_offset.bias, layer3.19.convs.2.conv_offset.weight, layer3.19.convs.2.conv_offset.bias, layer3.20.convs.0.conv_offset.weight, layer3.20.convs.0.conv_offset.bias, layer3.20.convs.1.conv_offset.weight, layer3.20.convs.1.conv_offset.bias, layer3.20.convs.2.conv_offset.weight, layer3.20.convs.2.conv_offset.bias, layer3.21.convs.0.conv_offset.weight, layer3.21.convs.0.conv_offset.bias, layer3.21.convs.1.conv_offset.weight, layer3.21.convs.1.conv_offset.bias, layer3.21.convs.2.conv_offset.weight, layer3.21.convs.2.conv_offset.bias, layer3.22.convs.0.conv_offset.weight, layer3.22.convs.0.conv_offset.bias, layer3.22.convs.1.conv_offset.weight, layer3.22.convs.1.conv_offset.bias, layer3.22.convs.2.conv_offset.weight, layer3.22.convs.2.conv_offset.bias, layer4.0.convs.0.conv_offset.weight, layer4.0.convs.0.conv_offset.bias, layer4.0.convs.1.conv_offset.weight, layer4.0.convs.1.conv_offset.bias, layer4.0.convs.2.conv_offset.weight, layer4.0.convs.2.conv_offset.bias, layer4.1.convs.0.conv_offset.weight, layer4.1.convs.0.conv_offset.bias, layer4.1.convs.1.conv_offset.weight, layer4.1.convs.1.conv_offset.bias, layer4.1.convs.2.conv_offset.weight, layer4.1.convs.2.conv_offset.bias, layer4.2.convs.0.conv_offset.weight, layer4.2.convs.0.conv_offset.bias, layer4.2.convs.1.conv_offset.weight, layer4.2.convs.1.conv_offset.bias, layer4.2.convs.2.conv_offset.weight, layer4.2.convs.2.conv_offset.bias``
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
pretrained='open-mmlab://res2net101_v1d_26w_4s',
backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26,
dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
stage_with_dcn=(False, True, True, True)),
It works normally for me
Hello, I have trained it, but when I test, I get this error:
'Bottle2neck' object has no attribute 'conv2'
How can I test? Thanks!
@sky-fly97 #3007 #2237
I've commented out this line, and I see it's also done there, but I don't know if the result is right:
mmdetection/mmdet/models/backbones/res2net.py, line 100 in c298a0a: delattr(self, 'conv2')
I have fixed this problem; you can download the latest version (master) to test your own model.
Thanks~
@sky-fly97
I commented out the code below in init_weights() of resnet.py for testing. The test result is good.
if self.dcn is not None:
    for m in self.modules():
        if isinstance(m, Bottleneck) and hasattr(
                m.conv2, 'conv_offset'):
            constant_init(m.conv2.conv_offset, 0)
Looks like that it has been solved.
Could the config above download the Res2Net pre-trained model with DCN? If I want to download a pre-trained model without DCN, is it like the following code, just deleting one line?
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
    pretrained='open-mmlab://res2net101_v1d_26w_4s',
    backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26,
        stage_with_dcn=(False, True, True, True)),
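(For reference: a sketch of a fully plain variant in the same config style. The pretrained checkpoint is identical in both cases, since the conv_offset layers appear under "missing keys" in the warning above, i.e. they are initialized fresh rather than loaded; and stage_with_dcn has no effect when dcn is unset, so both lines can simply be dropped:)
_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py'
model = dict(
    pretrained='open-mmlab://res2net101_v1d_26w_4s',
    backbone=dict(type='Res2Net', depth=101, scales=4, base_width=26))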
|
gharchive/issue
| 2020-09-06T15:57:19 |
2025-04-01T04:35:18.951668
|
{
"authors": [
"forever-rz",
"hellock",
"minan19605",
"sky-fly97",
"yuzhj"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/3699",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
701972292
|
AP = -1.0
I use mmdetection to train my custom dataset, the console output is:
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.438
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.735
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.476
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.399
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.372
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.639
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.639
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.639
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.633
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.583
Why is the fourth AP -1.000? What are the possible reasons?
There are no small bboxes (area <= 32*32) in your test dataset.
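A quick way to verify that (a sketch assuming COCO-style test annotations; the path is illustrative):
import json

with open('annotations/instances_test.json') as f:
    coco = json.load(f)

# COCO's 'small' area range is [0, 32*32]
small = [a for a in coco['annotations'] if a['area'] <= 32 ** 2]
print(f"small boxes: {len(small)} / {len(coco['annotations'])}")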
|
gharchive/issue
| 2020-09-15T14:19:29 |
2025-04-01T04:35:18.957922
|
{
"authors": [
"sakurasakura1996",
"shinya7y"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/3775",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
941273320
|
Why is the grad_fn "AddBackward0" instead of "CudnnConvolutionBackward"
Hi, can you help me?
I am debugging the mmdetection source code with pdb. While stepping through the FPN code, I found some strange debug info; see the snapshot picture (not reproduced here). As the picture shows, self.lateral_convs[0] is a Conv2d, and the tensor inputs[1] is fed into it, so I would expect the result's grad_fn to be CudnnConvolutionBackward. Why is it AddBackward0?
The code is in fpn.py, lines 151-159:
# build laterals
laterals = [
    lateral_conv(inputs[i + self.start_level])
    for i, lateral_conv in enumerate(self.lateral_convs)
]
bias: the conv output has the bias added at the end, and autograd records that last addition, hence AddBackward0.
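A minimal sketch of that decomposition (the exact op names vary across PyTorch versions, but the idea is the same):
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(8, 4, 3, padding=1)
x = torch.randn(1, 8, 16, 16)

# the convolution alone: grad_fn is the conv backward node
y = F.conv2d(x, conv.weight, bias=None, padding=1)
print(y.grad_fn)  # e.g. CudnnConvolutionBackward / ConvolutionBackward0

# the layer's output ends with '+ bias', the last op autograd records
z = y + conv.bias.view(1, -1, 1, 1)
print(z.grad_fn)  # AddBackward0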
Feel free to reopen the issue if there is any question
|
gharchive/issue
| 2021-07-10T15:36:11 |
2025-04-01T04:35:18.960916
|
{
"authors": [
"johnson-magic",
"jshilong"
],
"repo": "open-mmlab/mmdetection",
"url": "https://github.com/open-mmlab/mmdetection/issues/5584",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1206366000
|
Some questions about votenet
Hi, when I tested votenet using your officially released checkpoint (dataset: sunrgbd), the result is only 53.81, as follows:
May I ask what the problem could be?
Also, when I retrained votenet myself, the best accuracy was only around 55, as shown in the figure. What could be causing this?
The jobs were launched exactly with the officially provided commands, and the mmdet3d I downloaded was not modified at all. Thanks.
What the heck?
May I ask which branch of the code you are using, and are the pkl files you used the latest version?
|
gharchive/issue
| 2022-04-17T12:46:13 |
2025-04-01T04:35:18.962919
|
{
"authors": [
"XT-1997",
"ZCMax"
],
"repo": "open-mmlab/mmdetection3d",
"url": "https://github.com/open-mmlab/mmdetection3d/issues/1405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1850978673
|
supporting voxelnext inference
Motivation
supporting voxelnext inference
Modification
adding the module used in voxelnext:
mmdet3d/configs/_base_/models/voxelnext.py
mmdet3d/configs/voxelnext/
mmdet3d/models/dense_heads/voxelnext_head.py
mmdet3d/models/detectors/voxelnext.py
mmdet3d/models/middle_encoders/sparse_encoder_voxelnext.py
mmdet3d/models/task_modules/coders/voxelnext_bbox_coder.py
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
Hi @csinsgcc,
We'd like to express our appreciation for your valuable contributions to the mmdetection3d. Your efforts have significantly aided in enhancing the project's quality.
It is our pleasure to invite you to join our community through the Discord Special Interest Group (SIG) channel. This is a great place to share your experiences, discuss ideas, and connect with other like-minded people. To become a part of the SIG channel, send a message to the moderator, OpenMMLab, briefly introduce yourself and mention your open-source contributions in the #introductions channel. Our team will gladly facilitate your entry. We eagerly await your presence. Please follow this link to join us: https://discord.gg/UjgXkPWNqA.
If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + Github ID".
Thanks again for your awesome contribution, and we're excited to have you as part of our community!
|
gharchive/pull-request
| 2023-08-15T06:45:01 |
2025-04-01T04:35:18.969910
|
{
"authors": [
"CLAassistant",
"OpenMMLab-Assistant-004",
"csinsgcc"
],
"repo": "open-mmlab/mmdetection3d",
"url": "https://github.com/open-mmlab/mmdetection3d/pull/2692",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
811065418
|
Waste commodity limit is global, how to allocate to regions?
As I understand from the input file, the domestic limit for the waste commodity is given for the whole of Germany. However, in urbs at least, these commodity limits have to be set on each model region instead. Here, I see two options:
a) we need to find a way to allocate these limits to each region (by installed capacities in each?)
b) I need to hard-code a dummy "transmission system for waste" in urbs, where the waste commodities are generated in one of the regions and then freely can be transported to any other region.
Choice b) would increase the model complexity a bit, so I wanted to ask whether some of the other frameworks have a similar problem? Then we could think of an allocation method.
We can collect the other FW maintainers' assessments on the topic. From the discussion on 18.02.21 the following was noted:
Balmorel: Can handle the current input data, as limits for either techs or regions are set. The waste limit is set for the region.
GENESYS can only define global levels...
with urbs we will pick the solution of "waste transmission". (option b)
Data point to this issue:
I actually did not realize that the waste was a limit for 'DE' since it was stated with "['BB', 'BE', 'BW', 'BY', 'HB', 'HE', 'HH', 'MV', 'NI', 'NW', 'RP', 'SH', 'SL', 'SN', 'ST', 'TH']". For comparison, we have the co2 emission limit stated for 'DE' while parameters such as capital costs, input/output ratios etc. are stated for "['BB', 'BE', 'BW', 'BY', 'HB', 'HE', 'HH', 'MV', 'NI', 'NW', 'RP', 'SH', 'SL', 'SN', 'ST', 'TH']", with the understanding that they apply within each region. Consequently, I understood the domestic waste limit as a limit that applies within each region. For consistency reasons I would therefore say that the domestic limit for waste should have the region 'DE' instead. In my data code I can make this change manually, but I imagine that it could create some confusion when mappings are used?
Good point, so did I actually interpret it wrong? @chrwm
It is like Stefanie said. The data itself describes the available amount of waste for the whole of Germany.
Hence the annotation in its current state is confusing and it should be changed to ['DE']
I will do that in ID19.
The change is implemented in ID19.
|
gharchive/issue
| 2021-02-18T12:42:51 |
2025-04-01T04:35:18.983098
|
{
"authors": [
"StefanieBuchholz",
"chrwm",
"jonasVano",
"sonercandas"
],
"repo": "open-modex/models",
"url": "https://github.com/open-modex/models/issues/7",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
339135965
|
fix: Update Cuda directory links in Ansible tasks
Cuda directories (cuda-dnn and cuda-nccl2) need to be updated in Ansible
install tasks due to commit a8a1916.
This is a short term fix. Soon we need to put pointers to these files in 'software-vars.yml'.
|
gharchive/pull-request
| 2018-07-07T10:37:22 |
2025-04-01T04:35:19.022104
|
{
"authors": [
"jaywcarman"
],
"repo": "open-power-ref-design-toolkit/power-up",
"url": "https://github.com/open-power-ref-design-toolkit/power-up/pull/176",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1532988394
|
Changing value for PROC_VRM_VOFFSET_UV to increase VDN by 50mV
Per PE00F1SD, adding 50mV VDN uplift to Rainier MRW as it was approved for FW1020.30
Dinesh updated the Found In field in the defect to say FW1020. It has approval by HW MFB for FW1020.30 integration.
|
gharchive/pull-request
| 2023-01-13T22:24:36 |
2025-04-01T04:35:19.027303
|
{
"authors": [
"ruby452"
],
"repo": "open-power/rainier-xml",
"url": "https://github.com/open-power/rainier-xml/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
692804890
|
Cloud deployment automation
Automation scripts for image deployment for:
[x] AWS
[ ] Azure
[ ] IBM
Closing for now as I don't believe we have any intention to deploy to other cloud platforms at the moment.
|
gharchive/issue
| 2020-09-04T07:05:43 |
2025-04-01T04:35:19.030502
|
{
"authors": [
"baentsch",
"dstebila"
],
"repo": "open-quantum-safe/profiling",
"url": "https://github.com/open-quantum-safe/profiling/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2183911197
|
Improve the filament dry box
We previously built a dry box by following a video, and it is still in use.
However, in this dry box the spindle that supports the filament is cantilevered. Besides leading to breakage when used by people who are not used to it, this structure also makes it awkward to work when swapping filament. There is also the problem that the desiccant catches on the filament.
One approach would be to solve this by getting used to handling the filament dry box, but considering that some members will be handling it for the first time, we will build and swap in this filament dry box, which looks like it can solve the problems above.
Finished printing the required number of parts in PLA.
|
gharchive/issue
| 2024-03-13T12:38:25 |
2025-04-01T04:35:19.032136
|
{
"authors": [
"MibuchiYuta"
],
"repo": "open-rdc/Creality-K1-Slicer-Profiles",
"url": "https://github.com/open-rdc/Creality-K1-Slicer-Profiles/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1405964312
|
Psycopg2 dependency fails to compile in CI/CD
Hi,
I use a CI/CD system for a package that lists ord-schema as a dependency. One of its subdependencies, psycopg2, is available on PyPI as source code that must be compiled by the local machine. Some of the compilation dependencies are not covered by PyPI, so without intervention the CI/CD pipeline fails:
Collecting psycopg2>=2.8.5
Downloading psycopg2-2.9.4.tar.gz (384 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 384.0/384.0 kB 83.1 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [25 lines of output]
/opt/pipelines/agent/build/.cache/nox/typing_check/lib/python3.9/site-packages/setuptools/config/setupcfg.py:508: SetuptoolsDeprecationWarning: The license_file parameter is deprecated, use license_files instead.
warnings.warn(msg, warning_class)
running egg_info
creating /tmp/pip-pip-egg-info-vyczfsiy/psycopg2.egg-info
writing /tmp/pip-pip-egg-info-vyczfsiy/psycopg2.egg-info/PKG-INFO
writing dependency_links to /tmp/pip-pip-egg-info-vyczfsiy/psycopg2.egg-info/dependency_links.txt
writing top-level names to /tmp/pip-pip-egg-info-vyczfsiy/psycopg2.egg-info/top_level.txt
writing manifest file '/tmp/pip-pip-egg-info-vyczfsiy/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
This can be resolved in two ways:
1. Install pg_config (in every scenario where ord-schema is used).
2. Replace ord-schema's psycopg2 dependency with the precompiled psycopg2-binary, which is also available from PyPI (a sketch of this follows).
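To make option 2 concrete, here is a minimal sketch of the swap, assuming ord-schema declares its requirements through setuptools' install_requires; everything apart from the two psycopg2 lines is illustrative rather than the project's actual setup.py:

from setuptools import setup, find_packages

setup(
    name="ord-schema",  # illustrative metadata
    packages=find_packages(),
    install_requires=[
        # "psycopg2>=2.8.5",        # source distribution; needs pg_config and a C toolchain
        "psycopg2-binary>=2.8.5",   # precompiled wheel; installs without build dependencies
    ],
)

Note that psycopg2 and psycopg2-binary are separate PyPI distributions, so a downstream project can't silently substitute one for the other; the swap has to happen in ord-schema itself.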
Would you be open to option 2?
Embarrassingly, I see now that psycopg2's authors recommend compiling from binary every time: https://www.psycopg.org/docs/install.html#psycopg-vs-psycopg-binary
Sorry! Closing this issue.
|
gharchive/issue
| 2022-10-12T10:28:17 |
2025-04-01T04:35:19.036063
|
{
"authors": [
"d-miketa"
],
"repo": "open-reaction-database/ord-schema",
"url": "https://github.com/open-reaction-database/ord-schema/issues/650",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1215481360
|
Remove building ros_ign from source since 0.233.4 has been released
Signed-off-by: Aaron Chong aaronchongth@gmail.com
New feature implementation
Implemented feature
Removed building ros_ign packages from source, since the last release of 0.233.4 is out, https://github.com/ignitionrobotics/ros_ign/blob/galactic/ros_ign_bridge/CHANGELOG.rst.
Related to https://github.com/open-rmf/rmf/pull/144
Closing this in favor of https://github.com/open-rmf/rmf/pull/151
|
gharchive/pull-request
| 2022-04-26T06:48:33 |
2025-04-01T04:35:19.039349
|
{
"authors": [
"aaronchongth"
],
"repo": "open-rmf/rmf",
"url": "https://github.com/open-rmf/rmf/pull/149",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1993603262
|
Bug: Hovercards are broken
Describe the bug
Noticing in this PR's deploy preview that they work, but they currently do not work in production.
https://github.com/open-sauced/app/pull/1980
Steps to reproduce
What the app does today.
Browsers
No response
Additional context (Is this in dev or production?)
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Contributing Docs
[ ] I agree to follow this project's Contribution Docs
Found the bad commit through a bisect. FYI @nickytonline https://github.com/open-sauced/app/commit/e1dcd5568653dc5716a24fd775f83969dd190159
I'll pick it up from here if you don't mind @bdougie
|
gharchive/issue
| 2023-11-14T21:28:12 |
2025-04-01T04:35:19.045527
|
{
"authors": [
"OgDev-01",
"bdougie"
],
"repo": "open-sauced/app",
"url": "https://github.com/open-sauced/app/issues/2138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2470463227
|
fix: add tooltip for long titles
Description
Fixes #3673
This pull request fixes the issue where a long title would overflow from the side drawer. Now, the PR includes truncated labels for long titles, with the full label shown in a tooltip upon hovering.
Related Tickets & Documents
This PR fixes issue #3673
Mobile & Desktop Screenshots/Recordings
screen-capture (21).webm
Steps to QA
Test cases were not added as I didn't find the corresponding test file for this component. Should I write it from scratch?
1. Go to workspace
2. Create new workspace with a long name
3. See the truncated title and tooltip upon hovering
Tier (staff will fill in)
[ ] Tier 1
[ ] Tier 2
[ ] Tier 3
[x] Tier 4
[optional] What gif best describes this PR or how it makes you feel?
Accepted deploy preview and requested engineering's review: thanks for the contribution!!
|
gharchive/pull-request
| 2024-08-16T15:11:14 |
2025-04-01T04:35:19.049939
|
{
"authors": [
"SURAJ-SHARMA27",
"jpmcb"
],
"repo": "open-sauced/app",
"url": "https://github.com/open-sauced/app/pull/3969",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1714716971
|
Bug: the text overlaps on 1135px screens
Describe the bug
The text overlaps on 1135px screens.
Steps to reproduce
1. Go to https://insights.opensauced.pizza/hub/insights/new
2. Change the screen width to something between 1040px and 1170px
3. You will see the issue
Affected services
insights.opensauced.pizza
Platforms
No response
Browsers
No response
Environment
No response
Additional context
No response
Code of Conduct
[X] I agree to follow this project's Code of Conduct
Contributing Docs
[X] I agree to follow this project's Contribution Docs
Hey @a0m0rajab, I have added a PR; can you please check?
Your branch got merged! Thank you for your first contribution @Lucif3r-in!
You might be interested in sharing this achievement on the highlight page: https://insights.opensauced.pizza/feed
|
gharchive/issue
| 2023-05-17T22:08:17 |
2025-04-01T04:35:19.054972
|
{
"authors": [
"Lucif3r-in",
"a0m0rajab"
],
"repo": "open-sauced/insights",
"url": "https://github.com/open-sauced/insights/issues/1198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1854042797
|
feat: Add install instructions and script for pizza CLI
Description
- Adds instructions for installing the pizza CLI
- install.sh script for doing the install (including instructions for safe use)
What type of PR is this? (check all applicable)
[x] 🍕 Feature
[ ] 🐛 Bug Fix
[ ] 📝 Documentation Update
[ ] 🎨 Style
[ ] 🧑‍💻 Code Refactor
[ ] 🔥 Performance Improvements
[ ] ✅ Test
[ ] 🤖 Build
[ ] 🔁 CI
[ ] 📦 Chore (Release)
[ ] ⏩ Revert
Related Tickets & Documents
Closes #19 and is related to https://github.com/open-sauced/homebrew-tap/pull/1
Mobile & Desktop Screenshots/Recordings
N/a
Added tests?
[ ] 👍 yes
[x] 🙅 no, because they aren't needed
[ ] 🙋 no, because I need help
Added to documentation?
[x] 📜 README.md
[ ] 📓 docs.opensauced.pizza
[ ] 🍕 dev.to/opensauced
[ ] 📕 storybook
[ ] 🙅 no documentation needed
[optional] Are there any post-deployment tasks we need to perform?
[optional] What gif best describes this PR or how it makes you feel?
Force pushed to:
- Addressed mistakes in install.sh script. Also ran it through shellcheck and added missing "" where needed
- Added [!WARNING] block in the readme (which is very cool, did not know that was a thing)
|
gharchive/pull-request
| 2023-08-16T23:57:13 |
2025-04-01T04:35:19.062710
|
{
"authors": [
"jpmcb"
],
"repo": "open-sauced/pizza-cli",
"url": "https://github.com/open-sauced/pizza-cli/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1507273026
|
Backend build fails
I built it following the method in the README, and it returned the error below. What is the problem, and how can it be solved (or how did you build it)? Thanks. (Windows 10 21H2, 64-bit)
(from Wuyuan, who doesn't know Java)
C:\Users\CC.jdks\openjdk-19.0.1\bin\java.exe -Dmaven.multiModuleProjectDirectory=C:\Users\CC\Desktop\teaching-open\api -Djansi.passthrough=true "-Dmaven.home=C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3" "-Dclassworlds.conf=C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\bin\m2.conf" "-Dmaven.ext.class.path=C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven-event-listener.jar" "-javaagent:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\lib\idea_rt.jar=61737:C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\bin" -Dfile.encoding=UTF-8 -Dsun.stdout.encoding=UTF-8 -Dsun.stderr.encoding=UTF-8 -classpath "C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\boot\plexus-classworlds-2.6.0.jar;C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\boot\plexus-classworlds.license" org.codehaus.classworlds.Launcher -Didea.version=2022.3.1 --debug -s "C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\conf\settings.xml" clean package
Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
Maven home: C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3
Java version: 19.0.1, vendor: Oracle Corporation, runtime: C:\Users\CC\.jdks\openjdk-19.0.1
Default locale: zh_CN, platform encoding: UTF-8
OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
[DEBUG] Included C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven-event-listener.jar
[DEBUG] Populating class realm maven.ext
[DEBUG] Included C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven-event-listener.jar
[DEBUG] Created new class realm maven.api
[DEBUG] Importing foreign packages into class realm maven.api
[DEBUG] Imported: javax.annotation.* < maven.ext
[DEBUG] Imported: javax.annotation.security.* < maven.ext
[DEBUG] Imported: javax.enterprise.inject.* < maven.ext
[DEBUG] Imported: javax.enterprise.util.* < maven.ext
[DEBUG] Imported: javax.inject.* < maven.ext
[DEBUG] Imported: org.apache.maven.* < maven.ext
[DEBUG] Imported: org.apache.maven.artifact < maven.ext
[DEBUG] Imported: org.apache.maven.classrealm < maven.ext
[DEBUG] Imported: org.apache.maven.cli < maven.ext
[DEBUG] Imported: org.apache.maven.configuration < maven.ext
[DEBUG] Imported: org.apache.maven.exception < maven.ext
[DEBUG] Imported: org.apache.maven.execution < maven.ext
[DEBUG] Imported: org.apache.maven.execution.scope < maven.ext
[DEBUG] Imported: org.apache.maven.lifecycle < maven.ext
[DEBUG] Imported: org.apache.maven.model < maven.ext
[DEBUG] Imported: org.apache.maven.monitor < maven.ext
[DEBUG] Imported: org.apache.maven.plugin < maven.ext
[DEBUG] Imported: org.apache.maven.profiles < maven.ext
[DEBUG] Imported: org.apache.maven.project < maven.ext
[DEBUG] Imported: org.apache.maven.reporting < maven.ext
[DEBUG] Imported: org.apache.maven.repository < maven.ext
[DEBUG] Imported: org.apache.maven.rtinfo < maven.ext
[DEBUG] Imported: org.apache.maven.settings < maven.ext
[DEBUG] Imported: org.apache.maven.toolchain < maven.ext
[DEBUG] Imported: org.apache.maven.usability < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.* < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.authentication < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.authorization < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.events < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.observers < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.proxy < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.repository < maven.ext
[DEBUG] Imported: org.apache.maven.wagon.resource < maven.ext
[DEBUG] Imported: org.codehaus.classworlds < maven.ext
[DEBUG] Imported: org.codehaus.plexus.* < maven.ext
[DEBUG] Imported: org.codehaus.plexus.classworlds < maven.ext
[DEBUG] Imported: org.codehaus.plexus.component < maven.ext
[DEBUG] Imported: org.codehaus.plexus.configuration < maven.ext
[DEBUG] Imported: org.codehaus.plexus.container < maven.ext
[DEBUG] Imported: org.codehaus.plexus.context < maven.ext
[DEBUG] Imported: org.codehaus.plexus.lifecycle < maven.ext
[DEBUG] Imported: org.codehaus.plexus.logging < maven.ext
[DEBUG] Imported: org.codehaus.plexus.personality < maven.ext
[DEBUG] Imported: org.codehaus.plexus.util.xml.Xpp3Dom < maven.ext
[DEBUG] Imported: org.codehaus.plexus.util.xml.pull.XmlPullParser < maven.ext
[DEBUG] Imported: org.codehaus.plexus.util.xml.pull.XmlPullParserException < maven.ext
[DEBUG] Imported: org.codehaus.plexus.util.xml.pull.XmlSerializer < maven.ext
[DEBUG] Imported: org.eclipse.aether.* < maven.ext
[DEBUG] Imported: org.eclipse.aether.artifact < maven.ext
[DEBUG] Imported: org.eclipse.aether.collection < maven.ext
[DEBUG] Imported: org.eclipse.aether.deployment < maven.ext
[DEBUG] Imported: org.eclipse.aether.graph < maven.ext
[DEBUG] Imported: org.eclipse.aether.impl < maven.ext
[DEBUG] Imported: org.eclipse.aether.installation < maven.ext
[DEBUG] Imported: org.eclipse.aether.internal.impl < maven.ext
[DEBUG] Imported: org.eclipse.aether.metadata < maven.ext
[DEBUG] Imported: org.eclipse.aether.repository < maven.ext
[DEBUG] Imported: org.eclipse.aether.resolution < maven.ext
[DEBUG] Imported: org.eclipse.aether.spi < maven.ext
[DEBUG] Imported: org.eclipse.aether.transfer < maven.ext
[DEBUG] Imported: org.eclipse.aether.version < maven.ext
[DEBUG] Imported: org.fusesource.jansi.* < maven.ext
[DEBUG] Imported: org.slf4j.* < maven.ext
[DEBUG] Imported: org.slf4j.event.* < maven.ext
[DEBUG] Imported: org.slf4j.helpers.* < maven.ext
[DEBUG] Imported: org.slf4j.spi.* < maven.ext
[DEBUG] Populating class realm maven.api
[INFO] Error stacktraces are turned on.
[DEBUG] Message scheme: color
[DEBUG] Message styles: debug info warning error success failure strong mojo project
[DEBUG] Reading global settings from C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\conf\settings.xml
[DEBUG] Reading user settings from C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\conf\settings.xml
[DEBUG] Reading global toolchains from C:\Program Files\JetBrains\IntelliJ IDEA Community Edition 2022.3.1\plugins\maven\lib\maven3\conf\toolchains.xml
[DEBUG] Reading user toolchains from C:\Users\CC\.m2\toolchains.xml
[DEBUG] Using local repository at C:\Users\CC\.m2\repository
[DEBUG] Using manager EnhancedLocalRepositoryManager with priority 10.0 for C:\Users\CC\.m2\repository
[INFO] Scanning for projects...
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for aliyun (http://maven.aliyun.com/nexus/content/groups/public).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for central (https://repo.maven.apache.org/maven2).
[DEBUG] Extension realms for project org.jeecgframework.boot:jeecg-boot-parent:pom:2.7.0: (none)
[DEBUG] Looking up lifecycle mappings for packaging pom from ClassRealm[maven.ext, parent: ClassRealm[plexus.core, parent: null]]
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for sonatype-nexus-snapshots (https://oss.sonatype.org/content/repositories/snapshots).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for apache.snapshots (https://repository.apache.org/snapshots).
[DEBUG] Extension realms for project org.springframework.boot:spring-boot-starter-parent:pom:2.1.3.RELEASE: (none)
[DEBUG] Looking up lifecycle mappings for packaging pom from ClassRealm[maven.ext, parent: ClassRealm[plexus.core, parent: null]]
[DEBUG] Extension realms for project org.springframework.boot:spring-boot-dependencies:pom:2.1.3.RELEASE: (none)
[DEBUG] Looking up lifecycle mappings for packaging pom from ClassRealm[maven.ext, parent: ClassRealm[plexus.core, parent: null]]
[DEBUG] Extension realms for project org.jeecgframework.boot:jeecg-boot-base-common:jar:2.7.0: (none)
[DEBUG] Looking up lifecycle mappings for packaging jar from ClassRealm[maven.ext, parent: ClassRealm[plexus.core, parent: null]]
[DEBUG] Extension realms for project org.jeecgframework.boot:teaching-open:jar:2.7.0: (none)
[DEBUG] Looking up lifecycle mappings for packaging jar from ClassRealm[maven.ext, parent: ClassRealm[plexus.core, parent: null]]
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] jeecg-boot-parent [pom]
[INFO] jeecg-boot-base-common [jar]
[INFO] teaching-open [jar]
[DEBUG] === REACTOR BUILD PLAN ================================================
[DEBUG] Project: org.jeecgframework.boot:jeecg-boot-parent:pom:2.7.0
[DEBUG] Tasks: [clean, package]
[DEBUG] Style: Regular
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Project: org.jeecgframework.boot:jeecg-boot-base-common:jar:2.7.0
[DEBUG] Tasks: [clean, package]
[DEBUG] Style: Regular
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Project: org.jeecgframework.boot:teaching-open:jar:2.7.0
[DEBUG] Tasks: [clean, package]
[DEBUG] Style: Regular
[DEBUG] =======================================================================
[INFO]
[INFO] -------------< org.jeecgframework.boot:jeecg-boot-parent >--------------
[INFO] Building jeecg-boot-parent 2.7.0 [1/3]
[INFO] --------------------------------[ pom ]---------------------------------
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] === PROJECT BUILD PLAN ================================================
[DEBUG] Project: org.jeecgframework.boot:jeecg-boot-parent:2.7.0
[DEBUG] Dependencies (collect): []
[DEBUG] Dependencies (resolve): []
[DEBUG] Repositories (dependencies): [nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public, default, releases), jeecg (http://maven.jeecg.org/nexus/content/repositories/jeecg, default, releases)]
[DEBUG] Repositories (plugins) : [nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public, default, releases)]
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-clean-plugin:3.1.0:clean (default-clean)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${maven.clean.excludeDefaultDirectories}
${maven.clean.failOnError}
${maven.clean.followSymLinks}
${maven.clean.retryOnError}
${maven.clean.skip}
${maven.clean.verbose}
[DEBUG] =======================================================================
[INFO]
[INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ jeecg-boot-parent ---
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for apache.snapshots (http://repository.apache.org/snapshots).
[DEBUG] Dependency collection stats {ConflictMarker.analyzeTime=19651200, ConflictMarker.markTime=318500, ConflictMarker.nodeCount=14, ConflictIdSorter.graphTime=2100800, ConflictIdSorter.topsortTime=2748200, ConflictIdSorter.conflictIdCount=12, ConflictIdSorter.conflictIdCycleCount=0, ConflictResolver.totalTime=13148500, ConflictResolver.conflictItemCount=14, DefaultDependencyCollector.collectTime=715142500, DefaultDependencyCollector.transformTime=77801000}
[DEBUG] org.apache.maven.plugins:maven-clean-plugin:jar:3.1.0
[DEBUG] org.apache.maven:maven-plugin-api:jar:3.0:compile
[DEBUG] org.apache.maven:maven-model:jar:3.0:compile
[DEBUG] org.codehaus.plexus:plexus-utils:jar:2.0.4:compile
[DEBUG] org.apache.maven:maven-artifact:jar:3.0:compile
[DEBUG] org.sonatype.sisu:sisu-inject-plexus:jar:1.4.2:compile
[DEBUG] org.codehaus.plexus:plexus-component-annotations:jar:1.7.1:compile (version managed from default)
[DEBUG] org.codehaus.plexus:plexus-classworlds:jar:2.2.3:compile
[DEBUG] org.sonatype.sisu:sisu-inject-bean:jar:1.4.2:compile
[DEBUG] org.sonatype.sisu:sisu-guice:jar:noaop:2.1.7:compile
[DEBUG] org.apache.maven.shared:maven-shared-utils:jar:3.2.1:compile
[DEBUG] commons-io:commons-io:jar:2.5:compile
[DEBUG] Created new class realm plugin>org.apache.maven.plugins:maven-clean-plugin:3.1.0
[DEBUG] Importing foreign packages into class realm plugin>org.apache.maven.plugins:maven-clean-plugin:3.1.0
[DEBUG] Imported: < maven.api
[DEBUG] Populating class realm plugin>org.apache.maven.plugins:maven-clean-plugin:3.1.0
[DEBUG] Included: org.apache.maven.plugins:maven-clean-plugin:jar:3.1.0
[DEBUG] Included: org.codehaus.plexus:plexus-utils:jar:2.0.4
[DEBUG] Included: org.codehaus.plexus:plexus-component-annotations:jar:1.7.1
[DEBUG] Included: org.sonatype.sisu:sisu-inject-bean:jar:1.4.2
[DEBUG] Included: org.sonatype.sisu:sisu-guice:jar:noaop:2.1.7
[DEBUG] Included: org.apache.maven.shared:maven-shared-utils:jar:3.2.1
[DEBUG] Included: commons-io:commons-io:jar:2.5
[DEBUG] Configuring mojo org.apache.maven.plugins:maven-clean-plugin:3.1.0:clean from plugin realm ClassRealm[plugin>org.apache.maven.plugins:maven-clean-plugin:3.1.0, parent: jdk.internal.loader.ClassLoaders$AppClassLoader@78308db1]
[DEBUG] Configuring mojo 'org.apache.maven.plugins:maven-clean-plugin:3.1.0:clean' with basic configurator -->
[DEBUG] (f) directory = C:\Users\CC\Desktop\teaching-open\api\target
[DEBUG] (f) excludeDefaultDirectories = false
[DEBUG] (f) failOnError = true
[DEBUG] (f) followSymLinks = false
[DEBUG] (f) outputDirectory = C:\Users\CC\Desktop\teaching-open\api\target\classes
[DEBUG] (f) reportDirectory = C:\Users\CC\Desktop\teaching-open\api\target\classes
[DEBUG] (f) retryOnError = true
[DEBUG] (f) skip = false
[DEBUG] (f) testOutputDirectory = C:\Users\CC\Desktop\teaching-open\api\target\test-classes
[DEBUG] -- end configuration --
[DEBUG] Skipping non-existing directory C:\Users\CC\Desktop\teaching-open\api\target
[DEBUG] Skipping non-existing directory C:\Users\CC\Desktop\teaching-open\api\target\classes
[DEBUG] Skipping non-existing directory C:\Users\CC\Desktop\teaching-open\api\target\test-classes
[DEBUG] Skipping non-existing directory C:\Users\CC\Desktop\teaching-open\api\target\classes
[INFO]
[INFO] -----------< org.jeecgframework.boot:jeecg-boot-base-common >-----------
[INFO] Building jeecg-boot-base-common 2.7.0 [2/3]
[INFO] --------------------------------[ jar ]---------------------------------
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] Lifecycle default -> [validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy]
[DEBUG] Lifecycle clean -> [pre-clean, clean, post-clean]
[DEBUG] Lifecycle site -> [pre-site, site, post-site, site-deploy]
[DEBUG] === PROJECT BUILD PLAN ================================================
[DEBUG] Project: org.jeecgframework.boot:jeecg-boot-base-common:2.7.0
[DEBUG] Dependencies (collect): []
[DEBUG] Dependencies (resolve): [compile, runtime, test]
[DEBUG] Repositories (dependencies): [nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public, default, releases), jeecg (http://maven.jeecg.org/nexus/content/repositories/jeecg, default, releases)]
[DEBUG] Repositories (plugins) : [nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public, default, releases)]
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-clean-plugin:3.1.0:clean (default-clean)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${maven.clean.excludeDefaultDirectories}
${maven.clean.failOnError}
${maven.clean.followSymLinks}
${maven.clean.retryOnError}
${maven.clean.skip}
${maven.clean.verbose}
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-resources-plugin:3.1.0:resources (default-resources)
[DEBUG] Style: Regular
[DEBUG] Configuration:
@
woff
woff2
eot
ttf
svg
${maven.resources.skip}
false
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-compiler-plugin:3.8.0:compile (default-compile)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${maven.compiler.compilerId}
${maven.compiler.compilerReuseStrategy}
${maven.compiler.compilerVersion}
${maven.compiler.debug}
${maven.compiler.debuglevel}
UTF-8
${maven.compiler.executable}
${maven.compiler.failOnError}
${maven.compiler.failOnWarning}
${maven.compiler.forceJavacCompilerUse}
${maven.compiler.fork}
${maven.compiler.maxmem}
${maven.compiler.meminitial}
${maven.compiler.optimize}
true
${maven.compiler.release}
${maven.compiler.showDeprecation}
${maven.compiler.showWarnings}
${maven.main.skip}
${maven.compiler.skipMultiThreadWarning}
1.8
${lastModGranularityMs}
1.8
${maven.compiler.useIncrementalCompilation}
${maven.compiler.verbose}
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-resources-plugin:3.1.0:testResources (default-testResources)
[DEBUG] Style: Regular
[DEBUG] Configuration:
@
woff
woff2
eot
ttf
svg
${maven.test.skip}
false
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-compiler-plugin:3.8.0:testCompile (default-testCompile)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${maven.compiler.compilerId}
${maven.compiler.compilerReuseStrategy}
${maven.compiler.compilerVersion}
${maven.compiler.debug}
${maven.compiler.debuglevel}
UTF-8
${maven.compiler.executable}
${maven.compiler.failOnError}
${maven.compiler.failOnWarning}
${maven.compiler.forceJavacCompilerUse}
${maven.compiler.fork}
${maven.compiler.maxmem}
${maven.compiler.meminitial}
${maven.compiler.optimize}
true
${maven.compiler.release}
${maven.compiler.showDeprecation}
${maven.compiler.showWarnings}
${maven.test.skip}
${maven.compiler.skipMultiThreadWarning}
1.8
${lastModGranularityMs}
1.8
${maven.compiler.testRelease}
${maven.compiler.testSource}
${maven.compiler.testTarget}
${maven.compiler.useIncrementalCompilation}
${maven.compiler.verbose}
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (default-test)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${maven.test.additionalClasspath}
${argLine}
${childDelegation}
${maven.test.dependency.excludes}
${maven.surefire.debug}
${dependenciesToScan}
${disableXmlReport}
${enableAssertions}
${surefire.encoding}
${excludedGroups}
${surefire.excludesFile}
${surefire.failIfNoSpecifiedTests}
${failIfNoTests}
${forkCount}
${forkMode}
${surefire.exitTimeout}
${surefire.timeout}
${groups}
${surefire.includesFile}
${junitArtifactName}
${junitPlatformArtifactName}
${jvm}
${objectFactory}
${parallel}
${parallelOptimized}
${surefire.parallel.forcedTimeout}
${surefire.parallel.timeout}
${perCoreThreadCount}
${plugin.artifactMap}
${surefire.printSummary}
${project.artifactMap}
${maven.test.redirectTestOutputToFile}
${surefire.reportFormat}
${surefire.reportNameSuffix}
${surefire.rerunFailingTestsCount}
${reuseForks}
${surefire.runOrder}
${surefire.shutdown}
${maven.test.skip}
${surefire.skipAfterFailureCount}
${maven.test.skip.exec}
true
${surefire.suiteXmlFiles}
${tempDir}
${test}
${maven.test.failure.ignore}
${testNGArtifactName}
${threadCount}
${threadCountClasses}
${threadCountMethods}
${threadCountSuites}
${trimStackTrace}
${surefire.useFile}
${surefire.useManifestOnlyJar}
${surefire.useSystemClassLoader}
${useUnlimitedThreads}
${basedir}
[DEBUG] -----------------------------------------------------------------------
[DEBUG] Goal: org.apache.maven.plugins:maven-jar-plugin:3.1.1:jar (default-jar)
[DEBUG] Style: Regular
[DEBUG] Configuration:
${start-class}
true
${maven.jar.forceCreation}
${jar.useDefaultManifestFile}
[DEBUG] =======================================================================
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for jvnet-nexus-snapshots (https://maven.java.net/content/repositories/snapshots).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for ow2-snapshot (http://repository.ow2.org/nexus/content/repositories/snapshots).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for vaadin-snapshots (http://oss.sonatype.org/content/repositories/vaadin-snapshots/).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for vaadin-releases (http://oss.sonatype.org/content/repositories/vaadin-releases/).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for jvnet-nexus-releases (https://maven.java.net/content/repositories/releases/).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for local-file (file://${basedir}/lib/).
[WARNING] The POM for com.alibaba:druid:jar:1.1.17 is invalid, transitive dependencies (if any) will not be available: 2 problems were encountered while building the effective model for com.alibaba:druid:1.1.17
[ERROR] 'dependencies.dependency.systemPath' for com.alibaba:jconsole:jar must specify an absolute path but is ${env.JAVA_HOME}/lib/jconsole.jar @
[ERROR] 'dependencies.dependency.systemPath' for com.alibaba:tools:jar must specify an absolute path but is ${env.JAVA_HOME}/lib/tools.jar @
[DEBUG] Using transporter WagonTransporter with priority -1.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
[DEBUG] Using connector BasicRepositoryConnector with priority 0.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
Downloading from jeecg: http://maven.jeecg.org/nexus/content/repositories/jeecg/org/hibernate/hibernate-re/2.1.5/hibernate-re-2.1.5.pom
[DEBUG] Writing tracking file C:\Users\CC\.m2\repository\org\hibernate\hibernate-re\2.1.5\hibernate-re-2.1.5.pom.lastUpdated
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for terracotta-snapshots (http://www.terracotta.org/download/reflector/snapshots).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for terracotta-releases (http://www.terracotta.org/download/reflector/releases).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for snapshots (https://oss.sonatype.org/content/repositories/snapshots/).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for spring-libs-release (https://repo.spring.io/libs-release).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for spring-libs-snapshot (https://repo.spring.io/libs-snapshot).
[DEBUG] Using transporter WagonTransporter with priority -1.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
[DEBUG] Using connector BasicRepositoryConnector with priority 0.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
Downloading from jeecg: http://maven.jeecg.org/nexus/content/repositories/jeecg/org/jeecgframework/boot/codegenerate/1.2.0/codegenerate-1.2.0.pom
[DEBUG] Writing tracking file C:\Users\CC\.m2\repository\org\jeecgframework\boot\codegenerate\1.2.0\codegenerate-1.2.0.pom.lastUpdated
[DEBUG] Using transporter WagonTransporter with priority -1.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
[DEBUG] Using connector BasicRepositoryConnector with priority 0.0 for http://maven.jeecg.org/nexus/content/repositories/jeecg
Downloading from jeecg: http://maven.jeecg.org/nexus/content/repositories/jeecg/org/jeecgframework/autopoi-web/1.1.1/autopoi-web-1.1.1.pom
[DEBUG] Writing tracking file C:\Users\CC\.m2\repository\org\jeecgframework\autopoi-web\1.1.1\autopoi-web-1.1.1.pom.lastUpdated
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for codehaus-snapshots (http://nexus.codehaus.org/snapshots/).
[DEBUG] Using mirror nexus-aliyun (http://maven.aliyun.com/nexus/content/groups/public) for xkcoding-nexus (https://nexus.xkcoding.com/repository/maven-public/).
[DEBUG] Dependency collection stats {ConflictMarker.analyzeTime=19530500, ConflictMarker.markTime=1967500, ConflictMarker.nodeCount=470, ConflictIdSorter.graphTime=3551500, ConflictIdSorter.topsortTime=464900, ConflictIdSorter.conflictIdCount=204, ConflictIdSorter.conflictIdCycleCount=0, ConflictResolver.totalTime=155001100, ConflictResolver.conflictItemCount=372, DefaultDependencyCollector.collectTime=193359671200, DefaultDependencyCollector.transformTime=183272500}
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for jeecg-boot-parent 2.7.0:
[INFO]
[INFO] jeecg-boot-parent .................................. SUCCESS [ 1.530 s]
[INFO] jeecg-boot-base-common ............................. FAILURE [03:14 min]
[INFO] teaching-open ...................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:17 min
[INFO] Finished at: 2022-12-22T11:56:53+08:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal on project jeecg-boot-base-common: Could not resolve dependencies for project
[ERROR]
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :jeecg-boot-base-common
Process finished with exit code 1
To be further confirmed.
|
gharchive/issue
| 2022-12-22T04:58:24 |
2025-04-01T04:35:19.179722
|
{
"authors": [
"SUNWUYUAN"
],
"repo": "open-scratch/teaching-open",
"url": "https://github.com/open-scratch/teaching-open/issues/28",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
372727181
|
Naming the API
The new API currently has a module called pb_api.py and a class called Specifications.
Since PolicyBrain maybe phasing out (?) and OG-USA will be part of the PSL, perhaps a name like ogusa_api.py or psl_api.py makes more sense. @hdoupe, thoughts?
I also suggest renaming of the Specifications class to something like Parameters. Any thought on this @rickecon ?
@jdebacker asked:
Since PolicyBrain may be phasing out (?) and OG-USA will be part of the PSL, perhaps a name like ogusa_api.py or psl_api.py makes more sense. @hdoupe, thoughts?
I think pb_api.py should be renamed. If you are thinking about renaming the class to Parameters, you could rename the module to parameters.py. However, this could be a little confusing since there's already a parametersbase.py. On the other hand, I don't think anyone will directly use parametersbase.py, so it may not be too much of an issue. If you rename it to parameters.py, people would use it like:
from ogusa.parameters import Parameters
I think that makes more sense than
from ogusa.ogusa_api import Parameters
I hesitate to name the module psl_api.py because I think it would be better to think of the API as a good way for people to interact with your project in general and not something that's defined by some other entity.
What other names are you thinking?
OK: we decided to go with parameters.py as the module and to keep Specifications as the class.
|
gharchive/issue
| 2018-10-22T22:31:04 |
2025-04-01T04:35:19.185886
|
{
"authors": [
"hdoupe",
"jdebacker"
],
"repo": "open-source-economics/OG-USA",
"url": "https://github.com/open-source-economics/OG-USA/issues/410",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2056402447
|
Set the client workspace directory path into PhpCommand
Related to issue #50
TL;DR: Pint requires access to the current project path using getcwd(). The command must be executed within the project itself.
I added variables to the getPintCommand() method in the ModuleResolver.ts file, at line 48:
const cmd = getWorkspaceConfig('executablePath', path.posix.join(...DEFAULT_EXEC_PATH));
// Strip the configured command from the resolved executable path to obtain
// the client workspace directory, then pass it to PhpCommand as the cwd.
const cwd = executable.replace(cmd, '');
return new PhpCommand(cmd, await this.getPintConfigAsArgs(workspaceFolder, input), cwd);
instead of
return new PhpCommand(executable, await this.getPintConfigAsArgs(workspaceFolder, input));
This way, even in a multi-workspace environment, the path to the client workspace directory is respected.
I'll merge this so I can test it with all the rest and release
Thanks @mho22 !
|
gharchive/pull-request
| 2023-12-26T13:11:48 |
2025-04-01T04:35:19.191391
|
{
"authors": [
"d8vjork",
"mho22"
],
"repo": "open-southeners/vscode-laravel-pint",
"url": "https://github.com/open-southeners/vscode-laravel-pint/pull/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
628902341
|
Fails to import celestial bodies except Earth [Python]
In Python, trying to import celestial bodies other than Earth seems to fail.
Indeed, neither Moon nor Sun shows up in the list of attributes.
Good catch!
Indeed, other celestial bodies are available here (C++ side):
https://github.com/open-space-collective/open-space-toolkit-physics/tree/master/src/OpenSpaceToolkit/Physics/Environment/Objects/CelestialBodies
But Python mapping is missing:
https://github.com/open-space-collective/open-space-toolkit-physics/tree/master/bindings/python/src/OpenSpaceToolkitPhysicsPy/Environment/Objects/CelestialBodies
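As a quick Python-side check, something like the following should make the gap visible; this is a sketch, and the module path below is an assumption inferred from the toolkit's naming conventions rather than verified documentation:

import ostk.physics.environment.objects.celestial_bodies as celestial_bodies

# With only Earth bound, Moon and Sun will be absent from this listing.
print([name for name in dir(celestial_bodies) if not name.startswith('_')])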
Can you create a PR?
|
gharchive/issue
| 2020-06-02T05:02:56 |
2025-04-01T04:35:19.194480
|
{
"authors": [
"lucas-bremond",
"robinpdm"
],
"repo": "open-space-collective/open-space-toolkit-physics",
"url": "https://github.com/open-space-collective/open-space-toolkit-physics/issues/48",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
642544171
|
getCurrentSpan() returns undefined when a span is started
Please answer these questions before submitting a bug report.
What version of OpenTelemetry are you using?
"@opentelemetry/api": "^0.8.3",
"@opentelemetry/context-base": "^0.9.0",
"@opentelemetry/node": "^0.8.3",
"@opentelemetry/plugin-express": "^0.8.0",
"@opentelemetry/plugin-http": "^0.8.3",
"@opentelemetry/plugin-https": "^0.8.3",
"@opentelemetry/plugin-pg": "^0.8.0",
"@opentelemetry/plugin-pg-pool": "^0.8.0",
"@opentelemetry/tracing": "^0.8.3",
What version of Node are you using?
v10.18.1
What did you do?
If possible, provide a recipe for reproducing the error.
const parentSpan = tracer.startSpan('main');
console.log(tracer.getCurrentSpan())
getCurrentSpan() returns undefined, even though the main span started on the previous line.
What did you expect to see?
tracer.getCurrentSpan() not returning undefined.
What did you see instead?
undefined
Additional context
I've got quite a large codebase that I want to instrument with opentelemetry.
I'd like to avoid passing parent spans down the call stack explicitly because that would require me to change all function signatures.
I expected const span = tracer.startSpan('name') to inject the parent if not provided.
Instead it seems like the only way for a span to be linked to its parent is const span = tracer.startSpan('name', { parent }).
I tried getCurrentSpan(), but it's always undefined.
Would be great if you could advise on how to proceed.
Thank you
I guess you didn't register a ContextManager. How did you configure the tracer exactly? Could you show the full code used? See this example
Hi @vmarchaud, thanks for the quick response, it's very appreciated!
I do call register. register adds and enables an instance of AsyncHooksContextManager.
Here's a simple script which I run with ts-node.
const opentelemetry = require('@opentelemetry/api');
const { NodeTracerProvider } = require('@opentelemetry/node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/tracing');
const { TraceExporter } = require('@google-cloud/opentelemetry-cloud-trace-exporter');

let tracer;
let exporter;

export function initialise() {
  const projectId = 'HIDDEN'; // I use Google Cloud Trace
  const provider = new NodeTracerProvider();
  exporter = new TraceExporter({ projectId: projectId });
  provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
  provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
  provider.register();
  opentelemetry.trace.setGlobalTracerProvider(provider);
  tracer = opentelemetry.trace.getTracer('basic');
}

function doWork() {
  const span = tracer.startSpan('doWork');
  for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) {
    // empty
  }
  span.setAttribute('key', 'value');
  span.addEvent('invoking doWork');
  span.end();
}

function run() {
  initialise();
  const parentSpan = tracer.startSpan('main');
  console.log('Current span');
  console.log(tracer.getCurrentSpan());
  for (let i = 0; i < 10; i += 1) {
    doWork();
  }
  parentSpan.end();
}

run();
console.log(tracer.getCurrentSpan()) prints undefined.
Also, none of the child spans link to the parent.
If I pass parentSpan to doWork, then child spans do link to their parent. Like so
// ...
function doWork(parent) {
const span = tracer.startSpan('doWork', { parent })
for (let i = 0; i <= Math.floor(Math.random() * 40000000); i += 1) {
// empty
}
span.setAttribute('key', 'value');
span.addEvent('invoking doWork');
span.end();
}
function run() {
initialise()
const parentSpan = tracer.startSpan('main');
console.log('Current span')
console.log(tracer.getCurrentSpan())
for (let i = 0; i < 10; i += 1) {
doWork(parentSpan);
}
parentSpan.end();
}
// ...
But it does not happen via the context.
I suspect, but it's just a guess, that the child spans not linking to the parent (via the context) and getCurrenctSpan() returning undefined could be the same issue.
Unless I am doing something completely wrong in the script.
In any case, would you be able to advice on how best to proceed?
Thank you!
startSpan does not add the span to the context; it only creates a new span.
const parentSpan = tracer.startSpan('main');
// withSpan makes parentSpan the active span for the duration of the callback,
// so getCurrentSpan() (and any child startSpan calls) can see it.
tracer.withSpan(parentSpan, () => {
  console.log(tracer.getCurrentSpan());
});
Yes, that's right. You can read more in API Documentation: startSpan and withSpan.
Ok, thanks guys! I'm going to close this.
Does getNodeAutoInstrumentations, which automatically enables HttpInstrumentation, wrap every incoming HTTP server request with a request hook that will auto-start/create a root span if the request comes in with no baggage headers, or not?
|
gharchive/issue
| 2020-06-21T11:22:52 |
2025-04-01T04:35:19.245939
|
{
"authors": [
"brandonros",
"dyladan",
"mayurkale22",
"rbruggem",
"vmarchaud"
],
"repo": "open-telemetry/opentelemetry-js",
"url": "https://github.com/open-telemetry/opentelemetry-js/issues/1228",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
772659043
|
Tests failing in tip of master
What version of OpenTelemetry are you using?
the tip of master. Or a2304c9a82b77c762ae5df0f8f8621b4e2ab5ecb
What version of Node are you using?
% node -v
v15.4.0
Please provide the code you used to setup the OpenTelemetry SDK
npm install && npm run compile && npm test
What did you do?
1. Cloned the repo
2. Followed the instructions to get tests running in https://github.com/open-telemetry/opentelemetry-js/blob/master/CONTRIBUTING.md#development
3. npm test
What did you expect to see?
The tests passing.
What did you see instead?
awsEksDetector
on successful request
✓ should return an aws_eks_instance_resource
✓ should return a resource with clusterName attribute without cgroup file
✓ should return a resource with container ID attribute without a clusterName
✓ should return a resource with clusterName attribute when cgroup file does not contain valid Container ID
✓ should return an empty resource when not running on Eks
✓ should return an empty resource when k8s token file does not exist
✓ should return an empty resource when containerId and clusterName are invalid
on unsuccesful request
✓ should throw when receiving error response code
✓ should return an empty resource when timed out
1) should return an empty resource when timed out
25 passing (1s)
1 failing
1) awsEksDetector
on unsuccesful request
should return an empty resource when timed out:
Uncaught Error: Failed to load page, status code: 404
at IncomingMessage.<anonymous> (src/detectors/AwsEksDetector.ts:79:55)
at IncomingMessage.emit (node:events:376:20)
at endReadableNT (node:internal/streams/readable:1295:12)
at processTicksAndRejections (node:internal/process/task_queues:80:21)
at runNextTicks (node:internal/process/task_queues:62:3)
at processImmediate (node:internal/timers:436:9)
Additional context
the "error" in the test is exactly the expected error we see in the test file
const expectedError = new Error('Failed to load page, status code: 404');
which suggests to me that either:
0.a. this is not how you're actually supposed to do assert-exception-is-thrown in this test framework, or
0.b. the tests have some non-automatic & undocumented dependency that i don't have (e.g. the author of the package has a more recent version of mocha on his personal machine, and that's what runs for him rather than the version the package explicitly depends on).
If I attempt to work around this by doing
- it('should return an empty resource when timed out', async () => {
+ xit('should return an empty resource when timed out', async () => {
In spite of the fact that I had only 1 error above, I now get
awsEksDetector
on successful request
✓ should return an aws_eks_instance_resource
✓ should return a resource with clusterName attribute without cgroup file
✓ should return a resource with container ID attribute without a clusterName
✓ should return a resource with clusterName attribute when cgroup file does not contain valid Container ID
✓ should return an empty resource when not running on Eks
✓ should return an empty resource when k8s token file does not exist
✓ should return an empty resource when containerId and clusterName are invalid
on unsuccesful request
✓ should throw when receiving error response code
- should return an empty resource when timed out
24 passing (1s)
1 pending
----------|---------|----------|---------|---------|-------------------
File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s
----------|---------|----------|---------|---------|-------------------
All files | 0 | 0 | 0 | 0 |
index.ts | 0 | 0 | 0 | 0 |
----------|---------|----------|---------|---------|-------------------
lerna ERR! npm run test stderr:
/home/dmr/job/repo/bukkitz/vendor/opentelemetry-js/packages/opentelemetry-resource-detector-aws/node_modules/mocha/lib/runner.js:911
throw err;
^
Error: EKS metadata api request timed out.
which is again the error we were trying to expect, not to trigger.
We currently only test and officially support LTS versions of node. We also test on 8. Do you have a requirement to use node 15?
We should definitely fix this issue, though; the test is actually broken. On my machine the detection never throws:
it('should return an empty resource when timed out', async () => {
  const expectedError = new Error('Failed to load page, status code: 404');
  fileStub = sandbox
    .stub(AwsEksDetector, 'fileAccessAsync' as any)
    .resolves();
  readStub = sandbox
    .stub(AwsEksDetector, 'readFileAsync' as any)
    .resolves(correctCgroupData);
  getCredStub = sandbox
    .stub(awsEksDetector, '_getK8sCredHeader' as any)
    .resolves(k8s_token);
  const scope = nock('https://' + K8S_SVC_URL)
    .persist()
    .get(AUTH_CONFIGMAP_PATH)
    .matchHeader('Authorization', k8s_token)
    .reply(404, () => new Error());

  let thrown = false;
  try {
    await awsEksDetector.detect({
      logger: new NoopLogger(),
    });
  } catch (err) {
    thrown = true;
    assert.deepStrictEqual(err, expectedError);
  }
  assert.ok(thrown); // <= this assertion fails on my machine (v14)
  scope.done();
});
The name also mentions timeouts but the test doesn't appear to deal with any timing
@dyladan strictly speaking, we only have a hard requirement to use a version of node that supports async_hooks, so LTS should be okay. However, the package in question (and all the other packages I've seen) have
"engines": {
"node": ">=8.0.0"
}
I studied math in college & I can confirm that 15.4.0 >= 8.0.0.
I'm not saying that's a bug or that you need to change your engines declarations. But a user could be forgiven for assuming you support 15.x.x releases.
@dyladan you said
The name also mentions timeouts but the test doesn't appear to deal with any timing
I'm guessing what's going on is that the above error is what occurs when there's a timeout in EKS. It's not that the test involves timing; he's just having it immediately trigger the error that a timeout would trigger.
What (I'm guessing) the test is about is making sure that if this error occurs, the resource we detect isn't left in some incorrect garbage state: if it got half the info and then timed out, the result should be empty, not have half the fields filled out. Or something like that (I don't really know the resource detector specifically).
|
gharchive/issue
| 2020-12-22T05:44:34 |
2025-04-01T04:35:19.255555
|
{
"authors": [
"dradetsky",
"dyladan"
],
"repo": "open-telemetry/opentelemetry-js",
"url": "https://github.com/open-telemetry/opentelemetry-js/issues/1776",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|