id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
2570757872 | [BUG]: Unable to run migrations using drizzle-kit 0.25.0
What version of drizzle-orm are you using?
0.33.0
What version of drizzle-kit are you using?
0.25.0
Describe the Bug
pnpm --package=drizzle-kit --package=drizzle-orm --package=pg -c dlx 'drizzle-kit migrate'
gives
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './casing' is not defined by "exports" in /Users/marc.neander/Library/Caches/pnpm/dlx/3zgskc56eje2wm6mx5ky2kqzoe/192677b77f9-1747/node_modules/drizzle-orm/package.json
Expected behavior
No response
Environment & setup
No response
Can't reproduce, did you try to force reinstall node_modules?
Happens in CI without any cache
Encountered the same problem. drizzle-kit 0.25.0 only works correctly with drizzle-orm 0.34.0. Update the version of drizzle-orm to resolve the issue
Can confirm that this solves it. Had a bad break running migrations with the latest packages in between orm and kit releases, I guess.
Can confirm this is not resolved.
Using pnpm
drizzle-orm@0.34.1
drizzle-kit@0.25.0
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './casing' is not defined by "exports" in C:[path to project]/node_modules\drizzle-orm\package.json
Cleared the pnpm cache and forcefully removed and updated modules, all to no avail.
Also getting this after upgrading to the following versions using pnpm:
drizzle-orm@0.34.1
drizzle-kit@0.25.0
I was able to reproduce this by upgrading a single package in my pnpm monorepo to the following versions:
drizzle-orm@0.34.1
drizzle-kit@0.25.0
After I upgraded every app in the monorepo to the same versions the issue goes away.
Thank you! I faced two different problems when I was trying to create migrations; I tried different versions of these packages, and this version combination works fine.
I had this bug. I fixed it. Here's how.
I had a monorepo with pnpm with multiple different versions of drizzle installed. I removed drizzle as a dependency from modules that didn't need it. This left only one version of drizzle, up to date, in only one package. I deleted node_modules, reinstalled everything, and the problem vanished.
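For anyone hitting the same thing, a rough sketch of the clean-up steps described above, assuming a pnpm workspace (the --filter target is a placeholder, adjust it to the package that actually uses drizzle):
pnpm why drizzle-orm                                                  # show which workspace packages pull in which drizzle-orm versions
pnpm --filter <your-app> add drizzle-orm@0.34.1 drizzle-kit@0.25.0    # align the versions in the package that really needs drizzle
rm -rf node_modules && pnpm install                                   # reinstall from scratch so only one drizzle-orm copy remains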
| gharchive/issue | 2024-10-07T15:16:43 | 2025-04-01T06:38:27.525842 | {
"authors": [
"AndriiSherman",
"Krinopotam",
"john-griffin",
"kylesloper",
"marc-neander",
"notcodev",
"ryanxcharles"
],
"repo": "drizzle-team/drizzle-orm",
"url": "https://github.com/drizzle-team/drizzle-orm/issues/3057",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1677403629 | feat: YAML Diff
Tool that compares two YAML texts and shows the diff
General diff functionality would be nice too, not just for YAMLs :)
@drodil good idea! I'll see if I can make it generic :)
PR to solve this issue: #40
| gharchive/issue | 2023-04-20T20:43:07 | 2025-04-01T06:38:27.543917 | {
"authors": [
"aaronnickovich",
"drodil"
],
"repo": "drodil/backstage-plugin-toolbox",
"url": "https://github.com/drodil/backstage-plugin-toolbox/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
The sslsplit child process continues, even if the master process has terminated.
See example to reproduce it:
sslsplit -p sslsplit.pid -P http 127.0.0.1 8080 www.example.com 80&
ps x | grep sslsplit | grep -v grep
3018 s002 S 0:00.00 sslsplit -p sslsplit.pid -P http 127.0.0.1 8080 www.example.com 80
3019 s002 S 0:00.00 sslsplit -p sslsplit.pid -P http 127.0.0.1 8080 www.example.com 80
kill $(cat sslsplit.pid)
ps x | grep sslsplit | grep -v grep
3019 s002 S 0:00.00 sslsplit -p sslsplit.pid -P http 127.0.0.1 8080 www.example.com 80
sslsplit -V
SSLsplit 0.5.0 (built 2016-06-25)
Copyright (c) 2009-2016, Daniel Roethlisberger <daniel@roe.ch>
http://www.roe.ch/SSLsplit
Build info: OSX:10.10.5 XNU:2782.40.9:sw_vers:2782.50.1 V:FILE
Features: -DHAVE_DARWIN_LIBPROC -DHAVE_PF
NAT engines: pf*
Local process info support: yes (Darwin libproc)
compiled against OpenSSL 1.0.2h 3 May 2016 (1000208f)
rtlinked against OpenSSL 1.0.2h 3 May 2016 (1000208f)
OpenSSL has support for TLS extensions
TLS Server Name Indication (SNI) supported
OpenSSL is thread-safe with THREADID
Using SSL_MODE_RELEASE_BUFFERS
SSL/TLS protocol availability: ssl3 tls10 tls11 tls12
SSL/TLS algorithm availability: RSA DSA ECDSA DH ECDH EC
OpenSSL option availability: SSL_OP_NO_COMPRESSION SSL_OP_NO_TICKET SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION SSL_OP_TLS_ROLLBACK_BUG
compiled against libevent 2.0.22-stable
rtlinked against libevent 2.0.22-stable
8 CPU cores detected
I will take a look. Any reason why you are not using daemon mode? (-d)
I can reproduce the problem. The PID file always points to the parent process. When the parent process is killed, the child process does not terminate. When on the other hand the child process is killed, all is well, the parent process gets notified and terminates gracefully.
The above commits fix a number of issues that would lead to the parent process being stuck in wait() while still having signals queued to forward to the child process. The notable commits are abc86df, adding SIGTERM which was missing in the list of signals forwarded to the client process, and 5ece01a, preventing the server from being stuck in wait() after all privsep client sockets send a close message before the child process actually terminates.
This fixes the issue for me. Please test and report back if it resolved the issue for you too.
Works for me too.
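For illustration only, a minimal C sketch of the parent-forwards-signals-to-child pattern the fix relies on (simplified, not the actual sslsplit source):
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
static pid_t child_pid = -1;
/* Forward any received signal (SIGTERM included) to the child process. */
static void forward_signal(int sig) {
    if (child_pid > 0)
        kill(child_pid, sig);
}
int main(void) {
    child_pid = fork();
    if (child_pid == 0) {          /* child: wait until a forwarded signal arrives */
        pause();
        _exit(0);
    }
    signal(SIGTERM, forward_signal);
    signal(SIGINT, forward_signal);
    waitpid(child_pid, NULL, 0);   /* parent exits only after the child has terminated */
    return 0;
}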
| gharchive/issue | 2016-06-26T10:09:23 | 2025-04-01T06:38:27.546800 | {
"authors": [
"cgroschupp",
"droe"
],
"repo": "droe/sslsplit",
"url": "https://github.com/droe/sslsplit/issues/137",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
863587904 | PluginHandle configuration does not work
Describe the bug
Configure PluginHandle .
PluginName: divide
FieldName: timeout
DefaultValue: 10000
I hoped that when the rule was registered the timeout would be 10000, but it is still 3000.
In plugin handle management, I configured the timeout of the divide plugin to 10000 (the system default is 3000). After the rule is registered, the timeout of the rule is still 3000. I originally hoped that registered rules would use the timeout of 10000 configured in plugin handle management.
Environment
Soul version(s): 2.3.0
Steps to reproduce
Expected behavior
I will follow up on this question. Keep in contact!
We will fix it in the next version.
In the current version, interface registration does not read the plugin attribute value from the pluginhandle table. Based on our internal evaluation, this function will be added if necessary.
Does this need to be optimized? @yu199195 @dengliming
After discussion, support is not considered for the time being, and the issue will be closed in one day.
| gharchive/issue | 2021-04-21T08:29:00 | 2025-04-01T06:38:27.557063 | {
"authors": [
"PandaUncle",
"nuo-promise",
"zendwang"
],
"repo": "dromara/shenyu",
"url": "https://github.com/dromara/shenyu/issues/1294",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
281142991 | Package integration test, generate .deb and .rpm
This moves from checkinstall to fpm which allows us to generate multiple packages, one for the library and one for the integration tests.
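For context, a typical fpm invocation looks roughly like this (illustrative only; the package names, versions, and paths are placeholders, not the actual packaging script):
fpm -s dir -t deb -n dronecore -v 0.1.0 build/install/=/usr
fpm -s dir -t rpm -n dronecore-integration-tests -v 0.1.0 build/tests/=/usr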
Coverage remained the same at 16.051% when pulling ad6d8611e86c2659645de870edee520ee843ee8d on integration-tests-exe into 7225b768e006d2a01aea0c57289889ae56c96a2f on develop.
Coverage remained the same at 16.051% when pulling cdf884904a83638a1f5bada13277b8fb5bafdca4 on integration-tests-exe into 7225b768e006d2a01aea0c57289889ae56c96a2f on develop.
Coverage remained the same at 16.051% when pulling 96c2dae692fbee72cb9203f5892306f063b42183 on integration-tests-exe into 7225b768e006d2a01aea0c57289889ae56c96a2f on develop.
Coverage remained the same at 16.051% when pulling 251c6ae0d7f561169f89e50b3d912359af20bd7a on integration-tests-exe into 7225b768e006d2a01aea0c57289889ae56c96a2f on develop.
Coverage decreased (-0.3%) to 15.78% when pulling e18736b99d02c3256632654a7bb8d3e5dbb781c1 on integration-tests-exe into 7225b768e006d2a01aea0c57289889ae56c96a2f on develop.
| gharchive/pull-request | 2017-12-11T19:41:32 | 2025-04-01T06:38:27.565482 | {
"authors": [
"coveralls",
"julianoes"
],
"repo": "dronecore/DroneCore",
"url": "https://github.com/dronecore/DroneCore/pull/198",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
285139389 | mavproxy command not found
Hi,
I installed everything properly and I keep getting this error:
bash: mavproxy.py: command not found
Do you know what's up?
I have the same problem. I use ubuntu 17 version. I use the following instruction to install: https://ardupilot.github.io/MAVProxy/html/getting_started/download_and_installation.html#linux
| gharchive/issue | 2017-12-29T18:56:35 | 2025-04-01T06:38:27.573664 | {
"authors": [
"duckhang9113",
"jyopari"
],
"repo": "dronekit/dronekit-python",
"url": "https://github.com/dronekit/dronekit-python/issues/778",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1133663886 | Disable configuration caching on CreateFocusSettingsTask
Partially handles #2 but doesn't implement actual support for configuration caching. This instead marks this task as incompatible with it, allowing Gradle to gracefully fall back to regular mode when this task runs rather than require users to manually disable CC first.
Gradle 7.5 will introduce APIs for depending on the resolved dependency graph in a CC-friendly way, but that's not ready yet. Example of that can be found here though: https://github.com/adammurdoch/dependency-graph-as-task-inputs/blob/main/plugins/src/main/java/TestPlugin.java#L31-L35
Note this also raises the minimum Gradle version to 7.4 unless you want to dynamically gate the API call at runtime on the gradle version
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You have signed the CLA already but the status is still pending? Let us recheck it.
The CLA link leads me to an inactionable screen (I can't appear to click anything or input anything?)
Weird, seems to be loading fine for me. Try again?
Worked on mobile 👍
Raising the APIs seems fine; if we run into problems internally I'll follow up and gate
Thanks Zac!
@digitalbuddha @ZacSweers it would be great if you could gate this call for Gradle versions below 7 for the poor folks who haven't got to the latest yet.
I think that you should just update. I know the Gradle ecosystem has a long track record of supporting old versions for long periods of time, and I think that's significantly held back the Gradle ecosystem's evolution. If you care enough about your build QoL to adopt a tool like this, you should care enough to update it regularly IMO
+1 Zac, configuration caching and project isolation compatibility is simply not something that will be possible to adopt by plugins without ugly hacks. Let's keep this modern!
| gharchive/pull-request | 2022-02-12T07:10:19 | 2025-04-01T06:38:27.589574 | {
"authors": [
"CLAassistant",
"ZacSweers",
"digitalbuddha",
"eleventigerssc",
"liutikas"
],
"repo": "dropbox/focus",
"url": "https://github.com/dropbox/focus/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1603814259 | stub Select.filter and Select.filter_by
PR for https://github.com/dropbox/sqlalchemy-stubs/issues/254
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. dzhak seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
See https://github.com/dropbox/sqlalchemy-stubs/pull/256
| gharchive/pull-request | 2023-02-28T21:07:30 | 2025-04-01T06:38:27.596164 | {
"authors": [
"CLAassistant",
"denyszhak"
],
"repo": "dropbox/sqlalchemy-stubs",
"url": "https://github.com/dropbox/sqlalchemy-stubs/pull/255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
113113754 | Restore update locations when a socket reparse fails
Prior to this change, socket reparse would modify the updates array of locations, but not restore the original values if the parent fails to parse with the new socket value. This would cause Container#getFromLocation to throw after falling off the end of the linked list.
The fix is to save the update locations before attempting to parse the socket as a new block, and restore them if the parent reparse fails.
This is a better fix for the JavaScript function sockets bug than https://github.com/droplet-editor/droplet/pull/130, which only treats the symptom but not the root cause.
Do you have enough understanding now to update the comment at the beginning of the method?
# Don't reparse sockets. When we reparse sockets,
# reparse them first, then try reparsing their parent and
# make sure everything checks out.
Changes lgtm. Good job tracking this down.
| gharchive/pull-request | 2015-10-23T22:57:55 | 2025-04-01T06:38:27.598512 | {
"authors": [
"Bjvanminnen",
"joshlory"
],
"repo": "droplet-editor/droplet",
"url": "https://github.com/droplet-editor/droplet/pull/131",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
233845360 | Upload Binary Error - Java heap space for MediaType.APPLICATION_OCTET_STREAM
Hi all,
I'm trying to upload a zip of about 100 MB stored in a MySQL table (only id and blob)
this is my code:
@Path("/downloadlatestinstaller/{type}")
@GET
@Timed
@UnitOfWork
public Response downloadLastInstaller(@PathParam("type") String type) {
try {
LOGGER.info("Finding latest Installer...");
// read DB app
Optional<AppFile> res = Optional.empty();
res = appFileDAO.findLastIntranetWinInstaller();
if (res.isPresent()) {
AppFile a = res.get();
byte[] app = a.getBinary().getApp();
int length = app.length;
LOGGER.info("Found latest Installer ID= " + a.getId() + " - Name= " + a.getName() + " - Version= " + a.getVersion() + " - Date= " + a.getDateUpdated());
Response respo = Response.ok(app, MediaType.APPLICATION_OCTET_STREAM)
.header("Access-Control-Expose-Headers", HttpHeaders.CONTENT_DISPOSITION)
.header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + a.getVersion() + "_" + a.getName() + "\"")
.header(HttpHeaders.CONTENT_LENGTH, length)
.encoding(UTF_8)
.build();
LOGGER.info(respo.getHeaders().toString());
return respo;
} else {
LOGGER.warn("Not found any installer type= " + type);
return Response.serverError().build();
}
} catch (Exception e) {
LOGGER.error(e.getLocalizedMessage(), e);
return Response.serverError().build();
}
}
On the response return I get this stack trace error:
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: Java heap space
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:489)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:427)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:388)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:341)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:228)
at io.dropwizard.jetty.NonblockingServletHolder.handle(NonblockingServletHolder.java:49)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1650)
at io.dropwizard.servlets.ThreadNameFilter.doFilter(ThreadNameFilter.java:34)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.handle(AllowedMethodsFilter.java:45)
at io.dropwizard.jersey.filter.AllowedMethodsFilter.doFilter(AllowedMethodsFilter.java:39)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
at org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at com.codahale.metrics.jetty9.InstrumentedHandler.handle(InstrumentedHandler.java:241)
at io.dropwizard.jetty.RoutingHandler.handle(RoutingHandler.java:52)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:521)
at io.dropwizard.jetty.BiDiGzipHandler.handle(BiDiGzipHandler.java:68)
at org.eclipse.jetty.server.handler.RequestLogHandler.handle(RequestLogHandler.java:56)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:564)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:317)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:279)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:110)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:124)
at org.eclipse.jetty.util.thread.Invocable.invokePreferred(Invocable.java:128)
at org.eclipse.jetty.util.thread.Invocable$InvocableExecutor.invoke(Invocable.java:222)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:294)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:672)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:590)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: Java heap space
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:278)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:260)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:509)
at org.glassfish.jersey.server.ServerRuntime$2.run(ServerRuntime.java:334)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:297)
at org.glassfish.jersey.internal.Errors.process(Errors.java:267)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:317)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:305)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1154)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:473)
... 43 common frames omitted
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3332)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:674)
at java.lang.StringBuilder.append(StringBuilder.java:208)
at java.util.Arrays.toString(Arrays.java:4446)
at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.extractLoggableRepresentation(PrimitiveByteArrayTypeDescriptor.java:63)
at org.hibernate.type.descriptor.java.PrimitiveByteArrayTypeDescriptor.extractLoggableRepresentation(PrimitiveByteArrayTypeDescriptor.java:26)
at org.hibernate.type.AbstractStandardBasicType.toLoggableString(AbstractStandardBasicType.java:296)
at org.hibernate.internal.util.EntityPrinter.toString(EntityPrinter.java:66)
at org.hibernate.internal.util.EntityPrinter.toString(EntityPrinter.java:109)
at org.hibernate.event.internal.AbstractFlushingEventListener.logFlushResults(AbstractFlushingEventListener.java:120)
at org.hibernate.event.internal.AbstractFlushingEventListener.flushEverythingToExecutions(AbstractFlushingEventListener.java:96)
at org.hibernate.event.internal.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:38)
at org.hibernate.internal.SessionImpl.doFlush(SessionImpl.java:1435)
at org.hibernate.internal.SessionImpl.managedFlush(SessionImpl.java:491)
at org.hibernate.internal.SessionImpl.flushBeforeTransactionCompletion(SessionImpl.java:3201)
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:2411)
at org.hibernate.engine.jdbc.internal.JdbcCoordinatorImpl.beforeTransactionCompletion(JdbcCoordinatorImpl.java:467)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.beforeCompletionCallback(JdbcResourceLocalTransactionCoordinatorImpl.java:146)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl.access$100(JdbcResourceLocalTransactionCoordinatorImpl.java:38)
at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.commit(JdbcResourceLocalTransactionCoordinatorImpl.java:220)
at org.hibernate.engine.transaction.internal.TransactionImpl.commit(TransactionImpl.java:68)
at io.dropwizard.hibernate.UnitOfWorkAspect.commitTransaction(UnitOfWorkAspect.java:143)
at io.dropwizard.hibernate.UnitOfWorkAspect.afterEnd(UnitOfWorkAspect.java:82)
at io.dropwizard.hibernate.UnitOfWorkApplicationListener$UnitOfWorkEventListener.onEvent(UnitOfWorkApplicationListener.java:80)
at org.glassfish.jersey.server.internal.monitoring.CompositeRequestEventListener.onEvent(CompositeRequestEventListener.java:71)
at org.glassfish.jersey.server.internal.process.RequestProcessingContext.triggerEvent(RequestProcessingContext.java:226)
at org.glassfish.jersey.server.ContainerFilteringStage$ResponseFilterStage.apply(ContainerFilteringStage.java:188)
at org.glassfish.jersey.server.ContainerFilteringStage$ResponseFilterStage.apply(ContainerFilteringStage.java:163)
at org.glassfish.jersey.process.internal.Stages.process(Stages.java:171)
at org.glassfish.jersey.server.ServerRuntime$Responder.processResponse(ServerRuntime.java:442)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:434)
Any ideas? Is the problem Hibernate or the byte array in the response? For a zip of about 40 MB I have no problems.
Looks like a logging problem. Hibernate tries to log the entity, but it's too big to allocate in the heap. You could try to disable DEBUG logging for the org.hibernate.event package and see if it helps.
@arteam yesss!!! I'm stupid, I had the same problem some months ago but forgot about it!! Many thanks @arteam 👍
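For anyone landing here later, a sketch of what that tweak might look like in a Dropwizard config.yml (illustrative only; adjust the levels and logger names to your setup):
logging:
  level: INFO
  loggers:
    "org.hibernate.event": INFO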
| gharchive/issue | 2017-06-06T10:21:45 | 2025-04-01T06:38:27.603493 | {
"authors": [
"arteam",
"charbonnier666"
],
"repo": "dropwizard/dropwizard",
"url": "https://github.com/dropwizard/dropwizard/issues/2070",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1352981704 | Remove redundant outputPatternAsHeader logging setting from tests
This defaults to false - we don't need to set it to false explicitly.
See https://logback.qos.ch/manual/encoders.html#outputPatternAsHeader
If you'd rather merge into 2.0.x, close this and look at https://github.com/dropwizard/dropwizard/pull/5769 instead
@rhowe Thanks a lot for the cleanup. Solid work as always. heart
This one was @zUniQueX really, I just pressed the buttons
| gharchive/pull-request | 2022-08-27T08:18:35 | 2025-04-01T06:38:27.606795 | {
"authors": [
"rhowe"
],
"repo": "dropwizard/dropwizard",
"url": "https://github.com/dropwizard/dropwizard/pull/5770",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
844020051 | DB5 dataset for Protein Interface Prediction is empty
Hi, thanks for this amazing and comprehensive work!
After I download the full dataset from https://www.atom3d.ai/pip.html, I load the DB5 dataset and print its length, which is 0. The DIPS part of the data is correct.
Thanks for bug report! We've fixed and re-uploaded the DB5 dataset.
| gharchive/issue | 2021-03-30T03:08:13 | 2025-04-01T06:38:27.608597 | {
"authors": [
"psuriana",
"xinyuan-huang"
],
"repo": "drorlab/atom3d",
"url": "https://github.com/drorlab/atom3d/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
196228807 | Lenspop Magnitude Calculation Demo
I have one question about the pull request. May I submit the request when I think the new code is ready to be reviewed, or every time I commit changes to my master branch? I am making sure that I commit all the changes to GitHub whenever I change the files on my computer.
I started working on making the diagnostic plots after December 10th, so I had to make this new branch named color to only include the commits that I made before December 10th. This branch has the first version of the lenspop demo notebook, and I am working on my master branch to make the diagnostic plots in addition to fixing some of the features in the first demo notebook.
Thanks!
p.s: I would not want this to be merged with the base OM10 directory yet!
@jennykim1016 Looking at this notebook again, in the light of your more recent work on the diagnostic plots, I wonder if we can use it to demonstrate the difference between the star forming and old stellar population SEDs, as well as showing the contrast with a quasar at z=2. I'll merge this PR and play around a bit, if I can find some time! :-) Nice to have a basic demo in place though. Thanks!
| gharchive/pull-request | 2016-12-17T16:02:38 | 2025-04-01T06:38:27.618906 | {
"authors": [
"drphilmarshall",
"jennykim1016"
],
"repo": "drphilmarshall/OM10",
"url": "https://github.com/drphilmarshall/OM10/pull/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
210123795 | How use clearsign
The command to sign my file asks me for a secret code.
gpg --clearsign file.txt
How do I proceed with node-gpg to input my secret code?
Thanks
This is similar to #20 I believe, where you need to input a passphrase.
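For reference, a hedged example of supplying the passphrase non-interactively with the plain gpg CLI (not the node-gpg API; with GnuPG 2.1+ the loopback pinentry mode is typically required):
gpg --batch --yes --pinentry-mode loopback --passphrase "my secret code" --clearsign file.txt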
| gharchive/issue | 2017-02-24T18:31:38 | 2025-04-01T06:38:27.630804 | {
"authors": [
"BenoitClaveau",
"freewil"
],
"repo": "drudge/node-gpg",
"url": "https://github.com/drudge/node-gpg/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
247903776 | Update to use latest Apache RocketMQ
RocketMQ has been incubating at Apache for a while and a new version has been released since. This PR updates the plugin to be compatible with the latest release of Apache RocketMQ.
Please fix the build failure.
@lizhanhui, the error seems to be an unused import.
[ERROR] /home/travis/build/druid-io/druid/extensions-contrib/druid-rocketmq/src/main/java/io/druid/firehose/rocketmq/RocketMQFirehoseFactory.java:38:8: Unused import - java.util.Iterator. [UnusedImports]
@lizhanhui
| gharchive/pull-request | 2017-08-04T05:06:18 | 2025-04-01T06:38:27.632446 | {
"authors": [
"gianm",
"jihoonson",
"lizhanhui",
"vongosling"
],
"repo": "druid-io/druid",
"url": "https://github.com/druid-io/druid/pull/4648",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
204459381 | Test behaviour of newly generated app
Adopt the approach (and much of the support code) taken by hanami in their CLI testing. Now, instead of having a "dummy" app baked into our spec/ dir, we generate a new dry-web-roda app completely from scratch, following the approach a user would take as closely as possible, including bundling all the gems for the app. With the app generated, our first test is to actually boot it up and then ensure a page renders as we expect. This should serve as a helpful smoke test for setup/configuration/dependency issues in the future.
Our next step here would be to throw some extra files into the generated project (migration, relation, repo, routes, extra view) to ensure the whole persistence layer works. (This would be for another PR and another day, though 😄)
Thanks @jodosha for the code and the blog posts sharing this approach!
| gharchive/pull-request | 2017-01-31T23:54:24 | 2025-04-01T06:38:27.695884 | {
"authors": [
"timriley"
],
"repo": "dry-rb/dry-web-roda",
"url": "https://github.com/dry-rb/dry-web-roda/pull/28",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2314060937 | add volumes:client support
commands:
drycc volumes:client put xxx.file vol://s1/etc
drycc volumes:client get vol://s1/etc/xxx.file
Components:
workflow-cli
controller
controller-go-sdk
Implementation:
generate a temporary filer pod
filer mounting volumes for operation
complete
| gharchive/issue | 2024-05-24T00:22:42 | 2025-04-01T06:38:27.698765 | {
"authors": [
"duanhongyi"
],
"repo": "drycc/workflow",
"url": "https://github.com/drycc/workflow/issues/53",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1419872979 | Service Layer added on full terms
More service layer features added.
Thank You @arpanghosh2416 for your contribution
| gharchive/pull-request | 2022-10-23T17:53:00 | 2025-04-01T06:38:27.719219 | {
"authors": [
"arpanghosh2416",
"mriganka56"
],
"repo": "dsc-iem/Tourist-Guiding-App-Hacktoberfest22",
"url": "https://github.com/dsc-iem/Tourist-Guiding-App-Hacktoberfest22/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1857187136 | Request Bard API Key
Name : Joms
Email: carganillo@gmail.com
Purpose : Educational Purposes only. Thank you! 😇
Hello,
The Bard API package is an unofficial Python package, so we cannot provide an official Google API key.
Please check https://github.com/dsdanielpark/Bard-API#google-palm-api for more information or reach out to Google Cloud services for an official request. As far as I know, currently, only some restricted whitelists have access to the API, so it seems like you are using this package unofficially.
Also, please refer to the Readme for sufficient information related to the Bard API package.
Thank you.
| gharchive/issue | 2023-08-18T19:23:22 | 2025-04-01T06:38:27.781020 | {
"authors": [
"carganillox",
"dsdanielpark"
],
"repo": "dsdanielpark/Bard-API",
"url": "https://github.com/dsdanielpark/Bard-API/issues/168",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1225163841 | Closing and Archiving the Project
This is an initial issue on working towards closing and archiving this project. If there are TODOs that you want to create or highlight, please do so and use the milestone.
Suggested Data Archiving -> 1 July 2022
Please see above @HerkulaasCombrink @shaze @dmackie @lrossouw @lizelgreyling @elolelo @anelda
You can add ideas for Archiving Prep here https://docs.google.com/document/d/13PkZ5bdyGF4T6kCVG8e58Znl4ieUA7hFiC4PgSMlt5E/edit?usp=sharing
Ask for editing permission.
@HerkulaasCombrink and Jonas are busy with this - in addition to the other team members.
We are starting by consolidating the issues, but then we might open new issues (if they are data related) and we are going to update any information, if needed.
Hello, I see that the NICD media alerts page stopped releasing covid-19 updates after 29 July 2022. There is an official dashboard which gives up-to-date stats, including by province and municipality. Can you please tell me how we can get those numbers in tabular format?
Hey @aidanhorn Can you maybe create a separate issue with details and pictures, and we can then share it and ask for any volunteers who can help with something that can fill in whatever is missing.
Hi @vukosim, I have looked at the linked official dashboard, and I don't see how to get historical numbers at a sub-national level. At least if we knew how to do that, then we would know how to ask for volunteer help.
@aidanhorn Actually found the ArcGIS API endpoint, so if someone can extract from there https://gis.nicd.ac.za/server/rest/
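As a starting point (untested against this server; standard ArcGIS REST conventions only), the services directory can usually be listed as JSON:
curl 'https://gis.nicd.ac.za/server/rest/services?f=json'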
| gharchive/issue | 2022-05-04T10:04:23 | 2025-04-01T06:38:27.787832 | {
"authors": [
"HerkulaasCombrink",
"aidanhorn",
"vukosim"
],
"repo": "dsfsi/covid19za",
"url": "https://github.com/dsfsi/covid19za/issues/970",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1055387537 | Performance
Hi @dsietz ,
Hope you don't mind but I took the liberty of doing a quick pass at improving performance.
I squeezed about 10% better performance from my testing with these tweaks.
BTW, I noticed that you do multi-threading in the library; is that something that can be exposed to the Data Sample Parser? Several qsv commands support multi-threading and have CLI options for it to take advantage of additional processors.
If so, let me know so I can create an enhancement issue for it.
Thanks again!
@jqnatividad Thank you very much for the performance improvements and sharing your code. I'm always open to code contributions and feedback. Your changes make sense and are more eloquent than what I had. Going forward, please make sure to create Pull Requests onto the Development branch so that I can integrate it with my own changes before pushing it up to Master.
Unfortunately, I don't have the time necessary to support continued development (enhancements) on this package, however if you'd like to join as a contributor of this package, you are more than welcome. Just let me know.
Hi @dsietz,
Thanks for your prompt reply and merging my enhancements.
And yes, I'd be more than happy to help maintain and enhance test-data-generation!
It fills an underserved need, and I can imagine several enhancements already.
| gharchive/pull-request | 2021-11-16T21:49:19 | 2025-04-01T06:38:27.792186 | {
"authors": [
"dsietz",
"jqnatividad"
],
"repo": "dsietz/test-data-generation",
"url": "https://github.com/dsietz/test-data-generation/pull/98",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
535779557 | Canvas art - take 2
This is a lot of fun for me, so I made another one :)
Wow! Are you signing autographs? :)
Excellent! Ah, if only this course were just about SVG... :)
| gharchive/pull-request | 2019-12-10T14:49:59 | 2025-04-01T06:38:27.813987 | {
"authors": [
"Ratiness",
"dstrekelj",
"foosball82"
],
"repo": "dstrekelj/algebra-front-end-developer-2019",
"url": "https://github.com/dstrekelj/algebra-front-end-developer-2019/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
137625509 | vantage-auth-basic missing
It looks like the repository no longer exists, and the pam repository is abandoned. I plan on implementing public/private-key authentication of my own; would you be interested in a repository link to add to the readme once it's done?
Sure! Thanks!
| gharchive/issue | 2016-03-01T16:41:21 | 2025-04-01T06:38:27.868119 | {
"authors": [
"dthree",
"looterz"
],
"repo": "dthree/vantage",
"url": "https://github.com/dthree/vantage/issues/50",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
436865989 | Reduce circular dependencies in esperanza
Moves ExtractValidatorPubKey and ExtractValidatorAddress to the esperanza/script.cpp
< Circular dependency: esperanza/checks -> esperanza/finalizationstate -> esperanza/checks
< Circular dependency: esperanza/finalizationstate -> validation -> esperanza/finalizationstate
< Circular dependency: consensus/tx_verify -> esperanza/finalizationstate -> validation -> consensus/tx_verify
< Circular dependency: esperanza/finalizationstate -> validation -> finalization/vote_recorder -> esperanza/finalizationstate
Force pushed to resolve conflicts.
| gharchive/pull-request | 2019-04-24T19:22:52 | 2025-04-01T06:38:27.869343 | {
"authors": [
"frolosofsky"
],
"repo": "dtr-org/unit-e",
"url": "https://github.com/dtr-org/unit-e/pull/1019",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2445518524 | Add the option in mA
Add the option to display in mA (milliampere): positive if the phone is plugged in and charging, or negative if it is unplugged and discharging. Also add an option for the icon update interval.
I initially used dynamic units, but never observed values small enough to actually display mA/mW/etc. This does overlap a bit with another request to increase precision in the main app view, so this might become more relevant.
I'd be very grateful if you could add this option, so it's easier to know how much the charger is charging and how much the phone is draining. Thanks for taking it into consideration.
I was going to request this. Thank you.
| gharchive/issue | 2024-08-02T18:13:27 | 2025-04-01T06:38:27.887190 | {
"authors": [
"HT-7",
"deaveipslon",
"dubrowgn"
],
"repo": "dubrowgn/wattz",
"url": "https://github.com/dubrowgn/wattz/issues/43",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2454366303 | security: GUC to block access to local filesystem
Discussed this a long time ago... but currently we allow DuckDB to read from the local filesystem. This is a security risk; the CSV reader is particularly easy to use here since it will read just about any plain text file.
This should instead be controllable via a GUC, default disabled, that can only be enabled by superuser.
Another option would be to restrict it to certain directories?
Yeah, could certainly do that as a further enhancement. My thought was you either are cool with accessing the filesystem (for testing, running on localhost, etc), or you're not (hosted / production environment).
I have production uses for local data (not just dev/testing) & so would like this restricted to certain directories, instead of just on/off
/etc/passwd is a world-readable CSV file 😅
ah, excellent!
| gharchive/issue | 2024-08-07T21:17:14 | 2025-04-01T06:38:27.908061 | {
"authors": [
"JohnHVancouver",
"wearpants",
"wuputah"
],
"repo": "duckdb/pg_duckdb",
"url": "https://github.com/duckdb/pg_duckdb/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1637263475 | Trigger translations
Task/Issue URL: https://app.asana.com/0/0/1204249256538142/f
Description
Steps to test this PR
Feature 1
[ ]
[ ]
UI changes
Before
After
!(Upload before screenshot)
(Upload after screenshot)
Closing in favor of using https://github.com/duckduckgo/Android/pull/3017 instead
| gharchive/pull-request | 2023-03-23T10:37:55 | 2025-04-01T06:38:27.912333 | {
"authors": [
"CDRussell"
],
"repo": "duckduckgo/Android",
"url": "https://github.com/duckduckgo/Android/pull/2996",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2492095779 | Replace "Always Ask" with "Ask Every Time"
Task/Issue URL:
Description
Steps to test this PR
Feature 1
[ ]
[ ]
UI changes
Before
After
!(Upload before screenshot)
(Upload after screenshot)
[!WARNING]
This pull request is not mergeable via GitHub because a downstack PR is open. Once all requirements are satisfied, merge this PR as a stack on Graphite.
Learn more
#4947 👈
#4663 : 10 other dependent PRs (#4748 , #4752 , #4780 and 7 others)
develop
This stack of pull requests is managed by Graphite. Learn more about stacking.
Join @CrisBarreiro and the rest of your teammates on Graphite
| gharchive/pull-request | 2024-08-28T13:33:18 | 2025-04-01T06:38:27.919988 | {
"authors": [
"CrisBarreiro"
],
"repo": "duckduckgo/Android",
"url": "https://github.com/duckduckgo/Android/pull/4947",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
945453515 | Bump nom to 0.7.0-alpha1, add Cargo.lock
Hi there!
Follow up on PR to https://github.com/duesee/abnf-core/pull/3. Reasons for the bump are described in the abnf-core PR.
Should be merged (if merged) only after abnf-core is updated and published to crates.io because of dependency on bumped version.
Tested on this branch with cargo git imports: https://github.com/damirka/abnf/tree/bump-nom-crate
Hey again :-) Sorry, I am a bit confused now... The PR title says "0.7.0-alpha1", the commit message says "0.6.5", but the change is to use nom "6.2.1"?
I thought we wanted to use nom 7.0.0-alpha1?
Could you please clarify what you want to achieve and clean up the commits? Maybe it would be easier to start with the abnf-core crate. Then we can come back here.
Done. Used force to rewrite commit message.
Also minor nit: Could you increase the version to 0.12.0 (due to the same reasons as in abnf-core) and use the commit message "Use nom 7.0.0-alpha1 and abnf-core 0.5.0 to avoid issues with dependencies"? Thank you!
I will merge this as soon as abnf-core 0.5.0 is published on crates.io.
@duesee done!
https://crates.io/crates/abnf/0.12.0 Hope that helps!
| gharchive/pull-request | 2021-07-15T14:34:31 | 2025-04-01T06:38:27.963711 | {
"authors": [
"damirka",
"duesee"
],
"repo": "duesee/abnf",
"url": "https://github.com/duesee/abnf/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2313823355 | apex: unit test
Unit test
[ ] ability to launch unit test (select or all)
[ ] display result of unit test : ApexTestRunResult global and ApexTestResult/ApexTestQueueItem by method
[ ] display code coverage after test run
cf https://github.com/sorenkrabbe/Chrome-Salesforce-inspector/issues/112
| gharchive/issue | 2024-05-23T20:51:48 | 2025-04-01T06:38:27.965711 | {
"authors": [
"dufoli"
],
"repo": "dufoli/Salesforce-Inspector-Advanced",
"url": "https://github.com/dufoli/Salesforce-Inspector-Advanced/issues/116",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
162792055 | loader.js is not merged in the vendor.js file
When using this addon with an updated version of ember-cli >= 2.3.0-beta.2, the loader.js module is not loaded in the vendor.js file, resulting in a JS error stating that define is not defined.
I believe this is due to an upstream change in ember-cli, which changed the loader.js dep to a node module in package.json. Previously loader.js was a bower component.
See: https://github.com/ember-cli/ember-cli/blob/v2.3.0-beta.2/CHANGELOG.md#230-beta2
Thanks for the heads-up - I updated the readme to say that an exact ember-cli version of 2.2.0 is needed. I'm open for PRs to improve this, but personally, I am not pushing this project much further and am rather looking at ember-engines.
Yeah I definitely agree moving forward with ember-engines is much better long-term. However, our team is desperate for lazy loading and need an implementation for the short term. This addon works great for that use case and doesn't require a ton of changes to the overall app structure.
I have a fix for this issue locally that works with newer versions of ember-cli and would be happy to submit a PR.
Happily accepting a PR! Especially if it supports both ember-cli versions
| gharchive/issue | 2016-06-28T21:29:13 | 2025-04-01T06:38:27.972153 | {
"authors": [
"duizendnegen",
"jbailey4"
],
"repo": "duizendnegen/ember-cli-lazy-load",
"url": "https://github.com/duizendnegen/ember-cli-lazy-load/issues/11",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
123058348 | user invites
I am running Streama behind an nginx SSL reverse proxy and the invite URLs come back as "Sorry, you're not authorized to view this page." I tried https://, removing http://, and also removing the :8080. Nothing works.
Did you make sure to include the https in your base path in your settings? Can you paste the URL of a working page and the invite URL?
Where is the base path in your settings?
On your installation, go to Admin -> Settings. For me that's
localhost:8080/streama/#/admin/settings
I feel stupid, thank you!
don't worry! did it fix the issue?
Closing until further notice
| gharchive/issue | 2015-12-19T03:28:37 | 2025-04-01T06:38:27.974660 | {
"authors": [
"dularion",
"zinnfamily"
],
"repo": "dularion/streama",
"url": "https://github.com/dularion/streama/issues/108",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1892028390 | Support eslint checks for packages that are imported but not declared
Because packageDir in import/no-extraneous-dependencies only merges the dependencies, the module that actually uses a package may not have declared it, which easily leads to missing dependencies after the project is published.
For example: the project has two modules, A and B, with no dependency between them. A declares lodash as a dependency, while B imports lodash without declaring it. With packageDir: [folder A, folder B], eslint passes the check, but B should actually be warned that the lodash dependency is not declared.
DEMO: https://stackblitz.com/edit/node-4meg4u?file=.eslintrc.js,package.json,packages%2FA%2Findex.js,packages%2FB%2Findex.js,packages%2FB%2Fpacakge.json
I found the solution here; there is a brief introduction to it.
It works well for a single repository, but when it comes to a monorepo it provides the packageDir option.
If it includes all of the repositories contained in the monorepo, it will read the dependencies from other packages, which is what we don't need.
So, in this situation, we can follow the solution in the issue I posted above.
Here is the solution, linked in https://github.com/dumlj/dumlj-build/pull/4
| gharchive/issue | 2023-09-12T09:01:07 | 2025-04-01T06:38:27.979601 | {
"authors": [
"cjfff"
],
"repo": "dumlj/dumlj-build",
"url": "https://github.com/dumlj/dumlj-build/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1378739991 | 🛑 Mention Test Bot is down
In 8933275, Mention Test Bot (https://backend.isbotdown.com/bots/mentiontestbot) was down:
HTTP code: 200
Response time: 119 ms
Resolved: Mention Test Bot is back up in 1c60b31.
| gharchive/issue | 2022-09-20T02:44:24 | 2025-04-01T06:38:28.004765 | {
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/452",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2304285122 | 🛑 Mention Robot is down
In 0dd0d2f, Mention Robot (https://backend.isbotdown.com/bots/mentionrobot) was down:
HTTP code: 200
Response time: 89 ms
Resolved: Mention Robot is back up in 73c5b4e after 1 hour, 49 minutes.
| gharchive/issue | 2024-05-18T19:36:54 | 2025-04-01T06:38:28.007421 | {
"authors": [
"durof"
],
"repo": "durof/status",
"url": "https://github.com/durof/status/issues/7465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
600829108 | Maintainer package should use rusk RPC directly
The StakeAutomaton uses the node RPC to handle automation of bid and stake. It should instead use rusk. This also means removing the reference to ristretto.Scalar from the code
Duplicate of #492. Closing
| gharchive/issue | 2020-04-16T08:00:16 | 2025-04-01T06:38:28.008505 | {
"authors": [
"autholykos"
],
"repo": "dusk-network/dusk-blockchain",
"url": "https://github.com/dusk-network/dusk-blockchain/issues/412",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2615617947 | Upg: stop fetching feature flags on all requests, do it on-demand
Description
Querying the feature flags is the sixth most CPU-expensive query according to GCP insights.
It's very fast but done 9 million times a day.
This refactors the code to fetch the feature flags only when needed.
If this is effective, we should then look into the other query done 9 million times (active subscription + plan), and then the other ones done 3-5 million times (such as fetching the groups, the memberships, etc.).
Risk
I tested locally. Worst case, the feature behind feature flags are disabled.
Deploy Plan
Deploy front
🔥 🔥
| gharchive/pull-request | 2024-10-26T07:12:55 | 2025-04-01T06:38:28.015768 | {
"authors": [
"Fraggle",
"flvndvd"
],
"repo": "dust-tt/dust",
"url": "https://github.com/dust-tt/dust/pull/8260",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2051468575 | Terminated by signal SIGSEGV
On NixOS, the relevant configuration is:
#input
nur.url = github:nix-community/NUR;
overlays = [
<...>
nur.overlay
];
environment.systemPackages = with pkgs; [
nur.repos.dustinblackman.oatmeal
];
When using cargo, openssl-sys fails to build.
Output of cargo install oatmeal
Updating crates.io index
Installing oatmeal v0.9.2
Updating crates.io index
warning: profile package spec `insta` in profile `dev` did not match any packages
Did you mean `idna`?
warning: profile package spec `similar` in profile `dev` did not match any packages
Compiling proc-macro2 v1.0.70
Compiling unicode-ident v1.0.12
Compiling libc v0.2.151
Compiling autocfg v1.1.0
Compiling cfg-if v1.0.0
Compiling pkg-config v0.3.28
Compiling futures-core v0.3.29
Compiling once_cell v1.18.0
Compiling serde v1.0.193
Compiling vcpkg v0.2.15
Compiling futures-task v0.3.29
Compiling log v0.4.20
Compiling slab v0.4.9
Compiling version_check v0.9.4
Compiling zerocopy v0.7.31
Compiling memchr v2.6.4
Compiling crc32fast v1.3.2
Compiling ahash v0.8.6
Compiling quote v1.0.33
Compiling itoa v1.0.10
Compiling syn v2.0.42
Compiling cc v1.0.83
Compiling openssl v0.10.61
Compiling allocator-api2 v0.2.16
Compiling pin-project-lite v0.2.13
Compiling bitflags v2.4.1
Compiling native-tls v0.2.11
Compiling simd-adler32 v0.3.7
Compiling rustversion v1.0.14
Compiling hashbrown v0.14.3
Compiling httparse v1.8.0
Compiling serde_json v1.0.107
Compiling openssl-src v300.2.1+3.2.0
Compiling adler v1.0.2
Compiling signal-hook-registry v1.4.1
Compiling mio v0.8.10
Compiling bytes v1.5.0
Compiling miniz_oxide v0.7.1
Compiling thiserror v1.0.51
Compiling openssl-sys v0.9.97
Compiling onig_sys v69.8.1
Compiling socket2 v0.5.5
Compiling num_cpus v1.16.0
Compiling lock_api v0.4.11
Compiling tracing-core v0.1.32
Compiling equivalent v1.0.1
Compiling futures-sink v0.3.29
Compiling fnv v1.0.7
Compiling futures-channel v0.3.29
Compiling parking_lot_core v0.9.9
Compiling tokio v1.33.0
Compiling indexmap v2.1.0
Compiling scopeguard v1.2.0
Compiling bitflags v1.3.2
Compiling futures-util v0.3.29
Compiling smallvec v1.11.2
Compiling foreign-types-shared v0.1.1
Compiling foreign-types v0.3.2
Compiling tracing v0.1.40
Compiling http v0.2.11
Compiling num-traits v0.2.17
Compiling futures-io v0.3.29
Compiling signal-hook v0.3.17
Compiling pin-utils v0.1.0
Compiling ryu v1.0.16
Compiling tinyvec_macros v0.1.1
Compiling powerfmt v0.2.0
Compiling tinyvec v1.6.0
Compiling deranged v0.3.10
Compiling flate2 v1.0.28
Compiling memoffset v0.6.5
Compiling unicode-width v0.1.11
Compiling percent-encoding v2.3.1
Compiling lazy_static v1.4.0
Compiling num_threads v0.1.6
Compiling time-core v0.1.2
Compiling openssl-probe v0.1.5
Compiling tokio-util v0.7.9
Compiling try-lock v0.2.5
Compiling want v0.3.1
Compiling time v0.3.31
Compiling form_urlencoded v1.2.1
Compiling serde_derive v1.0.193
Compiling openssl-macros v0.1.1
Compiling tokio-macros v2.1.0
Compiling thiserror-impl v1.0.51
Compiling futures-macro v0.3.29
Compiling h2 v0.3.22
Compiling unicode-normalization v0.1.22
Compiling http-body v0.4.6
Compiling parking_lot v0.12.1
Compiling num-integer v0.1.45
Compiling utf8parse v0.2.1
Compiling paste v1.0.14
Compiling safemem v0.3.3
Compiling heck v0.4.1
Compiling anyhow v1.0.75
Compiling base64 v0.21.5
Compiling either v1.9.0
Compiling syn v1.0.109
Compiling httpdate v1.0.3
Compiling tower-service v0.3.2
Compiling unicode-bidi v0.3.14
Compiling idna v0.5.0
Compiling hyper v0.14.28
Compiling strum_macros v0.25.3
Compiling line-wrap v0.1.1
error: failed to run custom build command for `openssl-sys v0.9.97`
Caused by:
process didn't exit successfully: `/tmp/cargo-installNPfWyO/release/build/openssl-sys-81eb1754cc439330/build-script-main` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR
X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR unset
cargo:rerun-if-env-changed=OPENSSL_NO_VENDOR
OPENSSL_NO_VENDOR unset
cargo:rerun-if-env-changed=CC_x86_64-unknown-linux-gnu
CC_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CC_x86_64_unknown_linux_gnu
CC_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CC
HOST_CC = None
cargo:rerun-if-env-changed=CC
CC = None
cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
CRATE_CC_NO_DEFAULTS = None
DEBUG = Some("true")
CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
cargo:rerun-if-env-changed=CFLAGS_x86_64-unknown-linux-gnu
CFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CFLAGS_x86_64_unknown_linux_gnu
CFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CFLAGS
HOST_CFLAGS = None
cargo:rerun-if-env-changed=CFLAGS
CFLAGS = None
cargo:rerun-if-env-changed=AR_x86_64-unknown-linux-gnu
AR_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=AR_x86_64_unknown_linux_gnu
AR_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_AR
HOST_AR = None
cargo:rerun-if-env-changed=AR
AR = None
cargo:rerun-if-env-changed=ARFLAGS_x86_64-unknown-linux-gnu
ARFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=ARFLAGS_x86_64_unknown_linux_gnu
ARFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_ARFLAGS
HOST_ARFLAGS = None
cargo:rerun-if-env-changed=ARFLAGS
ARFLAGS = None
cargo:rerun-if-env-changed=RANLIB_x86_64-unknown-linux-gnu
RANLIB_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=RANLIB_x86_64_unknown_linux_gnu
RANLIB_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_RANLIB
HOST_RANLIB = None
cargo:rerun-if-env-changed=RANLIB
RANLIB = None
cargo:rerun-if-env-changed=RANLIBFLAGS_x86_64-unknown-linux-gnu
RANLIBFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=RANLIBFLAGS_x86_64_unknown_linux_gnu
RANLIBFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_RANLIBFLAGS
HOST_RANLIBFLAGS = None
cargo:rerun-if-env-changed=RANLIBFLAGS
RANLIBFLAGS = None
running cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-d95e5824db77a078/out/openssl-build/build/src" && env -u CROSS_COMPILE AR="ar" CC="cc" RANLIB="ranlib" "perl" "./Configure" "--prefix=/tmp/cargo-installNPfWyO/release/build/openssl-sys-d95e5824db77a078/out/openssl-build/install" "--openssldir=/usr/local/ssl" "no-dso" "no-shared" "no-ssl3" "no-tests" "no-comp" "no-zlib" "no-zlib-dynamic" "--libdir=lib" "no-md2" "no-rc5" "no-weak-ssl-ciphers" "no-camellia" "no-idea" "no-seed" "linux-x86_64" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-gdwarf-4" "-fno-omit-frame-pointer" "-m64"
Configuring OpenSSL version 3.2.0 for target linux-x86_64
Using os-specific seed configuration
Created configdata.pm
Running configdata.pm
Created Makefile.in
Created Makefile
Created include/openssl/configuration.h
**********************************************************************
*** ***
*** OpenSSL has been successfully configured ***
*** ***
*** If you encounter a problem while building, please open an ***
*** issue on GitHub <https://github.com/openssl/openssl/issues> ***
*** and include the output from the following command: ***
*** ***
*** perl configdata.pm --dump ***
*** ***
*** (If you are new to OpenSSL, you might want to consult the ***
*** 'Troubleshooting' section in the INSTALL.md file first) ***
*** ***
**********************************************************************
running cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-d95e5824db77a078/out/openssl-build/build/src" && "make" "depend"
--- stderr
thread 'main' panicked at /home/matthisk/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openssl-src-300.2.1+3.2.0/src/lib.rs:611:9:
Error building OpenSSL dependencies:
Command: cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-d95e5824db77a078/out/openssl-build/build/src" && "make" "depend"
Failed to execute: No such file or directory (os error 2)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: failed to run custom build command for `openssl-sys v0.9.97`
Caused by:
process didn't exit successfully: `/tmp/cargo-installNPfWyO/release/build/openssl-sys-81eb1754cc439330/build-script-main` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR
X86_64_UNKNOWN_LINUX_GNU_OPENSSL_NO_VENDOR unset
cargo:rerun-if-env-changed=OPENSSL_NO_VENDOR
OPENSSL_NO_VENDOR unset
cargo:rerun-if-env-changed=CC_x86_64-unknown-linux-gnu
CC_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CC_x86_64_unknown_linux_gnu
CC_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CC
HOST_CC = None
cargo:rerun-if-env-changed=CC
CC = None
cargo:rerun-if-env-changed=CRATE_CC_NO_DEFAULTS
CRATE_CC_NO_DEFAULTS = None
DEBUG = Some("false")
CARGO_CFG_TARGET_FEATURE = Some("fxsr,sse,sse2")
cargo:rerun-if-env-changed=CFLAGS_x86_64-unknown-linux-gnu
CFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=CFLAGS_x86_64_unknown_linux_gnu
CFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_CFLAGS
HOST_CFLAGS = None
cargo:rerun-if-env-changed=CFLAGS
CFLAGS = None
cargo:rerun-if-env-changed=AR_x86_64-unknown-linux-gnu
AR_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=AR_x86_64_unknown_linux_gnu
AR_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_AR
HOST_AR = None
cargo:rerun-if-env-changed=AR
AR = None
cargo:rerun-if-env-changed=ARFLAGS_x86_64-unknown-linux-gnu
ARFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=ARFLAGS_x86_64_unknown_linux_gnu
ARFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_ARFLAGS
HOST_ARFLAGS = None
cargo:rerun-if-env-changed=ARFLAGS
ARFLAGS = None
cargo:rerun-if-env-changed=RANLIB_x86_64-unknown-linux-gnu
RANLIB_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=RANLIB_x86_64_unknown_linux_gnu
RANLIB_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_RANLIB
HOST_RANLIB = None
cargo:rerun-if-env-changed=RANLIB
RANLIB = None
cargo:rerun-if-env-changed=RANLIBFLAGS_x86_64-unknown-linux-gnu
RANLIBFLAGS_x86_64-unknown-linux-gnu = None
cargo:rerun-if-env-changed=RANLIBFLAGS_x86_64_unknown_linux_gnu
RANLIBFLAGS_x86_64_unknown_linux_gnu = None
cargo:rerun-if-env-changed=HOST_RANLIBFLAGS
HOST_RANLIBFLAGS = None
cargo:rerun-if-env-changed=RANLIBFLAGS
RANLIBFLAGS = None
running cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-eaf2aa7bde328763/out/openssl-build/build/src" && env -u CROSS_COMPILE AR="ar" CC="cc" RANLIB="ranlib" "perl" "./Configure" "--prefix=/tmp/cargo-installNPfWyO/release/build/openssl-sys-eaf2aa7bde328763/out/openssl-build/install" "--openssldir=/usr/local/ssl" "no-dso" "no-shared" "no-ssl3" "no-tests" "no-comp" "no-zlib" "no-zlib-dynamic" "--libdir=lib" "no-md2" "no-rc5" "no-weak-ssl-ciphers" "no-camellia" "no-idea" "no-seed" "linux-x86_64" "-O2" "-ffunction-sections" "-fdata-sections" "-fPIC" "-m64"
Configuring OpenSSL version 3.2.0 for target linux-x86_64
Using os-specific seed configuration
Created configdata.pm
Running configdata.pm
Created Makefile.in
Created Makefile
Created include/openssl/configuration.h
**********************************************************************
*** ***
*** OpenSSL has been successfully configured ***
*** ***
*** If you encounter a problem while building, please open an ***
*** issue on GitHub <https://github.com/openssl/openssl/issues> ***
*** and include the output from the following command: ***
*** ***
*** perl configdata.pm --dump ***
*** ***
*** (If you are new to OpenSSL, you might want to consult the ***
*** 'Troubleshooting' section in the INSTALL.md file first) ***
*** ***
**********************************************************************
running cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-eaf2aa7bde328763/out/openssl-build/build/src" && "make" "depend"
--- stderr
thread 'main' panicked at /home/matthisk/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openssl-src-300.2.1+3.2.0/src/lib.rs:611:9:
Error building OpenSSL dependencies:
Command: cd "/tmp/cargo-installNPfWyO/release/build/openssl-sys-eaf2aa7bde328763/out/openssl-build/build/src" && "make" "depend"
Failed to execute: No such file or directory (os error 2)
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
error: failed to compile `oatmeal v0.9.2`, intermediate artifacts can be found at `/tmp/cargo-installNPfWyO`.
To reuse those artifacts with a future compilation, set the environment variable `CARGO_TARGET_DIR` to that path.
Thanks for reporting!
For oatmeal --help failing, it feels like a NixOS thing. I have a similar repro in a Docker container that may be related to this. I'll dig a bit deeper to see if this is something I can fix in the binary distribution, or if I have to update the docs.
For openssl, this is a common NixOS problem. More details here.
SSL related issues should be fixed in v0.10.0
@matthis-k I got the application running again in a container, but I don't have an active NixOS installation to test in a real scenario. Would you mind giving this a run for me and confirming that Oatmeal starts, please?
nix-shell -p binutils stdenv curl gzip
mkdir test
cd test
curl -L https://github.com/dustinblackman/oatmeal/releases/download/v0.10.0/oatmeal_0.10.0_linux_amd64.tar.gz | tar -xz oatmeal
patchelf --set-interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)" ./oatmeal
./oatmeal --help
Thanks!
This works.
Thanks! I've updated my NUR packages, and should be available in the main repository in a couple hours.
| gharchive/issue | 2023-12-21T00:02:14 | 2025-04-01T06:38:28.027029 | {
"authors": [
"dustinblackman",
"matthis-k"
],
"repo": "dustinblackman/oatmeal",
"url": "https://github.com/dustinblackman/oatmeal/issues/29",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
470657068 | Create Documentation Website
Create Documentation Website ⚛️
This pull request updates the conflux repository to contain source code for the documentation website.
Pull Request List 🔥
[x] Install Gatsby
[x] Setup Gatsby configuration files
[x] Configure Node-Sass with global variables, styles, and mixins
[x] Install needed dependencies and plugins in Gatsby
[x] Make landing page for documentation site
[ ] Craft documentation and pull content from CMS such as Contentful
[ ] Complete accessibility features for superb user experience
[ ] Integrate markdown files for documentation along with code snippets
[ ] Implement fuzzy search in search bar
[ ] Update with About.html page
[ ] Finish out Contact.html page in Gatsby
[ ] Make custom 404 page
Closed in lieu of using a different setup for our documentation website.
| gharchive/pull-request | 2019-07-20T09:48:22 | 2025-04-01T06:38:28.033815 | {
"authors": [
"nwthomas"
],
"repo": "dustinmyers/react-conflux",
"url": "https://github.com/dustinmyers/react-conflux/pull/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
274107633 | RTSP source in detectnet-camera example
Hi everybody.
I want to get images from my Hikvision camera. RTSP link is: rtsp://192.168.196.243:554/Streaming/Channels/102
I have successfully used this pipeline in my OpenCV (3.3.1) app, but it does not work in the detectnet-camera example, which I've built on my x86_64 PC (CUDA 8.0, OpenCV 3.3.1, TensorRT version 2.1, build 2102).
I have added this line to code cloned from this repo:
ss<< "rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink ";
Those lines were in the output.
[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string:
rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 protocols=udp latency=0 ! decodebin ! videoconvert ! appsink name=mysink
detectnet-camera: successfully initialized video device
width: 1280
height: 720
depth: 24 (bpp)
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> typefind
[gstreamer] gstreamer changed state from NULL to READY ==> decodebin0
[gstreamer] gstreamer changed state from NULL to READY ==> rtspsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> typefind
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtspsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer msg progress ==> rtspsrc0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtspsrc0
[gstreamer] gstreamer msg progress ==> rtspsrc0
detectnet-camera: camera open for streaming
detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4*)imgRGBA, make_float2(0.0f, 255.0f), (float4*)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /home/aprentis/jetson-inference/detectnet-camera/detectnet-camera.cpp:247
[cuda] registered 14745600 byte openGL texture for interop access (1280x720)
What`s wrong? Thanks in advance!
As it says, 'detectnet-camera: failed to convert from NV12 to RGBA'
There is a conversion to RGBAf format from NV12 or RGB in gstCamera.cpp, in the ConvertRGBA function. Either modify the check for the onboard camera, or change your pipeline to use 'nvvidconv' to produce 'NV12' output instead of using videoconvert.
Hope this works.
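For reference, a minimal sketch of what such a pipeline string could look like when built in detectnet-camera / gstCamera.cpp. The camera URL is the one from this thread; the rtph264depay/h264parse/omxh264dec chain and the NV12 caps are assumptions for a Jetson-style setup and need to be adapted to your platform, stream codec, and camera:

#include <sstream>

// Hypothetical pipeline string: decode the RTSP stream and force NV12 output,
// so the existing NV12 -> RGBA conversion path in gstCamera can be used.
std::ostringstream ss;
ss << "rtspsrc location=rtsp://192.168.196.243:554/Streaming/Channels/102 latency=0 ! "
   << "rtph264depay ! h264parse ! omxh264dec ! "         // assumed H.264 stream and hardware decoder
   << "nvvidconv ! video/x-raw, format=(string)NV12 ! "  // nvvidconv converts into NV12 in system memory
   << "appsink name=mysink";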
I had the same issue and managed to fix it using @omaralvarez's Pull Request described here: https://github.com/dusty-nv/jetson-inference/issues/88
After taking in the pull request code, in detectnet-camera I replaced the line:
gstCamera* camera = gstCamera::Create(DEFAULT_CAMERA);
With:
gstPipeline* pipeline = gstPipeline::Create(
"rtspsrc location=rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov ! queue ! rtph264depay ! h264parse ! queue ! omxh264dec ! appsink name=mysink",
240,
160,
12
);
(Replacing the RTSP address, height & width accordingly + #include "gstPipeline.h" at the top)
Yes, that will work fine, because he gets rid of the onboard camera check and hence there is just one conversion method, RGBtoRGBAf, which is the one you should use.
Could you post the full code on how to use gstPipeline?
Dear,
is it possible to see the full code for this? I'm new to GStreamer and I've been looking for several days for a simple and clear example of how to play an RTSP stream from an IP camera using the imagenet-camera source code.
Your help is much appreciated.
The full code for gstPipeline is in the Pull Request:
https://github.com/dusty-nv/jetson-inference/pull/93/commits/2717e8914dad03116641247ed2dd9ebc88379d4c
Hi, thank you for the code,
but I have some issues when trying to compile
(the original jetson-inference compiles and works).
I'm running on a Jetson TX2 with JetPack 4.2.2.
Can you give me any direction on where to start looking for a solution?
Thank you in advance.
nvidia@tx2:~/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/build$ make -j2
[ 5%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaYUV-YV12.cu.o
[ 5%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/jetson-inference_generated_imageNet.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 7%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaFont.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 10%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaNormalize.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 12%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaOverlay.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/util/cuda/cudaOverlay.cu(29): warning: variable "thick" was declared but never referenced
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/util/cuda/cudaOverlay.cu(8): warning: function "eq_less" was declared but never referenced
:0:7: warning: ISO C++11 requires whitespace after the macro name
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/util/cuda/cudaOverlay.cu(29): warning: variable "thick" was declared but never referenced
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/util/cuda/cudaOverlay.cu(8): warning: function "eq_less" was declared but never referenced
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 15%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaRGB.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 17%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaResize.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 20%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaYUV-NV12.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
[ 22%] Building NVCC (Device) object CMakeFiles/jetson-inference.dir/util/cuda/jetson-inference_generated_cudaYUV-YUYV.cu.o
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
:0:7: warning: ISO C++11 requires whitespace after the macro name
Scanning dependencies of target jetson-inference
[ 25%] Building CXX object CMakeFiles/jetson-inference.dir/detectNet.cpp.o
[ 27%] Building CXX object CMakeFiles/jetson-inference.dir/imageNet.cpp.o
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:9:0,
from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:5:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:51:46: error: ‘vector’ in namespace ‘std’ does not name a template type
const char* input_blob, const std::vectorstd::string& output_blobs,
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:51:52: error: expected ‘,’ or ‘...’ before ‘<’ token
const char* input_blob, const std::vectorstd::string& output_blobs,
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:93:20: error: ‘vector’ in namespace ‘std’ does not name a template type
const std::vectorstd::string& outputs,
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:93:26: error: expected ‘,’ or ‘...’ before ‘<’ token
const std::vectorstd::string& outputs,
^
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:9:0,
from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:5:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:170:7: error: ‘vector’ in namespace ‘std’ does not name a template type
std::vector mOutputs;
^~~~~~
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:5:0:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:117:7: error: ‘vector’ in namespace ‘std’ does not name a template type
std::vectorstd::string mClassSynset; // 1000 class ID's (ie n01580077, n04325704)
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:118:7: error: ‘vector’ in namespace ‘std’ does not name a template type
std::vectorstd::string mClassDesc;
^~~~~~
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:5:0:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h: In member function ‘const char* imageNet::GetClassDesc(uint32_t) const’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:90:68: error: ‘mClassDesc’ was not declared in this scope
inline const char* GetClassDesc( uint32_t index ) const { return mClassDesc[index].c_str(); }
^~~~~~~~~~
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:9:0,
from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:5:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:51:46: error: ‘vector’ in namespace ‘std’ does not name a template type
const char* input_blob, const std::vectorstd::string& output_blobs,
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:51:52: error: expected ‘,’ or ‘...’ before ‘<’ token
const char* input_blob, const std::vectorstd::string& output_blobs,
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:93:20: error: ‘vector’ in namespace ‘std’ does not name a template type
const std::vectorstd::string& outputs,
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:93:26: error: expected ‘,’ or ‘...’ before ‘<’ token
const std::vectorstd::string& outputs,
^
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:9:0,
from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:5:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/tensorNet.h:170:7: error: ‘vector’ in namespace ‘std’ does not name a template type
std::vector mOutputs;
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:90:68: note: suggested alternative: ‘GetClassDesc’
inline const char* GetClassDesc( uint32_t index ) const { return mClassDesc[index].c_str(); }
^~~~~~~~~~
GetClassDesc
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h: In member function ‘const char* imageNet::GetClassSynset(uint32_t) const’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:95:70: error: ‘mClassSynset’ was not declared in this scope
inline const char* GetClassSynset( uint32_t index ) const { return mClassSynset[index].c_str(); }
^~~~~~~~~~~~
In file included from /home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:5:0:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h: In member function ‘uint32_t detectNet::GetMaxBoundingBoxes() const’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:122:56: error: ‘mOutputs’ was not declared in this scope
inline uint32_t GetMaxBoundingBoxes() const { return mOutputs[1].dims.w * mOutputs[1].dims.h * mOutputs[1].dims.c; }
^~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.h:95:70: note: suggested alternative: ‘GetClassSynset’
inline const char* GetClassSynset( uint32_t index ) const { return mClassSynset[index].c_str(); }
^~~~~~~~~~~~
GetClassSynset
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:122:56: note: suggested alternative: ‘puts’
inline uint32_t GetMaxBoundingBoxes() const { return mOutputs[1].dims.w * mOutputs[1].dims.h * mOutputs[1].dims.c; }
^~~~~~~~
puts
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h: In member function ‘uint32_t detectNet::GetNumClasses() const’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:127:51: error: ‘mOutputs’ was not declared in this scope
inline uint32_t GetNumClasses() const { return mOutputs[0].dims.c; }
^~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.h:127:51: note: suggested alternative: ‘puts’
inline uint32_t GetNumClasses() const { return mOutputs[0].dims.c; }
^~~~~~~~
puts
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp: In member function ‘bool imageNet::init(const char*, const char*, const char*, const char*, const char*, const char*, uint32_t)’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:128:19: error: ‘mOutputs’ was not declared in this scope
mOutputClasses = mOutputs[0].dims.c;
^~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp: In static member function ‘static detectNet* detectNet::Create(const char*, const char*, const char*, float, const char*, const char*, const char*, uint32_t)’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:56:7: error: ‘vector’ is not a member of ‘std’
std::vectorstd::string output_blobs;
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:128:19: note: suggested alternative: ‘output’
mOutputClasses = mOutputs[0].dims.c;
^~~~~~~~
output
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:56:25: error: expected primary-expression before ‘>’ token
std::vectorstd::string output_blobs;
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:130:36: error: ‘mClassSynset’ was not declared in this scope
if( !loadClassInfo(class_path) || mClassSynset.size() != mOutputClasses || mClassDesc.size() != mOutputClasses )
^~~~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:56:27: error: ‘output_blobs’ was not declared in this scope
std::vectorstd::string output_blobs;
^~~~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:130:36: note: suggested alternative: ‘GetClassSynset’
if( !loadClassInfo(class_path) || mClassSynset.size() != mOutputClasses || mClassDesc.size() != mOutputClasses )
^~~~~~~~~~~~
GetClassSynset
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:130:77: error: ‘mClassDesc’ was not declared in this scope
if( !loadClassInfo(class_path) || mClassSynset.size() != mOutputClasses || mClassDesc.size() != mOutputClasses )
^~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:56:27: note: suggested alternative: ‘input_blob’
std::vectorstd::string output_blobs;
^~~~~~~~~~~~
input_blob
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:130:77: note: suggested alternative: ‘GetClassDesc’
if( !loadClassInfo(class_path) || mClassSynset.size() != mOutputClasses || mClassDesc.size() != mOutputClasses )
^~~~~~~~~~
GetClassDesc
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp: In member function ‘bool imageNet::loadClassInfo(const char*)’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:231:4: error: ‘mClassSynset’ was not declared in this scope
mClassSynset.push_back(a);
^~~~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp: At global scope:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:29: error: variable or field ‘mergeRect’ declared void
static void mergeRect( std::vector& rects, const float6& rect )
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:29: error: ‘vector’ is not a member of ‘std’
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:42: error: expected primary-expression before ‘>’ token
static void mergeRect( std::vector& rects, const float6& rect )
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:45: error: ‘rects’ was not declared in this scope
static void mergeRect( std::vector& rects, const float6& rect )
^~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:45: note: suggested alternative: ‘gets’
static void mergeRect( std::vector& rects, const float6& rect )
^~~~~
gets
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:197:52: error: expected primary-expression before ‘const’
static void mergeRect( std::vector& rects, const float6& rect )
^~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp: In member function ‘bool detectNet::Detect(float*, uint32_t, uint32_t, float*, int*, float*)’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:247:43: error: ‘mOutputs’ was not declared in this scope
void* inferenceBuffers[] = { mInputCUDA, mOutputs[OUTPUT_CVG].CUDA, mOutputs[OUTPUT_BBOX].CUDA };
^~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:231:4: note: suggested alternative: ‘GetClassSynset’
mClassSynset.push_back(a);
^~~~~~~~~~~~
GetClassSynset
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:232:4: error: ‘mClassDesc’ was not declared in this scope
mClassDesc.push_back(b);
^~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:247:43: note: suggested alternative: ‘puts’
void* inferenceBuffers[] = { mInputCUDA, mOutputs[OUTPUT_CVG].CUDA, mOutputs[OUTPUT_BBOX].CUDA };
^~~~~~~~
puts
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:232:4: note: suggested alternative: ‘GetClassDesc’
mClassDesc.push_back(b);
^~~~~~~~~~
GetClassDesc
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:245:4: error: ‘mClassSynset’ was not declared in this scope
mClassSynset.push_back(a);
^~~~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:267:49: error: ‘class nvinfer1::Dims3’ has no member named ‘w’
const float cell_width = /width/ mInputDims.w / ow;
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:268:50: error: ‘class nvinfer1::Dims3’ has no member named ‘h’
const float cell_height = /height/ mInputDims.h / oh;
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:270:56: error: ‘class nvinfer1::Dims3’ has no member named ‘w’
const float scale_x = float(width) / float(mInputDims.w);
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:271:57: error: ‘class nvinfer1::Dims3’ has no member named ‘h’
const float scale_y = float(height) / float(mInputDims.h);
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:7: error: ‘vector’ is not a member of ‘std’
std::vector< std::vector > rects;
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:20: error: ‘vector’ is not a member of ‘std’
std::vector< std::vector > rects;
^~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:33: error: expected primary-expression before ‘>’ token
std::vector< std::vector > rects;
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:35: error: expected primary-expression before ‘>’ token
std::vector< std::vector > rects;
^
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:37: error: ‘rects’ was not declared in this scope
std::vector< std::vector > rects;
^~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:245:4: note: suggested alternative: ‘GetClassSynset’
mClassSynset.push_back(a);
^~~~~~~~~~~~
GetClassSynset
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:246:4: error: ‘mClassDesc’ was not declared in this scope
mClassDesc.push_back(str);
^~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:280:37: note: suggested alternative: ‘gets’
std::vector< std::vector > rects;
^~~~~
gets
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/detectNet.cpp:307:6: error: ‘mergeRect’ was not declared in this scope
mergeRect( rects[z], make_float6(x1, y1, x2, y2, coverage, z) );
^~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:246:4: note: suggested alternative: ‘GetClassDesc’
mClassDesc.push_back(str);
^~~~~~~~~~
GetClassDesc
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:252:56: error: ‘mClassSynset’ was not declared in this scope
printf("imageNet -- loaded %zu class info entries\n", mClassSynset.size());
^~~~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:252:56: note: suggested alternative: ‘GetClassSynset’
printf("imageNet -- loaded %zu class info entries\n", mClassSynset.size());
^~~~~~~~~~~~
GetClassSynset
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp: In member function ‘int imageNet::Classify(float*, uint32_t, uint32_t, float*)’:
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:286:43: error: ‘mOutputs’ was not declared in this scope
void* inferenceBuffers[] = { mInputCUDA, mOutputs[0].CUDA };
^~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:286:43: note: suggested alternative: ‘puts’
void* inferenceBuffers[] = { mInputCUDA, mOutputs[0].CUDA };
^~~~~~~~
puts
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:303:49: error: ‘mClassDesc’ was not declared in this scope
printf("class %04zu - %f (%s)\n", n, value, mClassDesc[n].c_str());
^~~~~~~~~~
/home/nvidia/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/imageNet.cpp:303:49: note: suggested alternative: ‘GetClassDesc’
printf("class %04zu - %f (%s)\n", n, value, mClassDesc[n].c_str());
^~~~~~~~~~
GetClassDesc
CMakeFiles/jetson-inference.dir/build.make:125: recipe for target 'CMakeFiles/jetson-inference.dir/detectNet.cpp.o' failed
make[2]: *** [CMakeFiles/jetson-inference.dir/detectNet.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
CMakeFiles/jetson-inference.dir/build.make:149: recipe for target 'CMakeFiles/jetson-inference.dir/imageNet.cpp.o' failed
make[2]: *** [CMakeFiles/jetson-inference.dir/imageNet.cpp.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/jetson-inference.dir/all' failed
make[1]: *** [CMakeFiles/jetson-inference.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2
nvidia@tx2:~/jetson-inference-2717e8914dad03116641247ed2dd9ebc88379d4c/build$
Hi,
I managed to get an RTSP stream from an IP camera with a pipeline.
But I don't get how to make it work with the gstCamera that is used in the jetson-inference example.
How should I make the pipeline the video source of the gstCamera?
I'm really sorry if I ask stupid questions, but this is completely new to me.
When I start the program now, I have my IP camera's live image on top of the onboard camera of my Jetson...
Complete C++ code:
#include "gstCamera.h"
#include "glDisplay.h"
#include "detectNet.h"
#include "commandLine.h"
#include <signal.h>
bool signal_recieved = false;
void sig_handler(int signo)
{
if( signo == SIGINT )
{
printf("received SIGINT\n");
signal_recieved = true;
}
}
int usage()
{
printf("usage: detectnet-camera [-h] [--network NETWORK] [--threshold THRESHOLD]\n");
printf(" [--camera CAMERA] [--width WIDTH] [--height HEIGHT]\n\n");
printf("Locate objects in a live camera stream using an object detection DNN.\n\n");
printf("optional arguments:\n");
printf(" --help show this help message and exit\n");
printf(" --network NETWORK pre-trained model to load (see below for options)\n");
printf(" --overlay OVERLAY detection overlay flags (e.g. --overlay=box,labels,conf)\n");
printf(" valid combinations are: 'box', 'labels', 'conf', 'none'\n");
printf(" --alpha ALPHA overlay alpha blending value, range 0-255 (default: 120)\n");
printf(" --camera CAMERA index of the MIPI CSI camera to use (e.g. CSI camera 0),\n");
printf(" or for VL42 cameras the /dev/video device to use.\n");
printf(" by default, MIPI CSI camera 0 will be used.\n");
printf(" --width WIDTH desired width of camera stream (default is 1280 pixels)\n");
printf(" --height HEIGHT desired height of camera stream (default is 720 pixels)\n");
printf(" --threshold VALUE minimum threshold for detection (default is 0.5)\n\n");
printf("%s\n", detectNet::Usage());
return 0;
}
int main( int argc, char** argv )
{
/*
* parse command line
*/
commandLine cmdLine(argc, argv);
if( cmdLine.GetFlag("help") )
return usage();
/*
* attach signal handler
*/
if( signal(SIGINT, sig_handler) == SIG_ERR )
printf("\ncan't catch SIGINT\n");
/**Added by me for rtsp streaming */
GstElement *pipeline;
GstBus *bus;
GstMessage *msg;
/* Initialize GStreamer */
gst_init (&argc, &argv);
/* Build the pipeline */
pipeline = gst_parse_launch ("playbin uri=rtsp://admin:admin@192.168.1.18:554/cam/realmonitor?channel=1&subtype=1&unicast=true&proto=Onvif",NULL);
/* Start playing IP Camera */
gst_element_set_state (pipeline, GST_STATE_PLAYING);
/*
How to set pipeline as gstCamera source instead of onboard cam of TX2?
*/
/* create the camera device */
gstCamera* camera = gstCamera::Create(cmdLine.GetInt("width", gstCamera::DefaultWidth),cmdLine.GetInt("height", gstCamera::DefaultHeight),0);
if( !camera )
{
printf("\ndetectnet-camera: failed to initialize camera device\n");
return 0;
}
printf("\ndetectnet-camera: successfully initialized camera device\n");
printf(" width: %u\n", camera->GetWidth());
printf(" height: %u\n", camera->GetHeight());
printf(" depth: %u (bpp)\n\n", camera->GetPixelDepth());
/*
* create detection network
*/
detectNet* net = detectNet::Create(argc, argv);
if( !net )
{
printf("detectnet-camera: failed to load detectNet model\n");
return 0;
}
// parse overlay flags
const uint32_t overlayFlags = detectNet::OverlayFlagsFromStr(cmdLine.GetString("overlay", "box,labels,conf"));
/*
* create openGL window
*/
glDisplay* display = glDisplay::Create();
if( !display )
printf("detectnet-camera: failed to create openGL display\n");
/*
* start streaming
*/
if( !camera->Open() )
{
printf("detectnet-camera: failed to open camera for streaming\n");
return 0;
}
printf("detectnet-camera: camera open for streaming\n");
/*
* processing loop
*/
float confidence = 0.0f;
while( !signal_recieved )
{
// capture RGBA image
float* imgRGBA = NULL;
if( !camera->CaptureRGBA(&imgRGBA, 1000) )
printf("detectnet-camera: failed to capture RGBA image from camera\n");
// detect objects in the frame
detectNet::Detection* detections = NULL;
const int numDetections = net->Detect(imgRGBA, camera->GetWidth(), camera->GetHeight(), &detections, overlayFlags);
if( numDetections > 0 )
{
printf("%i objects detected\n", numDetections);
for( int n=0; n < numDetections; n++ )
{
printf("detected obj %i class #%u (%s) confidence=%f\n", n, detections[n].ClassID, net->GetClassDesc(detections[n].ClassID), detections[n].Confidence);
printf("bounding box %i (%f, %f) (%f, %f) w=%f h=%f\n", n, detections[n].Left, detections[n].Top, detections[n].Right, detections[n].Bottom, detections[n].Width(), detections[n].Height());
}
}
// update display
if( display != NULL )
{
// render the image
display->RenderOnce(imgRGBA, camera->GetWidth(), camera->GetHeight());
// update the status bar
char str[256];
sprintf(str, "TensorRT %i.%i.%i | %s | Network %.0f FPS", NV_TENSORRT_MAJOR, NV_TENSORRT_MINOR, NV_TENSORRT_PATCH, precisionTypeToStr(net->GetPrecision()), net->GetNetworkFPS());
display->SetTitle(str);
// check if the user quit
if( display->IsClosed() )
signal_recieved = true;
}
// print out timing info
net->PrintProfilerTimes();
}
/*
* destroy resources
*/
printf("detectnet-camera: shutting down...\n");
SAFE_DELETE(camera);
SAFE_DELETE(display);
SAFE_DELETE(net);
printf("detectnet-camera: shutdown complete.\n");
return 0;
}
I've updated the jetson-inference code with alinabee's revisions. When I run sudo make I get the following error:
QMutex: No such file or directory
#include <QMutex>
compilation terminated.
Any idea how to get this dependency installed on my Jetson Nano?
Thanks
I also get this:
QMutex: No such file or directory
#include <QMutex>
compilation terminated.
Any ideas gratefully received!
Here is what I did to get my external Hikvision IP camera feed working with detectnet example.
## Install v4l2loopback utilities and kernel driver
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms
## Install ffmpeg
$ sudo apt install ffmpeg
## Load the v4l2loopback driver. This will create /dev/video0 device
$ sudo modprobe v4l2loopback
## Using ffmpeg pull rtsp stream from camera and push it to the video device created by
## v4l2loopback kernel module.
$ ffmpeg -thread_queue_size 512 \
-i rtsp://camuser:campassword@192.168.1.13/Streaming/channels/502 \
-vcodec rawvideo -vf scale=640:480 -f v4l2 \
-threads 0 -pix_fmt yuyv422 /dev/video0
## Now you can run the detectnet or imagenet example against the video device
## make sure to match height and width specified in ffmpeg command here.
## I was getting gstreamer error when the size is not matched.
$ detectnet-camera.py \
--network=ped-100 \
--width=640 --height=360 \
--camera=/dev/video0 \
--threshold=1.8 --overlay=box
I tried that also... I've been trying to get an IP camera to work for quite a while with no luck. I followed your patch and got a gstreamer error below. The stream runs great in a browser but nothing on my TX2. Any ideas? Any hints? Best,
sudo modprobe v4l2loopback
ffmpeg -thread_queue_size 512 -i http://192.168.1.9:8888/ir.mjpeg -vcodec rawvideo -vf scale=320:240 -f v4l2 -threads 0 -pix_fmt yuv420p /dev/video1
./detectnet-camera --width=320 --height=240 --camera=/dev/video1 --overlay=box
[OpenGL] glDisplay -- X screen 0 resolution: 1280x1024
[OpenGL] glDisplay -- display device initialized
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera onEOS
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
detectnet-camera: camera open for streaming
Here is what I did to get my external Hikvision IP camera feed working with detectnet example.
## Install v4l2loopback utilities and kernel driver
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms
## Install ffmpeg
$ sudo apt install ffmpeg
## Load the v4l2loopback driver. This will create /dev/video0 device
$ sudo modprobe v4l2loopback
## Using ffmpeg pull rtsp stream from camera and push it to the video device created by
## v4l2loopback kernel module.
$ ffmpeg -thread_queue_size 512 \
-i rtsp://camuser:campassword@192.168.1.13/Streaming/channels/502 \
-vcodec rawvideo -vf scale=640:480 -f v4l2 \
-threads 0 -pix_fmt yuyv422 /dev/video0
## Now you can run the detectnet or imagenet example against the video device
## make sure to match height and width specified in ffmpeg command here.
## I was getting gstreamer error when the size is not matched.
$ detectnet-camera.py \
--network=ped-100 \
--width=640 --height=360 \
--camera=/dev/video0 \
--threshold=1.8 --overlay=box
I use the stream from my Raspberry Pi running MotionEyeOS using this method. Thanks for the solution. I did tweak the ffmpeg command line slightly.
$ ffmpeg -thread_queue_size 512 -i http://192.168.0.142:8081/ -vcodec rawvideo -vf scale=1280:720 -f v4l2 -threads 0 -pix_fmt yuyv422 /dev/video2
$ detectnet-camera.py
--network=ped-100
--width=1280 --height=720
--camera=/dev/video2
--threshold=1.8 --overlay=box
Here is what I did to get my external Hikvision IP camera feed working with detectnet example.
## Install v4l2loopback utilities and kernel driver
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms
## Install ffmpeg
$ sudo apt install ffmpeg
## Load the v4l2loopback driver. This will create /dev/video0 device
$ sudo modprobe v4l2loopback
## Using ffmpeg pull rtsp stream from camera and push it to the video device created by
## v4l2loopback kernel module.
$ ffmpeg -thread_queue_size 512 \
-i rtsp://camuser:campassword@192.168.1.13/Streaming/channels/502 \
-vcodec rawvideo -vf scale=640:480 -f v4l2 \
-threads 0 -pix_fmt yuyv422 /dev/video0
## Now you can run the detectnet or imagenet example against the video device
## make sure to match height and width specified in ffmpeg command here.
## I was getting gstreamer error when the size is not matched.
$ detectnet-camera.py \
--network=ped-100 \
--width=640 --height=360 \
--camera=/dev/video0 \
--threshold=1.8 --overlay=box
That worked fine for me, thanks! But I had some issues that were solved by wrapping the RTSP URL in quotes ("rtsp://..."). Here is a video of everything working together: https://youtu.be/PLBffle0CcQ
Hello, how can I put rtsp in "videoOutput"? I need to stream detectnet from the Jetson Nano over LAN via RTSP.
Hi @niyazFattahov, jetson-inference/jetson-utils doesn't support RTSP output, as it requires support for special server code. Otherwise I would add it if it were simple.
Note that DeepStream supports RTSP output and has support for the RTSP server if you need that.
What about this: https://github.com/GStreamer/gst-rtsp-server/blob/1.14.5/examples/test-launch.c
can it be used somehow?
I tested rtsp only from csi camera with gstreamer like this:
./test-launch "videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96"
(https://forums.developer.nvidia.com/t/jetson-nano-faq/82953)
thank you.
In theory, yes, some similar code / dependencies would need to be integrated into the videoOutput class in order to support RTSP output.
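For illustration only, here is a minimal sketch of the kind of gst-rtsp-server code that the linked test-launch example wraps and that would have to be integrated into videoOutput. The launch pipeline, mount point, and default port are assumptions taken from the test-launch command above; this is not an existing jetson-utils API:

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char* argv[])
{
    gst_init(&argc, &argv);
    GMainLoop* loop = g_main_loop_new(NULL, FALSE);

    // Create the RTSP server and a factory that runs a gst-launch style pipeline for each client.
    GstRTSPServer* server = gst_rtsp_server_new();
    GstRTSPMediaFactory* factory = gst_rtsp_media_factory_new();
    gst_rtsp_media_factory_set_launch(factory,
        "( videotestsrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);

    // Mount the stream, making it reachable at rtsp://<jetson-ip>:8554/test (8554 is the default port).
    GstRTSPMountPoints* mounts = gst_rtsp_server_get_mount_points(server);
    gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, NULL);
    g_main_loop_run(loop);
    return 0;
}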
(## Install v4l2loopback utilities and kernel driver
$ sudo apt install v4l2loopback-utils v4l2loopback-dkms
Install ffmpeg
$ sudo apt install ffmpeg
Load the v4l2loopback driver. This will create /dev/video0 device
$ sudo modprobe v4l2loopback
Using ffmpeg pull rtsp stream from camera and push it to the video device created by
v4l2loopback kernel module.
$ ffmpeg -thread_queue_size 512
-i rtsp://camuser:campassword@192.168.1.13/Streaming/channels/502
-vcodec rawvideo -vf scale=640:480 -f v4l2
-threads 0 -pix_fmt yuyv422 /dev/video0
Now you can run the detectnet or imagenet example against the video device
make sure to match height and width specified in ffmpeg command here.
I was getting gstreamer error when the size is not matched.
$ detectnet-camera.py
--network=ped-100
--width=640 --height=360
--camera=/dev/video0
--threshold=1.8 --overlay=box #)
@neildotwilliams @dusty-nv
I've tried this on Jetson TX2 but I got this error
(aititx2-2@aititx22-desktop:~$ ffmpeg -thread_queue_size 2631 -i "rtsp://aititx2-2@aititx22-desktop@192.168.10.76:1935/profile" -vcodec rawvideo -vf scale=1280:720 -f v4l2 -threads 0 -pix_fmt yuyv422 /dev/video0
ffmpeg version 3.4.8-0ubuntu0.2 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 7 (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/usr --extra-version=0ubuntu0.2 --toolchain=hardened --libdir=/usr/lib/aarch64-linux-gnu --incdir=/usr/include/aarch64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 2 packets
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 2 packets
[rtsp @ 0x559b6b26b0] max delay reached. need to consume packet
[rtsp @ 0x559b6b26b0] RTP: missed 1 packets
Input #0, rtsp, from 'rtsp://aititx2-2@aititx22-desktop@192.168.10.76:1935/profile':
Metadata:
title : Unnamed
comment : N/A
Duration: N/A, start: 0.000000, bitrate: N/A
Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 720x1280, 90k tbr, 90k tbn, 180k tbc
Stream #0:1: Audio: aac (LC), 32000 Hz, stereo, fltp
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> rawvideo (native))
Press [q] to stop, [?] for help
[v4l2 @ 0x559b7204e0] Frame rate very high for a muxer not efficiently supporting it.
Please consider specifying a lower framerate, a different muxer or -vsync 2
[v4l2 @ 0x559b7204e0] ioctl(VIDIOC_G_FMT): Invalid argument
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:0 --
Conversion failed!
)
Please help! I use RTSP to connect to my Android phone camera.
| gharchive/issue | 2017-11-15T10:34:05 | 2025-04-01T06:38:28.151334 | {
"authors": [
"LSAMIJN",
"alinabee",
"aprentis",
"bkanaki",
"dusty-nv",
"e-mily",
"engineer1982",
"flurpo",
"jonwilliams84",
"kanakiyab",
"linusali",
"neildotwilliams",
"nikever",
"niyazFattahov",
"sms720"
],
"repo": "dusty-nv/jetson-inference",
"url": "https://github.com/dusty-nv/jetson-inference/issues/160",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
727932005 | Does it work with latest Stencil v1?
Hello @dutscher and thank you so much for creating this package! :heart:
I see that it supports Stencil v2 but our organization can't quite move to v2 just yet, so we're on the latest version of v1. Would this package work with the latest v1?
Thanks for any information.
Storybook runs alone and Stencil is connected via a proxy, so I promise you that your "old" Stencil will work.
Give it a try: clone the repo and lower the Stencil version in package.json. The sample component and SCSS should also run under the older version.
Ah okay, that's great. I'll give it a try!
please let me know if it really works :)
| gharchive/issue | 2020-10-23T05:45:53 | 2025-04-01T06:38:28.170100 | {
"authors": [
"dutscher",
"mkay581"
],
"repo": "dutscher/stencil-storybook",
"url": "https://github.com/dutscher/stencil-storybook/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1282894004 | [Feature Request] Set output device
It would be pretty awesome if deezer-enhanced would provide the possibility to select the output device. I have the GoXLR, which has 5 output channels, and every time the audio starts again I have to go to pavucontrol and select the music channel. (Default is System.)
I see, sounds great, I'll look into it!
This issue is probably upstream.
https://github.com/electron/electron/issues/27581
https://github.com/electron/electron/issues/7470
I'm closing this because the error never occurred again. I don't know what it was, but it seems to be gone now.
| gharchive/issue | 2022-06-23T20:19:22 | 2025-04-01T06:38:28.172332 | {
"authors": [
"duzda",
"lm41"
],
"repo": "duzda/deezer-enhanced",
"url": "https://github.com/duzda/deezer-enhanced/issues/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
179782777 | A question about dva component state
在使用dva过程中,如果在一个列表页面点击添加按钮,弹出一个modal窗口,窗口是否显示保存在model的state中,如果在model里的控件例如 ant.design的穿梭框(Transfer)需要靠model里state的值改变选中结果,这样每次选中改变后就会调用 reducers ,这样整个modal窗口就会重新弹出。 如何解决呢
应该是使用方式不对,贴下代码
全部数据走 model,不要走 component 的 state 。混用会出现一些奇怪的问题,除非你很明白整套机制。
@sorrycc 数据走的就是model,我把代码贴上来
弹出的modal 组件
`import React, { PropTypes } from 'react';
import { Form, Input, Modal, InputNumber, Select, Transfer } from 'antd';
import { connect } from 'dva';
const FormItem = Form.Item;
const Option = Select.Option;
const formItemLayout = {
labelCol: {
span: 6
},
wrapperCol: {
span: 14
}
};
const formItemLayout2 = {
labelCol: {
span: 8
},
wrapperCol: {
span: 12
}
};
var SetSpotsModal = ({cardtype, dispatch, form, type, item = {}, onOk, onCancel}) => {
const { getFieldProps, validateFields, getFieldsValue } = form;
const modalOpts = {
title: cardtype.title,
visible : cardtype.spotsVisible,
maskClosable: false,
onOk: handleOk,
onCancel,
};
let allSpots=[];
if(cardtype.allSpots!=null&&cardtype.allSpots.length>0){
for(var i=0;i<cardtype.allSpots.length;i++){
allSpots.push(<Option>{cardtype.allSpots[i].name}</Option>);
}
}
function handleOk() {
form.validateFields((errors) => {
if (errors) {
return;
}
const data = { ...form.getFieldsValue(), cardTypeId: item.cardTypeId };
onOk(data);
});
}
function handleChange(targetKeys, direction, moveKeys) {
dispatch({
type: 'card/type/tranfser/success',
payload: {
targetKeys: targetKeys,
},
});
}
return (
<Modal {...modalOpts}>
<FormItem
label="卡类型:"
{...formItemLayout}
hasFeedback
>
{cardtype.cardTypeName}
<FormItem
label="景区:"
{...formItemLayout}
hasFeedback
>
<Select showSearch
placeholder="请选择景区" multiple
{...getFieldProps('scenicIds',{rules: [{ required: true, type: 'array',message: '请选择景区'}],initialValue: item['scenicIds']==null?[]:item['scenicIds']})}
>
{allSpots}
<Transfer
dataSource={cardtype.mockData}
targetKeys={cardtype.targetKeys}
onChange={handleChange}
render={item => item.title}
/>
</Form>
</Modal>
);
}
function mapStateToProps(state) {
return { cardtype: state.cardtype };
}
SetSpotsModal = Form.create()(SetSpotsModal);
export default connect(mapStateToProps)(SetSpotsModal);
`
model
`import { call, put } from 'dva/effects';
import { hashHistory } from 'dva/router';
import { query, create, del, info, edit,getScenicspots,spotsinfo,setCardTtypeSpots} from '../services/cardtype';
import { message } from 'antd';
export default {
namespace: 'cardtype',
state: {
list: [],
tableLoading: false,
currentItem: {},
modalType: 'create',
total: null,
title: '新增卡分类',
modalVisible : false,
spotsVisible:false,
currentSize: 10,
current: 1,
allSpots:[],
cardTypeName:'',
targetKeys: [],
mockData:[],
},
subscriptions: [
function(dispatch) {
hashHistory.listen(location => {
if (location.pathname === '/card/type') {
dispatch.dispatch({
type: 'card/type/tranfser',
});
dispatch.dispatch({
type: 'card/type/query',
payload: {'name':''},
});
dispatch.dispatch({
type: 'card/type/allSpots',
payload: {'name':''},
});
}
});
},
],
effects: {
*'card/type/tranfser' {
const targetKeys = [];
const mockData = [];
for (let i = 0; i < 20; i++) {
const data = {
key: i,
title: `内容${i + 1}`,
description: `内容${i + 1}的描述`,
chosen: Math.random() * 2 > 1,
};
if (data.chosen) {
targetKeys.push(data.key);
}
mockData.push(data);
}
yield put({
type: 'card/type/tranfser/success',
payload: {
targetKeys: targetKeys,
mockData: mockData,
},
});
},
// Query card categories
*['card/type/query']({ payload }) {
yield put({ type: 'card/type/showLoading' });
const result = yield call(query, payload);
if (result.body && result.body.success) {
yield put({
type: 'card/type/query/success',
payload: {
list: result.body.page.list,
total: result.body.page.total,
currentSize: result.body.page.pageSize,
current: result.body.page.pageNum,
},
});
}
},
// Add a card category
*['card/type/create']({ payload }) {
yield put({ type: 'card/type/hideModal' });
yield put({ type: 'card/type/showLoading' });
const result = yield call(create, payload);
if (result.body && result.body.success) {
message.success("添加卡分类成功");
const result = yield call(query, {name:''});
if (result && result.body.success) {
yield put({
type: 'card/type/query/success',
payload: {
list: result.body.page.list,
total: result.body.page.total,
currentSize: result.body.page.pageSize,
current: result.body.page.pageNum,
},
});
}
}else if (result.body && !result.body.success) {
message.success(result.body.errorMsg);
}
yield put({ type: 'card/type/hideLoading' });
},
// Delete user
*['card/type/del']({ payload }) {
yield put({ type: 'card/type/showLoading' });
const result = yield call(del, payload);
if (result.body && result.body.success) {
message.success("删除卡分类成功");
const result = yield call(query, {name:''});
if (result.body && result.body.success) {
yield put({
type: 'card/type/query/success',
payload: {
list: result.body.page.list,
total: result.body.page.total,
currentSize: result.body.page.pageSize,
current: result.body.page.pageNum,
},
});
}
}
yield put({ type: 'card/type/hideLoading' });
},
// Get card category info
*['card/type/info']({ payload }) {
const result = yield call(info, payload.id);
if (result.body && result.body.success) {
yield put({
type: 'card/type/showModal',
payload: {
currentItem: result.body.data,
modalType: 'edit',
},
});
}
},
// Get all scenic spots
*'card/type/allSpots' {
const result = yield call(getScenicspots);
if (result.body && result.body.success) {
yield put({
type: 'card/type/allSpots/success',
payload: {
allSpots:result.body.data,
},
});
}
},
// Set scenic spots for the card type
*['card/type/spotsinfo']({ payload }) {
const result = yield call(spotsinfo, payload.id);
if (result.body && result.body.success) {
yield put({
type: 'card/type/showSpotsModal',
payload: {
currentItem: result.body.data,
title:'关联景区',
cardTypeName:payload.name,
},
});
}
},
// Update card category info
*['card/type/setSpots']({ payload }) {
const result = yield call(setCardTtypeSpots, payload);
if (result.body && result.body.success) {
yield put({ type: 'card/type/hideSpotsModal' });
yield put({ type: 'card/type/showLoading' });
message.success("设置景区成功");
const result = yield call(query, {name:''});
if (result.body && result.body.success) {
yield put({ type: 'card/type/hideLoading' });
yield put({
type: 'card/type/query/success',
payload: {
list: result.body.page.list,
total: result.body.page.total,
currentSize: result.body.page.pageSize,
current: result.body.page.pageNum,
},
});
}
}else if(result.body && !result.body.success) {
message.success(result.body.errorMsg);
}
},
// Update card category info
*['card/type/edit']({ payload }) {
yield put({ type: 'card/type/hideModal' });
yield put({ type: 'card/type/showLoading' });
const result = yield call(edit, payload);
if (result.body && result.body.success) {
message.success("编辑卡分类成功");
const result = yield call(query, {name:''});
if (result.body && result.body.success) {
yield put({
type: 'card/type/query/success',
payload: {
list: result.body.page.list,
total: result.body.page.total,
currentSize: result.body.page.pageSize,
current: result.body.page.pageNum,
},
});
}
}else if(result.body && !result.body.success) {
message.success(result.body.errorMsg);
}
yield put({ type: 'card/type/hideLoading' });
},
},
reducers: {
['card/type/tranfser/success'](state, action) {
return { ...state, ...action.payload};
},
['card/type/showLoading'](state, action) {
return { ...state, tableLoading: true };
},
['card/type/hideLoading'](state, action) {
return { ...state, tableLoading: false };
},
['card/type/query/success'](state, action) {
return { ...state, ...action.payload, tableLoading: false};
},
['card/type/showModal'](state, action) {
return { ...state, ...action.payload, modalVisible: true};
},
['card/type/hideModal'](state, action) {
return { ...state, modalVisible: false};
},
['card/type/showSpotsModal'](state, action) {
return { ...state, ...action.payload, spotsVisible: true};
},
['card/type/hideSpotsModal'](state, action) {
return { ...state, spotsVisible: false};
},
['card/type/allSpots/success'](state, action) {
return { ...state, ...action.payload};
},
},
}
`
@sorrycc I found the problem.
Following your approach:
// Solve the Form.create initialValue problem // Create a brand-new component every time instead of diffing // If you use redux, see http://react-component.github.io/form/examples/redux.html const UserModalGen = () => <UserModal {...userModalProps} />;
Creating a brand-new modal component every time means that whenever the model data is updated the component is redrawn, which causes the modal to pop up again.
Switching to manual control solved it.
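For reference, a minimal sketch of that kind of manual control: the modal component stays mounted and only its visible prop is driven by the model state, so a state update re-renders it in place instead of recreating it. It reuses names from the code pasted above and is illustrative only, not the actual fix applied.

```js
// Illustrative sketch: visibility driven by model state, component mounted once
import React from 'react';
import { connect } from 'dva';
import { Modal } from 'antd';

const SetSpotsModal = ({ dispatch, cardtype }) => (
  <Modal
    title={cardtype.title}
    visible={cardtype.spotsVisible} // re-rendered in place when state changes
    onCancel={() => dispatch({ type: 'card/type/hideSpotsModal' })}
  >
    {/* Transfer / form fields go here, fed from the cardtype state */}
  </Modal>
);

export default connect(({ cardtype }) => ({ cardtype }))(SetSpotsModal);
```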
Taking this issue as a chance to ask a question about components: User.jsx in the user-dashboard demo has this snippet:
// Solve the Form.create initialValue problem
// Create a brand-new component every time instead of diffing
// If you use redux, see http://react-component.github.io/form/examples/redux.html
const UserModalGen = () =>
<UserModal {...userModalProps} />;
return (
<MainLayout location={location}>
<div className={styles.normal}>
<UserSearch {...userSearchProps} />
<UserList {...userListProps} />
<UserModalGen />
</div>
</MainLayout>
);
Why does only the UserModal component, out of the three child components imported in User.jsx, need to be re-wrapped like this, while UserSearch does not? The comment says a brand-new component is generated every time; why is that? I'd appreciate some guidance, thanks a lot.
@wonyun
// Solve the Form.create initialValue problem
// Create a brand-new component every time instead of diffing
// If you use redux, see http://react-component.github.io/form/examples/redux.html
antd 2.0.0 no longer has this problem.
| gharchive/issue | 2016-09-28T14:06:08 | 2025-04-01T06:38:28.205858 | {
"authors": [
"codering",
"nikogu",
"sorrycc",
"wonyun",
"yrcy0418"
],
"repo": "dvajs/dva",
"url": "https://github.com/dvajs/dva/issues/112",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1209104738 | can we make the structure of notion the same as zotero collections?
Right now, all items are in the same folder in Notero. Can we make the structure of Notion the same as the Zotero collections?
(The same item can exist in different folders)
Hi @fredericky123, thanks for the feedback!
There was some discussion around this topic in #6, and the decision was made to sync to a single Notion database for now. To achieve something similar to what you're asking, the idea we had was to sync a Collection property into Notion as described in #30. Then you could make as many different views of that database as you wanted based on the Collection property or any combination of other properties. This felt like the more "Notion way" of doing things.
Do you think having a Collection property in Notion as described in #30 would meet your needs?
Thanks @dvanoni! This is exactly what I want. Has it already been implemented? How do I configure it?
By the way, when enabling collections to sync, can we have a toggle to sync all collections? Right now we have to choose each single collection; since I have a lot of collections, this seems time-consuming.
@fredericky123, unfortunately this functionality isn't built yet. Keep an eye on #30, and I'll be sure to post an update when it's ready.
I pulled out your second question into its own issue so we can track that separately: #63
I'm going to close this issue as I believe we should have these points covered by #30 and #63. Feel free to add your input on those!
| gharchive/issue | 2022-04-20T03:09:22 | 2025-04-01T06:38:28.210213 | {
"authors": [
"dvanoni",
"fredericky123"
],
"repo": "dvanoni/notero",
"url": "https://github.com/dvanoni/notero/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2304432637 | Something wrong with the torch version
I followed the steps in the README but I encountered the following errors during SFT.
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.2.0), only 1.0.0 is known to be compatible
[2024-05-19 04:59:05,039] [INFO] [comm.py:637:init_distributed] cdb=None
[2024-05-19 04:59:05,082] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-05-19 04:59:05,108] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.2.0), only 1.0.0 is known to be compatible
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.3
[WARNING] using untested triton version (2.2.0), only 1.0.0 is known to be compatible
[rank5]: Traceback (most recent call last):
[rank5]: File "/home/zhangyan/Dynathink/LongLoRA/fine-tune.py", line 211, in
[rank5]: train()
[rank5]: File "/home/zhangyan/Dynathink/LongLoRA/fine-tune.py", line 106, in train
[rank5]: model_args, training_args = parser.parse_args_into_dataclasses()
[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank5]: File "/home/zhangyan/anaconda3/envs/longlora/lib/python3.12/site-packages/transformers/hf_argparser.py", line 347, in parse_args_into_dataclasses
[rank5]: raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
[rank5]: ValueError: Some specified arguments are not used by the HfArgumentParser: [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ']
In requirements.txt the version of torch is >=2.0.0, but the errors show that I need to install torch<2.0.0. How can I deal with the problem?
the same problem
| gharchive/issue | 2024-05-19T05:06:25 | 2025-04-01T06:38:28.223851 | {
"authors": [
"LebronXierunfeng",
"dian1414"
],
"repo": "dvlab-research/LongLoRA",
"url": "https://github.com/dvlab-research/LongLoRA/issues/185",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2616536081 | Chore: Improved the overall cart UI including empty cart
Issue # 15
I divided the main section into an 80/20 split.
Implemented bootstrap card styling to make it more attractive.
Iterated the same items in the pricing module to reflect the pricing more clearly.
PREVIOUSLY
NOW
(1/2)
(2/2)
Please have a look at the PR
Great work @AqibAliMughal.
| gharchive/pull-request | 2024-10-27T11:41:07 | 2025-04-01T06:38:28.255755 | {
"authors": [
"AqibAliMughal",
"dwip708"
],
"repo": "dwip708/EcoFusion",
"url": "https://github.com/dwip708/EcoFusion/pull/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
615143734 | is the new "high quality" camera supported?
hi,
rpi2dng barfs with images from the new Raspberry Pi camera. Is this supposed to work? Here's a raw file (captured with raspistill --raw) in case this helps.
https://rm.ignorelist.com/s/yrWY8362a6wNmjC
.rm
the sensor is a "IMX477R"
Not yet... Will look into it when I have time
Here is some info about other tools that do the DNG conversion for the HQ camera raw images: https://www.raspberrypi.org/forums/viewtopic.php?p=1665327#p1665327
| gharchive/issue | 2020-05-09T10:25:02 | 2025-04-01T06:38:28.259612 | {
"authors": [
"dword1511",
"rmalchow",
"tilllt"
],
"repo": "dword1511/raspiraw",
"url": "https://github.com/dword1511/raspiraw/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
312862188 | Feature/26 fix payscale ordering
updates pay scale labels
retrieves in correct order
Fix pay_scale ordering
Correct naming of pay scale options
| gharchive/pull-request | 2018-04-10T10:24:25 | 2025-04-01T06:38:28.325505 | {
"authors": [
"despo"
],
"repo": "dxw/teacher-vacancy-service",
"url": "https://github.com/dxw/teacher-vacancy-service/pull/112",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1552901521 | Update preinstalled software list for TCC agents
This PR was created by an automated build in TeamCity Cloud
Closing this PR as obsolete, there is a more recent one
| gharchive/pull-request | 2023-01-23T11:05:58 | 2025-04-01T06:38:28.326276 | {
"authors": [
"dy1ng"
],
"repo": "dy1ng/teamcity-documentation",
"url": "https://github.com/dy1ng/teamcity-documentation/pull/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2521106978 | [Snyk] Security upgrade @backstage/theme from 0.0.0-use.local to 0.1.1
Snyk has created this PR to fix 4 vulnerabilities in the yarn dependencies of this project.
Snyk changed the following file(s):
plugins/example-todo-list/package.json
Note for zero-installs users
If you are using the Yarn feature zero-installs that was introduced in Yarn V2, note that this PR does not update the .yarn/cache/ directory meaning this code cannot be pulled and immediately developed on as one would expect for a zero-install project - you will need to run yarn to update the contents of the ./yarn/cache directory.
If you are not using zero-install you can ignore this as your flow should likely be unchanged.
⚠️ Warning
Failed to update the yarn.lock, please update manually before merging.
Vulnerabilities that will be fixed with an upgrade:
Issue | Score
Asymmetric Resource Consumption (Amplification) SNYK-JS-BODYPARSER-7926860 | 112
Cross-site Scripting SNYK-JS-EXPRESS-7926867 | 80
Cross-site Scripting SNYK-JS-SEND-7926862 | 80
Cross-site Scripting SNYK-JS-SERVESTATIC-7926865 | 80
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
This PR was automatically created by Snyk using the credentials of a real user.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Cross-site Scripting
🎉 Snyk hasn't found any issues so far.
✅ code/snyk check is completed. No issues were found. (View Details)
| gharchive/pull-request | 2024-09-12T01:17:46 | 2025-04-01T06:38:28.364351 | {
"authors": [
"dylansnyk"
],
"repo": "dylansnyk/backstage",
"url": "https://github.com/dylansnyk/backstage/pull/4195",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2568872863 | [Snyk] Security upgrade @backstage/plugin-auth-node from 0.0.0-use.local to 0.1.0
Snyk has created this PR to fix 1 vulnerabilities in the yarn dependencies of this project.
Snyk changed the following file(s):
plugins/catalog-backend/package.json
Note for zero-installs users
If you are using the Yarn feature zero-installs that was introduced in Yarn V2, note that this PR does not update the .yarn/cache/ directory meaning this code cannot be pulled and immediately developed on as one would expect for a zero-install project - you will need to run yarn to update the contents of the ./yarn/cache directory.
If you are not using zero-install you can ignore this as your flow should likely be unchanged.
⚠️ Warning
Failed to update the yarn.lock, please update manually before merging.
Vulnerabilities that will be fixed with an upgrade:
Issue | Score
Cross-site Scripting (XSS) SNYK-JS-COOKIE-8163060 | 44
[!IMPORTANT]
Check the changes in this PR to ensure they won't cause issues with your project.
Max score is 1000. Note that the real score may have changed since the PR was raised.
This PR was automatically created by Snyk using the credentials of a real user.
Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.
For more information:
🧐 View latest project report
📜 Customise PR templates
🛠 Adjust project settings
📚 Read about Snyk's upgrade logic
Learn how to fix vulnerabilities with free interactive lessons:
🦉 Cross-site Scripting (XSS)
🎉 Snyk hasn't found any issues so far.
✅ security/snyk check is completed. No issues were found. (View Details)
✅ license/snyk check is completed. No issues were found. (View Details)
| gharchive/pull-request | 2024-10-06T20:14:14 | 2025-04-01T06:38:28.378067 | {
"authors": [
"dylansnyk"
],
"repo": "dylansnyk/backstage",
"url": "https://github.com/dylansnyk/backstage/pull/4621",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1958166786 | Initial benchmark setup
It is advisable to have solid benchmarks from the beginning to quickly and easily assess the performance impact of different implementations.
For this we could leverage jmh.
Some example JMH integrations can be found here: https://github.com/FasterXML/jackson-benchmarks/tree/2.15
The idea of this issue is to provide an initial setup to enable the project to write easy and reliable micro benchmarks.
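To make the idea concrete, a minimal JMH benchmark of the kind such a setup would enable might look like the sketch below. It is only an illustration: the class name, the resource path, and the placeholder workload are assumptions, not code from the Chicory repository.

```java
import java.io.InputStream;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 3)
@Measurement(iterations = 5)
@Fork(1)
public class ExampleBenchmark {

    private byte[] wasmBytes;

    @Setup
    public void setup() throws Exception {
        // Load a sample binary once, outside the measured code path (path is illustrative)
        try (InputStream in = ExampleBenchmark.class.getResourceAsStream("/example.wasm")) {
            wasmBytes = in.readAllBytes();
        }
    }

    @Benchmark
    public long measure() {
        // Placeholder workload; a real benchmark would call the parser, compiler,
        // or interpreter under test here, so parsing/compilation/runtime can be
        // measured separately
        long sum = 0;
        for (byte b : wasmBytes) {
            sum += b;
        }
        return sum;
    }
}
```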
I started a branch for this this morning. We've been mostly using the tests to measure performance, but I think it makes more sense to get a basic micro benchmark setup working. JMH will provide more control, and I'd also like to separate out things like parsing, compilation, and runtime performance. Will ping you when I get a PR started @danielperano
I think this has been solved by https://github.com/dylibso/chicory/pull/280 in the meantime.
| gharchive/issue | 2023-10-23T22:59:50 | 2025-04-01T06:38:28.396954 | {
"authors": [
"bhelx",
"thomasdarimont"
],
"repo": "dylibso/chicory",
"url": "https://github.com/dylibso/chicory/issues/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1867570947 | Improve Configuration and Validation sections of the docs
Summary
Fixes #911 and #968 by adding documentation on some Dynaconf parameters and reworking the Validation section of the docs.
Details
Not much detail to give, as context is present in the 2 linked issues.
Aside from the obvious additions to configuration.md and validation.md, I updated the Python version in CONTRIBUTING.md to 3.8+ and added the .DS_Store file to .gitignore. In particular, either of these 2 unrelated changes could be dropped from this PR if needed :).
Questions
Links between Markdown files don't seem to work on my local docs (make docs). Did I do something wrong or do they get fixed when deployed to the web?
Notes
Please feel free to ask any changes in this PR. The idea is to improve Dynaconf.
Maybe I have to do absolute links (without the ./ at the start)?
Thanks @pedro-psb for your input and patience. I pushed some changes, you can look at the diff here.
Please don't hesitate to re-ask for changes, and please mark as solved any discussions you think should be closed.
Hey @pedro-psb, I'll look at your comments during the weekend. My bad with the links though, I could've sworn that I went through all of them :S.
@pedro-psb sorry this is taking so long, I don't have much time mid-week to tackle this PR :S. I've now re-read and it looks like its better now. I'll look check all links from the dev deploy once it's ready.
| gharchive/pull-request | 2023-08-25T19:17:30 | 2025-04-01T06:38:28.403534 | {
"authors": [
"sebastian-correa"
],
"repo": "dynaconf/dynaconf",
"url": "https://github.com/dynaconf/dynaconf/pull/989",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1230077784 | md formatting
Added some python syntax highlighting to code blocks.
@ginotrombetti can you review this when you get a chance?
| gharchive/pull-request | 2022-05-09T18:15:35 | 2025-04-01T06:38:28.424543 | {
"authors": [
"andynataco"
],
"repo": "dynata/rex-sdk-python",
"url": "https://github.com/dynata/rex-sdk-python/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
921664882 | Add color override for OOTB Honeycomb Tiles
When viewing certain honeycomb tiles, like synthetics, the chart displays only green/red based on the success/failure of the synthetic monitor. If a monitor is disabled and its last run was successful, the cell shows green. This request is to override that green color (disabled monitor whose last run was good) or red color (disabled monitor whose last run was bad) with grey, depending on the status of the monitor. The field is called status and I can filter the chart on it, but I'm not sure if "status" is exposed where the powerup could just color those cells grey. Hopefully it is...
This doesn't seem possible without making some sort of API call.
| gharchive/issue | 2021-06-15T18:07:06 | 2025-04-01T06:38:28.425804 | {
"authors": [
"LucasHocker",
"TechShady"
],
"repo": "dynatrace-oss/DynatraceDashboardPowerups",
"url": "https://github.com/dynatrace-oss/DynatraceDashboardPowerups/issues/57",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
179763468 | No handler for module nn.MulConstant! for ResNet
Hi,
when trying to profile a ResNet trained on CIFAR10, I get:
In 5 module of nn.Sequential:
In 1 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.ConcatTable:
In 2 module of nn.Sequential:
In 2 module of nn.Concat:
/home/pedro/repositories/Torch7-profiling/src/profiler.lua:37: No handler for module nn.MulConstant!
stack traceback:
[C]: in function 'assert'
/home/pedro/repositories/Torch7-profiling/src/profiler.lua:37: in function 'compute_ops'
/home/pedro/repositories/Torch7-profiling/src/profiler.lua:19: in function </home/pedro/repositories/Torch7-profiling/src/profiler.lua:18>
[C]: in function 'xpcall'
/home/pedro/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/pedro/torch/install/share/lua/5.1/nn/Concat.lua:14: in function </home/pedro/torch/install/share/lua/5.1/nn/Concat.lua:9>
[C]: in function 'xpcall'
/home/pedro/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/pedro/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/pedro/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
...
/home/pedro/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/pedro/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function </home/pedro/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
/home/pedro/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
/home/pedro/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
/home/pedro/repositories/Torch7-profiling/src/profiler.lua:10: in function 'count_ops'
profile-model.lua:87: in main chunk
[C]: in function 'dofile'
...edro/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d50 ```
Thanks,
Added nn.MulConstant with this commit. Let me know if it is working.
Sorry for the late reply. It works! Thanks.
| gharchive/issue | 2016-09-28T12:50:17 | 2025-04-01T06:38:28.509796 | {
"authors": [
"codeAC29",
"pedropgusmao"
],
"repo": "e-lab/Torch7-profiling",
"url": "https://github.com/e-lab/Torch7-profiling/issues/7",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1791886272 | KC and mariadb upgrade
Check following resources:
https://github.com/keycloak/keycloak/tree/main/quarkus/container
https://github.com/keycloak/keycloak-containers/tree/main/docker-compose-examples
https://github.com/MariaDB/mariadb-docker/ issues/94
https://mariadb.org/mariadb-server-docker-official-images-healthcheck-without-mysqladmin/
https://github.com/MariaDB/mariadb-docker/ pull/508
Try to send a PR upstream now that it seems the project is active again:
https://github.com/Frankniesten/Limesurvey-SAML-Authentication/ pull/6
Beware that LS seems to create responses' tables using MyISAM, but it seems this can now be configured:
https://github.com/LimeSurvey/LimeSurvey/ pull/1043
https://mariadb.com/kb/en/converting-tables-from-myisam-to-innodb/
| gharchive/issue | 2023-07-06T16:38:49 | 2025-04-01T06:38:28.516623 | {
"authors": [
"imartinezortiz"
],
"repo": "e-ucm/docker-limesurvey",
"url": "https://github.com/e-ucm/docker-limesurvey/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
811270249 | Downloading Dashboard Comprised of Google Charts extends past width of PDF
I have a simple div tag with the id dashboard, and my code is as follows:
//Download PDF of Dashboard
$('#options-menu-download-as-pdf').on('click', function () {
var dashboard = document.getElementById('dashboard');
var options = {
filename: 'Report.pdf',
html2canvas: {
scale: 2
},
pagebreak: {
mode: ['avoid-all', 'css', 'legacy']
},
jsPDF: {
format: [500, 200],
unit: 'mm',
orientation: 'landscape'
}
}
html2pdf().from(dashboard).set(options).save();
});
This does everything I would like except I would like to be able to assign the width and height dynamically (I can do this already by window size or the current h/w of the dashboard in HTML but that could lead to poorly rendered PDFs if the user has their browser window not at full screen).
It appears as though the charts themselves cannot be manipulated to fit within the height/width of the PDF, so they get cut off, whereas my grid styling actually fits the dimensions of the document.
I looked at this thread: https://github.com/eKoopmans/html2pdf.js/issues/44 that discusses a 'fit-to-width' option. I am missing an example of that in the documentation, so I am not sure if that is an option with html2pdf, the underlying jsPDF, or even a CSS option.
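One workaround sketch, under the assumption that deriving the page size from the element's rendered size (rather than the window) is acceptable; this is not an official 'fit-to-width' option of html2pdf or jsPDF, just a way to compute the format dynamically:

```js
// Sketch: size the PDF page from the dashboard element itself (assumes ~96 dpi rendering)
$('#options-menu-download-as-pdf').on('click', function () {
  var dashboard = document.getElementById('dashboard');
  var pxToMm = 25.4 / 96; // rough px-to-mm conversion; display dependent
  var widthMm = dashboard.scrollWidth * pxToMm;
  var heightMm = dashboard.scrollHeight * pxToMm;

  var options = {
    filename: 'Report.pdf',
    html2canvas: { scale: 2 },
    jsPDF: {
      unit: 'mm',
      format: [widthMm, heightMm], // page sized to the content, not the window
      orientation: widthMm > heightMm ? 'landscape' : 'portrait'
    }
  };

  html2pdf().from(dashboard).set(options).save();
});
```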
How about: $("#Elementd").css("height", "297mm");
Or have you already found a solution?
| gharchive/issue | 2021-02-18T16:41:28 | 2025-04-01T06:38:28.644335 | {
"authors": [
"alissonalberini",
"neldreth2021"
],
"repo": "eKoopmans/html2pdf.js",
"url": "https://github.com/eKoopmans/html2pdf.js/issues/396",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2474781250 | 'HelloWorldPubSubType' object has no attribute 'setName'. Did you mean: 'set_name'?
E:\test\build\Release>python HelloWorldExample.py -p publisher
Creating publisher.
Traceback (most recent call last):
File "E:\test\build\Release\HelloWorldExample.py", line 206, in
writer = Writer(args.domain, args.machine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\test\build\Release\HelloWorldExample.py", line 113, in init
self.topic_data_type.setName("HelloWorldDataType")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HelloWorldPubSubType' object has no attribute 'setName'. Did you mean: 'set_name'?
Hi @SunLiangcan, thanks for your report.
I am assuming you are using the Fast DDS python version from master branch.
Fast DDS, and Fast DDS Python, are near to a major release (v3.0.0 and v2.0.0, respectively). That entails several API breaks and refactors. For that reason, and until these versions are released, it is strongly advisable to use the latest stable branch, 2.14.x (and python v1.4.2).
It seems that performing the TopicDataType refactor in the python repository we missed updating that example.
We will fix it and come back to you, sorry for the inconvenience.
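For anyone running the example against master before it is updated, the rename hinted at by the error message appears to be the relevant change; a sketch of the affected lines from Writer.__init__ in HelloWorldExample.py, based only on the traceback above:

```python
# Sketch of the affected lines (context taken from the traceback above)
self.topic_data_type = HelloWorldPubSubType()
self.topic_data_type.set_name("HelloWorldDataType")   # spelling on master / upcoming 2.x
# self.topic_data_type.setName("HelloWorldDataType")  # spelling used by the 1.4.x releases
```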
OK, with 1.4.2 there is no problem.
Hi @SunLiangcan, we have already fixed it (#183) in master
| gharchive/issue | 2024-08-20T05:57:49 | 2025-04-01T06:38:28.651028 | {
"authors": [
"JesusPoderoso",
"SunLiangcan"
],
"repo": "eProsima/Fast-DDS-python",
"url": "https://github.com/eProsima/Fast-DDS-python/issues/182",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1085601499 | [13307] is_metatraffic documentation
#126 extended the public API but no documentation was included. This PR solves the issue.
Merge after #140 (only last commit is relevant)
@JLBuenoLopez-eProsima I think this one has to be rebased
| gharchive/pull-request | 2021-12-21T09:10:03 | 2025-04-01T06:38:28.652211 | {
"authors": [
"JLBuenoLopez-eProsima",
"MiguelCompany"
],
"repo": "eProsima/Fast-DDS-statistics-backend",
"url": "https://github.com/eProsima/Fast-DDS-statistics-backend/pull/141",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
Gaps appear when the image is displayed; this needs some polishing
After the image is randomly shuffled, there are gaps between the pieces when it is displayed.
This is a compatibility problem with high-DPI screens where the system scales everything up by a factor...
I don't have time to fix it for now, so sorry~
| gharchive/issue | 2017-11-17T08:32:34 | 2025-04-01T06:38:28.865775 | {
"authors": [
"eatage",
"reckcn"
],
"repo": "eatage/VerificationCode",
"url": "https://github.com/eatage/VerificationCode/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
259099521 | Avoid deleting parent backup failed caused by 404
Catch the NotFound exception at the trove API level, and return directly to avoid bailing out while deleting a parent backup that has multiple child backups.
Signed-off-by: Fan Zhang zh.f@outlook.com
Please add the issue information to the commit message.
| gharchive/pull-request | 2017-09-20T09:32:23 | 2025-04-01T06:38:28.867009 | {
"authors": [
"2hf",
"zhaochao"
],
"repo": "eayunstack/trove",
"url": "https://github.com/eayunstack/trove/pull/43",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1334070532 | Allow settings-only mode
There should be a way to only show settings (and even user-defined lists of settings) in HomeKit as a multi-switch accessory.
This would allow voice control of settings such as speech enhancement or night mode without requiring any other visibility of Sonos in HK.
Ideally, I would be able to configure SonosZP to show a single tile that allowed me to turn on or off Night Mode and Speech Enhancement and any subwoofer I had paired to a given zone/surround setup. This would allow me to automate the "evening" Sonos settings along with lights and other HK devices.
You can change the settings from Siri: use Eve or another decent HomeKit app to create a scene with the setting. Then recall that scene from Siri.
Only exposing a single setting is beyond the scope of Homebridge ZP. If you don’t like to see the accessories exposed by Homebridge ZP, don’t use that plugin. You could probably use a plugin like homebridge-commander to expose dummy switches that issue zp commands to change a setting.
| gharchive/issue | 2022-08-10T05:00:33 | 2025-04-01T06:38:28.869039 | {
"authors": [
"SamTheGeek",
"ebaauw"
],
"repo": "ebaauw/homebridge-zp",
"url": "https://github.com/ebaauw/homebridge-zp/issues/198",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1375323559 | Country number is not supported
Hi, I was trying to use the SMS spammer on my own number. But it seems my Indian number is not supported. Can you help me with adding my country code to the script? What changes I must do and where.
Can you help me with adding my country code to the script? What changes I must do and where.
Hello, it is unlikely that I will be able to add support for Indian phone numbers🤷
| gharchive/issue | 2022-09-16T02:45:45 | 2025-04-01T06:38:28.870770 | {
"authors": [
"aakash-priyadarshi",
"ebankoff"
],
"repo": "ebankoff/Beast_Bomber",
"url": "https://github.com/ebankoff/Beast_Bomber/issues/48",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1922427406 | Allow port 80 for certificate renewal
Let's Encrypt requires port 80 to be open for certificate renewal. This commit adds port 80 to the firewall rules for coturn when using TLS.
OH! Sorry somehow I missed this PR completely.
Will work on it right away.
Moved to PR #10
| gharchive/pull-request | 2023-10-02T18:36:48 | 2025-04-01T06:38:28.872286 | {
"authors": [
"BrutalBirdie",
"louim"
],
"repo": "ebbba-org/ansible-role-coturn",
"url": "https://github.com/ebbba-org/ansible-role-coturn/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1032271056 | Fix BAM to FASTQ - d5410c6e-612d-421a-a66f-2de5e04dd050
SOP: https://ebi-ait.github.io/hca-ebi-wrangler-central/SOPs/update_ena_runs_SOP.html
I started to download the files from DCP Data Browser (https://data.humancellatlas.org/explore/projects/abe1a013-af7a-45ed-8c26-f3793c24a1f4/get-curl-command) to the EBI cluster into the nfs/production/hca/d5410c6e-612d-421a-a66f-2de5e04dd050 folder. 126 files/ 3.09 TB so it is going to take a while.
Started checksum generation on the EBI cluster to /nfs/production/hca/d5410c6e-612d-421a-a66f-2de5e04dd050/md5_checksum.txt.
Started uploading the files to ENA's FTP server into /d5410c6e-612d-421a-a66f-2de5e04dd050 folder.
Checksum calculation is done and stored here: /nfs/production/hca/d5410c6e-612d-421a-a66f-2de5e04dd050/md5_checksum.txt.
Files has been submitted to ENA and we got back the accessions:
<?xml version='1.0' encoding='UTF-8'?>
<RECEIPT receiptDate="2021-11-29T16:08:45.085Z" submissionFile="submission.xml" success="true">
<RUN accession="ERR7441145" alias="sequencingRun_90b34757-474f-42c3-9d31-683a5a0a84bd_1" status="PRIVATE" />
<RUN accession="ERR7441146" alias="sequencingRun_0f14c412-5014-4ac0-9a71-858b2f047777_1" status="PRIVATE" />
<RUN accession="ERR7441147" alias="sequencingRun_082e87ac-5cf6-4bad-bedb-5f6591b8f566_1" status="PRIVATE" />
<RUN accession="ERR7441148" alias="sequencingRun_157ba915-28d7-4d80-89ad-71c8291dbc05_1" status="PRIVATE" />
<RUN accession="ERR7441149" alias="sequencingRun_cc5a78a1-539f-4dec-80b6-62f35dcafd89_1" status="PRIVATE" />
<RUN accession="ERR7441150" alias="sequencingRun_01f7c3d0-d4be-432d-aa25-8c7fbce20b49_1" status="PRIVATE" />
<RUN accession="ERR7441151" alias="sequencingRun_dc31f31d-ab56-4025-9834-99be638a2d50_1" status="PRIVATE" />
<RUN accession="ERR7441152" alias="sequencingRun_b6dec4a6-2d9b-40ac-80c4-41dce01aea46_1" status="PRIVATE" />
<RUN accession="ERR7441153" alias="sequencingRun_51fb7eb7-a422-482e-a98e-c9e6f9628e97_1" status="PRIVATE" />
<RUN accession="ERR7441154" alias="sequencingRun_d8c08782-6f69-4314-947c-1afe6928cbce_1" status="PRIVATE" />
<RUN accession="ERR7441155" alias="sequencingRun_b3ce1085-08dc-42ff-a609-6968315327a8_1" status="PRIVATE" />
<RUN accession="ERR7441156" alias="sequencingRun_6a0f0064-ba67-43d6-985e-68d8edcf8c0b_1" status="PRIVATE" />
<RUN accession="ERR7441157" alias="sequencingRun_afd0ea55-e710-4b46-bb05-2423e491b6f5_1" status="PRIVATE" />
<RUN accession="ERR7441158" alias="sequencingRun_13a062ba-2b8e-43a1-bc2a-bc17f650b37d_1" status="PRIVATE" />
<RUN accession="ERR7441159" alias="sequencingRun_6d273f72-f55c-4c8e-b91e-29e762194c3f_1" status="PRIVATE" />
<RUN accession="ERR7441160" alias="sequencingRun_37cad11b-c8c9-4d1f-b715-498b0f8d4b35_1" status="PRIVATE" />
<RUN accession="ERR7441161" alias="sequencingRun_0b52914d-687b-44d1-9a70-a95df55ed502_1" status="PRIVATE" />
<RUN accession="ERR7441162" alias="sequencingRun_3a20b6a5-6652-4486-86bc-842c7c31c343_1" status="PRIVATE" />
<RUN accession="ERR7441163" alias="sequencingRun_44b8ad82-1109-4543-a534-a85b34c2c301_1" status="PRIVATE" />
<RUN accession="ERR7441164" alias="sequencingRun_548a75b4-ba45-4700-b7bb-656c3995c316_1" status="PRIVATE" />
<RUN accession="ERR7441165" alias="sequencingRun_2c2c943c-1c0e-462c-b630-8a91a1f0fb94_1" status="PRIVATE" />
<RUN accession="ERR7441166" alias="sequencingRun_83b474d3-c20f-48f6-95a0-b0fa2269f14d_1" status="PRIVATE" />
<SUBMISSION accession="ERA7498444" alias="SUBMISSION-29-11-2021-16:08:43:157" />
<MESSAGES />
<ACTIONS>ADD</ACTIONS>
</RECEIPT>
Filed ticket to ENA HelpDesk: [ENA DATA STATUS #549358]
Submitter: Broker
Name: Karoly Erdos
Email: karoly@ebi.ac.uk
CCEmails: wrangler-team@data.humancellatlas.org
Subject: Suppressing old ENA runs
Query is related to: Suppression
I work on: Humans
Organisms classification: Not applicable
The work is: Other/not sure (Raw sequencing reads)
Message Body:
Hi,
Could you please suppress the following ENA runs from this study (ERP120466 / PRJEB37165):
ERR4336830
ERR4336831
ERR4336832
ERR4336833
ERR4336834
ERR4336835
ERR4336836
ERR4336837
ERR4336838
ERR4336839
ERR4336840
ERR4336841
ERR4336842
ERR4336843
ERR4336844
ERR4336845
ERR4336846
ERR4336847
ERR4336848
ERR4336849
ERR4336850
ERR4336851
These were BAM files and they had been replaced with FASTQ files.
Please DO NOT delete/suppress the experiment and the new FASTQ files.
Many thanks,
Karoly
We have to monitor ENA to see when the new FASTQ files become available in the browser. Normally it takes at least 48 hours.
Project to check: https://www.ebi.ac.uk/ena/browser/view/PRJEB37165
BAM files have not yet been deleted as of Monday Dec 6th, 10:20 AM
@ke4 to confirm if BAM files are deleted in the ENA study today.
| gharchive/issue | 2021-10-21T09:33:50 | 2025-04-01T06:38:28.880879 | {
"authors": [
"ESapenaVentura",
"aaclan-ebi",
"ke4",
"ofanobilbao"
],
"repo": "ebi-ait/hca-ebi-wrangler-central",
"url": "https://github.com/ebi-ait/hca-ebi-wrangler-central/issues/526",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
810129393 | Wrong ontology for norbornene and premutilin
These relationships are wrong because alkenes are acyclic while norbornene and premutilin are cyclic.
norbornene (CHEBI:52286) is a alkene (CHEBI:32878)
premutilin (CHEBI:142455) is a alkene
Thanks. Now corrected.
| gharchive/issue | 2021-02-17T12:12:04 | 2025-04-01T06:38:28.882308 | {
"authors": [
"K-r-ll",
"amalik01"
],
"repo": "ebi-chebi/ChEBI",
"url": "https://github.com/ebi-chebi/ChEBI/issues/3999",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
487245288 | #9 Get block
Not ready for merging yet.
Created PR for reviewing and discussion.
@carte7000 done
| gharchive/pull-request | 2019-08-30T01:45:26 | 2025-04-01T06:38:28.912799 | {
"authors": [
"progital"
],
"repo": "ecadlabs/tezos-ts",
"url": "https://github.com/ecadlabs/tezos-ts/pull/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1427597676 | No incident recorded in the database
It seems that the incidents are not saved in the DB.
A critical error is visible in the logs
INFO:bot.incident.incident:Creating incident channel: inc-202210281751-test-4
ERROR:bot.incident.incident:Error sending message to incident digest channel: The request to the Slack API failed. (url: https://www.slack.com/api/chat.postMessage)
The server responded with: {'ok': False, 'error': 'not_in_channel'}
INFO:bot.incident.incident:Sending message to digest channel for: inc-202210281751-test-4
INFO:bot.incident.incident:Writing incident entry to database for inc-202210281751-test-4...
CRITICAL:bot.incident.incident:Error writing entry to database: local variable 'digest_message' referenced before assignment
ERROR:bot.models.incident:Incident update failed for inc-202210281751-test-4: No row was found when one was required
False positive: incident-bot was not added to the incident digest channel, which means it was not able to post a message.
Then everything failed afterward
| gharchive/issue | 2022-10-28T17:58:21 | 2025-04-01T06:38:28.929179 | {
"authors": [
"benjoz"
],
"repo": "echoboomer/incident-bot",
"url": "https://github.com/echoboomer/incident-bot/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2469009018 | Fix link error related to Go 1.23 release
Since the release of Go 1.23, go:linkname is a problem and the hook into testing.(*common).logDepth no longer works. Therefore (although ugly), we have to disable this feature inside github.com/echocat/slf4g/sdk/testlog for now.
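For context, the pattern that broke is roughly the following kind of "pull" linkname into an unexported testing internal. This is an illustrative sketch with an assumed signature, not the actual slf4g/testlog source; Go 1.23's toolchain rejects such references into the standard library by default.

```go
package testlog

import (
	"unsafe" // importing unsafe is required for go:linkname
)

// Illustrative only: a pull-style linkname to an unexported stdlib method.
// The receiver and parameter types here are assumptions for the sketch, and
// a blank .s file in the package may be needed to allow the body-less
// declaration to compile.
//
//go:linkname logDepth testing.(*common).logDepth
func logDepth(c unsafe.Pointer, s string, depth int)
```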
Pull Request Test Coverage Report for Build 10410896027
Details
0 of 0 changed or added relevant lines in 0 files are covered.
No unchanged relevant lines lost coverage.
Overall coverage remained the same at 97.987%
Totals
Change from base Build 10342024026:
0.0%
Covered Lines:
3992
Relevant Lines:
4074
💛 - Coveralls
| gharchive/pull-request | 2024-08-15T21:35:03 | 2025-04-01T06:38:28.934356 | {
"authors": [
"blaubaer",
"coveralls"
],
"repo": "echocat/slf4g",
"url": "https://github.com/echocat/slf4g/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1627587574 | Update GitHub Actions
Due to the migration, some actions do not work anymore
Currently, we have three actions:
Deploy to maven central: Is obsolete due to mandatory Jenkins usage --> Delete
Deploy to java Model: Is still needed. However, in the current state, it is still referencing the admin shell io packages --> Update
Run Tests: is still needed. Most likely an update to also cover the main branch is sufficient.
| gharchive/issue | 2023-03-16T14:14:02 | 2025-04-01T06:38:28.938077 | {
"authors": [
"FrankSchnicke"
],
"repo": "eclipse-aas4j/aas4j-model-generator",
"url": "https://github.com/eclipse-aas4j/aas4j-model-generator/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1677527448 | "org.eclipse.tracecompass.incubator.trace.server.jersey.rest.core.id" could not be found in the registry
Hi,
Just trying to follow the steps on the README file to run a demo on a Linux machine using the Trace extension on Theia.
I cloned the repo
I have the prerequisite of Java 11 as required
I completed the section Build the extension and example application
And the problem arises at the next step, in the Try the trace extension section: when I run the command yarn start:server, the following error message is saved in the log file at theia-trace-extension/trace-compass-server/configuration/:
!ENTRY org.eclipse.osgi 4 0 2023-04-20 23:02:54.573
!MESSAGE Application error
!STACK 1
java.lang.RuntimeException: Application "org.eclipse.tracecompass.incubator.trace.server.jersey.rest.core.id" could not be found in the registry. The applications available are: org.eclipse.equinox.app.error.
at org.eclipse.equinox.internal.app.EclipseAppContainer.startDefaultApp(EclipseAppContainer.java:252)
at org.eclipse.equinox.internal.app.MainApplicationLauncher.run(MainApplicationLauncher.java:33)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:136)
at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:402)
at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:255)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:659)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:596)
at org.eclipse.equinox.launcher.Main.run(Main.java:1467)
at org.eclipse.equinox.launcher.Main.main(Main.java:1440)
What is this error about, and what is the solution for it?
Thank you.
Thanks for having reported this. I just pushed a PR (linked herein) to fix the Java version in the README. Using 17 instead of 11 works locally for me, and should be the required version AFAIK.
| gharchive/issue | 2023-04-20T22:55:37 | 2025-04-01T06:38:28.941735 | {
"authors": [
"marco-miller",
"santimchp"
],
"repo": "eclipse-cdt-cloud/theia-trace-extension",
"url": "https://github.com/eclipse-cdt-cloud/theia-trace-extension/issues/964",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2614866290 | Update README files for default prop values cleanup
As part of https://github.com/eclipse-pass/main/issues/1066
@markpatton I realized I have to fix the ITs for pass-support after merging the pass-core change for default prop values. I'll fix them and then reopen this PR.
Apparently github has some issues with reopened PRs and rebase and merge. Closed this PR and opened a new PR https://github.com/eclipse-pass/pass-support/pull/130
| gharchive/pull-request | 2024-10-25T18:40:07 | 2025-04-01T06:38:29.021071 | {
"authors": [
"rpoet-jh"
],
"repo": "eclipse-pass/pass-support",
"url": "https://github.com/eclipse-pass/pass-support/pull/129",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
fix(tutorials): link to prerequisites doesn't work
What
There is a link to prerequisites in the Skills required section. This link doesn't work, and ends up in a Page not found.
Why
The link doesn't work.
closed by #623
| gharchive/issue | 2023-12-13T19:50:06 | 2025-04-01T06:38:29.044266 | {
"authors": [
"stephanbcbauer"
],
"repo": "eclipse-tractusx/eclipse-tractusx.github.io",
"url": "https://github.com/eclipse-tractusx/eclipse-tractusx.github.io/issues/563",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2539119953 | fix(service marketplace): list all active services
Description
The service marketplace should have all the active services, and now, with the implemented solution, we can see all of them.
Changelog entry:
- fixed service marketplace to display all active available services [#1143](https://github.com/eclipse-tractusx/portal-frontend/issues/1143)
Why
Previously, the Service Marketplace did not have all the active services from all the service providers available. Now, for each service provider, we can see all the services in the marketplace.
Issue
https://github.com/eclipse-tractusx/portal-frontend/issues/1143
Checklist
[x] I have performed a self-review of my own code
[x] I have successfully tested my changes locally
@ma3u @oyo can you please review this PR ?
currently, I can't add you to this PR as a reviewer. However, I have already created the ticket to become a tractusx contributor: https://github.com/eclipse-tractusx/sig-infra/issues/547 . Thank you!
@evegufy I have updated the dependency file but the error message still persists, and I don't have any clue about what it means.
signal need to update DEPENDENCIES.
@manojava-gk I have introduced an enum for the frequently used sorting-type strings. Can you have a look please? :)
@Usmanfee Just to maintain consistency use PascalCase instead. See other examples in the page.
@manojava-gk we also use the camelCase format in e.g. the PAGES and OVERLAYS enums. I am using camelCase since this is the pattern the backend supports.
@Usmanfee one more thing: the constants file is a place where we host the most common things in the app. I do not think this sorting type is a generic one; it is very specific to the backend API. I prefer to define this in the specific API types file.
In the future we can move this into the constants when all the APIs' sorting types are unified.
CC: @oyo
@manojava-gk I have updated the code based on your suggestion. could you please have a look now ? Thank you :)
Looks fine now
@oyo Thanks for your feedback :) . I have resolved your comments with suggested changes.
@evegufy I have reverted changelog changes. @oyo could you please re-approve and merge it again ? Thank you
| gharchive/pull-request | 2024-09-20T15:40:29 | 2025-04-01T06:38:29.051297 | {
"authors": [
"Usmanfee",
"manojava-gk"
],
"repo": "eclipse-tractusx/portal-frontend",
"url": "https://github.com/eclipse-tractusx/portal-frontend/pull/1134",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1722324452 | chore(docker hub): clean up after registry move
Description
change registry in package.json
Why
moved to docker hub
Issue
https://github.com/eclipse-tractusx/portal-frontend/pull/19
Checklist
[x] I have performed a self-review of my own code
[x] I have successfully tested my changes locally
@oyo could you please check if this script is still needed? It contains references to the old registry.
@oyo could you please check if this script is still needed? It contains references to the old registry.
@evegufy That is a very old script initially meant to build and push images to the azure container registry that we had before ghcr. It's not used any more and we can delete it.
| gharchive/pull-request | 2023-05-23T15:34:37 | 2025-04-01T06:38:29.054912 | {
"authors": [
"evegufy",
"oyo"
],
"repo": "eclipse-tractusx/portal-frontend",
"url": "https://github.com/eclipse-tractusx/portal-frontend/pull/34",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1948862756 | Separate Issuer Component (removed from MIW)
Description
Remove Issuer Component from MIW.
Issuer Component to be transferred to separate component,
Harmonize approach with other dataspaces, GAIA-X and EDC and reduce restrictions.
Impact
Additional information
[ ] I'm willing to contribute to this feature
Won't do in PI11
replaced with new feature https://github.com/eclipse-tractusx/sig-release/issues/416
| gharchive/issue | 2023-10-18T05:34:02 | 2025-04-01T06:38:29.057275 | {
"authors": [
"jjeroch",
"stefan-ettl",
"stephanbcbauer"
],
"repo": "eclipse-tractusx/sig-release",
"url": "https://github.com/eclipse-tractusx/sig-release/issues/246",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1959403673 | Story "Introduce Supply Chain Domain Ontology"
# Story "Introduce Supply Chain Domain Ontology"
Repository https://github.com/catenax-ng/product-ontology
Statement
As an ESS use case developer, I want to have a supply-chain domain ontology such that I can formulate my use case roles, permissions and skills with a vocabulary based on BPN entities paired with abstract material flows.
Acceptance Criteria
Domain Ontology Exists
Domain Ontology Validates
Estimation
5 SP
Originally posted by @drcgjung in https://github.com/eclipse-tractusx/sig-release/issues/280#issuecomment-1770735415
This issue is in the wrong place. @igorsvetlov could you please raise the issue under https://github.com/catenax-ng/product-ontology
@FaGru3n please re-open for now. We'll discuss with @drcgjung where to place this one and other related stories.
| gharchive/issue | 2023-10-24T14:33:53 | 2025-04-01T06:38:29.061069 | {
"authors": [
"FaGru3n",
"igorsvetlov"
],
"repo": "eclipse-tractusx/sig-release",
"url": "https://github.com/eclipse-tractusx/sig-release/issues/318",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1817606614 | Update esmf-sdk version
update esmf-sdk version to 2.2.3
PR:139 will close this issue.
Hi @richaashara ,
thanks for your feedback and PR. We will take a look over the PR.
Hi @richaashara ,
Thanks for the PR. This PR is incomplete since unit tests are failing.
We are also working on the ESMF SDK version update.
Updated to 2.4.2. Issue will be closed.
| gharchive/issue | 2023-07-24T04:44:11 | 2025-04-01T06:38:29.063509 | {
"authors": [
"richaashara",
"shijinrajbosch",
"tunacicek"
],
"repo": "eclipse-tractusx/sldt-semantic-hub",
"url": "https://github.com/eclipse-tractusx/sldt-semantic-hub/issues/138",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1849697512 | fix: get rid of ghcr.io image references
I've found multiple docker image links referencing the ghcr.io registry. Please get rid of them and reference the DockerHub images instead. Some examples:
trivy.yml worfklow
values.yaml in main chart
values.yaml in backend and frontend chart
It's okay to build for both registries but DockerHub is preferred and should be used in cases mentioned above. See TRG4.05.
Hi @almadigabor I removed the main references and only left the ghcr repository in cases where we build for both registries. The 3 files mentioned have been changed, though!
Please find attached links:
Trivy: https://github.com/eclipse-tractusx/traceability-foss/blob/main/.github/workflows/trivy.yml
Backend Values: https://github.com/eclipse-tractusx/traceability-foss/blob/main/charts/traceability-foss/charts/backend/values.yaml
Frontend Values: https://github.com/eclipse-tractusx/traceability-foss/blob/main/charts/traceability-foss/charts/frontend/values.yaml
Main Values: https://github.com/eclipse-tractusx/traceability-foss/blob/main/charts/traceability-foss/values.yaml
Please let me know if you need anything else.
Thanks in advance!
Hey! Looks good!
| gharchive/issue | 2023-08-14T12:39:34 | 2025-04-01T06:38:29.068627 | {
"authors": [
"almadigabor",
"ds-mwesener"
],
"repo": "eclipse-tractusx/traceability-foss",
"url": "https://github.com/eclipse-tractusx/traceability-foss/issues/241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2438808641 | Clarify requirements of L3 services and L3 clients
The current up-l3 specs combine the requirements for the service with the requirements for clients communicating with those services. We have found it difficult to parse what we are expected to implement for client code in up-cpp vs what is a behavior expected of the service. For example, it is not clear in the USubscription spec whether the state diagram is something the client does when subscribing to a topic, or if it represents an internal state per-topic within the service.
I would recommend splitting the specs into separate documents. One could focus on the requirements of the service itself, such as its states, behaviors, inputs, and outputs. The other(s) could cover how clients interact with those services - what RPC methods are available, what data they're expected to send, what steps they need to follow, are they expected to subscribe for notifications, etc. The client specs could be defined entirely in terms of layer 2 components and operations, abstracting any protocol details below layer 2.
After splitting these specs, the file tree would probably be something like this:
up-l3
├── usubscription
│ └── v3
│ ├── README.adoc
│ ├── service.adoc
│ ├── client_publisher.adoc
│ └── client_subscriber.adoc
└── utwin
└── v2
├── README.adoc
├── service.adoc
└── client.adoc
This has been done for uDiscovery. A separate issue needs to be opened for uSubscription.
| gharchive/issue | 2024-07-30T23:00:26 | 2025-04-01T06:38:29.070911 | {
"authors": [
"gregmedd",
"stevenhartley"
],
"repo": "eclipse-uprotocol/up-spec",
"url": "https://github.com/eclipse-uprotocol/up-spec/issues/209",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
610910661 | NPE on w2vec example
Running with snapshots. Code here
o.n.l.f.Nd4jBackend - Loaded [JCublasBackend] backend
o.n.n.NativeOpsHolder - Number of threads used for linear algebra: 32
o.n.l.a.o.e.DefaultOpExecutioner - Backend used: [CUDA]; OS: [Linux]
o.n.l.a.o.e.DefaultOpExecutioner - Cores: [64]; Memory: [26.7GB];
o.n.l.a.o.e.DefaultOpExecutioner - Blas vendor: [CUBLAS]
o.n.l.j.JCublasBackend - ND4J CUDA build version: 10.2.89
o.n.l.j.JCublasBackend - CUDA device 0: [GeForce RTX 2060 SUPER]; cc: [7.5]; Total memory: [8368685056]
o.n.l.j.JCublasBackend - CUDA device 1: [GeForce RTX 2060 SUPER]; cc: [7.5]; Total memory: [8370061312]
o.d.n.m.MultiLayerNetwork - Starting MultiLayerNetwork with WorkspaceModes set to [training: ENABLED; inference: ENABLED], cacheMode set to [NONE]
o.d.n.l.r.LSTM - cuDNN not found: use cuDNN for better GPU performance by including the deeplearning4j-cuda module. For more information, please refer to: https://deeplearning4j.org/docs/latest/deeplearning4j-config-cudnn
java.lang.ClassNotFoundException: org.deeplearning4j.cuda.recurrent.CudnnLSTMHelper
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at java.lang.Class.forName0(Native Method)
Exception in thread "main" java.lang.NullPointerException
at org.nd4j.linalg.dataset.ExistingMiniBatchDataSetIterator.<init>(ExistingMiniBatchDataSetIterator.java:58)
at org.nd4j.linalg.dataset.ExistingMiniBatchDataSetIterator.<init>(ExistingMiniBatchDataSetIterator.java:47)
at org.deeplearning4j.examples.multigpu.advanced.w2vsentiment.ImdbReviewClassificationRNN.main(ImdbReviewClassificationRNN.java:106)
Is it reproducible on CPU?
Will check
Pure Java issue by the looks of it. Added null check to iterator constructor.
Closing. Also changed example to automatically run presave. Thank you @raver119
| gharchive/issue | 2020-05-01T18:56:01 | 2025-04-01T06:38:29.377969 | {
"authors": [
"eraly",
"raver119"
],
"repo": "eclipse/deeplearning4j",
"url": "https://github.com/eclipse/deeplearning4j/issues/8902",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
789975722 | "Cannot do backward pass: all epsilons not set" when using frozen layer
I have a convolution layer that, when its output feeds a frozen deconvolution2D layer, triggers the following exception:
java.lang.IllegalStateException: Cannot do backward pass: all epsilons not set. Layer "fc" (idx 1876), numInputs :1; numOutputs: 1
at org.deeplearning4j.nn.graph.vertex.impl.LayerVertex.doBackward(LayerVertex.java:133)
at org.deeplearning4j.nn.graph.ComputationGraph.calcBackpropGradients(ComputationGraph.java:2713)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1382)
at org.deeplearning4j.nn.graph.ComputationGraph.computeGradientAndScore(ComputationGraph.java:1342)
at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:170)
at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:63)
at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
at org.deeplearning4j.nn.graph.ComputationGraph.fitHelper(ComputationGraph.java:1166)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1116)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1083)
at org.deeplearning4j.nn.graph.ComputationGraph.fit(ComputationGraph.java:1019)
Here is the creation of the frozen layer:
graph.appendLayer("transpose",
    new FrozenLayer(
        new Deconvolution2D.Builder(new int[]{upFactor * 2, upFactor * 2}, new int[]{upFactor, upFactor},
                new int[]{upFactor / 2, upFactor / 2})
            .nOut(numClasses)
            .hasBias(false)
            .weightInit(linearInterpolationInit)
            .build()
    )
);
If I remove the FrozenLayer (keeping an unfrozen Deconvolution2D layer), the exception doesn't show up.
Using DL4J beta 7.
@HGuillemet please check with a debugger whether it goes into org.deeplearning4j.nn.layers.FrozenLayer.java:159 in the gradientAndScore() method
Also, it seems like "fc" is a different layer and it's not provided here. Debugging requires the full code, or at least the graph declaration and initialization.
@HGuillemet (and future readers) I'll get to this after the release. I plan on doing a bug fix sprint for bugs like this and in samediff. Sorry for the wait.
@HGuillemet I'll be getting to this now that M1 is out. If you have any updates on this, let me know. Feedback is appreciated!
Closing this. We support backprop through frozen layers with FrozenLayerWithBackprop - usages here: https://github.com/eclipse/deeplearning4j/blob/5e8951cd8ee8106bb393635f840c398a1759b2fa/deeplearning4j/deeplearning4j-core/src/test/java/org/deeplearning4j/nn/layers/FrozenLayerWithBackpropTest.java#L90 I assume this is for GANs.
| gharchive/issue | 2021-01-20T13:23:59 | 2025-04-01T06:38:29.382209 | {
"authors": [
"HGuillemet",
"agibsonccc",
"jljljl"
],
"repo": "eclipse/deeplearning4j",
"url": "https://github.com/eclipse/deeplearning4j/issues/9159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
188728531 | "Cannot retry request with a non-repeatable request entity" exception when uploading triples
I have made a fresh install of rdf4j 2.1.1 on Tomcat 8.5.
Now I'm trying to add some triples using the workbench->add upload which doesn't work. I get the following exception:
org.apache.http.client.ClientProtocolException
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:186)
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
org.eclipse.rdf4j.http.client.SparqlSession.execute(SparqlSession.java:1102)
org.eclipse.rdf4j.http.client.SparqlSession.executeNoContent(SparqlSession.java:1083)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:845)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:672)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:661)
org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.add(HTTPRepositoryConnection.java:466)
org.eclipse.rdf4j.workbench.commands.AddServlet.add(AddServlet.java:101)
org.eclipse.rdf4j.workbench.commands.AddServlet.doPost(AddServlet.java:52)
org.eclipse.rdf4j.workbench.base.TransformationServlet.service(TransformationServlet.java:96)
org.eclipse.rdf4j.workbench.base.AbstractServlet.service(AbstractServlet.java:125)
org.eclipse.rdf4j.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:109)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:213)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.handleRequest(WorkbenchServlet.java:141)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:109)
org.eclipse.rdf4j.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:120)
org.eclipse.rdf4j.workbench.base.AbstractServlet.service(AbstractServlet.java:125)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
org.eclipse.rdf4j.workbench.proxy.CacheFilter.doFilter(CacheFilter.java:62)
org.eclipse.rdf4j.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:53)
root cause
org.apache.http.client.NonRepeatableRequestException: Cannot retry request with a non-repeatable request entity
org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:107)
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
org.eclipse.rdf4j.http.client.SparqlSession.execute(SparqlSession.java:1102)
org.eclipse.rdf4j.http.client.SparqlSession.executeNoContent(SparqlSession.java:1083)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:845)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:672)
org.eclipse.rdf4j.http.client.SesameSession.upload(SesameSession.java:661)
org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.add(HTTPRepositoryConnection.java:466)
org.eclipse.rdf4j.workbench.commands.AddServlet.add(AddServlet.java:101)
org.eclipse.rdf4j.workbench.commands.AddServlet.doPost(AddServlet.java:52)
org.eclipse.rdf4j.workbench.base.TransformationServlet.service(TransformationServlet.java:96)
org.eclipse.rdf4j.workbench.base.AbstractServlet.service(AbstractServlet.java:125)
org.eclipse.rdf4j.workbench.proxy.ProxyRepositoryServlet.service(ProxyRepositoryServlet.java:109)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:213)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.handleRequest(WorkbenchServlet.java:141)
org.eclipse.rdf4j.workbench.proxy.WorkbenchServlet.service(WorkbenchServlet.java:109)
org.eclipse.rdf4j.workbench.proxy.WorkbenchGateway.service(WorkbenchGateway.java:120)
org.eclipse.rdf4j.workbench.base.AbstractServlet.service(AbstractServlet.java:125)
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
org.eclipse.rdf4j.workbench.proxy.CacheFilter.doFilter(CacheFilter.java:62)
org.eclipse.rdf4j.workbench.proxy.CookieCacheControlFilter.doFilter(CookieCacheControlFilter.java:53)
I can't be the only one with this problem?
/Rune
I can't immediately reproduce this.
How are you adding the triples: by URL, by local file upload, or are you using the text area to copy-paste data?
Also: what kind of store are you uploading to? Native, Memory? Any reasoners or other "extras" (lucene, etc) enabled?
Are there any errors or warning in the RDF4J Server logs (http://localhost:8080/rdf4j-server/system/logging/overview.view)?
Possible cause is use of InputStreamEntity (which is non-repeatable) to read an InputStream. We should look into using a FileEntity instead, if possible.
Btw this should not normally happen: for some reason the initial request failed, the http client tries to redo the request (and fails because it uses an inputstream for its content). It will be interesting to figure out why the request failed (which is why I'd like to see if your server logs show anything).
Hi Jeen
So it's happening when I try to upload an .nt file via the add menu in the workbench:
Uploading the same data as RDF works.
The same problem occurs if I use curl.
Other questions:
Simple Native Java Store
No reasoners
RDF4J Workbench 2.1.1
Java 1.8.0_112
Server log:
[Rio fatal] Auf Elementtyp "http:" müssen entweder Attributspezifikationen, ">" oder "/>" folgen. (1, 8)
Client sent bad request ( 400)org.eclipse.rdf4j.http.server.ClientHTTPException: MALFORMED DATA: Auf Elementtyp "http:" müssen entweder Attributspezifikationen, ">" oder "/>" folgen. [line 1, column 8]
Rolling back transaction due to connection closejava.lang.Throwable: null
at org.eclipse.rdf4j.sail.helpers.AbstractSailConnection.close(AbstractSailConnection.java:224)
at org.eclipse.rdf4j.repository.sail.SailRepositoryConnection.close(SailRepositoryConnection.java:199)
at org.eclipse.rdf4j.http.server.repository.RepositoryInterceptor.cleanUpResources(RepositoryInterceptor.java:164)
at org.eclipse.rdf4j.http.server.ServerInterceptor.afterCompletion(ServerInterceptor.java:45)
at org.eclipse.rdf4j.http.server.ServerInterceptor$$FastClassBySpringCGLIB$$3d820688.invoke()
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.doProceed(DelegatingIntroductionInterceptor.java:133)
at org.springframework.aop.support.DelegatingIntroductionInterceptor.invoke(DelegatingIntroductionInterceptor.java:121)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653)
at org.eclipse.rdf4j.http.server.repository.RepositoryInterceptor$$EnhancerBySpringCGLIB$$8ebd78c0.afterCompletion()
at org.springframework.web.servlet.HandlerExecutionChain.triggerAfterCompletion(HandlerExecutionChain.java:170)
at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1045)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:971)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at com.github.ziplet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:300)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:108)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:349)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:784)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Unknown Source)
Client sent bad request ( 400)org.eclipse.rdf4j.http.server.ClientHTTPException: MALFORMED DATA: Auf Elementtyp "http:" müssen entweder Attributspezifikationen, ">" oder "/>" folgen. [line 1, column 8]
at org.eclipse.rdf4j.http.server.repository.statements.StatementsController.getAddDataResult(StatementsController.java:463)
at org.eclipse.rdf4j.http.server.repository.statements.StatementsController.handleRequestInternal(StatementsController.java:125)
at org.springframework.web.servlet.mvc.AbstractController.handleRequest(AbstractController.java:147)
at org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter.handle(SimpleControllerHandlerAdapter.java:50)
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959)
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893)
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648)
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at com.github.ziplet.filter.compression.CompressingFilter.doFilter(CompressingFilter.java:300)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:108)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:349)
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:784)
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:802)
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1410)
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Unknown Source)
Regards,
Rune
It's definitely strange that you get this unhelpful error in the Workbench, but the Server log shows that your N-Triples file is syntactically incorrect. Apparently already at line 1. This is what causes the request to fail.
Strange, because it's an unmodified export from the workbench of an earlier version of Sesame.
There have been some changes in the N-Triples parser. We updated to a newer version of the spec a few versions of Sesame ago. However I thought those changes were backward-compatible.
Can you copy in a fragment of your N-Triples file? Just the first few lines should do since apparently the problem is right at the start...
Ok - Thank you:
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.kl.dk/ontologies/kle_emneindeks.owl#Hovedgruppe .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.w3.org/1999/02/22-rdf-syntax-ns#type http://www.kl.dk/ontologies/kle_emneindeks.owl#Hovedgruppe .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.w3.org/2000/01/rdf-schema#label "Kommunens styrelse" .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#HgServicesidetekst "Hovedgruppe 00 'Kommunens styrelse' d\u00E6kker sager der vedr\u00F8rer styrelsen af kommunen som kommune jf. kommunestyrelsesloven. Det g\u00E6lder bl.a. kommunalbestyrelsens konstituering, neds\u00E6ttelse af politiske udvalg, forretningsorden, styrelsesvedt\u00E6gt, kommunens organisation og \u00F8konomiske forvaltning.\n\nHovedgruppe 00 indeholder ogs\u00E5 grupper, som ikke naturligt kan indplaceres under de \u00F8vrige hovedgrupper. Fx 00.03 'International virksomhed og EU', 00.05 'Bes\u00F8g, repr\u00E6sentation', 00.10 'Integration og udl\u00E6ndinge.'\n\nSager der g\u00E5r p\u00E5 tv\u00E6rs af flere hovedgrupper, indplaceres under 00.01.10 'Opgaver der d\u00E6kker flere hovedgrupper', oprettet konkret til dette form\u00E5l.\n\nHovedgruppe 00 omfatter bl.a. sager, der administreres/behandles i henhold til f\u00F8lgende lovgivning:\n\n\nLBK om folkeskolen (Folkeskoleloven)\nLBK om fonde og visse foreninger (Fondsloven)\nLBK om forpligtende kommunale samarbejder\nLBK om fremgangsm\u00E5den ved ekspropriation vedr\u00F8rende fast ejendom\nLBK om kommunal udligning og generelle tilskud til kommuner (Udligningsloven)\nLBK om kommunernes styrelse (Kommunestyrelsesloven)\nLBK om regionernes finansiering\nLBK om statsgaranti til l\u00E5n til fors\u00F8gsbyggeri (St\u00F8ttet boligbyggeri)\nLBK om ungdomsskoler (Ungdomsskoleloven)\nLov om en satsreguleringsprocent\nLov om etablering af den selvejende institution Udbetaling Danmark\nLov om Folketingets Ombudsmand (Ombudsmandsloven)\nLov om kommunale borgerservicecentre (Borgerservicecenterloven)\nLov om midlertidig binding af kommunernes og amtskommunernes overskudslikviditet\nLov om neds\u00E6ttelse af statstilskuddet til kommuner ved forh\u00F8jelser af kommunale parkeringsindt\u00E6gter\nLov om offentlighed i forvaltningen (Offentlighedsloven)\nLov om regionale kulturfors\u00F8g\nLov om revision af den kommunale inddeling\nLov om Udbetaling Danmark (Udbetaling Danmark-loven)\nLov om videreanvendelse af den offentlige sektors informationer (PSI-loven)\nLov om \u00E6ndringer af landets inddeling i kommuner og regioner og om opl\u00F8sning og udpegelse af forpligtende kommunale samarbejder\nLov om socialtilsyn (Socialtilsynsloven)\nLov om \u00E6ndring af lov om Det Centrale Personregister (S\u00E6rlig adressebeskyttelse til personer, som uds\u00E6ttes for trusler mod deres person i forbindelse med \u00E6resrelaterede eller samlivsrelaterede konflikter m.v.) (CPR-loven)\nLBK om biblioteksvirksomhed (Biblioteksloven)\nBEK om beregning af satsreguleringsprocenten m.v.\nBEK om deling af kommuner\nBEK om forretningsorden for b\u00F8rn og unge-udvalgene\nBEK om kommuners l\u00E5ntagning og meddelelse af garantier mv.\nBEK om kommunernes mellemv\u00E6render med de kommunale forsyningsvirksomheder\nBEK af lov om frikommuner m.v. (Frikommuneloven)\nBEK om aktindsigt i visse interne kommunale og regionale dokumenter\nBEK af lov om rettens pleje\nBEK om Udbetaling Danmarks kontoplan m.v.\nBEK om kommunernes budget- og regnskabsv\u00E6sen, revision m.v.\nBEK om \u00E6ndring af bekendtg\u00F8relse om regionernes budget- og regnskabsv\u00E6sen, revision m.v.\nBEK om regnskaber m.v. for Udbetaling Danmark\nBEK af lov om merv\u00E6rdiafgift (momsloven)\nBEK af forvaltningsloven (Forvaltningsloven)\nBEK om organisering og underst\u00F8ttelse af besk\u00E6ftigelsesindsatsen m.v.\n\n" .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#Hovedgruppenummer "00" .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#Oprettetdato "1988-01-01"^^http://www.w3.org/2001/XMLSchema#date .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#har_rettelse http://www.kl.dk/ontologies/kle_emneindeks.owl#ret_hg_00 .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#indg\u00E5r_i_delplan http://www.kl.dk/ontologies/kle_emneindeks.owl#dp_FolkeskolenDelplan .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#indg\u00E5r_i_delplan http://www.kl.dk/ontologies/kle_emneindeks.owl#dp_DagtilbudDelplan .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#indg\u00E5r_i_delplan http://www.kl.dk/ontologies/kle_emneindeks.owl#dp_KommunalVandDelplan .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#indg\u00E5r_i_delplan http://www.kl.dk/ontologies/kle_emneindeks.owl#dp_SaerligStoetteDelplan .
http://www.kl.dk/ontologies/kle_emneindeks.owl#_00 http://www.kl.dk/ontologies/kle_emneindeks.owl#indg\u00E5r_i_delplan http://www.kl.dk/ontologies/kle_emneindeks.owl#dp_JobcenterDelplan .
Odd - I can upload the fragment you provided just fine, without any errors. Can you verify if the problem still occurs if you only try to upload this fragment?
If it no longer occurs with this fragment, there might be a problem deeper in the file. Not sure how private your data is, but if possible can you attach a copy of the full file to this issue?
Update: I think I have found the root cause: you are trying to upload a file of type "N-Triples", but you have the wrong file type selected, or you have "(autodetect)" selected. Unfortunately autodetect does not work properly; this is a known issue ( #67 ). Try uploading by explicitly selecting "N-Triples" in the file format dropdown.
A secondary issue is that the error should be reported like this: https://cloud.githubusercontent.com/assets/728776/20333995/c2f86aae-ac0a-11e6-9ed0-639758b711a2.png
But instead you get this strange ClientProtocolException
Hi Jeen
I found out, by importing as RDF and looking at the system log, that I had problems with invalid RDF:id names, and also duplicates. Like you said, these things weren't checked in the older Sesame versions.
The only thing is that the browser and API still throw the not-very-telling connection exception when there's an error in the RDF/XML. I have to go to the system log to see what's actually wrong.
One of the implementations of SesameSession.upload (heavily overloaded and there are multiple final paths possible) buffers the content into a ByteArrayInputStream in memory and stores it in a custom implementation of AbstractHttpEntity. It then uses Apache Commons IOUtil.transfer, which seems like it depletes the InputStream, and hence isRepeatable has been set to return false for it. It could be implemented alternatively to be repeatable, given it is already buffering. That could improve debugging for issues like this.
I found this while reviewing my changes for #684, but I won't change it in that pull request.
| gharchive/issue | 2016-11-11T10:20:23 | 2025-04-01T06:38:29.589578 | {
"authors": [
"ansell",
"jeenbroekstra",
"rune66"
],
"repo": "eclipse/rdf4j",
"url": "https://github.com/eclipse/rdf4j/issues/647",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
487725817 | JMH runner ignores lock to avoid parallel builds failing
This PR addresses GitHub issue: #1529 .
Briefly describe the changes proposed in this PR:
configured the surefire plugin to set jmh.ignoreLock to true to avoid benchmark execution failing
ran a local parallel test; it seems to solve the issue (but the proof is in Jenkins, obviously)
Merging this in immediately in a "let's see if it actually works" kinda way.
| gharchive/pull-request | 2019-08-31T03:44:16 | 2025-04-01T06:38:29.592649 | {
"authors": [
"jeenbroekstra"
],
"repo": "eclipse/rdf4j",
"url": "https://github.com/eclipse/rdf4j/pull/1530",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
1212334239 | Merge develop into main
GitHub issue resolved: #
Briefly describe the changes proposed in this PR:
PR Author Checklist (see the contributor guidelines for more details):
[ ] my pull request is self-contained
[ ] I've added tests for the changes I made
[ ] I've applied code formatting (you can use mvn process-resources to format from the command line)
[ ] I've squashed my commits where necessary
[ ] every commit message starts with the issue number (GH-xxxx) followed by a meaningful description of the change
Main requires PR verify for java 8 :(
| gharchive/pull-request | 2022-04-22T13:01:27 | 2025-04-01T06:38:29.596187 | {
"authors": [
"hmottestad"
],
"repo": "eclipse/rdf4j",
"url": "https://github.com/eclipse/rdf4j/pull/3820",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
277480468 | Update third-party-libraries with latest output
third-party-libraries.xlsx as generated from eb34c80cd1c511ed7184f4ba4993a95208753e42
Can one of the admins verify this patch?
| gharchive/pull-request | 2017-11-28T17:57:19 | 2025-04-01T06:38:29.634614 | {
"authors": [
"Vogel612",
"genie-winery"
],
"repo": "eclipse/winery",
"url": "https://github.com/eclipse/winery/pull/203",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2556010091 | CadsObs adaptor to use the retrieve_list_of_results method
For consistency with other adaptors, the CadsObs adaptor should use the retrieve_list_of_results method.
This returns a list of paths, and is called by the default self.retrieve method. The self.make_download_object is then responsible for turning the list of paths into a downloadable object for the retrieve-api.
If the default self.make_download_object is incompatible with the adaptor, then please define your own retrieve method which handles the creation of a download object, which must be an open binary file.
This will give more flexibility with anticipated functionality, and allows use with the MultiAdaptor.
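For illustration, a minimal sketch of that shape, assuming the hooks behave exactly as described above; only the two method names (retrieve_list_of_results, make_download_object) come from this issue, everything else (class name, helper, typing) is made up:

# Hypothetical sketch, not the real cads-adaptors code.
from typing import Any, Dict, List


class ObservationsAdaptor:  # would subclass the project's base adaptor in practice
    def retrieve_list_of_results(self, request: Dict[str, Any]) -> List[str]:
        """Write the requested observations to disk and return the file paths."""
        paths = self._extract_observations(request)  # illustrative placeholder
        return [str(p) for p in paths]

    def _extract_observations(self, request: Dict[str, Any]) -> List[str]:
        raise NotImplementedError("placeholder for the real extraction logic")

    # The default self.retrieve() is then expected to call
    # retrieve_list_of_results() and pass the paths to self.make_download_object(),
    # which must return an open binary file for the retrieve-api.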
(Markel, this is lower priority than the task that Paul has given you, we can discuss in the next sprint meeting)
It should be easy to do, we only need to be careful with the error handling.
| gharchive/issue | 2024-09-30T08:58:31 | 2025-04-01T06:38:29.672875 | {
"authors": [
"EddyCMWF",
"garciampred"
],
"repo": "ecmwf-projects/cads-adaptors",
"url": "https://github.com/ecmwf-projects/cads-adaptors/issues/216",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2138316787 | Support for ecflow-generated variables
Is your feature request related to a problem? Please describe.
We can't use ecflow-generated variables in scripts as they are not supported by pyflow, which means they are not detected and exported at the beginning of the script. For instance, if I create a RepeatDate("YMD",...), ecflow will generate the following variables:
YMD
YMD_DD
YMD_DOW
YMD_JULIAN
YMD_MM
YMD_YYYY
But we can't use those because they are not added to the exportables list in pyflow (see https://github.com/ecmwf/pyflow/blob/master/pyflow/attributes.py#L259 and https://github.com/ecmwf/pyflow/blob/master/pyflow/nodes.py#L401). The RepeatDate class is only linked to one exported variable, following the name provided in the RepeatDate class.
The FAMILY variable, attached to a Family node, is another example, but there are many others. We should list them.
Describe the solution you'd like
We could maybe add a list of exportables attached to an Exportable class?
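As a rough sketch only — the real pyflow class hierarchy and constructor signatures differ, and the generated-name suffixes are taken from the list earlier in this issue — the idea could look like this:

# Hypothetical sketch of attaching the ecflow-generated variable names to the
# attribute that produces them, so the script preamble can export all of them.
class RepeatDate:  # the real pyflow RepeatDate lives in pyflow/attributes.py
    # suffixes ecflow derives from a RepeatDate, per the generated-variables docs
    GENERATED_SUFFIXES = ("", "_YYYY", "_MM", "_DD", "_DOW", "_JULIAN")

    def __init__(self, name, start, end, step=1):
        self.name = name
        self.start, self.end, self.step = start, end, step

    def exportables(self):
        # every variable name a generated script should export for this attribute
        return [f"{self.name}{suffix}" for suffix in self.GENERATED_SUFFIXES]

# RepeatDate("YMD", 20240101, 20241231).exportables()
# -> ['YMD', 'YMD_YYYY', 'YMD_MM', 'YMD_DD', 'YMD_DOW', 'YMD_JULIAN']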
Describe alternatives you've considered
No response
Additional context
No response
Organisation
No response
List of ECFLOW generated variables (missing repeat and maybe other attributes): https://ecflow.readthedocs.io/en/latest/ug/user_manual/ecflow_variables/generated_variables.html
| gharchive/issue | 2024-02-16T10:37:13 | 2025-04-01T06:38:29.679783 | {
"authors": [
"corentincarton"
],
"repo": "ecmwf/pyflow",
"url": "https://github.com/ecmwf/pyflow/issues/39",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
How do I add a click event to the chart, so that the result can be fed back to the page and used to reassign values there?
Things to do before asking
Please make sure you have done the following before asking; change [] to [x] for the items you have completed:
[x] I have read through the README
[x] I have read the FAQ
Information to provide
Change [] to [x] for the items that apply, and fill in the required information:
Brief description of the problem:
After clicking the chart, I want to reassign a value on the page that is independent of the chart
Expected result:
???
(Provide an image of the expected result if needed)
Actual result:
???
(Provide a screenshot if needed)
Reproduction environment:
[] The problem occurs in the WeChat developer tools
[] The problem occurs on a real device
Try triggering the method outside the chart from within the tooltip formatter callback.
Add the following in the chart initialization (init):
Chart.on('click', function (params) {
  // your own logic here; params describes the clicked element
  // `that` is the Page instance captured before init (e.g. const that = this)
  that.setData({
    // update the page data outside the chart based on params
  });
});
No effect...
| gharchive/issue | 2019-02-22T02:19:04 | 2025-04-01T06:38:29.690519 | {
"authors": [
"Lamenda",
"SingleShadow",
"curiousbabyMz",
"ehcsa"
],
"repo": "ecomfe/echarts-for-weixin",
"url": "https://github.com/ecomfe/echarts-for-weixin/issues/457",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
See the chart: the line chart seems inaccurate when there are two sets of data
One-line summary [问题简述]
option = {
title: {
text: '折线图堆叠'
},
tooltip: {
trigger: 'axis'
},
legend: {
data:['邮件营销','联盟广告','视频广告','直接访问','搜索引擎']
},
grid: {
left: '3%',
right: '4%',
bottom: '3%',
containLabel: true
},
toolbox: {
feature: {
saveAsImage: {}
}
},
xAxis: {
type: 'category',
boundaryGap: false,
data: ['周一','周二','周三','周四','周五','周六','周日']
},
yAxis: {
type: 'value'
},
series: [
{
name:'邮件营销',
type:'line',
stack: '总量',
data:[120, 132, 101, 134, 90, 230, 210]
},
{
name:'联盟广告',
type:'line',
stack: '总量',
data:[220, 182, 191, 234, 290, 330, 310]
},
{
name:'视频广告',
type:'line',
stack: '总量',
data:[150, 232, 201, 154, 190, 330, 410]
},
{
name:'直接访问',
type:'line',
stack: '总量',
data:[320, 332, 301, 334, 390, 330, 320]
},
{
name:'搜索引擎',
type:'line',
stack: '总量',
data:[820, 932, 901, 934, 1290, 1330, 1320]
}
]
};
With the data above, the height of the line corresponding to '搜索引擎' (search engine) is wrong! This is the example from the official website.
Version & Environment [版本及环境]
ECharts version [ECharts 版本]:
Browser version [浏览器类型和版本]:
OS Version [操作系统类型和版本]:
Expected behaviour [期望结果]
ECharts option [ECharts配置项]
option = {
}
Other comments [其他信息]
Maybe I don't quite understand stack: '总量'; if I remove it, the chart looks normal again.
| gharchive/issue | 2017-03-12T06:09:46 | 2025-04-01T06:38:29.699828 | {
"authors": [
"flycocke"
],
"repo": "ecomfe/echarts",
"url": "https://github.com/ecomfe/echarts/issues/5246",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1294354880 | fix(Discount): remove discount in Subscription GalaxPay
Related to issue #28
https://apx-mods.e-com.plus/api/v1/create_transaction/schema.json?store_id=100
The params.items here really doesn't have the flags property, and amount.balance doesn't exist either. I believe those are the only pending items.
| gharchive/pull-request | 2022-07-05T14:08:02 | 2025-04-01T06:38:29.701584 | {
"authors": [
"wisley7l"
],
"repo": "ecomplus/app-galaxpay",
"url": "https://github.com/ecomplus/app-galaxpay/pull/31",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1581147805 | Update helpers.html
typo at makeEndpoint
update signle to single
Thank you, but sorry, gh-pages is not where you fix issues. It's in /website, in the .md files.
Sorry, I tried to find it and didn't know how I ended up on gh-page
| gharchive/pull-request | 2023-02-12T06:55:16 | 2025-04-01T06:38:29.761712 | {
"authors": [
"ecyrbe",
"pkrinesh"
],
"repo": "ecyrbe/zodios",
"url": "https://github.com/ecyrbe/zodios/pull/331",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1909344806 | Require sb3 version 2 or newer
Our environment will not work with older sb3 versions, due to using gymnasium (where older sb3 versions used gym).
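For illustration, the pin itself could look roughly like this; the file layout and metadata below are placeholders, not the actual godot_rl_agents packaging:

# Hypothetical setup.py fragment: stable-baselines3 2.x is the first release
# line built on gymnasium rather than gym, so require it explicitly.
from setuptools import setup

setup(
    name="godot_rl",  # placeholder
    install_requires=[
        "stable-baselines3>=2.0.0",
        "gymnasium",
    ],
)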
Edit: This may cause an issue with rllib, I will check the test results and see if it can be addressed if it does.
It seems that an older Ray is installed on the tests that pass, so by specifying the gymnasium version we could be enforcing the older Ray (which works with our config) to be installed. On the newer version of Ray, I also had some issues locally unless I tried a different path format. Even outside of the path issue, I also had some issues with one of my environments on both Ray versions, something about the format of observations, but I didn't test it in depth, so I don't know whether the issue was related to my Godot environment (which worked with SB3, which I mostly use for now) or something else.
I'll add the gymnasium version as a quick temporary fix; if it works, we can merge the PR for now since it's focused on ensuring the correct SB3 version, and in the future we can test our Ray configuration and see if something needs to be updated.
With this approach, newer Ray 2.7 seems to work, although I didn't perform much testing to see if everything works with this as before. As a quick fix I've changed the folder path to an absolute path as that seems to fix the error. An alternative could be to use older Ray or find a different fix for the issue.
It should be noted that one of my environments didn't work with rllib (some observation type error), but I think that holds for both Ray versions, and may need more troubleshooting at some future point as I'm not sure about the cause yet (maybe something with the env itself even though it works with SB3). Jumperhard worked on my PC as well, but there might be some difference in observations from my env (which has some floats and a raycast obs array).
For future reference and in case similar errors appear in the future, the error was:
ValueError: The two structures don't have the same nested structure.
First structure: type=dict str={'obs': [0.0028, 0.000441726500866935, 0.00281361560337245, 0.00664886436425149, 0.260605245828629, 0.39156648516655, -0.00112051225733, -1.99999463558197, 0.781054401397705, 0.746744704246521, 0.712950587272644, 0.76171236038208, 0.769729685783386, 0.743742609024048, 0.659272933006287, 0.363503074645996, 0, 0.292076635360718, 0.415652275085449, 0.244122076034546, 0, 0.216817808151245, 0.643476295471191, 0.747096061706543, 0.781146144866943]}
Second structure: type=OrderedDict str=OrderedDict([('obs', array([-0.19811498, 0.8486953 , 0.7596289 , 0.2623617 , -0.08332985,
0.21301392, 0.7458078 , -0.4820755 , -0.88564146, -0.6634987 ,
0.82966214, 0.6683578 , -0.18554112, -0.7013975 , 0.8797588 ,
-0.24309783, 0.77364075, 0.4879668 , -0.982385 , -0.4481921 ,
0.8362797 , -0.21034704, 0.9531232 , -0.33154443, 0.886062 ],
dtype=float32))])
More specifically: Substructure "type=list str=[0.0028, 0.000441726500866935, 0.00281361560337245, 0.00664886436425149, 0.260605245828629, 0.39156648516655, -0.00112051225733, -1.99999463558197, 0.781054401397705, 0.746744704246521, 0.712950587272644, 0.76171236038208, 0.769729685783386, 0.743742609024048, 0.659272933006287, 0.363503074645996, 0, 0.292076635360718, 0.415652275085449, 0.244122076034546, 0, 0.216817808151245, 0.643476295471191, 0.747096061706543, 0.781146144866943]" is a sequence, while substructure "type=ndarray str=[-0.19811498 0.8486953 0.7596289 0.2623617 -0.08332985 0.21301392
0.7458078 -0.4820755 -0.88564146 -0.6634987 0.82966214 0.6683578
-0.18554112 -0.7013975 0.8797588 -0.24309783 0.77364075 0.4879668
-0.982385 -0.4481921 0.8362797 -0.21034704 0.9531232 -0.33154443
0.886062 ]" is not
Entire first structure:
{'obs': [., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., ., .]}
Entire second structure:
OrderedDict([('obs', .)])
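For reference, the mismatch in that trace is a plain Python list on one side and a float32 ndarray on the other; a hedged sketch of coercing observations before they are returned (the function and dict shape here are illustrative, not the actual wrapper code):

# Hypothetical sketch: make sure every observation entry is a float32 numpy
# array matching the declared Box space, instead of a plain Python list.
import numpy as np


def coerce_obs(obs_dict, dtype=np.float32):
    # Convert {'obs': [floats, ...]} into {'obs': ndarray} before returning it.
    return {key: np.asarray(value, dtype=dtype) for key, value in obs_dict.items()}

# coerce_obs({"obs": [0.0028, 0.00044, 0.0028]})
# -> {'obs': array([0.0028 , 0.00044, 0.0028 ], dtype=float32)}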
| gharchive/pull-request | 2023-09-22T18:23:50 | 2025-04-01T06:38:29.765379 | {
"authors": [
"Ivan-267"
],
"repo": "edbeeching/godot_rl_agents",
"url": "https://github.com/edbeeching/godot_rl_agents/pull/148",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
148631228 | [RFR] Fixed an issue with non-button toggles
Fix #23.
Yes.
Looks great
| gharchive/pull-request | 2016-04-15T11:08:32 | 2025-04-01T06:38:29.830623 | {
"authors": [
"HugoGiraudel",
"smartmike"
],
"repo": "edenspiekermann/a11y-toggle",
"url": "https://github.com/edenspiekermann/a11y-toggle/pull/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
999893432 | cicd: specify unittest directory
Description
cicd: specify unittest directory
Type of change
[ ] Bug fix
[ ] New feature
[ ] Document
[x] Test
[x] CI/CD
[x] Refactor
How has this been tested
[x] Unittest
[ ] Others:
Checklist:
[x] I have made corresponding changes to the documentation
[x] My changes generate no new warnings
[x] I have added tests that prove my fix is effective or that my feature works
[x] New and existing unit tests pass locally with my changes
[x] Let's make the world better✨😋🐍🌎
Codecov Report
Merging #37 (9336a49) into main (c60b059) will increase coverage by 0.00%.
The diff coverage is 100.00%.
@@ Coverage Diff @@
## main #37 +/- ##
=======================================
Coverage 95.56% 95.57%
=======================================
Files 21 21
Lines 609 610 +1
Branches 36 36
=======================================
+ Hits 582 583 +1
Misses 18 18
Partials 9 9
Impacted Files | Coverage Δ
pypj/task/githubactions.py | 100.00% <100.00%> (ø)
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c60b059...9336a49. Read the comment docs.
| gharchive/pull-request | 2021-09-18T03:38:34 | 2025-04-01T06:38:29.841222 | {
"authors": [
"codecov-commenter",
"edge-minato"
],
"repo": "edge-minato/pypj",
"url": "https://github.com/edge-minato/pypj/pull/37",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2082459432 | Restore Tenant Provisioning
Move it to the Tenants API, removing the Provisioning context.
Use AshJsonApi to expose a new JsonApi compatible API.
Edgehog.Tenants.Reconciler.Behaviour expects Edgehog.Tenants.Tenant to have a typespec @type t().
Ash.Resource has @type record :: struct().
I think we can start with @type record :: Ash.Resource.record() for Tenant and Tenant.record() in Reconciler.Behaviour
| gharchive/pull-request | 2024-01-15T17:42:24 | 2025-04-01T06:38:29.845025 | {
"authors": [
"rbino",
"szakhlypa"
],
"repo": "edgehog-device-manager/edgehog",
"url": "https://github.com/edgehog-device-manager/edgehog/pull/429",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
449498146 | Update edgex modules to v0.1.0 pre-edinburgh
Fix #1380
In preparation for cutting the edinburgh branch, we need to consume
v0.1.0 in the go.mod for all edgex modules
go-mod-core-contracts
go-mod-messaging
go-mod-registry
Signed-off-by: Trevor Conn trevor_conn@dell.com
Codecov Report
:exclamation: No coverage uploaded for pull request base (master@823421a). Click here to learn what that means.
The diff coverage is n/a.
@@ Coverage Diff @@
## master #1383 +/- ##
=========================================
Coverage ? 17.02%
=========================================
Files ? 78
Lines ? 8789
Branches ? 0
=========================================
Hits ? 1496
Misses ? 7142
Partials ? 151
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 823421a...904ce62. Read the comment docs.
| gharchive/pull-request | 2019-05-28T22:04:17 | 2025-04-01T06:38:29.864386 | {
"authors": [
"codecov-io",
"tsconn23"
],
"repo": "edgexfoundry/edgex-go",
"url": "https://github.com/edgexfoundry/edgex-go/pull/1383",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
886159117 | feat(notifications): Implement GET /transmission/id/{id} V2 API
Close: #3452
Signed-off-by: weichou weichou1229@gmail.com
PR Checklist
Please check if your PR fulfills the following requirements:
[x] Tests for the changes have been added (for bug fixes / features)
[ ] Docs have been added / updated (for bug fixes / features)
If your build fails due to your commit message not passing the build checks, please review the guidelines here: https://github.com/edgexfoundry/edgex-go/blob/master/.github/Contributing.md.
What is the current behavior?
Issue Number: #3452
What is the new behavior?
Implement GET /transmission/id/{id} V2 API according to the doc https://app.swaggerhub.com/apis-docs/EdgeXFoundry1/support-notifications/2.x#/default/get_transmission_id__id_
Does this PR introduce a breaking change?
[ ] Yes
[x] No
New Imports
[ ] Yes
[x] No
Specific Instructions
Are there any specific instructions or things that should be known prior to reviewing?
Other information
Rebased.
| gharchive/pull-request | 2021-05-11T06:12:28 | 2025-04-01T06:38:29.869849 | {
"authors": [
"weichou1229"
],
"repo": "edgexfoundry/edgex-go",
"url": "https://github.com/edgexfoundry/edgex-go/pull/3453",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
165922811 | Replace the use of the deprecated Dict module
The Dict module is deprecated.
Coverage remained the same at 90.135% when pulling 6f9925fd925e38535c7078934b2c3e43f3528b29 on nscyclone:fix/replace-dict into 925200ded23aa0009919a3e48a0dae1952398d63 on edgurgel:master.
thank you @nscyclone
| gharchive/pull-request | 2016-07-16T10:10:09 | 2025-04-01T06:38:29.874023 | {
"authors": [
"coveralls",
"duksis",
"nscyclone"
],
"repo": "edgurgel/tentacat",
"url": "https://github.com/edgurgel/tentacat/pull/91",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |