id (string, 4–10 chars) | text (string, 4–2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 – 2025-01-01 03:38:30) | added (timestamp, 2025-04-01 04:05:38 – 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
107980444
|
SCSS formatting inconsistently follows coding guidelines
I've noticed that in some spots, spaces aren't used before { or after : characters despite this being part of the CSS coding guidelines. I don't think it's that big of a deal, but it's confusing as a contributor as to whether I should conform my code to the rest of the document or to follow the guidelines. An example of this is #1725, where the stylesheet doesn't include spaces and if I followed the guidelines it would make the code inconsistent, so in this case I chose to actually not follow the guidelines.
Thanks! I think it would be nice to have automated checks like we have for Python/JS.
Happy to set this up on Drone/Travis if someone cleans up the code (and sets up a style checker).
Fixed in #1737
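The automated check discussed above could be approximated with a small script that flags missing spaces before `{` and after `:`. This is a hypothetical sketch only (the naive patterns would misfire on pseudo-classes and URLs); a real setup would use a dedicated style checker such as stylelint.

```python
import re

# Naive SCSS spacing check: flag a missing space before "{" or after ":".
# Sketch only -- these patterns would misfire on pseudo-classes (":hover")
# and URLs, so a real project would use a dedicated linter instead.
def check_scss_line(line: str) -> list[str]:
    problems = []
    if re.search(r'\S\{', line):
        problems.append('missing space before "{"')
    if re.search(r':\S', line):
        problems.append('missing space after ":"')
    return problems
```

For example, `check_scss_line('.foo{color:red;}')` reports both problems, while a line that follows the guidelines reports none.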
|
gharchive/issue
| 2015-09-23T18:37:09 |
2025-04-01T04:36:06.694910
|
{
"authors": [
"alexgleason",
"gasman",
"kaedroho"
],
"repo": "torchbox/wagtail",
"url": "https://github.com/torchbox/wagtail/issues/1726",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2456605
|
Guard against referencing Infinispan-specific objects outside of TorqueBox
Be explicit about the fact that a dummy object is returned in that case.
RE: TORQUE-635
Updated our rake integration test and merged your fix in https://github.com/torquebox/torquebox/compare/67c73bc...f071d01 - thanks!
|
gharchive/issue
| 2011-12-06T00:03:13 |
2025-04-01T04:36:06.721906
|
{
"authors": [
"bbrowning",
"johnthethird"
],
"repo": "torquebox/torquebox",
"url": "https://github.com/torquebox/torquebox/issues/57",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1806402267
|
WIP: Rework emb_display
Work in progress rewrite of emb_display
This is generally ready for review now, however I haven't yet updated the D1 hardware implementations.
|
gharchive/pull-request
| 2023-07-16T02:25:17 |
2025-04-01T04:36:06.740774
|
{
"authors": [
"jamesmunns"
],
"repo": "tosc-rs/mnemos",
"url": "https://github.com/tosc-rs/mnemos/pull/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
161345251
|
UIMainStoryboardFile in Info.plist conflicts with Kitchen
Problem
If Info.plist's UIMainStoryboardFile is set, the app ends up with 2 UIWindows in parallel.
Whichever one is the keyWindow gets focused.
Currently Kitchen holds its own UIWindow in its private sharedKitchen instance, and uses it to create a TVApplicationController instance.
See: https://github.com/toshi0383/TVMLKitchen/blob/swift2.2/Sources/Kitchen.swift#L264
Looks like this UIWindow conflicts with UIMainStoryboardFile's UIWindow.
Expected
The app has one UIWindow and Kitchen shares that instance.
That said, views appear in the same view hierarchy, not in parallel in different windows.
Possible solutions (Investigation Needed)
Detect other UIWindow existence in prepare phase and yell out that something is wrong.
Add capability to pass a specific UIWindow instance to Kitchen, so Kitchen can use it to instantiate a TVApplicationController in prepare phase. (Don't know if it works)
Make Kitchen's UIWindow always on top. (Maybe configurable via windowLevel property?)
Current behavior can be observed in this branch's SampleRecipe app.
But I don't know what the trigger should be for going back to the mainWindow scene yet.
Looks like UINavigationControllerDelegate can answer this question.
```swift
// AppDelegate.swift
// push empty viewcontroller at index 0 after `prepare`.
_ = Kitchen.prepare(cookbook)
Kitchen.navigationController.pushViewController(UIViewController(), animated: false)

func navigationController(_ navigationController: UINavigationController, willShow viewController: UIViewController, animated: Bool) {
    print(viewController)
    if viewController == Kitchen.navigationController.viewControllers[0] {
        Kitchen.window.resignKey()
        mainWindow.isHidden = false
        mainWindow.makeKey()
    }
}
```
So here is another solution.
Support multiple window! #98
Just supports Kitchen.serve(urlString:...) for now.
|
gharchive/issue
| 2016-06-21T04:04:45 |
2025-04-01T04:36:06.745728
|
{
"authors": [
"toshi0383"
],
"repo": "toshi0383/TVMLKitchen",
"url": "https://github.com/toshi0383/TVMLKitchen/issues/95",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1261176362
|
Lupickup/updates2206
Add plugin updates and various changes across the vault.
Finished off the Weekly template for now; Monthly expanded a lot.
#3
|
gharchive/pull-request
| 2022-06-05T22:26:24 |
2025-04-01T04:36:06.749749
|
{
"authors": [
"tot0"
],
"repo": "tot0/ObsidianPPV",
"url": "https://github.com/tot0/ObsidianPPV/pull/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1924251420
|
🛑 MSP-HIT is down
In fd574f4, MSP-HIT (https://hit.hanati.co.kr/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MSP-HIT is back up in 2ed16b7 after 13 minutes.
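Reports like the one above are produced by an automated probe that records the HTTP code and response time for each monitored URL. A minimal version of such a probe might look like the following (an illustrative sketch, not Upptime's actual implementation):

```python
import time
import urllib.error
import urllib.request

def is_up(code: int) -> bool:
    # 2xx/3xx responses count as up; 0 means the connection itself failed.
    return 200 <= code < 400

def probe(url: str, timeout: float = 10.0) -> dict:
    # Fetch the URL once and record HTTP status and response time.
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            code = resp.status
    except urllib.error.HTTPError as exc:
        code = exc.code   # error responses (e.g. 403) still carry a code
    except (urllib.error.URLError, OSError):
        code = 0          # connection failure, reported as "HTTP code: 0"
    elapsed_ms = int((time.monotonic() - start) * 1000)
    return {"code": code, "response_time_ms": elapsed_ms, "up": is_up(code)}
```

A real monitor would add retries, scheduling, and incident history on top of this.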
|
gharchive/issue
| 2023-10-03T14:16:33 |
2025-04-01T04:36:06.766345
|
{
"authors": [
"touguy"
],
"repo": "touguy/uptime",
"url": "https://github.com/touguy/uptime/issues/297",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1908531707
|
🛑 Tracking - Europe 2 is down
In 764a4bd, Tracking - Europe 2 (https://tkg2.reinerouge.fr) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Tracking - Europe 2 is back up in 6b980fe after 6 minutes.
|
gharchive/issue
| 2023-09-22T09:24:59 |
2025-04-01T04:36:06.768825
|
{
"authors": [
"encreinformatique"
],
"repo": "tousleshoraires/uptime-rr",
"url": "https://github.com/tousleshoraires/uptime-rr/issues/84",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1266983300
|
Add init.py for execution engines
Signed-off-by: Kaiyuan Hu kaiyuan.hu@zilliz.com
Codecov Report
Merging #1353 (3808c9b) into main (b969113) will not change coverage.
The diff coverage is n/a.
```
@@           Coverage Diff           @@
##             main    #1353   +/-   ##
=======================================
  Coverage   68.23%   68.23%
=======================================
  Files         288      288
  Lines       15085    15085
  Branches     2427     2427
=======================================
  Hits        10293    10293
  Misses       4154     4154
  Partials      638      638
```
Continue to review full report at Codecov.
Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b969113...3808c9b. Read the comment docs.
/lgtm
/approve
|
gharchive/pull-request
| 2022-06-10T03:55:50 |
2025-04-01T04:36:06.774345
|
{
"authors": [
"Chiiizzzy",
"codecov-commenter",
"reiase"
],
"repo": "towhee-io/towhee",
"url": "https://github.com/towhee-io/towhee/pull/1353",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1696909864
|
Error: Compiling RuleSet failed: Unexpected property test in condition
This error was caused by vue-loader and css-loader.
vue-loader v15 declares a dependency like:
css-loader: "*"
and under pnpm install, sometimes css-loader 1.0.1 gets installed (perhaps influenced by another package such as cache-loader?), but the correct version should be css-loader 6.x.
vue-loader 16+ is for Vue 3.x; for Vue 2.x you should use vue-loader 15.x. Because vue-loader 15.x may not be compatible with the latest webpack, this error happens.
In a monorepo with pnpm, set the following in .npmrc:
```
resolve-peers-from-workspace-root=false
dedupe-peer-dependents=false
```
|
gharchive/issue
| 2023-05-05T02:18:40 |
2025-04-01T04:36:06.776756
|
{
"authors": [
"towry"
],
"repo": "towry/n",
"url": "https://github.com/towry/n/issues/173",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
271209331
|
What's the intended audience?
The introduction (README.md) describes the goals of this lecture material, but without stating who the intended audience is. The overall impression it leaves is "this is a good starting point for anyone doing scientific computing, except for those who need high-performance computing". However, it is probably unrealistic for any concrete down-to-earth material to address the needs of such a wide audience. Notebooks, for example, are great for some scientific applications but a bad match for others. More generally, any tool might be inappropriate simply because many people are constrained in their software choice by the habits of their field. Readers should know from the start that the material might not be right for them, and that it's not their fault (or stupidity) if things don't work out for them.
Hi @khinsen -- these are good points.
I think the intended audience is "anyone with some experience in using a programming language wanting to get more involved in computing".
Notebooks are just the support for the material, and not the only way to do computing, of course. But I suspect that they will be sufficient for 90% of users.
If you have some suggestions about how to reword, do you want to open a pull request?
Closed by #16
|
gharchive/issue
| 2017-11-04T17:30:27 |
2025-04-01T04:36:06.811882
|
{
"authors": [
"khinsen",
"tpoisot"
],
"repo": "tpoisot/ScientificComputingForTheRestOfUs",
"url": "https://github.com/tpoisot/ScientificComputingForTheRestOfUs/issues/15",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
551799703
|
Refactor out a generic OpenTracing span/tracer
This should let us more easily add OpenTracing backends other than Jaeger.
For example, I have it working with https://github.com/opentracing-contrib/java-xray-tracer locally, but that project is apparently having some problems with publishing artifacts. Once there's a reliable artifact available I'll PR the XRay backend too.
Hey @bpholt, do you want to continue with this or can I pick it up from what you've made?
@kubukoz please go ahead! I think the only real changes I made when refactoring from the original Jaeger implementation were called out in my comments above, so most of this is really Rob's work anyway 🙂
I can take this PR if you'd like to fix the conflicts but it wasn't immediately trivial. Sorry for sitting on this for so long, trying to get caught up. Please open a new PR if you'd like to continue.
|
gharchive/pull-request
| 2020-01-18T17:38:33 |
2025-04-01T04:36:06.814384
|
{
"authors": [
"bpholt",
"kubukoz",
"tpolecat"
],
"repo": "tpolecat/natchez",
"url": "https://github.com/tpolecat/natchez/pull/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2457101786
|
fix wgpu compiler visibility
```rust
impl Display for Visibility {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Visibility::Read => f.write_str("read_write"),
            Visibility::ReadWrite => f.write_str("read_write"),
        }
    }
}
```
should be
```rust
impl Display for Visibility {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            Visibility::Read => f.write_str("read"),
            Visibility::ReadWrite => f.write_str("read_write"),
        }
    }
}
```
but burn-wgpu test failed...
This is unfortunately quite hard to fix :/ See the discussion previously on the burn repo: tracel-ai/burn#1996
It will require some new approaches to memory management that prevents these conflicts.
I do think this might be important as it avoids some barriers wgpu otherwise inserts, and issues with workgroupBarrier .
Thanks for your reply! So, is the current code intentional? Sorry, I don't know much about wgpu and cubecl... I just encountered the following issue when using burn in wasm, so I want to know if it's possible to change the info variable to read-only.
@wcshds yes, the current code is intentional. We still pass the visibility to the compiler, but for now, given the restrictions of WGPU, we can't heavily reuse buffers with different visibility, even if it's for different slices of the buffer.
The issue I encountered in wasm can be resolved by the following code:
```rust
fn format_binding(
    f: &mut core::fmt::Formatter<'_>,
    name: &str,
    binding: &Binding,
    num_entry: usize,
) -> core::fmt::Result {
    let ty = match binding.size {
        Some(size) => format!("array<{}, {}>", binding.item, size),
        None => format!("array<{}>", binding.item),
    };
    #[cfg(all(not(target_family = "wasm"), feature = "std"))]
    let visibility = binding.visibility.to_string();
    #[cfg(any(target_family = "wasm", not(feature = "std")))]
    let visibility = if name == "info" {
        "read".to_string()
    } else {
        binding.visibility.to_string()
    };
    f.write_fmt(format_args!(
        "@group(0)
@binding({})
var<{}, {}> {}: {};
\n",
        num_entry, binding.location, visibility, name, ty
    ))?;
    Ok(())
}
```
The workaround is really hacky though.
We have taken another direction: we fixed the problem on wasm by using the simpler memory management.
|
gharchive/pull-request
| 2024-08-09T04:56:41 |
2025-04-01T04:36:06.840312
|
{
"authors": [
"nathanielsimard",
"wcshds"
],
"repo": "tracel-ai/cubecl",
"url": "https://github.com/tracel-ai/cubecl/pull/54",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
309298371
|
Adds an Extension that will add a custom Default value when copying
Allows for a default value to be specified for fields
also enables the Language Title to be passed in
This is getting a bit closer to what I had suggested over at https://github.com/tractorcow/silverstripe-fluent/pull/382#issuecomment-377102653, but I will give the same answer that I'm not convinced this should be a core feature in its current state.
I will however, help you if you get stuck with getting this implemented in your project.
Thanks for your work and taking the time to contribute. It's appreciated even if it didn't make it in. :)
|
gharchive/pull-request
| 2018-03-28T09:58:28 |
2025-04-01T04:36:06.864262
|
{
"authors": [
"oilee80",
"tractorcow"
],
"repo": "tractorcow/silverstripe-fluent",
"url": "https://github.com/tractorcow/silverstripe-fluent/pull/383",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2128655605
|
Add EVB Token
Add Everburn Token
hello, please modify this PR to be logo upload ONLY. Your token does not meet the volume or liquidity requirements to be considered for token listing at this time
Will close this PR for now, please reopen when ready
|
gharchive/pull-request
| 2024-02-10T18:47:26 |
2025-04-01T04:36:06.865317
|
{
"authors": [
"Unsehrtain",
"everburn-dev"
],
"repo": "traderjoe-xyz/joe-tokenlists",
"url": "https://github.com/traderjoe-xyz/joe-tokenlists/pull/1156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2275126842
|
Warning when ExternalName service loading is enabled
Welcome!
[X] Yes, I've searched similar issues on GitHub and didn't find any.
[X] Yes, I've searched similar issues on the Traefik community forum and didn't find any.
What did you do?
Start traefik v3.0.0 with the option
--providers.kubernetescrd.allowExternalNameServices=true
I expected this to not have any effects besides allowing the use of ExternalName services.
What did you see instead?
There is a log with level warning about ExternalName services being enabled.
2024-05-02T10:04:22Z WRN ExternalName service loading is enabled, please ensure that this is expected (see AllowExternalNameServices option) providerName=kubernetescrd
Since ExternalName service loading is enabled because I explicitly configured it, I expect this to not print a log message of level warning, at most the level should be INFO in my opinion.
I'd expect a warning level log message to tell me about something that is wrong, not about me having configured an option.
What version of Traefik are you using?
v3.0.0
What is your environment & configuration?
Using the helm chart in v28.0.0 with the following values:
```yaml
updateStrategy:
  rollingUpdate:
    maxUnavailable: 1
globalArguments: ~
providers:
  kubernetesCRD:
    allowExternalNameServices: true
additionalArguments:
  - --serverstransport.insecureskipverify
  - --certificatesresolvers.cloudflare.acme.dnschallenge.provider=cloudflare
  - --certificatesresolvers.cloudflare.acme.email=REDACTED_EMAIL
  - --certificatesresolvers.cloudflare.acme.storage=/data/acme.json
  - --metrics.prometheus=true
  - --providers.kubernetesingress.ingressclass=traefik
  - --providers.kubernetesingress.ingressendpoint.ip=REDACTED_IP
logs:
  general:
    level: WARN
  access:
    enabled: true
service:
  enabled: false
ports:
  web:
    hostPort: 80
    redirectTo:
      port: websecure
  websecure:
    hostPort: 443
    tls:
      enabled: true
      certResolver: cloudflare
      domains:
        - main: redacted.example.com
          sans:
            - "*.redacted.example.com"
ingressClass:
  enabled: true
env:
  - name: CLOUDFLARE_DNS_API_TOKEN
    valueFrom:
      secretKeyRef:
        name: cloudflare-token
        key: CLOUDFLARE_DNS_API_TOKEN
persistence:
  enabled: true
```
If applicable, please paste the log output in DEBUG level
No response
Hey @morremeyer,
Thanks for your suggestion, we think it makes sense.
Unfortunately, we are focused elsewhere. If you or another community member would like to build it, let us know.
Don’t forget to check out the [contributor docs](https://github.com/traefik/contributors-guide/blob/master/pr_guidelines.md) and to link the PR to this issue.
Hi @nmengin,
I can contribute on it.
Marc
|
gharchive/issue
| 2024-05-02T10:14:47 |
2025-04-01T04:36:06.871390
|
{
"authors": [
"marcmognol",
"morremeyer",
"nmengin"
],
"repo": "traefik/traefik",
"url": "https://github.com/traefik/traefik/issues/10676",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2419131079
|
🛑 VMM is down
In 36b5ae1, VMM (https://vmm.$DOMAIN1) was down:
HTTP code: 0
Response time: 0 ms
Resolved: VMM is back up in a0a4164 after 6 minutes.
|
gharchive/issue
| 2024-07-19T14:45:40 |
2025-04-01T04:36:06.886101
|
{
"authors": [
"traktuner"
],
"repo": "traktuner/status",
"url": "https://github.com/traktuner/status/issues/400",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1623807142
|
[ office-hours ] Adding a new secure migration for @rogeruiz updated CAC
Summary
Check the 20230314155457_rogeruiz_cac_update.up.sql in the STG ATO
environment and see the SQL that will execute. The SQL file was
generated using the following command:
milmove gen certs-migration --cac -n rogeruiz_cac_update --update --certid ${RSR_CERTID_STG}
Verifying this work
To verify this work, download the migration file of the same name from the STG
ATO environment and verify the SQL code is updating a particular certificate and
adding a SHA256 fingerprint digest and a subject for my CAC card, e.g.
RUIZ.ROGER.STEVE. To verify the certificate ID, download the previous SQL file
that added my CAC card, which contains the filename rogeruiz_cac_up, in the
same environment where this migration file exists.
Warnings
:warning:
Please add the JIRA issue key to the PR title (e.g. MB-123)
Generated by :no_entry_sign: dangerJS against 62f4d5e61f589e2d2bf141db208d0b11727b6e77
|
gharchive/pull-request
| 2023-03-14T16:11:01 |
2025-04-01T04:36:06.899032
|
{
"authors": [
"robot-mymove",
"rogeruiz"
],
"repo": "transcom/mymove",
"url": "https://github.com/transcom/mymove/pull/10257",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2489931886
|
INT-B-20522 SIT Validation
Agility ticket
Summary
Prime cannot set Payment Start date earlier than SIT entry date.
Testing Instructions
Create an HHG and move to prime queue
Prime Update shipment and add shipment weight
Prime Create SIT Service Item
TOO Approve SIT item
Prime Create payment request -> enter an earlier date for the SIT payment request than the SIT entry date
Get the following error: "is in a conflicting state cannot have payment date earlier than SIT Entry date"
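The rule exercised by the steps above boils down to a date comparison: reject any SIT payment request whose start date precedes the SIT entry date. A minimal sketch, in Python for illustration (the actual check lives in mymove's Go service code):

```python
from datetime import date

def validate_sit_payment_start(sit_entry: date, payment_start: date) -> None:
    # Reject a payment request that starts before the SIT entry date.
    # Illustrative sketch; the message mirrors the error quoted above.
    if payment_start < sit_entry:
        raise ValueError(
            "is in a conflicting state cannot have payment date earlier "
            "than SIT Entry date"
        )
```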
Swagger instructions for testing first AC: (Use your own move and service item IDs...)
post: /prime/v1/payment-requests
payload:
```json
{
  "isFinal": false,
  "moveTaskOrderID": "d29240f5-30fc-4f3d-8b87-1ab7ee2929eb",
  "serviceItems": [
    {
      "id": "d15b2c8d-e89b-4c23-9d84-7adb7b5a1054",
      "params": [
        { "key": "WeightBilled", "value": "1000" },
        { "key": "SITPaymentRequestStart", "value": "2029-12-31" },
        { "key": "SITPaymentRequestEnd", "value": "2029-12-31" }
      ]
    }
  ],
  "pointOfContact": "myself"
}
```
Testing instructions were stellar, easy approve
Fails
:no_entry_sign:
Danger failed to run dangerfile.ts.
Error SyntaxError
Unexpected token u in JSON at position 0
SyntaxError: Unexpected token u in JSON at position 0
at JSON.parse (<anonymous>)
at checkYarnAudit (dangerfile.ts:58:24)
at Object.<anonymous> (dangerfile.ts:94:5)
at Module._compile (node:internal/modules/cjs/loader:1364:14)
at requireFromString (/home/circleci/transcom/mymove/node_modules/require-from-string/index.js:28:4)
at /home/circleci/transcom/mymove/node_modules/danger/distribution/runner/runners/inline.js:161:68
at step (/home/circleci/transcom/mymove/node_modules/danger/distribution/runner/runners/inline.js:52:23)
at Object.next (/home/circleci/transcom/mymove/node_modules/danger/distribution/runner/runners/inline.js:33:53)
at /home/circleci/transcom/mymove/node_modules/danger/distribution/runner/runners/inline.js:27:71
at new Promise (<anonymous>)
Dangerfile
53| // Require new src/components files to include changes to storybook
54| const hasComponentChanges = danger.git.created_files.some((path) => path.includes('src/components'));
55| const hasStorybookChanges = allFiles.some(
56| (path) => path.includes('src/stories') || !!path.match(/src\/.*\.stories.jsx?/),
57| );
--------------------------^
58|
59| if (hasComponentChanges && !hasStorybookChanges) {
60| warn('This PR does not include changes to storybook, even though it affects component code.');
61| }
Generated by :no_entry_sign: dangerJS against 78ce2652a586c74b126e38112ad2a1f94d3d7d96
|
gharchive/pull-request
| 2024-08-27T17:08:05 |
2025-04-01T04:36:06.905093
|
{
"authors": [
"loganwc",
"paulstonebraker",
"robot-mymove"
],
"repo": "transcom/mymove",
"url": "https://github.com/transcom/mymove/pull/13576",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2644066908
|
SEO improvements: Add json+ld schema.org microdata for Blog Posts
Description
Initial steps on improving SEO for the blog posts. This adds https://schema.org/BlogPosting type microdata to the website which is one way to help Google/Bing to understand what the content is about.
@transitive-bullshit if you can show me how to get the publish dates and edited dates from the content that would be great 👌
We can also move this into the body if accessing these are easier there.
Notion Test Page ID
7875426197cf461698809def95960ebf
Love this; just need to clean up and remove the comments. I think my personal branch has some code to pull dates.
Thanks @onnimonni 🙏
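A BlogPosting JSON-LD payload like the one this PR adds can be generated from a post's fields. The property names below come from schema.org; the values are placeholders, not the starter kit's actual output:

```python
import json

def blog_posting_jsonld(title: str, published: str, modified: str) -> str:
    # Minimal schema.org BlogPosting payload; dates are ISO 8601 strings.
    # Property names per https://schema.org/BlogPosting; values are placeholders.
    data = {
        "@context": "https://schema.org",
        "@type": "BlogPosting",
        "headline": title,
        "datePublished": published,
        "dateModified": modified,
    }
    return json.dumps(data)
```

The resulting string is typically embedded in a `<script type="application/ld+json">` tag in the page head, where crawlers pick it up.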
|
gharchive/pull-request
| 2024-11-08T13:32:19 |
2025-04-01T04:36:06.975683
|
{
"authors": [
"onnimonni",
"transitive-bullshit"
],
"repo": "transitive-bullshit/nextjs-notion-starter-kit",
"url": "https://github.com/transitive-bullshit/nextjs-notion-starter-kit/pull/653",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2329429007
|
🛑 TechinAsia_Laravel_API is down
In 506548d, TechinAsia_Laravel_API (https://www.techinasia.com/api/2.0/companies) was down:
HTTP code: 403
Response time: 228 ms
Resolved: TechinAsia_Laravel_API is back up in c428a55 after 15 minutes.
|
gharchive/issue
| 2024-06-02T02:10:18 |
2025-04-01T04:36:07.015356
|
{
"authors": [
"traqy"
],
"repo": "traqy/upptime",
"url": "https://github.com/traqy/upptime/issues/10642",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2020834029
|
🛑 TechinAsia_WordPress_API is down
In cb9512b, TechinAsia_WordPress_API (https://www.techinasia.com/wp-json/techinasia/2.0/posts) was down:
HTTP code: 403
Response time: 172 ms
Resolved: TechinAsia_WordPress_API is back up in cb090cf after 22 minutes.
|
gharchive/issue
| 2023-12-01T12:57:43 |
2025-04-01T04:36:07.017840
|
{
"authors": [
"traqy"
],
"repo": "traqy/upptime",
"url": "https://github.com/traqy/upptime/issues/296",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2028151862
|
🛑 TechinAsia is down
In 109880c, TechinAsia (https://www.techinasia.com) was down:
HTTP code: 403
Response time: 1005 ms
Resolved: TechinAsia is back up in 0af0715 after 7 minutes.
|
gharchive/issue
| 2023-12-06T09:47:29 |
2025-04-01T04:36:07.020295
|
{
"authors": [
"traqy"
],
"repo": "traqy/upptime",
"url": "https://github.com/traqy/upptime/issues/587",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2274519191
|
🛑 TechinAsia is down
In 92f57d5, TechinAsia (https://www.techinasia.com) was down:
HTTP code: 403
Response time: 984 ms
Resolved: TechinAsia is back up in a9a23d5 after 8 minutes.
|
gharchive/issue
| 2024-05-02T03:27:13 |
2025-04-01T04:36:07.022529
|
{
"authors": [
"traqy"
],
"repo": "traqy/upptime",
"url": "https://github.com/traqy/upptime/issues/9134",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
97101676
|
C#: replace "nightly" with "weekly" for Mono version
We keep nightly working for now, but it is a misnomer. When the name switch eventually happens on the Mono repositories we can just update this in the background without impacting anyone (not that I expect this to be soon).
Corresponding docs PR: https://github.com/travis-ci/docs-travis-ci-com/pull/308
/cc @BanzaiMan @Joshua-Anderson
:+1: Sorry I haven't got to OS X support yet, I haven't had the time. I hopefully should get to it soon. :smiley:
@Joshua-Anderson no worries :smile:
|
gharchive/pull-request
| 2015-07-24T17:44:31 |
2025-04-01T04:36:07.035643
|
{
"authors": [
"Joshua-Anderson",
"akoeplinger"
],
"repo": "travis-ci/travis-build",
"url": "https://github.com/travis-ci/travis-build/pull/482",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1243875043
|
Update cats-effect to 3.3.12
Updates org.typelevel:cats-effect from 3.2.9 to 3.3.12.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Applied Scalafix Migrations
org.typelevel:{cats-effect,cats-effect-laws}:3.3.0 (created no change)
github:typelevel/cats-effect/v3_3_0?sha=series/3.x
Adjust future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.typelevel", artifactId = "cats-effect" } ]
Or, add this to slow down future updates of this dependency:
dependencyOverrides = [{
pullRequests = { frequency = "@monthly" },
dependency = { groupId = "org.typelevel", artifactId = "cats-effect" }
}]
labels: library-update, early-semver-minor, semver-spec-minor, scalafix-migrations, commit-count:1
Superseded by #396.
|
gharchive/pull-request
| 2022-05-21T05:21:41 |
2025-04-01T04:36:07.079325
|
{
"authors": [
"scala-steward"
],
"repo": "travisbrown/dhallj",
"url": "https://github.com/travisbrown/dhallj/pull/389",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
812842550
|
Update sbt-release to 1.0.14
Updates com.github.gseitz:sbt-release from 1.0.13 to 1.0.14.
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.github.gseitz", artifactId = "sbt-release" } ]
labels: sbt-plugin-update, semver-patch
Superseded by #479.
|
gharchive/pull-request
| 2021-02-21T12:19:39 |
2025-04-01T04:36:07.082409
|
{
"authors": [
"scala-steward"
],
"repo": "travisbrown/iteratee",
"url": "https://github.com/travisbrown/iteratee/pull/478",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
235632710
|
Updates to the API endpoint
The API endpoint changed some months back; this should update those endpoints to the correct ones :)
Thanks :)
|
gharchive/pull-request
| 2017-06-13T17:41:13 |
2025-04-01T04:36:07.083225
|
{
"authors": [
"StatusCakeDaniel",
"trbs"
],
"repo": "trbs/statuscake",
"url": "https://github.com/trbs/statuscake/pull/3",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1382458721
|
Meeting Agenda Action - Project Board Agenda Automation
GitHub Action workflow
[x] Brief / Description - Meeting Agenda Action - Project Board Agenda Automation
[x] Develop the Action
[x] Document the Action - https://quality-assurance-dao.gitbook.io/treasury-advisory-service/project-automation/project-board-automation/meeting-agenda
[x] Pay for the Action - Andre / Miro - $ 825
[x] Report it - 24th September 2022
Payment Link
|
gharchive/issue
| 2022-09-22T13:29:57 |
2025-04-01T04:36:07.109789
|
{
"authors": [
"Andre-Diamond",
"stephen-rowan"
],
"repo": "treasuryguild/Treasury-Advisory-Service",
"url": "https://github.com/treasuryguild/Treasury-Advisory-Service/issues/71",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
150797014
|
Tredlyfile > URL Configuration > urlRedirect has yet to be implemented
This functionality allows you to redirect a URL to the URL you are configuring, for example redirecting (HTTP 301) tredly.com to www.tredly.com. This functionality is slated to be implemented very soon. We will keep you updated on progress.
Moved to tredly-build issues.
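The redirect behaviour described above amounts to answering matching requests with an HTTP 301 and a Location header pointing at the configured URL. As a rough sketch (hypothetical helper and table, not Tredly's implementation):

```python
# Map a requested host to its canonical base URL (hypothetical table).
REDIRECTS = {"tredly.com": "https://www.tredly.com"}

def redirect_for(host: str, path: str = "/"):
    # Return (status, location) for hosts that should 301-redirect, else None.
    target = REDIRECTS.get(host)
    if target is None:
        return None
    return 301, target + path
```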
|
gharchive/issue
| 2016-04-25T09:13:35 |
2025-04-01T04:36:07.111454
|
{
"authors": [
"laurieodgers",
"nathanaherne"
],
"repo": "tredly/tredly-host",
"url": "https://github.com/tredly/tredly-host/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2231152046
|
feature: pandoc raw_tex and raw_attribute support
Did you check the tree-sitter docs?
[X] I have read all the tree-sitter docs if it relates to using the parser
Is your feature request related to a problem? Please describe.
Pandoc has a few handy extensions that allow embedding raw content in the output document. It would be nice to support these for e.g. highlighting.
Raw attributes
Inline spans and fenced code blocks with a special kind of attribute will be parsed as raw content with the designated format. ...
```{=latex}
\begin{tabular}{|l|l|}\hline
Age & Frequency \\ \hline
18--25 & 15 \\
26--35 & 33 \\
36--45 & 22 \\ \hline
\end{tabular}
```
Raw TeX
... pandoc allows raw LaTeX, TeX, and ConTeXt to be included in a document. Inline TeX commands will be preserved and passed unchanged to the LaTeX and ConTeXt writers. ...
```latex
\cite{jones.1967}

\begin{tabular}{|l|l|}\hline
Age & Frequency \\ \hline
18--25 & 15 \\
26--35 & 33 \\
36--45 & 22 \\ \hline
\end{tabular}
```
Describe the solution you'd like
Language injection for fenced blocks with raw attributes, just like with regular fenced code blocks
TeX injection for LaTeX environments (\begin{}-\end{} blocks); I'm not sure if supporting other *TeX commands would be feasible.
Describe alternatives you've considered
No response
Additional context
https://pandoc.org/MANUAL.html#extension-raw_attribute
https://pandoc.org/MANUAL.html#extension-raw_tex
Feature 1 should be quite doable, it would just imply improving the rules for parsing the language here:
https://github.com/tree-sitter-grammars/tree-sitter-markdown/blob/7fe453beacecf02c86f7736439f238f5bb8b5c9b/tree-sitter-markdown/grammar.js#L186-L196
I personally will probably have no capacity to work on this though.
Feature 2 would be quite hard, since it is both hard to detect (one would need to know whether there is an end block before it is clear that something is LaTeX) and collides with other grammar rules.
Feature 1 already works, though, by virtue of injections? You just have to use proper language annotations:
```latex
\begin{tabular}{|l|l|}\hline
Age & Frequency \\ \hline
18--25 & 15 \\
26--35 & 33 \\
36--45 & 22 \\ \hline
\end{tabular}
```
your example would be rendered as a (visible) code block, while the one with {=latex} would be inserted into the actual LaTeX output as raw LaTeX, or rendered when the output format is e.g. PDF, in this case as a table
That's completely out of scope for a Markdown tree-sitter parser, sorry.
|
gharchive/issue
| 2024-04-08T12:56:30 |
2025-04-01T04:36:07.118694
|
{
"authors": [
"MDeiml",
"anuramat",
"clason"
],
"repo": "tree-sitter-grammars/tree-sitter-markdown",
"url": "https://github.com/tree-sitter-grammars/tree-sitter-markdown/issues/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2018827050
|
[Bug]: tls: server sent certificate containing RSA key larger than 8192 bits
What happened?
Steps to Reproduce:
I've updated lakefs from v0.109.0 to v1.2.0 and lakectl from v0.104.0 to v1.2.0
When executing any lakectl command I'm getting an error message about an RSA key that exceeds a key length of 8192 bits, e.g.
Warning: failed sending statistics: Post "https://.../api/v1/statistics": tls: server sent certificate containing RSA key larger than 8192 bits Get "https://.../api/v1/auth/policies/...": tls: server sent certificate containing RSA key larger than 8192 bits Error executing command.
Expected behavior
I don't understand the limitation of the key length in the new version of lakectl. The CA certificates in the chain are longer than 4096 bits, but I can't change them. Switching back to lakectl v0.104.0 "solves" the problem.
lakeFS version
1.2.0
How lakeFS is installed
Kubernetes on-premise
Affected clients
lakectl v1.2.0
Relevant log output
Warning: failed sending statistics: Post "https://.../api/v1/statistics": tls: server sent certificate containing RSA key larger than 8192 bits
Get "https://.../api/v1/auth/policies/...": tls: server sent certificate containing RSA key larger than 8192 bits
Error executing command.
Contact details
ingo.kemmerzell@gi-de.com
The error comes from Go's TLS package. You can set an environment variable in order to override the limit. From the code doc:
In order to avoid denial of service attacks, the maximum RSA key size allowed in certificates sent by either the TLS server or client is limited to 8192 bits. This limit can be overridden by setting tlsmaxrsasize in the GODEBUG environment variable (e.g. GODEBUG=tlsmaxrsasize=4096).
@arielshaqed can we close this one, or is there anything actionable?
AFAIK this is resolved by setting GODEBUG as above.
I'm closing, so here's a description that will hopefully make this issue more searchable or understandable.
What's wrong?
A certificate with an RSA key larger than 8192 bits appears in the certificate chain. Usually this will be the certificate of the root CA. No public CAs have such large certificates, and this size is considerably larger than any NIST recommendation for key sizes. This is CVE-2023-29409: sending such a certificate to a loaded client is an easy way to cause denial of service, because the client or server needs to validate the certificate to know the identity of the server.
Recent versions of the standard Go cryptography libraries fix this security bug and refuse to validate such large certificates.
Workarounds
Change certificates
There is no publicly-known reason to have an RSA key with 8192 bits, and certainly not more than that. A summary of discussion with good references is on the Wikipedia page.
Allow Go crypto to validate large certificates
Set a higher value in the GODEBUG environment variable, for instance
GODEBUG=tlsmaxrsasize=16384
See Conn.Handshake documentation.
Note that this allows some DoS attacks during connection setup.
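The rule itself is simple to reason about; here is a small Python sketch of the check Go applies (the helper name and values below are ours, for illustration only, not lakeFS or Go API):

```python
# Illustrative sketch of the limit behind CVE-2023-29409: Go's TLS stack
# refuses certificates whose RSA modulus exceeds 8192 bits unless the
# limit is raised via GODEBUG=tlsmaxrsasize=N.
MAX_RSA_BITS = 8192

def rsa_key_acceptable(modulus: int, limit: int = MAX_RSA_BITS) -> bool:
    # An RSA key's "size" is the bit length of its public modulus.
    return modulus.bit_length() <= limit

print(rsa_key_acceptable(1 << 4095))                # a 4096-bit key passes
print(rsa_key_acceptable(1 << 16383))               # a 16384-bit key is refused
print(rsa_key_acceptable(1 << 16383, limit=16384))  # ...unless the limit is raised
```

Raising the limit (in Go, via GODEBUG) trades away the DoS protection, which is why replacing the oversized certificate is the preferred fix.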
|
gharchive/issue
| 2023-11-30T14:46:36 |
2025-04-01T04:36:07.144272
|
{
"authors": [
"arielshaqed",
"ingoke",
"nopcoder"
],
"repo": "treeverse/lakeFS",
"url": "https://github.com/treeverse/lakeFS/issues/7091",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2093177520
|
[Bug]: Commits no longer work via the Python client
What happened?
What actually happened, including error codes if applies.
Steps to Reproduce:
Upload a file to a lakeFS repo/branch
Commit via something like:
```python
from lakefs_client import ApiClient, Configuration
from lakefs_client.api.branches_api import BranchesApi
from lakefs_client.api.commits_api import CommitsApi
# CommitCreation was missing from the original snippet's imports
from lakefs_client.model.commit_creation import CommitCreation

configuration = Configuration()
configuration.username = ...
configuration.password = ...
configuration.host = ...

client = ApiClient(configuration)
api_instance = CommitsApi(client)
commit_creation = CommitCreation(message="my commit message")
api_instance.commit(repo, to_branch, commit_creation)  # repo/to_branch defined elsewhere
```
Hopefully the error message is self-explanatory. It's not obvious to me how the CommitsAPI has changed.
Expected behavior
Commit to be successful as in Python lakeFS client versions <=1.8
lakeFS version
0.107.1
How lakeFS is installed
Local
Affected clients
Python lakeFS client 1.9.0
Relevant log output
TypeError: Commit._from_openapi_data() missing 2 required positional arguments: 'generation' and 'version'
...
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/api/commits_api.py", line 248, in commit
return self.commit_endpoint.call_with_http_info(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/api_client.py", line 835, in call_with_http_info
return self.api_client.call_api(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/api_client.py", line 409, in call_api
return self.__call_api(resource_path, method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/api_client.py", line 224, in __call_api
return_data = self.deserialize(
^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/api_client.py", line 325, in deserialize
deserialized_data = validate_and_convert_types(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 1570, in validate_and_convert_types
converted_instance = attempt_convert_item(
^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 1454, in attempt_convert_item
return deserialize_model(input_value, valid_class,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 1374, in deserialize_model
return model_class._new_from_openapi_data(**kw_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 46, in wrapped_init
return fn(_self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 370, in _new_from_openapi_data
return cls._from_openapi_data(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<REDACTED>/lib/python3.11/site-packages/lakefs_client/model_utils.py", line 46, in wrapped_init
return fn(_self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
### Contact details
_No response_
Hi @dsgibbons, and thanks for catching the issue! This is a backwards compatibility failure and it fails our commitment for minor version bumps. I apologize for this.
The only workaround for now is to use an older client SDK. We shall release a patch today (2024-01-22) to fix this.
The patch works. Thank you for the fast turnaround.
|
gharchive/issue
| 2024-01-22T06:23:47 |
2025-04-01T04:36:07.149523
|
{
"authors": [
"arielshaqed",
"dsgibbons"
],
"repo": "treeverse/lakeFS",
"url": "https://github.com/treeverse/lakeFS/issues/7322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2519569884
|
Organize dlc/updates in separate folders with create_folder_per_game as false
Hi thanks for the great work here ;)
As per SteveMyers75's question on issue 55, I'd also like the ability to organize files in that way, i.e.
x:\switch (all base games in one folder, this is the only way I can get playnite to correctly scan)
x:\switch\updates (subfolder under base for all games updates)
x:\switch\DLC (subfolder under DLC for all games updates)
Perhaps having the create_folder_per_game to false and then setting paths for the DLC/updates would make it behave this way, and having create_folder_per_game to true leaves it behaving in the current way with DLC and updates under a per game folder?
The issue with per game subfolders or having all updates and DLC in one base folder causes playnite to scan everything as a game so you get multiple entries for all the updates/dlc etc, with the games in one folder then the updates etc stored away in subfolders the scanning on playnite ignores them.
Thanks :)
Nice one thanks ;)
available in the latest beta build under the Actions tab here on GitHub
Cool thanks, appears to work. However, I will wait until the issue of XCIs with updates in them being seen as updates is fixed too, as these are moved to the update folder as well at the mo 👍
i havent been able to get my hands on a multi-content file yet, so i'm yet to work on that one
no worries, it's actually fine using this new feature then just manually moving those files back, thanks :)
the multi-content problem should be fixed in the latest beta build now :)
Cool thanks will check it out later today
cheers, looking good mostly as per comment on the other thread :)
feature is included in the 1.9.0 release
|
gharchive/issue
| 2024-09-11T12:08:28 |
2025-04-01T04:36:07.160842
|
{
"authors": [
"benspray",
"trembon"
],
"repo": "trembon/switch-library-manager",
"url": "https://github.com/trembon/switch-library-manager/issues/64",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
59440389
|
What to do with all the console.logs
There are lots of console.log calls in the app. What do we do with them?
I sometimes even spot code whose sole purpose is eventually calling console.log, like for example this little gem:

```js
// Handle the login confirmed event raised by the authService
$scope.$on('event:auth-loginConfirmed', function() {
  console.log('handling event:auth-loginConfirmed...');
});
```
What do we do with that? I feel like it shouldn't be in the source if it doesn't have a purpose besides 'placeholder' logging.
That particular one is for demonstration purposes
And all others? There are lots of console.log's
Mostly to let the developer know what is going on in general.
I know what a console.log in general does :wink:
What I meant to say is: should it be there? We're publishing the app to the App Stores; will they allow console.log's? I'm not sure if it's the way to go, but for example when I write about a feature that we didn't use in Trendicity, I just add that to my writing like 'you can also do this code example but we haven't done that in Trendicity'.
Just checking here. This should be part of the tech editing but since editing and tech editing isn't really done so far, we should consider straightening out the code ourselves.
We could also replace the console.logs just with inline comments:

```js
console.log('This is a fancy feature that does stuff');
```

to

```js
// This is a fancy feature that does stuff
```

That way, it's definitely descriptive to any dev reading the code and doesn't look like we forgot to implement something (that's a personal opinion).
I think most dev's will be looking at the source code to see what's happening instead of checking the log when running the app.
Shouldn't be adding issues late at night... this is apparently another non-issue :wink:
We've discussed this topic on Google Hangouts and concluded that console.logs aren't that bad after all.
Unused code blocks will be removed.
|
gharchive/issue
| 2015-03-02T05:30:15 |
2025-04-01T04:36:07.171162
|
{
"authors": [
"keithdmoore",
"rvanbaalen"
],
"repo": "trendicity/trendicity",
"url": "https://github.com/trendicity/trendicity/issues/65",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
207065947
|
Enderchest per inv (or shared between inventories)
I believe it would be nice to be able to specify separate enderchests per PJI inv, so that my creative worlds don't need to block all use of the echest, but instead just have a separate echest.
It's a planned feature
Added in latest release
Is there any documentation on this feature??
|
gharchive/issue
| 2017-02-12T17:48:15 |
2025-04-01T04:36:07.175135
|
{
"authors": [
"rdster",
"trentech",
"turtledude01"
],
"repo": "trentech/ProjectInventories",
"url": "https://github.com/trentech/ProjectInventories/issues/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1002693585
|
Frontend tooling: introduce Prettier and ESLint
This PR introduces Prettier and ESLint into our development environment. This is a preparatory step towards adopting a new frontend library to architect things in a more modern way.
You can actually see the steps I took looking at the commits, but I'll summarize that here too:
I installed the new packages using NPM;
Added some sensible default configuration just to ensure everything is on track from day one. I also added the eslint React plugin since we're moving towards React;
Formatted all the frontend source code using Prettier;
Corrected all the ESLint errors, either by adjusting the configuration or by disabling noisy rules for legacy code;
Added a make step in the CI so we're covered from now on.
Hope you like it!
@arbulu89 well this can be changed in the future, the cool thing about prettier is that we have one config to rule them all :D
|
gharchive/pull-request
| 2021-09-21T15:17:14 |
2025-04-01T04:36:07.177836
|
{
"authors": [
"dottorblaster"
],
"repo": "trento-project/trento",
"url": "https://github.com/trento-project/trento/pull/259",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
453310351
|
Fix github link in cabal file
Your current haddocks have a broken link pointing here!
Thanks!
|
gharchive/pull-request
| 2019-06-07T02:13:27 |
2025-04-01T04:36:07.178700
|
{
"authors": [
"isovector",
"trevorcook"
],
"repo": "trevorcook/hkd-delta",
"url": "https://github.com/trevorcook/hkd-delta/pull/1",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1746247429
|
Inability to select, create, or apply multi tags
Describe the bug
Unable to select or add to the tag drop-down list on 6.17.0 on mobile
Steps to reproduce
Add a new row. Attempt to apply 1-n tag values. The tag remains highlighted and is not applied.
Expected behavior
Application of tag
Are you using the mobile app?
Yes
Obsidian debug info
NA
Relevant log output
No response
@Compg I have fixed this. I will push out the fix tomorrow
@Compg Fixed in 6.18.0
|
gharchive/issue
| 2023-06-07T16:08:34 |
2025-04-01T04:36:07.183515
|
{
"authors": [
"Compg",
"trey-wallis"
],
"repo": "trey-wallis/obsidian-notion-like-tables",
"url": "https://github.com/trey-wallis/obsidian-notion-like-tables/issues/540",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2425485726
|
TypeError: Cannot read properties of null (reading 'path') at deduplicateNewName
I notice sometimes that instead of renaming pasted attachments, I get a TypeError. This happens when pasting immediately after editing the note content. Not sure what is causing it.
Obsidian 1.6.7
macOS 14.5
Plugin ver 0.9.13
@trganda I am so sorry but, for me this bug is still happening sometimes! Even after updating to 0.9.14. I don't know if it's an Obsidian bug or problem of the plugin.
Hi @luckman212, looks like the this.app.vault.getAbstractFileByPath(attachPath) return a null object.
I'm not sure why this happens in your case. If you encounter this error again, please check whether the attachPath has been successfully created in your vault, and provide me with your plugin configuration file.
Ok trganda, I will try to debug a little closer and keep an eye on it. I think this is a timing issue, maybe a bug in Obsidian where the vault cache is not yet updated when the plugin tries to change the text content.
|
gharchive/issue
| 2024-07-23T15:21:04 |
2025-04-01T04:36:07.240501
|
{
"authors": [
"luckman212",
"trganda"
],
"repo": "trganda/obsidian-attachment-management",
"url": "https://github.com/trganda/obsidian-attachment-management/issues/145",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
724521633
|
Add support for output quality (q) and lossless parameters
This PR adds support for the output quality (q) and lossless parameters within the Format set:
https://docs.imgix.com/apis/rendering/format
@lrworth Thanks for the recommendations, let me know what you think 👍
I've been able to keep the public interface the same as the existing code & close to the API itself whilst internally cleaning it up.
|
gharchive/pull-request
| 2020-10-19T11:24:10 |
2025-04-01T04:36:07.245676
|
{
"authors": [
"samuelgiles"
],
"repo": "tricycle/elm-imgix",
"url": "https://github.com/tricycle/elm-imgix/pull/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2071524024
|
Enforce ErrorProne DoubleBraceInitialization
We should remove the following 2 lines from pom.xml:
https://github.com/trinodb/trino-gateway/blob/7e30bbea02f0da38f1c9366fc0bbe379084bc9df/pom.xml#L119-L120
Fixed by https://github.com/trinodb/trino-gateway/pull/162
|
gharchive/issue
| 2024-01-09T02:35:30 |
2025-04-01T04:36:08.207399
|
{
"authors": [
"ebyhr",
"willmostly"
],
"repo": "trinodb/trino-gateway",
"url": "https://github.com/trinodb/trino-gateway/issues/155",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
853185551
|
Can't run .pb file on model_analyzer
Hello, I'm new at using model_analyzer.
I got results from model_analyzer by following the quick-start example, but I was not able to get results for a .pb model specifically.
I also checked that I can execute perf_analyzer with these models. However, it is not possible to run them in model-analyzer with the same settings.
Additionally, I have a question about the results of model analyzer.
First, I ran these commands to see the quick-start results.
(1) Run the Triton Inference Server first
```
docker run --gpus=1 --rm -p8000:8000 -p8001:8001 -p8002:8002 \
  -v /home/model_analyzer/examples/quick-start:/home/model_analyzer/examples/quick-start \
  nvcr.io/nvidia/tritonserver:20.11-py3 tritonserver \
  --model-control-mode=explicit \
  --model-repository=/home/model_analyzer/examples/quick-start/
```
(2) Open the model-analyzer container on the docker
```
docker run -it --privileged --rm --gpus all \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/model_analyzer/examples/quick-start:/home/model_analyzer/examples/quick-start \
  --net=host --name model-analyzer \
  model-analyzer /bin/bash
```
(3)
```
model-analyzer -m /home/model_analyzer/examples/quick-start -n add_sub --triton-launch-mode=remote --export-path=analysis_results
```
(4) Results
```
Server Only:
Model          GPU ID  GPU Memory Usage (MB)  GPU Utilization (%)  GPU Power Usage (W)
triton-server  0       626.0                  0.0                  15.0

Models (Inference):
Model    Batch  Concurrency  Model Config Path  Instance Group  Dynamic Batcher Sizes  Satisfies Constraints  Throughput (infer/sec)  p99 Latency (ms)  RAM Usage (MB)
add_sub  1      16           add_sub            1/GPU           Disabled               Yes                    12387.4                 1.5               0.0
add_sub  1      4            add_sub            1/GPU           Disabled               Yes                    12164.8                 0.4               0.0
add_sub  1      8            add_sub            1/GPU           Disabled               Yes                    12083.8                 0.7               0.0
add_sub  1      2            add_sub            1/GPU           Disabled               Yes                    11931.6                 0.2               0.0
add_sub  1      1            add_sub            1/GPU           Disabled               Yes                    3057.6                  1.4               0.0

Models (GPU Metrics):
Model    GPU ID  Batch  Concurrency  Model Config Path  Instance Group  Dynamic Batcher Sizes  Satisfies Constraints  GPU Memory Usage (MB)  GPU Utilization (%)  GPU Power Usage (W)
add_sub  0       1      16           add_sub            1/GPU           Disabled               Yes                    624.0                  8.2                  31.8
add_sub  0       1      4            add_sub            1/GPU           Disabled               Yes                    624.0                  8.1                  31.5
add_sub  0       1      8            add_sub            1/GPU           Disabled               Yes                    624.0                  8.1                  31.7
add_sub  0       1      2            add_sub            1/GPU           Disabled               Yes                    624.0                  8.1                  31.6
add_sub  0       1      1            add_sub            1/GPU           Disabled               Yes                    624.0                  2.4                  30.9
```
What I want to know is how GPU memory usage and GPU utilization can be almost the same from concurrency 2 to concurrency 16. I think concurrency means executing many model requests in parallel. Could you give more explanation of concurrency, please?
I want to run other models on model-analyzer. Each model folder has a '1' subfolder containing the model.pb file, alongside config.pbtxt and output_labels.txt. These model folders are in the same directory as the quick-start models.
('add_sub', 'apple', 'new' location: /home/model_analyzer/examples/quick-start)
My custom model is in the 'apple' folder and the simple_identity model is in the 'new' folder.
The config file (config.pbtxt) is below:

```
name: "simple_identity"
platform: "tensorflow_savedmodel"
max_batch_size: 8
input [
  {
    name: "INPUT0"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "OUTPUT0"
    data_type: TYPE_STRING
    dims: [ -1 ]
    label_filename: "output0_labels.txt"
  }
]
```
The error message I got was:

```
root@d1:/# model-analyzer -m /home/model_analyzer/examples/quick-start -n new --triton-launch-mode=remote --export-path=analysis_results
2021-04-08 07:49:53.278 INFO[entrypoint.py:288] Triton Model Analyzer started: config={'model_repository': '/home/model_analyzer/examples/quick-start', 'model_names': [{'model_name': 'new', 'objectives': {'perf_throughput': 10}, 'parameters': {'batch_sizes': [1], 'concurrency': []}}], 'objectives': {'perf_throughput': 10}, 'constraints': {}, 'batch_sizes': [1], 'concurrency': [], 'perf_analyzer_timeout': 600, 'perf_analyzer_cpu_util': 80.0, 'run_config_search_max_concurrency': 1024, 'run_config_search_max_instance_count': 5, 'run_config_search_disable': False, 'run_config_search_max_preferred_batch_size': 16, 'export': True, 'export_path': 'analysis_results', 'summarize': True, 'filename_model_inference': 'metrics-model-inference.csv', 'filename_model_gpu': 'metrics-model-gpu.csv', 'filename_server_only': 'metrics-server-only.csv', 'max_retries': 100, 'duration_seconds': 5, 'monitoring_interval': 0.01, 'client_protocol': 'grpc', 'perf_analyzer_path': 'perf_analyzer', 'perf_measurement_window': 5000, 'perf_output': False, 'triton_launch_mode': 'remote', 'triton_docker_image': 'nvcr.io/nvidia/tritonserver:21.02-py3', 'triton_http_endpoint': 'localhost:8000', 'triton_grpc_endpoint': 'localhost:8001', 'triton_metrics_url': 'http://localhost:8002/metrics', 'triton_server_path': 'tritonserver', 'triton_output_path': None, 'triton_server_flags': {}, 'log_level': 'INFO', 'gpus': ['all'], 'output_model_repository_path': './output_model_repository', 'override_output_model_repository': False, 'config_file': None, 'inference_output_fields': ['model_name', 'batch_size', 'concurrency', 'model_config_path', 'instance_group', 'dynamic_batch_sizes', 'satisfies_constraints', 'perf_throughput', 'perf_latency', 'cpu_used_ram'], 'gpu_output_fields': ['model_name', 'gpu_id', 'batch_size', 'concurrency', 'model_config_path', 'instance_group', 'dynamic_batch_sizes', 'satisfies_constraints', 'gpu_used_memory', 'gpu_utilization', 'gpu_power_usage'], 'server_output_fields': ['model_name', 'gpu_id', 'gpu_used_memory', 'gpu_utilization', 'gpu_power_usage'], 'plots': [{'name': 'throughput_v_latency', 'title': 'Throughput vs. Latency', 'x_axis': 'perf_latency', 'y_axis': 'perf_throughput', 'monotonic': True}, {'name': 'gpu_mem_v_latency', 'title': 'GPU Memory vs. Latency', 'x_axis': 'perf_latency', 'y_axis': 'gpu_used_memory', 'monotonic': False}], 'top_n_configs': 3}
2021-04-08 07:49:53.280 INFO[entrypoint.py:79] Using remote Triton Server...
2021-04-08 07:49:53.280 WARNING[entrypoint.py:82] GPU memory metrics reported in the remote mode are not accuracte. Model Analyzer uses Triton explicit model control to load/unload models. Some frameworks do not release the GPU memory even when the memory is not being used. Consider using the "local" or "docker" mode if you want to accurately monitor the GPU memory usage for different models.
2021-04-08 07:49:53.280 WARNING[entrypoint.py:89] Config sweep parameters are ignored in the "remote" mode because Model Analyzer does not have access to the model repository of the remote Triton Server.
2021-04-08 07:49:53.337 INFO[driver.py:236] init
2021-04-08 07:49:54.404 INFO[entrypoint.py:327] Starting perf_analyzer...
2021-04-08 07:49:54.404 INFO[analyzer.py:82] Profiling server only metrics...
2021-04-08 07:49:55.431 INFO[gpu_monitor.py:73] Using GPU(s) with UUID(s) = { GPU-c8fdb676-2c11-669a-4cff-f300b28eb26a } for the analysis.
2021-04-08 07:49:56.464 INFO[run_search.py:155] Will sweep only through the concurrency values...
2021-04-08 07:49:56.464 INFO[run_search.py:262] Concurrency set to 1.
2021-04-08 07:49:56.468 INFO[client.py:82] Model new load failed: [StatusCode.INTERNAL] failed to load 'new', no version is available
2021-04-08 07:50:01.584 INFO[client.py:143] Model readiness failed for model new.
Error None
Traceback (most recent call last):
  File "/usr/local/bin/model-analyzer", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/entrypoint.py", line 328, in main
    analyzer.run()
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/analyzer.py", line 95, in run
    run_config_generator = RunConfigGenerator(
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/config/run/run_config_generator.py", line 65, in __init__
    self._generate_run_configs()
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/config/run/run_config_generator.py", line 290, in _generate_run_configs
    self._generate_run_config_for_model_sweep(
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/config/run/run_config_generator.py", line 229, in _generate_run_config_for_model_sweep
    model_config = ModelConfig.create_from_triton_api(
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/triton/model/model_config.py", line 111, in create_from_triton_api
    model_config_dict = client.get_model_config(model_name, num_retries)
  File "/usr/local/lib/python3.8/dist-packages/model_analyzer/triton/client/grpc_client.py", line 54, in get_model_config
    model_config_dict = self._client.get_model_config(model_name,
  File "/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py", line 476, in get_model_config
    raise_error_grpc(rpc_error)
  File "/usr/local/lib/python3.8/dist-packages/tritonclient/grpc/__init__.py", line 61, in raise_error_grpc
    raise get_error_grpc(rpc_error) from None
tritonclient.utils.InferenceServerException: [StatusCode.UNAVAILABLE] Request for unknown model: 'new' is not found
```
I can run perf_analyzer with the same settings, but can't run the model in model-analyzer.
@happyseone Concurrency is a parameter designed to adjust the request load on Triton Server. You can read more about it below:
https://github.com/triton-inference-server/server/blob/master/docs/perf_analyzer.md#request-concurrency
Regarding the error that you are seeing, it looks like new model is not in the model repository folder. Can you share the output of tree /home/model_analyzer/examples/quick-start? It should look like below:
/home/model_analyzer/examples/quick-start
├── add_sub
├── new
└── apple
The new model should be in the same model repository as other models.
What I want to know is that how GPU memory usage and GPU Utilization are almost same from Concurrency 2 to Concurrency 16. I think that concurrency is the execution of many models in parallel.
If you don't have dynamic batching enabled, GPU memory usage will likely not change throughout different concurrency values. The reason is there is no need for new memory allocations. But if you have dynamic batching enabled GPU memory usage will change.
Regarding the GPU utilization, if you have a single instance of your model all requests to that model will be serialized and because of this, the GPU utilization will be the same.
I also noticed that you are using remote mode. Did you know about the docker and local mode? In those modes, model analyzer will change some parameters of your model that may lead to better GPU utilization. It may take longer, but is better suited if you want to increase your model throughput/decrease latency.
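The plateau in the quick-start numbers above can be reproduced with a toy throughput model (this simulation is our illustration, not Model Analyzer code): with a single serialized instance, raising concurrency helps only until the instance is saturated.

```python
# Toy queueing sketch: each in-flight request spends `net_s` seconds on the
# wire plus `service_s` seconds in the model; `instances` copies of the model
# can serve at most instances/service_s requests per second. Numbers are
# made up for illustration.
def steady_state_throughput(concurrency, service_s=0.001, net_s=0.0003, instances=1):
    offered = concurrency / (service_s + net_s)  # load the clients can drive
    capacity = instances / service_s             # load the instances can absorb
    return min(offered, capacity)

for c in (1, 2, 4, 8, 16):
    print(c, round(steady_state_throughput(c)))  # rises once, then plateaus
```

This mirrors the quick-start table: throughput jumps from concurrency 1 to 2 and then flattens, while a second instance (or dynamic batching) would raise the ceiling instead.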
Thank you very much. The problem is resolved. The cause was that the model_name in the config file and the folder name were different.
By the way, I have another issue running model-analyzer.
My input shape in the config file is [-1, 2] because every input can have a different length.
In the config file, the instance group count is 4 and the dynamic batching size is 4.
When I run model-analyzer with the input shape [-1, 2], the error message is below. [My model name is SE-DKT+]
```
2021-04-09 00:39:30.928 INFO[perf_analyzer.py:135] Running perf_analyzer ['perf_analyzer', '-m', 'SE-DKT+', '-b', '1', '-u', 'localhost:8001', '-i', 'grpc', '--measurement-interval', '5000', '--concurrency-range', '1'] failed with exit status 1 : error: failed to create concurrency manager: input inputs contains dynamic shape, provide shapes to send along with the request
2021-04-09 00:39:31.900 WARNING[result_manager.py:278] Requested top 3 configs, but none satisfied constraints. Showing available constraint failing configs for this model.
2021-04-09 00:39:31.900 WARNING[result_manager.py:283] Requested top 3 failing configs, but found only 0. Showing all available constraint failing configs for this model.
2021-04-09 00:39:31.900 WARNING[legend.py:1225] No handles with labels found to put in legend.
2021-04-09 00:39:31.901 WARNING[legend.py:1225] No handles with labels found to put in legend.
```
Then, I can run model-analyzer after changing the input shape to fixed values such as [10,2], [20,2], or [200,2] in the config file.
I'm curious how model-analyzer can calculate GPU memory/utilization and throughput with different input shapes.
For example, suppose I set the input shape to [20,2] in the config file but some inputs are bigger than [20,2], because I run the model with dynamic-shape inputs.
Does model-analyzer only use the [20,2] part of the input data and discard the rest?
Also, say I set the input shape to [200,2] in the config file but some inputs are smaller than [200,2], such as [20,2].
Does model-analyzer zero-pad these inputs when computing the GPU metrics?
@happyseone With regards to your error, that seems to be an error from perf_analyzer. perf_analyzer has a specific command line option to specify input shapes dynamically. You can read about providing input shapes to perf_analyzer here.
Model Analyzer allows you to pass the above flags (such as shape) to the perf_analyzer instances launched as well via the config. See these docs for more info.
Neither model analyzer, nor perf_analyzer nor Triton make any changes to your data. If you fix the input shape in the config, thats what the server will expect. If the shape is dynamic, the server will expect that the request to contain information about the input shape via the --shape argument to perf_analyzer.
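Concretely, a -1 in config.pbtxt is a wildcard that each request (or the --shape flag) must pin to a real size; here is a Python sketch of that matching rule (the helper is our illustration, not Triton API):

```python
# Sketch of how a concrete shape supplied via --shape is validated against
# the dims declared in config.pbtxt, where -1 means "any positive size".
def shape_matches(config_dims, supplied):
    if len(config_dims) != len(supplied):
        return False
    return all(c == s or (c == -1 and s > 0) for c, s in zip(config_dims, supplied))

print(shape_matches([-1, 2], [20, 2]))   # wildcard dim pinned to 20: valid
print(shape_matches([-1, 2], [20, 3]))   # fixed dim mismatch: invalid
print(shape_matches([-1, 2], [20]))      # wrong rank: invalid
```

So with dims: [-1, 2], running perf_analyzer with --shape inputs:20,2 simply benchmarks 20x2 inputs; no truncation or padding of your data is involved.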
Thank you for the explanation.
I want to create or modify a YAML file for a new configuration, but I can't find where the YAML file lives.
I'm guessing changes are only possible in a config.yaml file.
root@d1:/home/model_analyzer# find . -name "*.yaml"
./qa/L0_custom_flags/config.yaml
./helm-chart/values.yaml
./helm-chart/templates/job.yaml
./helm-chart/Chart.yaml
Could you let me know where to create or put the YAML file?
My workspace is a fresh clone of this GitHub repo except for my model files, so I can easily follow your instructions.
If you want to use a YAML file, you need to create it yourself. You can create it wherever you want and run Model Analyzer using the -f flag:
model-analyzer -f <path-to-config.yml>
Documentation is available in the link below:
https://github.com/triton-inference-server/model_analyzer/blob/main/docs/config.md#example-1
This example is probably more than what you need but shows lots of different config options available in model analyzer.
I can run Model Analyzer with my config.yaml file now, but I'm sorry to report another issue.
I get the same error message from both perf_analyzer and Model Analyzer with a dynamic input shape whenever the first dimension is '-1'.
To run the perf analyzer,
perf_client -m SE-DKT+ --percentile=95 --concurrency-range 1:8 --shape inputs:-1,2
To run the model analyzer, I write the config.yaml file like this.
model_repository: /home/model_analyzer/examples/quick-start
run_config_search_disable: false
triton_launch_mode: remote
export_path: /home/model_analyzer/examples/quick-start/metrics
model_names:
SE-DKT+:
parameters:
batch_sizes: 1, 2, 4, 8, 16, 32
perf_analyzer_flags:
percentile: 95
shape: inputs:-1,2
model_config_parameters:
dynamic_batching:
preferred_batch_size: [[4]]
instance_group:
-
-
kind: KIND_CPU
count: 4
My model config.pbtxt file is below
name: "SE-DKT+"
platform: "tensorflow_savedmodel"
max_batch_size: 32
input [
{
name: "inputs"
data_type: TYPE_INT32
dims: [-1 , 2] #model-analyzer can be run when the first dimension is not '-1'
},
{
name: "input_c_state"
data_type: TYPE_FP32
dims: [ 200]
},
{
name: "input_h_state"
data_type: TYPE_FP32
dims: [ 200]
}
]
output [
{
name: "preds"
data_type: TYPE_FP32
dims: [ 544 ]
},
{
name: "output_c_state"
data_type: TYPE_FP32
dims: [ 200 ]
},
{
name: "output_h_state"
data_type: TYPE_FP32
dims: [ 200 ]
}
]
instance_group [
{
count: 4
kind: KIND_GPU
}
]
dynamic_batching {
preferred_batch_size : [4]
}
the error message is
root@d1:/home/model_analyzer/examples/quick-start# model-analyzer -f /home/model_analyzer/examples/quick-start/config.yaml
2021-04-13 03:39:18.864 ERROR[entrypoint.py:285] Model Analyzer encountered an error: Export path /home/model_analyzer/examples/quick-start/metrics is not a directory.
root@d1:/home/model_analyzer/examples/quick-start# mkdir metrics
root@d1:/home/model_analyzer/examples/quick-start# model-analyzer -f /home/model_analyzer/examples/quick-start/config.yaml
2021-04-13 03:39:28.412 INFO[entrypoint.py:288] Triton Model Analyzer started: config={'model_repository': '/home/model_analyzer/examples/quick-start', 'model_names': [{'model_name': 'SE-DKT+', 'objectives': {'perf_throughput': 10}, 'parameters': {'batch_sizes': [1, 2, 4, 8, 16, 32], 'concurrency': []}, 'model_config_parameters': {'dynamic_batching': [{'preferred_batch_size': [[4]]}], 'instance_group': [[{'kind': ['KIND_CPU'], 'count': [4]}]]}, 'perf_analyzer_flags': {'percentile': '95', 'shape': 'inputs:-1,2'}}], 'objectives': {'perf_throughput': 10}, 'constraints': {}, 'batch_sizes': [1], 'concurrency': [], 'perf_analyzer_timeout': 600, 'perf_analyzer_cpu_util': 80.0, 'run_config_search_max_concurrency': 1024, 'run_config_search_max_instance_count': 5, 'run_config_search_disable': False, 'run_config_search_max_preferred_batch_size': 16, 'export': True, 'export_path': '/home/model_analyzer/examples/quick-start/metrics', 'summarize': True, 'filename_model_inference': 'metrics-model-inference.csv', 'filename_model_gpu': 'metrics-model-gpu.csv', 'filename_server_only': 'metrics-server-only.csv', 'max_retries': 100, 'duration_seconds': 5, 'monitoring_interval': 0.01, 'client_protocol': 'grpc', 'perf_analyzer_path': 'perf_analyzer', 'perf_measurement_window': 5000, 'perf_output': False, 'triton_launch_mode': 'remote', 'triton_docker_image': 'nvcr.io/nvidia/tritonserver:21.02-py3', 'triton_http_endpoint': 'localhost:8000', 'triton_grpc_endpoint': 'localhost:8001', 'triton_metrics_url': 'http://localhost:8002/metrics', 'triton_server_path': 'tritonserver', 'triton_output_path': None, 'triton_server_flags': {}, 'log_level': 'INFO', 'gpus': ['all'], 'output_model_repository_path': './output_model_repository', 'override_output_model_repository': False, 'config_file': '/home/model_analyzer/examples/quick-start/config.yaml', 'inference_output_fields': ['model_name', 'batch_size', 'concurrency', 'model_config_path', 'instance_group', 'dynamic_batch_sizes', 
'satisfies_constraints', 'perf_throughput', 'perf_latency', 'cpu_used_ram'], 'gpu_output_fields': ['model_name', 'gpu_id', 'batch_size', 'concurrency', 'model_config_path', 'instance_group', 'dynamic_batch_sizes', 'satisfies_constraints', 'gpu_used_memory', 'gpu_utilization', 'gpu_power_usage'], 'server_output_fields': ['model_name', 'gpu_id', 'gpu_used_memory', 'gpu_utilization', 'gpu_power_usage'], 'plots': [{'name': 'throughput_v_latency', 'title': 'Throughput vs. Latency', 'x_axis': 'perf_latency', 'y_axis': 'perf_throughput', 'monotonic': True}, {'name': 'gpu_mem_v_latency', 'title': 'GPU Memory vs. Latency', 'x_axis': 'perf_latency', 'y_axis': 'gpu_used_memory', 'monotonic': False}], 'top_n_configs': 3}
2021-04-13 03:39:28.414 INFO[entrypoint.py:79] Using remote Triton Server...
2021-04-13 03:39:28.414 WARNING[entrypoint.py:82] GPU memory metrics reported in the remote mode are not accuracte. Model Analyzer uses Triton explicit model control to load/unload models. Some frameworks do not release the GPU memory even when the memory is not being used. Consider using the "local" or "docker" mode if you want to accurately monitor the GPU memory usage for different models.
2021-04-13 03:39:28.414 WARNING[entrypoint.py:89] Config sweep parameters are ignored in the "remote" mode because Model Analyzer does not have access to the model repository of the remote Triton Server.
2021-04-13 03:39:28.468 INFO[driver.py:236] init
2021-04-13 03:39:29.600 INFO[entrypoint.py:327] Starting perf_analyzer...
2021-04-13 03:39:29.601 INFO[analyzer.py:82] Profiling server only metrics...
2021-04-13 03:39:30.627 INFO[gpu_monitor.py:73] Using GPU(s) with UUID(s) = { GPU-c8fdb676-2c11-669a-4cff-f300b28eb26a } for the analysis.
2021-04-13 03:39:31.661 INFO[run_search.py:155] Will sweep only through the concurrency values...
2021-04-13 03:39:31.661 INFO[run_search.py:262] Concurrency set to 1.
2021-04-13 03:39:31.694 INFO[client.py:80] Model SE-DKT+ loaded.
2021-04-13 03:39:31.697 INFO[client.py:104] Model SE-DKT+ unloaded.
2021-04-13 03:39:31.776 INFO[client.py:80] Model SE-DKT+ loaded.
2021-04-13 03:39:31.777 INFO[run_config_generator.py:177] Profiling model SE-DKT+...
2021-04-13 03:39:32.808 INFO[gpu_monitor.py:73] Using GPU(s) with UUID(s) = { GPU-c8fdb676-2c11-669a-4cff-f300b28eb26a } for the analysis.
2021-04-13 03:39:37.861 INFO[perf_analyzer.py:135] Running perf_analyzer ['perf_analyzer', '-m', 'SE-DKT+', '-b', '32', '-u', 'localhost:8001', '-i', 'grpc', '--measurement-interval', '5000', '--concurrency-range', '1', '--percentile', '95', '--shape', 'inputs:-1,2'] failed with exit status 1 : error: input shape must be > 0
I tried various input shapes, but whenever the first dimension of the input shape is '-1' (for a dynamic shape), I get the error message "error: input shape must be > 0".
I'm sorry to ask several times, but I'd like to get this problem solved.
Thank you for your instructions.
@happyseone It looks like the problem is with the shape field. When you are providing the shape field, you need to avoid specifying -1 as the dimension.
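In this case the perf_analyzer_flags stanza would need a concrete first dimension, for example (a sketch; 20 is just an illustrative value taken from earlier in the thread):

```yaml
perf_analyzer_flags:
  percentile: 95
  shape: inputs:20,2   # concrete dims; -1 is rejected with "input shape must be > 0"
```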
|
gharchive/issue
| 2021-04-08T07:57:56 |
2025-04-01T04:36:08.335669
|
{
"authors": [
"Tabrizian",
"aramesh7",
"happyseone"
],
"repo": "triton-inference-server/model_analyzer",
"url": "https://github.com/triton-inference-server/model_analyzer/issues/111",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1406976331
|
Integer value can't be read as double
I have a cell with 5000 in it, and when I read it into a double I get a bad variant exception from std::get<double> (the variant holds an int64_t, not a double). The fix seems to be to change the stanza in XLCellValue.cpp at line 244 to look like this (add the Integer line):
if constexpr (std::is_floating_point_v<T>) {
if (m_type == XLValueType::Error) return std::nan("1");
if (m_type == XLValueType::Integer) return static_cast<T>(std::get<int64_t>(m_value));
return static_cast<T>(std::get<double>(m_value));
}
This is a crucial problem and needs attention imho.
There is also a case very similar to this: A cell value of 5,00 is read as an integer by OpenXLSX and yields bad variant exception when read as a float/double.
@SpareSimian's fix works 👍
|
gharchive/issue
| 2022-10-13T00:46:45 |
2025-04-01T04:36:08.394224
|
{
"authors": [
"SpareSimian",
"fauder"
],
"repo": "troldal/OpenXLSX",
"url": "https://github.com/troldal/OpenXLSX/issues/196",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1431888136
|
🛑 PiHole is down
In ab65ee6, PiHole (https://pihole.tronflix.app/admin/login.php) was down:
HTTP code: 520
Response time: 172 ms
Resolved: PiHole is back up in 7d89ffe.
|
gharchive/issue
| 2022-11-01T18:29:47 |
2025-04-01T04:36:08.399220
|
{
"authors": [
"tronyx"
],
"repo": "tronyx/upptime",
"url": "https://github.com/tronyx/upptime/issues/1289",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1436166571
|
🛑 PiHole Backup is down
In cd058c8, PiHole Backup (https://pihole-backup.tronflix.app/admin/login.php) was down:
HTTP code: 520
Response time: 157 ms
Resolved: PiHole Backup is back up in b785422.
|
gharchive/issue
| 2022-11-04T14:29:09 |
2025-04-01T04:36:08.401861
|
{
"authors": [
"tronyx"
],
"repo": "tronyx/upptime",
"url": "https://github.com/tronyx/upptime/issues/1720",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1902125708
|
🛑 Library is down
In 8dcbfe1, Library (https://library.tronflix.app) was down:
HTTP code: 523
Response time: 7241 ms
Resolved: Library is back up in 4e1df89 after 8 minutes.
|
gharchive/issue
| 2023-09-19T03:08:02 |
2025-04-01T04:36:08.404343
|
{
"authors": [
"tronyx"
],
"repo": "tronyx/upptime",
"url": "https://github.com/tronyx/upptime/issues/4121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1289115152
|
🛑 Ombi is down
In 8edaae4, Ombi (https://tronflix.app/ombi/) was down:
HTTP code: 403
Response time: 89 ms
Resolved: Ombi is back up in 6549fb5.
|
gharchive/issue
| 2022-06-29T18:16:02 |
2025-04-01T04:36:08.406586
|
{
"authors": [
"tronyx"
],
"repo": "tronyx/upptime",
"url": "https://github.com/tronyx/upptime/issues/504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2491074586
|
🛑 Radarr is down
In 9c8ebd2, Radarr (https://tronflix.app/radarr/activity/queue/) was down:
HTTP code: 302
Response time: 73 ms
Resolved: Radarr is back up in 4bad532 after 10 minutes.
|
gharchive/issue
| 2024-08-28T05:19:52 |
2025-04-01T04:36:08.408831
|
{
"authors": [
"tronyx"
],
"repo": "tronyx/upptime",
"url": "https://github.com/tronyx/upptime/issues/5404",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
177216708
|
Initialising a grid below minWidth disables resize and move
I found some odd behaviour while refreshing my browser: when a grid initialises with less space than minWidth, resize and move get disabled.
At the time of writing (Sep 2016) you can see this happening on the gridstack home page demo grid (https://troolee.github.io/gridstack.js/): resize your browser to less than 770px wide (I think that's the default minWidth), then hit refresh and you will see that the grid is disabled.
I found a workaround in my project by setting minWidth to 0, which is fine for my project but might be problematic if people want to use the single-column mode.
So the media-query breaking point for mobile view is 768px. Yes, dragging grid items is disabled, but once you've resized the window above 768px the grid is active again and you're able to move the objects around.
minWidth indicates a bound below which gridstack changes to one-column mode (for mobile devices). In this mode the drag/drop function is disabled.
I'm seeing a similar issue: I set minWidth to 1280, refresh the page, and drag/drop is disabled at around 1100px, while one-column mode does not enable until a width of about 685px. Overall it is inconsistent relative to the minWidth value. I only see this when refreshing the browser; once the screen has been made wider and brought back down, it disappears.
|
gharchive/issue
| 2016-09-15T15:57:20 |
2025-04-01T04:36:08.411541
|
{
"authors": [
"DoctaWorm",
"gavJackson",
"kylietmo",
"troolee"
],
"repo": "troolee/gridstack.js",
"url": "https://github.com/troolee/gridstack.js/issues/527",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
168781048
|
Requiring directories that contain broken symlinks throws an error
If a directory contains a broken symlink at any level of its subdirectory tree, calling require-directory() on it will always throw an ENOENT error.
For example, running index.js in the following tree:
├── dir
│ └── broken-subdir -> nonexistent
├── index.js
Where the contents of index.js is just require('require-directory')(module) will throw:
$ node index.js
fs.js:696
return binding.stat(pathModule._makeLong(path));
^
Error: ENOENT, no such file or directory '/Users/aleksey/require-directotry-test/dir/broken-subdir'
at Object.fs.statSync (fs.js:696:18)
at /Users/aleksey/require-directotry-test/node_modules/require-directory/index.js:65:12
at Array.forEach (native)
at requireDirectory (/Users/aleksey/require-directotry-test/node_modules/require-directory/index.js:59:24)
at /Users/aleksey/require-directotry-test/node_modules/require-directory/index.js:67:15
at Array.forEach (native)
at requireDirectory (/Users/aleksey/require-directotry-test/node_modules/require-directory/index.js:59:24)
at Object.<anonymous> (/Users/aleksey/require-directotry-test/index.js:1:91)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
I think it should still attempt to traverse and load the successful directories, and defer the error handling to the consumer.
I've come across a similar error where it gets relative paths wrong, but calling path.resolve(path) on the path before handing it over fixes that before it gets here. It's only related in that the error could do with being in user-land. I didn't track down the "why", hence not opening a separate issue.
|
gharchive/issue
| 2016-08-02T01:43:57 |
2025-04-01T04:36:08.441926
|
{
"authors": [
"Rycochet",
"lxe"
],
"repo": "troygoode/node-require-directory",
"url": "https://github.com/troygoode/node-require-directory/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1962909941
|
Add confirmation popup when certain actions (mostly delete actions) are performed by user
Should be somewhat easy. Will do a PR within this week.
I want to solve the problem of accidentally deleting mods by providing an undo buffer rather than adding a confirmation dialog: #81
|
gharchive/issue
| 2023-10-26T07:42:51 |
2025-04-01T04:36:08.531333
|
{
"authors": [
"AichiChikuwa",
"trumank"
],
"repo": "trumank/drg-mod-integration",
"url": "https://github.com/trumank/drg-mod-integration/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2274370867
|
fratch, aneath
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,f]
/trunk merge
|
gharchive/pull-request
| 2024-05-02T00:03:57 |
2025-04-01T04:36:08.532941
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/12952",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2277066782
|
autocarist
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a]
/trunk merge
|
gharchive/pull-request
| 2024-05-03T07:10:31 |
2025-04-01T04:36:08.534751
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/13853",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2279138299
|
frithsoken, eleutheromorph
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[e,f]
/trunk merge
|
gharchive/pull-request
| 2024-05-04T18:04:18 |
2025-04-01T04:36:08.536337
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/14886",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2280099360
|
embrowns, felon
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[e,f]
/trunk merge
|
gharchive/pull-request
| 2024-05-06T06:03:45 |
2025-04-01T04:36:08.537878
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/15935",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2387448568
|
fumatory, bouchees
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[b,f]
/trunk merge
|
gharchive/pull-request
| 2024-07-03T01:31:50 |
2025-04-01T04:36:08.539441
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/16748",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2389639675
|
bauckie, fibrous
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[b,f]
/trunk merge
|
gharchive/pull-request
| 2024-07-03T23:07:18 |
2025-04-01T04:36:08.541200
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/17735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2390116501
|
balebos, astronomic
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,b]
/trunk merge
|
gharchive/pull-request
| 2024-07-04T07:07:51 |
2025-04-01T04:36:08.542822
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/18135",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2390477324
|
expansionary
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[e]
/trunk merge
|
gharchive/pull-request
| 2024-07-04T10:05:27 |
2025-04-01T04:36:08.544584
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/18265",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2391359379
|
bachelordom, antimensium
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,b]
/trunk merge
|
gharchive/pull-request
| 2024-07-04T19:08:22 |
2025-04-01T04:36:08.546097
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/18742",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2391651361
|
burucha, disrobing
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[b,d]
/trunk merge
|
gharchive/pull-request
| 2024-07-05T02:19:18 |
2025-04-01T04:36:08.547645
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/19078",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2392681970
|
experiences
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[e]
/trunk merge
|
gharchive/pull-request
| 2024-07-05T14:07:49 |
2025-04-01T04:36:08.549210
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/19679",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2407340155
|
expositive, fino
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[e,f]
/trunk merge
|
gharchive/pull-request
| 2024-07-14T09:08:06 |
2025-04-01T04:36:08.550763
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/25008",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2414251638
|
guttular, chuckle
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[c,g]
/trunk merge
|
gharchive/pull-request
| 2024-07-17T18:10:38 |
2025-04-01T04:36:08.552341
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/26059",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2418318297
|
graspable, discing
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[d,g]
/trunk merge
|
gharchive/pull-request
| 2024-07-19T08:07:22 |
2025-04-01T04:36:08.554120
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/26591",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2420494468
|
behaviors, eelboat
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[b,e]
/trunk merge
|
gharchive/pull-request
| 2024-07-20T02:15:42 |
2025-04-01T04:36:08.555667
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/26968",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2241874226
|
ferrying, epigonic
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.4
logical conflict every: 5
sleep for: 300s
[pullrequest]
requests per hour: 30
deps=[e,f]
/trunk merge
|
gharchive/pull-request
| 2024-04-14T02:52:03 |
2025-04-01T04:36:08.557162
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/2764",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2422588307
|
effluency, bellows
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[b,e]
/trunk merge
|
gharchive/pull-request
| 2024-07-22T11:03:49 |
2025-04-01T04:36:08.558759
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/28716",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2428289200
|
arc, detainment
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,d]
/trunk merge
|
gharchive/pull-request
| 2024-07-24T19:07:10 |
2025-04-01T04:36:08.560344
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/31480",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2429197491
|
duteous, epithermally
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[d,e]
/trunk merge
|
gharchive/pull-request
| 2024-07-25T07:07:04 |
2025-04-01T04:36:08.561915
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/32077",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2433253642
|
altitonant, garrya
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,g]
/trunk merge
|
gharchive/pull-request
| 2024-07-27T05:03:45 |
2025-04-01T04:36:08.563687
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/34307",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2245756573
|
chinampa, dadoxylon
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 30
deps=[c,d]
/trunk merge
|
gharchive/pull-request
| 2024-04-16T11:01:40 |
2025-04-01T04:36:08.565351
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/4138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254342589
|
aversive, erythristic
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,e]
/trunk merge
|
gharchive/pull-request
| 2024-04-20T03:02:03 |
2025-04-01T04:36:08.566905
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/4605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2255656767
|
argonaut, grovelingly
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[a,g]
/trunk merge
|
gharchive/pull-request
| 2024-04-22T06:04:21 |
2025-04-01T04:36:08.568490
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/6017",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2262443877
|
chaldean, demideify
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 100
sleep for: 300s
close stale after: 4 hours
[pullrequest]
requests per hour: 0
deps=[c,d]
/trunk merge
|
gharchive/pull-request
| 2024-04-25T01:26:04 |
2025-04-01T04:36:08.570039
|
{
"authors": [
"epes"
],
"repo": "trunk-io/mergequeue-staging",
"url": "https://github.com/trunk-io/mergequeue-staging/pull/7977",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2409069382
|
arblast, erastus
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 600s
close stale after: 24 hours
[pullrequest]
requests per hour: 20
deps=[a,e]
/trunk merge
|
gharchive/pull-request
| 2024-07-15T15:47:29 |
2025-04-01T04:36:08.571615
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/103879",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2411142146
|
bionic, daverdy
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 600s
close stale after: 24 hours
[pullrequest]
requests per hour: 20
deps=[b,d]
/trunk merge
|
gharchive/pull-request
| 2024-07-16T13:25:48 |
2025-04-01T04:36:08.573375
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/104736",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2413147229
|
enhanced, docquet
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 600s
close stale after: 24 hours
[pullrequest]
requests per hour: 20
deps=[d,e]
/trunk merge
|
gharchive/pull-request
| 2024-07-17T09:42:56 |
2025-04-01T04:36:08.574954
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/105525",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2426343882
|
baratheas
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 600s
close stale after: 24 hours
[pullrequest]
requests per hour: 20
deps=[b]
/trunk merge
|
gharchive/pull-request
| 2024-07-24T00:22:09 |
2025-04-01T04:36:08.576499
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/111647",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2430709219
|
apprized, dengues
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 600s
close stale after: 24 hours
[pullrequest]
requests per hour: 20
deps=[a,d]
/trunk merge
|
gharchive/pull-request
| 2024-07-25T18:16:17 |
2025-04-01T04:36:08.578062
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/113305",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2233216906
|
decremented
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-09T11:22:24 |
2025-04-01T04:36:08.578756
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/15838",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2233335852
|
apologetical, cambyuskan
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-09T12:31:29 |
2025-04-01T04:36:08.579725
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/15926",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2234648933
|
danger, bandying
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-10T02:44:42 |
2025-04-01T04:36:08.580431
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/17422",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2235255570
|
fortuitousness, annularly
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-10T10:26:35 |
2025-04-01T04:36:08.581119
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/18252",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2236059314
|
emprosthotonus
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-10T17:03:51 |
2025-04-01T04:36:08.581984
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/18927",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2236942881
|
floccosely, bindweb
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-11T05:31:28 |
2025-04-01T04:36:08.582689
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/19919",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2241144404
|
bigot, fabulousness
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-13T00:22:42 |
2025-04-01T04:36:08.583381
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/22907",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2244285315
|
fluoroid, empidonax
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-15T17:53:01 |
2025-04-01T04:36:08.584190
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/27107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2244304521
|
floter, grassers
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-15T18:05:02 |
2025-04-01T04:36:08.584880
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/27132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2247474760
|
choledochitis, bactetiophage
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-17T06:20:23 |
2025-04-01T04:36:08.585577
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/29825",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2249040272
|
demultiplexing, blepharodyschroia
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-17T19:03:42 |
2025-04-01T04:36:08.586262
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/31316",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2251527154
|
copperytailed, glorifiers
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-18T20:44:21 |
2025-04-01T04:36:08.586944
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/34038",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2252534232
|
diffidently, flindersia
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-19T09:36:15 |
2025-04-01T04:36:08.587637
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/35391",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254379886
|
felsites, discreeter
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-20T04:55:50 |
2025-04-01T04:36:08.588339
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/37489",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2254881031
|
computers, amphrysian
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-21T03:27:46 |
2025-04-01T04:36:08.589033
|
{
"authors": [
"mmatheson"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/39876",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2255088626
|
durns
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-21T13:44:55 |
2025-04-01T04:36:08.589715
|
{
"authors": [
"joshmarinacci"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/40981",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2255123591
|
dendrophil, bedriddenness
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-21T15:06:47 |
2025-04-01T04:36:08.590574
|
{
"authors": [
"joshmarinacci"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/41139",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2255237828
|
dyspnoeic, fertil
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-04-21T19:54:16 |
2025-04-01T04:36:08.591242
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/41655",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2274948968
|
carboxydomonas, glossily
This pull request was generated by the 'mq' tool
/trunk merge
|
gharchive/pull-request
| 2024-05-02T08:47:16 |
2025-04-01T04:36:08.591939
|
{
"authors": [
"joshmarinacci"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/45707",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2286282857
|
chamaesyce, disobligatory
This pull request was generated by the 'mq' tool
[test]
flake rate: 0.1
logical conflict every: 1000
sleep for: 2100s
close stale after: 24 hours
[pullrequest]
requests per hour: 100
deps=[c,d]
/trunk merge
|
gharchive/pull-request
| 2024-05-08T19:20:56 |
2025-04-01T04:36:08.593544
|
{
"authors": [
"EliSchleifer"
],
"repo": "trunk-io/mergequeue",
"url": "https://github.com/trunk-io/mergequeue/pull/48107",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|