id | text | source | created | added | metadata
---|---|---|---|---|---
689330901
|
fy2021q1 refresh
updated demos to SPFx v1.11
update pictures, resources, and steps to account for dependency updates
Excellent, thanks AC and Rob for the updates for 1.11.
|
gharchive/pull-request
| 2020-08-31T16:23:17 |
2025-04-01T04:55:37.573712
|
{
"authors": [
"VesaJuvonen",
"andrewconnell"
],
"repo": "SharePoint/sp-dev-training-spfx-spcontent",
"url": "https://github.com/SharePoint/sp-dev-training-spfx-spcontent/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
367366973
|
Security Trimmed menus and list
Category
[x] Question
[ ] Bug
[ ] Enhancement
Expected or Desired Behavior
I would like to see security trimming on the menus and in the footer. When a user is on the site, I would like them to see only what they have permission to see.
Observed Behavior
In the nav menus and in the footer, all options are showing instead of just the sites where the user has permission. If you look in Gear Icon > Site Settings > Site Collection Navigation, the navigation settings option is missing. I understand the trimming option is on that settings page.
Is there a setting that is wrong? What am I missing?
Hub site navigation cannot be security trimmed currently. This is native functionality out of the box.
As for the footer that comes in the starter kit, you can permission menu items in the PnP-PortalFooter-Links library where the items are created.
If you are talking about the footer on the collaboration sites, those cannot be permission trimmed, as they are driven by the term store.
Hope this helps.
I can understand that. Microsoft is aware of the limitation of the current hub navigation, so you can expect the functionality around hub nav to grow over time.
Alternatively, you have a couple options.
Put a "sites" web part on the page that shows the current user's sites that they have access to. So instead of linking them in the nav, you link them on the page.
If you have a development team, build your own custom navigation that sits on top of Hub Navigation to pull in sites. I did something similar https://beaucameron.net/2018/04/17/security-trimmed-hub-site-navigation-updates/
Thank you for your suggestions. I think I have some reading to do on permissions. The portal footer seems to show all the team sites regardless of permissions. I'll verify that I have the settings correct so only the user's teams show up.
The links in the portal footer are driven from the list, so you can permission items in that list to the users you want. If you notice that you made the change and it's still showing the same links, clear your cache. :)
|
gharchive/issue
| 2018-10-05T21:00:26 |
2025-04-01T04:55:37.578605
|
{
"authors": [
"bcameron1231",
"gdavisMDA"
],
"repo": "SharePoint/sp-starter-kit",
"url": "https://github.com/SharePoint/sp-starter-kit/issues/171",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1222106519
|
Usage instructions
How do I use this? Could you write some usage instructions? Thanks.
1. Log in to 玩遍ios in your browser
2. Press F12 to open the developer tools, refresh the page, and find this request in the Network tab
3. Copy everything after cookie:
4. Fork this repository
5. Go to your fork's Settings - Secrets - Actions and add a secret named COOKIES, pasting in the content you just copied
6. Star this repository
7. It will then check in daily. The cookie expires after about 15 days, so repeat steps 1, 2, 3, and 5 to update the cookie in the secret.
PS: A gripe that the daily check-in only gives 0.15 :-(
Thanks!
|
gharchive/issue
| 2022-05-01T10:26:21 |
2025-04-01T04:55:37.581287
|
{
"authors": [
"Shark-Lucas",
"hcliu987"
],
"repo": "Shark-Lucas/wanbianios_checkin",
"url": "https://github.com/Shark-Lucas/wanbianios_checkin/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
229351420
|
dateNF parsing option ignored
I still have a parsing problem in version 0.10.2.
dateNF is used for formatting the output but not as a parsing option.
In my example
COD,DES,DATE,NUM
01,ZEROONE,12/01/2009,1.2
02,ZEROTWO,16/02/2016,3
dates are in dd/mm/yyyy format.
The parsing system from base64 text returns 12/01/2009 as 30/11/2009 (with time zone problem), clearly reading from the format mm/dd/yyyy. 16/02/2016 returns data type string, because 16 is not a valid month.
Fiddle proving it: https://jsfiddle.net/92qu04vd/2/
@mmancosu thanks for continuing this :) There are two questions to ask: what does Excel do and what should we do?
"What does Excel do?" Excel 2016 US locale treats 16/02/2016 as a string and 12/01/2009 as a "date with timezone problem", precisely how we handle it right now:
To reproduce, type '16/02/2016 in A2 to force text interpretation, type 16/02/2016 in B2, and =IF(TYPE(B2)=2,"TEXT",IF(TYPE(B2)=1,"NUMBER","OTHER")) in C2
"What should we do?" As for dateNF, that controls the final object but not how the values are read. Now that you mention it, that's not a bad idea (interpret the dateNF as a mask, and if the text matches the mask then interpret the numbers accordingly).
@SheetJSDev thanks for your work :)
My need is to parse the date string and understand which date it is. Excel does exactly what you showed us. It would be great if dateNF could be interpreted as a mask. The following part of the Date Number Format documentation made me suppose that dateNF worked exactly the way I need:
...
To get around this ambiguity, parse functions accept the dateNF option to override the interpretation of that specific format string.
The context of that paragraph was:
Format 14 (m/d/yy) is localized by Excel: even though the file specifies that number format, it will be drawn differently based on system settings. It makes sense when the producer and consumer of files are in the same locale, but that is not always the case over the Internet.
Specifically for that format, Excel will use the client settings to draw the date. If you go to regional and language settings, you can change the computer locale and Excel will show different text! That only happens for one number format, everything else remains the same. https://github.com/SheetJS/js-xlsx/issues/326#issuecomment-286014758 The original locale isn't stored in the file (its stored for some XLS files but XLSX and other formats have no provision for saving the original locale) so you can't magically deduce what was originally used, hence the option.
But your interpretation makes more sense :) What we can do is specifically test for the number format first.
Yeah, I misunderstood the meaning of Format 14 :)
What do you mean by your last sentence?
Hi @SheetJSDev, is there something new about this issue? :)
A complete solution would be the inverse of SSF, but for now the date format is tokenized and a "fixed" date is produced by matching fields and rewriting as ISO8601 (if the value matches the date format).
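The masking idea from the last comment can be sketched in plain JavaScript. This is a simplified illustration, not SheetJS's actual implementation; the function name and the validation rules are my own:

```javascript
// Sketch: tokenize a dateNF-style mask such as "dd/mm/yyyy", and if the text
// matches field-for-field, rewrite the value as an ISO 8601 date string.
function applyDateMask(text, mask) {
  const maskParts = mask.split('/');
  const textParts = text.split('/');
  if (maskParts.length !== textParts.length) return null;
  const fields = {};
  for (let i = 0; i < maskParts.length; i++) {
    if (!/^\d+$/.test(textParts[i])) return null;
    fields[maskParts[i][0]] = parseInt(textParts[i], 10); // key by 'd', 'm', or 'y'
  }
  // Reject impossible fields, e.g. month 16 -- mirrors Excel falling back to text
  if (fields.m < 1 || fields.m > 12 || fields.d < 1 || fields.d > 31) return null;
  const pad = (n, w) => String(n).padStart(w, '0');
  return `${pad(fields.y, 4)}-${pad(fields.m, 2)}-${pad(fields.d, 2)}`;
}

console.log(applyDateMask('16/02/2016', 'dd/mm/yyyy')); // "2016-02-16"
console.log(applyDateMask('16/02/2016', 'mm/dd/yyyy')); // null: 16 is not a valid month
```

With a dd/mm/yyyy mask, both example rows parse as dates; with the US-style mm/dd/yyyy mask, 16/02/2016 falls back to text, matching the Excel behavior described above.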
|
gharchive/issue
| 2017-05-17T13:25:11 |
2025-04-01T04:55:37.656942
|
{
"authors": [
"SheetJSDev",
"mmancosu"
],
"repo": "SheetJS/js-xlsx",
"url": "https://github.com/SheetJS/js-xlsx/issues/658",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
239257618
|
Rendered Documentation Freezes
There seem to be some issues with the docs page.
It's been freezing: https://sheetjs.gitbooks.io/docs/
Browsers tried: Chrome, Firefox, Safari
OSes tried: Ubuntu, macOS
There's an open issue with the gitbook service: https://github.com/GitbookIO/gitbook/issues/1788
The rendered documentation displays https://github.com/SheetJS/js-xlsx/blob/master/README.md with a table of contents in the left column. The content is the same, so use the README in the meantime.
|
gharchive/issue
| 2017-06-28T18:41:08 |
2025-04-01T04:55:37.659606
|
{
"authors": [
"mlhan1993",
"reviewher"
],
"repo": "SheetJS/js-xlsx",
"url": "https://github.com/SheetJS/js-xlsx/issues/710",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1164704745
|
Error with Vue 3 + Vite 2: does not provide an export named 'default'
SyntaxError: The requested module '/node_modules/.vite/xlsx.js?t=1646880501094&v=ac5f7de7' does not provide an export named 'default'
Use
import * as XLSX from 'xlsx'
or "named imports":
import { utils, writeFileXLSX } from 'xlsx';
good
|
gharchive/issue
| 2022-03-10T03:29:56 |
2025-04-01T04:55:37.662100
|
{
"authors": [
"SheetJSDev",
"hb2013",
"mengandxa"
],
"repo": "SheetJS/sheetjs",
"url": "https://github.com/SheetJS/sheetjs/issues/2605",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
390742022
|
11613 DEV add routeRoles for route POST /passthrough/subscriptions/:id/customReports
https://github.com/Shippable/pm/issues/11613
this is done, closing.
|
gharchive/issue
| 2018-12-13T15:54:29 |
2025-04-01T04:55:37.893167
|
{
"authors": [
"ankul-shippable"
],
"repo": "Shippable/admiral",
"url": "https://github.com/Shippable/admiral/issues/2772",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1512032264
|
🛑 ShippersEdge Shipment API is down
In 0a771a9, ShippersEdge Shipment API (https://api.shippersedge.com/v4/shipment/29491477) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ShippersEdge Shipment API is back up in e527513.
|
gharchive/issue
| 2022-12-27T17:50:35 |
2025-04-01T04:55:37.920147
|
{
"authors": [
"josephtaylor-mn"
],
"repo": "ShippersEdge/upptime",
"url": "https://github.com/ShippersEdge/upptime/issues/282",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2309668112
|
how to test new model on BFCL?
I need to evaluate a model on BFCL, but it's not listed in the predefined model catalog. Do I have to create a custom model_handler.py file to achieve this, or is there another file that can be used directly to define a new model?
Hi @zhangyingerjelly, the process of adding a new model is documented in this section of the BFCL README. Let us know if you have more questions. We're happy to discuss this in the Discord channel as well if you're facing issues setting it up.
If the model is private, feel free to do step 1-4 in the above screenshot. Thanks!
Adding on to Charlie's comment: you also need to add the model information to MODEL_METADATA_MAPPING (code here); otherwise the final score table won't be generated.
BTW, if your model outputs in a common format (e.g., OpenAI, Claude, etc.), you should take a look at the existing model handlers first; very likely you can reuse one of them.
|
gharchive/issue
| 2024-05-22T06:22:02 |
2025-04-01T04:55:37.923160
|
{
"authors": [
"CharlieJCJ",
"HuanzhiMao",
"zhangyingerjelly"
],
"repo": "ShishirPatil/gorilla",
"url": "https://github.com/ShishirPatil/gorilla/issues/438",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1187489859
|
Image dimensions getting shrunk (no control over it)
Current behavior
After compression, the image's dimensions get shrunk.
E.g. I have an image with dimensions 4640x3472
after using the following compression code
const result = await Image.compress(b64_string, {
quality: 0.5,
input: "base64",
maxHeight: 10000,
maxWidth: 10000,
});
the output is an image of dimensions 1250x936, even though I specified maxWidth and maxHeight as 10,000
Expected behavior
I want an output image of dimension 4640x3472 with 0.5 quality
Platform
[X] Android
[ ] iOS
React Native Version
react-native: 0.67.4
React Native Compressor Version
"react-native-compressor": "^1.5.2"
Never mind... Reading the image dimensions programmatically is giving the wrong values... maxWidth and maxHeight are working as expected.
Image.getSize(`file://${path}`, (width, height) => {
files.push({uri: `file://${path}`, width, height, quality: 1});
});
This function is giving me the wrong width/height, not sure why.
|
gharchive/issue
| 2022-03-31T04:42:45 |
2025-04-01T04:55:37.933663
|
{
"authors": [
"akashraj9828"
],
"repo": "Shobbak/react-native-compressor",
"url": "https://github.com/Shobbak/react-native-compressor/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1381215425
|
Add a lint rule to discourage the usage of factory functions
WHY are these changes introduced?
This PR is a continuation of this work to introduce some conventions into our codebase, to ensure the codebase evolves healthily and that developers have a great experience contributing to the repository. In particular, I'm adding an ESLint rule that makes lint fail if new modules use the following pattern:
import { error } from "@shopify/cli-kit"
const SomeError = () => {
return new error.Abort("some custom message")
}
function someService() {
// Before
throw SomeError()
// Now
throw new error.Abort("some custom message")
}
WHAT is this pull request doing?
I'm adding the ESLint rule with a shitlist of modules that haven't yet adopted the new convention. This ensures that we adopt the new convention in new modules and that there's some accountability to motivate the migration of the modules in the list.
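The PR text doesn't show the rule itself. As a rough illustration only, a similar effect could be approximated with ESLint's built-in no-restricted-syntax; the selector and message below are hypothetical, not the actual rule shipped in the PR:

```javascript
// Hypothetical .eslintrc.js fragment: flag factory functions whose body builds
// a `new error.Something(...)` instead of throwing it at the call site.
module.exports = {
  rules: {
    'no-restricted-syntax': [
      'error',
      {
        // Matches `const SomeError = () => { return new error.Abort(...) }`
        selector:
          "VariableDeclarator > ArrowFunctionExpression NewExpression[callee.object.name='error']",
        message:
          'Throw new error.Abort(...) directly instead of wrapping it in a factory function.',
      },
    ],
  },
};
```

In practice a custom rule with a per-module exception list (the "shitlist" the PR mentions) gives finer control than this built-in approximation.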
How to test your changes?
yarn lint must succeed.
Measuring impact
How do we know this change was effective? Please choose one:
[x] n/a - this doesn't need measurement, e.g. a linting rule or a bug-fix
[ ] Existing analytics will cater for this addition
[ ] PR includes analytics changes to measure impact
Checklist
[x] I've considered possible cross-platform impacts (Mac, Linux, Windows)
[x] I've considered possible documentation changes
[x] I've made sure that any changes to dev or deploy have been reflected in the internal flowchart.
Next here is adding some documentation around:
Error and result types.
When and how to throw errors
How to test logic that throws errors.
|
gharchive/pull-request
| 2022-09-21T16:42:00 |
2025-04-01T04:55:37.948602
|
{
"authors": [
"pepicrft"
],
"repo": "Shopify/cli",
"url": "https://github.com/Shopify/cli/pull/514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1847252908
|
Allow 404 to be caught by ErrorBoundary in Route
WHY are these changes introduced?
Currently (with the Remix v2 v2_errorBoundary flag), when a 404 is thrown, it is not caught by the ErrorBoundary.
WHAT is this pull request doing?
This PR adds a default exported function that returns null. This allows the ErrorBoundary in root.tsx to catch the 404 errorStatus.
HOW to test your changes?
run skeleton build locally
visit a non-existent URL (e.g. localhost:3000/dsadsadsa)
Ensure the Error boundary is rendered
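The change described above is small; a sketch of the idea follows (shown without JSX, and with a hypothetical function name — the real PR adds this to a skeleton route file):

```javascript
// A catch-all route module whose default export renders nothing, so the
// ErrorBoundary in root.tsx handles the thrown 404 status instead.
function CatchAll() {
  return null; // nothing to render; root.tsx's ErrorBoundary takes over
}

console.log(CatchAll()); // null
```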
Checklist
[x] I've read the Contributing Guidelines
[x] I've considered possible cross-platform impacts (Mac, Linux, Windows)
[ ] I've added a changeset if this PR contains user-facing or noteworthy changes
[ ] I've added tests to cover my changes
[ ] I've added or updated the documentation
Closing in favour of #1215
|
gharchive/pull-request
| 2023-08-11T18:30:24 |
2025-04-01T04:55:37.954608
|
{
"authors": [
"josh-sanger"
],
"repo": "Shopify/hydrogen",
"url": "https://github.com/Shopify/hydrogen/pull/1214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1555668645
|
Remove pg gem as a development requirement
PostgreSQL is not an actual development requirement for this gem, and asking everyone to install it only discourages contributions and makes the project less approachable for cloud-based environments like Codespaces.
For many of the maintenance tasks, even an in-memory SQLite database will suffice.
So this PR removes the pg gem from the Gemfile and moves it to a separate gemfile which will be used on CI only.
I think the expected check can be removed from the required checks in the default branch settings.
Right ❤️ Thank you! I'm going to make the postgres steps required after merging this PR, as those don't seem to show up until merged.
Nice 👍
|
gharchive/pull-request
| 2023-01-24T21:07:22 |
2025-04-01T04:55:37.959133
|
{
"authors": [
"adrianna-chang-shopify",
"nvasilevski"
],
"repo": "Shopify/maintenance_tasks",
"url": "https://github.com/Shopify/maintenance_tasks/pull/755",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2629618129
|
Check for pending migrations when watcher is fired
Check for pending migrations, notify the user and offer to run them. The idea is that if any modifications are made to the db/migrate directory, we check if there are pending migrations.
If there are, we show a dialog and offer to run them. If the user decides to do it, we then fire a request to the server to run the migrations.
Note: we only show the dialog if a migration is deleted or created, otherwise it can become really annoying when you are editing the migration you just created.
Perhaps we can also have it check for pending migrations when launched?
|
gharchive/pull-request
| 2024-11-01T18:20:25 |
2025-04-01T04:55:37.967525
|
{
"authors": [
"andyw8",
"vinistock"
],
"repo": "Shopify/ruby-lsp-rails",
"url": "https://github.com/Shopify/ruby-lsp-rails/pull/504",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2530782529
|
Consolidate docs to Jekyll site
Motivation
We should move all in-use documentation to Jekyll so users won't need to jump between GitHub and the documentation site. It'll also make the content easier to maintain.
Implementation
Moved addons, roadmap, version managers, troubleshooting, semantic highlighting, and editors documentation to Jekyll
Updated all the links to these documents
Sidebar structure
Automated Tests
Manual Tests
@vinistock Do we still need SEMANTIC_HIGHLIGHTING.md? I don't see it being linked from anywhere.
Did you move the semantic highlighting docs? I don't see the file on GH.
|
gharchive/pull-request
| 2024-09-17T10:37:37 |
2025-04-01T04:55:37.970019
|
{
"authors": [
"st0012",
"vinistock"
],
"repo": "Shopify/ruby-lsp",
"url": "https://github.com/Shopify/ruby-lsp/pull/2561",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1851875715
|
Always use complete URI for store key
Motivation
There are scenarios where we can receive the same file path with different schemes from VS Code. For example, when someone is looking at the diff editor, we receive the same path twice with two different schemes:
file:///foo/bar.rb
git:///foo/bar.rb
Because we were using the file path as the storage key, these two URIs would conflict and result in inconsistent internal state in the LSP. If you closed the git version, it would delete the file version too and that file would no longer exist (from the LSP's perspective).
Implementation
The switch to use file paths and untitled names as the storage key was a mistake. We should always use the complete URI so that there are no conflicts and each scheme has a separate version of the document. VS Code sends duplicate requests for each scheme version, so we should respect that and treat them as separate documents.
Automated tests
Added some examples that catch the issue.
I think it's worth responding to requests on git schemes, since it's useful to have consistent highlighting and navigation available when comparing two versions of the same file. If we only support file, then we'd have different features available on each diff side.
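The collision described in the Motivation can be illustrated with a toy document store (plain JavaScript, purely illustrative — not the LSP's actual code):

```javascript
// Keying the store by the full URI keeps the file: and git: versions separate.
const store = new Map();
store.set('file:///foo/bar.rb', { version: 3 });
store.set('git:///foo/bar.rb', { version: 1 });

// Closing the git: document leaves the file: document intact.
store.delete('git:///foo/bar.rb');
console.log(store.has('file:///foo/bar.rb')); // true

// With path-based keys, both URIs collapse to the same entry, so closing one
// scheme's document would have deleted the other's state too.
const pathOf = (uri) => new URL(uri).pathname;
console.log(pathOf('file:///foo/bar.rb') === pathOf('git:///foo/bar.rb')); // true
```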
|
gharchive/pull-request
| 2023-08-15T17:55:10 |
2025-04-01T04:55:37.972815
|
{
"authors": [
"vinistock"
],
"repo": "Shopify/ruby-lsp",
"url": "https://github.com/Shopify/ruby-lsp/pull/891",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
57181943
|
Support lz4 compression
https://github.com/cloudflare/golz4 should be simple enough
https://issues.apache.org/jira/browse/KAFKA-1456
https://issues.apache.org/jira/browse/KAFKA-1493
And yet as with snappy, the framing format kafka uses is not supported by the "normal" go lz4 package.
http://fastcompression.blogspot.ca/2013/04/lz4-streaming-format-final.html
Again? That's annoying...
This is still not documented on the official protocol page, we don't need it, and nobody else seems to be asking for it. I'm gonna icebox this until somebody actually wants it.
Kafka v0.10 supports LZ4 with correct offsets:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-57+-+Interoperable+LZ4+Framing
https://issues.apache.org/jira/browse/KAFKA-3160
Can this be reopened?
I'll re-open, but I don't have any time to implement this in the next little while so it won't mean much.
See #786
@eapache I guess this can be closed. I tried LZ4 on our 0.10.0.1 cluster. It didn't outperform GZIP for our use case, but it works.
Yup, thanks. If anybody really wants the weird framing support then that can be another ticket but I doubt anyone cares.
|
gharchive/issue
| 2015-02-10T14:34:11 |
2025-04-01T04:55:37.977896
|
{
"authors": [
"cdstelly",
"eapache",
"edenhill",
"rtreffer"
],
"repo": "Shopify/sarama",
"url": "https://github.com/Shopify/sarama/issues/256",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1095441247
|
Fix HighWaterMarkOffset and messages offset in mocks.Consumer
The real PartitionConsumer's HighWaterMarkOffset() method only returns zero until the first message is consumed, but mocks.PartitionConsumer was updating HighWaterMarkOffset() every time YieldMessage was called.
This commit changes the mock PartitionConsumer's behavior to match the real PartitionConsumer.
Fix #2050
My bad, I didn't see that the behavior of the Consumer was fixed by #2082.
|
gharchive/pull-request
| 2022-01-06T15:44:02 |
2025-04-01T04:55:37.979805
|
{
"authors": [
"dkotvan"
],
"repo": "Shopify/sarama",
"url": "https://github.com/Shopify/sarama/pull/2101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2362954303
|
How to use api-codegen-preset in checkout UI extensions?
Is it possible to use api-codegen-preset within a checkout UI extension? I tried setting it up according to the docs, but it didn't work.
Here is the .graphqlrc.ts:
import { ApiType, shopifyApiProject } from '@shopify/api-codegen-preset';
// See https://www.npmjs.com/package/@shopify/storefront-api-client
export default {
schema: 'https://shopify.dev/storefront-graphql-direct-proxy/2024-04',
documents: ['./src/**/*.{js,ts,jsx,tsx}'],
projects: {
default: shopifyApiProject({
apiType: ApiType.Storefront,
apiVersion: '2024-04',
documents: ['./src/**/*.{js,ts,jsx,tsx}'],
outputDir: './types',
}),
},
};
Here is the output:
✔ Parse Configuration
⚠ Generate outputs
❯ Generate to ./types/storefront-2024-04.schema.json
✔ Load GraphQL schemas
✖
Unable to find any GraphQL type definitions for the following pointers:
- ./src/**/*.{js,ts,jsx,tsx}
◼ Generate
❯ Generate to ./types/storefront.types.d.ts
✔ Load GraphQL schemas
✖
Unable to find any GraphQL type definitions for the following pointers:
- ./src/**/*.{js,ts,jsx,tsx}
◼ Generate
❯ Generate to ./types/storefront.generated.d.ts
✔ Load GraphQL schemas
✖
Unable to find any GraphQL type definitions for the following pointers:
- ./src/**/*.{js,ts,jsx,tsx}
◼ Generate
Am I missing something, or does it not work in a UI extension?
I double checked the paths were correct.
I confirmed the query was named:
const { data } = await query(
`#graphql
query getProductsOnOffer($handle: String!, $first: Int!) {
collection(handle: $handle) {
products(first: $first) {
nodes {
id
title
images(first:1){
nodes {
url
}
}
variants(first: 1) {
nodes {
id
price {
amount
}
}
}
}
}
}
}`,
{
variables: { handle: 'automated-collection', first: 5 },
},
);
Hi @nboliver-ventureweb, what version of api-codegen-preset are you using?
@matteodepalo I'm not sure what version it was; it would have been the latest version available when this ticket was created. I have since removed it from the project because it wasn't working.
@pinktonio No, I haven't had a chance to revisit it, but would be keen to get it working!
Did you try adding the extension pluckConfig as described here? https://github.com/Shopify/shopify-app-js/blob/5134bcc4344fe42518873ec1b2d213a435948e11/packages/api-clients/api-codegen-preset/README.md#example-graphqlrcts-file
I had the same issue, but I was able to resolve it. I hope this helps others as well.
// .graphqlrc.ts
import {ApiType, shopifyApiProject} from '@shopify/api-codegen-preset';
export default {
schema: 'https://shopify.dev/admin-graphql-direct-proxy',
documents: ['src/graphql/*.ts'], // This doesn't work
projects: {
default: shopifyApiProject({
apiType: ApiType.Admin,
apiVersion: '2024-10',
outputDir: './types',
documents: ['src/graphql/*.ts'], // This configuration works
}),
documents: ['src/graphql/*.ts'], // This doesn't work
},
};
This is my configuration file. One thing to be careful about is that you need to include documents within shopifyApiProject. If you don't, the path you reference will be overwritten, and **/*.{ts,tsx} will always be referenced.
// src/graphql/query.ts
export const SHOP_QUERY = /* GraphQL */ `
query Shop {
shop {
name
}
}
`;
This is the query I wrote. There is another important thing to note here: you must use /* GraphQL */ as the magic comment instead of #graphql. While #graphql was recognized when I was working with Hydrogen, for some reason @shopify/api-codegen-preset's pluckConfig doesn't seem to handle it correctly.
@nboliver-ventureweb good point, I'll reopen this so that we can keep track of the change to the docs.
|
gharchive/issue
| 2024-06-19T18:40:27 |
2025-04-01T04:55:37.986052
|
{
"authors": [
"matteodepalo",
"nboliver-ventureweb",
"ujike-ryota"
],
"repo": "Shopify/shopify-app-js",
"url": "https://github.com/Shopify/shopify-app-js/issues/1081",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1330552610
|
The app couldn’t be loaded
When I start the app and type https://xxxx/login?shop=yyy.myshopify.com in the browser, it goes to the installation page to ask for permission. However, it does not work when I select a store to test from my app; there is no '/login' in the URL.
I have fixed this by adding
app.get('/', async (req, res) => {
  return res.redirect(`/login?shop=${req.query.shop}`);
});
However, it gets caught in a redirect loop. The app is installed incorrectly. It displays: "The app couldn't be loaded. The app can't load due to an issue with browser cookies. Try enabling cookies in your browser …"
I started the application with "shopify app serve".
Any help here is appreciated.
Thanks.
We have fixed this. Thanks.
|
gharchive/issue
| 2022-08-05T22:38:49 |
2025-04-01T04:55:37.996756
|
{
"authors": [
"huyvo"
],
"repo": "Shopify/shopify-marketplaces-admin-app",
"url": "https://github.com/Shopify/shopify-marketplaces-admin-app/issues/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
312956544
|
http://shopify.github.io/shopify_api
The URL in the description 404s: http://shopify.github.io/shopify_api
Where can I find the complete documentation for this Rails gem and its usage? I know the Shopify API reference, but I am looking for gem-specific usage.
Fixed, closing.
|
gharchive/issue
| 2018-04-10T14:45:39 |
2025-04-01T04:55:37.998404
|
{
"authors": [
"0ptimusrhyme",
"farooqch11",
"mikkoh"
],
"repo": "Shopify/shopify_api",
"url": "https://github.com/Shopify/shopify_api/issues/425",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
335613562
|
Add support for ApiPermissions#revoke
Fixes https://github.com/Shopify/shopify_api/issues/438.
Adds support for ApiPermission#revoke, which allows applications to remotely uninstall themselves.
Removes the OAuth class as it's now deprecated.
Actually, the OAuth#revoke action still seems to be working, so I don't think we want to 🔥it here. Maybe we can just add a comment to let folks know that it is deprecated in favour of ApiPermission#revoke.
Added the kernel warning as @jtgrenz suggested. I've also changed the class method on ApiPermission from self.revoke to self.destroy to be more descriptive.
|
gharchive/pull-request
| 2018-06-26T00:22:26 |
2025-04-01T04:55:38.000461
|
{
"authors": [
"jamiemtdwyer"
],
"repo": "Shopify/shopify_api",
"url": "https://github.com/Shopify/shopify_api/pull/448",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1600114972
|
Outdated
Just getting started with CLI 3.
I came here from a link in the Shopify docs, but this example is already outdated, as it uses a YAML file instead of TOML.
It also didn't make sense to me why a global app embed would go in the blocks folder. I guess there was nowhere else to put it?
Too bad.
|
gharchive/issue
| 2023-02-26T16:23:31 |
2025-04-01T04:55:38.002404
|
{
"authors": [
"Chang1ng",
"asacarter"
],
"repo": "Shopify/theme-extension-getting-started",
"url": "https://github.com/Shopify/theme-extension-getting-started/issues/25",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1755059214
|
Add an LRU cache to the volume manager
See issue #18 Part 1:
[X] Add the cache to the volume manager
[X] Check the cache for the sector root. If the sector does not exist in the cache, read it from disk, and add it.
[X] Add newly written sectors to the cache
[ ] Write unit tests
It would also be nice to track cache hits and misses. Adding counters to the volume manager and incrementing them with the atomic package should make that very straightforward.
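As a language-neutral illustration of the checklist items (the project itself is Go; this JavaScript sketch with hit/miss counters is mine, not the PR's code):

```javascript
// Minimal LRU cache backed by Map's insertion order, with hit/miss counters.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
    this.hits = 0;
    this.misses = 0;
  }
  get(key) {
    if (!this.map.has(key)) { this.misses++; return undefined; }
    this.hits++;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as most recently used
    return value;
  }
  put(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.capacity)
      this.map.delete(this.map.keys().next().value); // evict least recently used
    this.map.set(key, value);
  }
}

const cache = new LRUCache(2);
cache.put('a', 1);
cache.put('b', 2);
cache.get('a');              // hit; 'a' becomes most recently used
cache.put('c', 3);           // evicts 'b'
console.log(cache.get('b')); // undefined (miss)
console.log(cache.hits, cache.misses); // 1 1
```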
go.sia.tech/hostd/rhp/v3/TestStoreSector is failing due to cache hit. You should be able to fix it with a cache eject rpc_test.go:220: expected error when reading sector
In the updated code, after detecting a cache hit, the sector is returned as expected; but before doing so, the sector is removed from the cache using vm.cache.Remove(root) to simulate a cache miss. The test should pass after making this change; do let me know your thoughts.
Looks like TestRemoveCorrupt also needs to set the cache to 0. Try running go test -race -tags=testing ./... to see if there are any more.
After running go test -race -tags=testing ./..., no other packages appear to need to resize the cache to 0. The storage package handles that properly in its tests.
Great job on this!
|
gharchive/pull-request
| 2023-06-13T14:43:23 |
2025-04-01T04:55:38.035868
|
{
"authors": [
"SAHU-01",
"n8maninger"
],
"repo": "SiaFoundation/hostd",
"url": "https://github.com/SiaFoundation/hostd/pull/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1342095101
|
🛑 Quotes API is down
In 8ec1908, Quotes API (https://quotes.api.vestal.tk) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Quotes API is back up in 8ba0d78.
|
gharchive/issue
| 2022-08-17T17:53:05 |
2025-04-01T04:55:38.074586
|
{
"authors": [
"Sid220"
],
"repo": "Sid220/server-status",
"url": "https://github.com/Sid220/server-status/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2423423343
|
User request: filter studies by flair
https://lichess.org/forum/team-l1chess-tools-users-team/is-there-any-way-to-organize-and-search-our-own-studies-in-lichess?page=4#35
Abandoning this, as filtering and ordering stuff should be done on the server.
|
gharchive/issue
| 2024-07-22T17:47:35 |
2025-04-01T04:55:38.076501
|
{
"authors": [
"Siderite"
],
"repo": "Siderite/lichessTools",
"url": "https://github.com/Siderite/lichessTools/issues/715",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1553611798
|
Expands functionality to actually use alpha value
Expands the functionality by actually passing the alpha and mode parameters to the extension, and updates the documentation to explain how the different modes work and how they affect the passed parameters.
Also removed superfluous operations in the DLL and GML, as the COLORREF structure used by the SetLayeredWindowAttributes call is already in BGR color format. The bitmasking is only necessary because COLORREF should have 0 alpha, while GML allows composing ABGR colors. The resulting binary layout of casting the passed GML color to an int and bitmasking it is equal to calling the RGB macro defined by wingdi.h on the individual color components.
Honestly had no idea it used BGR already, guess that saves a calculation
|
gharchive/pull-request
| 2023-01-23T19:07:18 |
2025-04-01T04:55:38.081434
|
{
"authors": [
"AtlaStar",
"Sidorakh"
],
"repo": "Sidorakh/window-colorkey",
"url": "https://github.com/Sidorakh/window-colorkey/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
459286527
|
how to get the value using jquery?
How to use the JQuery to get the color value?
In this jsfiddle, I am trying to use $("#colorvalue").text($("#test").val()) to get my color values, e.g. #000000. However, if you see the console log, the $("#test").length is surprisingly 0. It seems that the new Pickr() somehow removed the #test and replaced it with something else. Anyway, is there a way to use JQuery to return the color value? Or how the color values should be returned?
Thank you for the great package! I love it.
You can use pickr.getColor().toHEXA() to get the current HEX color value as an array (e.g. ['ff', 'fa', 'fb']), or pickr.getColor().toHEXA().toString() to get it as a string value (e.g. #fffafb).
The element you pass to pickr gets replaced by pickr, so after initialization .color-picker won't exist anymore. I think I know what you want, check out this updated fiddle: http://jsfiddle.net/z2qd3jkf/
You also used a really, really old version. Be sure to load the latest one :)
Thank you for the prompt reply. It works fine.
Yet, I think it could be more convenient if the replacement pickr element inherited the original id of the div so that it could be selected by id. I am asking because I will have multiple dynamically created pickrs on the page, and I would like to return the value of whichever pickr changes, instead of having a callback function for each of the pickrs.
I would like to reopen this question. Is there a way of getting the color by div ID or anything? Suppose I have two pickrs on a page and I would like to get the color of each of them using jQuery or pure JavaScript. Is it possible?
What are you trying to achieve? You need at least the instance of pickr to get a color.
Pickr is completely independent of any framework or library; whether you are using jQuery as a dom-selector or d3 makes no difference :) You can get the current color of each by just calling getColor() on a pickr instance. Take a look at the methods section in the docs.
Just create two pickr instances and you're done. Take a look at this fiddle
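If you really want id-based lookup on top of that, a small registry on your page will do. A sketch (Pickr.create and getColor().toHEXA() are pickr's documented API; the registry helpers are made-up names):

```javascript
// Map element ids to pickr instances so any picker's color can be
// looked up later by id.
const pickers = {};

function registerPicker(id, pickrInstance) {
  pickers[id] = pickrInstance;
}

function getHexById(id) {
  const instance = pickers[id];
  if (!instance) return null;
  return instance.getColor().toHEXA().toString();
}

// In a real page you would register each instance after creating it:
// registerPicker('picker-1', Pickr.create({ el: '#picker-1', theme: 'classic' }));
```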
I just have multiple pickrs on the page. Each pickr has its own dynamically created id, and I would like to use jQuery to get the color of any pickr by its id.
Why don't you have a centralized place where you store your instances / application state? You need to initialize pickr somewhere, therefore you got the instance stored in a variable on your page or bundle. Pickr isn't meant to polyfill or enhance a native HTML5 feature like input[type=color].
I can't imagine a situation where you don't have access to the pickr instance anymore... The only possible solution would be to wrap pickr into a web-component, then you could "access" it via document.getElement
I'll close this for now. Re-open if you have any questions.
Calling the Simonwep picker init twice on the same page resets the value in the edit scenario.
|
gharchive/issue
| 2019-06-21T16:52:09 |
2025-04-01T04:55:38.207803
|
{
"authors": [
"Simonwep",
"nomanshabb",
"slfan2013"
],
"repo": "Simonwep/pickr",
"url": "https://github.com/Simonwep/pickr/issues/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
420146556
|
pickr.destroyAndRemove() crashes due to typo in destroy method's name
pickr.destroyAndRemove() and pickr.destroy() both crash because they try to delete a Selectable component that has a typo in its destroy method.
The bug can be seen in pickr/src/js/helper/selectable.js at row 19
https://github.com/Simonwep/pickr/blob/master/src/js/helper/selectable.js#L19
The method is called destory instead of destroy
Thanks!
Oh damn, fixed in ad2bd78763853d0b0e521bd916c9a33a87b37f62 ... Sorry must have happened when I was refactoring the selectable module :/
|
gharchive/issue
| 2019-03-12T18:39:49 |
2025-04-01T04:55:38.210534
|
{
"authors": [
"Simonwep",
"tlaanemaa"
],
"repo": "Simonwep/pickr",
"url": "https://github.com/Simonwep/pickr/issues/73",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1640648870
|
Wither drop special weapons
Description: withers should have a chance of dropping a special simply weapon
Extras (picture, video or a crash report if it applies):
Ender dragons should also have a chance
|
gharchive/issue
| 2023-03-25T20:20:25 |
2025-04-01T04:55:38.268139
|
{
"authors": [
"SinmisModdingCommunity",
"Ultrilleon"
],
"repo": "SinmisModdingCommunity/The-Createville-Pack",
"url": "https://github.com/SinmisModdingCommunity/The-Createville-Pack/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
174952385
|
Can't copy from PDFs when opened with Evince
I tried copying text from four different PDFs (none of which are copy-protected I think) opened with Evince and wasn't able to. I am able to copy them if I open the files using pdf.js in Firefox so it might be an issue with Evince. Interestingly, when I try to copy the selected text in Evince using ^C then again using the right-click menu option, Evince crashes and the terminal outputs:
(evince:6556): Gdk-WARNING **: Error 22 (Invalid argument) dispatching to Wayland display.
If I don't try to copy using ^C beforehand, it won't crash when I try to copy using the right-click menu option. Not sure if that is related though. The problems don't occur in i3 so I suspect it is something related to sway/wlc. I think the relevant lines in sway.log are at ~514.
Maybe I should post a ticket in https://github.com/Cloudef/wlc as well?
wlc doesn't yet support synchronization between the X and Wayland clipboards.
|
gharchive/issue
| 2016-09-04T14:48:01 |
2025-04-01T04:55:38.272919
|
{
"authors": [
"SirCmpwn",
"VoidNoire"
],
"repo": "SirCmpwn/sway",
"url": "https://github.com/SirCmpwn/sway/issues/880",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
212992234
|
How to import click-confirm?
I just got the message "Unknown custom element: <click-confirm>", whether I import it in the vue file or in index.js and register it via Vue.use.
<click-confirm>
<button type="button" class="close" @click="()=>{window.alert(123);}">
<span aria-hidden="true">×</span>
</button>
</click-confirm>
I think this is more of a general VueJS thing than click-confirm specifically; however, I'm not sure of the reason to enclose your JavaScript call in an arrow function, and I think it's unnecessary. Try this (not tested):
<click-confirm>
<button type="button" class="close" @click="window.alert(123)">
<span aria-hidden="true">×</span>
</button>
</click-confirm>
For anything more complex, pull the code out in to a VueJS method and then call the method with @click.
Also, avoid ES6 syntax in environments which aren't going to be compiled and polyfilled, as some browsers in use don't support it.
https://kangax.github.io/compat-table/es6/
Also remember you'll need some other button content to complement the aria-hidden span, for the case where it actually is hidden, as otherwise the screen reader won't know how to describe the button. An "aria-label" attribute on the button element would be one way to address it.
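Pulling the handler out into a method might look like this (a sketch; the component shape and method name are illustrative):

```javascript
// Illustrative Vue options object: the click handler lives in `methods`
// and the template calls it by name via @click="closeAlert".
const component = {
  template: `
    <click-confirm>
      <button type="button" class="close" aria-label="Close" @click="closeAlert">
        <span aria-hidden="true">×</span>
      </button>
    </click-confirm>`,
  methods: {
    closeAlert() {
      window.alert(123);
    }
  }
};
```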
got it, thx
|
gharchive/issue
| 2017-03-09T10:17:38 |
2025-04-01T04:55:38.276697
|
{
"authors": [
"SirLamer",
"tychio"
],
"repo": "SirLamer/click-confirm",
"url": "https://github.com/SirLamer/click-confirm/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
381098032
|
FR: Generate Typescript Typings from Manifest
The manifest knows the fields of a component; however, without typings this information is lost and we don't get IntelliSense and type checking for our components.
In the following video you can see the src/components/Styleguide-FieldUsage-ItemLink from the Sitecore/jss react boilerplate with and without typings:
Describe the solution you'd like
I hacked a very very basic proof of concept into `scripts/disconnected-mode-proxy.js` to transform the manifest into a generated source file `src/temp/ComponentProps.ts`.
Click here to expand the code
// Generate type definitions
const manifestManager = new ManifestManager({appName: proxyOptions.appName,
rootPath: proxyOptions.appRoot,
watchOnlySourceFiles: proxyOptions.watchPaths,
requireArg: proxyOptions.requireArg
})
const manifest = manifestManager.getManifest(proxyOptions.language);
manifest.then(updateComponentDefinitions);
function updateComponentDefinitions(manifest) {
const fieldTypes = {
"Single-Line Text": "string",
"Multi-Line Text": "string",
"Rich Text": "string",
"Treelist": "string",
"Droptree": "string",
"General Link": "LinkFieldValue",
"Image": "ImageFieldValue",
"File": "string",
"Number": "number",
"Checkbox": "string",
"Date": "string",
"Datetime": "string"
}
const props = `import { ImageFieldValue, LinkFieldValue } from "@sitecore-jss/sitecore-jss-manifest/types/generator/manifest.types";
` +
manifest.templates.map((template) => (
`export type ${template.name.replace(/[-_ ]/g, '')}Props = {
fields: {${(template.fields || []).map(field => (`
${field.name}: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: ${fieldTypes[field.type]};
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},`)).join('')}
},
params: {}
}`)).join('\n\n');
fs.writeFileSync(path.resolve(__dirname, '../src/temp/ComponentProps.ts'), props);
}
The generated definition which is used for the video above looks like this
Click here to expand the code
import { ImageFieldValue, LinkFieldValue } from "@sitecore-jss/sitecore-jss-manifest/types/generator/manifest.types";
export type ExampleCustomRouteTypeProps = {
fields: {
headline: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: string;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
author: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: string;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
content: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: string;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
},
params: {}
}
export type StyleguideFieldUsageLinkProps = {
fields: {
externalLink: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: LinkFieldValue;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
internalLink: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: LinkFieldValue;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
emailLink: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: LinkFieldValue;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
paramsLink: {
/** The value property of a field object contains the "raw", unrendered value of a field stored in Sitecore. */
value?: LinkFieldValue;
/**
* When EE is active, editable will contain all of the additional markup that EE emits for a field.
* When EE is not active, editable will contain the rendered field value - which may differ from the "raw" field value for certain field types (e.g. Image, Rich Text, Link) or if the renderField pipeline has been extended / customized.
*/
editable?: string;
},
},
params: {}
}
Unfortunately the manifest does not include information about the params.
It would be cool if you could provide a helper to generate not only the JSON for the sitecore-import.json but also the typings, including the params.
This would help new developers to get into JSS more easily because of intellisense and type checking.
This seems like a nice feature, but probably not one we'd bake in at least initially. While I'm a big fan of Typescript + React, the sample app is using ES6 for the simple reason that more people are using that compared to TS.
Upon general release, the JSS JavaScript will be open source (Apache 2.0), including the manifesting pieces. You're more than welcome to build such a helper, and release your own JSS + React + TS sample app if you want to. :)
Oh wow I didn't know about that!
Is there already a rough timeline for the general release?
Regarding the typings:
I understand that generating typings might not be that relevant for the React boilerplate.
However, generating type definitions from the manifest might also be interesting for Angular and Vue, not only React.
Also I guess that would be rather a tool of @sitecore-jss/sitecore-jss-dev-tools than for the jss examples.
If you don't want to provide that feature for now could you maybe add params to the manifest.templates items?
By params you're referring to rendering parameters, right? The manifest doesn't contain that for a template since from a Sitecore perspective they are on the rendering, not the template. But it should be possible to lookup the rendering and get the params there.
Yes exactly - addComponent requires ComponentDefinition arguments:
addComponent: (...components: ComponentDefinition[]) => void;
ComponentDefinition includes
export interface ComponentDefinition {
name?: string;
displayName?: string;
/**
* The data fields that provide content data to the component
*/
fields?: FieldDefinition[];
/**
* The names of JSS placeholders that this component exposes
* (keys of any placeholder components added to this component's JS view)
*/
placeholders?: PlaceholderDefinition[] | string[];
/**
* Defines non-content parameters.
* Parameters are more developer-focused options than fields, such as configurable CSS classes.
*/
params?: string[] | RenderingParameterDefinition[];
// ....
}
So all components are able to define params.
Unfortunately the only public method of the ManifestManager to access those values is getManifest which is tightly coupled to the required sitecore import json file structure.
So although I defined all my values with addComponent it is almost impossible to get those values back from the ManifestManager.
There are probably a lot of different solutions:
you could add the params information to the templates array of the manifest object (although sitecore would just ignore them)
you could add components to the manifest object (although sitecore would just ignore them)
you could extend the ManifestManager to provide a function getComponentDefinitions and provide the same information to setManifestUpdatedCallback or add setComponentDefinitionsCallback.
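To illustrate the third option: a hypothetical setComponentDefinitionsCallback could feed each ComponentDefinition (including its params) into a tiny generator like the one below. Every name here is made up; nothing like this exists in the dev-tools today:

```javascript
// Hypothetical: turn a ComponentDefinition (fields + params) into a
// TypeScript props type. fieldTypes mirrors the mapping from the proof
// of concept above, shortened to two entries here.
const fieldTypes = { 'Single-Line Text': 'string', 'Number': 'number' };

function componentToType(component) {
  const fields = (component.fields || [])
    .map((f) => `    ${f.name}: { value?: ${fieldTypes[f.type] || 'unknown'}; editable?: string };`)
    .join('\n');
  const params = (component.params || [])
    .map((p) => `    ${typeof p === 'string' ? p : p.name}: string;`)
    .join('\n');
  return `export type ${component.name.replace(/[-_ ]/g, '')}Props = {\n` +
    `  fields: {\n${fields}\n  },\n  params: {\n${params}\n  }\n}`;
}
```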
Correct, the interface used by disconnected mode is the generated manifest, not the object the manifest json is generated from.
This has its advantages and disadvantages, to be sure. Have you considered finding the params by looking at the renderings in the manifest (json)?
Hey @kamsar
While this would probably be possible, it would lead to code that is very hard to understand and maintain in every project that follows such an approach, because the developer has to have a very deep understanding of the manifest structure, and I am not sure if it will work if a property isn't used in any rendering.
The manifest generator is a full transpiler which transpiles definitions into the sitecore-import.json structure. Usually transpilers are split into three parts: parsing, modification and writing.
So if you could maybe split that a little bit, so that it would be possible to access the full parsed values (ComponentDefinition, TemplateDefinition, PlaceholderDefinition, RouteDefinition and ItemDefinition), it would be very easy to add modification adapters which would generate the sitecore-import.json or the TypeScript definitions.
This pattern could also allow to reuse one file watcher of the ManifestManager instead of having multiple ManifestManagers running at the same time.
Furthermore it could be possible to have a writing part which would gather and write all changes at once to the disc so that there won't be multiple webpack recompilations for a single manifest change.
|
gharchive/issue
| 2018-11-15T10:33:54 |
2025-04-01T04:55:38.334831
|
{
"authors": [
"jantimon",
"kamsar"
],
"repo": "Sitecore/jss",
"url": "https://github.com/Sitecore/jss/issues/74",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
871223201
|
[sitecore-jss] [mediaApi] updateImageUrl fix for invalid hash errors on Sitecore
Description / Motivation
updateImageUrl now only switches to JSS media handler prefix (e.g. "/-/jssmedia") if imageParams are sent. Otherwise, the original media URL is returned.
This fixes hash errors ("ERROR MediaRequestProtection: An invalid/missing hash value was encountered.") from appearing in the Sitecore logs, due to the fact that the prefix is taken into account when calculating the hash value.
If image size parameters are used, the hash is removed from the image URL and the JSS handler and size whitelisting take over.
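The described behavior can be sketched like this (a simplified illustration only, not the actual library code; the helper name is made up and the real implementation handles more URL shapes):

```javascript
// Sketch: only rewrite to the JSS media handler prefix when imageParams
// are supplied; otherwise return the original URL untouched so its
// server-generated hash stays valid.
function updateImageUrlSketch(url, imageParams) {
  if (!imageParams || Object.keys(imageParams).length === 0) {
    return url; // keep the original URL and its hash
  }
  // Switch to the JSS handler prefix, drop the hash, then append params;
  // size whitelisting on the server takes over from here.
  const base = url
    .replace('/-/media/', '/-/jssmedia/')
    .replace(/([?&])hash=[^&]*&?/, '$1')
    .replace(/[?&]$/, '');
  const query = Object.entries(imageParams)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join('&');
  return base + (base.includes('?') ? '&' : '?') + query;
}
```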
Testing Details
Unit test coverage, tested Next.js and React sample apps.
Types of changes
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to change)
[ ] Documentation update (non-breaking change; modified files are limited to the /docs directory)
Also, please pull the latest changes from dev; there is a conflict.
|
gharchive/pull-request
| 2021-04-29T16:33:19 |
2025-04-01T04:55:38.339773
|
{
"authors": [
"ambrauer",
"illiakovalenko"
],
"repo": "Sitecore/jss",
"url": "https://github.com/Sitecore/jss/pull/681",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2369086145
|
iQIYI check-in error [Bug]
💻 System environment
Windows
📦 Deployment environment
Qinglong
🐛 Problem description
The iQIYI check-in fails. I have retried capturing the cookie, but it still fails.
📝 Run log
----------Starting the "iQIYI" check-in----------
{'code': 'Q00393', 'msg': '超过次数赠送限额', 'data': None}
{'code': 'Q00393', 'msg': '超过次数赠送限额', 'data': None}
{'code': 'Q00393', 'msg': '超过次数赠送限额', 'data': None}
Watch time farmed so far: 0 seconds. Data sync has a delay; for reference only.
Account 1: ❌❌❌❌❌
HTTPSConnectionPool(host='msg.qy.net', port=443): Max retries exceeded with url: /b?u=f600a23f03c26507f5482e6828cfc6c5&pu=1593061667&p1=1_10_101&v=5.2.66&ce=b2f9df8b83aa41f3addc2c8814150e73&de=1616773143.1639632721.1639653680.29&c1=2&ve=3e1798eba7674a13a844907e1b583b39&ht=0&pt=3348.501989&isdm=0&duby=0&ra=5&clt=&ps2=DIRECT&ps3=&ps4=&br=mozilla%2F5.0+%28windows+nt+10.0%3B+win64%3B+x64%29+applewebkit%2F537.36+%28khtml%2C+like+gecko%29+chrome%2F96.0.4664.110+safari%2F537.36&mod=cn_s&purl=https%3A%2F%2Fwww.iqiyi.com%2Fv_1eldg8u3r08.html%3Fvfrm%3Dpcw_home%26vfrmblk%3D712211_cainizaizhui%26vfrmrst%3D712211_cainizaizhui_image1%26r_area%3Drec_you_like%26r_source%3D62%2540128%26bkt%3DMBA_PW_T3_53%26e%3Db3ec4e6c74812510c7719f7ecc8fbb0f%26stype%3D2&tmplt=2&ptid=01010031010000000000&os=window&nu=0&vfm=&coop=&ispre=0&videotp=0&drm=&plyrv=&rfr=https%3A%2F%2Fwww.iqiyi.com%2F&fatherid=1043324375647345&stauto=1&algot=abr_v12-rl&vvfrom=&vfrmtp=1&pagev=playpage_adv_xb&engt=2&ldt=1&krv=1.1.85&wtmk=0&duration=6841865&bkt=&e=&stype=&r_area=&r_source=&s4=991811_dianshiju_tbrb_image2&abtest=1707_B%2C1550_B&s3=677892_dianshiju_tbrb&vbr=966873&mft=0&ra1=2&wint=3&s2=pcw_home&bw=10&ntwk=18&dl=15.27999999999997&rn=0.2847593695945669&dfp=&stime=1719197520355.854&r=1287194776986998&hu=1&t=2&tm=79&_=1719197520355.865 (Caused by SSLError(SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1007)')))
That's a problem with your own network.
That's a problem with your own network.
But all my other check-ins succeeded.
That's a problem with your own network.
But all my other check-ins succeeded.
It was indeed a network issue. You may have been using a proxy that intercepted the domain used for the check-in. I ran into this problem too; after adding the domain to the whitelist it worked. See that for reference.
|
gharchive/issue
| 2024-06-24T02:55:13 |
2025-04-01T04:55:38.356958
|
{
"authors": [
"AkenClub",
"Sitoi",
"mingrenmm"
],
"repo": "Sitoi/dailycheckin",
"url": "https://github.com/Sitoi/dailycheckin/issues/400",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
230201117
|
Implementation proposal for Shape2D::CreateFlag()
std::array<Shape2D, 2> CreateTriangleFlag(double poleHeight, double poleWidth, double flagHeight, double flagWidth, const Vec2& pos)
{
const float poleHeightF = static_cast<float>(poleHeight);
const float poleHalfWidthF = static_cast<float>(poleWidth)*0.5f;
const float flagHeightF = static_cast<float>(flagHeight);
const float flagWidthF = static_cast<float>(flagWidth);
Array<Float2> verticesPole(4, pos);
Array<uint32> indicesPole(6);
verticesPole[0].moveBy(-poleHalfWidthF, -poleHeightF);
verticesPole[1].moveBy(+poleHalfWidthF, -poleHeightF);
verticesPole[2].moveBy(+poleHalfWidthF, 0.0f);
verticesPole[3].moveBy(-poleHalfWidthF, 0.0f);
indicesPole = { 0,1,2, 2,3,0 };
Array<Float2> verticesFlag(3, verticesPole[1]);
Array<uint32> indicesFlag(3);
verticesFlag[1].moveBy(flagWidthF, flagHeightF*0.5f);
verticesFlag[2].moveBy(0.0f, flagHeightF);
indicesFlag = { 0,1,2 };
return { Shape2D{verticesPole,indicesPole}, Shape2D{verticesFlag,indicesFlag} };
}
std::array<Shape2D, 2> CreateRectFlag(double poleHeight, double poleWidth, double flagHeight, double flagWidth, const Vec2& pos)
{
const float poleHeightF = static_cast<float>(poleHeight);
const float poleHalfWidthF = static_cast<float>(poleWidth)*0.5f;
const float flagHeightF = static_cast<float>(flagHeight);
const float flagWidthF = static_cast<float>(flagWidth);
Array<Float2> verticesPole(4, pos);
Array<uint32> indicesPole(6);
verticesPole[0].moveBy(-poleHalfWidthF, -poleHeightF);
verticesPole[1].moveBy(+poleHalfWidthF, -poleHeightF);
verticesPole[2].moveBy(+poleHalfWidthF, 0.0f);
verticesPole[3].moveBy(-poleHalfWidthF, 0.0f);
indicesPole = { 0,1,2, 2,3,0 };
Array<Float2> verticesFlag(4, verticesPole[1]);
Array<uint32> indicesFlag(6);
verticesFlag[1].moveBy(flagWidthF, 0.0f);
verticesFlag[2].moveBy(flagWidthF, flagHeightF);
verticesFlag[3].moveBy(0.0f, flagHeightF);
indicesFlag = { 0,1,2, 2,3,0 };
return { Shape2D{ verticesPole,indicesPole }, Shape2D{ verticesFlag,indicesFlag } };
}
std::array<Shape2D, 2> CreateSwallowtailFlag(double poleHeight, double poleWidth, double flagHeight, double flagWidthLarger, double flagWidthSmaller, const Vec2& pos)
{
const float poleHeightF = static_cast<float>(poleHeight);
const float poleHalfWidthF = static_cast<float>(poleWidth)*0.5f;
const float flagHeightF = static_cast<float>(flagHeight);
const float flagWidthLargerF = static_cast<float>(flagWidthLarger);
const float flagWidthSmallerF = static_cast<float>(flagWidthSmaller);
Array<Float2> verticesPole(4, pos);
Array<uint32> indicesPole(6);
verticesPole[0].moveBy(-poleHalfWidthF, -poleHeightF);
verticesPole[1].moveBy(+poleHalfWidthF, -poleHeightF);
verticesPole[2].moveBy(+poleHalfWidthF, 0.0f);
verticesPole[3].moveBy(-poleHalfWidthF, 0.0f);
indicesPole = { 0,1,2, 2,3,0 };
Array<Float2> verticesFlag(5, verticesPole[1]);
Array<uint32> indicesFlag(9);
verticesFlag[1].moveBy(flagWidthLargerF, 0.0f);
verticesFlag[2].moveBy(flagWidthSmallerF, flagHeightF*0.5f);
verticesFlag[3].moveBy(flagWidthLargerF, flagHeightF);
verticesFlag[4].moveBy(0.0f, flagHeightF);
indicesFlag = { 0,1,2, 2,3,4, 4,0,2 };
return { Shape2D{ verticesPole,indicesPole }, Shape2D{ verticesFlag,indicesFlag } };
}
This will be adopted once the implementation policy for letting Shape2D hold multiple shapes is settled.
|
gharchive/issue
| 2017-05-21T06:44:00 |
2025-04-01T04:55:38.359111
|
{
"authors": [
"Reputeless",
"agehama"
],
"repo": "Siv3D/OpenSiv3D",
"url": "https://github.com/Siv3D/OpenSiv3D/issues/158",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
650957776
|
About the loss term
According to the paper, you use CE loss for the pose label instead of BCE loss. But in the AVA dataset there are some boxes without a pose annotation (by my count, about 1500 boxes) and also some boxes with more than one pose annotation. How do you deal with that?
You may refer to losses.py for details.
|
gharchive/issue
| 2020-07-04T21:19:02 |
2025-04-01T04:55:38.369553
|
{
"authors": [
"Eddiesyn",
"Siyu-C"
],
"repo": "Siyu-C/ACAR-Net",
"url": "https://github.com/Siyu-C/ACAR-Net/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1774547653
|
Better how-to guide? A step-by-step readme or something.
I have installed everything on a fresh Linux Mint latest version as of today.
All the requirements are met, but it stops there, it seems firstly like there is an issue with:
"UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.24.3)
I can't even get to the part where it generates the head. And where do I feed it the image I want to create a head from?
@Lustgard Currently we're missing the camera pose estimation stage which means you can't use any random input image. What I did for testing was to use Stable Diffusion + ControlNet to generate new people in the exact same pose as the example dataset images, so I could keep the camera values from the existing dataset.json. Someone wrote up the instructions on my Reddit post about it: https://www.reddit.com/r/StableDiffusion/comments/14h0pf4/comment/jpf5qea/
If you want to generate the 3D PLY mesh as well as the video, you can check the required changes and dependencies on my forked repo:
https://github.com/hack-mans/PanoHead
Thanks! I was able to solve the wrong NumPy version with the help of ChatGPT.
For others who might need help:
"conda install numpy=1.22.3"
Simple yes, but I'm no linux master yet, so baby steps atm.
@hack-mans That forked repo is very helpful. Thank you for that.
I wonder if it's possible to infer a camera pose if we know the orientation of the face, using MediaPipe's face mesh pose estimation as an example.
https://developers.google.com/mediapipe/solutions/vision/face_landmarker/python
@OverwriteDev I've almost got it working using the EG3D + Deep3DFaceRecon code but it's slightly off
I have actually tried using Deep3DFaceRecon to estimate the pose; however, the result is a lot worse than what you can get by using the given pose (see below). I wonder if the authors could kindly provide a hint about what they used to estimate face poses?
given pose:
deep3d pose:
@OverwriteDev I've almost got it working using the EG3D + Deep3DFaceRecon code but it's slightly off
I have actually tried using Deep3DFaceRecon to estimate the pose, however, the result is a lot worse than what you can get by using the given pose (see below). I wonder if the authors could kindly provide a hint of what they used to estimate face poses?
given pose: deep3d pose:
Short answer is that we use a different cropping, centering, and pose estimation script than Deep3DFaceRecon. And since that was a company service, I won't be able to share it. In fact, I myself no longer have access. Now I'm trying to find alternatives to achieve similar results but cannot guarantee anything... Will keep you guys posted.
@SizheAn, do you know if the estimation was done using a single image?
Or is this a case where multiple images were used to calculate the camera data, and then just one image was used?
I'm trying to figure out how it would be possible to get such precise data using only one photo.
@SizheAn, do you know if the estimation was done using a single image? Or is this a case where multiple images were used to calculate the camera data, and then just one image was used?
I'm trying to figure out how it would be possible to get such precise data using only one photo.
Only single image. Pretty accurate if you can detect facial landmarks in the image. Our method is a combination of company's service + 3DDFA_V2 (https://github.com/cleardusk/3DDFA_V2). You can check their examples.
We update the scripts and example data for obtaining camera poses and cropping the images for PTI. See https://github.com/SizheAn/PanoHead/blob/main/3DDFA_V2_cropping/cropping_guide.md
This is fantastic @SizheAn , this was massively helpful.
Not only did I manage to get it up and running with the directions you provided but I'm also now more familiar with 3DDFA_V2 which is a pretty awesome project :D
|
gharchive/issue
| 2023-06-26T11:18:46 |
2025-04-01T04:55:38.381712
|
{
"authors": [
"ChikaYan",
"Lustgard",
"OverwriteDev",
"SizheAn",
"carlosedubarreto",
"hack-mans"
],
"repo": "SizheAn/PanoHead",
"url": "https://github.com/SizheAn/PanoHead/issues/13",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2034075439
|
feat: optionally generate comments in the sample file for all fields
todo:
[x] make it available as a checkbox before generating the sample yaml on the website
Added comments to the frontend value generator.
|
gharchive/pull-request
| 2023-12-09T21:36:11 |
2025-04-01T04:55:38.383595
|
{
"authors": [
"Skarlso"
],
"repo": "Skarlso/crd-to-sample-yaml",
"url": "https://github.com/Skarlso/crd-to-sample-yaml/pull/37",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
292867681
|
Enh - Enumerate Local Security Groups
Need to gather and report groups like SMS_SiteSystemToSiteServerConnection... and so on. It might also be good to enumerate local permissions like "Log on as a service" (e.g. using Carbon).
Update added to 1.0.4
|
gharchive/issue
| 2018-01-30T17:21:50 |
2025-04-01T04:55:38.385017
|
{
"authors": [
"Skatterbrainz"
],
"repo": "Skatterbrainz/CMHealthCheck",
"url": "https://github.com/Skatterbrainz/CMHealthCheck/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2400663959
|
WriteMem BunnyHop
You say BunnyHop doesn't write the game's memory... but that isn't the case 🫣
I didn't notice that the README was still saying bhop does not write to memory; already removed, and it will be committed in the next update.
|
gharchive/issue
| 2024-07-10T12:44:23 |
2025-04-01T04:55:38.394202
|
{
"authors": [
"LauraSavall",
"Skwrr"
],
"repo": "Skwrr/Dexterion",
"url": "https://github.com/Skwrr/Dexterion/issues/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
854981972
|
Hello, why can't I get the value? Did I fail to store it, or fail to retrieve it?
class SplashPageState extends State {
// Constant key for the user ID
static const String SP_USER_ID = "sp_user_id";
@override
void initState() {
super.initState();
// Read SP data at app startup; must asynchronously wait for SP initialization to complete.
SpUtil.getInstance();
// Store the USER_ID value
SpUtil.putString(SplashPageState.SP_USER_ID, 'CJP');
// Retrieve the USER_ID value
print('SpUtil === >${SpUtil.getString(SP_USER_ID, defValue: "")}');
This is the printed output:
Performing hot restart...
Syncing files to device Android SDK built for x86...
Restarted application in 935ms.
I/flutter (11877): SpUtil === >
Same problem here, hoping for an answer
Same problem here, hoping for an answer
Please read the demo carefully!
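For anyone hitting the same thing, the demo's point is that `SpUtil.getInstance()` is asynchronous, so the put/get calls above run before initialization has finished. A minimal sketch of the corrected usage (hedged: based on the pattern in the sp_util demo, not an exact excerpt from it):

```dart
// Sketch: await SpUtil's initialization before reading or writing.
// Assumes sp_util's Future-returning getInstance(); verify against the demo.
Future<void> initSp() async {
  await SpUtil.getInstance(); // wait for SharedPreferences to be ready
  SpUtil.putString(SplashPageState.SP_USER_ID, 'CJP');
  print('SpUtil === >${SpUtil.getString(SplashPageState.SP_USER_ID, defValue: "")}');
}
```

With the await in place, the print should show CJP instead of an empty string.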
|
gharchive/issue
| 2021-04-10T04:57:57 |
2025-04-01T04:55:38.397174
|
{
"authors": [
"Caojunping",
"Sky24n",
"zhouying2019"
],
"repo": "Sky24n/sp_util",
"url": "https://github.com/Sky24n/sp_util/issues/4",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1884742056
|
Custom MIDI option ?
Hey @SkyTNT
First of all, thank you for using my MIDI dataset and for creating the Hugging Face app/implementation. It is absolutely great and I love it :)
I was wondering if you want to collaborate and expand this project to add additional features like custom MIDI and inpainting?
I can help you with that if you are interested.
Let me know please. :)
And thank you again for this great project :)
Alex
@SkyTNT Nm, I see you have a custom MIDI option... Sorry, I did not notice it before...
What about inpainting and better models? I can help with that.
Let me know please :)
Nice to collaborate with you. Inpainting is a very good idea, but it seems that only a diffusion model can do it. Do you have any ideas?
@SkyTNT It is possible to do inpainting with autoregressive models. There are also other things that can be done like melody/chords generation and harmonization.
Check out one of my demos here:
https://colab.research.google.com/github/asigalov61/Lars-Ulrich-Transformer/blob/main/Lars_Ulrich_Transformer.ipynb
Your model uses notes_on/notes_off events, which is why it is limited in what it can do. A good idea would be to switch it to score format (think the score format of MIDI.py). Then you will be able to do different things with it.
Anyway, you seem to have better programming skills than I do, so if you can tell me what you need help with, I will try to do it and help you however I can. :)
What do you need help with for this project?
Alex.
@SkyTNT Sorry, my bad :) I did not sleep yesterday, so I did not read your code properly.
Yes, of'course I can help with inpainting. Let me start by making a colab for you and then if you like it we can integrate it to your codebase.
@SkyTNT Here is a very rough draft colab for instrument pitches inpainting. Check it out and let me know what you think, please :)
I am also attaching a sample output MIDI.
Let me know.
Alex.
Pitches_Inpainting_MIDI_Composer.zip
@asigalov61 Thank you so much. I understand how to implement inpainting. However, I feel this approach is limited by the position of the events in the sequence. It would be even better if you could inpaint a track without restrictions (any time, any number of notes).
@SkyTNT You are welcome :)
Yes, this approach is limited in what it can do.
There is a seq2seq approach developed by Stanford: https://github.com/jthickstun/anticipation
This one can do a better job, but it is also somewhat limited and does not always produce good results either.
Let me know what you think.
Alex.
@asigalov61 Cool! This is exactly what I want. Can it be implemented on my model?
@SkyTNT AFAIK unfortunately no because they used seq2seq implementation, while your model is autoregressive.
However, your MIDI processor/tokenizer and your Gradio app can be adopted for their architecture AFAIK.
I can't help with their implementation because I mostly deal with autoregressive models. But you are welcome to reach out to them for collaboration/help.
Hope this is helpful.
And if you need any more help with autoregressive stuff, feel free to let me know :)
Alex
@asigalov61 OK. Thank you. Maybe I'll try to improve my model to do that later.
Google did a lot with "inpainting" with autoregressive models. They first called it "masked language objective" and later "span corruption". See https://arxiv.org/abs/1910.10683
Basically, they add sentinel tokens in the places they want to inpaint, and then the model autoregressively predicts what tokens the sentinel tokens were masking.
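To make the span-corruption idea concrete, here is a small sketch (the sentinel token string and the note-event tokens are made-up illustrations, not T5's or this model's actual vocabulary):

```python
# Sketch of T5-style span corruption on a token sequence.
# The sentinel id below is invented for illustration.
SENTINEL_0 = "<extra_id_0>"

def corrupt_span(tokens, start, length):
    """Replace tokens[start:start+length] with a sentinel.

    Returns (model_input, model_target): the model learns to predict
    the target autoregressively, i.e. to 'inpaint' the masked span.
    """
    masked = tokens[:start] + [SENTINEL_0] + tokens[start + length:]
    target = [SENTINEL_0] + tokens[start:start + length]
    return masked, target

# Hypothetical note-event tokens, just to show the shapes involved.
notes = ["C4_on", "C4_off", "E4_on", "E4_off", "G4_on", "G4_off"]
inp, tgt = corrupt_span(notes, 2, 2)
# inp: ["C4_on", "C4_off", "<extra_id_0>", "G4_on", "G4_off"]
# tgt: ["<extra_id_0>", "E4_on", "E4_off"]
```

At training time the model conditions on `inp` and is taught to emit `tgt`; at inference time, placing a sentinel anywhere in a track asks the model to fill that span in.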
|
gharchive/issue
| 2023-09-06T20:56:51 |
2025-04-01T04:55:38.417894
|
{
"authors": [
"Permafacture",
"SkyTNT",
"asigalov61"
],
"repo": "SkyTNT/midi-model",
"url": "https://github.com/SkyTNT/midi-model/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
831282479
|
Compatibility with Unearthed mod.
Please add tools made from the stones in the Unearthed mod.
https://www.curseforge.com/minecraft/mc-mods/unearthed-fabric
Nah, they can add their own tools for their own materials.
But they don't want to :(
|
gharchive/issue
| 2021-03-14T22:53:58 |
2025-04-01T04:55:38.430680
|
{
"authors": [
"RDKRACZ",
"Skylortrexler"
],
"repo": "Skylortrexler/Sentimentality-2",
"url": "https://github.com/Skylortrexler/Sentimentality-2/issues/13",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
80590491
|
[Feature Request] TiCon AutoSmelt Farming Station Support?
Would it be possible to add said feature?
Also, if it is possible, make sure to check if there is Luck on the axe, because you get more charcoal that way in "vanilla" TiCon.
We call the harvest methods on the item... This should already work.
Nope, not working Video: https://youtu.be/pRIyYFTaKDM
Using FTB Infinity 1.5.1, therefor EnderIO version 367.
He meant in the dev version, AKA the new EIO 2.3 beta.
No I didn't
@tterrag1098 I thought this was implemented in the latest dev version, am I wrong?
Sorry wrong button :(
|
gharchive/issue
| 2015-05-25T16:36:35 |
2025-04-01T04:55:38.477175
|
{
"authors": [
"6210classick",
"ProRedHD",
"tterrag1098"
],
"repo": "SleepyTrousers/EnderIO",
"url": "https://github.com/SleepyTrousers/EnderIO/issues/2511",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
157670427
|
[1.9] Render Bug
Issue Description:
Look at the picture. The menu on the second screen renders a duplicate of the left image from the first screen.
Affected Versions:
EnderIO for Minecraft: 1.9
That's a problem with the texture pack.
@HenryLoenwind no. I checked
It's not a texture pack problem, it's a UI bug.
This bug happens with many machines, but the transceiver doesn't have it :)
The power monitor looks fine with no texture pack:
As such, it is definitely a resource pack issue.
Are the tabs on the power monitor longer than the ones on the transceiver?
He is not wrong, they render differently. It is an inconsistency and makes texturing difficult.
I'll redo those tab graphics. I wanted to go over the tab code anyway.
For anyone who wants to texture the tabs with the new system:
There are now 3 textures that are rendered above each other. The lowest layer is the background (3rd "row" on the icon sheet). It will be rendered left-aligned, overlapping the main window by 3px, and will be 3px shorter than the tab width. For inactive tabs those pixels will be skipped. Next is the left side (2nd texture). It will be rendered the same way as the background. The last one is the right side. It will be rendered right-aligned, 3px shorter than the tab width (meaning it will not overlap the main window).
Depending on your border style you may not want the left or right texture to render all the way. In that case just delete most of one of them. You only need the outermost 3px, the rest is overlapped.
If there's a need for it, I'll change the background texture u-mapping to align with the main texture's width. But for this I'd have to know the tiling grid size of your background (or make a texture that is as wide as possible).
|
gharchive/issue
| 2016-05-31T13:23:28 |
2025-04-01T04:55:38.483281
|
{
"authors": [
"4rz0",
"CrazyPants",
"HenryLoenwind",
"tterrag1098",
"xMrVizzy"
],
"repo": "SleepyTrousers/EnderIO",
"url": "https://github.com/SleepyTrousers/EnderIO/issues/3351",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
460644582
|
net.minecraft.util.ReportedException: Tesselating block in world CCL render error with refined storage conduit
Issue Description:
CCL render error with refined storage conduit
What happens:
Conduit is transparent and it gives an error in the log (see below) but it's fully working
What you expected to happen:
Conduit is visible and no error
Steps to reproduce:
Place a refined storage conduit
Affected Versions (Do not use "latest"):
EnderIO: 5.0.44
EnderCore: 1.12.2-0.5.57
Minecraft: 1.12.2
Forge: 14.23.5.2836
SpongeForge? No
Optifine? No
Single Player and/or Server? Single Player
Your most recent log file where the issue was present:
[Chunk Batcher 1/ERROR]:
CCL has caught an exception whilst rendering a block
BlockPos: x:218, y:68, z:115
Block Class: class crazypants.enderio.conduits.conduit.BlockConduitBundle
Registry Name: enderio:block_conduit_bundle
Metadata: 0
State: enderio:block_conduit_bundle[opaque=false]
Tile at position
Tile Class: class crazypants.enderio.conduits.conduit.TileConduitBundle
Tile Id: enderio:tile_conduit_bundle
Tile NBT: {conduits:{size:0},paintSource-:1b,x:218,y:68,ForgeCaps:{"nuclearcraft:capability_default_radiation_resistance":{radiationResistance:0.0d}},facadeType:0,z:115,id:"enderio:tile_conduit_bundle"}
You can turn off player messages in the CCL config file.
net.minecraft.util.ReportedException: Tesselating block in world
at net.minecraft.client.renderer.BlockRendererDispatcher.renderBlock(BlockRendererDispatcher.java:95) ~[bvm.class:?]
at codechicken.lib.render.block.CCBlockRendererDispatcher.renderBlock(CCBlockRendererDispatcher.java:82) [CCBlockRendererDispatcher.class:?]
at mrtjp.projectred.relocation.MovingBlockRenderDispatcher.renderBlock(renders.scala:275) [MovingBlockRenderDispatcher.class:?]
at net.minecraft.client.renderer.chunk.RenderChunk.rebuildChunk(RenderChunk.java:200) [bxr.class:?]
at net.minecraft.client.renderer.chunk.ChunkRenderWorker.processTask(SourceFile:100) [bxn.class:?]
at net.minecraft.client.renderer.chunk.ChunkRenderWorker.run(SourceFile:43) [bxn.class:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_51]
Caused by: java.lang.ClassCastException: crazypants.enderio.conduit.refinedstorage.conduit.RefinedStorageConduit cannot be cast to com.tfar.stellarfluidconduits.common.conduit.stellar.StellarFluidConduit
at com.tfar.stellarfluidconduits.common.conduit.stellar.StellarFluidConduitRenderer.addConduitQuads(StellarFluidConduitRenderer.java:35) ~[StellarFluidConduitRenderer.class:?]
at crazypants.enderio.conduits.render.DefaultConduitRenderer.addBakedQuads(DefaultConduitRenderer.java:80) ~[DefaultConduitRenderer.class:?]
at crazypants.enderio.conduits.render.ConduitBundleRenderer.addConduitQuads(ConduitBundleRenderer.java:161) ~[ConduitBundleRenderer.class:?]
at crazypants.enderio.conduits.render.ConduitBundleRenderer.getGeneralQuads(ConduitBundleRenderer.java:137) ~[ConduitBundleRenderer.class:?]
at crazypants.enderio.conduits.render.ConduitRenderMapper.mapBlockRender(ConduitRenderMapper.java:34) ~[ConduitRenderMapper.class:?]
at crazypants.enderio.base.render.pipeline.BlockStateWrapperBase.bakeBlockLayer(BlockStateWrapperBase.java:213) ~[BlockStateWrapperBase.class:?]
at crazypants.enderio.base.render.pipeline.BlockStateWrapperBase.bakeModel(BlockStateWrapperBase.java:187) ~[BlockStateWrapperBase.class:?]
at crazypants.enderio.conduits.conduit.BlockConduitBundle.getExtendedState(BlockConduitBundle.java:173) ~[BlockConduitBundle.class:?]
at net.minecraft.client.renderer.BlockRendererDispatcher.renderBlock(BlockRendererDispatcher.java:79) ~[bvm.class:?]
... 6 more
at com.tfar.stellarfluidconduits.common.conduit.stellar.StellarFluidConduitRenderer.addConduitQuads(StellarFluidConduitRenderer.java:35) ~[StellarFluidConduitRenderer.class:?]
I fixed that bug already https://github.com/Tfarcenim/StellarFluidConduits/commit/ec822a5cef1b631d153eff9afe41a8ed4dccaece
|
gharchive/issue
| 2019-06-25T21:10:45 |
2025-04-01T04:55:38.488786
|
{
"authors": [
"HenryLoenwind",
"Tfarcenim",
"maicol07"
],
"repo": "SleepyTrousers/EnderIO",
"url": "https://github.com/SleepyTrousers/EnderIO/issues/5158",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1076364006
|
Check out "Grindr - Gay chat"
https://play.google.com/store/apps/details?id=com.grindrapp.android
Please check out the project before opening any issues
|
gharchive/issue
| 2021-12-10T02:47:19 |
2025-04-01T04:55:38.489993
|
{
"authors": [
"MBWGit",
"Slenderman00"
],
"repo": "Slenderman00/Grindr-Web-Access",
"url": "https://github.com/Slenderman00/Grindr-Web-Access/issues/12",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
147902602
|
Release on SpigotMC
I'm hoping @Slikey sees this, I'm honestly not sure the best way to get in touch at this point :)
I'm wondering if it'd be ok to put up an EffectLib page on Spigotmc.com?
I'd also like to move the docs from dev.bukkit.org to the wiki, and maybe add an example plugin.
Yes, of course. If you give me the link to the wiki I can update the thread and dev site.
I don't think I actually need you to do anything since I've got access to the dev site, I just wanted to make sure you were ok with it, since I'd have to release it on spigotmc under my account.
The wiki will be right here, just the Wiki tab at the top :)
Thanks!
|
gharchive/issue
| 2016-04-12T23:28:30 |
2025-04-01T04:55:38.518460
|
{
"authors": [
"NathanWolf",
"Slikey"
],
"repo": "Slikey/EffectLib",
"url": "https://github.com/Slikey/EffectLib/issues/24",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1115077724
|
Improve USB and Serial interface
Right now, all interaction with the serial interface of the trackers happens when the user presses the WiFi button. This should be changed: a new background worker should monitor serial devices and wait for trackers to connect. When it finds a newly connected tracker, it should start talking to it. A few features that should be implemented:
Figure out if the connected device is indeed a SlimeVR tracker
Show a pop-up that a tracker is connected, with a suggestion to the user to configure it
If the user has set auto-configuration, send WiFi credentials to it automatically and show a pop-up
Figure out WHICH tracker it is, and show it as connected via USB in the tracker list (a nice-to-have)
oh, i actually want to implement that
How should we identify the tracker?
The serial port itself does not have any standard to identify a device. We don't want to mess with existing serial ports.
I would recommend the following procedure:
Open a Configuration mode.
Get all serial ports
Monitor for new Serial Ports
Tell the user to connect the trackers
Handshake,...
The following might need to be considered:
D1 Mini and most ESP dev boards use RTS and DTR to reset or flash. On some boards they are active-high, on others active-low. So for DIY builds this should be configurable.
There is a serial command GET INFO that the tracker should reply to with info about itself
Well what i wanted to say:
"I'm against sending any Serial Port some characters, without you knowing what it is for."
It can be a old UPS, some other DIY hardware, ...
If we have a other possibility to identify it, like USB native it would be great.
Then we can wait a few seconds for the periodic status message.
Just for clarification, are you ok with this approach?
I would recommend the following procedure:
Open a Configuration mode.
Get all serial ports
Tell the user to connect the trackers
Monitor for new USB Serial Ports
Open the new found USB Serial Port
Wait for a defined String OR send a String (like GET INFO\n)
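The monitoring step of this procedure can be sketched as a simple snapshot diff. Everything here is illustrative: the port names, the `GET INFO` reply format, and the `looks_like_tracker` check are assumptions, and a real implementation would get the port list from the OS (e.g. via pyserial's `serial.tools.list_ports`) rather than from hard-coded lists:

```python
# Sketch: detect newly attached serial ports by diffing snapshots.
# The port lists are stand-ins; a real version would query the OS.

def new_ports(before, after):
    """Return ports present in `after` but not in `before`, sorted."""
    return sorted(set(after) - set(before))

def looks_like_tracker(reply: str) -> bool:
    """Very rough check of a hypothetical 'GET INFO' reply.

    The actual SlimeVR reply format may differ; this only shows where
    a handshake check would slot into the procedure.
    """
    return reply.upper().startswith("SLIMEVR")

# Example: one snapshot before the user plugs in a tracker, one after.
before = ["COM3", "COM4"]
after = ["COM3", "COM4", "COM7"]
candidates = new_ports(before, after)  # -> ["COM7"]
```

Only ports that appear *after* entering configuration mode are ever opened, which avoids poking pre-existing devices like a DIY 3D printer.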
I think that new serial devices should always be monitored and automatically connected. If the user is not on the Add new tracker screen, they should get a pop-up of sorts.
How often will you connect your tracker to the USB Port to change the configuration?
Once it's configured for your network, you will change everything else through the settings in the server.
The only reason to connect it to a USB is Charging, WiFi Change, Firmware Update (DIY, not Production)
From looking at the USB device, I have no way to know what it is (I see only a USB-to-serial port with some name).
If I open it as a serial port, I might reboot a DIY 3D printer or something else.
While the port is open, no other application can access it.
I think the risk of unhappy users outweighs the benefit.
That's fair. Let's make it work only on the Add tracker page.
If I open it as a serial port, I might reboot a DIY 3D printer or something else.
While the port is open, no other application can access it.
That's why I hate cura. That hogs all the com ports and doesn't let go.
My opinion: open exclusively first -> make sure it is a tracker -> then release it for the other applications.
@Eirenliel I believe these are all done. Can you confirm, and mark as closed if they are complete
|
gharchive/issue
| 2022-01-26T14:25:14 |
2025-04-01T04:55:38.534489
|
{
"authors": [
"Eirenliel",
"Kamilake",
"TheButlah",
"adigyran",
"unlogisch04"
],
"repo": "SlimeVR/SlimeVR-Server",
"url": "https://github.com/SlimeVR/SlimeVR-Server/issues/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1559540708
|
Allow user to provide SteamVR with a virtual hip tracker.
Currently, the options for providing SteamVR with virtual trackers correspond to the options available in the "tracker role" setting in SteamVR for the trackers. However, some games do not use the assigned tracker role and expect the lowest torso tracker to be a hip tracker. This means that spinal rotation is interpreted as hip rotation, even if the real hips are stationary.
As a concrete example, I play VRChat and use an avatar which has tails. My SlimeVR tracking setup includes a chest tracker and a hip tracker. The "waist" tracker seen in SteamVR very accurately approximates the orientation of my real waist (which does not have an actual tracker). However, the game is not expecting a waist tracker and interprets it as a hip tracker. As such, if I turn my upper body to look behind me, my avatar's tails move even though my real hips are stationary.
This can be a cause for frustration since I naturally move my upper body to look around, but I often try to keep my avatar's hips still to avoid making distracting tail movements. I spent a long time struggling with this and only today noticed the discrepancy between which body part SlimeVR was providing tracking for and which body part VRChat expected tracking for.
We do send the hip bone as the SteamVR waist tracker. The only reason it’s named this way is because SteamVR does not have a hip tracker option; only waist. Your problem is not what you think, since we are sending the hip tracker.
Make sure you’re assigning your hip tracker to hip, and not waist.
Please join our Discord server for further support, since this issue’s issue doesn’t exist :p
https://discord.gg/slimevr
|
gharchive/issue
| 2023-01-27T10:45:05 |
2025-04-01T04:55:38.538324
|
{
"authors": [
"Louka3000",
"tgiddings"
],
"repo": "SlimeVR/SlimeVR-Server",
"url": "https://github.com/SlimeVR/SlimeVR-Server/issues/520",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1679254248
|
Convert UDP Server stuff to Kotlin
Ignore the branch name >~>, should I remove any of the remaining deprecated objects?
Based on #647
This PR has a large number of formatting changes that will cause many conflicts with almost all of the open PRs based off of tracker-rewrite, please consider that before merging.
fixed :roll_eyes:
Noice, you can make format changes in a separate PR to make it easier to manage, thankyu <3
Uri asked for a review; I can't add anything objective other than that it works. I've tested with the actual trackers and owotrack.
Personally I find packet definitions in a single file instead of 20 easier to navigate, but the final word is on the maintainers anyway.
Ready for review & merge
|
gharchive/pull-request
| 2023-04-22T01:11:47 |
2025-04-01T04:55:38.541187
|
{
"authors": [
"0forks",
"ButterscotchV",
"ImUrX"
],
"repo": "SlimeVR/SlimeVR-Server",
"url": "https://github.com/SlimeVR/SlimeVR-Server/pull/683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
407284349
|
Feature/webumenia 906 fix create new artwork in admin
Description
fix creating new Item (exception on missing $form)
fix updating Item without iipimg_url (disable validation NotBlank)
Fixes WEBUMENIA-906
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[ ] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
[x] I have updated the CHANGELOG
[x] I have requested at least 1 reviewer for this PR
[ ] I have made corresponding changes to the documentation
Please create an issue for a more proper fix so we don't forget about it :)
https://jira.sng.sk/browse/WEBUMENIA-920
|
gharchive/pull-request
| 2019-02-06T15:18:08 |
2025-04-01T04:55:38.544941
|
{
"authors": [
"igor-kamil"
],
"repo": "SlovakNationalGallery/web-umenia-2",
"url": "https://github.com/SlovakNationalGallery/web-umenia-2/pull/168",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2192669338
|
I cannot create an invoice with safety valve date <1 month after invoice creation
@scottrepreneur not sure whether this is expected behaviour
This is currently expected, thanks for clarifying. It's not a hard limit; we can reconsider if we hear requests from users. It could also be adjusted, but we should probably recommend it be at least a month out.
Does this functionality need testing? If so, I may want to ask you to temporarily lower it to 1 day.
Proposed solution, as discussed with @scottrepreneur:
for dev purposes: check NODE_ENV and skip (some) validations
for prod: 1 month by default; show a warning if < 1 month, and let the user continue with a warning for > 1 week
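The thresholds proposed above can be sketched as a small classifier (Python purely for illustration; the 30-day and 7-day cutoffs are taken from this comment, not from the project's final rules):

```python
# Sketch: classify a safety-valve date relative to invoice creation.
# Thresholds (1 month ~ 30 days, 1 week = 7 days) are assumptions
# drawn from the discussion above.
from datetime import date, timedelta

def classify_safety_valve(created: date, valve: date, dev_mode: bool = False) -> str:
    if dev_mode:               # NODE_ENV-style escape hatch for testing
        return "ok"
    delta = valve - created
    if delta >= timedelta(days=30):
        return "ok"            # recommended: at least a month out
    if delta >= timedelta(days=7):
        return "warn"          # allowed, but show a warning
    return "block"             # less than a week: reject

print(classify_safety_valve(date(2024, 3, 1), date(2024, 3, 10)))  # warn
```

The dev-mode flag mirrors the "check NODE_ENV and skip validations" idea, so testers can set dates one day out without touching the prod rules.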
@memosys let me know when I can test it
|
gharchive/issue
| 2024-03-18T16:19:08 |
2025-04-01T04:55:38.548458
|
{
"authors": [
"benedictvscriticus",
"memosys",
"scottrepreneur"
],
"repo": "SmartInvoiceXYZ/smart-invoice",
"url": "https://github.com/SmartInvoiceXYZ/smart-invoice/issues/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1126572765
|
Dynamic update support
It looks like the context is not automatically updated while editing. Until I reload the file (so that the GPS query can refresh), nvim-gps fails to show the context for newly added scopes.
It would be great if we can support dynamic update of the context information powered by treesitter!
It should update dynamically. As soon as you exit insert mode, nvim-gps asks treesitter for new information to show. It seems that for some reason your treesitter isn't providing the latest information 🤔
https://user-images.githubusercontent.com/43147494/152925448-9175dd66-e895-4ca2-a2a6-c02015036ae1.mp4
Hmm interesting. I am new to treesitter, so have no concrete ideas yet on how to debug this. FYI, my treesitter parser and plugins are up-to-date (and neovim is nightly-master). I have nothing special in my config, and :checkhealth nvim-treesitter seems fine.
The TSPlayground window also doesn't update and gets out of sync, so this might be related to the treesitter or its config itself.
@SmiteshP Is treesitter-playground as well automatically updating as you change the text?
@SmiteshP Is treesitter-playground as well automatically updating as you change the text?
Yep, playground should also update on the fly. So this means the issue is definitely with treesitter. You could try opening a issue on nvim-treesitter repo, they should be able to help.
Surprisingly, nvim-treesitter/nvim-treesitter#2492 says that currently the module highlight is required to make on-the-fly update work properly.
@SmiteshP So from the discussion nvim-treesitter/nvim-treesitter#2492 (-> nvim-treesitter/playground#64), it seems calling the parser to update the syntax tree for the current buffer (i.e., parser:parse(buf, lang)) is the responsibility of downstream treesitter plugins. Apparently both playground and nvim-gps missed that (and many other treesitter plugins probably also aren't aware of that).
I can confirm that adding the following lines somewhere in M.get_location() makes the syntax tree update, despite highlight { enabled = false }.
-- Request treesitter parser to update the syntax tree.
local parser = ts_parsers.get_parser(vim.fn.bufnr(), filelang)
parser:parse()
But this would be an excessively frequent call, because get_location() is usually called from the statusline. A better way would be to set up some autocmd (like an InsertLeave autocmd), but I'm not sure whether this is something nvim-gps is responsible for, or whether users are supposed to configure treesitter for it themselves. Either way, if I'm convinced, I will make a PR for nvim-gps.
@SmiteshP So the conclusion is that ensuring the parse tree is up-to-date is the module's responsibility, which is however often subsumed when the treesitter highlight module is enabled. I submitted a PR #70 that fixes this bug, even when the highlight module is absent.
Oh interesting! I did use to have an InsertLeave autocommand to update the parser but at some point I realised everything works fine without it too, so I removed it. 😆
|
gharchive/issue
| 2022-02-07T22:31:07 |
2025-04-01T04:55:38.576278
|
{
"authors": [
"SmiteshP",
"wookayin"
],
"repo": "SmiteshP/nvim-gps",
"url": "https://github.com/SmiteshP/nvim-gps/issues/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1824883828
|
Cannot seem to be able to load /dev/input/event0 (a HORI PAD controller)
All other devices load properly, but /.../event0, even though it calls the LibInterface and succeeds, doesn't seem to load. It shows up properly in evtest, and if I cat /dev/input/event0 I can see the events.
Same if I add an XBox controller (in my log below it's /.../event12); LibInterface::open_restricted is called and succeeds, but input doesn't poll the events correctly.
I'm still investigating and will update here if/when I found more information.
Log:
open_restricted: "/dev/input/event1"
....... Ok(OwnedFd { fd: 6 })
open_restricted: "/dev/input/event2"
....... Ok(OwnedFd { fd: 7 })
open_restricted: "/dev/input/event3"
....... Ok(OwnedFd { fd: 8 })
open_restricted: "/dev/input/event4"
....... Ok(OwnedFd { fd: 9 })
open_restricted: "/dev/input/event5"
....... Ok(OwnedFd { fd: 10 })
open_restricted: "/dev/input/event6"
....... Ok(OwnedFd { fd: 11 })
open_restricted: "/dev/input/event10"
....... Ok(OwnedFd { fd: 12 })
open_restricted: "/dev/input/event11"
....... Ok(OwnedFd { fd: 12 })
open_restricted: "/dev/input/event7"
....... Ok(OwnedFd { fd: 12 })
open_restricted: "/dev/input/event8"
....... Ok(OwnedFd { fd: 13 })
open_restricted: "/dev/input/event9"
....... Ok(OwnedFd { fd: 14 })
open_restricted: "/dev/input/event0"
....... Ok(OwnedFd { fd: 15 })
Added device: Added(DeviceAddedEvent @0x9f0288) "event1"
Added device: Added(DeviceAddedEvent @0x9fc638) "event2"
Added device: Added(DeviceAddedEvent @0xa05c88) "event3"
Added device: Added(DeviceAddedEvent @0xa0f610) "event4"
Added device: Added(DeviceAddedEvent @0xa18fb0) "event5"
Added device: Added(DeviceAddedEvent @0xa22788) "event6"
Added device: Added(DeviceAddedEvent @0xa2c050) "event7"
Added device: Added(DeviceAddedEvent @0xa358a0) "event8"
Added device: Added(DeviceAddedEvent @0xa3ef48) "event9"
open_restricted: "/dev/input/event12"
....... Ok(OwnedFd { fd: 15 })
Libinput generally doesn't handle gamepads, controllers, and similar devices. It will check for evdev properties that categorize a device as absolute-pointer, tablet, relative-pointer, keyboard, etc., but it intentionally does not pick up any device classes it doesn't recognize. And it is not built to handle game controllers.
Yes, I've just noticed that even the libinput test suite won't see those controllers (took me a while to build it for my target platform, sorry). I thought it was a bug because evtest does handle it properly, but evtest opens the file directly and doesn't use libinput.
This is not the fault of this library, so I'll close this issue.
|
gharchive/issue
| 2023-07-27T17:46:16 |
2025-04-01T04:55:38.580604
|
{
"authors": [
"Drakulix",
"hansl"
],
"repo": "Smithay/input.rs",
"url": "https://github.com/Smithay/input.rs/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2440720830
|
🛑 MPC Homepage is down
In c443e1f, MPC Homepage (https://www.minorplanetcenter.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MPC Homepage is back up in 3b1dca6 after .
|
gharchive/issue
| 2024-07-31T18:59:41 |
2025-04-01T04:55:38.583178
|
{
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/4518",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2406360135
|
🛑 NEOCP is down
In b82c8a3, NEOCP (https://minorplanetcenter.net/iau/NEO/toconfirm_tabular.html) was down:
HTTP code: 0
Response time: 0 ms
Resolved: NEOCP is back up in 028e569 after 1 minute.
|
gharchive/issue
| 2024-07-12T21:18:21 |
2025-04-01T04:55:38.585568
|
{
"authors": [
"ChrisMoriarty"
],
"repo": "Smithsonian/upptime",
"url": "https://github.com/Smithsonian/upptime/issues/922",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
240278402
|
minor fixes
• Removed the SFSafariViewControllerDelegate and its methods since we are not implementing them
• the "Go To Settings" button now links directly to the Manage Storage section rather than just the Usage section
Awesome, especially on the Manage Storage thing! Couldn't figure out what the link was for that.
|
gharchive/pull-request
| 2017-07-03T23:00:23 |
2025-04-01T04:55:38.591186
|
{
"authors": [
"CreatureSurvive",
"Sn0wCh1ld"
],
"repo": "Sn0wCh1ld/App-Installer",
"url": "https://github.com/Sn0wCh1ld/App-Installer/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1089624923
|
[Function / Feature Request] IFRAME EMBEDDING Support
Describe the feature
Add the ability to load the CAD in an iframe. Currently, if you do, you cannot log in; it just reloads the page. This is imperative for using it in-game with tablet scripts, such as ones that use the command /cad to open it up in the game. They always use iframes. Most CADs do work with these embeds, such as Sonoran and Gen-CAD, but unfortunately Snaily does not.
Additional Context
No response
For reference http://nightfallroleplay.com/
Just a test page I'm running for now to show you.
I've just pushed a commit that should allow this now. Let me know if you're able to test this out :)!
To enable it add the following to your .env file, make sure to run the command after updating the .env file.
SECURE_COOKIES_FOR_IFRAME="true"
After adding SECURE_COOKIES_FOR_IFRAME="true" then running the command and restarting client, api
With https:// turned on for my domain name, the CAD does not respond in the iframe at all
With http:// and no SSL turned on, the CAD loads, but will not allow logins.
An error message shows in the console for the API
[2021-12-28T04:17:34.677] [ERROR] [TSED] - { method: 'POST', url: '/v1/user', headers: { host: 'MY IP TO CAD:8080', connection: 'keep-alive', 'content-length': '0', accept: 'application/json, text/plain, */*', 'is-from-dispatch': 'false', 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36', session: 'snaily-cad-session=undefined', origin: 'MY IP TO CAD:3000/', referer: 'MY IP TO CAD:3000/', 'accept-encoding': 'gzip, deflate', 'accept-language': 'en-US,en;q=0.9' }, body: {}, query: {}, params: {}, reqId: 'f6c3d043c2fc4f4b8b3b2eae4724f276', time: 2021-12-28T10:17:34.677Z, duration: 6, error: { name: 'UNAUTHORIZED', message: 'Unauthorized', status: 401, errors: [], stack: undefined }, stack: 'UNAUTHORIZED: Unauthorized\n' + ' at getSessionUser (C:\\Users\\Administrator\\Documents\\snaily-cadv4\\packages\\api\\src\\lib\\auth.ts:39:11)\n' + ' at IsAuth.use (C:\\Users\\Administrator\\Documents\\snaily-cadv4\\packages\\api\\src\\middlewares\\IsAuth.ts:58:34)\n' + ' at C:\\Users\\Administrator\\Documents\\snaily-cadv4\\node_modules\\@tsed\\platform-params\\src\\builder\\PlatformParams.ts:88:16' }
You require working https to enable SECURE_COOKIES_FOR_IFRAME. Secure cookies require https.
I recommend you setup a proxy such as NGINX or Apache to add a free SSL certificate from Let's Encrypt
@ViolentMinds since you got SSL working; is this working now?
Iframe issue seems to be fixed now! Thank you!
|
gharchive/issue
| 2021-12-28T05:34:53 |
2025-04-01T04:55:38.598514
|
{
"authors": [
"Dev-CasperTheGhost",
"ViolentMinds"
],
"repo": "SnailyCAD/snaily-cadv4",
"url": "https://github.com/SnailyCAD/snaily-cadv4/issues/196",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
168506919
|
What is the appropriate way to make constraints in initializer methods?
Apple recommends "Do not Use Accessor Methods in Initializer Methods and dealloc" in this document: https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmPractical.html
If I want to make constraints in the initializer methods, how do I write better code?
@interface CustomView : UIView
@property (nonatomic, strong) UIView *viewLeft;
@property (nonatomic, strong) UIView *viewRight;
@end
@implementation CustomView
- (instancetype)init {
self = [super init];
if (self) {
UIView *viewLeft = [UIView new];
_viewLeft = viewLeft;
[self addSubview:_viewLeft];
[_viewLeft mas_makeConstraints:^(MASConstraintMaker *make){
make.left.top.bottom.mas_equalTo(self);
}];
UIView *viewRight = [UIView new];
_viewRight = viewRight;
[self addSubview:_viewRight];
[_viewRight mas_makeConstraints:^(MASConstraintMaker *make) {
make.right.top.bottom.mas_equalTo(self);
// by accessor set up
make.left.mas_equalTo(self.viewLeft.mas_right);
// or by iVar set up
make.left.mas_equalTo(self->_viewLeft.mas_right);
}];
}
return self;
}
@end
@yangcaimu Apple recommends it because it can commonly lead to bugs; however, to be honest, I do this in init all the time and I've never had issues.
The issues arise when an accessor does some initialisation but because you're in an init method you're probably in an undefined state.
Yes, I wrote a demo to demonstrate the issue.
|
gharchive/issue
| 2016-07-31T07:54:43 |
2025-04-01T04:55:38.601600
|
{
"authors": [
"robertjpayne",
"yangcaimu"
],
"repo": "SnapKit/Masonry",
"url": "https://github.com/SnapKit/Masonry/issues/369",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
963359782
|
'id' of undefined
Can you explain what you did to get this error?
Fixed in v3.4.3
|
gharchive/issue
| 2021-08-08T05:46:11 |
2025-04-01T04:55:38.610293
|
{
"authors": [
"Snazzah",
"norbssus"
],
"repo": "Snazzah/slash-create",
"url": "https://github.com/Snazzah/slash-create/issues/91",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2525486119
|
Images not rendering
Describe the bug
Images referenced in guides don't appear in the guide web pages
URL of where you see the bug
http://localhost:8000/guide/cloudtrail_log_ingestion/index.html?index=..%2F..index#5
To Reproduce
Open the guide and the images don't appear
Expected behavior
Images referenced in guides should appear in the guide web pages
Yes, this is a known issue related to Gulp 5.0 and we're looking into resolving it.
I'm also having this issue, thanks for reporting and acknowledging it. I was going crazy.
So I stumbled into this issue as well, following the getting-started instructions to the letter. @iamontheinet is correct in that this is a gulp version issue; my guess is an incompatibility with the required version of Node (14, set here), but I am not a Node/front-end guy, so I am really just guessing after multiple hours of messing with it.
However, I was able to reliably resolve this issue by uninstalling and down grading gulp:
cd site
# Remove old versions
npm uninstall -g gulp
npm uninstall gulp
# Install correct versions
npm install -g gulp-cli@2.3.0
npm install gulp@4.0.2 --save-dev
# Clean and reinstall
rm -rf node_modules package-lock.json
rm -rf sfguides/dist/* sfguides/build/*
npm install
# Verify versions
gulp --version
# Try running the server
npm run serve
Not sure if this is the "right" way or not, but it worked, and I have image rendering working now.
|
gharchive/issue
| 2024-09-13T19:04:34 |
2025-04-01T04:55:38.625621
|
{
"authors": [
"arkuyucu",
"iamontheinet",
"nicdavidson",
"sfc-gh-erhodes"
],
"repo": "Snowflake-Labs/sfquickstarts",
"url": "https://github.com/Snowflake-Labs/sfquickstarts/issues/1577",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1964970310
|
Fix: Add authenticator and private_key_path fields to ConnectionDetails dataclass
Pre-review checklist
[x] I've confirmed that instructions included in README.md are still correct after my changes in the codebase.
[ ] I've added new automated tests to verify correctness of my new code.
[x] I've confirmed that my changes are up-to-date with the target branch.
[ ] I've described my changes in the release notes.
[x] I've described my changes in the section below.
Changes description
tests/nativeapp/test_manager.py was failing when run as a single test module (pytest tests/nativeapp/test_manager.py), but everything was OK when it ran as part of all tests (just pytest)
The error occurred only in test_get_snowsight_url test case.
The error was indicating that authenticator attribute is not present in ConnectionDetails in the global context.
The error was correct because that class did not have authenticator and private_key_path fields.
So why did it work for all other tests, why did it work in regular CLI invocations, and why didn't pytest fail?
ConnectionDetails._connection_update function sets attributes from CLI connection options. It added new attributes (not present as dataclass fields) to ConnectionDetails object for --authenticator and --private-key-path options.
So in regular CLI invocations, these attributes were present, but not as dataclass fields.
The same applies to all other test cases that use a connection: they all use the CLI runner, so CLI option handling is included in each of them.
pytest did not fail for tests/nativeapp/test_manager.py because other tests invoking CLI runner in other test modules set the attribute before that test.
So why didn't it work for pytest tests/nativeapp/test_manager.py?
No test case in tests/nativeapp/test_manager.py runs the actual CLI. This test module tests the manager without invoking the CLI runner, so there was no CLI flag handling and no invocation of ConnectionDetails._connection_update.
Fixed by:
Adding a reset_global_context_after_each_test autouse fixture in conftest.py to both tests and tests_integrations to guarantee a clean global context for each test case, and to make tests/nativeapp/test_manager.py fail also when running plain pytest.
Adding authenticator and private_key_path fields to ConnectionDetails dataclass.
Adding pytest-randomly plugin as a dev dependency to introduce random order of tests execution.
Fixing other issues found after adding pytest-randomly.
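The core of the fix can be sketched with a minimal, hypothetical version of the dataclass (only the two field names from this PR are real; everything else here is simplified for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConnectionDetails:
    # Hypothetical minimal version of the real class; only the two
    # fields added by this PR are shown.
    authenticator: Optional[str] = None
    private_key_path: Optional[str] = None

# Before the fix, the CLI update path effectively did setattr() on one
# instance, so the attribute existed only after CLI option handling ran:
conn = ConnectionDetails()
setattr(conn, "authenticator", "externalbrowser")
assert conn.authenticator == "externalbrowser"

# With the fields declared, a fresh instance (as created in unit tests
# that bypass the CLI runner) has the attribute from the start:
fresh = ConnectionDetails()
assert fresh.authenticator is None
assert fresh.private_key_path is None
```

This is why the bug only surfaced when the test module ran in isolation: dynamically set attributes live on one object, while declared fields exist on every instance.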
We're currently processing your upload. This comment will be updated when the results are available.
|
gharchive/pull-request
| 2023-10-27T07:53:25 |
2025-04-01T04:55:38.634210
|
{
"authors": [
"codecov-commenter",
"sfc-gh-pjob"
],
"repo": "Snowflake-Labs/snowcli",
"url": "https://github.com/Snowflake-Labs/snowcli/pull/507",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1714118542
|
Refactor verification of updates in beacon client
We've duplicated a lot of verification code across the code paths for processing different kinds of updates. This makes the code harder to understand and maintain.
In some cases, the duplicated code differed slightly, leading to various bugs:
Not verifying sync committee supermajority when processing execution header update
Not checking for duplicated finalized header when processing sync committee update
Not performing weak subjectivity check when processing sync committee and execution header updates
Fixes: SNO-438
@yrong Would you mind taking this PR over, please?
After fixing some bugs, I discovered that some of the mock data used by tests_minimal.rs is incorrect, causing these failures:
failures:
tests_minimal::it_updates_a_committee_period_sync_update
tests_minimal::it_updates_a_invalid_committee_period_sync_update_with_duplicate_entry
The problem is this:
In minimal_initial_sync.json, header.slot is 152.
In minimal_sync_committee_update.json, finalized_header.slot is 128.
This is incorrect, as the finalized header in the initial checkpoint should always be older than finalized headers in future updates.
My code improvements here detected the bug: https://github.com/Snowfork/snowbridge/blob/8decb84e253a55d837df6e3c6fbeac884be0f4f5/parachain/pallets/ethereum-beacon-client/src/lib.rs#L706-L709
Can you update the mock data to fix this problem?
@yrong We also need to update the relayer, as I changed some field names in parachain/primitives/beacon/src/updates.rs
Thanks for this refactor - I agree it is necessary.
Can you update the mock data to fix this problem?
Actually, the fixture data used here is retrieved from Lodestar directly with the script generateBeaconData, and 128 in minimal_sync_committee_update.json is actually the first slot in that sync period (i.e. period 2), so it is also correct.
It seems the problem is that we retrieve all these updates in one command; maybe we can split it into multiple commands and generate sync_committee_update for the next period (i.e. period 3).
Requires https://github.com/Snowfork/cumulus/pull/25
E2E test should be fine now.
|
gharchive/pull-request
| 2023-05-17T15:05:28 |
2025-04-01T04:55:38.641014
|
{
"authors": [
"claravanstaden",
"vgeddes",
"yrong"
],
"repo": "Snowfork/snowbridge",
"url": "https://github.com/Snowfork/snowbridge/pull/835",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2124753300
|
There will be a lot of errors in vs generation. Why is that?
There will be a lot of errors in vs generation. Why is that?
try compiling => x64 & Release
|
gharchive/issue
| 2024-02-08T09:53:21 |
2025-04-01T04:55:38.657598
|
{
"authors": [
"AT7921",
"omgprod"
],
"repo": "SoTMaulder/SoTMaulder-Palworld",
"url": "https://github.com/SoTMaulder/SoTMaulder-Palworld/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1070438529
|
Added validator for rule 199.
Closes #64
Don't worry, it always turns out there was a simpler way in hindsight!
|
gharchive/pull-request
| 2021-12-03T10:24:06 |
2025-04-01T04:55:38.668471
|
{
"authors": [
"SLornieCYC",
"dezog"
],
"repo": "SocialFinanceDigitalLabs/quality-lac-data-beta-validator",
"url": "https://github.com/SocialFinanceDigitalLabs/quality-lac-data-beta-validator/pull/493",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2364320343
|
RFE: allow running multiple scc-state servers that vote the leader automatically
As documented at https://github.com/SocketCluster/socketcluster/blob/master/scc-guide.md#notes the current design requires that exactly one scc-state server is running for the cluster.
I'm asking for an improvement so that I could run one scc-state server per system in the cluster, and SocketCluster would automatically select one as the leader while the other scc-state servers wait as hot standbys, ready to take over if the hardware running the leading scc-state failed.
This would remove the only remaining single point of failure in SocketCluster, if I've understood its behavior correctly.
I think there wouldn't need to be any fancy way to vote for the leader. Just make the first server to start the leader, and every subsequent server would join a queue to be the next leader. And maybe have a config specifying the minimum number of servers in the queue before designating one as the leader, to avoid a split-brain situation? (For example, for a 3-server cluster, one would specify a minimum of 2 servers before an scc-state server is elected, and in that case the first server in the system would be the leader.)
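The queue-plus-quorum idea can be sketched as a toy simulation (Python for illustration only; this is not SocketCluster code, and all names here are hypothetical):

```python
from collections import deque

class StateServerRegistry:
    """Toy sketch of the queue-based leader election described above.

    The first server to register becomes leader once `min_servers`
    have joined; later servers wait in FIFO order as hot standbys.
    """
    def __init__(self, min_servers=2):
        self.min_servers = min_servers
        self.queue = deque()

    def join(self, server_id):
        self.queue.append(server_id)

    def leader(self):
        # No leader until the quorum is met, to avoid split brain.
        if len(self.queue) < self.min_servers:
            return None
        return self.queue[0]

    def fail(self, server_id):
        # On failure, the next server in the queue takes over.
        self.queue.remove(server_id)

reg = StateServerRegistry(min_servers=2)
reg.join("a")
assert reg.leader() is None      # quorum not met yet
reg.join("b"); reg.join("c")
assert reg.leader() == "a"       # first joiner leads
reg.fail("a")
assert reg.leader() == "b"       # next in queue takes over
```

A real implementation would additionally need heartbeats to detect failures and a consistent view of the queue across nodes, which is where most of the added complexity lives.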
I was thinking about running scc-state on bare metal without Kubernetes. I see that adding Kubernetes to the mix can get pretty similar behavior without SocketCluster being able to handle crashing scc-state by itself.
@mikkorantalainen Feel free to fork scc-state (https://github.com/SocketCluster/scc-state) and make those changes you suggested. You're welcome to customize it as you like. The approach you suggested has clear advantages when running on bare metal. The trade-off is just added complexity. If you want to share it as open source, I'd be happy to mention it as a possible alternative to the default scc-state.
|
gharchive/issue
| 2024-06-20T12:28:07 |
2025-04-01T04:55:38.678416
|
{
"authors": [
"jondubois",
"mikkorantalainen"
],
"repo": "SocketCluster/socketcluster",
"url": "https://github.com/SocketCluster/socketcluster/issues/596",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
497578130
|
Auth-Tokens cannot be used in webapps when restheart and restheart-security are used
Expected Behavior
Responses from restheart-security should contain CORS header access-control-expose-headers including all the values Location, ETag, Auth-Token, Auth-Token-Valid-Until, Auth-Token-Location, X-Powered-By
We're interested especially in Auth-Token, Auth-Token-Valid-Until, Auth-Token-Location here.
Current Behavior
Restheart responds with access-control-expose-headers: Location, ETag, X-Powered-By. Restheart security checks that CORS headers are already present and does not alter them. Since Restheart security cares about the auth tokens and all of that, the header values access-control-expose-headers: Auth-Token, Auth-Token-Valid-Until, Auth-Token-Location are not allowed to be read by browser-side javascript.
Context
we're moving to the new restheart major release 4.0+
Environment
n/a
Steps to Reproduce
Use Restheart-security and restheart
Send Request with valid basic auth credentials to Restheart-Security
Observe header access-control-expose-headers.
Possible Implementation
If access-control-expose-headers is present, add relevant values instead of simply accepting what downstream restheart did.
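The suggested merge could look like the following sketch (Python for illustration; restheart-security itself is Java, and this function name is hypothetical):

```python
def merge_expose_headers(existing, required):
    """Merge required values into an existing
    Access-Control-Expose-Headers value, preserving order and
    avoiding duplicates (header names are case-insensitive)."""
    current = [h.strip() for h in existing.split(",") if h.strip()]
    seen = {h.lower() for h in current}
    for h in required:
        if h.lower() not in seen:
            current.append(h)
            seen.add(h.lower())
    return ", ".join(current)

merged = merge_expose_headers(
    "Location, ETag, X-Powered-By",
    ["Auth-Token", "Auth-Token-Valid-Until", "Auth-Token-Location"],
)
assert merged == ("Location, ETag, X-Powered-By, Auth-Token, "
                  "Auth-Token-Valid-Until, Auth-Token-Location")
```

The point is that the proxy layer appends its own required values to whatever the downstream service already set, rather than skipping the header entirely when it is present.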
Hi @lenalebt ,
will fix in upcoming restheart-platform-security 4.1
|
gharchive/issue
| 2019-09-24T09:45:51 |
2025-04-01T04:55:38.720972
|
{
"authors": [
"lenalebt",
"ujibang"
],
"repo": "SoftInstigate/restheart-security",
"url": "https://github.com/SoftInstigate/restheart-security/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
443982135
|
Refused to execute script errors
I've upgraded from 0.6.0 to 0.8.2 and now I'm getting 2 errors that prevent the UI from loading after login (buildAuthenticatedRouter):
Refused to execute script from 'http://localhost:3000/admin/frontend/assets/app.bundle.js' because its MIME type ('application/octet-stream') is not executable, and strict MIME type checking is enabled.
Refused to execute script from 'http://localhost:3000/admin/frontend/assets/components.bundle.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
I'm not entirely sure what's causing this, unfortunately.
I'm using admin-bro-expressjs and admin-bro-mongoose as well.
it might be a problem with expressjs plugin - will check it out today and let you know
Which version of admin-bro-expressjs are you using? (latest?)
I believe so, yes. 0.1.6.
check out the AdminBro v 0.8.3 and admin-bro-expressjs v0.1.7 and let me know if that helped
Most definitely worked. Thanks!
|
gharchive/issue
| 2019-05-14T15:28:52 |
2025-04-01T04:55:38.727051
|
{
"authors": [
"derkweijers",
"wojtek-krysiak"
],
"repo": "SoftwareBrothers/admin-bro",
"url": "https://github.com/SoftwareBrothers/admin-bro/issues/83",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
572110218
|
Improve schema definition in BaseResource
Right now BaseResource has 2 methods which have to be implemented: property() and properties(). The common implementation is something like this
// in constructor
this.rawProperties = [
new BaseProperty({
path: 'name',
isId: false,
isSortable: true,
type: 'string',
}),
new BaseProperty({
path: 'key',
isId: true,
isSortable: true,
type: 'string',
}),
new BaseProperty({
path: 'image',
isId: false,
isSortable: false,
type: 'string',
}),
];
// ...
properties() {
return this.rawProperties;
}
property(path: string) {
return this.rawProperties.find((p) => p.path() === path) ?? null;
}
This is not super elegant. The simplest fix would be to allow users to define a schema, with property and properties generated automatically (implemented in the root class).
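The proposal can be sketched like this (AdminBro is TypeScript; this is an illustrative Python sketch with simplified, hypothetical class shapes):

```python
class BaseProperty:
    def __init__(self, path, is_id=False, is_sortable=True, type="string"):
        self._path = path
    def path(self):
        return self._path

class BaseResource:
    """Sketch: subclasses only declare `schema`; property lookups are
    implemented once in the root class instead of in every adapter."""
    schema = []  # subclasses override with a list of BaseProperty

    def properties(self):
        return self.schema

    def property(self, path):
        return next((p for p in self.schema if p.path() == path), None)

class CustomResource(BaseResource):
    schema = [BaseProperty("name"), BaseProperty("key", is_id=True)]

r = CustomResource()
assert r.property("key").path() == "key"
assert r.property("missing") is None
assert len(r.properties()) == 2
```

Subclass authors then write only the schema declaration and never repeat the find-by-path boilerplate shown in the issue.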
Also, please consider making all the accessor methods of the schema / resource properties async. For example, these resource definitions might be stored inside a database and retrieved on demand; in that case, properties() and property(path: string) should be async.
This is usually the case in multi-tenant systems, where the schemas are not pre-defined.
Closing as this has not been discussed locally since this issue had been opened and will likely not be implemented in near future. If you have any ideas on it's implementation please open a new issue.
|
gharchive/issue
| 2020-02-27T14:09:01 |
2025-04-01T04:55:38.729410
|
{
"authors": [
"KrishnaPG",
"dziraf",
"wojtek-krysiak"
],
"repo": "SoftwareBrothers/adminjs",
"url": "https://github.com/SoftwareBrothers/adminjs/issues/323",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
296502635
|
Add a check for empty result
When the query returns nothing, the absence of this check produces a really ugly and unnecessary exception.
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-59-33c3dea2a858> in <module>()
----> 1 test = dataset.query_to_pandas_safe(query)
/src/bq-helper/bq_helper.py in query_to_pandas_safe(self, query, max_gb_scanned)
86 query_size = self.estimate_query_size(query)
87 if query_size <= max_gb_scanned:
---> 88 return self.query_to_pandas(query)
89 print(f"Query cancelled; estimated size of {query_size} exceeds limit of {max_gb_scanned} GB")
90
/src/bq-helper/bq_helper.py in query_to_pandas(self, query)
78 rows = list(query_job.result(timeout=30))
79 return pd.DataFrame(
---> 80 data=[list(x.values()) for x in rows], columns=list(rows[0].keys()))
81
82 def query_to_pandas_safe(self, query, max_gb_scanned=1):
IndexError: list index out of range
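The fix amounts to guarding the empty case before indexing rows[0]. A minimal sketch (plain Python, with dicts standing in for BigQuery Row objects, which also expose .keys() and .values(); the real method then feeds data and columns into pd.DataFrame):

```python
def rows_to_table(rows):
    """Guarded version of the DataFrame construction: return an empty
    (data, columns) pair instead of indexing rows[0] on an empty result."""
    rows = list(rows)
    if not rows:
        return [], []
    data = [list(r.values()) for r in rows]
    columns = list(rows[0].keys())
    return data, columns

# Empty query result: no IndexError any more.
assert rows_to_table([]) == ([], [])

# Non-empty result behaves exactly as before.
data, columns = rows_to_table([{"name": "a", "n": 1}])
assert columns == ["name", "n"]
assert data == [["a", 1]]
```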
Good catch. Thank you!
|
gharchive/pull-request
| 2018-02-12T19:50:59 |
2025-04-01T04:55:38.736011
|
{
"authors": [
"HeshamMeneisi",
"SohierDane"
],
"repo": "SohierDane/BigQuery_Helper",
"url": "https://github.com/SohierDane/BigQuery_Helper/pull/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
221340393
|
CF Dev guide - show output for running commands
Sample output for commands could be useful. If there's a lot of output, a snippet showing what users should look for when a command completes successfully would help. This doesn't have to be done for every command or for simple ones, but where a task is more involved or takes a long time, it would be useful to see what to expect.
Thanks for your feedback, I think it would be hard to capture output samples (some are very long as you noted) and to even maintain that per release in the guide.... I think adding a test at the end of major steps to show success or failure would be clearer and more maintainable. I will add something to help with that.
|
gharchive/issue
| 2017-04-12T17:50:05 |
2025-04-01T04:55:38.737587
|
{
"authors": [
"PhilippeKhalife",
"solacedbaik"
],
"repo": "SolaceLabs/solace-messaging-cf-dev",
"url": "https://github.com/SolaceLabs/solace-messaging-cf-dev/issues/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
423883524
|
Detect stale and interpolated data
Closes #21
Functions check for constant difference (stale data) and constant slope (interpolated data)
Time steps are assumed to be equal.
ready for review
|
gharchive/pull-request
| 2019-03-21T18:37:46 |
2025-04-01T04:55:38.738753
|
{
"authors": [
"cwhanse"
],
"repo": "SolarArbiter/solarforecastarbiter-core",
"url": "https://github.com/SolarArbiter/solarforecastarbiter-core/pull/51",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
119545136
|
“feature flag” support in the UI to display/hide the search elements based on the presence/absence of the addon.
This should be a generalized solution that can be used with ‘known’ addons.
opened in wrong repo.
|
gharchive/issue
| 2015-11-30T17:54:02 |
2025-04-01T04:55:38.745147
|
{
"authors": [
"lexjacobs"
],
"repo": "Solinea/goldstone-server",
"url": "https://github.com/Solinea/goldstone-server/issues/139",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1360850703
|
4-7. Managing login state globally
Description
Use Recoil for the global login user state, with axios for API communication.
Todo
[ ] Add atoms
[ ] Import and use them in components
React-쇼핑몰-클론코딩-4.-axios로-API-통신하기-로그인-메인-상세페이
[React] Implementing login logic
|
gharchive/issue
| 2022-09-03T12:54:34 |
2025-04-01T04:55:38.848810
|
{
"authors": [
"00kang"
],
"repo": "Soongsil-Developers/22sdc-blog-team2",
"url": "https://github.com/Soongsil-Developers/22sdc-blog-team2/issues/33",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1769061622
|
Available version updates
Update available for body-parser... Latest 1.20.2 available
Update available for chalk... Latest 5.2.0 available
Update available for compression... Latest 1.7.4 available
Update available for connect-pause... Latest 0.1.0 available
Update available for cors... Latest 2.8.5 available
Update available for errorhandler... Latest 1.5.1 available
Update available for express... Latest 4.18.2 available
Update available for express-urlrewrite... Latest 2.0.1 available
Update available for json-parse-helpfulerror... Latest 1.0.3 available
Update available for lodash... Latest 4.17.21 available
Update available for lodash-id... Latest 0.14.1 available
Update available for lowdb... Latest 6.0.1 available
Update available for method-override... Latest 3.0.0 available
Update available for morgan... Latest 1.10.0 available
Update available for nanoid... Latest 4.0.2 available
Update available for please-upgrade-node... Latest 3.2.0 available
Update available for pluralize... Latest 8.0.0 available
Update available for server-destroy... Latest 1.0.1 available
Update available for standard... Latest 17.1.0 available
Update available for yargs... Latest 17.7.2 available
|
gharchive/issue
| 2023-06-22T07:01:44 |
2025-04-01T04:55:38.854545
|
{
"authors": [
"Riyu44",
"navvay"
],
"repo": "Sopra-Banking-Software-Interns/auto-dependency-update",
"url": "https://github.com/Sopra-Banking-Software-Interns/auto-dependency-update/issues/82",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
765539436
|
Crash
Old V2Ms crashed Winamp
Use this v2mplayer instead of WinAmp.
|
gharchive/issue
| 2020-12-13T17:15:06 |
2025-04-01T04:55:38.860771
|
{
"authors": [
"AntonZab",
"zvezdochiot"
],
"repo": "Sound-Linux-More/v2mplayer",
"url": "https://github.com/Sound-Linux-More/v2mplayer/issues/1",
"license": "Artistic-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
874247447
|
Request for example of using activity or service.
Hi! Thanks for sharing the great work!
I was trying to use a non-static Java function, but my app was killed when I ran it.
Can you tell me how to use a non-static Java function (one declared without the static keyword)?
Thanks in advance!
same trouble
The plugin has the ability to call non-static Java functions. To do this, you first need to call a static function that returns the jobject of your Java class. The function UMobileNativeCodeBlueprint::ExampleMyJavaObject in MobileUtilsBluprint.cpp shows an example; the README also has a separate paragraph with pictures about it.
|
gharchive/issue
| 2021-05-03T05:42:08 |
2025-04-01T04:55:38.911622
|
{
"authors": [
"DaeHyeonNam",
"Prikalel",
"Sovahero"
],
"repo": "Sovahero/PluginMobileNativeCode",
"url": "https://github.com/Sovahero/PluginMobileNativeCode/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
511577906
|
Adds openapi file
This adds the current implementation of the api.
Unfortunately there is a bug inside the validator, so the Swagger UI doesn't work well and breaks if you try to validate a schema. A comparable curl would be:
curl https://validator.spaceapi.io/v1/validate/ -H 'Content-Type: application/json' --data '{"data":{"api":"0.13"}}'
I wouldn't fix it right now since we discussed switching to a different implementation and no one is using the swagger UI anyways.
closed in favor of #38
|
gharchive/pull-request
| 2019-10-23T21:19:14 |
2025-04-01T04:55:38.927278
|
{
"authors": [
"gidsi"
],
"repo": "SpaceApi/validator",
"url": "https://github.com/SpaceApi/validator/pull/36",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
943937290
|
Asteroid
I created an asteroid gameobject. Currently, it just has an image.
Cool! It looks good. Glad to have an asteroid now!
Minor suggestion -- the name of setMercuryPoints() could be changed to something asteroid-specific
add asteroid game object to dev branch
|
gharchive/pull-request
| 2021-07-14T00:58:27 |
2025-04-01T04:55:38.928471
|
{
"authors": [
"TaylorKNoah",
"aferron"
],
"repo": "SpaceGameTeam/FarOut",
"url": "https://github.com/SpaceGameTeam/FarOut/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1073788843
|
Added CZ layout
Not tested, but here is a script.
LOCALE CZ
DELAY 2000
STRING !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ\[]^_`abcdefghijklmnopqrstuvwxyz{|}~´ˇ°áčˇďéěíˇň´óřšˇťúůýž
ENTER
Test script works, everything looking good! Thanks
|
gharchive/pull-request
| 2021-12-07T22:00:24 |
2025-04-01T04:55:38.938120
|
{
"authors": [
"LiJu09",
"spacehuhn"
],
"repo": "SpacehuhnTech/WiFiDuck",
"url": "https://github.com/SpacehuhnTech/WiFiDuck/pull/97",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1984842983
|
about T&T data
Hi, when I ran the T&T dataset with VQ-TensoRF, I found that intrinsics.txt has a problem and the dataloader fails to run:
I have seen the result for this scene perform normally in your result.md, so I am wondering how you processed this data. Thank you!
intrinsic.txt in scene:
|
gharchive/issue
| 2023-11-09T05:15:33 |
2025-04-01T04:55:38.944333
|
{
"authors": [
"zwxdxcm"
],
"repo": "Spark001/VQ-TensoRF",
"url": "https://github.com/Spark001/VQ-TensoRF/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
91101220
|
datetime fails on collect
import com.github.nscala_time.time.Imports._
import org.apache.spark.sql.catalyst.dsl.expressions._
import org.sparklinedata.spark.dateTime.dsl.expressions._
//org.sparklinedata.spark.dateTime.Functions.register(hc)
val dT = dateTime('e_log_time)
val dT1 = dateTime('e_log_time) - 8.months
val dOW = dateTime('e_log_time) dayOfWeekName
val dOM = dateTime('e_log_time) dayOfMonth
val dOY = dateTime('e_log_time) dayOfYear
//val dOWNm = dateTime('e_log_time) dayOfWeekName
//val dOWNm2 = dateTimeWithTZ('e_log_time) dayOfWeekName
//val dTPST = dateTimeWithTZ('e_log_time)
//val dTFixed = dateTime("2015-05-22T08:52:41.903-07:00")
val sqlStr = date"select e_log_time, $dT, $dT1, $dOW, $dOM, $dOY from eventlog_sessions"
val t = sql(sqlStr)
t.cache
t.show
t.registerTempTable("dateTest")
import com.github.nscala_time.time.Imports._
import org.apache.spark.sql.catalyst.dsl.expressions._
import org.sparklinedata.spark.dateTime.dsl.expressions._
dT: org.sparklinedata.spark.dateTime.dsl.DateExpression = org.sparklinedata.spark.dateTime.dsl.package$DateExpression@f435268
dT1: org.sparklinedata.spark.dateTime.dsl.DateExpression = org.sparklinedata.spark.dateTime.dsl.package$DateExpression@5f986a78
warning: there were 1 feature warning(s); re-run with -feature for details
dOW: org.apache.spark.sql.catalyst.expressions.Expression = 'dayOfWeekName('dateTime('e_log_time))
warning: there were 1 feature warning(s); re-run with -feature for details
dOM: org.apache.spark.sql.catalyst.expressions.Expression = 'dayOfMonth('dateTime('e_log_time))
warning: there were 1 feature warning(s); re-run with -feature for details
dOY: org.apache.spark.sql.catalyst.expressions.Expression = 'dayOfYear('dateTime('e_log_time))
sqlStr: String = select e_log_time, dateTime(e_log_time), dateMinus(dateTime(e_log_time),period("P8M")), dayOfWeekName(dateTime(e_log_time)), dayOfMonth(dateTime(e_log_time)), dayOfYear(dateTime(e_log_time)) from eventlog_sessions
t: org.apache.spark.sql.DataFrame = [e_log_time: string, _c1: sparkdatetim, _c2: sparkdatetim, _c3: string, _c4: int, _c5: int]
res362: t.type = [e_log_time: string, _c1: sparkdatetim, _c2: sparkdatetim, _c3: string, _c4: int, _c5: int]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 12 in stage 1430.0 failed 4 times, most recent failure: Lost task 12.3 in stage 1430.0 (TID 54364, 172.31.57.134): java.lang.IllegalArgumentException: Invalid format: "2015-05-28T09:30:35Z" is malformed at "Z"
at org.joda.time.format.DateTimeFormatter.parseDateTime(DateTimeFormatter.java:866)
at org.sparklinedata.spark.dateTime.Functions$.parseDate(Functions.scala:284)
at org.sparklinedata.spark.dateTime.Functions$.dateTimeFn(Functions.scala:53)
at org.sparklinedata.spark.dateTime.Functions$$anonfun$register$2.apply(Functions.scala:190)
at org.sparklinedata.spark.dateTime.Functions$$anonfun$register$2.apply(Functions.scala:190)
at org.apache.spark.sql.catalyst.expressions.ScalaUdf$$anonfun$2.apply(ScalaUdf.scala:71)
at org.apache.spark.sql.catalyst.expressions.ScalaUdf$$anonfun$2.apply(ScalaUdf.scala:70)
at org.apache.spark.sql.catalyst.expressions.ScalaUdf.eval(ScalaUdf.scala:960)
at org.apache.spark.sql.catalyst.expressions.Alias.eval(namedExpressions.scala:118)
at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:68)
at org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:52)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:885)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1765)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1266)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1257)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1256)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1256)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:730)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:730)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1450)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1411)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
I cannot reproduce. Can you test with the latest jar? Also, can you provide the version of joda being used? The stacktrace line numbers for Functions.parseDate and DateTimeFormatter.parseDateTime don't match the current version.
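For reference, the failure mode above is the classic one of parsing an ISO-8601 timestamp with a pattern that has no zone/offset directive, so the trailing `Z` is left unconsumed (in joda-time this is usually addressed with an ISO-aware formatter such as `ISODateTimeFormat.dateTimeNoMillis()` — mentioned here as a likely fix, not confirmed from this project's code). A minimal Python sketch of the same mismatch:

```python
from datetime import datetime

ts = "2015-05-28T09:30:35Z"

# A pattern without a zone/offset directive leaves the trailing "Z"
# unconsumed and the parse fails -- the same shape of error as
# "malformed at 'Z'" in the stack trace above.
try:
    datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S")
except ValueError as e:
    print("no-offset pattern failed:", e)

# Including the offset directive consumes "Z" (interpreted as UTC):
parsed = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S%z")
print(parsed.isoformat())  # 2015-05-28T09:30:35+00:00
```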
|
gharchive/issue
| 2015-06-25T23:17:52 |
2025-04-01T04:55:38.964487
|
{
"authors": [
"hbutani",
"sdesikan6"
],
"repo": "SparklineData/spark-datetime",
"url": "https://github.com/SparklineData/spark-datetime/issues/2",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1889694833
|
🛑 Yiffed Main is down
In f59e628, Yiffed Main (https://yiffed.net/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Yiffed Main is back up in f5b6252 after 7 minutes.
|
gharchive/issue
| 2023-09-11T05:30:34 |
2025-04-01T04:55:38.967160
|
{
"authors": [
"SparksTheFolf"
],
"repo": "SparksTheFolf/STF-Uptime-Status",
"url": "https://github.com/SparksTheFolf/STF-Uptime-Status/issues/466",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2164787767
|
Feature request: search in custom directory
No folder needs to be opened. The users will need to specify the directory to search in.
This will be implemented soon. It's pending.
The way it was requested won't be implemented (searching in subdirectories from an open directory is already supported, but I don't think it makes much sense to search in a directory without opening it; the user can open any folder at any time if they need to do a specific search).
|
gharchive/issue
| 2024-03-02T12:25:55 |
2025-04-01T04:55:38.970344
|
{
"authors": [
"SpartanJ",
"ghost"
],
"repo": "SpartanJ/ecode",
"url": "https://github.com/SpartanJ/ecode/issues/151",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2051280447
|
New task/workflow: Batch update collectors from verbatimCollectors or buffered_collecting_event
Feature or enhancement
The DwC importer's name parsing isn't perfect, and a portion of the CEs don't have Collectors assigned.
I want to be able to set the collectors of many CEs at once. Configuring a batch_update endpoint is the faster approach, but a new stepwise task would be more effective in the long run.
It should support assigning multiple collectors to the events, respecting the order of the People provided.
Location
Filter CE or a new task
Screenshot, napkin sketch of interface, or conceptual description
No response
Your role
No response
Related to #1836 as well.
I support the creation of this. Right now I can't batch update collectors from the data I get from Bionomia because I do not have this functionality.
Implemented in #3745. buffered_collecting_event isn't supported yet, but could be in the future.
|
gharchive/issue
| 2023-12-20T21:16:26 |
2025-04-01T04:55:38.991191
|
{
"authors": [
"LordFlashmeow",
"tmcelrath"
],
"repo": "SpeciesFileGroup/taxonworks",
"url": "https://github.com/SpeciesFileGroup/taxonworks/issues/3735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
417033591
|
Task - comprehensive specimen: transcribed coordinates from label outlined in red
The map indicates the correct position, but the user is left thinking something is wrong.
@jrflood Please screenshot
Pushed, can you test again and see if still getting the validation error?
This fixed the issue :)
Thanks!
|
gharchive/issue
| 2019-03-04T23:04:03 |
2025-04-01T04:55:38.993210
|
{
"authors": [
"jlpereira",
"jrflood",
"mjy"
],
"repo": "SpeciesFileGroup/taxonworks",
"url": "https://github.com/SpeciesFileGroup/taxonworks/issues/851",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2361881507
|
Expected open-source release date for the data
Hello:
When is the data expected to be open-sourced?
See https://huggingface.co/datasets/speechcolab/gigaspeech2
@yfyeung, thanks for open-sourcing the data. I'm using Whisper to recognize Thai and find that it often outputs extra words — how did you solve this problem?
@yfyeung Thanks~
@yfyeung Hello, will the Thai ASR model trained on GigaSpeech2 in the paper be open-sourced?
@yfyeung Thanks~
See https://github.com/SpeechColab/GigaSpeech2/issues/2
|
gharchive/issue
| 2024-06-19T09:38:45 |
2025-04-01T04:55:39.006372
|
{
"authors": [
"yfyeung",
"yy524"
],
"repo": "SpeechColab/GigaSpeech2",
"url": "https://github.com/SpeechColab/GigaSpeech2/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
623569370
|
Spells: Poison - monsters stop completly attacking
hello,
When I cast Poison on a monster (easily tested on a dragon) and then trigger a stealth/hide ability (or run away), wait a few seconds, and unhide, the monster doesn't attack me anymore. I can freely cast on it without being attacked, but anyone else approaching is still attacked by the monster automatically, as it should be.
Step to reproduce: i suggest on a dragon.
-> Cast Poison
-> wait the spell land
-> stealth (or simply run away)
-> wait at least one tick of poison (a 3-4sec)
-> reveal yourself (come back if you did it the running way!)
-> Cast any offensive spell, harm/magic arrow
At this stage the dragon won't attack you anymore for around 10 minutes. You can even tame it that way.
If any player other than me come near that dragon, the dragon will attack him on side and still ignore me.
Can't reproduce it :\
OK, here is the way to reproduce it (with a GM it's easier, but the bug is the same with a regular player):
1- .go despise
2- .add c_elemental_earth
3- .invul OFF
4- Wait the red message golem attack you
5- .cast 20 on the golem
6- Run from his LOS (go downstair, hide, run etc)
7- Wait 5 sec
8- Come back and walk next to him like you walk near a rock.
it doesn't seem to happen if you use the invisibility spell, but it does happen with the hiding skill as explained above.
1 - mob attacking you
2 - player hide
3 - mob going home
4 - player got revealed
Supposed to be fixed. Needs to be tested.
I tested it this way 10-15 times and now it seems to work well; the NPC goes back to attacking when you come back into LoS.
1- .go despise
2- .add c_elemental_earth
3- .invul OFF
4- Wait the red message golem attack you
5- .cast 20 on the golem
6- Run from his LOS (go downstair, hide, run etc)
7- Wait 5 sec
8- Come back and walk next to him like you walk near a rock.
|
gharchive/issue
| 2020-05-23T03:15:05 |
2025-04-01T04:55:39.014454
|
{
"authors": [
"Jhobean",
"Massi17",
"alexdan2",
"drk84",
"razyhost"
],
"repo": "Sphereserver/Source-X",
"url": "https://github.com/Sphereserver/Source-X/issues/485",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
386256983
|
Module configuration refactoring
The proposed changes are mostly aimed at improving overall code readability.
As a bonus, 2ae2a10 introduces a small feature: the connector banner message is extended with the total number of loaded rules, broken down by rule source, e.g.:
2018/11/30 08:20:22 [notice] 9116#9116: ModSecurity-nginx v1.0.0 (rules loaded inline/local/remote: 0/1634/0)
Hi @defanator,
The only thing that is crucial in terms of rules merging is the fact that the rules somehow have to respect the hierarchy in order to reduce the memory footprint. Two different virtual hosts can make use of a rule set loaded in the server configuration, instead of loading two different copies of the rules into memory.
In the effort SpiderLabs/ModSecurity#1970 we are trying to allow the rules to be reloaded at run time. Also, we are going to reduce memory usage by replacing some textual tags with binary properties (good for memory and CPU) and shared_pointers instead of string repetition, a pretty simple change that will produce a great result. Still, it is better to load the rules into memory only once than to have them loaded multiple times.
With that said, I think your patch is fine :) Indeed, it does improve the readability. The loaded-rules count is a nice addition.
@zimmerle rules merging hasn't changed with the proposed PR, I've just removed duplicate parts of code (i.e. merge_srv_conf and merge_loc_conf were actually doing the same stuff).
In regards to dynamic ModSecurity configuration management, I definitely like the idea. Currently one has to reload nginx by sending SIGHUP to the master process after the ModSecurity configuration has changed; although such an approach can work without breaking live traffic (thanks to nginx's ability to gracefully pass incoming load to a new set of workers), avoiding worker restarts is a much better thing. It may bring a number of challenges though.
@zimmerle in regards to memory footprint - yes, I'm pretty sure (lib)modsecurity (and/or connector modules) should be smart enough to detect whether the exactly same set of rules is being loaded for multiple entities (servers / locations / virtual hosts / you name it), and avoid any sort of duplicates.
I can imagine how to achieve this with loading rules from file (e.g. you can store and check SHA sums / modification times of a conf and its includes). Using remote sources may require additional checks like passing "If-Modified" and checking response codes, etc. Anyway, there is a room for improvements.
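The fingerprinting idea mentioned above can be sketched as follows (a hypothetical helper in Python, not ModSecurity code): hash a conf file together with its includes so that identical rule sets requested by multiple vhosts share one in-memory copy.

```python
import hashlib
from pathlib import Path

def ruleset_fingerprint(conf_path, include_paths=()):
    """Return a stable digest for a conf file plus its includes."""
    h = hashlib.sha256()
    for p in (conf_path, *include_paths):
        h.update(Path(p).read_bytes())
    return h.hexdigest()

# Loader keyed by fingerprint: identical rule sets are loaded only once.
_loaded = {}

def load_rules_once(conf_path, include_paths=()):
    key = ruleset_fingerprint(conf_path, include_paths)
    if key not in _loaded:
        # Stand-in for actually parsing the rule set into memory.
        _loaded[key] = f"rules@{key[:8]}"
    return _loaded[key]
```

Remote sources would need extra handling (e.g. conditional requests), as noted above; this sketch only covers the local-file case.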
Merged! ;) thanks!
|
gharchive/pull-request
| 2018-11-30T16:24:33 |
2025-04-01T04:55:39.018984
|
{
"authors": [
"defanator",
"zimmerle"
],
"repo": "SpiderLabs/ModSecurity-nginx",
"url": "https://github.com/SpiderLabs/ModSecurity-nginx/pull/139",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
333184991
|
Auto protobuf schema generation with Protobuf 2.0.2
Referring to #38, #39 and #40.
This is a merge between #38 and #40. A new change in this PR is bumping protoc-rust version to match the version of protobuf being used.
When this is ready if you could clean up the git history (mostly the most recent commits) that would be A+ and I'll close #40.
I just remove the wip commits from this PR.
You have 3.5.1 in 6 places in 2 files. It would be nice if we could set the protobuf version in a single place. E.g., create a top-level file called protobuf-version.txt that just reads 3.5.1 and then have build.rs and install-protobuf.sh both read that file. Or if you have another idea for how to simply maintain a single point of version declaration.
I have an idea to implement this. I will put the libprotoc 3.5.1 string into a PROTOC_VERSION file at the project root. build.rs will confirm the protoc version from the (compile-time) PROTOC_VERSION environment variable. I will also set up Travis CI accordingly.
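The single-source-of-truth idea can be sketched like this (illustrative Python, not the actual build.rs; the PROTOC_VERSION file name mirrors the proposal above):

```python
from pathlib import Path

def expected_protoc_version(project_root):
    """Read the pinned version string, e.g. 'libprotoc 3.5.1'."""
    return Path(project_root, "PROTOC_VERSION").read_text().strip()

def check_protoc(project_root, reported_version):
    """Compare the pinned version against `protoc --version` output."""
    expected = expected_protoc_version(project_root)
    if reported_version.strip() != expected:
        raise RuntimeError(
            f"protoc mismatch: expected {expected!r}, got {reported_version!r}"
        )
```

With this, the version string lives in one file; the build script and the install script both read it instead of hard-coding 3.5.1 in six places.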
Why did you get rid of cargo cache? Could you have avoided whatever problem by adding a call to cargo update before building?
Oh yes, my bad. Making a new change now.
|
gharchive/pull-request
| 2018-06-18T08:58:15 |
2025-04-01T04:55:39.210971
|
{
"authors": [
"dingxiangfei2009",
"nvesely"
],
"repo": "SpinResearch/merkle.rs",
"url": "https://github.com/SpinResearch/merkle.rs/pull/44",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
431906797
|
Gradle utils from bootstrap
This PR refactors the Gradle tools used in Spine and by our users:
all the Gradle-related and source-code-related utilities are moved into plugin-base and tool-base:
Gradle utils now reside in io.spine.tools.gradle;
code generation tools reside in io.spine.code.gen in tool-base;
project structure tools reside in io.spine.code.fs in tool-base;
Gralde utilities declared in the bootstrap repository are moved into plugin-base;
the unused Gradle Reflections plugin is deleted.
API changes:
the unused StringOption class is deleted;
the ResourceFiles utility is replaced with Resource, a wrapper object;
some methods of ClassName and other base types are removed and substituted with methods of tool-base types (e.g. ClassName.resolveFile() is substituted with SourceFile.whichDeclares(ClassName)).
As a result of the refactoring, we do not expose code generation tools to clients and do not force client code to implicitly depend on the JavaPoet library.
Some file-operating tools are left in base; these reside in io.spine.code.proto. The reason is that files are part of the Protobuf runtime model.
@armiol, @alexander-yevsyukov, PTAL.
Gralde utilities
Please fix the typo on “Gradle”.
Codecov Report
Merging #390 into master will decrease coverage by 1.14%.
The diff coverage is 88.06%.
@@ Coverage Diff @@
## master #390 +/- ##
============================================
- Coverage 82.46% 81.32% -1.15%
- Complexity 2695 2714 +19
============================================
Files 416 428 +12
Lines 10265 10366 +101
Branches 595 605 +10
============================================
- Hits 8465 8430 -35
- Misses 1627 1758 +131
- Partials 173 178 +5
@alexander-yevsyukov, PTAL again.
@armiol, if you can suggest a name better than Depender, please do.
@dmdashenkov @alexander-yevsyukov to me Depender is in fact DependencyContainer. I understand that's too long a name, but "depender" means something different from what we intended to convey. So I would stick to a longer but more comprehensible name.
Another approach to me would be to extend our Gradle Project wrapper with the methods of current Depender and try to get rid of MemoizingDepender in test utils.
|
gharchive/pull-request
| 2019-04-11T08:56:36 |
2025-04-01T04:55:39.239510
|
{
"authors": [
"alexander-yevsyukov",
"armiol",
"codecov-io",
"dmdashenkov"
],
"repo": "SpineEventEngine/base",
"url": "https://github.com/SpineEventEngine/base/pull/390",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
460444396
|
Disable code generation for model-only modules
Since the introduction of spine.assembleModel(), the model-only Gradle projects generated Java code in order to satisfy the internal API of the bootstrap plugin.
This PR disables the code generation activated by spine.assembleModel().
@armiol, PTAL.
|
gharchive/pull-request
| 2019-06-25T14:02:49 |
2025-04-01T04:55:39.241189
|
{
"authors": [
"dmdashenkov"
],
"repo": "SpineEventEngine/bootstrap",
"url": "https://github.com/SpineEventEngine/bootstrap/pull/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1743559227
|
Distinguish Shard pick-up results
This is a port of #1505 to master. Here is the full description of PR.
:warning: This PR introduces some breaking changes to public API.
What is Done
In this PR we start to distinguish shard pick-up results. In particular, it is now possible to find out the reason for an unsuccessful shard pick-up: there may be a runtime issue, or the shard may already be picked up by another worker.
Two new API endpoints were added to the DeliveryMonitor to provide end-users with some control over such cases:
FailedPickUp.Action onShardAlreadyPicked(AlreadyPickedUp failure)
This method will be invoked if the shard could not be picked as it's already picked by another worker. This method receives the ShardIndex of the shard that could not be picked, the WorkerId of the worker who owns the delivery session, and the Timestamp when the shard was picked. It is required to return an action to take in relation to this case. By default, it returns an action which just accepts this case (and ends the delivery session without any processing), but end-users may return a predefined action retrying the shard pick-up:
final class MyDeliveryMonitor extends DeliveryMonitor {
...
@Override
public FailedPickUp.Action onShardAlreadyPicked(AlreadyPickedUp failure) {
return failure.retry();
}
...
}
FailedPickUp.Action onShardPickUpFailure(RuntimeFailure failure)
This method is invoked if the shard could not be picked for some runtime technical reason, such as a runtime exception. This method receives the ShardIndex of the shard that could not be picked, and the instance of the occurred Exception. It also requires to return an action to handle this case. By default, such failures are just rethrown as RuntimeException, but end-users may choose to retry the pick-up:
final class MyDeliveryMonitor extends DeliveryMonitor {
...
@Override
public FailedPickUp.Action onShardPickUpFailure(RuntimeFailure failure) {
return failure.retry();
}
...
}
Breaking changes
The API of the ShardedWorkRegistry has been changed.
In particular, a new PickUpOutcome pickUp(ShardIndex index, NodeId node) method is introduced. Note, it returns an explicit result instead of Optional, as previously. This outcome contains either of two:
a ShardSessionRecord — meaning that the shard is picked successfully,
a ShardAlreadyPickedUp — a message that contains a WorkerID of the worker who owns the session at the moment, and the Timestamp when the shard was picked. This outcome means the session cannot be obtained as it's already picked.
Also, there is a new void release(ShardSessionRecord session) method that releases the passed session.
Here is a summary of code changes for those using ShardedWorkRegistry:
Before:
Optional<ShardProcessingSession> session = workRegistry.pickUp(index, currentNode);
if (session.isPresent()) { // Check if shard is picked.
// ...
session.get().complete(); // Release shard.
}
After:
PickUpOutcome outcome = workRegistry.pickUp(index, currentNode);
if (outcome.hasSession()) { // Check if shard is picked.
// ...
workRegistry.release(outcome.getSession()); // Release shard.
}
Also, the new API allows getting the WorkerId of the worker who owns the session in case if the shard is already picked by someone else and the Timestamp when the shard was picked:
PickUpOutcome outcome = workRegistry.pickUp(index, currentNode);
if (outcome.hasAlreadyPickedBy()) {
WorkerId worker = outcome.getAlreadyPicked().getWorker();
Timestamp whenPicked = outcome.getAlreadyPicked().getWhenPicked();
// ...
}
@alexander-yevsyukov PTAL.
|
gharchive/pull-request
| 2023-06-06T10:21:12 |
2025-04-01T04:55:39.251010
|
{
"authors": [
"armiol"
],
"repo": "SpineEventEngine/core-java",
"url": "https://github.com/SpineEventEngine/core-java/pull/1518",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
696645878
|
isNavigation does not make thumbnail a navigation link
Hi
I've got vue-splide working beautifully (great package) with a primary and secondary thumbnail slider. The primary and secondary are sync'd and working fine together, but there's one issue I don't understand how to resolve.
Secondary has isNavigation: true, which I can see adds some CSS properties to the thumbnail slider, but the thumbnails themselves do not become navigation links. Cursor on mouse hover remains as the default and clicking the thumbnail has no effect.
Comparing the <li> elements on my secondary to the example on splidejs.com shows that role="button" and aria-label="Go to slide X" are missing from mine.
Is this a bug, or is it possible I've used two incompatible options?
Thanks
splidePrimary: {
type : 'loop',
heightRatio: 0.6,
pagination : false,
arrows : true,
cover : true,
speed : 300,
lazyLoad : 'nearby',
preloadPages : 1,
classes: {
arrow : 'splide__arrow splide-custom-arrow',
},
},
splideSecondary: {
type : 'slide',
rewind : true,
fixedWidth : 130,
fixedHeight : 70,
isNavigation : true,
gap : 0,
focus : 'center',
pagination : false,
cover : true,
arrows : false,
lazyLoad : 'nearby',
preloadPages : 4,
breakpoints : {
'600': {
fixedWidth : 66,
fixedHeight : 40,
}
}
}
Closing the issue due to the major version update.
Feel free to create a new issue or open a new discussion.
|
gharchive/issue
| 2020-09-09T09:24:57 |
2025-04-01T04:55:39.256061
|
{
"authors": [
"AndyTruffs",
"NaotoshiFujita"
],
"repo": "Splidejs/vue-splide",
"url": "https://github.com/Splidejs/vue-splide/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
420116238
|
Unable to adjust the Entity "power" tag for Fireballs.
Description of the tag by the Official Minecraft Wiki:
power: List of 3 doubles that adds to direction every tick. Act as the acceleration.
~ Official Minecraft Wiki - Ghast
Just from reading this description you can see why, when altering the course of a Fireball entity, you would need to be able to adjust this tag.
What you would want to use is Keys#VELOCITY.
Also, Entity have get/setVelocity shortcuts methods.
get/setVelocity only gets/sets the current velocity.
power is something different.
To clarify, this is how the wiki would read if you used terms that you would see in SpongeAPI.
power: Vector3d that is added to velocity every tick. Acts as the acceleration.
So really this is just a modifier on velocity, correct?
Oh I see, I must have misunderstood the wiki and misread minecraft's code.
And yes, power is simply added to motion/velocity each tick.
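As a language-neutral illustration of that behavior (not Sponge or Minecraft code), the per-tick update amounts to:

```python
def tick(position, velocity, power):
    """power is added to velocity each tick (it acts as acceleration);
    velocity is then added to position (the motion)."""
    velocity = [v + p for v, p in zip(velocity, power)]
    position = [x + v for x, v in zip(position, velocity)]
    return position, velocity

# Two ticks of a fireball with constant power along +x:
pos, vel = [0.0, 64.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(2):
    pos, vel = tick(pos, vel, [0.1, 0.0, 0.0])
print(pos, vel)  # x position ~0.3 after two ticks; x velocity 0.2
```

This is why a plain get/setVelocity is not enough to permanently redirect a fireball: the `power` vector keeps pulling the velocity back toward the original course every tick.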
@Zidane Yes.
@RedNesto No problem.
It's one of those things where nobody would notice this unless they happened to be making some really awkward plugin that just happens to alter the course of fireballs.
|
gharchive/issue
| 2019-03-12T17:32:26 |
2025-04-01T04:55:39.300797
|
{
"authors": [
"Pulverizer",
"RedNesto",
"Zidane"
],
"repo": "SpongePowered/SpongeAPI",
"url": "https://github.com/SpongePowered/SpongeAPI/issues/1981",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
646856167
|
Update CactusBuilder from core:invalid
Located in: core:invalid
Part of: builders
This is part of the upgrade process for Sponge and is likely involving some work and investigation.
Advice and tips include:
Sometimes things are just a simple method name change
Might be a new class or field
Understanding what the original intention of the target code had
Analyzing whether the functionality achieved by the target class/mixin still exists
Understanding the new functionality in the new Minecraft version that requires rewriting for the functionality
The SpongeAPI usage could very well have gone missing out the window if functionality was deemed unnecessary for release
Closed in https://github.com/SpongePowered/SpongeCommon/commit/c4781d0bd176182a0dedfbf9551f409e69fef2e4
|
gharchive/issue
| 2020-06-28T06:52:20 |
2025-04-01T04:55:39.306832
|
{
"authors": [
"Zidane",
"gabizou"
],
"repo": "SpongePowered/SpongeCommon",
"url": "https://github.com/SpongePowered/SpongeCommon/issues/2915",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
314329225
|
HoconConfigurationLoader cannot handle empty list normally
A field containing an empty list is dropped silently (for example, {a: 42, b: []} => {a: 42}) when parsed as the HOCON file format, and the node.getChildrenMap method will return a map of only 1 key-value pair where there should have been 2. However, node.getList will return an empty list if the node is virtual, so this bug only shows up when iterating over values instead of getting them directly.
The reason is that HoconConfigurationLoader did not create a list before adding values. I don't know whether similar bugs also happen with YAML, Gson, etc., but I think the related code there should also be checked.
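To illustrate the symptom (a plain-Python sketch, not the Configurate code): a loader that materializes a list node only when it sees the first element will silently drop keys whose value is an empty list.

```python
def buggy_load(pairs):
    """Mimics a loader that creates the list node only on the first element."""
    node = {}
    for key, value in pairs:
        if isinstance(value, list):
            for item in value:  # empty list: loop body never runs,
                node.setdefault(key, []).append(item)  # so the key is never created
        else:
            node[key] = value
    return node

def fixed_load(pairs):
    """Create the (possibly empty) list node before appending values."""
    node = {}
    for key, value in pairs:
        if isinstance(value, list):
            node[key] = list(value)  # the list exists even when empty
        else:
            node[key] = value
    return node

data = [("a", 42), ("b", [])]
print(buggy_load(data))  # {'a': 42}            -- 'b' silently dropped
print(fixed_load(data))  # {'a': 42, 'b': []}
```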
How is it going now?
I can look at this in a few days hopefully, but if you or maybe @lucko would like to contribute a PR, I'd be happy to review it and get it in as soon as possible.
News?
@Eufranio @ustc-zzzz @randombyte-developer
Can you provide me with a test case? I think I can see where the issue lies, but I'm not sure if it fixes the same problem.
I seem to have stumbled upon the same problem.
anEmptyListInHoconConfig.isVirtual() == false
Will there be any resolution on this issue?
|
gharchive/issue
| 2018-04-14T13:30:22 |
2025-04-01T04:55:39.312042
|
{
"authors": [
"Eufranio",
"Lignium",
"Meronat",
"lucko",
"ustc-zzzz"
],
"repo": "SpongePowered/configurate",
"url": "https://github.com/SpongePowered/configurate/issues/93",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2022050120
|
🛑 [F] IP Ending with .98 is down
In 01561be, [F] IP Ending with .98 ($IP_GRP_F.98:$MONITORING_PORT) was down:
HTTP code: 0
Response time: 0 ms
Resolved: [F] IP Ending with .98 is back up in 048add9 after 30 minutes.
|
gharchive/issue
| 2023-12-02T13:57:08 |
2025-04-01T04:55:39.318018
|
{
"authors": [
"SpookyServicesBot"
],
"repo": "SpookyServices/Spookhost-Hosting-Servers-Status",
"url": "https://github.com/SpookyServices/Spookhost-Hosting-Servers-Status/issues/722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|