| id (string, lengths 4 to 10) | text (string, lengths 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
|---|---|---|---|---|---|
1457396166 | React to UI code move
All UI code is about to be moved out of core into appui.
### Tasks
- [x] https://github.com/iTwin/presentation/issues/2
- [x] https://github.com/iTwin/presentation/issues/8
- [x] https://github.com/iTwin/presentation/issues/4
- [x] https://github.com/iTwin/presentation/issues/16
- [x] https://github.com/iTwin/presentation/issues/13
- [x] https://github.com/iTwin/presentation/issues/25
- [x] https://github.com/iTwin/presentation/issues/14
- [x] https://github.com/iTwin/presentation/issues/3
- [x] https://github.com/iTwin/presentation/issues/31
- [x] https://github.com/iTwin/presentation/issues/38
- [x] https://github.com/iTwin/presentation/issues/15
- [ ] https://github.com/iTwin/presentation/issues/40
- [ ] https://github.com/iTwin/presentation/issues/43
- [ ] https://github.com/iTwin/presentation/issues/42
All items done
| gharchive/issue | 2022-11-18T12:44:18 | 2025-04-01T06:39:00.200684 | {
"authors": [
"grigasp"
],
"repo": "iTwin/presentation",
"url": "https://github.com/iTwin/presentation/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
644750090 | Is there a sane way to store tokens created by a service?
Is it possible to store tokens provided by an existing non-OAuth authentication service?
I'm trying to use Providers.Credentials to integrate with an existing backend service that issues JWTs, but I'm having a heck of a time figuring out how to make this work.
When I POST /auth/tokens with the credentials, the access and refresh tokens are handed back to me. I need to have these tokens available to me in the API layer so that I can add them to the request headers of subsequent requests, but it's not clear to me how to sanely store these for later access (storing them in a global or singleton in a package feels side-effecty and generally "icky").
Here's what I have so far:
```js
async function authorize({ username, password }) {
  const client = makeClient()
  try {
    const {
      data: {
        access,
        refresh,
      },
    } = await client.post('/auth/tokens', {
      email: username,
      password,
    })
    const { data: user } = await client.get('/users/me', {
      headers: {
        Authorization: makeAuthHeader(access),
      }
    })
    return user
  } catch (error) {
    console.error('ERROR: %o', error)
    return null
  }
}
```
Documentation feedback
[ ] Found the documentation helpful
[x] Found documentation but was incomplete
[ ] Could not find relevant documentation
[ ] Found the example project helpful
[x] Did not find the example project helpful
I've even dug through the source code a little to see if there was a way to "sneak" these tokens into the session, but I didn't see a clear path forward.
Hey thanks for the detail and for feedback on documentation.
I think I understand what you want to do and it makes sense. I actually had to check to see what would happen when I tried it because it's a reasonable expectation but I wasn't sure it was supported or not.
It turns out there is a bug with the credentials flow: the user object isn't persisted when you sign in, but it is supposed to be. You should be able to access the user object returned in the jwt() and signin() callbacks, but the user is coming through as a function rather than an object.
You can work around that by using a JWT callback option that takes care of that problem by calling the function:
```js
callbacks: {
  jwt: async (token) => {
    if (typeof token.user === 'function') {
      token.user = token.user()
    }
    console.log('JWT DEBUG', token)
    return Promise.resolve(token)
  },
}
```
This is a legit bug and should be fixed.
@iaincollins Thanks, but the problem isn't that I can't get at the user object....the problem is that there's no easy way to store the access and refresh tokens I get back from my server. After much tinkering around, here's what I currently have. As you can see from the code below, I basically had to pass the res object all the way down into my authorize function so that I could set a pair of cookies. If you can think of a better solution, I'm all ears!
```js
export default function handleRequest(req, res) {
  return nextAuth(req, res, makeConfig(req, res))
}

function makeConfig(req, res) {
  const authorize = makeAuthorizeFn(req, res)
  // redacted
}
```
```js
export function makeAuthorizeFn(res) {
  return async function authorize({ username, password }) {
    const client = makeClient()
    try {
      const {
        data: {
          access,
          refresh,
        },
      } = await client.post('/auth/tokens', {
        email: username,
        password,
      })
      // This is a bit hacky, but at least it gets the tokens back
      // to the client.
      res.setHeader('Set-Cookie', [
        serialize('access', access, { path: '/' }),
        serialize('refresh', refresh, { path: '/' }),
      ])
      // redacted
}
```
Oh sure! The idea is that you should be able to set them on the user object and for that to get persisted (securely) in the JWT.
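For illustration, a minimal sketch of that idea: attach the tokens to the user object returned from authorize(), so they end up persisted in the JWT. This reuses the makeClient/makeAuthHeader helpers from the snippets above; the accessToken and refreshToken property names are arbitrary illustrations, not part of the NextAuth API.

```js
// Hypothetical sketch only: attach the tokens to the user object in authorize().
async function authorize({ username, password }) {
  const client = makeClient()
  const { data: { access, refresh } } = await client.post('/auth/tokens', {
    email: username,
    password,
  })
  const { data: user } = await client.get('/users/me', {
    headers: { Authorization: makeAuthHeader(access) },
  })
  // Anything added here should be persisted to the JWT (as token.user)
  // and can later be exposed selectively via the session() callback.
  return { ...user, accessToken: access, refreshToken: refresh }
}
```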
One thing to note here is that when I tried to set anything other than { id, name, email, image } on the user object, it would not make it into the session. I didn't dig far enough into the code to figure out why or where the other keys were being weeded out.
If you add a property to the user object in this scenario, then it's persisted to the JSON Web Token (assuming you use the workaround above until the bug is fixed; when it's fixed you won't need to touch the jwt callback).
The session callback can be used to decide what properties can be safely exposed / exported from the JWT to the client side session object, if that makes sense.
@iaincollins, while I did notice that the example provided in the documentation was setting the user as a function, I do not have that problem because I'm already resolving the authorize function with the user object...but you couldn't know that because I redacted the code to keep the focus where it belongs. :-)
Here's the entirety of my makeAuthorizeFn higher-order function:
```js
export function makeAuthorizeFn(res) {
  return async function authorize({ username, password }) {
    const client = makeClient()
    try {
      const {
        data: {
          access,
          refresh,
        },
      } = await client.post('/auth/tokens', {
        email: username,
        password,
      })
      // This is a bit hacky, but at least it gets the tokens back
      // to the client.
      res.setHeader('Set-Cookie', [
        serialize('access', access, { path: '/' }),
        serialize('refresh', refresh, { path: '/' }),
      ])
      const { data: user } = await client.get('/users/me', {
        headers: {
          Authorization: makeAuthHeader(access),
        }
      })
      // N.B.: Adding fields to the user object at this point does not work
      return user
    } catch (error) {
      console.error('ERROR: %o', error)
      return null
    }
  }
}
```
Just for grins and giggles, I tried adding user.foo = "hello, world" where I currently have the N.B. comment, but when the session is dumped, the foo datum is nowhere to be found:
{"user":{"name":"Dan Kreft","email":"dan@kreft.net","image":null},"expires":"2020-07-24T23:20:32.599Z"}
This is what I was referring to in my previous comment.
I cannot simply use the jwt callback, because that callback does not have access to the tokens that were returned to me by the service.
This is what your callbacks config needs to look like:
```js
callbacks: {
  jwt: async (token) => {
    if (typeof token.user === 'function') { token.user = token.user() } // Workaround required for bug
    return Promise.resolve(token)
  },
  session: async (session, token) => {
    // Copy properties from token contents to the client side session.
    //
    // By default only 'safe' values like name, email and image, which are
    // typically needed for presentation purposes (e.g. "you are logged in as…"),
    // are exposed, to avoid exposing sensitive information to the client inadvertently.
    session.user.data = token.user.data
    return Promise.resolve(session)
  }
}
```
The user object returned from the authorize callback is saved to the JWT (e.g. in token.user).
The session() callback controls what data is exposed from the JWT to the client session.
You shouldn't need the JWT callback; it's only used above because of a bug with JWT and the Credentials Provider.
The bug related to this is now being tracked in #329
Note that async is not necessary (it actually causes eslint to complain) if you don't have an await in the function. I'll have another look at your proposed solution.
Okay, now I see what's going on here. It's a little confusing the way it's laid out here because I never would have thought that the session() callback's second argument would be the thing returned by authorize().
At this point, though, I'm thinking that I'm probably going to stick with my current res.setHeader() solution because at least with the way I have it laid out now, my client doesn't have to worry about extracting the tokens from the session and figuring out where to put them (e.g. in cookies or in localStorage)...it only has to concern itself with pulling the tokens out of their respective cookies.
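For context, the client-side half of that cookie approach can stay very small. A sketch (it assumes the cookies set by serialize() above are not HttpOnly, otherwise they are not readable from JavaScript):

```js
// Sketch: read the access/refresh tokens that the server set as cookies.
function readCookie(name) {
  const match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'))
  return match ? decodeURIComponent(match[1]) : null
}

const access = readCookie('access')
const refresh = readCookie('refresh')
// e.g. attach to outgoing requests:
// client.get('/some/endpoint', { headers: { Authorization: makeAuthHeader(access) } })
```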
Thanks for your diligence...I feel more confident in using NextAuth knowing that you're so attentive. :-)
Note that async is not necessary (it actually causes eslint to complain)
The purpose of it whenever I give an example code is to make it clear the function is async and a promise is expected.
(Otherwise people then immediately ask "How can I do async calls?")
Okay, now I see what's going on here. It's a little confusing the way it's laid out here because I never would have thought that the session() callback's second argument would contain the thing returned by authorize().
To clarify, as per the docs it's always the contents of the JSON Web Token as the second argument to the session() callback. It is present if, and only if, JWT sessions are enabled. This is not always the same as what is returned by authorize() as what is saved to the JWT can be overridden in the jwt() callback.
For users who are using other configurations, other data will be stored in the JWT (e.g. a user object from a database and/or the profile response from an OAuth Provider).
The session() callback provides a way to selectively expose things to the client securely, in a simple and uniform way that works for all providers, and is almost certainly less code and more performant than other solutions.
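A hedged sketch of that kind of override, copying tokens from the user object into the JWT on sign-in and exposing only one of them to the client. Depending on the NextAuth version, the jwt() callback may also receive the user object as a second argument on the initial sign-in; the accessToken/refreshToken names are illustrative only.

```js
callbacks: {
  jwt: async (token, user) => {
    // `user` is typically only defined on the initial sign-in call.
    if (user) {
      token.accessToken = user.accessToken   // hypothetical property
      token.refreshToken = user.refreshToken // hypothetical property
    }
    return token
  },
  session: async (session, token) => {
    // Expose only what the client actually needs.
    session.accessToken = token.accessToken
    return session
  }
}
```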
my client doesn't have to worry about extracting the tokens from the session and figuring out where to put them (e.g. in cookies or in localStorage).
Any properties accessed via a session object are automatically kept up to date, kept in sync across tabs and windows, and persisted across page navigation in a single page app, so that people can avoid doing exactly this (which, actually, you have done: you have put them in cookies).
I'd recommend that anyone else reading this use the session property; if nothing else it's much less code to maintain and makes it easier to avoid bugs.
To confirm for anyone reading this in future, this is all you need to do to add data to a session if you return it in a user object from authorize():
```js
callbacks: {
  session: (session, token) => {
    session.user.data = token.user.data
    return session
  }
}
```
If you do that, they will be there when you access the session object from the client.
@iaincollins I'm taking another look at using the session (I was ignorant of getSession()) and this looks promising, but I don't see a way to update data in the session after login. If I'm to store my tokens in the session, I need to be able to write to it after I refresh my access token. I've tried Googling for a solution, but to no avail. Is there something else I'm missing?
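One commonly used pattern for this, sketched here with hypothetical field names and endpoint (not taken from this thread or from the NextAuth docs), is to perform the refresh inside the jwt() callback, since that callback runs whenever the session is accessed:

```js
// Rough sketch only: refresh the access token from inside the jwt() callback.
jwt: async (token) => {
  if (token.accessTokenExpires && Date.now() > token.accessTokenExpires) {
    const { data } = await makeClient().post('/auth/tokens/refresh', {
      refresh: token.refreshToken,
    })
    token.accessToken = data.access
    token.accessTokenExpires = Date.now() + data.expiresIn * 1000
  }
  return token
}
```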
| gharchive/issue | 2020-06-24T16:29:41 | 2025-04-01T06:39:00.226642 | {
"authors": [
"dkreft",
"iaincollins"
],
"repo": "iaincollins/next-auth",
"url": "https://github.com/iaincollins/next-auth/issues/325",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
2251468053 | Download annotations from GUI as CSV
Similar to the image saving, it'd be nice to have a button to download the annotations table directly from the GUI.
Links to download both the annotations and the dot file have been added to the interface.
| gharchive/issue | 2024-04-18T20:06:48 | 2025-04-01T06:39:00.228594 | {
"authors": [
"ialbert",
"j-andrews7"
],
"repo": "ialbert/genescape-central",
"url": "https://github.com/ialbert/genescape-central/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
90691302 | Disable tooltips on mobile devices
Hi,
I'm trying to find a way to disable tooltips from being displayed when the viewport is smaller than 800px
Thanks
Hi,
you should probably use a condition like `if ($(window).width() >= 800) { $(sel).tooltipster({...}); }`.
You may also do this check in functionBefore and return false if the screen is too small, in case you want to account for the possibility that the user changes their screen resolution over time.
It's also possible to use a simple media query on .tooltipster-base if you want to hide the tooltips (they will actually still run though, but will be invisible). But beware that it might cause memory leaks in case they're not auto-closing.
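For the functionBefore approach, a sketch of what that could look like (the exact callback signature differs between Tooltipster 3.x and 4.x; the 4.x-style form is assumed here):

```js
// Sketch: cancel the tooltip when the viewport is narrower than 800px.
$(sel).tooltipster({
  functionBefore: function (instance, helper) {
    // Returning false prevents the tooltip from opening.
    return $(window).width() >= 800
  }
})
```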
Okay thanks
| gharchive/issue | 2015-06-24T14:25:42 | 2025-04-01T06:39:00.249876 | {
"authors": [
"karlhudsonphillips",
"louisameline"
],
"repo": "iamceege/tooltipster",
"url": "https://github.com/iamceege/tooltipster/issues/411",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2260319002 | Expose Input focus & blur event
Expose the Input focus & blur events, so the user can control the focus / blur events in a form.
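A minimal usage sketch, assuming the events are emitted under the names focus and blur (check the release for the actual event names):

```js
// Hypothetical usage sketch for the newly exposed events.
export default {
  template: `
    <vue-tel-input v-model="phone" @focus="onPhoneFocus" @blur="validatePhone" />
  `,
  data: () => ({ phone: '' }),
  methods: {
    onPhoneFocus() { /* e.g. clear a previous validation error */ },
    validatePhone() { /* e.g. validate once the field loses focus */ },
  },
}
```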
thanks for your help @waveo-wangxiao!
| gharchive/pull-request | 2024-04-24T04:57:49 | 2025-04-01T06:39:00.266043 | {
"authors": [
"iamstevendao",
"waveo-wangxiao"
],
"repo": "iamstevendao/vue-tel-input",
"url": "https://github.com/iamstevendao/vue-tel-input/pull/458",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2263872316 | Error When Prompting Mistral/Mixtral Model from AWS Bedrock
Description:
I successfully cloned the repository from the main branch and tested the application with LLMs deployed on Azure using the OpenAI resource, and everything worked as expected. However, I encountered an issue when attempting to prompt a Mistral model hosted on AWS Bedrock.
Steps to Reproduce:
Cloned the repository from the main branch.
Created temporary credentials on AWS for accessing the AWS Bedrock.
Attempted to send prompts to the Mistral/Mixtral LLM models.
Received the following error message for both model types.
Expected Behavior:
I expected that the prompts would be successfully processed by the Mistral/Mixtral models without any errors.
Observed Behavior:
When attempting to send prompts, the application failed to collect responses and suggested re-running the prompt node. The specific error logged was:
Errors collecting responses. Re-run prompt node to retry.
Mistral Mixtral: "c.get is not a function"
Environment:
Operating System: Ubuntu 22.04 + docker
Thanks for this issue. @massi-ang is the point person for Bedrock integrations.
@massi-ang might you know what is going on?
c.get doesn't sound like any code in the ChainForge code base (there is no "get" function or "c.get" call). So, it sounds like this might actually be a bug on the Bedrock side with loading the model. Not sure.
Hi @ianarawjo, do you use the boto3 AWS client to query the Bedrock models? If not, I'm curious—why don't you use it? I've noticed it's more straightforward since you can just use regular API keys without needing a session key.
I didn’t write the Bedrock integration, @massi-ang did. He would have to comment here.
Hi, we use the boto3 client. The @mirai73/bedrock-fm library is just a wrapper around the boto3 client to provide the correct prompt formatting for the different models. The reason why we require temporary credentials and a session token via the UI is to safeguard you against the possibility of accidentally leaking long term credentials.
I'll need to look at how to allow using just an AccessKeyId and SecretAccessKey when running locally, that is, when using chainforge/app.py serve.
@bahamasbahamas regarding the issue you are facing, I tested the latest main commit and I could not reproduce the error. Does this happen with any flow and prompt or just specific configurations?
Hi @bahamasbahamas. I reproduced this problem both on the chainforge.ai/play site and when using chainforge serve from a local install. On the other hand, I do not get this error when I clone the repo locally, build the react backend from chainforge/react-server with npm run build, and then serve it via python chainforge/app.py serve. Similarly, there is no error when serving the react app by other means, as you can see by accessing this URL: https://d1sozkr3w0qe91.cloudfront.net/.
@ianarawjo can you share how the react app is built so that I can get to the root cause of the problem?
There is no difference in how the react app is built: npm run build is what is done in all cases. The sole difference is just adding /play to the HTML index page on the /play site. There are no other differences.
| gharchive/issue | 2024-04-25T15:14:35 | 2025-04-01T06:39:00.314832 | {
"authors": [
"bahamasbahamas",
"ianarawjo",
"massi-ang"
],
"repo": "ianarawjo/ChainForge",
"url": "https://github.com/ianarawjo/ChainForge/issues/264",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
199793802 | bugfix:error in node 0.12.7
For issue #88: TypeError: Object.keys called on non-object.
This modifies the order of the conditions.
Coverage remained the same at 93.592% when pulling 17a55be98fcc3d07f306aad58cfb8a7a55c8a473 on 81735595:master into 6971e27a577d165cde360ebed86a59dfc18ac55b on iarna:master.
An error in all versions of node, I expect. Thank you!
| gharchive/pull-request | 2017-01-10T11:12:05 | 2025-04-01T06:39:00.337825 | {
"authors": [
"81735595",
"coveralls",
"iarna"
],
"repo": "iarna/gauge",
"url": "https://github.com/iarna/gauge/pull/89",
"license": "isc",
"license_type": "permissive",
"license_source": "bigquery"
} |
1730888064 | error with installing pyfhel "ERROR: Could not build wheels for Pyfhel"
I have two issues @ibarrond
First, I installed Pyfhel 2.0.1, but when I tried to import it, I got this error:
ModuleNotFoundError: No module named 'pyfhel'
which leads me to the main issue that made me use pyfhel version 2.0.1
whenever I try to install the new versions using
pip install pyfhel==3.4.0
or
pip install pyfhel==3.4.1.
or even the plain pip install pyfhel
I got this error over and over again
I need this for my master's thesis. Please help as soon as possible.
Try with Pyfhel, capitalizing the first letter.
For the installation of a more recent version:
Try in a console with admin rights.
Otherwise install it in WSL.
Thank you for your reply.
However, I have tried following your instructions, but unfortunately, it didn't work.
I even tried it on a Kali Linux system (VMware version), but it didn't work either, showing the same error message, even when using sudo (superuser privileges).
Closing, since the error was posted as images and not as text.
| gharchive/issue | 2023-05-29T14:43:16 | 2025-04-01T06:39:00.354685 | {
"authors": [
"aziz07ghm",
"ibarrond"
],
"repo": "ibarrond/Pyfhel",
"url": "https://github.com/ibarrond/Pyfhel/issues/193",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1975565777 | [Feature request] Run ping only till the first success
Currently the user can choose the number of pings to make. But in fact quite often 1 successful ping is enough to understand that the server is online, while 1 failure should not be enough, because the problem can be with the user's mobile connection, so additional attempts would be required.
Proposal: Add a new checkbox "Stop after the first successful" near the ping amount (3 by default) settings.
Thanks for the suggestion. I see the point, but this cannot be done the way it's currently implemented. The Java/Android API does not provide a library for ICMP/ping. The app simply calls the system ping and parses the output for additional info; success is decided by the ping process return code (0 = success). The ping count is the "-c" option of the ping command, and it simply pings n times and does not stop prematurely. Of course I could do several pings with "-c 1" and aggregate everything myself, but I think it's too much effort for too little gain. The current behaviour also makes sense and I like it, so it would mean 2 different implementations and a switch.
Of course I could do several pings with "-c 1" and aggregate everything by myself but I think it's to much effort for to little gain.
In this case I would definitely do that, call it several times.
The current behavior is not making much sense to me. E.g. what should happen if the user set 3 pings and got 2 successes and 1 failure? Should the user be notified that the server is down or up? In my opinion it is up, but that means that all pings out of 3 were useless except the first that succeeded, which is exactly what I proposed.
Maybe I am missing some use case for the current behavior.
It's the "standard" behaviour of the ping utility: it reports overall success on at least one successful attempt. For me this makes the most sense. More pings in one try offer additional info, like average latency. However, with the suggested approach I can implement whatever behaviour seems reasonable and I'm not tied to ping behaviour, so it's obviously better, but it's a matter of whether it's worth the effort, since currently it may do additional pings that may not be necessary (depending on one's view on this) but do not hurt too much either. I'll think about it when I find the time for the pings. The time schedule feature will take a lot of time anyway. Maybe I'll be in the mood to do other things in between, but not now.
Release 1.5.0 provides a "stop on success" switch for ping and connect.
| gharchive/issue | 2023-11-03T07:06:16 | 2025-04-01T06:39:00.364706 | {
"authors": [
"baack",
"ibbaa"
],
"repo": "ibbaa/keepitup",
"url": "https://github.com/ibbaa/keepitup/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2470432897 | Provide Diamond with --jobs, not "--processes"
Description
When investigating the inconsistent --processes tag, I realised that run_diamond.sh was not written in a robust way, and "processes" is not a good description of its function.
Specifically, in this command (pulled out the parts not relevant to parallelization):
```sh
ls ${DB_PATH}/nr*.dmnd | parallel -j ${PROCESSES} \
    diamond blastx --quiet -d ${DB_PATH}/nr.{%}.dmnd \
    --threads ${THREADS} -o ${OUTPUT}.{%}.tsv
```
- PROCESSES is not really "processes", it's the number of jobs we tell parallel to run at once
- we use the {%} replacement string, which is the job number, to decide which database we use in -d... I think the way we've set this up we need to literally always be counting from 1-6 (which is what screen.py was doing, but not run_pipeline.sh). Based on some local testing of parallel, if we were running with only 2 processes, then the outputs would get (I think? not sure what diamond does with pre-existing output files) overwritten
I have changed this to set the number of jobs to run in parallel based on the threads and CPUs (unless a number is supplied) and made some other parts of run_diamond a bit more robust, then propagated those changes.
Issues:
Addresses #10, though we'll need to change the wiki once this is released
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
[x] Breaking change (fix or feature that would cause existing functionality to not work as expected) (well, it's a breaking change to run_pipeline, but not to commec screen, which is the preferred interface)
Thanks for raising that, I hadn't caught the possible weird problem with integer division. I think (more or less by accident) this will be fine in this case, because GNU parallel interprets --jobs 0 as "run as many jobs in parallel as possible" according to the docs.
In the case where threads is 1, we aren't concerned about needing to manage the number of jobs because we're not multithreading, so -j 0 --use-cpus-instead-of-cores is the behaviour you want.
| gharchive/pull-request | 2024-08-16T14:52:58 | 2025-04-01T06:39:00.370243 | {
"authors": [
"alexanian"
],
"repo": "ibbis-screening/common-mechanism",
"url": "https://github.com/ibbis-screening/common-mechanism/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1055613759 | feature(editor): set route to in progress on edit, warn about unapproved routes when exporting
Checklist
[x] Appropriate branch selected (all PRs must first be merged to dev before they can be merged to master)
[x] Any modified or new methods or classes have helpful JSDoc and code is thoroughly commented
[x] The description lists all applicable issues this PR seeks to resolve
[x] The description lists any configuration setting(s) that differ from the default settings
[x] All tests and CI builds passing
[x] The description lists all relevant PRs included in this release (remove this if not merging to master)
Description
Two main changes:
When any part of a route gets edited, the progress gets set to In Progress.
In the Create Snapshot window, any in progress or pending routes will be displayed as a warning, since they will not be included in the published result.
The snapshot pane directly requests the list of routes from the backend, which requires a lock. Therefore, in the Editor Feed Source Panel, a lock is requested before opening the Snapshot pane. After closing, the lock is removed.
A few issues: Why are the tests failing? It looks like it is not related to my code.
Also, I'm having a type issue on line 78 in CreateSnapshotModal.js that shows up in VSCode.
"Cannot call fetchRouteEntities().then because property then is missing in function [1].Flow(InferError)"
I think we finally got there in the end!
| gharchive/pull-request | 2021-11-17T01:45:00 | 2025-04-01T06:39:00.392888 | {
"authors": [
"daniel-heppner-ibigroup"
],
"repo": "ibi-group/datatools-ui",
"url": "https://github.com/ibi-group/datatools-ui/pull/737",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1856867591 | Speed up feed source retrieval
Checklist
[x] Appropriate branch selected (all PRs must first be merged to dev before they can be merged to master)
[x] Any modified or new methods or classes have helpful JSDoc and code is thoroughly commented
[x] The description lists all applicable issues this PR seeks to resolve
[x] The description lists any configuration setting(s) that differ from the default settings
[ ] All tests and CI builds passing
Description
Companion front end PR for https://github.com/ibi-group/datatools-server/pull/529. This PR makes use of the new FeedSourceSummary endpoint on the front end to speed up the FeedSourceTable.
e2e tests, of course, will fail until the back end PR is merged
Backend is merged, tests should be passing!
| gharchive/pull-request | 2023-08-18T14:58:52 | 2025-04-01T06:39:00.396342 | {
"authors": [
"philip-cline"
],
"repo": "ibi-group/datatools-ui",
"url": "https://github.com/ibi-group/datatools-ui/pull/981",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
175441338 | When using emoji in a YYLabel there is a large gap on the right edge; in particular, after typing an emoji at the far right, typing more text causes a line break
Line breaking is controlled by the system's CoreText.
If you don't mind English words being broken across lines, you can set lineBreakMode to char wrapping.
| gharchive/issue | 2016-09-07T08:22:33 | 2025-04-01T06:39:00.399493 | {
"authors": [
"ibireme",
"liamscofield"
],
"repo": "ibireme/YYText",
"url": "https://github.com/ibireme/YYText/issues/488",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2484094152 | refactor: simplify, move frameworks, etc.
drop dagster (memory issue that needs investigation + drop orchestration complexity)
add website
streamlit -> shiny
add a package/CLI
no more VM needed, just run in a GHA
generally a regression on the dashboard visualizations, but all the data is there
going to yolo merge this and go from there
| gharchive/pull-request | 2024-08-24T01:15:11 | 2025-04-01T06:39:00.401228 | {
"authors": [
"lostmygithubaccount"
],
"repo": "ibis-project/ibis-analytics",
"url": "https://github.com/ibis-project/ibis-analytics/pull/63",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1572895750 | Updated Notebook
Updated Notebook Pre-Trained Models
Updated Notebook Fine-Tune Models
@sahil11129 Table of contents still has a different format. Please let me or @Abhilasha-Mangal know if you are having trouble formatting it.
| gharchive/pull-request | 2023-02-06T16:33:14 | 2025-04-01T06:39:00.411763 | {
"authors": [
"biharicoder",
"sahil11129"
],
"repo": "ibm-build-lab/Watson-NLP",
"url": "https://github.com/ibm-build-lab/Watson-NLP/pull/41",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2296124131 | Pass OpenJCEPlus Branch to the build scripts
Also add ability to override openjceplus branch at build launch
Test cases
00:00:21.655 "PUBLISH_NAME": "jdk-11.0.23.1.2+9_openj9-0.44.0-rc2",
...
00:00:46.663 WARNING: OpenJCEPlus Branch cannot be determined based on PUBLISH_NAME. Using default branch: semeru-java11
00:01:02.701 "PUBLISH_NAME": "jdk-11.0.23+9_openj9-0.44.0-rc2",
...
00:01:27.776 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-11.0.23"
00:00:19.644 "PUBLISH_NAME": "jdk-11+9_openj9-0.44.0-rc2",
...
00:00:43.651 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-11"
00:00:15.454 "PUBLISH_NAME": "jdk-11.0.23.1+9_openj9-0.44.0-rc2",
...
00:00:40.814 BUILD_CONFIG[OPENJCEPLUS_BRANCH]="semeru-java-11.0.23.1"
| gharchive/pull-request | 2024-05-14T18:39:28 | 2025-04-01T06:39:00.430506 | {
"authors": [
"AdamBrousseau"
],
"repo": "ibmruntimes/ci-jenkins-pipelines",
"url": "https://github.com/ibmruntimes/ci-jenkins-pipelines/pull/223",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
899024480 | 🛑 Secret Site is down
In f874fa4, Secret Site ($SECRET_SITE) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Secret Site is back up in b814f28.
| gharchive/issue | 2021-05-23T13:48:44 | 2025-04-01T06:39:00.447186 | {
"authors": [
"sqeven"
],
"repo": "icarephone/upptime",
"url": "https://github.com/icarephone/upptime/issues/793",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2577217922 | Docs: add in the mailmerge scaffold & scripts used for MaDs & document their usage
Adds a new mailmerge directory with the scripts used for the MaDs family allocation notification emails in 2024, alongside documentation on how to use them.
@Gum-Joe could you include this folder in .dockerignore just cause we don't need it there?
@Gum-Joe could you include this folder in .dockerignore just cause we don't need it there?
Fixed in b0ab264b693e0b8f16d1304482dd528436d9a731 @cybercoder-naj
Updates applied, re-review requested
| gharchive/pull-request | 2024-10-10T00:00:18 | 2025-04-01T06:39:00.453429 | {
"authors": [
"Gum-Joe",
"cybercoder-naj"
],
"repo": "icdocsoc/mad3",
"url": "https://github.com/icdocsoc/mad3/pull/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1631545872 | fix: sass-loader and less-loader can't be resolved
Background
@ice/pkg-plugin-docusaurus uses the community npm packages docusaurus-plugin-less and docusaurus-plugin-sass to handle less and sass respectively. However, we launch the docusaurus local preview server in a child process and set its cwd to rootDir:
https://github.com/ice-lab/icepkg/blob/bca916d4093eda6f21995f8cea514fec100bbcc4/packages/plugin-docusaurus/src/doc.mts#L41
As a result, the following code fails to resolve sass-loader when it requires it (because module resolution starts from the project root's node_modules and walks up from there):
https://github.com/rlamana/docusaurus-plugin-sass/blob/d97f88c3e24fd16477eac6d8f56a2703425ece9a/docusaurus-plugin-sass.js#L31
Then an error like this is reported:
Although a PR has already been submitted, a new version has never been released:
https://github.com/rlamana/docusaurus-plugin-sass/pull/25
Therefore, we are considering building the webpack rules for sass and less directly into the docusaurus plugin.
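A rough sketch of what inlining those rules could look like in the plugin's configureWebpack hook; the loader choice and options here are illustrative only, not the final implementation:

```js
// Sketch: resolve the loaders relative to this package instead of the user's
// project root, so they can always be found. Options are illustrative only.
module.exports = function stylesPlugin() {
  return {
    name: 'pkg-plugin-docusaurus-styles',
    configureWebpack() {
      return {
        module: {
          rules: [
            { test: /\.less$/, use: ['style-loader', 'css-loader', require.resolve('less-loader')] },
            { test: /\.s[ca]ss$/, use: ['style-loader', 'css-loader', require.resolve('sass-loader')] },
          ],
        },
      }
    },
  }
}
```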
revert this PR after https://github.com/rlamana/docusaurus-plugin-sass/pull/25 was accepted
| gharchive/pull-request | 2023-03-20T07:28:43 | 2025-04-01T06:39:00.457229 | {
"authors": [
"luhc228",
"wssgcg1213"
],
"repo": "ice-lab/icepkg",
"url": "https://github.com/ice-lab/icepkg/pull/503",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
179136441 | "Comparison method violates its general contract" when executing the example
Hi,
I tried to run the example in Hadoop local mode (Ubuntu 16.04).
But there was an exception:
Verify Overlap: Exception in thread "main" java.io.IOException: Job failed!
Later I checked brush.details.log. It showed such message:
```
java.lang.IllegalArgumentException: Comparison method violates its general contract!
    at java.util.TimSort.mergeHi(TimSort.java:868)
    at java.util.TimSort.mergeAt(TimSort.java:485)
    at java.util.TimSort.mergeCollapse(TimSort.java:410)
    at java.util.TimSort.sort(TimSort.java:214)
    at java.util.TimSort.sort(TimSort.java:173)
    at java.util.Arrays.sort(Arrays.java:659)
    at java.util.Collections.sort(Collections.java:217)
    at Brush.VerifyOverlap$VerifyOverlapReducer.reduce(VerifyOverlap.java:252)
    at Brush.VerifyOverlap$VerifyOverlapReducer.reduce(VerifyOverlap.java:1)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:260)
```
I'm not sure whether it was a hadoop configuration mistake or a java version issue.
But I've tried both java 7 and java 8 and got the same result.
Can you provide the software environment in which your experiments were run?
Same error here. Tested on ubuntu cluster. It used to work on earlier versions, maybe java 6?
```
uname -a
Linux clu04 3.2.0-120-generic #163-Ubuntu SMP Tue Dec 20 15:12:28 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

hadoop version
Hadoop 2.6.0.2.2.4.2-2
Subversion git@github.com:hortonworks/hadoop.git -r 22a563ebe448969d07902aed869ac13c652b2872
Compiled by jenkins on 2015-03-31T19:40Z
Compiled with protoc 2.5.0
From source with checksum b3481c2cdbe2d181f2621331926e267
This command was run using /usr/hdp/2.2.4.2-2/hadoop/hadoop-common-2.6.0.2.2.4.2-2.jar
```
hadoop log:
Log Type: stderr
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 790
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/filecache/14/mapreduce.tar.gz/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/hadoop/yarn/local/usercache/sbarberakis/appcache/application_1486050547626_0009/filecache/10/job.jar/job.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
log4j:WARN No appenders could be found for logger (org.apache.hadoop.ipc.Server).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Log Type: stdout
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 0
Log Type: syslog
Log Upload Time: Fri Apr 21 20:47:23 +0300 2017
Log Length: 26679123
Showing 4096 bytes of 26679123 total. Click here for the full log.
2017-04-21 20:47:15,887 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml_tmp
2017-04-21 20:47:15,919 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009.summary_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009.summary
2017-04-21 20:47:15,946 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009_conf.xml
2017-04-21 20:47:15,965 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009-1492794892265-sbarberakis-MatchPrefix+SRR001665_1_k21_asm.tmp%2F00%2Dpreprocess+-1492796834354-50-0-FAILED-default-1492794896416.jhist_tmp to hdfs://clu01.softnet.tuc.gr:8020/mr-history/tmp/sbarberakis/job_1486050547626_0009-1492794892265-sbarberakis-MatchPrefix+SRR001665_1_k21_asm.tmp%2F00%2Dpreprocess+-1492796834354-50-0-FAILED-default-1492794896416.jhist
2017-04-21 20:47:15,979 INFO [Thread-465] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2017-04-21 20:47:15,981 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Task failed task_1486050547626_0009_r_000008
Job failed as tasks failed. failedMaps:0 failedReduces:1
2017-04-21 20:47:15,985 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://clu01.softnet.tuc.gr:19888/jobhistory/job/job_1486050547626_0009
2017-04-21 20:47:15,998 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2017-04-21 20:47:16,189 INFO [IPC Server handler 23 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: MapCompletionEvents request from attempt_1486050547626_0009_r_000034_3. startIndex 0 maxEvents 10000
2017-04-21 20:47:16,326 INFO [Socket Reader #1 for port 38026] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1486050547626_0009 (auth:SIMPLE)
2017-04-21 20:47:16,344 INFO [IPC Server handler 8 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1486050547626_0009_r_9895604650191 asked for a task
2017-04-21 20:47:16,344 INFO [IPC Server handler 8 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1486050547626_0009_r_9895604650191 is invalid and will be killed.
2017-04-21 20:47:16,978 INFO [IPC Server handler 11 on 38026] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1486050547626_0009_r_000034_3 is : 0.0
2017-04-21 20:47:16,999 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:6 AssignedMaps:0 AssignedReds:37 CompletedMaps:50 CompletedReds:1 ContAlloc:214 ContRel:42 HostLocal:45 RackLocal:5
2017-04-21 20:47:17,000 INFO [Thread-465] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://clu01.softnet.tuc.gr:8020 /user/sbarberakis/.staging/job_1486050547626_0009
2017-04-21 20:47:17,016 INFO [Thread-465] org.apache.hadoop.ipc.Server: Stopping server on 38026
2017-04-21 20:47:17,016 INFO [IPC Server listener on 38026] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 38026
2017-04-21 20:47:17,016 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2017-04-21 20:47:17,016 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
brush.details.log.txt
I just tried with jdk1.6.0_91, same result
Hi,
According to the documentation we could find right now, the error "Comparison method violates its general contract" comes from the API changes in Java version 7 (or since a specific update of Java 6).
Please try to add the following arguments while running CloudBrush under Hadoop.
```
-javaopts -Djava.util.Arrays.useLegacyMergeSort=True
```
The original Hadoop environment in which we built the program is Hadoop 0.20.203 with Java 1.6 (the exact version is gone, sorry about that). On the other hand, the JAR file attached to the project is a RUNNABLE JAR built in the above specific environment. We strongly suggest that users rebuild the JAR file from source code according to the Hadoop version they choose, to avoid incompatible changes arising from the API changes between Hadoop 1 and 2.
Sincerely,
Antony.
Hi Antony,
Thanks for your reply.
Running:
```
hadoop jar CloudBrush.jar -javaopts -Djava.util.Arrays.useLegacyMergeSort=True -reads Ec10k -asm Ec10k_Brush -k 21 -readlen 36
```
didn't help, but the best way is to rebuild the project indeed.
However, I have failed to do so, since I am not really familiar with the native Java build environment, and there isn't any additional MANIFEST.MF or instructions included. Therefore I opened a separate issue about compilation. Any help would be much appreciated.
| gharchive/issue | 2016-09-26T04:16:17 | 2025-04-01T06:39:00.479050 | {
"authors": [
"chefarov",
"jacky6016",
"moneycat"
],
"repo": "ice91/CloudBrush",
"url": "https://github.com/ice91/CloudBrush/issues/5",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2386422169 | Fix ecoregions map selection
The default ecoregion selected should be North Sea, and the selection should be visible when the map loads. The map selection also needs to be connected to the dropdown selection box.
It is also sometimes possible for more than 1 region to be selected in the dropdown box
| gharchive/issue | 2024-07-02T14:27:44 | 2025-04-01T06:39:00.504909 | {
"authors": [
"Neilmagi",
"lucalamoni"
],
"repo": "ices-tools-dev/fisheriesXplorer",
"url": "https://github.com/ices-tools-dev/fisheriesXplorer/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
217233241 | File / folder selection in tree view
Hi, would it be possible to allow multiple selection of files and folders? That would be much needed when cherrypicking remote files / folders for mass downloading.
Thank you.
+1
The new version already supports it.
| gharchive/issue | 2017-03-27T12:43:07 | 2025-04-01T06:39:00.508351 | {
"authors": [
"Deathturtle",
"icetee",
"niccolomineo"
],
"repo": "icetee/remote-ftp",
"url": "https://github.com/icetee/remote-ftp/issues/746",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1501607267 | Custom typings or type definition
Hi!
I'm finding that in both my first and second projects I've had issues with transferring the Date type over tRPC. Is there a way to define the type to be Date rather than string?
(Apologies for the autolabel)
It seems that the issue was with how I was integrating superjson into my client, and therefore I was not getting correct types.
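For anyone hitting the same thing, the fix is roughly to register superjson as the tRPC transformer on both the server and the client; a sketch (exact option placement depends on the tRPC and trpc-sveltekit versions):

```js
// Sketch only: superjson must be registered on both sides,
// otherwise Date values arrive as plain strings.
import { initTRPC } from '@trpc/server'
import superjson from 'superjson'

export const t = initTRPC.create({ transformer: superjson })

// ...and on the client (option name/position may differ by version):
// createTRPCClient<Router>({ transformer: superjson, ... })
```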
| gharchive/issue | 2022-12-17T21:26:47 | 2025-04-01T06:39:00.514345 | {
"authors": [
"Indeedornot"
],
"repo": "icflorescu/trpc-sveltekit",
"url": "https://github.com/icflorescu/trpc-sveltekit/issues/48",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
186126053 | Communicate errors and warnings to user as appropriate
Instead of just spewing everything into the console (which we should still do), we should figure out what to display to the user when various error/warning conditions happen.
34df742
| gharchive/issue | 2016-10-30T09:31:09 | 2025-04-01T06:39:00.542691 | {
"authors": [
"sdukhovni"
],
"repo": "ichung/kataomoi",
"url": "https://github.com/ichung/kataomoi/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2661497824 | feat: stellar intent contract
Description:
Commit Message
type: commit message
see the guidelines for commit messages.
Changelog Entry
version: <log entry>
Checklist:
[ ] I have performed a self-review of my own code
[ ] I have documented my code in accordance with the documentation guidelines
[ ] My changes generate no new warnings
[ ] I have added tests that prove my fix is effective or that my feature works
[ ] I have run the unit tests
[ ] I only have one commit (if not, squash them into one commit).
[ ] I have a descriptive commit message that adheres to the commit message guidelines
Please review the CONTRIBUTING.md file for detailed contributing guidelines.
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 89.51%. Comparing base (c7dae72) to head (3ecba4e).
Report is 3 commits behind head on main.
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main       #9      +/-   ##
============================================
- Coverage   89.62%   89.51%   -0.12%
  Complexity     77       77
============================================
  Files          39       39
  Lines        2275     2307      +32
  Branches       37       37
============================================
+ Hits         2039     2065      +26
- Misses        219      225       +6
  Partials       17       17
```
| Flag | Coverage Δ | |
|---|---|---|
| solidity | 86.11% <ø> (-1.39%) | :arrow_down: |
Flags with carried forward coverage won't be shown. Click here to find out more.
see 4 files with indirect coverage changes
| gharchive/pull-request | 2024-11-15T10:07:26 | 2025-04-01T06:39:00.553023 | {
"authors": [
"bishalbikram",
"codecov-commenter"
],
"repo": "icon-project/intent-contracts",
"url": "https://github.com/icon-project/intent-contracts/pull/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
991227585 | Implicit int cast to BigInteger during interscore calls isn't supported
In the current goloop version, a method with an int return type from a foreign SCORE is implicitly converted to BigInteger through an interscore call.
Related issue talking about this behavior : https://github.com/icon-project/goloop/issues/65
However, that behavior isn't implemented in javaee-unittest and makes the tests fail with a ClassCastException:
this.decimals = ((BigInteger) Context.call(token_addr, "decimals")).intValue();
Such line fails to run because the "decimals" method from IRC2 returns an int natively. The unittest package uses the int return type instead of the BigInteger one.
How hard would it be to fix javaee-unittest in order to copy the goloop engine behavior ?
You might have noticed, I have modified the IRC2 javaee-tokens RI for decimals to have a return type of BigInteger.
https://github.com/sink772/javaee-tokens/commit/a4cad6c4c98a5307fd5a0b7e12e5e032cf64146e
This is just a workaround solution, but I believe you don't need to use int type here if you want to get the value via inter-call.
I think it's better to use BigInterger in all cases for external readonly methods.
| gharchive/issue | 2021-09-08T14:51:19 | 2025-04-01T06:39:00.557064 | {
"authors": [
"ICONationDevTeam",
"sink772"
],
"repo": "icon-project/javaee-unittest",
"url": "https://github.com/icon-project/javaee-unittest/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
82497060 | Unhandled WPF Exception at ICSharpCode.AvalonEdit.Utils.CharRope
See http://community.sharpdevelop.net/forums/t/22188.aspx for full exception output.
SharpDevelop Version : 5.1.0.5084-Beta 2-c5791b46
.NET Version : 4.5.50938
OS Version : Microsoft Windows NT 6.1.7601 Service Pack 1
Current culture : English (United States) (en-US)
Running under WOW6432, processor architecture: x86-64
Working Set Memory : 512768kb
GC Heap Memory : 209302kb
Unhandled WPF exception
Exception thrown:
System.OverflowException: Arithmetic operation resulted in an overflow.
at ICSharpCode.AvalonEdit.Utils.CharRope.ToString(Rope`1 rope, Int32 startIndex, Int32 length)
at ICSharpCode.AvalonEdit.Document.TextDocument.GetText(Int32 offset, Int32 length)
at CSharpBinding.Parser.TParser.AddCommentTags(SyntaxTree cu, IList`1 tagComments, ITextSource fileContent, FileName fileName, IDocument& document)
at CSharpBinding.Parser.TParser.Parse(FileName fileName, ITextSource fileContent, Boolean fullParseInformationRequested, IProject parentProject, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Parser.ParserServiceEntry.ParseWithExceptionHandling(ITextSource fileContent, Boolean fullParseInformationRequested, IProject project, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Parser.ParserServiceEntry.DoParse(ITextSource fileContent, IProject parentProject, Boolean fullParseInformationRequested, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Parser.ParserServiceEntry.Parse(ITextSource fileContent, IProject parentProject, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Parser.ParserService.Resolve(FileName fileName, TextLocation location, ITextSource fileContent, ICompilation compilation, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Parser.ParserService.Resolve(ITextEditor editor, TextLocation location, ICompilation compilation, CancellationToken cancellationToken)
at ICSharpCode.SharpDevelop.Editor.ToolTipRequestEventArgs.get_ResolveResult()
at ICSharpCode.AvalonEdit.AddIn.XmlDoc.XmlDocTooltipProvider.HandleToolTipRequest(ToolTipRequestEventArgs e)
at ICSharpCode.SharpDevelop.Editor.ToolTipRequestService.RequestToolTip(ToolTipRequestEventArgs e)
at ICSharpCode.AvalonEdit.AddIn.CodeEditorView.TextEditorMouseHover(Object sender, MouseEventArgs e)
at System.Windows.Input.MouseEventArgs.InvokeEventHandler(Delegate genericHandler, Object genericTarget)
at System.Windows.RoutedEventArgs.InvokeHandler(Delegate handler, Object target)
at System.Windows.RoutedEventHandlerInfo.InvokeHandler(Object target, RoutedEventArgs routedEventArgs)
at System.Windows.EventRoute.InvokeHandlersImpl(Object source, RoutedEventArgs args, Boolean reRaised)
at System.Windows.UIElement.RaiseEventImpl(DependencyObject sender, RoutedEventArgs args)
at System.Windows.UIElement.RaiseEvent(RoutedEventArgs e)
at ICSharpCode.AvalonEdit.Rendering.TextView.RaiseHoverEventPair(MouseEventArgs e, RoutedEvent tunnelingEvent, RoutedEvent bubblingEvent)
at ICSharpCode.AvalonEdit.Rendering.TextView.<.ctor>b__0(Object sender, MouseEventArgs e)
at ICSharpCode.AvalonEdit.Rendering.MouseHoverLogic.OnMouseHover(MouseEventArgs e)
at ICSharpCode.AvalonEdit.Rendering.MouseHoverLogic.OnMouseHoverTimerElapsed(Object sender, EventArgs e)
at System.Windows.Threading.DispatcherTimer.FireTick(Object unused)
at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
at MS.Internal.Threading.ExceptionFilterHelper.TryCatchWhen(Object source, Delegate method, Object args, Int32 numArgs, Delegate catchHandler)
Most likely this was an error in SharpDevelop's ParserService. This is impossible to fix on AvalonEdit's end.
| gharchive/issue | 2015-05-29T16:36:10 | 2025-04-01T06:39:00.563362 | {
"authors": [
"Rpinski",
"siegfriedpammer"
],
"repo": "icsharpcode/AvalonEdit",
"url": "https://github.com/icsharpcode/AvalonEdit/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
185311153 | VS15 Update 3 Support
Hi,
It's said that version 4.4.0.0 works under at least VS2015 Update 2, but although I've downloaded and installed Update 3, it still does not work. It's listed under Tools => Extensions and Updates, but I cannot use it at all. Is it a bug, or am I missing something?
Please try with recently released VS 2017 RC. From what I see RE 4.4 works there. If not, please reopen this issue again.
Thanks for your reply, but I don't have a chance to try it in VS2017 RC; I need it to work on VS2015 Update 3. But I guess it won't be possible.
| gharchive/issue | 2016-10-26T06:54:40 | 2025-04-01T06:39:00.574204 | {
"authors": [
"Rpinski",
"biliciburak"
],
"repo": "icsharpcode/RefactoringEssentials",
"url": "https://github.com/icsharpcode/RefactoringEssentials/issues/261",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
24862859 | Create/Extract NTFS Extra Data field for Timestamp etc
SD-1800, originally created on 12/20/2010 01:58:10 by David Pierson
We should have the ability to create the NTFS Extra Data field (0x000a)
via an option in FastZip, and an easy way to create it in ZipFile and
ZipOutputStream.
This would hold the NTFS Last Modified timestamp, avoiding the problem
of the 2 second granularity of the standard entry timestamp.
Relevant forum threads:
http://community.sharpdevelop.net/forums/t/12411.aspx
http://community.sharpdevelop.net/forums/t/14178.aspx
I see that there is already an 'NTTaggedData' class in the extra data handling that appears to handle the NTFS extra data block, but the only place it's referenced from the library is the ifdefed out block at:
https://github.com/icsharpcode/SharpZipLib/blob/fed3bd219f8bd2bac5287b6217b9c4c384bed35f/src/ICSharpCode.SharpZipLib/Zip/ZipEntry.cs#L1103
There is a comment there about disabling it by default to match InfoZip, but I'm not sure what the logic should be at this point - is there a reason not to use the data if it's present in the zip? (Or maybe restrict it to extraction on Windows hosts?)
No idea. I have had it on my agenda to look through the extra data fields...
@Numpsy I added that #if RESPECT_NT_TIMESTAMP a long time ago for an admittedly rather specific use case:
Zero Install is a cross-platform package manager. The Linux version uses CLIs like tar and unzip (InfoZIP) to extract archives. Zero Install for Windows uses SharpZipLib. After extracting archives (ZIP, TAR, etc.) both versions verify the files using checksums which include the timestamps. Therefore I needed SharpZipLib to produce identical timestamps to InfoZIP.
Perhaps a better, although still rather clumsy, alternative to the #if block might be a public static bool config toggle.
I'm not sure what the default should be as far as extraction goes (as far as FastZip goes, does it need an additional option on top of 'restore timestamps'? don't recall apps like 7-zip offering that).
Saying that, not sure what's supposed to happen if an entry contains both NTFS and Unix extra datas (I don't think anything actually prevents both from existing, even if that would be unusual?)
| gharchive/issue | 2013-12-29T17:24:40 | 2025-04-01T06:39:00.580652 | {
"authors": [
"Numpsy",
"bastianeicher",
"dgrunwald",
"piksel"
],
"repo": "icsharpcode/SharpZipLib",
"url": "https://github.com/icsharpcode/SharpZipLib/issues/40",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2555641905 | Fix: Default Colors deplete too Rapidly
Summary
This fix adjusts color selection by treating the earliest used default color as unused and reusing it if every default color is in use.
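A small sketch of the selection rule described above (not the actual AntAlmanac code): prefer an unused default color, and only when every default color is taken, reuse the one that was assigned earliest.

```js
// Sketch of the described rule, not the real implementation.
function pickCourseColor(defaultColors, usedColorsInOrder) {
  const unused = defaultColors.find((c) => !usedColorsInOrder.includes(c))
  if (unused) return unused
  // All defaults are in use: treat the earliest-assigned one as available again.
  return usedColorsInOrder.find((c) => defaultColors.includes(c))
}
```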
Test Plan
Ensure that the assignment works properly when classes are deleted out of order.
Ensure assignment works when users select changing the colors of courses, particularly to a default color.
Issues
Closes #647
This has been resolved by #1006
| gharchive/pull-request | 2024-09-30T06:01:43 | 2025-04-01T06:39:00.582722 | {
"authors": [
"MinhxNguyen7",
"nav800"
],
"repo": "icssc/AntAlmanac",
"url": "https://github.com/icssc/AntAlmanac/pull/1009",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
999971978 | Some latest build may return 0 when using with --help
With my old build, --help would return 2. Somehow it returns zero now (and the tests fail).
Behavior changed due to upstream fix: https://github.com/golang/go/commit/dcf0929de6a12103a8fd7097abd6e797188c366d and https://github.com/golang/go/issues/37533
| gharchive/issue | 2021-09-18T08:33:14 | 2025-04-01T06:39:00.591096 | {
"authors": [
"icy"
],
"repo": "icy/genvsub",
"url": "https://github.com/icy/genvsub/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1659809048 | Markdown renderer
A renderer for markdown (markdown -> markdown, instead of just markdown -> HTML). This would be useful if you just want to insert/edit/remove parts of a markdown document (e.g. setting a default language for fenced code blocks).
Just a note on that use case: Markdown syntax has quite a lot of ambiguities and alternative styles for expressing the same thing. So the process parse -> mutate AST -> stringify can introduce syntactic changes besides the ones that you actually intend to make. Maybe this doesn't matter much for your use case, but it's an annoyance (at least) for many applications.
To avoid this you would need some way to convey information about syntax variations from the parser to the stringifier. Currently the AST is not capable of that.
I see what you mean. For my use case I'm setting the default language for doc comments in Crystal to crystal, but that might cause issues if someone uses a tilde-based code block instead of a backtick one. At the same time, I'm not sure if it's worth implementing support for the variants, as that specific edge case is unlikely.
| gharchive/issue | 2023-04-09T09:15:15 | 2025-04-01T06:39:00.593241 | {
"authors": [
"devnote-dev",
"straight-shoota"
],
"repo": "icyleaf/markd",
"url": "https://github.com/icyleaf/markd/issues/57",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2157861873 | Compilation fails
in ./src/components/tool/AStepItem.vue?vue&type=style&index=0&id=45f62939&lang=less&scoped=true
Syntax Error: TypeError: Cannot set properties of undefined (setting 'parent')
@ ./node_modules/vue-style-loader??ref--11-oneOf-1-0!./node_modules/css-loader/dist/cjs.js??ref--11-oneOf-1-1!./node_modules/vue-loader/lib/loaders/stylePostLoader.js!./node_modules/postcss-loader/src??ref--11-oneOf-1-2!./node_modules/less-loader/dist/cjs.js??ref--11-oneOf-1-3!./node_modules/style-resources-loader/lib??ref--11-oneOf-1-4!./node_modules/cache-loader/dist/cjs.js??ref--1-0!./node_modules/vue-loader/lib??vue-loader-options!./src/components/tool/AStepItem.vue?vue&type=style&index=0&id=45f62939&lang=less&scoped=true 4:14-561 15:3-20:5 16:22-569
@ ./src/components/tool/AStepItem.vue?vue&type=style&index=0&id=45f62939&lang=less&scoped=true
@ ./src/components/tool/AStepItem.vue
@ ./node_modules/cache-loader/dist/cjs.js??ref--13-0!./node_modules/babel-loader/lib!./node_modules/cache-loader/dist/cjs.js??ref--1-0!./node_modules/vue-loader/lib??vue-loader-options!./src/pages/result/Success.vue?vue&type=script&lang=js
@ ./src/pages/result/Success.vue?vue&type=script&lang=js
@ ./src/pages/result/Success.vue
@ ./src/router/async/router.map.js
@ ./src/router/async/config.async.js
@ ./src/router/index.js
@ ./src/main.js
@ multi (webpack)-dev-server/client?http://192.168.31.99:8080&sockPath=/sockjs-node (webpack)/hot/dev-server.js babel-polyfill whatwg-fetch ./src/main.js
My build reports this same error when compiling
| gharchive/issue | 2024-02-28T00:42:41 | 2025-04-01T06:39:00.603490 | {
"authors": [
"feelingzhn",
"zjf121"
],
"repo": "iczer/vue-antd-admin",
"url": "https://github.com/iczer/vue-antd-admin/issues/353",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
219955506 | Underdefined inputs while using PCA
When using the PCA transform for sampling, the user can specify the number of transform variables to use.
If the user chooses too few transform variables, there may not be enough to provide values to all the original input space, resulting in zeroes for all under-represented original space variables.
This can result in nonsensical inputs being provided to the model (like a total cross section of zero).
We don't have anything in place to warn the user that this phenomenon is happening.
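To make the failure mode concrete, here is a minimal sketch — not RAVEN's actual code, and every name below is invented — of the kind of a-posteriori check that could warn about under-represented variables:

```python
import numpy as np

def warn_underdefined_inputs(loading_matrix, variable_names, n_latent):
    """Flag original-space variables that get no contribution from the
    retained latent (transform) variables and would therefore sample as 0."""
    truncated = loading_matrix[:, :n_latent]                # keep only retained columns
    uncovered = np.all(np.isclose(truncated, 0.0), axis=1)  # rows with no contribution
    for name, missing in zip(variable_names, uncovered):
        if missing:
            print(f"Warning: '{name}' is not represented by the first "
                  f"{n_latent} transform variables; its sampled value will be 0.")
```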
For Change Control Board: Issue Review
This review should occur before any development is performed as a response to this issue.
[ ] 1. Is it tagged with a type: defect or improvement?
[ ] 2. Is it tagged with a priority: critical, normal or minor?
[ ] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
[ ] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
[ ] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
For Change Control Board: Issue Closure
This review should occur when the issue is imminently going to be closed.
[ ] 1. If the issue is a defect, is the defect fixed?
[ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
[ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
[ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)?
[ ] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
From wangc on gitlab:
Currently, we do not have an error checking for that. I think if the PCA transformation leads to unrealistic inputs, the application code should catch that. This is because the requirements of inputs are usually determined via the application code, not RAVEN. In addition, if the user chooses too few transform variables, one way we can do is to implement the posteriori error checking, but this needs further discussion. I think this issue is related to the PCA method itself, not the algorithm. I suggest to change the label from defect to improvement. @talbpaul
| gharchive/issue | 2017-04-06T16:26:13 | 2025-04-01T06:39:00.679433 | {
"authors": [
"PaulTalbot-INL"
],
"repo": "idaholab/raven",
"url": "https://github.com/idaholab/raven/issues/69",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
176174171 | cannot find React
please modify this line:
import { Component, PropTypes } from 'react';
to
import React,{ Component, PropTypes } from 'react';
in Button.js
It already says that: https://github.com/ide/react-native-button/blob/f72b1c2596b21bed9e93e634d9f7a6d5fd91e797/Button.js#L1
| gharchive/issue | 2016-09-10T11:02:02 | 2025-04-01T06:39:00.686716 | {
"authors": [
"HananeAlSamrout",
"ide"
],
"repo": "ide/react-native-button",
"url": "https://github.com/ide/react-native-button/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1819243465 | Issue with lottie on IOS
Hi, I'm facing an issue related only to the iOS build
dyld[39830]: Symbol not found: _$s6Lottie0A18BackgroundBehaviorO15pauseAndRestoreyA2CmFWC
Referenced from: <82425C10-5ADE-391C-A40A-5AC87F605028> /Users/shayantsital/Library/Developer/CoreSimulator/Devices/890D148E-BEDE-4EBC-A711-88FE6B6A8ADC/data/Containers/Bundle/Application/A324E85D-E968-4B06-8BDB-6F4C6CF9BC77/Runner.app/Frameworks/idenfyviews.framework/idenfyviews
Expected in: <0A8C52DF-5A1D-36CA-9680-C07B5A10339D> /Users/shayantsital/Library/Developer/CoreSimulator/Devices/890D148E-BEDE-4EBC-A711-88FE6B6A8ADC/data/Containers/Bundle/Application/A324E85D-E968-4B06-8BDB-6F4C6CF9BC77/Runner.app/Frameworks/Lottie.framework/Lottie
Message from debugger: Terminated due to signal 6
I get this error every time I try to run the app (iOS). The runner fails immediately after the Xcode build.
I even added the post install script
post_install do |installer|
installer.pods_project.targets.each do |target|
if target.name == "ZIPFoundation" || target.name == "lottie-ios"
target.build_configurations.each do |config|
config.build_settings['BUILD_LIBRARY_FOR_DISTRIBUTION'] = 'YES'
end
end
if target.name == "idenfy_sdk_flutter"
target.build_configurations.each do |config|
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
config.build_settings['ENABLE_BITCODE'] = 'NO'
end
end
target.build_configurations.each do |config|
config.build_settings['ENABLE_BITCODE'] = 'NO'
if Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET']) < Gem::Version.new('11.0')
config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = '11.0'
end
config.build_settings['GCC_PREPROCESSOR_DEFINITIONS'] ||= [
'$(inherited)',
## dart: PermissionGroup.camera
'PERMISSION_CAMERA=1',
# dart: PermissionGroup. photos
'PERMISSION_PHOTOS=1',
'PERMISSION_PHOTOS_ADD_ONLY=1',
# dart: [PermissionGroup. location, PermissionGroup. locationAlways, PermissionGroup. locationWhenInUse]
'PERMISSION_LOCATION=1',
# dart: PermissionGroup.mediaLibrary
'PERMISSION_MEDIA_LIBRARY=1'
]
end
flutter_additional_ios_build_settings(target)
end
end
But running with or without the above code results in the same error in Xcode.
pod version
pod 'iDenfySDK/iDenfyLiveness', '8.1.0'
It seems to be a pod related issue. Have you updated the SDK to a newer version, or is it a clean new install?
Maybe a clean pod deintegration and install would help.
Please, try out our sample project in this repository and let us know if the issue persists.
I've tried deintegration, and also removed and reinstalled all pods twice. But I still get the error.
Please try out our sample project and check if the issue can be replicated on your setup
Hi,
I've tried the sample project and it did work, so what I did was replace the Podfile with the one in the example. After validating, it seems to work in our app.
Thank you!
| gharchive/issue | 2023-07-24T22:31:54 | 2025-04-01T06:39:00.691391 | {
"authors": [
"shayant98",
"viktasidenfy"
],
"repo": "idenfy/FlutterSDK",
"url": "https://github.com/idenfy/FlutterSDK/issues/12",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
145707498 | Additional Scrolling Methods
Inspired by https://github.com/yiminghe/dom-scroll-into-view
You can't use this library since smooth-scrollbar uses animation for scrolling.
The only additional functionality is to add an instance method that accepts a node with some useful parameters to improve user experience (onlyScrollIfNeeded, offsetTop etc. -> see dom-scroll-into-view).
Would be very cool when this library would support this!
You mean I need to add an instance method that acts like scroll-into-view? In my view this should be done by users: you get the element and measure its position, then scroll to it through the instance#scrollTo method. I want this library to focus on scrolling itself.
Yeah, I'm currently working on figuring out how to handle this.
It's a little bit complicated and messes up my component with scrolling logic.
The Element.scrollIntoView method supported by the browser does not work here, so the scrollbar component should take over this job somehow, imo.
For example: How do you want to solve anchor problems? You have to work with different data and create a lot of code:
scrollbar.getSize() -> .content and .container
targetNode.offsetTop/offsetLeft
Additionally there is no method to fetch the current scrolling offset through the instance API. I listen with the "addListener" method and save the data from the event somewhere temporarily.
Anchor is a big problem that I don't know how to solve properly. As for the current offset, you can get it from the instance.offset property (sorry for the lack of API docs).
@aight8 Hey buddy, I've added scrollIntoView() method support in version 5.3.0, check the documents here and give me a feedback :)
Sorry, I didn't see that post.
Nice! I found some differences from my current implementation, which has slightly different behavior.
When the target element is fully inside of the container: do nothing
When the target element is partially outside of the container: Scroll the outside part into the container + offsetBottom px
Same for the top but then use offsetTop as padding px.
When the alignWithTop option is set, try to align the target element to the top + offsetTop px (when possible). That's useful for anchors.
This code is only for vertical scrolling (but that was my needs):
let scrollbar = this.getInstance();
const offsetTop = config.offsetTop || 0;
const offsetBottom = config.offsetBottom || 0;
const alignWithTop = config.alignWithTop || false;
let nodeOffset = {
y: node.offsetTop
};
let sbOffset = scrollbar.offset;
let { container } = scrollbar.getSize();
let nodeRect = node.getBoundingClientRect();
let nodeRectStart = nodeOffset.y;
let nodeRectEnd = nodeRectStart + nodeRect.height;
let vpRectStart = sbOffset.y;
let vpRectEnd = vpRectStart + container.height;
let newScrollbarY;
if (alignWithTop === true) {
newScrollbarY = node.offsetTop - offsetTop;
} else {
if (nodeRectStart < (vpRectStart + offsetTop)) {
// element it partially out of view on top OR to little top offset
newScrollbarY = node.offsetTop - offsetTop;
}
if (nodeRectEnd > (vpRectEnd - offsetBottom)) {
// element it partially out of view on bottom OR to little bottom offset
newScrollbarY = nodeRectEnd - container.height + offsetBottom;
}
}
if (newScrollbarY !== undefined) {
scrollbar.scrollTo(0, newScrollbarY, 200);
}
Hmm...I checked Element.scrollIntoViewIfNeeded documents again, it appears that I misunderstood the behavior.
I can, however, do as much as the dom-scroll-into-view plugin does. But I'm afraid this scrollbar plugin would become so heavy that I'd have to call it a library.
I am busy with my school courses currently, if you insist that we should completely support this feature, you can make some PRs and I'll check it once I have time :)
Thanks in advance.
I have the same problem/needs and this functionality would be awesome !
| gharchive/issue | 2016-04-04T14:52:12 | 2025-04-01T06:39:00.734901 | {
"authors": [
"HectorLS",
"aight8",
"idiotWu"
],
"repo": "idiotWu/smooth-scrollbar",
"url": "https://github.com/idiotWu/smooth-scrollbar/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
744614357 | create modularity
have it in master to check it with ansible
too much delay, closing PR
| gharchive/pull-request | 2020-11-17T10:35:57 | 2025-04-01T06:39:00.736674 | {
"authors": [
"DirectorSloan"
],
"repo": "idiv-biodiversity/ansible-role-nrpe",
"url": "https://github.com/idiv-biodiversity/ansible-role-nrpe/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
976114067 | format docs for idrall
Cosmetic tweaks to the Idrall docs
Let me know if you'd prefer the old explicit hyperlinks
Thanks @joelberkeley!
| gharchive/pull-request | 2021-08-21T11:20:10 | 2025-04-01T06:39:00.748285 | {
"authors": [
"Z-snails",
"joelberkeley"
],
"repo": "idris-community/inigo",
"url": "https://github.com/idris-community/inigo/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
701571014 | Some questions about searching
Hi, first of all, thanks for this interesting work. I have some questions about the searching method and results in this paper.
Could I understand the least square regression as this function: f(delta(depth), delta(width)) = delta(A)? Is the situation of different kernel sizes and expand ratios considered in the "Distillation" process?
In your settings, the search space for kernel size is {3,5}, the DW-Block expand ratio is {3,6,9}, and the BL-Block expand ratio is {0.5, 0.25}. However, in the resulting GENet the kernel sizes are all 3, the DW-Block expand ratios are all 3, and the BL-Block expand ratios are all 0.25. This phenomenon is really weird. I'm looking forward to more analysis here.
Thx.
Hi LicharYuan,
Thanks for the feedback!
For Q1, yes, you are right. For different kernel size and different expansion ratio, we consider them as different block types, which means that, they have their own least square regression coefficients respectively.
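Just to illustrate that reading (a toy sketch, not the paper's code — the function and variable names below are made up), a per-block-type fit could look like:

```python
import numpy as np

def fit_block_coefficients(delta_depths, delta_widths, delta_accs):
    """Least-squares fit for one block type: predict the accuracy change
    delta_A from (delta_depth, delta_width) of that block."""
    X = np.column_stack([delta_depths, delta_widths])   # predictors per sampled network
    y = np.asarray(delta_accs)                          # observed accuracy changes
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares solution
    return coeffs                                       # delta_A ~= coeffs @ [dd, dw]

# Each (kernel size, expansion ratio) combination would get its own call,
# i.e. its own pair of coefficients.
```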
For Q2, we agree that the searching results are surprising to us too. Our conjecture is that k=3 is better optimized in CUDA. For comparison, we trained many k=5 networks but did not obtain better results. For BL-Blocks, the expansion ratios are all 0.25. We believe there should be some connection between our searching results and the manually designed ones, which also use 0.25 (in most networks). In our unpublished experiments, we tried to manually set ratio=1/6 in BL-Blocks but never obtained better results. So this phenomenon is not simply due to search bias but due to some deeper, unknown reason that makes 0.25 a preferred value.
Yes, all k=3 is reasonable. I think the result may depend on your searching method. Have you ever tried to manually change one or two blocks' parameters? For example, change the first XX-BLOCK's kernel size to 5 and its channels to 40 to compensate for the latency. Is that result still worse?
Have you ever tried to manually change one or two blocks' parameters?
We did not manually change one or two blocks in our structures. We usually change all blocks in some principled way, which is easier to explore.
OK. It sounds like the searching method may not be stable when the search space becomes larger...
Actually, in my experiments a search space built with prior knowledge does help the search.
Looking forward to your next works!
| gharchive/issue | 2020-09-15T03:12:05 | 2025-04-01T06:39:00.785082 | {
"authors": [
"LicharYuan",
"MingLin-home"
],
"repo": "idstcv/GPU-Efficient-Networks",
"url": "https://github.com/idstcv/GPU-Efficient-Networks/issues/6",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2285923967 | Preventing unconnected nodes or subgrids.
Resolves #415
!test
| gharchive/pull-request | 2024-05-08T15:52:08 | 2025-04-01T06:39:00.835479 | {
"authors": [
"staudtMarius"
],
"repo": "ie3-institute/OSMoGrid",
"url": "https://github.com/ie3-institute/OSMoGrid/pull/429",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2691232741 | feat: private repository support 🎉
Adds support for private repositories 🎉
Fixes #155
TODO
[x] Improve init command with a prompt for a token
[x] Update docs
[x] CLI Docs
[x] Add new private repositories documentation
Stackblitz previews don't seem to allow storage of tokens. But it is working for me so I think we are all good.
| gharchive/pull-request | 2024-11-25T15:43:00 | 2025-04-01T06:39:00.837837 | {
"authors": [
"ieedan"
],
"repo": "ieedan/jsrepo",
"url": "https://github.com/ieedan/jsrepo/pull/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1126336832 | Computation of "place" is suspect
at https://dev.bibxml.org/get-one/by-docid/?doctype=IETF&docid=I-D sparks-sipcore-multiple-reasons&query=draft-sparks-sipcore-multiple-reasons&query_format=docid_regex&page=1
the data contains "places" pointing to Fremont, CA.
That is utterly wrong.
I'm guessing there's some attempt to map the knowledge that I work for AMS and AMS headquarters are in Fremont as having anything at all to do with this draft. It does not. Including this information is misleading at best.
If I've guessed correctly at the algorithm that added this, please remove that algorithm. If there's no explicit place declared in the document, do not try to derive one.
@rjsparks The "Fremont, CA" entry applies to the publisher's address, and it came from the assumption that the IETF is the publisher and is based where AMS is based.
What is the correct "publisher place" for IETF? Should the "publisher place" be omitted?
Ping @andrew2net the issue refers to this line:
https://github.com/ietf-ribose/relaton-data-ids/blob/main/data/draft-sparks-sipcore-multiple-reasons-00.yaml
Yes, that line should be omitted (that semantic was very surprising to me).
Relaton spec allows contacts to be specified as contributor property, that seems more appropriate… Filed https://github.com/ietf-ribose/relaton-data-ids/issues/12, since this looks like an issue with source data.
@strogonoff in accordance with the other ticket let's remove "publisher place" for IETF. Thanks.
@strogonoff in accordance with the other ticket let's remove "publisher place" for IETF. Thanks.
Are we hiding Relaton’s “place” property for all bibliographic items shown by BibXML service, but leaving the place in source YAML?
For RFC and I-D bibliographic items, we should omit “place” entirely (this is a data source issue).
For other bibliographic items such as for 3GPP or IEEE, the “place” should still be used.
Then this looks like a data source change (cc @andrew2net). Should something be filed in relaton-data-* repositories?
I will not modify the codebase and this could be closed when data sources are reindexed.
For other bibliographic items such as for 3GPP or IEEE, the “place” should still be used.
@ronaldtse what places should be added for 3GPP, IEEE and others?
@andrew2net I believe Ronald means any existing place should be kept for others, but for Internet-Drafts the top-level the place should be removed (and probably for RFCs and rfcsubseries too).
I believe Ronald means any existing place should be kept for others (no change), but for Internet-Drafts the top-level the place should be removed (and probably for RFCs and rfcsubseries too).
Indeed. Let's remove "Place" for RFCs, RFC subseries, and Internet-Draft:
https://github.com/relaton/relaton-ietf/issues/64
We probably should "add" Places for 3GPP and IEEE but those are separate tickets:
https://github.com/relaton/relaton-3gpp/issues/8
https://github.com/relaton/relaton-ieee/issues/15
@strogonoff @ronaldtse there weren't other places, all the documents fetched by relaton-ietf were with pace "Fremont, CA". I removed it for now.
After reindexing the place should be gone, but indexing async task is stuck so I’ll wait for that to be resolved (filed in https://github.com/ietf-ribose/bibxml-infrastructure/issues/17).
Just letting you know that OGC expects to see a place of publication for standards, and will now not have one.
@opoudjis OGC will just need to deal with the lack of place for RFC/RFC subseries/Internet-Drafts. In any case, the lack of publication place is supported by ISO 690.
The place is no longer shown, since sources were reindexed: https://dev.bibxml.org/get-one/by-docid/?doctype=IETF&docid=I-D ietf-lamps-cmp-algorithms
| gharchive/issue | 2022-02-07T18:18:04 | 2025-04-01T06:39:00.849433 | {
"authors": [
"andrew2net",
"opoudjis",
"rjsparks",
"ronaldtse",
"strogonoff"
],
"repo": "ietf-ribose/bibxml-service",
"url": "https://github.com/ietf-ribose/bibxml-service/issues/112",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1727763350 | Clean up submission status page after async validation fails
Describe the issue
If a submission fails validation in a way that prevents metadata extraction (e.g., due to a title mismatch as in #5691), the submission status page shows a lot of loud warnings about metadata problems. These are irrelevant because the metadata simply weren't populated. Worse, they distract from the event history at the bottom of the page which has the only hint at what specifically went wrong.
This needs to be cleaned up to show only relevant information and, ideally, to more emphatically describe why the draft was rejected.
Code of Conduct
[X] I agree to follow the IETF's Code of Conduct
Kind of a special case of #4346
Another special case that triggered this was a draft containing SVG artwork including the x attribute on the <svg> element. As I understand it, this is not allowed by the RFC SVG schema. The result is this bug being triggered. The backend logs show a more helpful message that should be relayed to the user:
ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2183, in validate
self.v3_rng.assertValid(tree)
File "src/lxml/etree.pyx", line 3643, in lxml.etree._Validator.assertValid
lxml.etree.DocumentInvalid: Invalid attribute x for element svg, line 675
and eventually the less specific (but still better than current)
xml2rfc.writers.base.RfcWriterError: [draft path removed]: Error: Invalid document before running preptool.
Yet another: a draft came in with <artwork [...] src="art/something.ascii-art"></artwork>. This resulted in no actual error message in the log, just a useless "see above" message. Running it through xml2rfc revealed errors about the src file not existing. From the user's perspective, again, the draft just failed.
<rfc category="std" ipr="trust200902" consensus="true" submissionType="IETF" docName="draft-somebody-did-something-00.txt">
fails because of the .txt in docName, but the result on the UI is inscrutable.
A file failed submission because of the version="" in the header. This is accepted without comment by xml2rfc, but causes the following error in the celery logs:
[2023-08-22 07:09:25,002: WARNING/ForkPoolWorker-2] ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/workspace/ietf/submit/utils.py", line 1266, in process_and_validate_submission
render_missing_formats(submission) # makes HTML and text, unless text was uploaded
File "/workspace/ietf/submit/utils.py", line 948, in render_missing_formats
xmltree.tree = prep.prep()
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 219, in prep
tree = self.dispatch(self.selectors)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1925, in dispatch
func(e, e.getparent())
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2158, in validate_before
self.die(self.root, 'Expected <rfc> version="3", but found "%s"' % version)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1837, in die
raise RfcWriterError(msg)
xml2rfc.writers.base.RfcWriterError: /a/www/www6s/staging/draft-thomy-json-ntv-00.xml(12): Error: Expected <rfc> version="3", but found "0"
Changing the attribute to version="3" also causes it to fail, this time with
[2023-08-22 07:11:03,164: WARNING/ForkPoolWorker-2] ietf/submit/utils.py(1328) in process_and_validate_submission(): Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2121, in validate
self.v3_rng.assertValid(tree)
File "src/lxml/etree.pyx", line 3643, in lxml.etree._Validator.assertValid
lxml.etree.DocumentInvalid: Did not expect text in element rfc content, line 12
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/workspace/ietf/submit/utils.py", line 1266, in process_and_validate_submission
render_missing_formats(submission) # makes HTML and text, unless text was uploaded
File "/workspace/ietf/submit/utils.py", line 948, in render_missing_formats
xmltree.tree = prep.prep()
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 219, in prep
tree = self.dispatch(self.selectors)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1925, in dispatch
func(e, e.getparent())
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2159, in validate_before
if not self.validate('before'):
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/preptool.py", line 173, in validate
return super(PrepToolWriter, self).validate(when='%s running preptool'%when, warn=warn)
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 2153, in validate
self.die(self.root, 'Invalid document%s.' % (when, ))
File "/usr/local/lib/python3.9/site-packages/xml2rfc/writers/base.py", line 1837, in die
raise RfcWriterError(msg)
xml2rfc.writers.base.RfcWriterError: /a/www/www6s/staging/draft-thomy-json-ntv-00.xml(12): Error: Invalid document before running preptool.
Changing the attribute to version="2" fixes the problem (and presumably is the correct value for the structure of the XML).
As noted on #6221, that PR improves the status page to be less confusing after an error occurred but we should do a better job actually handling the errors we've gathered here before we close this.
It would be good to be more transparent about what the failures actually were.
It would be good to be more transparent about what the failures actually were.
Yes. There's some existing code that grabs exception messages and logs them in the event history. If we can get the actual errors into those event description we'll be almost there. The failures here are hitting generic handling. I'm hoping we can do this in a way that doesn't turn into a lot of special cases...
#6158 related item that should be reflected clearly on the submission status page.
I've renamed this ticket to be more specific about what it's evolved to track.
#7107 is another case that should be addressed
| gharchive/issue | 2023-05-26T14:43:25 | 2025-04-01T06:39:00.870091 | {
"authors": [
"jennifer-richards",
"rjsparks"
],
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/issues/5696",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2344374159 | add a "preview" option to the "edit charter" page
Description
add a "preview" option to the "edit charter" page so we can verify markdown/txt conversion to HTML before submitting a charter. This avoids silly revisions being created by incompetent ADs like me :)
Code of Conduct
[X] I agree to follow the IETF's Code of Conduct
This should be provided wherever we accept markdown when someone is logged in.
| gharchive/issue | 2024-06-10T16:35:51 | 2025-04-01T06:39:00.872408 | {
"authors": [
"paulwouters",
"rjsparks"
],
"repo": "ietf-tools/datatracker",
"url": "https://github.com/ietf-tools/datatracker/issues/7519",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
925557849 | generate a new document for corrections
The format is simpler than the one in RFC 8540. If needed I can update the format.
We may also add ebml.xml as an annex, making it normative. But it's more work than necessary, possibly leading to new fixes...
Fixes #412
We need to create a new entry in the CELLAR documents https://datatracker.ietf.org/wg/cellar/documents/
@robUx4, thanks for starting this work. I'm not good at generating the EBML specifications, but I did poke around a bit.
If I could make a couple of suggestions (which you may have already taken care of - my apologies, if so):
I'd suggest using draft-ietf-cellar-rfc8794-corrections as the output file name. Unless @mcr has "other thoughts" ("spencer-corrections"?), I don't see that we need to go through the "post as individual and see if the working group adopts it, and then rename" shuffle.
It is going to be much easier to take this draft through the approvals process, at some point in time, if you actually cut-and-paste the text being modified, and then the complete text with corrections, for each correction, using "OLD" and "NEW" to prefix them, just as we would for errata (if we were doing the RFC errata process). If you're correcting a sentence in the middle of a paragraph, show the entire paragraph (OLD and NEW), so that it's obvious to the reader where this text is in the document.
If you can summarize in a sentence or two about why the original text needs to be corrected, that would be very helpful. I see something like that (I think!), so thank you for that.
Just looking at the corrections listed so far - I see that https://www.rfc-editor.org/rfc/rfc8141.html obsoletes RFC 2141, but I'm not seeing that RFC 2141 has been deprecated (unless I'm missing it). Is the goal here to make use of extensions that are allowed in RFC 8141 (https://www.rfc-editor.org/rfc/rfc8141.html#appendix-B),, or just to use the current version of the standards-track URN specification? Either way, we should talk about who deprecated RFC 2141, or change the wording to Obsolete.
done
OK, although for the XML Schema it may look a bit odd
will do
#389 mentions deprecated but it's just obsolete. maybe we don't really need this change ?
Given how RFC errata work, I opted to keep updating the current document with changes, so it can be diff'ed with the current RFC to produce the errata text.
There is now an rfc8794 branch that collects all the errata so we can regenerate a plain version with these errata applied.
| gharchive/pull-request | 2021-06-20T09:25:48 | 2025-04-01T06:39:00.880417 | {
"authors": [
"SpencerDawkins",
"robUx4"
],
"repo": "ietf-wg-cellar/ebml-specification",
"url": "https://github.com/ietf-wg-cellar/ebml-specification/pull/413",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
240598272 | Plan
2017.07.03-2017.07.07
Finish reworking the admin login, compatible with both auth and local
Complete the DingTalk wrapper
Get php-channel running
2017.07.03-2017.07.07
Complete the DingTalk wrapper
Optimize the TODOs in php-channel
| gharchive/issue | 2017-07-05T09:55:07 | 2025-04-01T06:39:00.882905 | {
"authors": [
"XIN-Dongzhe"
],
"repo": "ifintech/phplib",
"url": "https://github.com/ifintech/phplib/issues/12",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
721767344 | Merge ign-launch1 to ign-launch2
Signed-off-by: Nate Koenig nate@openrobotics.org
The ign-gazebo 3.4.0 failed for amd64 on Focal, which is causing the CI here to fail. We should fix that before merging this.
https://build.osrfoundation.org/job/ign-gazebo3-debbuilder/55/console
ign-gazebo 3.5.0 was correctly released for Focal / amd64. The Jenkins Ubuntu build's failure should be fixed by https://github.com/ignition-tooling/release-tools/pull/339.
I think this is good to go!
| gharchive/pull-request | 2020-10-14T20:43:23 | 2025-04-01T06:39:00.926678 | {
"authors": [
"chapulina",
"nkoenig"
],
"repo": "ignitionrobotics/ign-launch",
"url": "https://github.com/ignitionrobotics/ign-launch/pull/64",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
850327826 | Specific issue templates including GPU
Desired behavior
Issue templates were first added in #234 and then centralized organization-wide in #250. However, I think that it would be useful to ask the reporters for the GPU and driver versions they use when reporting rendering-related issues. I'm not sure about the best approach, though.
Alternatives considered
Leave it as it is. This requires extra effort from maintainers asking for the used GPUs and leaves some issues in a hard to replicate state when no concrete GPU is mentioned.
Implementation suggestion
If you do not want to maintain two versions of the issue template, you might as well add the GPU-specific section to the general template with a note that it is only needed when reporting rendering-related issues.
The issue template should include easy to follow instructions for getting the requested info - e.g. run glxinfo | head and uname -a or something like that (+ of course some instructions for Windows). If the instructions would get too long, they might be written as an ign-docs tutorial linked from the issue template.
This would also affect ign-sensors, ign-gui and ign-gazebo.
you might as well add the GPU-specific section to the general template with a note that it is only needed when reporting rendering-related issues.
This sounds like a reasonable compromise for now. Mind opening a PR to https://github.com/ignitionrobotics/.github?
Another idea that came to mind was adding a specific issue template for rendering bugs, but that would need to be added to the entire org, and I worry it may cause confusion on repositories that don't touch rendering.
On Linux, the following commands can give necessary information about GPU:
lspci -nn | grep VGA
lshw -C display -numeric
glxinfo -B
sudo X -version
Done: https://github.com/ignitionrobotics/.github/pull/3 .
| gharchive/issue | 2021-04-05T12:16:54 | 2025-04-01T06:39:00.931751 | {
"authors": [
"chapulina",
"darksylinc",
"peci1"
],
"repo": "ignitionrobotics/ign-rendering",
"url": "https://github.com/ignitionrobotics/ign-rendering/issues/300",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2068723231 | 🛑 Nexus: Fonts CDN is down
In 4f6f353, Nexus: Fonts CDN (https://fonts-cdn.nexuspipe.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Nexus: Fonts CDN is back up in 6da8e28 after 14 minutes.
| gharchive/issue | 2024-01-06T16:58:40 | 2025-04-01T06:39:00.934447 | {
"authors": [
"Exponential-Workload"
],
"repo": "ignore-me-lol/personal-uptime-monitor",
"url": "https://github.com/ignore-me-lol/personal-uptime-monitor/issues/430",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1031619356 | Update scalafmt-core to 3.0.7
Updates org.scalameta:scalafmt-core from 2.7.5 to 3.0.7.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Files still referring to the old version number
The following files still refer to the old version number (2.7.5).
You might want to review and update them manually.
.github/workflows/format.yml
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "org.scalameta", artifactId = "scalafmt-core" } ]
labels: library-update, semver-major, old-version-remains
Superseded by #176.
| gharchive/pull-request | 2021-10-20T16:43:18 | 2025-04-01T06:39:00.985568 | {
"authors": [
"scala-steward"
],
"repo": "iheartradio/ficus",
"url": "https://github.com/iheartradio/ficus/pull/175",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2104985440 | _ma_target_ref_db_details.gene_name item type change
When parsing AlphaFoldDB entries we found that some cases have a value for _ma_target_ref_db_details.gene_name that is incompatible with the item type as defined in ModelCIF (code). Examples are:
AF-A0A0M3KKW8-F1
AF-H6SHY4-F1
AF-O78126-F1
AF-Q6RH29-F1
AF-Q6RH28-F1
AF-Q6RH30-F1
AF-Q6XEC0-F1
We reported this issue to the helpdesk of AlphaFoldDB, who answered:
//
The pipeline for predictions of AlphaFold, retrieves metadata from UniProt.
Since, the mmCIF files were created by Google Deepmind, we can't modify the
content of the field ourselves.
We recommend you to please reach out to https://pdb-dev.wwpdb.org/ to look
into the updating regex for this field as it does accept whitespaces.
//
Rather than changing the regex for the type "code", perhaps the type for _ma_target_ref_db_details.gene_name can be set to "text" or another type that is compatible with Uniprot entries that have a gene name (from the 'GN' record I suppose) with spaces in it.
e.g. https://www.alphafold.ebi.ac.uk/entry/H6SHY4 contains
_ma_target_ref_db_details.gene_name "celH or egH"
while https://www.alphafold.ebi.ac.uk/entry/O78126 contains
_ma_target_ref_db_details.gene_name "MHC class I HLA-A"
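To see concretely why such values clash with a whitespace-free item type, here is an illustrative check; the real pattern lives in the PDBx/ModelCIF dictionary, so the regex below only encodes the assumed key property that a "code" value may not contain whitespace:

```python
import re

# Assumption for illustration: "code"-typed values may not contain whitespace.
code_like = re.compile(r"^\S+$")

for value in ("TP53", "celH or egH", "MHC class I HLA-A"):
    if code_like.match(value):
        print(f"{value!r}: fits a whitespace-free 'code' type")
    else:
        print(f"{value!r}: needs a whitespace-tolerant type such as 'text'")
```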
| gharchive/issue | 2024-01-29T09:01:01 | 2025-04-01T06:39:01.009923 | {
"authors": [
"benmwebb",
"drlemmus"
],
"repo": "ihmwg/ModelCIF",
"url": "https://github.com/ihmwg/ModelCIF/issues/16",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1117422288 | gL9bboUa-articles-queries-and-user-end-points
What does this PR do?
build a RESTful API for my-brand by adding CRUD functionality on articles, queries and user endpoints
Description of Task to be completed?
[x] CRUD operation on articles
[x] CRUD operation on queries
[x] CRUD operations on user
[x] Display articles on blog
[x] Posting comments
How should this be manually tested?
clone this project and install all required dependencies using npm install, then run the npm run dev command
test endpoints using Postman
Any background context you want to provide?
you can connect to the local database or Atlas by changing the connection string in the .env file
What are the relevant pivotal tracker stories?
https://trello.com/c/K5JX40AU/26-article-crud-operations
Screenshots (if appropriate)
N/A
Questions: No
Please finish the last check list
Finished
| gharchive/pull-request | 2022-01-28T13:44:02 | 2025-04-01T06:39:01.827886 | {
"authors": [
"ihonore"
],
"repo": "ihonore/my-brand-api",
"url": "https://github.com/ihonore/my-brand-api/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
663566642 | typo in docstring reporting/computations/select()
Small typo in docstring of ixmp/reporting/computations/select().
More typos will be added here in case they are encountered.
The check failures are expected/handled in #357, so I will merge this.
| gharchive/pull-request | 2020-07-22T08:13:48 | 2025-04-01T06:39:04.290392 | {
"authors": [
"francescolovat",
"khaeru"
],
"repo": "iiasa/ixmp",
"url": "https://github.com/iiasa/ixmp/pull/352",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
431796192 | File download error, Content-Length is zero, retrying 1/3
Ubuntu 16.04 system
The software version is 3.5.6
Same problem here.
I guess it has stopped working?
The new version has removed this message. Please update and download again, and report back if the problem persists.
| gharchive/issue | 2019-04-11T02:04:32 | 2025-04-01T06:39:04.296091 | {
"authors": [
"iikira",
"skadandy",
"wqh0109663"
],
"repo": "iikira/BaiduPCS-Go",
"url": "https://github.com/iikira/BaiduPCS-Go/issues/682",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
162048494 | Add translate to pt-BR
Great theme.
I intend to popularize it here in Brazil
Thanks
Thanks. :+1:
| gharchive/pull-request | 2016-06-23T23:53:06 | 2025-04-01T06:39:04.301807 | {
"authors": [
"iissnan",
"lesleyandrez"
],
"repo": "iissnan/hexo-theme-next",
"url": "https://github.com/iissnan/hexo-theme-next/pull/958",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1908496326 | The editor reports errors for components starting with u--, but it does not affect usage
For example, when using <u--input></u--input>.
Error: Cannot find name 'U'. ts(2304); Cannot find name 'Input'. ts(2304)
This is legacy syntax left over from 2.0; later the up- prefix will uniformly replace the u-- prefix to ensure better compatibility.
| gharchive/issue | 2023-09-22T09:05:17 | 2025-04-01T06:39:04.317227 | {
"authors": [
"WEB-ZENG-GitHub",
"ijry"
],
"repo": "ijry/uview-plus",
"url": "https://github.com/ijry/uview-plus/issues/185",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1839306724 | 🛑 Penyanyi VF v3 is down
In a2066b3, Penyanyi VF v3 (https://discord-musicbot.nexter32.repl.co) was down:
HTTP code: 502
Response time: 528 ms
Resolved: Penyanyi VF v3 is back up in 4c56315.
| gharchive/issue | 2023-08-07T12:00:32 | 2025-04-01T06:39:04.325503 | {
"authors": [
"ikhwan32"
],
"repo": "ikhwan32/uptimevf",
"url": "https://github.com/ikhwan32/uptimevf/issues/2180",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
812832821 | [Fibaro_HC3] Error getting data from Home Center: Error: unable to get local issuer certificate
[2/21/2021, 12:20:44 PM] [Fibaro_HC3] Error getting data from Home Center: Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12) {
code: 'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'
}
(node:3902) UnhandledPromiseRejectionWarning: Error: unable to get local issuer certificate
at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
at TLSSocket.emit (events.js:315:20)
at TLSSocket._finishInit (_tls_wrap.js:932:8)
at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
(Use node --trace-warnings ... to show where the warning was created)
(node:3902) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:3902) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
How did you configure the host parameter in config.json?
Is your Home Center exposing https interface?
Please enable also the http interface.
I'm currently implementing the support also for https. I will keep you updated.
I tried, but it's locked. I will check this point on my side and get back to you when http is activated. Thanks for your quick support! :-)
It works!! Thanks a lot! Very good job!
Check within a few hours, I should publish a release with https support.
@hoffmanncedric, try new version. You should put in the new param "url" the address of your Home Center like:
"url": "https://hc3-00000XXX.local"
Then you need to obtain the ca.cer file from your Home Center and put it in the same folder as config.json.
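For reference, a platform entry using the new parameter might look roughly like the snippet below. Only the url key is confirmed in this thread; the platform identifier is an assumption, any other keys from your existing configuration stay as they are, and the ca.cer file sits next to config.json as described above:

```json
{
  "platforms": [
    {
      "platform": "FibaroHC3",
      "url": "https://hc3-00000XXX.local"
    }
  ]
}
```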
2/21/2021, 5:51:37 PM] [Fibaro_HC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-00014803.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-00014803.local',
response: undefined
}
(node:10159) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND hc3-00014803.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26)
(Use node --trace-warnings ... to show where the warning was created)
(node:10159) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag --unhandled-rejections=strict (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
@hoffmanncedric, where did you put the ca.cer file?
I tried to copy the .cer into the module and fibaro folders, but I get the same problem
Are you running homebridge in docker?
I'm currently looking in .homebridge in the user home directory or in the /var/lib/homebridge when you run the homebridge UX. Which setup are you running?
Yes, my homebridge is in a docker container on my Synology NAS, with the latest homebridge-for-docker setup
Are you able to tell the path as seen by the homebridge process?
Yes, it's here:
Are you using this: https://github.com/oznu/docker-homebridge ?
@hoffmanncedric, can you try with 1.1.2 ? Leave ca.cer where you put it.
@ilcato where should i put certificate for raspberry pi installation?
Updated to 1.1.2
I think it's a warning
I will switch to http :-)
If you use https (with cer file uploaded to homebridge) it will not accept access using an IP address because only hc-0000xxxx.local is listed in certificate subject alternative name. But when homebridge is running in docker container (on raspberry pi) the .local addresses are not resolved. Non-secure (http) access is not working either. It is always complaining about the certificate like mentioned by @hoffmanncedric above.
For http use host param and delete url param.
I'm using raspberry image, https, and url param. My ca.cer file is in "/var/lib/homebridge/" folder. I'm still getting
[22/02/2021, 23:21:42] [FibaroHC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-00014186.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-00014186.local',
response: undefined
}
If you use https (with cer file uploaded to homebridge) it will not accept access using an IP address because only hc-0000xxxx.local is listed in certificate subject alternative name. But when homebridge is running in docker container (on raspberry pi) the .local addresses are not resolved. Non-secure (http) access is not working either. It is always complaining about the certificate like mentioned by @hoffmanncedric above. In the end I cannot make the 1.1.2 plugin working :-(
@galuszka, I confirm that no docker support is present for this feature. But you can use http by using the host param and removing the url param as it was before this change.
I'm using raspberry image, https, and url param. My ca.cer file is in "/var/lib/homebridge/" folder. I'm still getting
[22/02/2021, 23:21:42] [FibaroHC3] Error getting data from Home Center: Error: getaddrinfo ENOTFOUND hc3-XXX.local
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:67:26) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: 'hc3-XXX.local',
response: undefined
}
@vkamenski, are you able to resolve HC3 name from the raspberry?
I will switch to http :-)
@hoffmanncedric, as @galuszka said this doesn't work on Docker. Sorry.
| gharchive/issue | 2021-02-21T11:28:39 | 2025-04-01T06:39:04.358148 | {
"authors": [
"galuszka",
"hoffmanncedric",
"ilcato",
"vkamenski"
],
"repo": "ilcato/homebridge-fibaro-hc3",
"url": "https://github.com/ilcato/homebridge-fibaro-hc3/issues/25",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
281581189 | Where can I download the VM file?
Hi @ilkarman ,
This project is really good. I want to download the VM file to run these deep learning frameworks. Can you share the download URL? Or I could buy the VM file. Thanks a lot
Hey xxccry, do you mean the VM Image file? This link ( https://azuremarketplace.microsoft.com/en-us/marketplace/apps/microsoft-ads.dsvm-deep-learning?tab=Overview ) should let you create one.
| gharchive/issue | 2017-12-13T00:08:15 | 2025-04-01T06:39:04.363534 | {
"authors": [
"ilkarman",
"xxccry"
],
"repo": "ilkarman/DeepLearningFrameworks",
"url": "https://github.com/ilkarman/DeepLearningFrameworks/issues/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1396899026 | Allow socket access through access token authentication
Currently, the socket that allows real-time updates on the Queue verifies user authentication using the stored JWT token, which is only present if the user authenticates through Shibboleth. This prevents other services (e.g. apps, etc.) from getting real-time information such as the queue status without regularly polling the Queue's other APIs.
Can you give more information about the use case for real-time updates in your course app? Is polling the API causing issues?
There were a few ideas that I had considered which I believe would beneift from real-time information:
An interactive display board showing the current queue status - polling would work fine but introduces additional complexity in identifying the changes between each poll.
A native queue app for course staff to interact with the queue more easily on mobile devices - polling would probably not work well unless on a very short interval, or it could potentially lead to multiple staff attempting to answer the same question or similar issues.
I think that it may be better to bring up a WebView to handle Shibboleth authentication in the latter case and use the real-time socket as normal, but I'm not sure what the best strategy to work around the lack of real-time information for the first case.
| gharchive/issue | 2022-10-04T22:09:26 | 2025-04-01T06:39:04.366003 | {
"authors": [
"echuber2",
"hjstn"
],
"repo": "illinois/queue",
"url": "https://github.com/illinois/queue/issues/339",
"license": "NCSA",
"license_type": "permissive",
"license_source": "github-api"
} |
808059461 | Bug: fix getCalendar day cursor in short months
Create day cursor in local timezone
Set date at same time as month to avoid short month issues
For more discussion see #243
Thanks, github-actions bot. Verified the change works on the 244 docs as well.
Thank you so much for your help, @sallaben!
| gharchive/pull-request | 2021-02-14T22:14:14 | 2025-04-01T06:39:04.368235 | {
"authors": [
"aabounegm",
"sallaben"
],
"repo": "illright/attractions",
"url": "https://github.com/illright/attractions/pull/244",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
860274757 | Using reduction variable causes misaligned write with CUDA target?
Minimal working example (adapted from page 9 of the official cheat sheet to make it compile):
#include <algorithm>
#include <assert.h>
#include <CL/sycl.hpp>
using namespace cl::sycl;
int main() {
queue myQueue;
buffer<int> valuesBuf{1024};
{ // Initialize buffer on the host with 0, 1, 2, 3, ..., 1023
host_accessor a{valuesBuf};
std::iota(a.get_pointer(), a.get_pointer() + 1024, 0);
}
// Buffers with just 1 element to get the reduction results
int sumResult = 0;
buffer<int> sumBuf{&sumResult, 1};
int maxResult = 0;
buffer<int> maxBuf{&maxResult, 1};
myQueue.submit(
[&](handler &cgh) { // Input values to reductions are standard accessors
auto inputValues = valuesBuf.get_access<access_mode::read>(
cgh); // Create temporary objects describing variables with
// reduction semantics
accessor sumAccessor = sumBuf.get_access<access::mode::read_write>(cgh);
accessor maxAccessor = maxBuf.get_access<access::mode::read_write>(cgh);
auto sumReduction = reduction(sumAccessor, plus<int>());
auto maxReduction = reduction(
maxAccessor,
maximum<int>()); // parallel_for performs two reduction operations
// For each reduction variable, the implementation:
// - Creates a corresponding reducer
// - Passes a reference to the reducer to the lambda as a parameter
cgh.parallel_for(range<1>{1024}, sumReduction, maxReduction,
[=](id<1> idx, auto &sum, auto &max) {
// plus<>() corresponds to += operator, so sum can be
// updated via += or combine()
sum += inputValues[idx];
// maximum<>() has no shorthand operator, so max
// can only be updated via combine()
max.combine(inputValues[idx]);
});
});
// sumBuf and maxBuf contain the reduction results once
// the kernel completes
assert(maxBuf.get_host_access()[0] == 1023 &&
sumBuf.get_host_access()[0] == 523776);
}
When this is compiled (with /opt/hipSYCL/bin/syclcc -Wall -Wextra -Wpedantic -O3 -std=c++17 -g --hipsycl-gpu-arch=sm_50 --cuda-path=/usr/local/cuda -L/usr/local/cuda/lib64 rtest.cpp -o rtest) and run under cuda-memcheck, things don't look so good.
========= CUDA-MEMCHECK
[hipSYCL Warning] dag_direct_scheduler: Detected a requirement that is neither of discard access mode (SYCL 1.2.1) nor noinit property (SYCL 2020) that accesses uninitialized data. Consider changing to discard/noinit. Optimizing potential data transfers away.
[hipSYCL Warning] buffer_memory_requirement: Could not find embedded pointer in kernel blob for this requirement; do you have unnecessary accessors that are unused in your kernel?
[hipSYCL Warning] buffer_memory_requirement: Could not find embedded pointer in kernel blob for this requirement; do you have unnecessary accessors that are unused in your kernel?
========= Invalid __global__ write of size 4
========= at 0x00000428 in /opt/hipSYCL/bin/../include/hipSYCL/glue/cuda/../generic/hiplike/hiplike_reducer.hpp:76:__hipsycl_kernel__ZTSZZZN7hipsycl4glue23hiplike_kernel_launcherILNS_2rt10backend_idE0ENS2_10cuda_queueEE4bindI24__hipsycl_unnamed_kernelLNS2_11kernel_typeE1ELi1EZZ4mainENKUlRNS_4sycl7handlerEE21_7clESB_EUlNS9_2idILi1EEERT_RT0_E35_26JNS9_6detail29accessor_reduction_descriptorINS9_8accessorIiLi1ELNS9_11access_modeE2ELNS9_6targetE0ELNS9_6access11placeholderE0EEENS9_4plusIiEEEENSL_ISR_NS9_7maximumIiEEEEEEEvNSD_IXT1_EEENS9_5rangeIXT1_EEES10_mT2_DpT3_ENKUlPNS2_8dag_nodeEE605_16clES15_ENKUlDpT_E646_41clIJNS0_16hiplike_dispatch28hiplike_reduction_descriptorISU_NS1B_15reduction_stageILi1EEEEENS1C_ISX_S1E_EEEEEDaS18_EUliDpRS17_E744_44
========= by thread (0,0,0) in block (0,0,0)
========= Address 0x6a06ab1205f6b4d3 is misaligned
========= Saved host backtrace up to driver entry point at kernel launch time
========= Host Frame:/lib/libcuda.so.1 (cuLaunchKernel + 0x2b8) [0x223718]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 [0x1a23d]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 [0x1a2c7]
========= Host Frame:/usr/local/cuda/lib64/libcudart.so.10.0 (cudaLaunchKernel + 0x225) [0x4e3c5]
========= Host Frame:./rtest [0xc17f]
========= Host Frame:./rtest [0x1dced]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt10cuda_queue13submit_kernelERKNS0_16kernel_operationESt10shared_ptrINS0_8dag_nodeEE + 0x9e) [0x1252e]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x20f82]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt16kernel_operation8dispatchEPNS0_20operation_dispatcherESt10shared_ptrINS0_8dag_nodeEE + 0x56) [0x35886]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20multi_queue_executor15submit_directlyESt10shared_ptrINS0_8dag_nodeEEPNS0_9operationERKSt6vectorIS4_SaIS4_EE + 0x10b6) [0x200e6]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2cca9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xd3c) [0x281fc]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaMemcpyAsync.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x76703]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt10cuda_queue13submit_memcpyERKNS0_16memcpy_operationESt10shared_ptrINS0_8dag_nodeEE + 0x2ad) [0x11c4d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x210d2]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt16memcpy_operation8dispatchEPNS0_20operation_dispatcherESt10shared_ptrINS0_8dag_nodeEE + 0x57) [0x1b8f7]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20multi_queue_executor15submit_directlyESt10shared_ptrINS0_8dag_nodeEEPNS0_9operationERKSt6vectorIS4_SaIS4_EE + 0x10b6) [0x200e6]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2cca9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2d53d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x2ad63]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xb19) [0x27fd9]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_queue.cpp:221 @ submit_memcpy(): cuda_queue: Couldn't submit memcpy (error code = CUDA:4)
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaEventQuery.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x6c9de]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZNK7hipsycl2rt15cuda_node_event11is_completeEv + 0x2b) [0xf81b]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZNK7hipsycl2rt8dag_node11is_completeEv + 0x48) [0x22998]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt17dag_submitted_ops22update_with_submissionESt10shared_ptrINS0_8dag_nodeEE + 0x4dd) [0x30b5d]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt11dag_manager22register_submitted_opsESt10shared_ptrINS0_8dag_nodeEE + 0x4e) [0x2f90e]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_direct_scheduler6submitESt10shared_ptrINS0_8dag_nodeEE + 0xe20) [0x282e0]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so [0x30094]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt13worker_thread4workEv + 0x244) [0x32b44]
========= Host Frame:/lib/libstdc++.so.6 [0xcfbc4]
========= Host Frame:/lib/libpthread.so.0 [0x9299]
========= Host Frame:/lib/libc.so.6 (clone + 0x43) [0xff053]
=========
========= Program hit cudaErrorLaunchFailure (error 4) due to "unspecified launch failure" on CUDA API call to cudaEventSynchronize.
========= Saved host backtrace up to driver entry point at error
========= Host Frame:/lib/libcuda.so.1 [0x37b2c3]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so [0x6c83e]
========= Host Frame:/opt/hipSYCL/bin/../lib/hipSYCL/librt-backend-cuda.so (_ZN7hipsycl2rt15cuda_node_event4waitEv + 0x2b) [0xfd6b]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZN7hipsycl2rt20dag_multi_node_event4waitEv + 0x29) [0x24a69]
========= Host Frame:/opt/hipSYCL/bin/../lib/libhipSYCL-rt.so (_ZNK7hipsycl2rt8dag_node4waitEv + 0x22) [0x23ee2]
========= Host Frame:./rtest [0x1919b]
========= Host Frame:./rtest [0x18862]
========= Host Frame:./rtest [0xfec6]
========= Host Frame:./rtest [0x699c]
========= Host Frame:/lib/libc.so.6 (__libc_start_main + 0xd5) [0x27b25]
========= Host Frame:./rtest [0x570e]
=========
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_event.cpp:55 @ is_complete(): cuda_node_event: Couldn't query event status (error code = CUDA:4)
[hipSYCL Error] from /home/kozet/gwaith/uofl-bioinformatics/gpgpu/hipSYCL/src/runtime/cuda/cuda_event.cpp:66 @ wait(): cuda_node_event: cudaEventSynchronize() failed (error code = CUDA:4)
rtest: rtest.cpp:47: int main(): Assertion `maxBuf.get_host_access()[0] == 1023 && sumBuf.get_host_access()[0] == 523776' failed.
========= Error: process didn't terminate successfully
========= No CUDA-MEMCHECK results found
Is this a genuine bug with reduction variables or am I doing something wrong?
Thanks for reporting, it seems that #499 has introduced a regression that broke reductions with accessors. You can try reductions on top of USM pointers as a workaround. We will fix ASAP.
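For reference, here is a minimal sketch of that workaround: the same sum reduction expressed over a USM pointer instead of a buffer/accessor. This is an illustration only, not code from the PR; the header and namespace spelling may differ between hipSYCL versions.
#include <sycl/sycl.hpp>
#include <cassert>
int main() {
  sycl::queue q;
  // Keep the reduction variable in USM instead of a buffer + accessor.
  int* sum = sycl::malloc_shared<int>(1, q);
  *sum = 0;
  q.submit([&](sycl::handler& cgh) {
     auto reduction = sycl::reduction(sum, sycl::plus<int>());
     cgh.parallel_for(sycl::range<1>{1024}, reduction,
                      [=](sycl::id<1> idx, auto& r) { r += static_cast<int>(idx[0]); });
   }).wait();
  assert(*sum == 523776); // same expected value as the assertion in the report above
  sycl::free(sum, q);
  return 0;
}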
Fix in PR #528
This PR also adds a test case based on this code pattern where I have removed some things that are not necessary, such as the initialization of the buffers with a pointer.
| gharchive/issue | 2021-04-17T00:51:30 | 2025-04-01T06:39:04.373860 | {
"authors": [
"bluebear94",
"illuhad"
],
"repo": "illuhad/hipSYCL",
"url": "https://github.com/illuhad/hipSYCL/issues/527",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2129179439 | Allow setting a fan start/stop speed
Right now, if I have a fan curve like
50°C - 0%
60°C - 15%
70°C - 25%
and I'm running a light load (e.g. a 2D game) the graphics card reaches ~55°C and LACT will interpolate between 0% and 15%. This either leads to whining noises, because the fan can't start at lower than ~15%, or it leads to constant start/stop cycles as the temperature hovers around the point where the fan actually starts.
There should be a fan start speed setting. The fan would only start once the curve reaches that speed. If temps drop, the fan will slow down below this start speed (most fans can run below the speed they start at) and will stop once it reaches a certain minimum speed. It will then only start again once it reaches the start speed on the curve again. This way the start/stop cycles would be much longer and the fan would never get a PWM signal that it can't handle.
See a similar feature in corectrl and its implementation. They only have a start speed and are missing the stop speed.
In fancontrol these values are called MINSTART and MINSTOP
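For context, this is roughly how those two values appear in a (heavily abridged) fancontrol configuration; the hwmon paths and numbers below are placeholders for illustration, not values taken from this issue:
INTERVAL=2
FCTEMPS=hwmon3/pwm1=hwmon3/temp1_input
MINTEMP=hwmon3/pwm1=50
MAXTEMP=hwmon3/pwm1=70
MINPWM=hwmon3/pwm1=0
# MINSTART: PWM value (0-255) the fan needs to start spinning from a standstill
MINSTART=hwmon3/pwm1=150
# MINSTOP: lowest PWM value at which the fan still spins; below this it is stopped
MINSTOP=hwmon3/pwm1=100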
Sounds reasonable. This shouldn't be too difficult to implement.
@oehme Could you try building the feature/fan-threshold branch and see how it works for you? I've added fan start and stop thresholds, but my GPU doesn't have an audible transition between the fan being on and off, so I cannot test for sure that it works properly.
I guess this is exactly the feature I wanted. I compiled and tried the feature branch "fan-threshold". For now I cannot say whether it's working. What do the threshold numbers mean? Are they degrees Celsius or percent? What I can tell is that these values are not saved: when I close LACT, my threshold settings are gone.
In my case I have a Radeon RX 6600, and when I watch videos, for example on YouTube, the decoding happens on the GPU. By default, without any tools like LACT, the driver's fan handling is a bit nervous, going from zero to a somewhat noisy 1500 rpm while watching videos. With LACT I can lower that to 750 rpm, which is silent to my ears. But as stated above, the fans keep turning on and off the whole time. So let's say the fans should turn on at 55 °C at 750 rpm and then cool the card down to 50 °C. How should I set up the threshold sliders?
Ah, and one more little thing. Can we have a make clean target in the makefile?
@dccoder84
Since this issue was created, fan control hysteresis has been implemented in #291, which lets you set a threshold for how much the temperature needs to change and how long it needs to stay at a lower value. This can prevent the fan from ramping up and down by keeping it at the same speed until the configured conditions are met. It should help in your case, and it's a more universal solution than start/stop thresholds.
However, if it doesn't work for you, and you want to try this branch:
What do the threshold numbers mean? Are they degree Celcius or percent?
They are in percentages, 0 to 100.
What I can tell is that these values are not saved. When I close Lact then my threshold settings are gone.
Make sure you're running a version of the daemon built from that branch, otherwise the settings won't get used.
For your setup, you should be able to set the stop threshold at your 55C speed and start threshold at the 50C speed.
Ah, and one more little thing. Can we have a make clean command in the make file? 😸
You just need to delete the target directory. You should only ever need to do this to free up space used up by old builds.
Sorry for the delayed response - The machine in question was not mine, but a friends, so I couldn't test for a while. The solution in #291 works fine for them, so this one is no longer needed from my perspective.
I'm going to close this as #291 should prevent the original issue of the fan speed being changed too often in the majority of cases.
Sorry to resurrect this, but is the fan stop/start threshold feature planned to be officially added? I have a 6600 xt which will not spin down below 31% if I use a custom fan curve (i.e. the fan will only turn off in Automatic mode). The fan start feature in Corectrl works nicely, so I'd like to see it implemented here. Was the aforementioned feature branch not working correctly?
Right now we have the features "Spindown delay" and "Speed change threshold". They solve the issue of the fans constantly turning on and off to some degree. Personally, I would like to be able to turn the fans on at 55 °C and turn them off at 50 °C. Maybe I could achieve this with the "speed change threshold", but then this would also affect the whole fan curve and not only its start. So I think there is still room for improvement and discussion.
Yeah, the spindown delay and speed change threshold are good solutions for the issue of unnecessary speed ramping, but for my case of the Fan Stop feature not working in any custom mode, the originally proposed solution would be better (assuming it works like Corectrl). I'm not sure if I should create a new issue, or piggyback on this one. It may also be good to know if my issue is unique, as I would imagine that someone else might have noticed the Fan Stop feature not working if it were widespread. I know the 7XXX series has the issue of not being able to circumvent Fan Stop, but I haven't found any discussion about how it works in the 6XXX series.
So what I tried to say is that while I may be able to turn the fans off at 50 °C and on at 55 °C, I cannot at the same time set a fine-grained speed change for the rest of the fan curve (for example, changing the fan speed every 2 °C) with the "speed change threshold".
| gharchive/issue | 2024-02-11T21:34:19 | 2025-04-01T06:39:04.399279 | {
"authors": [
"StarTroop",
"dccoder84",
"ilya-zlobintsev",
"oehme"
],
"repo": "ilya-zlobintsev/LACT",
"url": "https://github.com/ilya-zlobintsev/LACT/issues/266",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2290862577 | 🛑 Gitlab is down
In a41fa71, Gitlab (https://git.stis.ac.id) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Gitlab is back up in 6e46a96 after 15 hours, 29 minutes.
| gharchive/issue | 2024-05-11T11:54:20 | 2025-04-01T06:39:04.411258 | {
"authors": [
"im-perativa"
],
"repo": "im-perativa/stis-uptime",
"url": "https://github.com/im-perativa/stis-uptime/issues/945",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1698376138 | A local 500 MB txt file loads very slowly; how can the loading speed be improved?
It's a warning, not an error; it doesn't affect execution.
@imClumsyPanda
I know there is no error. What I mean is: how can the loading speed be improved? How should the code be modified?
How can this be optimized?
| gharchive/issue | 2023-05-06T02:07:10 | 2025-04-01T06:39:04.415649 | {
"authors": [
"Helenailse1",
"cristianohello",
"imClumsyPanda"
],
"repo": "imClumsyPanda/langchain-ChatGLM",
"url": "https://github.com/imClumsyPanda/langchain-ChatGLM/issues/251",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
74274322 | Fix the -batch <file> handling
Triggered by a tweet of Stephan Preibisch, this developer tried to run a
macro in batch mode via
.../ImageJ-win32 -batch abc.ijm
and was surprised that it did not work. The reason it did not work was a
completely borked logic behind the -batch handling, which assumed that
.../ImageJ-win32 -eval ... -batch was the only use case.
This patch fixes that logic to work correctly.
Signed-off-by: Johannes Schindelin johannes.schindelin@gmx.de
Cool, thanks @dscho!
| gharchive/pull-request | 2015-05-08T07:59:29 | 2025-04-01T06:39:04.428241 | {
"authors": [
"dscho",
"hinerm"
],
"repo": "imagej/imagej-legacy",
"url": "https://github.com/imagej/imagej-legacy/pull/113",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
360989466 | Support for a configuration file
It would be pretty convenient to have support for reading plugin options from a configuration file.
It would also allow imagemin-cli and imagemin-micro to be more flexible in providing customizable image compression services.
Regarding to https://github.com/imagemin/imagemin/issues/171#issuecomment-315229115 there is at least this https://github.com/paulpflug/imagemin-manager, with some preprocessing, too.
| gharchive/issue | 2018-09-17T18:30:14 | 2025-04-01T06:39:04.436368 | {
"authors": [
"RiZKiT",
"chaucerbao"
],
"repo": "imagemin/imagemin",
"url": "https://github.com/imagemin/imagemin/issues/296",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
201163478 | How should this be used with Storyboard?
Could you provide a demo?
Please don't use Storyboard for now~ In theory it should be solvable too, but the project hasn't been maintained for a long time. I'll refactor it as soon as possible, sorry~
| gharchive/issue | 2017-01-17T02:54:07 | 2025-04-01T06:39:04.437198 | {
"authors": [
"858665536",
"imagons"
],
"repo": "imagons/DNOLabelAnimation",
"url": "https://github.com/imagons/DNOLabelAnimation/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2488036833 | Redeploying Rime overwrites my settings
After tapping "redeploy" in Rime, the sync path and sync folder name are reset to their defaults, and the input method schema is also reset to the default.
Did you enable the iCloud option?
Did you enable the iCloud option?
Yes, iCloud is enabled. Do I need to turn iCloud off before deploying?
Check whether the content stored in iCloud is correct.
Check whether the content stored in iCloud is correct.
That was exactly the cause: the old configuration had already been synced to iCloud before I made my changes.
| gharchive/issue | 2024-08-27T00:47:39 | 2025-04-01T06:39:04.456112 | {
"authors": [
"imfuxiao",
"zhaoguibin"
],
"repo": "imfuxiao/Hamster",
"url": "https://github.com/imfuxiao/Hamster/issues/683",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2498735519 | update map coord math
the math used here worked for non-hw maps, but was noticeably off for hw maps. dalamud has logic to convert from raw to map coords that seems to be based on a game binary reference (https://github.com/goatcorp/Dalamud/blob/0fb7585973ffe22ab9353d853b0084a3f1c0803b/Dalamud/Utility/MapUtil.cs#L28). after some testing, we've confirmed that the dalamud calculation is accurate for all maps, including hw. so this change introduces the logic used by dalamud, to fix coords for hw maps, but especially the flags generated by hh for hw.
ah, don't publish yet ><. i did test this and it does work, but i just noticed it messes up the map ui. am about to make another pr to fix that ><
| gharchive/pull-request | 2024-08-31T08:48:11 | 2025-04-01T06:39:04.458066 | {
"authors": [
"dit-zy"
],
"repo": "img02/HuntHelper",
"url": "https://github.com/img02/HuntHelper/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
649005887 | examples incorrectly encode key/salt into hex
The example files at https://github.com/imgproxy/imgproxy/tree/master/examples show the key and salt encoded into hexadecimal before sending to the server. This is incorrect as it doesn't work in hex. They should be a normal string.
The server environment variable stores the key and salt in hexadecimal but the client does not.
Hey!
Sorry, I don't understand what's wrong with storing key/salt pair on the client side encoded to hex.
When I send the key/salt to the server as hex encoded, the server gives an invalid signature or forbidden response. However, when I send the key/salt to the server as an unencoded string it works fine.
Don't know if this is still an issue for you, but.
Taking the Go example, the key and the salt the way they are defined in the example ARE hex-encoded strings:
key := "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
salt := "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
They should be sent to the server just as they defined in the example:
IMGPROXY_KEY="943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
IMGPROXY_SALT="520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
Thus I don't see why you treat this as an error.
Don't know if this is still an issue for you, but.
Taking the Go example, the key and the salt the way they are defined in the example ARE hex-encoded strings:
key := "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
salt := "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
They should be sent to the server just as they defined in the example:
IMGPROXY_KEY="943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
IMGPROXY_SALT="520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
Thus I don't see why you treat this as an error.
Exactly, the key and salt should be sent to the server as they appear in the string.
However in the swift example it converts it to hex which is the issue.
let key = "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881".hexadecimal()!;
let salt = "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5".hexadecimal()!;
If you take out .hexadecimal()! the code will work.
Exactly, the key and salt should be sent to the server as they appear in the string.
However in the swift example it converts it to hex which is the issue.
let key = "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881".hexadecimal()!;
let salt = "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5".hexadecimal()!;
If you take out .hexadecimal()! the code will work.
I'm absolutely not familiar with Swift, but it looks like the hexadecimal function decodes the hex-encoded string, which is the correct approach.
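For anyone landing here later, a minimal Go sketch of the client-side flow being described: the hex strings are decoded to raw bytes first, and those bytes (not the hex text) are used as the HMAC key and salt. This illustrates the scheme discussed above and is not copied from the imgproxy examples; the path below is just a sample value.
package main
import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
	"fmt"
)
func main() {
	key := "943b421c9eb07c830af81030552c86009268de4e532ba2ee2eab8247c6da0881"
	salt := "520f986b998545b4785e0defbc4f3c1203f22de2374a3d53cb7a7fe9fea309c5"
	// Decode the hex strings into raw bytes before signing.
	keyBin, err := hex.DecodeString(key)
	if err != nil {
		panic(err)
	}
	saltBin, err := hex.DecodeString(salt)
	if err != nil {
		panic(err)
	}
	path := "/rs:fill:300:400:0/g:sm/aHR0cDovL2ltZy5leGFtcGxlLmNvbS9wcmV0dHkvaW1hZ2UuanBn.png"
	// HMAC-SHA256 over salt + path, keyed with the raw key bytes,
	// then base64 URL-safe encoding without padding.
	mac := hmac.New(sha256.New, keyBin)
	mac.Write(saltBin)
	mac.Write([]byte(path))
	signature := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	fmt.Printf("/%s%s\n", signature, path)
}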
| gharchive/issue | 2020-07-01T13:59:35 | 2025-04-01T06:39:04.474978 | {
"authors": [
"DarthSim",
"waroo"
],
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
758696002 | go-nanoid compatibility
Hi
At the moment, the router is not compatible with the new go-nanoid version:
https://github.com/matoous/go-nanoid/commit/68b72474732cdc7bd8198c02d4af9c94846000df
For this version, nanoid.Nanoid() should become nanoid.New().
https://github.com/imgproxy/imgproxy/blob/9e3a1c6c2a4085ae1038787e72420ec2e6bb30c3/router.go#L79
Just a heads up to be careful when updating this dependency. (It also broke installation with go get for me but that is easily bypassed)
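A tiny before/after sketch of the call-site change being described (the import alias is assumed here for illustration):
package main
import (
	"fmt"
	nanoid "github.com/matoous/go-nanoid"
)
func main() {
	// Before the linked commit: id, err := nanoid.Nanoid()
	// After it, the constructor is renamed:
	id, err := nanoid.New()
	if err != nil {
		panic(err)
	}
	fmt.Println(id)
}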
Hey!
Thanks for the warning!
It also broke installation with go get for me
Yeah, it turned out that go get completely ignores all the go mod stuff in most cases. That's why I rewrote the installation manual to use go build.
| gharchive/issue | 2020-12-07T17:19:31 | 2025-04-01T06:39:04.478202 | {
"authors": [
"DarthSim",
"toonvd"
],
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/525",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1241736938 | How to apply transformation to a signed imgproxy URL
Is it possible to apply transformations to a signed imgproxy url?
Say I have a url := http://imgproxy.example.com/oKfUtW34Dvo2BGQehJFR4Nr0_rIjOtdtzJ3QFsUcXH8/rs:fill:1080:1080:0/g:sm/aHR0cDovL2V4YW1w/bGUuY29tL2ltYWdl/cy9jdXJpb3NpdHku/anBn.png which has been signed after specifying env variables IMGPROXY_KEY and IMGPROXY_SALT, how might I go about changing the dimension to rs:fit:760:760:0/g:no?
OK thanks
| gharchive/issue | 2022-05-19T12:59:16 | 2025-04-01T06:39:04.479689 | {
"authors": [
"nwaughachukwuma"
],
"repo": "imgproxy/imgproxy",
"url": "https://github.com/imgproxy/imgproxy/issues/874",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2702480560 | 🛑 hanime is down
In cb780ba, hanime (https://hanime.onrender.com) was down:
HTTP code: 500
Response time: 1092 ms
Resolved: hanime is back up in 3c5eb7f after 18 minutes.
| gharchive/issue | 2024-11-28T15:12:27 | 2025-04-01T06:39:04.520479 | {
"authors": [
"immodded"
],
"repo": "immodded/up_time",
"url": "https://github.com/immodded/up_time/issues/70",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
836831264 | Add deprecated tags to API summaries
Update the extract-api summary to include potentially two release tags, as a single API can be in a deprecated state in addition to being alpha/beta/public (and technically internal, but that doesn't make much sense...). This change should help give a better overview of the state of our APIs.
UI Framework API change: I noticed that the WidgetState enum had its deprecation set up incorrectly. I switched it to the correct syntax; please let me know if that deprecated flag was mistakenly added.
All other API changes are a by-product of the change to include both deprecated and other tags.
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
@grigasp I could do it that way as well. I found this way easier to filter and graph in Excel rather than having it all on the same line. However, if it makes it easier to read when it's directly in csv I can switch it.
Not sure if it's possible, but wouldn't it be more convenient if there was one line per API, e.g. public;deprecated;MyApi?
@grigasp I could do it that way as well. I found this way easier to filter and graph in Excel rather than having it all on the same line. However, if it makes it easier to read when it's directly in csv I can switch it.
It's my personal preference to see them in one line, but I don't mind them being two lines if that's useful somewhere.
| gharchive/pull-request | 2021-03-20T15:07:41 | 2025-04-01T06:39:04.557167 | {
"authors": [
"calebmshafer",
"grigasp"
],
"repo": "imodeljs/imodeljs",
"url": "https://github.com/imodeljs/imodeljs/pull/1002",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
469745305 | AUR Package published
Hi,
FYI, I just published an AUR package for Arch Linux users: community/altair, since I've been using altair for some time now and really like it :+1:
It installs altair without the AppImage wrapper and uses a shared electron runtime, which reduces total install size.
Up to you if you want it in the README (I can also supply a PR) :)
Hey @JohnnyCrazy
Yeah do create a PR for that. It would be best if there was a way to automate updating the package whenever altair updates.
Theoretically possible, but it would require either setting up a CI script in your repo that acts as a maintainer of the package and publishes new versions automatically via git push to AUR, or an external service.
Will have a look at it. For now, I'm watching the releases and will update the package manually, which is also just a 15sec task.
| gharchive/issue | 2019-07-18T12:36:22 | 2025-04-01T06:39:04.559842 | {
"authors": [
"JohnnyCrazy",
"imolorhe"
],
"repo": "imolorhe/altair",
"url": "https://github.com/imolorhe/altair/issues/874",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
406094498 | Generate meeting summons
When creating a remote record, allow either a direct remote invite or a meeting summon to be generated.
This should adjust the form to collect only the required information. For summons, it should explain that they shouldn't be used when beneficiaries know each other's IDs.
The summon form should create a summon ID and provide a smn link to the user (impactasaurus/server#192)
Generic Questionnaire Links
| gharchive/issue | 2019-02-03T14:37:54 | 2025-04-01T06:39:04.561584 | {
"authors": [
"drimpact"
],
"repo": "impactasaurus/app",
"url": "https://github.com/impactasaurus/app/issues/442",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
301575707 | User directory dataloader
Need to dedupe the requests going to auth0. Use a dataloader
Assessment management
Closed in 141ae133e47aa6a44ea54d2dbb66f3a442774ed6
| gharchive/issue | 2018-03-01T22:00:51 | 2025-04-01T06:39:04.562945 | {
"authors": [
"drimpact"
],
"repo": "impactasaurus/server",
"url": "https://github.com/impactasaurus/server/issues/78",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2436302822 | Defusing comp
Об этом ПР'е:
Добавляет компонент, который позволяет разминировать взрывчатку с наследством от BasePlasticExplosive.
Почему / Баланс:
Изначально делал для ивента по CS, в процессе понял, что разминирование С4 и ей подобных вполне имеют право быть в обычной игре. Всегда можно дать игрокам дополнительное право на ошибку, особенно, если они успеют среагировать. Таймеры для разминирования по умолчанию:
Без инструментов: 8 секунд.
С инструментом: 4 секунды. (кусачки, челюсти жизни, etc.)
Технические детали:
DefusingSystem позволяет остановить остановить таймер взрывчатки и сбросить его. Спрайт тоже сбрасывается к состоянию до взвода.
В прототип BasePlasticExplosive добавлен Defusing для разминирования.
Разминирование взрывчатки логируется.
Добавил к коду визов пару строчек для нормального изменения спрайтов, повлиять ни на что не должно, комменты оставил.
Работоспособность сборки протестирована на локалке, ошибок в консоли нет.
Первый компонент от Казачка, поздравляю.
| gharchive/pull-request | 2024-07-29T20:36:23 | 2025-04-01T06:39:04.566829 | {
"authors": [
"KAZAK1984",
"Seven2280"
],
"repo": "imperial-space/space-station14-public",
"url": "https://github.com/imperial-space/space-station14-public/pull/353",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
384988782 | [Bug] Query: metadata endpoint /api/v1/label/<label_name>/values should return external labels
The metadata endpoint of Thanos Query /api/v1/label/<label_name>/values should return all values of the specified label name. (see Prometheus docs).
This works for all labels but not for the labels added by the external_labels of Thanos Store instances.
This is used, for example, by Grafana in label_values(<label_name>), which can be used to load label values into a query variable; see http://docs.grafana.org/features/datasources/prometheus/#query-variable.
Thanks, valid issue, PRs welcome (:
The actual work to be done is to glue on the external labels from blocks in the store gateway.
| gharchive/issue | 2018-11-27T21:15:02 | 2025-04-01T06:39:04.577535 | {
"authors": [
"FUSAKLA",
"bwplotka"
],
"repo": "improbable-eng/thanos",
"url": "https://github.com/improbable-eng/thanos/issues/649",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2021730896 | Strange behavior on outputs
I am trying to define outputs for a dashboard page. I won't include all details to make it complete, but the relevant section can be seen below (this compiles):
page AOVDashboardPage {
contains DashboardSideNav as group DashboardNav
output AOVGraph shows record AOVData briefly "As a line graph"
output AOVSummary shows record AggregatedAOV
output OrderDetails shows record OrderDataTable
input TimeRangeSelector takes TimeRangeSelection
input OrderMetricSelector takes OrderMetricSelection
}
I would expect that the third line could/should be written as:
output AOVGraph shows graph AOVData briefly "As a line graph"
Likewise, line 5 could/should be written as:
output OrderDetails shows table OrderDataTable
However, when I make these changes I get an error like the following:
[error] Error: application/pages/dashboards/orderDashboards.riddl(24:35):
Expected one of ("}" | "input" | "form" | "text" | "button" | "picklist" | "selector" | "menu" | "output" | "document" | "list" | "table" | "graph" | "animation" | "picture" | "contains" | "group" | "page" | "pane" | "dialog" | "popup" | "frame" | "column" | "window" | "section" | "tab" | "flow" | "//" | "described" | "explained" | "briefly" | "brief" | "{" | "is" | "are" | ":" | "="):
output OrderDetails shows table OrderDataTable
^
Context: (outputDefinitions | briefly | description | parse0$3 | parse0$1 | "}")
Entered in the wrong place.
| gharchive/issue | 2023-12-01T23:34:32 | 2025-04-01T06:39:04.583253 | {
"authors": [
"JamesYopp"
],
"repo": "improving-app/riddl",
"url": "https://github.com/improving-app/riddl/issues/65",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1612468191 | Remove UpdateStatus command from Tenant
Issue:
Currently in the Tenant entity, there is an UpdateStatus command. We want to remove this and in its place have commands that explicitly change the state of the entity.
Changes:
Remove UpdateStatus
Remove StatusUpdated
Add ActivateTenant
Add SuspendTenant
Add TenantActivated
Add TenantSuspended
Testing:
validated riddl
Notes:
I realized I named the branch wrong, it is supposed to be remove-update-status-from-tenant
Treating this branch as now unnecessary because of IA-128 which completely changes and overwrites this. Will close this branch and PR.
| gharchive/pull-request | 2023-03-07T00:31:19 | 2025-04-01T06:39:04.586248 | {
"authors": [
"miguelTaningcoYoppworks"
],
"repo": "improving-app/riddl",
"url": "https://github.com/improving-app/riddl/pull/42",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
444315850 | Fix build issue with RN0.59
I was getting the following errors from Gradle when using this lib on a clean RN0.59 project:
This gradle file resolves that; however, I know next to nothing about the gradle/java/android build process and have no idea if this affects backward compatibility or other things. So please keep that in mind before merging :)
#370 #370
| gharchive/pull-request | 2019-05-15T08:56:12 | 2025-04-01T06:39:04.595429 | {
"authors": [
"davidohayon669",
"mikaelengstrom"
],
"repo": "inProgress-team/react-native-youtube",
"url": "https://github.com/inProgress-team/react-native-youtube/pull/367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
154515021 | Create news screen
Add a screen to let the user add a news item
2.5 days
| gharchive/issue | 2016-05-12T15:40:44 | 2025-04-01T06:39:04.596204 | {
"authors": [
"agerace"
],
"repo": "inaka/Otec",
"url": "https://github.com/inaka/Otec/issues/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
368561607 | How to map melder in crbirding_observations
Our question:
Is melder the same as user_id? If so, how can we fill this in, given that you assign the user_id yourselves?
SOVON:
Melder in crbirding_observations corresponds to user_first_name and user_last_name in crbirding_users.
Done 1d74d8c5a1ccc684067349f6aa328a52bff1aa97
| gharchive/issue | 2018-10-10T09:01:00 | 2025-04-01T06:39:04.615858 | {
"authors": [
"damianooldoni"
],
"repo": "inbo/sovon",
"url": "https://github.com/inbo/sovon/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
63256368 | Add jshint and jslint files
Add jshint and jscs code style files
@incuna/js please review/merge
thumbsup
| gharchive/pull-request | 2015-03-20T16:13:07 | 2025-04-01T06:39:04.634280 | {
"authors": [
"Minglee01",
"grahamgilchrist"
],
"repo": "incuna/angular-external-link-interceptor",
"url": "https://github.com/incuna/angular-external-link-interceptor/pull/5",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
59414146 | Make Publish and Fetch task use the same client credentials as Material
The Material plugin picks up credentials as specified here - http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3Client.html#AmazonS3Client()
The Publish and Fetch tasks pick the credentials only from the environment variables.
+1
It would be simpler if the env vars for the key could be set in the s3material package repository dialog (or set it on a per-pipeline basis and use those env vars, like for pub and fetch).
Fixed in #53
| gharchive/issue | 2015-03-01T19:50:52 | 2025-04-01T06:39:04.649029 | {
"authors": [
"manojlds",
"mpgerlek"
],
"repo": "indix/gocd-s3-artifacts",
"url": "https://github.com/indix/gocd-s3-artifacts/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
187581031 | Improving the readme
Hi,
Interesting component 🙂
Here's a template for your readme: iOS-readme-template
Hi, not sure exactly which part you're referencing but I do have plans to add support for package managers. Going to close this out for now, feel free to re-open if you had specific suggestions in mind.
| gharchive/issue | 2016-11-06T19:00:42 | 2025-04-01T06:39:04.650493 | {
"authors": [
"TofPlay",
"indragiek"
],
"repo": "indragiek/CocoaMarkdown",
"url": "https://github.com/indragiek/CocoaMarkdown/issues/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2267323881 | [Bug]: connecting to redis:6379. Name or service not known
Is there an existing issue for the same bug?
[X] I have checked the existing issues.
Branch name
main
Commit ID
ae501c5
Other environment information
No response
Actual behavior
just upload and parse a pdf
Expected behavior
No response
Steps to reproduce
1. Just follow the quick start steps to launch a local server.
2. Upload a pdf file and parse it, that's it.
Additional information
I found that there is no redis service in docker-compose-base.yml
You could ignore the warning log.
And in the latest code and image I've already removed the default redis configuration, so the message will not show up again.
| gharchive/issue | 2024-04-28T03:52:33 | 2025-04-01T06:39:04.709905 | {
"authors": [
"KevinHuSh",
"t6am3"
],
"repo": "infiniflow/ragflow",
"url": "https://github.com/infiniflow/ragflow/issues/580",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1013583347 | ISPN-13338 SingleFileStore migration to single file is broken
https://issues.redhat.com/browse/ISPN-13338
Handle 12.1 segment files just like 12.0 segment files
Thanks @danberindei
| gharchive/pull-request | 2021-10-01T17:13:00 | 2025-04-01T06:39:04.714298 | {
"authors": [
"danberindei",
"ryanemerson"
],
"repo": "infinispan/infinispan",
"url": "https://github.com/infinispan/infinispan/pull/9568",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1798761116 | IPROTO-271 Deprecate ProtoDoc annotation + Remove sample-domain projects
https://issues.redhat.com/browse/IPROTO-271
https://issues.redhat.com/browse/IPROTO-272
Regarding removing the sample-domain projects: with #11118 and #11119 we're going to publish these within the Infinispan code base, lowering coupling and raising cohesion, since these projects are used mainly from the Infinispan code base itself.
Merged, thanks
Thank you Tristan
| gharchive/pull-request | 2023-07-11T11:50:41 | 2025-04-01T06:39:04.717073 | {
"authors": [
"fax4ever",
"tristantarrant"
],
"repo": "infinispan/protostream",
"url": "https://github.com/infinispan/protostream/pull/194",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
456529971 | ifc will replace alipay
ifc will replace alipay
If anyone has requirements for the core wallet or runs into problems while using it, please go to GitHub and open an issue:
https://github.com/withu2018/infinitecoin/issues
After review they will be triaged, and your requests will be added or the reported bugs fixed in the next version.
| gharchive/issue | 2019-06-15T11:47:44 | 2025-04-01T06:39:04.721343 | {
"authors": [
"eeeflying"
],
"repo": "infinitecoin/infinitecoin",
"url": "https://github.com/infinitecoin/infinitecoin/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
231713318 | Fix for wrong example files being generated.
Fixes #1066
The failure is caused by a missing dependency tempy. I will fix that next, but it's not part of this PR.
Also self-merging.
| gharchive/pull-request | 2017-05-26T19:18:21 | 2025-04-01T06:39:04.722378 | {
"authors": [
"skellock"
],
"repo": "infinitered/ignite",
"url": "https://github.com/infinitered/ignite/pull/1067",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1122869297 | Tooltip Support.
It would be great if a tooltips-on-tap feature were added, especially for line charts with bubbles on data points.
Hey,
If you are okay with just showing data at that point, then there is already a decoration for that. You will have to add chartBehaviour and implement onItemClicked. Then you have to add a SelectedItemDecoration with the item you got from onItemClicked. That will show the selected item in the chart.
Soon SelectedItemDecoration will also take a widget child that will be shown on click for more customisation.
Available in 2.0.0+2.
SelectedItemDecoration now takes a widget that you can show on an item when the item is clicked.
Hi @lukaknezic, since SelectedItemDecoration is deprecated now, can you show an example of implementing the same behavior using a WidgetDecoration?
Hey @pablojimpas. Sure, I will add that to the readme as well, but here you go:
So this is an example where you can also individually choose whether to change the chart item or the background/foreground:
class ChartTest extends StatefulWidget {
ChartTest({Key? key}) : super(key: key);
@override
State<ChartTest> createState() => _ChartTestState();
}
class _ChartTestState extends State<ChartTest> {
int? currentItem;
@override
Widget build(BuildContext context) {
return Chart<void>(
state: ChartState(
data: ChartData.fromList([5, 6, 4, 8, 2, 6].map((e) => ChartItem<void>(e.toDouble())).toList()),
/// 1) Here we change how the item will be shown. By default it will have 0.5 opacity. If we clicked on the item it will have be set as current item and it will have 1.0 opacity
itemOptions: BarItemOptions(barItemBuilder: (data) {
final isCurrent = data.itemIndex == currentItem;
return BarItem(
color: Colors.red.withOpacity(isCurrent ? 1 : 0.5),
);
}),
/// 2) Adding listener for clicking on the chart items, and setting that value to our current chart state
behaviour: ChartBehaviour(onItemClicked: (item) {
setState(() {
currentItem = item.itemIndex;
});
}),
backgroundDecorations: [
GridDecoration(
gridWidth: 1,
gridColor: Theme.of(context).dividerColor,
),
/// 3) If you want to change foreground/background of the selected item as well, then you should make your own `WidgetDecoration`.
/// This widget decoration will paint background of the item in gray if item is clicked. You can add info windows and similar things here.
WidgetDecoration(widgetDecorationBuilder: (context, chartState, itemWidth, verticalMultiplier) {
if (currentItem == null) {
return const SizedBox();
}
return Stack(
children: [
Positioned(
left: itemWidth * currentItem!,
top: 0,
bottom: 0,
width: itemWidth,
child: Container(
decoration: BoxDecoration(
color: Colors.grey.shade400.withOpacity(0.4),
),
),
),
],
);
}),
],
),
);
}
}
| gharchive/issue | 2022-02-03T09:52:10 | 2025-04-01T06:39:04.739512 | {
"authors": [
"lukaknezic",
"pablojimpas",
"srihariash999"
],
"repo": "infinum/flutter-charts",
"url": "https://github.com/infinum/flutter-charts/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1011887973 | influx-stress fails to connect to InfluxDB
[root@cdh6-pre2038 bin]# /root/go/bin/influx-stress insert -r 60s --strict --pps 800000 --host http://172.20.20.38:8086
Using point template: ctr,some=tag n=0i
Using batch size of 10000 line(s)
Spreading writes across 100000 series
Throttling output to ~800000 points/sec
Using 80 concurrent writer(s)
Running until ~18446744073709551615 points sent or until ~1m0s has elapsed
Failed to create database: Bad status code during Create(CREATE DATABASE stress): 401, body: {"code":"unauthorized","message":"unauthorized access"}
Aborting.
My InfluxDB version is 2.0.8 and it reports that authorization failed. How can I add a token?
@chengshiwen
@mingzizenmequ influx-stress does not support InfluxDB 2.x yet
@mingzizenmequ influx-stress does not support InfluxDB 2.x yet
Running until ~18446744073709551615 points sent or until ~1m0s has elapsed
Write Throughput: 3222635
Points Written: 196830000
The actual count queried from the database doesn't match...
select count(*) from ctr;
name: ctr
time count_n
0 101001940
The counts differ by a lot
Me
| gharchive/issue | 2021-09-30T08:39:14 | 2025-04-01T06:39:04.752016 | {
"authors": [
"chengshiwen",
"mingzizenmequ"
],
"repo": "influxdata/influx-stress",
"url": "https://github.com/influxdata/influx-stress/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
422807193 | InfluxDB write timeout/freeze
Hi.
Reproduction
I'm using InfluxDB v1.6 with Docker for Windows through WSL: my setup is mostly basic (a single "docker-compose.yml" file and a 15-line script to push data)
docker-compose up -d
version: '3'
services:
influxdb:
image: influxdb:1.6
volumes:
- vol_influx:/var/lib/influxdb
restart: always
ports:
- "8086:8086"
environment:
- INFLUXDB_DB=testdb
- INFLUXDB_HTTP_AUTH_ENABLED=false
- INFLUXDB_ADMIN_USER=root
- INFLUXDB_ADMIN_PASSWORD=*********
- INFLUXDB_USER=user
- INFLUXDB_USER_PASSWORD=********
- INFLUXDB_HTTP_MAX_BODY_SIZE=0
- INFLUXDB_COORDINATOR_WRITE_TIMEOUT=60s
- INFLUXDB_REPORTING_DISABLED=true
chronograf:
image: chronograf:1.6
restart: always
environment:
INFLUXDB_URL: http://influxdb:8086
ports:
- "8888:8888"
links:
- influxdb
volumes:
vol_influx:
Then, i push a small batch with Node Influx library (https://github.com/node-influx/node-influx), with sparse timestamp :
const now = Date.now();
const data = [...new Array(20)].map((_, i) => ({
measurement: 'something',
timestamp: now + i * 10000, // (add 0s to timestamp, then 10s, then 20s, ...)
tags: { information: 'ok' },
fields: {
speed: 1,
},
}));
nodeInflux.write(data).then(console.log).catch(console.error);
Response -> "Internal Server Error"
Docker InfluxDB logs :
influxdb_1_5a52b0206eaa | [httpd] 192.168.16.1 - root [19/Mar/2019:15:40:49 +0000] "POST /write?db=testdb&p=%5BREDACTED%5D&precision=n&rp=&u=root HTTP/1.1" 500 20 "-" "-" 5f0d7c4d-4a5d-11e9-8023-000000000000 10000707
influxdb_1_5a52b0206eaa | ts=2019-03-19T15:40:59.865133Z lvl=error msg="[500] - \"timeout\"" log_id=0EHe_brl000 service=httpd
I had to use nanosecond timestamps (otherwise, passing "{ precision: 'ms' }" as the second argument of the write function works).
Probably a Node Influx thing ;) .
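For anyone hitting the same thing, here is a sketch of the two options with the plain node-influx client; the wrapper used above may forward options differently, and the host/database names are placeholders:
const Influx = require('influx');
const influx = new Influx.InfluxDB({ host: 'localhost', database: 'testdb' });
const now = Date.now();
const points = [...new Array(20)].map((_, i) => ({
  measurement: 'something',
  tags: { information: 'ok' },
  fields: { speed: 1 },
  timestamp: now + i * 10000, // milliseconds
}));
// Option 1: tell the client that the timestamps are in milliseconds.
influx.writePoints(points, { precision: 'ms' }).catch(console.error);
// Option 2: keep the default nanosecond precision and convert the timestamps
// to nanoseconds yourself (mind JavaScript number precision for large values).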
Your httpd log line shows a data size of 10000707 which seems like the same problem as:
This seems related to https://github.com/influxdata/influxdb/issues/13342
@wkonkel is that value not the time the request took?
We're encountering similar things, and for a request with a similar value, we found it to be 850 KB in size, not 10 MB. Also, the value has steadily followed our timeout values as we've been tweaking those. So I believe the value is in microseconds, so that 10.000.707 is 10 seconds (the default timeout on Influx).
| gharchive/issue | 2019-03-19T15:44:10 | 2025-04-01T06:39:04.764166 | {
"authors": [
"LordMike",
"kMeillet",
"wkonkel"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/issues/12731",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
103615622 | fix #2555: add backreference in CQs
Add new query syntax to allow the following in CQs:
INTO "1hPolicy".:MEASUREMENT FROM /.*/
In this example, the regular expression in the FROM clause would expand to all measurements and the points selected from each of those measurements would be written into the "1hPolicy" retention policy with the same measurement names.
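For illustration, a complete continuous query using this backreference could look like the sketch below; the database, retention policy, and field names are placeholders rather than anything prescribed by this PR:
CREATE CONTINUOUS QUERY "downsample_all" ON "mydb"
BEGIN
  SELECT mean("value") INTO "1hPolicy".:MEASUREMENT FROM /.*/ GROUP BY time(1h), *
END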
Looks like Circle blew up due to memory pressure. I've seen this before, and it can happen if the code allocates huge buffers by mistake, perhaps in a loop.
Can you add a test to ensure that GROUP BY * works with this as well to bring over the tag data? Would actually like to see more integration level tests if possible.
@dgnorton -- did you work out what caused Circle to fail due to memory pressure?
@otoolep I added a Parent Node field to type Measurement, which caused the json.Marshal call in the tests to go into an infinite loop.
@pauldix added integration tests but issue #3968 prevents the GROUP BY * test from passing, so I've disabled that test until the issue is fixed.
This might be beyond the scope of this PR, but how do we handle:
A long running CQ or potentially a hung CQ?
Can the same CQ run again if the previous one is still running?
Looks good. +1
@corylanou
Long running - wait for it
Hung - not sure. Maybe ExecuteQuery times out?
I don't think it can run again because the service waits for results.
I mocked enough of the interfaces in the CQ service tests that it should be relatively easy to add tests for those things but I don't think they're currently tested.
I want to add a suffix to measurement names, maybe like this:
INTO "1hPolicy".'$1_sss' FROM /.*/
Is that supported? If it is not supported, how can I add a suffix to all measurement names?
| gharchive/pull-request | 2015-08-27T23:29:02 | 2025-04-01T06:39:04.769827 | {
"authors": [
"corylanou",
"dgnorton",
"otoolep",
"pauldix",
"shanquanqiang"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/pull/3876",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
153583918 | Added distinct improvement as a bugfix
Required for all non-trivial PRs
[x] Rebased/mergable
[x] Tests pass
[x] CHANGELOG.md updated
[x] Sign CLA (if not already signed)
I'm looking for this one in 0.13 so I thought others may wish to see it listed in the CHANGELOG.
Can you close this PR and open it again against the 0.13 branch? (I really wish GitHub allowed repointing PR's...)
Also, if you can include "Updating CHANGELOG with ..." instead so that the commit message makes more sense, that would also be great. Thanks.
Trying again here: https://github.com/influxdata/influxdb/pull/6577
| gharchive/pull-request | 2016-05-07T09:04:11 | 2025-04-01T06:39:04.773672 | {
"authors": [
"MikeSchroll",
"jsternberg"
],
"repo": "influxdata/influxdb",
"url": "https://github.com/influxdata/influxdb/pull/6574",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
302486403 | Deadlock in start/stop of alert node
The alert node can cause a deadlock if task it is defined in is stopped before the task has registered a delete hook for the task.
In particular deadlock occurs when the sequence of events is start task -> encounter error -> begin stop task procedure -> attempt to register delete hook
the deadlock does not occur when the events are start task -> register delete hook -> encounter error -> stop task procedure
See StopTask and registerDeleteHook.
The effect of this issue is that if a user manages to hit the race condition, the process deadlocks and will not start, effectively rendering the Kapacitor service unusable.
The code that causes the race condition has existed for a year and we have only run into it once (that we know of). This indicates that the frequency of the race is quite low.
The workaround is to start the Kapacitor process on a faster system so as to avoid the race, and then edit the task that is causing the error. In the case of the one instance we saw, the race was not reproducible on a laptop but it was reproducible on a cloud instance.
@nathanielc this deadlock happened to me a few times already (always when I update tasks with kapacitor define on a running task)
At first I thought it was just slow; after a few days I realized the task was never running after I updated it.
I'll see if I can find more clues next time it happens.
| gharchive/issue | 2018-03-05T22:17:15 | 2025-04-01T06:39:04.778767 | {
"authors": [
"Ehekatl",
"desa",
"nathanielc"
],
"repo": "influxdata/kapacitor",
"url": "https://github.com/influxdata/kapacitor/issues/1829",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |