id (string) | text (string) | source (string, 2 values) | created (timestamp) | added (timestamp) | metadata (dict)
---|---|---|---|---|---
1465413286
|
Feat/upload-component
What was done? 📝
Created the UploadPhoto component
Improved the RadioButton component by adding a label prop
Improved the Checkbox component to use the same color (todo: add variants)
Screenshots or GIFs 📸
Type of change 🏗
[X] New feature (non-breaking change that adds functionality)
[ ] Bug fix (non-breaking change that fixes an issue)
[ ] Refactor (non-breaking change that improves the code or addresses technical debt)
[ ] Chore (none of the above, such as a library upgrade)
[ ] Breaking change 🚨
Checklist 🧐
[X] Tested on iOS
[ ] Tested on Android
@fabinppk @luancurti @mdlucas
this PR ended up being 2-in-1, together with #102
I tried to give you a hand with the check, but it didn't work out.
If the PR stays pending I can take a closer look at it over the course of the week.
|
gharchive/pull-request
| 2022-11-27T11:39:19 |
2025-04-01T06:40:01.397117
|
{
"authors": [
"ammichael",
"fabinppkbuilders"
],
"repo": "platformbuilders/fluid-react-native",
"url": "https://github.com/platformbuilders/fluid-react-native/pull/101",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1970817396
|
Temper when full demo runthrough is executed on PRs
Currently: runs on each push to a PR targeting main
Suggestions:
when label is added
only on this repo id (no forks)
only delete project if previous steps succeed?
Add a comment on which label needs to be added to a PR to run tests
https://github.com/marketplace/actions/comment-pull-request
|
gharchive/issue
| 2023-10-31T16:12:37 |
2025-04-01T06:40:01.403575
|
{
"authors": [
"chadwcarlson"
],
"repo": "platformsh/demo-project",
"url": "https://github.com/platformsh/demo-project/issues/61",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2112856183
|
Fix Find() when / is used as input
This fixes issues with Platformifiers that were looking for relative files, like Laravel and Django
Fix #199
The only other challenge I see is that the message about adding the composer dependency is scrolled out of view quickly due to the size of our congrats message:
Normal size terminal window:
Expanded to be able to see it:
Good point, let's open a separate issue to discuss this.
message size moved to #204
|
gharchive/pull-request
| 2024-02-01T15:57:40 |
2025-04-01T06:40:01.406613
|
{
"authors": [
"akalipetis",
"gilzow"
],
"repo": "platformsh/platformify",
"url": "https://github.com/platformsh/platformify/pull/202",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1822354672
|
Module update
I decided to continue developing this lib, since I also had to play around with SmsHub.
Changes:
Packaged it as an installable module
Translated all docstrings into English (perfectionism) and converted countries to ISO 3166 alpha-2 codes
Added proxy support
Implemented an asynchronous approach
Created activation objects so they're more convenient to work with, without passing extra numbers around
Context managers, etc.
I think we could publish the lib to PyPI just for fun. I personally have no experience with that, and it's a great chance to get some.
The root repository is yours, so you can do it yourself; if not, I'll do it.
Thanks, I'll take a look a bit later
Tell me, why did you change https to http? In #1 the documentation was referenced, which says that https must be used
From the documentation:
All requests (POST and GET are supported) must go to https://smshub.org/stubs/handler_api.php
The code uses https
No error occurs
Https is safer
Why did the error occur before?
I dug into it a bit more. It seems there is some confusion with DNS.
When using the domain without www, and with plain (non-browser) requests, the carrier blocks the connection, both http and https (in the https case the connection simply isn't established, because the handshake cannot be completed)
Through a VPN in the Netherlands it works fine. Possibly because different DNS servers are used there, and they resolve to other addresses.
Nevertheless, by trial and error it turned out that the www domain works even without a VPN, from both requests and httpx in my case.
By the way, requests has been replaced with httpx, a higher-quality and more capable library that I use everywhere after hitting a memory leak with requests.
Thanks!
We'll need to think about how to await several SMS messages at the same time
|
gharchive/pull-request
| 2023-07-26T12:51:05 |
2025-04-01T06:40:01.417432
|
{
"authors": [
"Yarosvet",
"platon-p"
],
"repo": "platon-p/smshub_py",
"url": "https://github.com/platon-p/smshub_py/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
213989068
|
Upgrade Play! version from 2.4.10 to 2.4.11
No changes other than version numbers from the Maven plugin's or the test projects' point of view, but updating anyway.
New 1.0.0-beta7-SNAPSHOT snapshot deployed, documentation updated.
|
gharchive/issue
| 2017-03-14T07:31:58 |
2025-04-01T06:40:01.419969
|
{
"authors": [
"gslowikowski"
],
"repo": "play2-maven-plugin/play2-maven-plugin",
"url": "https://github.com/play2-maven-plugin/play2-maven-plugin/issues/123",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
369434089
|
skyboxLayer enabled unexpectedly when changing skyboxIntensity
For bug reports, include:
Description
With the new layer system, we can hide skybox easily by the following code:
pc.app.scene.layers.getLayerById(2).enabled = false;
But when changing skyboxIntensity, the skyboxLayer will be enabled again. This is annoying.
The skybox layer should not become visible when changing skyboxIntensity.
Steps to Reproduce
pc.app.scene.layers.getLayerById(2).enabled = false
pc.app.scene.skyboxIntensity = 2
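A minimal workaround sketch (editor's illustration, mirroring the reproduction snippet above and assuming the same pc.app globals): re-disable the layer right after changing the intensity.
```js
// Workaround sketch: changing skyboxIntensity re-enables the skybox layer,
// so turn it back off immediately afterwards.
const skyboxLayer = pc.app.scene.layers.getLayerById(2); // 2 == skybox layer, as in the report
pc.app.scene.skyboxIntensity = 2;
skyboxLayer.enabled = false;
```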
You guys fixed this issue one year later 😂
|
gharchive/issue
| 2018-10-12T07:38:02 |
2025-04-01T06:40:01.423179
|
{
"authors": [
"scarletsky"
],
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/issues/1401",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
738086189
|
Feature request: black or at least dark light within light cookies
Can you develop the option of black or at least dark light within light cookies? ... I want to make a tool for Tattoo artists, where edges need to be really black.
Not possible right now.
cf my project: https://playcanvas.com/editor/scene/968488
Sounds like you want to project a texture onto a model.
These two demos may help in that regard
https://github.com/playcanvas/playcanvas.github.io/blob/master/graphics/painter.html
https://developer.playcanvas.com/en/tutorials/character-damage-demo
Can you develop the option of black or at least dark light within light cookies? ... I want to make a tool for Tattoo artist, where edges need to be really black.
Not possible right now.
This would be also very useful to project blob shadows.
@yaustar: in case you are referring to https://playcanvas.github.io/#graphics/painter.html, it will not help. I have a model character which receives the texture in a bad way (what is projected on the back is also projected on the front - a bit like a mirror effect, except that the mapping is off {ps: UV unwrapping is not always a walk in the park - many of the Blender unwrap methods I tried do not, for example, transfer to PlayCanvas at FBX export})
the painter example paints on each of the 6 sides as well :-/
I am more interested in the damage example, where the UV mapping seems to be perfect already (there is no mirrored effect, as all seams and islands on the black/white rendertexture to the left are mapped correctly). Here I want to project a ready-made texture onto the model - kind of the reverse of the present 'damage' infliction. Here is my project with a naïve approach to the problem (blue tattoo used as decal texture): https://playcanvas.com/editor/scene/1028837
BTW @yaustar: I have found a compromise solution for the mapping problem ... you don't have to pursue this (just in case ... but thx again)
Having summed up the different methods to project an image onto a character (four methods in total), I hereby return to this issue, as each of them seems to have its own challenge. Even if the developers think it is possible to use the two most obvious ones (texture and rendertexture), they both have inherent flaws in the shape of 'seams' and 'distorted stretches' that prevent me from making fluent dynamic movement of an image across the character without a lot of stretching: https://playcanv.as/b/9tQ1i4Cq/ (forked from a Leonidas tutorial) + https://playcanv.as/b/FiSGfBFX/ (note the stretch on the back of the upper left arm). These dynamic stretches are (close to) never a problem when making UV mapping for games, as the textures are static in such cases. But here the texture is dynamic, and the inherent original mapping structure (from when it was developed in Blender etc.) reveals itself ... conclusion: although 'black light' is not a natural physical category, having it as a light cookie option would be very useful, as it can bypass the mapping issues altogether.
Although 'black light' is not a natural physical category, having it as a light cookie option would be very useful, as it can bypass the mapping issues altogether.
Even if this is possible, you are going to have the same problem as you are not projecting onto a flat surface.
I am already ahead of that problem, as I am using a 'rolling effect' that changes the UV-tiling of the material as a function of the camera-to-bodypart position.
Decals?
A) As an option I can go back to the https://developer.playcanvas.com/en/tutorials/character-damage-demo (which includes decals) option, but so far I have seen this example as being tailor-made for special game situations (and thus very relevant for most PlayCanvas developers).
B) As a parallel I made this post in the forum: https://forum.playcanvas.com/t/shooting-an-image-on-to-a-surface/15755 yesterday. From there I pursued https://playcanvas.com/project/704805/overview/paint-3d-test (also decals)
From both A) and especially B) I seem to be stuck at this line [from B)]:
this.material.setParameter("paintColor",new pc.Vec3(1.0,1.0,1.0).data);
Is there a material.setParameter method/approach for 'painting' with an image?
(please note that this issue has now made me post this parallel https://github.com/playcanvas/engine/issues/2556 - it might help my/an overall goal of a wider option pool and/or a better rendering pipeline ... all in all, so that at least one of the above-mentioned four approaches [texture, rendertexture, decals and light cookie] improves :-) )
I've converted this discussion into a request to implement a decal system:
https://github.com/playcanvas/engine/issues/4053
|
gharchive/issue
| 2020-11-06T22:47:31 |
2025-04-01T06:40:01.435528
|
{
"authors": [
"FutureFireplace",
"Maksims",
"dexterdeluxe88",
"mvaligursky",
"yaustar"
],
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/issues/2538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1939426213
|
Allow the AppBase to be cleanly destroyed even when not initialized
avoiding undefined access
Closing as no go. I was hoping to avoid construction / destruction without a device, but it's more complicated than expected and not worth it.
Can't we just fix this error?
Basically only what you added already:
const canvasId = this.graphicsDevice?.canvas?.id;
if (canvasId !== undefined) {
AppBase._applications[canvasId] = null;
}
Fair enough, I'll do that.
new PR.
|
gharchive/pull-request
| 2023-10-12T07:57:22 |
2025-04-01T06:40:01.438246
|
{
"authors": [
"kungfooman",
"mvaligursky"
],
"repo": "playcanvas/engine",
"url": "https://github.com/playcanvas/engine/pull/5745",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1973236649
|
Singular workflow for CI
Change Type (required)
Indicate the type of change your pull request is:
[ ] patch
[ ] minor
[ ] major
/canary
|
gharchive/pull-request
| 2023-11-01T23:07:51 |
2025-04-01T06:40:01.440279
|
{
"authors": [
"brocollie08",
"sugarmanz"
],
"repo": "player-ui/player",
"url": "https://github.com/player-ui/player/pull/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
626321145
|
Uploading snapshots to bintray fails
/home/play/logs/nightly-deploy-master-1590634801.log
[error] java.lang.RuntimeException: error uploading to https://api.bintray.com/maven/playframework/snapshots/snapshots/com/typesafe/play/twirl-compiler_2.12/1.5.0-2020-05-27-785c3ce-SNAPSHOT/twirl-compiler_2.12-1.5.0-2020-05-27-785c3ce-SNAPSHOT.pom: {"message":"Snapshot files cannot be uploaded to OSS repositories.
Looks like we're not fully using sbt-dynver here yet?
This is every night
Fixed in the private Play build server repository.
|
gharchive/issue
| 2020-05-28T08:30:01 |
2025-04-01T06:40:01.460827
|
{
"authors": [
"chbatey",
"ennru",
"raboof"
],
"repo": "playframework/twirl",
"url": "https://github.com/playframework/twirl/issues/342",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2382504099
|
🛑 Lindas Kingdoms is down
In e004615, Lindas Kingdoms (https://lindas.admin.ch/query?query=PREFIX rdf%3A <http%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23>
SELECT DISTINCT %3Fk (SAMPLE(%3Fs1) as %3Fs)%0AWHERE%20%7B%0A%20%20%3Fs1%20a%20%3Chttp%3A%2F%2Ffilteredpush.org%2Fontologies%2Foa%2FdwcFP%23TaxonName%3E%3B%0A%20%20%3Chttp%3A%2F%2Frs.tdwg.org%2Fdwc%2Fterms%2Fkingdom%3E%20%3Fk.%0A%7D%0AGROUP%20BY%20%3Fk) was down:
HTTP code: 500
Response time: 1755 ms
Resolved: Lindas Kingdoms is back up in 7010a89 after 42 minutes.
|
gharchive/issue
| 2024-06-30T23:47:55 |
2025-04-01T06:40:01.479455
|
{
"authors": [
"retog"
],
"repo": "plazi/monitoring",
"url": "https://github.com/plazi/monitoring/issues/1140",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
273236563
|
Can middleware access controller instance?
I'd like to create a middleware that has access to the controller instance. Is that possible?
What is your use case for this and why it cannot be achieved by modifying the request object?
Let's say I've got a Controller similar to:
@JsonController()
class XXX {
  instanceOfSomething = null;

  @Get('/whatever')
  public async getWhatever() {
    return this.instanceOfSomething.getWhatever();
  }
}
Let's say that I want to add a middleware that checks whether instanceOfSomething was instantiated and, if not, returns an error.
Of course this is useful because I have a lot of methods that use instanceOfSomething and depend on it being "loaded".
Also, being able to use a method of the controller as a middleware could be helpful.
What you're trying to do is a strange approach. Use dependency injection instead, and if instanceOfSomething needs some time to fully start up after being created, then just defer all calls on it (e.g. return a response that is resolved once instanceOfSomething has finished setting up and completed your call).
Yes for that scenario as I advised you should keep track of the internal state of the service inside the service not in a middleware. You can do something like:
class MyService {
  private ready: Promise<void>;

  constructor() {
    this.ready = this.setupStuff();
  }

  public async anyMethod(): Promise<any> {
    await this.ready;
    // do your stuff here
  }
}
I already keep track of the readiness in the dependency.
I just want to hook up a middleware, within that middleware do
if (!this.dependency.isReady()) {
return { success: false, message: 'App is not ready' }
}
Isn't this the whole point of having a middleware? To perform "guards" over real code?
If middleware currently cannot be coupled to controller class instances, that's unfortunately a showstopper for me.
Isn't this the whole point of having a middleware? To perform "guards" over real code?
Yes it is, but why do you want to return an error response when your client can wait until your app is ready (I assume your app doesn't need a multi-minute setup time)? You just resolve the returned promise when the app has been set up and the request can be processed.
Btw you can inject the service into your middleware if you really want to and check for its readiness in the middleware; however, as I already said, this seems like a wrong design decision. The service itself should keep track of its state, and if you decide to send an error to requests which come in before it is ready, then the service should create the error, not the middleware.
(I assume your app doesn't need multi minutes setup time.)
Can take hours. Don't ask :)
To me it's cleaner to have the middleware check the dependencies' readiness and keep the Controller methods free of checking the dependencies' readiness for each route.
If I had to do that I'd need to duplicate the readiness (and response handling) code for every route. This looks like an antipattern to me.
BTW I don't want to inject the dependency into the middleware. I think I might create a decorator and decorate each method to perform such validation. Still duplicated code, but a decorator looks better than code duplication to me.
Feel free to share your thoughts.
Can take hours. Don't ask :)
Wow, that is a long time! So then why don't you solve this at the level of load balancing? Just don't route any traffic to the app until it's ready.
To me it's more clean to have the middleware check dependencies readiness and keep the Controller methods clean of checking the dependencies readiness for each route.
Your controllers shouldn't check for readiness, the services itself should check it.
BTW I don't want to inject the dependency to the middleware. I think i might create a decorator and decorate each method to perform such validation. Still duplicated code but a decorator looks better than code-duplication to me.
Yes, you can do that too, but if you do it via a vanilla decorator then you won't have access to routing-controller itself.
Feel free to share your thoughts.
I am still thinking you should handle this in your service (the way you want, returning a promise for deferred handling or throwing an error.) By trying to check the service readiness outside of the service you break the encapsulation of logic. Your middleware doesn't need to throw, your service needs to throw when it's not ready.
I am still thinking you should handle this in your service (the way you want, returning a promise for deferred handling or throwing an error.) By trying to check the service readiness outside of the service you break the encapsulation of logic. Your middleware doesn't need to throw, your service needs to throw when it's not ready.
Oh now I understand what you mean. Yes you're right. But right now the service code, which was not written by me, works that way so I need to adapt until I can fix that.
BTW I would really consider exposing the controller instance to middlewares as this would open up many other options.
Yes, you can do that too, but if you do it via a vanilla decorator then you won't have access to routing-controller itself.
What do you mean exactly? If I apply my own decorator after the ones from routing-controllers, then routing-controllers will call my middleware before the actual route implementation (which will need to be called by my middleware).
Isn't this the whole point of having a middleware? To perform "guards" over real code?
A given middleware shouldn't know anything about other middlewares in the middleware chain; they all should be "self-contained". A controller (the whole routing-controllers actually) acts as one big middleware. So accessing the controller instance (part of one middleware) in another middleware would break the middleware separation principle. The only thing (in the Express example) that travels between middlewares is the request (and response) object itself.
Can you not just share an instance of your service between the controller and a middleware via a DI container (not a best practice, but something to start with)?
say, something like (pseudo-code, not tested in any way):
// Hypothetical imports, assuming routing-controllers together with the typedi container.
import { ExpressMiddlewareInterface, Get, JsonController, UseBefore } from 'routing-controllers';
import { Inject } from 'typedi';

export class Middleware implements ExpressMiddlewareInterface {
  @Inject('my-service')
  protected service;

  use(request: any, response: any, next?: (err?: any) => any): any {
    // Guard: only let the request through once the shared service reports it is ready.
    if (!this.service.isInitialized()) {
      throw 'service not ready';
    }
    next();
  }
}

@JsonController() // assumed, so the route below is actually registered
class Controller {
  @Inject('my-service')
  protected service;

  @Get('/')
  @UseBefore(Middleware)
  anAction() {
    // we want the request to reach the action only when the service is "ready"
  }
}
@sh3d2 one issue is that the service being injected can change for each request (i.e. depends on currentUser). How would you handle that in routing-controller?
|
gharchive/issue
| 2017-11-12T14:57:21 |
2025-04-01T06:40:01.508212
|
{
"authors": [
"NoNameProvided",
"sh3d2",
"tonyxiao",
"vekexasia"
],
"repo": "pleerock/routing-controllers",
"url": "https://github.com/pleerock/routing-controllers/issues/327",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
893893537
|
Update perfolation to 1.2.8
Updates com.outr:perfolation from 1.1.7 to 1.2.8.
GitHub Release Notes - Version Diff
I'll automatically update this PR to resolve conflicts as long as you don't change it yourself.
If you'd like to skip this version, you can just close this PR. If you have any feedback, just mention me in the comments below.
Configure Scala Steward for your repository with a .scala-steward.conf file.
Have a fantastic day writing Scala!
Ignore future updates
Add this to your .scala-steward.conf file to ignore future updates of this dependency:
updates.ignore = [ { groupId = "com.outr", artifactId = "perfolation" } ]
labels: library-update, semver-minor
Superseded by #197.
|
gharchive/pull-request
| 2021-05-18T02:48:31 |
2025-04-01T06:40:01.600484
|
{
"authors": [
"scala-steward"
],
"repo": "plokhotnyuk/fast-string-interpolator",
"url": "https://github.com/plokhotnyuk/fast-string-interpolator/pull/164",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1599937532
|
Improvements to dev proxy
Prefer RAZZLE_INTERNAL_API_PATH over RAZZLE_API_PATH for devProxyToApiPath. This makes more sense (the proxy target is accessed from the Volto server side) and it makes the dev proxy work without additional configuration when running Volto in a Docker container, where RAZZLE_API_INTERNAL_API_PATH is already set correctly (a small sketch of this preference order follows below).
Keep the URL protocol from the API path in the virtual hosting path passed to the backend, instead of hardcoding http. This lets the backend generate correct URLs when the backend is served over https.
Use a new environment variable, RAZZLE_DEV_PROXY_INSECURE, to control whether the proxy checks certificates on the backend.
Always log where API paths are proxied on startup, not only in development mode.
In the APIResourceWithAuth helper, always use devProxyToApiPath, not only in development mode. It's a potentially confusing inconsistency.
With these changes, it's possible to run Volto with a remote backend served over https. For example:
RAZZLE_INTERNAL_API_PATH=https://demo.plone.org RAZZLE_PROXY_REWRITE_TARGET=/++api++ RAZZLE_DEV_PROXY_INSECURE=1 yarn start
Note: I still need to check if there are any implications for the docs.
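For illustration only, a rough sketch of the preference order from the first bullet above (hypothetical snippet, not Volto's actual config code; the default value is made up):
```js
// Editor's sketch of the described env-var preference for devProxyToApiPath.
const devProxyToApiPath =
  process.env.RAZZLE_INTERNAL_API_PATH || // preferred: the proxy target is reached from the Volto server side
  process.env.RAZZLE_API_PATH ||          // fallback
  'http://localhost:8080/Plone';          // hypothetical default
```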
@davisagli seamless traefik docker config:
routers:
  frontend:
    rule: "Host(`localhost`)"
    service: frontend
  backend:
    rule: "Host(`localhost`) && PathPrefix(`/++api++`)"
    service: backend
    middlewares:
      - backend
middlewares:
  backend:
    replacePathRegex:
      regex: "^/\\+\\+api\\+\\+($|/.*)"
      replacement: "/VirtualHostBase/http/localhost/plone/++api++/VirtualHostRoot$1"
services:
  frontend:
    loadBalancer:
      servers:
        - url: "http://host.docker.internal:3000"
  backend:
    loadBalancer:
      servers:
        - url: "http://host.docker.internal:55001"
has localhost on it, could be the problem?
works like a charm... We should document it properly.
|
gharchive/pull-request
| 2023-02-26T05:33:22 |
2025-04-01T06:40:01.654807
|
{
"authors": [
"davisagli",
"sneridagh"
],
"repo": "plone/volto",
"url": "https://github.com/plone/volto/pull/4434",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
276458043
|
Use electron-builder and webpack@3
@chriddyp Could you review this PR?
I haven't gone as far as I originally planned with this work, but I'm going to open a PR in the main repo, so that people can test the installers generated with electron-builder.
A brief overview of what this PR does:
now falcon uses a patched version of ibm_db from my repo with @tarzzz 's fix (this is necessary for electron-builder to work)
I've upgraded dependencies if no code change was required (e.g. now we are using webpack@3 and mysql2)
I've removed unused dependencies, files and folders.
when possible I've moved modules from dependencies to devDependencies
Now the procedure to generate installers is:
$ yarn install
$ yarn build
$ yarn run pack
(Beware that yarn pack and yarn run pack do different things)
The procedure to build the app locally and test is still the same:
$ yarn install
$ yarn run rebuild:modules:electron
$ yarn build
$ yarn start
this looks great! 💃
|
gharchive/pull-request
| 2017-11-23T19:13:29 |
2025-04-01T06:40:01.682153
|
{
"authors": [
"chriddyp",
"n-riesco"
],
"repo": "plotly/falcon-sql-client",
"url": "https://github.com/plotly/falcon-sql-client/pull/277",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
376886429
|
K8s resources adjustment
part of plotly/streambed#11766
as per below I've miss-understood the CPU metric used by the auto-scale
It is based of the cpu requests in our case it is very low at 100m and the percentage set is 90% so that mean at 90m we start scaling and by that it start spinning up new node. As per below:
kubectl describe hpa
Normal SuccessfulRescale 38m (x591 over 36d) horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 14m (x499 over 36d) horizontal-pod-autoscaler New size: 5; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 10m (x98 over 36d) horizontal-pod-autoscaler New size: 10; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 6m22s (x29 over 36d) horizontal-pod-autoscaler New size: 14; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 20s (x1548 over 36d) horizontal-pod-autoscaler New size: 3; reason: All metrics below target
So in our case we barely use 1400m; by increasing the requests to 400m with a CPU target of 150% (= 600m per pod, * 3 replicas = 1800m total) we get more than what we need. Worst case, the autoscaler will kick in.
@scjody ready for review
@mag009 I don't understand your explanation. Can you provide more details on what's happening now and how it needs to change?
It looks like the old target CPU utilization was 30% - where does 90% (in your explanation) come from? I also don't understand why we need to increase the CPU requests from 100m to 400m. Doesn't that number just affect scheduling, in other words what other pods can coexist on the same node as an imageserver pod?
Finally have you done any load testing of the new values to make sure the cluster still scales up when needed?
@mag009 I don't understand your explanation. Can you provide more details on what's happening now and how it needs to change?
From the metrics above, we can see how often we scaled. In our case the autoscaler adds/removes a node each time, due to the node affinity that prevents two imageserver pods from running on the same node.
For example: we scaled to 14 replicas 29 times during the last 36 days, which represents 90m used out of 940m per node (a waste of resources).
The goal is to maximize the utilization of our resources and save money.
It looks like the old target CPU utilization was 30% - where does 90% (in your explanation) come from? I also don't understand why we need to increase the CPU requests from 100m to 400m. Doesn't that number just affect scheduling, in other words what other pods can coexist on the same node as an imageserver pod?
I manually changed the value from 30% -> 90%; it was constantly adding/removing nodes and I saw the bill was increasing (should have been a PR).
Correct, the CPU requests are for scheduling only. We could leave it at 100m and set the TargetCPU to 600% = 60% utilization per node.
Finally have you done any load testing of the new values to make sure the cluster still scales up when needed?
I've tested on stage by generating load using ab -n 10000 -c 20 -p 0.json http://10.128.0.17:9091/ with a single file in parallel and with randomly sized images to simulate regular traffic.
As we can see, the results are much more efficient:
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 457% (457m) / 600%
Min replicas: 3
Max replicas: 6
Deployment pods: 3 current / 3 desired
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
imageserver-588b9fcd55-fx4zw 471m 2216Mi
imageserver-588b9fcd55-rqgj2 420m 1186Mi
imageserver-588b9fcd55-sv56m 480m 1813Mi
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-stage-default-pool-943c54e3-p8gl 496m 52% 2843Mi 107%
gke-stage-default-pool-e4669036-6gxh 677m 72% 3064Mi 115%
gke-stage-default-pool-e4669036-lpdr 393m 41% 2530Mi 95%
this is how prod currently looks like under (load) :
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 34% (34m) / 90%
Min replicas: 3
Max replicas: 18
Deployment pods: 10 current / 10 desired
kubectl top pods
NAME CPU(cores) MEMORY(bytes)
imageserver-7f65b96d76-229b4 40m 452Mi
imageserver-7f65b96d76-486tc 26m 594Mi
imageserver-7f65b96d76-96hsv 27m 430Mi
imageserver-7f65b96d76-d86ks 102m 1182Mi
imageserver-7f65b96d76-h7h9x 110m 1807Mi
imageserver-7f65b96d76-pq5bp 34m 402Mi
imageserver-7f65b96d76-snbk9 66m 638Mi
imageserver-7f65b96d76-vc648 92m 1730Mi
kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
gke-prod-default-pool-2e5abd50-2b7t 105m 11% 1381Mi 52%
gke-prod-default-pool-2e5abd50-2s37 80m 8% 1296Mi 49%
gke-prod-default-pool-2e5abd50-br3w 97m 10% 763Mi 28%
gke-prod-default-pool-2e5abd50-j9vd 79m 8% 952Mi 35%
gke-prod-default-pool-e20874d7-6lhh 58m 6% 878Mi 33%
gke-prod-default-pool-e20874d7-7f4x 65m 6% 875Mi 33%
gke-prod-default-pool-e20874d7-kgwd 130m 13% 2413Mi 91%
gke-prod-default-pool-e20874d7-t0w8 67m 7% 773Mi 29%
gke-prod-default-pool-ed1d55e6-0f5q 184m 19% 2618Mi 99%
gke-prod-default-pool-ed1d55e6-gs16 158m 16% 1936Mi 73%
I manually changed the value from 30% -> 90%; it was constantly adding/removing nodes and I saw the bill was increasing (should have been a PR).
Yes, this should have been a PR, or at least a discussion with the team. When did you make this change?
Correct, the CPU requests are for scheduling only. We could leave it at 100m and set the TargetCPU to 600% = 60% utilization per node.
Where does 600% come from? How is that equivalent to 90% in your original explanation? Can you explain exactly what these numbers mean and how you calculated them?
Thanks for the details on your testing. My concern is: with the new settings, are we autoscaling enough to keep up with the load? Do you have any data on this?
I manually changed the value from 30% -> 90%; it was constantly adding/removing nodes and I saw the bill was increasing (should have been a PR).
Yes, this should have been a PR, or at least a discussion with the team. When did you make this change?
I made the change on Sept 29, 2-3 days after releasing it. I saw it was using 18 nodes so I just reacted and increased the %.
Correct, the CPU requests are for scheduling only. We could leave it at 100m and set the TargetCPU to 600% = 60% utilization per node.
Where does 600% come from? How is that equivalent to 90% in your original explanation? Can you explain exactly what these numbers mean and how you calculated them?
I never said it was equivalent to 90%, although I forgot to mention that we need to increase it in order to maximize the utilization of our nodes.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
Thanks for the details on your testing. My concern is: with the new settings, are we autoscaling enough to keep up with the load? Do you have any data on this?
This graph represent our highest peak during the last 30 days :
on all 3 pools central-a , central-b and central-c we never reached out more than 20% of cpu utilization.
We can set the CPUtarget to a less aggressive value : 200% and re-evaluate in a month ?
I made the change on Sept 29, 2-3 days after releasing it.
OK, so it's been at 90% for a while, so we need to increase from that number. Got it.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
I still don't understand. The calculations alone don't really help me understand the reasoning - why are these the numbers that we want to change to? (Specifically your PR sets CPU requests to 400m and target CPU utilization to 150%. I'm OK discussing either these numbers or a different proposal, but I really want to understand where they come from before we make any more changes.)
I'm also still concerned about the safety of these changes. We don't want to end up in a situation where the cluster is at capacity but autoscaling doesn't happen. I thought from the work in #9865 we'd be able to say something like "When the CPU usage of our imageserver pods is over XX%, we need to scale up", and from there we could implement a solution. Would you be able to let us know these numbers? (If you'd still need to put in a lot of effort to figure out these numbers we could consider increasing the autoscaling parameters gradually and keeping an eye out for problems on prod, but my understanding is you already did the needed tests in #9865 and I'd prefer to take a safer approach based on measurements.)
I made the change on Sept 29, 2-3 days after releasing it.
OK, so it's been at 90% for a while, so we need to increase from that number. Got it.
This is the math I used for that:
replicas 18 * 90m = 1620m
replicas 3 * 600m ( 100m with a CPUtarget of 600% = 600m ) = 1800m
I still don't understand. The calculations alone don't really help me understand the reasoning - why are these the numbers that we want to change to? (Specifically your PR sets CPU requests to 400m and target CPU utilization to 150%. I'm OK discussing either these numbers or a different proposal, but I really want to understand where they come from before we make any more changes.)
We have a total of 940mCPU per node and a minimum of 3 nodes ( 1 per zone ). Running 24/7
A total of 2820mCPU
1391mCPU is allocated to kube-system pods
Which leaves us 1429mCPU for the imageserver.
Based on the highest CPU request we had over the last 30 days (~1500mCPU). As per graph : https://github.com/plotly/orca/pull/144#issuecomment-436082074
This is where I got the 600 * 3 replicas = 1800mCPU. I agree it's a bit too aggressive :smile: but my goal was to avoid scaling and better utilize the nodes when we know that this is the only thing running.
I'm also still concerned about the safety of these changes. We don't want to end up in a situation where the cluster is at capacity but autoscaling doesn't happen. I thought from the work in #9865 we'd be able to say something like "When the CPU usage of our imageserver pods is over XX%, we need to scale up", and from there we could implement a solution. Would you be able to let us know these numbers? (If you'd still need to put in a lot of effort to figure out these numbers we could consider increasing the autoscaling parameters gradually and keeping an eye out for problems on prod, but my understanding is you already did the needed tests in #9865 and I'd prefer to take a safer approach based on measurements.)
I do have these numbers, as I pointed out above: https://github.com/plotly/orca/pull/144#issue-227995995 - these represent the number of times it scaled up/down. It's hard to measure from this... It tells me that we scale too often.
CPU requests for a week :
We can see that it peaks often, explaining the scale-up/down.
As for #9865, I didn't have any metrics as the autoscaler wasn't set up yet.
@scjody what do you say if I change the CPU requests to 300mCPU with a target of 90%?
It should cover most of our small peaks and avoid spinning up new nodes.
Does that sound acceptable to you?
@mag009 I agree that we're scaling up too often. Obviously though we need to be concerned with the availability and responsiveness of the service, so we need to make sure we don't go too far in the other direction (in other words not scaling up often enough).
It doesn't sound like we have any measurements or data to support your new proposal though. How much time would it take for you to test autoscaling by doing load testing using stage? Like I said above if it's going to take a significant amount of time then experimenting using prod may be the way to go, but I'd like to consider the tradeoffs first.
@mag009 I agree that we're scaling up too often. Obviously though we need to be concerned with the availability and responsiveness of the service, so we need to make sure we don't go too far in the other direction (in other words not scaling up often enough).
It doesn't sound like we have any measurements or data to support your new proposal though. How much time would it take for you to test autoscaling by doing load testing using stage? Like I said above if it's going to take a significant amount of time then experimenting using prod may be the way to go, but I'd like to consider the tradeoffs first.
Should take me roughly 2-3h to test.
|
gharchive/pull-request
| 2018-11-02T16:43:37 |
2025-04-01T06:40:01.705524
|
{
"authors": [
"mag009",
"scjody"
],
"repo": "plotly/orca",
"url": "https://github.com/plotly/orca/pull/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
211812250
|
MsSQL Express Compatibility
I'd like to know if the Plotly Database Connector (when doing a MsSQL connection) supports a MSSQL EXPRESS EDITION database.
I tried to connect using the "sa" username and its password but nothing happened. It gives me the error:
failed to connect to [hostname]:[PortNumber] - connect ECONNREFUSED [my IP address]:[PortNumber]
These are my inputs:
Username: sa
Password: mypassword
Host: myHost
Port: (I left this port blank)
Database: myDB
So, is it compatible?
Hi! After some searching it seems the connector currently does not support MSSQL EXPRESS (relevant sequelize issue), but it should be an easy fix.
Relevant code in database-connector https://github.com/sequelize/sequelize/issues/3097#issuecomment-73798671
A new option 'instanceName' should be provided.
We welcome contributions from the community and will be glad to review them and work on them to include them in the connector.
Forgive my ignorance but... I searched through the whole project and I just can't figure out which path contains the sequelize connector; also, I downloaded the project and just can't make it start (mostly because I don't even know how to do it).
Thanks for your attention! :)
Hey! Thank you for getting your hands dirty :P
What did you try exactly to run it and how did it fail?
The following commands should get you up and running.
npm install
npm run build
npm run start
Hey Alexandre! I'm happy to tell you that I got myself a big cup of coffee and finally found the file you kindly referenced to me, so, I made the changes in my application and it worked!
In my connection-manager.js which is located in plotly-database-connector/resources/app/node_modules/sequelize/lib/dialects/mssql/connection-manager.js I just added instanceName: 'SQLEXPRESS' and everything worked fine!
Thank you again for taking the time to answer my questions. I also asked a [question on StackOverflow](http://stackoverflow.com/questions/42589886/plotly-and-sql-server-express-compatibility) and pasted the answer there.
If you wish, you could create a branch in this repository and make a PR that would add that instanceName parameter as one of the inputs the user gives when he/she selects Microsoft SQL as the database to connect to. This way others will only have to enter it in the user interface without having to change the source code. I can help you with reviewing the PR. It should be quite simple.
This is the place where it could be added in the front end https://github.com/plotly/plotly-database-connector/blob/master/app/constants/constants.js#L16
And we can read that input in the back end here https://github.com/plotly/plotly-database-connector/blob/master/backend/persistent/datastores/Sql.js#L14
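For illustration, a hedged sketch of what the backend change could look like: passing the instance name through Sequelize's dialectOptions instead of patching connection-manager.js. Whether dialectOptions.instanceName is forwarded to tedious depends on the Sequelize version the connector uses.
```js
// Editor's sketch, reusing the connection values from the report above.
const Sequelize = require('sequelize');

const connection = new Sequelize('myDB', 'sa', 'mypassword', {
  dialect: 'mssql',
  host: 'myHost',
  dialectOptions: {
    instanceName: 'SQLEXPRESS' // named instance; the port is typically omitted in that case
  }
});
```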
I just did it, I hope that's the correct way to do it.
I'd appreciate it if you told me whether I did it right or wrong :)
@ArturoAguileraV The project has been quiet since your last comment (sorry about that). But now a new version is about to be released that includes a fix for this issue. You can download a prerelease. Please let us know if you have any issues:
https://github.com/plotly/falcon-sql-client/releases/tag/v2.3-pre
Fixed in https://github.com/plotly/falcon-sql-client/releases/tag/v2.3.2
|
gharchive/issue
| 2017-03-03T21:54:49 |
2025-04-01T06:40:01.717484
|
{
"authors": [
"ArturoAguileraV",
"alexandresobolevski",
"n-riesco"
],
"repo": "plotly/plotly-database-connector",
"url": "https://github.com/plotly/plotly-database-connector/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2319435257
|
🛑 KCG is down
In e18fff2, KCG (https://kunstrijclubgroningen.nl) was down:
HTTP code: 0
Response time: 0 ms
Resolved: KCG is back up in b90ba1a after 34 minutes.
|
gharchive/issue
| 2024-05-27T15:47:15 |
2025-04-01T06:40:01.821357
|
{
"authors": [
"pluim003"
],
"repo": "pluim003/upptime",
"url": "https://github.com/pluim003/upptime/issues/1583",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
273171910
|
no real changes
package.json keeps getting prettified back and forth, plus package-lock keeps faffing around.
heh, that's actually not true - the changes are pretty significant, it's just that this is an old PR title apparently. These are the breaking 0.25 changes from all the other repos.
|
gharchive/pull-request
| 2017-11-11T20:01:14 |
2025-04-01T06:40:01.832508
|
{
"authors": [
"ericeslinger"
],
"repo": "plumpstack/plump-store-postgres",
"url": "https://github.com/plumpstack/plump-store-postgres/pull/26",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2335319837
|
PlusPage and PlusTable adaptive height
Component/module to be enhanced
PlusPage PlusTable
Description
In the current version there is no adaptive-height feature; once a page has a lot of data, you have to keep scrolling down to see the pagination.
Additional comments (optional)
No response
I have a similar requirement and solved it with a flex layout:
<script lang="ts" setup>
// ...
</script>
<template>
<div class="auto-layout">
<PlusPage />
</div>
</template>
<style lang="scss">
// adaptive height
.auto-layout {
padding: 16px;
height: calc(100vh - var(--pro-layout-height-header));
.plus-page {
display: flex;
flex-direction: column;
height: 100%;
.el-card.plus-page__table_wrapper {
flex: 1;
.el-card__body {
height: 100%;
.plus-table {
display: flex;
flex-direction: column;
height: 100%;
.el-table {
flex: 1;
}
.plus-pagination {
padding-bottom: 0;
}
}
}
}
}
}
</style>
|
gharchive/issue
| 2024-06-05T09:05:38 |
2025-04-01T06:40:01.839600
|
{
"authors": [
"FuAdmin",
"stormzhangbx"
],
"repo": "plus-pro-components/plus-pro-components",
"url": "https://github.com/plus-pro-components/plus-pro-components/issues/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
227206181
|
feat: add --fast mode
This PR changes the progress calculation to optionally skip frames that are between ones that have similar progress toward the target. When enabled, it speeds up processing by about 40% and has minimal impact when not enabled via the shortcut. This also aligns perceptual progress and visual progress by sharing the same computation code and fixes #48 by virtue of the fact that we can't wait for the global min when we only parse some of the JPEGs.
Timings
( for i in $(seq 10); do; speedline cnn.json --pretty; done; ) 55.15s user 4.63s system 98% cpu 1:00.44 total
( for i in $(seq 10); do; speedline cnn.json --pretty --fast; done; ) 34.28s user 3.32s system 99% cpu 37.737 total
1 - (34/55) = ~38%
Open Questions
Expose the direct threshold value as the option or keep as binary? If so, name? similarProgressThreshold?
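A minimal sketch of the frame-skipping idea (editor's illustration, not speedline's actual code; it assumes each frame already carries a cheap visualProgress value and temporally downsamples before the expensive perceptual comparison):
```js
// Drop frames whose visual progress is within `similarProgressThreshold`
// of the previously kept frame; always keep the first and last frames.
function downsampleFrames(frames, similarProgressThreshold = 5) {
  const kept = [frames[0]];
  for (let i = 1; i < frames.length - 1; i++) {
    const last = kept[kept.length - 1];
    if (Math.abs(frames[i].visualProgress - last.visualProgress) >= similarProgressThreshold) {
      kept.push(frames[i]);
    }
  }
  kept.push(frames[frames.length - 1]);
  return kept;
}
```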
Skipping frames can artificially make the visual jitter (jankiness) go away from the calculations. Please make sure that layout instability effects that perceptual speed index captures aren't sacrificed due to this frame-skipping (technically, this is temporal downsampling).
@pahammad by its nature, the impact on the indexes cannot be avoided, as you point out. However, in my experience there are a number of issues with using these metrics as a signal of layout stability in the first place, and frequently I've seen jitter be artificially rewarded by speed index rather than punished. Would a warning and a tunable threshold address your concerns, or are you against including the option at all given these drawbacks?
I am OK with including the frame-skipping as an option if it comes with appropriate warning when the option is exercised. I am very curious about your sentence: "frequently I've seen jitter artificially be rewarded by speed index rather than punished". Are you referring to the classical (histogram-based) speed index, or are you referring to SSIM based perceptual speed index (PSI) ? If you are referring to PSI in that sentence (where it rewards jitter instead of punishing it), I would love to see an example or two - this is opposite to what I've seen.
@pahammad disclaimer text added and I've filed #50 to discuss layout stability tracking 👍
changes lgtm but 2 test failures we need to work out.
changes lgtm but 2 test failures we need to work out.
done
speedline@1.2.0 published with this.
|
gharchive/pull-request
| 2017-05-08T23:40:44 |
2025-04-01T06:40:01.887052
|
{
"authors": [
"pahammad",
"patrickhulce",
"paulirish"
],
"repo": "pmdartus/speedline",
"url": "https://github.com/pmdartus/speedline/pull/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1437501688
|
🛑 RPB Law is down
In 6ccee15, RPB Law (https://lawbob.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: RPB Law is back up in 49b2411.
|
gharchive/issue
| 2022-11-06T18:48:31 |
2025-04-01T06:40:01.998342
|
{
"authors": [
"pmjustin"
],
"repo": "pmjustin/pmuptime",
"url": "https://github.com/pmjustin/pmuptime/issues/236",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2154443806
|
🛑 JMS Law is down
In 9d7dad3, JMS Law (https://jmslawltd.com) was down:
HTTP code: 503
Response time: 10568 ms
Resolved: JMS Law is back up in 30a5ea2 after 5 minutes.
|
gharchive/issue
| 2024-02-26T15:22:29 |
2025-04-01T06:40:02.000920
|
{
"authors": [
"pmjustin"
],
"repo": "pmjustin/pmuptime",
"url": "https://github.com/pmjustin/pmuptime/issues/466",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2615406051
|
🛑 JMS Law is down
In 0c90ec9, JMS Law (https://jmslawltd.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: JMS Law is back up in 8aea11d after 1 hour, 23 minutes.
|
gharchive/issue
| 2024-10-26T00:39:28 |
2025-04-01T06:40:02.003436
|
{
"authors": [
"pmjustin"
],
"repo": "pmjustin/pmuptime",
"url": "https://github.com/pmjustin/pmuptime/issues/669",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2056852230
|
⚠️ GitHub has degraded performance
In cdc2258, GitHub (https://www.githubstatus.com/api/v2/status.json) experienced degraded performance:
HTTP code: 200
Response time: 186 ms
Resolved: GitHub performance has improved in 0807d24 after 8 minutes.
|
gharchive/issue
| 2023-12-27T02:52:53 |
2025-04-01T06:40:02.008203
|
{
"authors": [
"pmmmwh"
],
"repo": "pmmmwh/upptime",
"url": "https://github.com/pmmmwh/upptime/issues/562",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1118346029
|
⚠️ Slack has degraded performance
In 4881fc4, Slack (https://status.slack.com/api/v2.0.0/current) experienced degraded performance:
HTTP code: 200
Response time: 85 ms
Resolved: Slack performance has improved in 50353d9.
|
gharchive/issue
| 2022-01-29T22:50:09 |
2025-04-01T06:40:02.010602
|
{
"authors": [
"pmmmwh"
],
"repo": "pmmmwh/upptime",
"url": "https://github.com/pmmmwh/upptime/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
990107360
|
webview setState in each component is not responding
My intention is to customize the character's items in a webview and send the item names from React Native.
The data is sent correctly, but the problem is setting the state inside the web message event listener.
There is no problem when setting the state for color, but it crashes when setting the state for both color and geometry.
function Boy(props) {
const group = useRef()
const { nodes, materials } = useGLTF('/man6.gltf')
const state = proxy ({
current: null,
color: {
skin: "#fff6dd",
hair: 'Material.010', //RN2_Green.002
shirt: 'NolmalShirt02.001',
watch: "Material.007",
shoe: 'Shoes_Normal.001', //'Material.009'
}
})
const stateGeo = proxy ({
current: null,
geo: {
shoe: nodes.Shoes2.geometry,
shirt: nodes.man_Shirt.geometry,
}
})
const snap = useSnapshot(state)
const collectionShoesGeometry = {
"shoe3": nodes.Shoes2.geometry,
"shoe4": nodes.man_Shoes.geometry,
}
const collectionShirtsGeometry = {
"shirt3": nodes.Shirt2.geometry,
"shirt4": nodes.man_Shirt.geometry,
}
window.addEventListener("message", function(event) {
var requestTrim = event.data + ""
var value = requestTrim.split("|")
if (value.length > 0) {
if (value[0] === "closet") {
stateGeo.geo['shirt'] = collectionShirtsGeometry['shirt3']
state.color['shirt'] = "NolmalShirt02.001"
} else if(value[0] === "watch") {
state.color["watch"] = "Material.008"
}
}
});
useEffect(() => {
}, [])
return (
<group ref={group} {...props} dispose={null} scale={4.7}>
<mesh
geometry={stateGeo.geo['shirt']}
material-color={snap.color['shirt']}
material={materials[snap.color['shirt']]}
position={[0, -0.01, 0.02]}
rotation={[-1.6, 0, 0]}
/>
<mesh
geometry={nodes.watch1.geometry}
material={materials['Material.001']}
position={[0, -0.01, 0.02]}
rotation={[-1.6, 0, 0]}
scale={1.01}
/>
</group>
)
}
useGLTF.preload('/man6.gltf')
I tried to put the state outside the function, and it works.
Note: useSnapshot can't be used twice in the same function and I'm not sure why.
Does anyone have any suggestions about this? Thank you
this doesn't seem correct, color = "NolmalShirt02.001" is not valid
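For reference, a minimal sketch (editor's illustration using valtio's proxy/useSnapshot; the names mirror the snippet above and are otherwise hypothetical) of the module-scope state the reporter says worked:
```js
import { proxy, useSnapshot } from 'valtio'

// Create the proxy once at module scope so it is not recreated on every render.
const state = proxy({
  color: {
    shirt: 'NolmalShirt02.001',
    watch: 'Material.007',
  },
})

// Mutations from the webview message listener persist across renders.
window.addEventListener('message', (event) => {
  const [kind] = String(event.data).split('|')
  if (kind === 'watch') {
    state.color.watch = 'Material.008'
  }
})

function Boy(props) {
  const snap = useSnapshot(state) // read-only snapshot inside the component
  // ...build the meshes from snap.color as in the original snippet
  return null
}
```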
|
gharchive/issue
| 2021-09-07T15:30:28 |
2025-04-01T06:40:02.029777
|
{
"authors": [
"drcmda",
"teenkwn"
],
"repo": "pmndrs/gltfjsx",
"url": "https://github.com/pmndrs/gltfjsx/issues/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1452371426
|
Depth buffer issue when using DepthOfFieldEffect
Description of the bug
Follow-up to #420.
By using dofEffect.circleOfConfusionMaterial.adoptCameraSettings and worldFocusDistance, I was able to get a dynamic target to change the focus. I got it to match up / work in @vanruesc's sandbox.
However, I still don't get it to work elsewhere...
For me it looks like the depth buffer is in a wrong format, and that the calculation to go from linearized near/far values to world distances and vice versa doesn't match some setting here. I checked that we're not using logarithmic depth.
Maybe these images of expected and not expected cases help:
In the below images,
the textured plane is cut off by near and far clip planes of 1 and 10
the white cube is the focus target - placed at 1, 10, ~5.5 and ~2.5
✔️ Target at near clip plane - as expected: near clip plane is in focus
✔️ Target at far clip plane - as expected: far clip plane is in focus
❌ Target at center between near and far - not expected: focus is too close
❌ Target at 1/4 between near and far - not expected: focus is too close
So there seems to be some nonlinearity going on, but I have no idea why.
To Reproduce
I'm unfortunately unsure how to reproduce / what's wrong in this setup so far. Happy to answer any questions to hopefully figure out what I'm doing wrong.
Expected behavior
Ability to set the worldFocusDistance and get that distance in focus.
Screenshots
see above
Library versions used
Three: 0.145.4
Post Processing: 6.29.1
Desktop
OS: Windows 10 and 11
Browser Chrome
Graphics hardware: RTX 2070 Max-Q and RTX 3070
Thanks for the screenshots :+1:
How are you setting the worldFocusDistance? You'd want to set this to the distance from the camera to the cube. Alternatively, try setting dofEffect.target to cube.position;
(my mistake convoluting R3F and three here, of course it's vanilla...)
Yes, this distance is the world distance from the camera to the cube. Also logged these values and made sure they match up with what I'd expect (e.g. in screenshots 2 and 3 the worldFocusDistance values logged are 5.5 and 2.5).
I get the same behaviour when using dofEffect.target = cube.position.
(side note: for that to work, the cube must be at the scene root / must not be transformed by its parents if I'm not mistaken, which I made sure of).
Strange.. I'll take a closer look this weekend.
side note: for that to work, the cube must be at the scene root / must not be transformed by its parents if I'm not mistaken, which I made sure of
Thanks for pointing that out. The implementation currently doesn't use getWorldPosition which is a bug.
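For reference, a small workaround sketch (editor's illustration using standard three.js calls) that computes the camera-to-target distance in world space manually instead of relying on dof.target; where exactly worldFocusDistance lives may differ between postprocessing versions:
```js
import { Vector3 } from 'three';

const camPos = new Vector3();
const targetPos = new Vector3();

function updateFocus(camera, target, dofEffect) {
  camera.getWorldPosition(camPos);    // world-space camera position
  target.getWorldPosition(targetPos); // works even when the target is parented/transformed
  dofEffect.circleOfConfusionMaterial.adoptCameraSettings(camera);
  dofEffect.circleOfConfusionMaterial.worldFocusDistance = camPos.distanceTo(targetPos);
}
```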
For low near clip values (e.g. 0.01) I only get any focus when the target is at far clip distance, and basically all other target distances result in focus planes very close to the near clip plane.
Well, the shader does linearize depth, so this sounds like the camera settings aren't being set correctly for some reason.
Thanks!
Right now I'm literally calling this
setInterval(() => {
  dof.circleOfConfusionMaterial.adoptCameraSettings(camera);
  dof.target = targetObject.position;
}, 50);
to ensure for now that I'm not messing up any timing here and that the right camera values are used. (Of course I tried without that as well.)
What's weird is that it does seem to work for "target distance == near clip" and "target distance == far clip" but nothing inbetween.
Hi again! Were you by any chance able to look into the issue @vanruesc? Thanks!
Sorry, I've been too busy to get back to this. However, I was able to confirm that viewZToOrthographicDepth returns correct values.
I got it to match up / work in @vanruesc's sandbox.
However, I still don't get it to work elsewhere...
Any chance you could provide a failing example with code for me to look at?
@hybridherbst does this sandbox or video demonstrate the same issue you are talking about ?
https://user-images.githubusercontent.com/6885294/204822593-cee0b9d4-25a2-4f03-b472-fb0fc02db443.mp4
@optimus007 Thanks for the sandbox! The unexpected behaviour that can be observed when the target is off-center has to do with perspective projection. The object would probably remain in focus at all times with an orthographic camera. The effect translates the world distance to a linear, orthographic depth value which basically counters the perspective projection. I wonder if we could use perspective depth instead :thinking:
Right now, I'm busy with work and the v7 redesign. I'll eventually get to the DepthOfFieldEffect.
@hybridherbst Does your project use logarithmic depth? That would explain why you're only able to focus objects close to the near and far planes.
The expected behaviour described in this ticket would be achieved by calculating the distance from individual fragments to the camera instead of using the scene depth. This would result in a spherical focus field around the camera instead of a box-like field between the near and far plane.
I don't plan on changing the current CoC implementation in postprocessing v6, but the new implementation in v7 will use the distance-based approach.
Closing this in favor of #569.
|
gharchive/issue
| 2022-11-16T22:32:25 |
2025-04-01T06:40:02.046121
|
{
"authors": [
"hybridherbst",
"optimus007",
"vanruesc"
],
"repo": "pmndrs/postprocessing",
"url": "https://github.com/pmndrs/postprocessing/issues/426",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
1464244772
|
New command: Apply a retentionlabel to a file using spo file retentionlabel ensure
Usage
m365 spo file retentionlabel ensure [options]
Description
Apply a retention label to a file
Options
Option | Description
---|---
-u, --webUrl <webUrl> | The URL of the web
--fileUrl [fileUrl] | The server-relative URL of the file that should be labelled. Specify either fileUrl or fileId but not both.
-i, --fileId [fileId] | The UniqueId (GUID) of the file that should be labelled. Specify either fileUrl or fileId but not both.
--name <name> | Name of the retention label to apply to the file.
Examples
Apply the retention label Some retention label to a file
m365 spo file retentionlabel ensure --webUrl 'https://contoso.sharepoint.com/sites/sales' --fileUrl '/sites/sales/somelibrary/somefile.pdf' --name 'Some retention label'
More information
This command can be implemented with shared code created for #4158.
Putting this on hold till #4158 is implemented.
looks solid 👍
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hi @waldekmastykarz, because we might be able to use shared code. In principle files and folders are listItems as well, so we could possibly reuse the listItem command. That was my initial thought.
If you have a better idea though.., do let me know :)
Hey @martinlingstuyl, why are we waiting with implementation until #4158 is done?
Hi @waldekmastykarz, because we might be able to use shared code. In principle files and folders are listItems as well, so we could possibly reuse the listItem command. That was my initial thought.
Make sense. Next time, let's include this reasoning upfront just so that we make it clear to everyone why we're waiting 👍
This command can be implemented with shared code created for #4158.
I'd written something, but I agree it could have been more clear @waldekmastykarz
Can I work on this one please
Awesome, all yours!
|
gharchive/issue
| 2022-11-25T09:16:20 |
2025-04-01T06:40:02.086695
|
{
"authors": [
"Adam-it",
"Jwaegebaert",
"martinlingstuyl",
"nicodecleyre",
"waldekmastykarz"
],
"repo": "pnp/cli-microsoft365",
"url": "https://github.com/pnp/cli-microsoft365/issues/4159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1660977869
|
Adds "Create custom views to differentiate SharePoint news page types in Site Pages library" sample script. Closes #1782
Adds "Create custom views to differentiate SharePoint news page types in Site Pages library" sample script. Closes #1782
Thank you @nanddeepn! We'll try to review it ASAP!
Thanks @nicodecleyre
Added a reference to the script from the docs/mkdocs.yml file.
|
gharchive/pull-request
| 2023-04-10T15:52:41 |
2025-04-01T06:40:02.088434
|
{
"authors": [
"milanholemans",
"nanddeepn"
],
"repo": "pnp/cli-microsoft365",
"url": "https://github.com/pnp/cli-microsoft365/pull/4751",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
936776207
|
SharePoint Online Search Issue using PnP
Notice
Many bugs reported are actually related to the PnP Framework which is used behind the scenes. Consider carefully where to report an issue:
Are you using Invoke-PnPSiteTemplate or Get-PnPSiteTemplate? The issue is most likely related to the Provisioning Engine. The Provisioning engine is not located in the PowerShell repo. Please report the issue here: https://github.com/pnp/pnpframework/issues.
Is the issue related to the cmdlet itself, its parameters, the syntax, or do you suspect it is the code of the cmdlet that is causing the issue? Then please continue reporting the issue in this repo.
If you think that the functionality might be related to the underlying libraries that the cmdlet is calling (We realize that might be difficult to determine), please first double check the code of the cmdlet, which can be found here: https://github.com/pnp/powershell/tree/master/src/Commands. If related to the cmdlet, continue reporting the issue here, otherwise report the issue at https://github.com/pnp/pnpframework/issues
Reporting an Issue or Missing Feature
Please confirm what it is that your reporting
Expected behavior
Please describe what output you expect to see from the PnP PowerShell Cmdlets
Actual behavior
Please describe what you see instead. Please provide samples of output or screenshots.
Steps to reproduce behavior
Please include complete script or code samples in-line or linked from gists
What is the version of the Cmdlet module you are running?
(you can retrieve this by executing Get-Module -Name "PnP.PowerShell" -ListAvailable)
Which operating system/environment are you running PnP PowerShell on?
[ ] Windows
[ ] Linux
[ ] MacOS
[ ] Azure Cloud Shell
[ ] Azure Functions
[ ] Other : please specify
Closing this as it's not clear what the issue is with PnP.
Request you to please answer the issue template questions and reopen the issue.
Hi Elhadj,
My issue has been closed for the reason above.
Regards
Jestine GOH
Hello Jestine,
Good day!
Thank you for your email reply, kindly could you provide more description regarding the issue that you are facing?
Best regards,
Elhadj
Hi all,
We are using the SharePoint Search web part to search documents. Recently we noticed the search results are not working well; some time ago it was working.
When I view the source of the page, I find the code below for the Search input. I understand from Microsoft that PnP search is being used and that an issue needs to be logged on GitHub.
Please see the code below for your reference. Please advise.
It looks like your browser does not have JavaScript enabled. Please turn on JavaScript and try again.
Thank you
Regards
Jestine GOH
|
gharchive/issue
| 2021-07-05T07:19:43 |
2025-04-01T06:40:02.156419
|
{
"authors": [
"gautamdsheth",
"jestinegoh"
],
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/issues/890",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1301049766
|
Update Add-PnPPageWebPart.md
Correcting Title typo from "Add-PnPWebPart" to "Add-PnPPageWebPart"
Before creating a pull request, make sure that you have read the contribution file located at
https://github.com/pnp/powerShell/blob/dev/CONTRIBUTING.md
Type
[x] Typo Fix
Thanks @JimmyHang, well noticed!
|
gharchive/pull-request
| 2022-07-11T18:09:44 |
2025-04-01T06:40:02.159369
|
{
"authors": [
"JimmyHang",
"KoenZomers"
],
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/pull/2132",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1659610091
|
Adding Move-PnPTerm and Move-PnPTermSet commands
Type
[x] New Feature
Related Issues?
#2978
What is in this Pull Request ?
Adding two new commands
Move-PnPTerm
Move-PnPTermSet
@kunj-sangani - code looks great !!
Some minor changes before I merge it 😊
In both these cmdlets, can you
Replace -DestinationTermSet with -TargetTermSet
Replace -SourceTermSet with -TermSet
Replace -SourceTermGroup with -TermGroup
Replace -DestinationTermGroup with -TargetTermGroup
Replace -DestinationTerm with -TargetTerm
Don't forget to update the docs as well.
Hi @gautamdsheth
Updated the Name of the parameters
Thanks for the help :)
Thanks @kunj-sangani, merged it , much appreciated !
|
gharchive/pull-request
| 2023-04-08T18:43:24 |
2025-04-01T06:40:02.164104
|
{
"authors": [
"gautamdsheth",
"kunj-sangani"
],
"repo": "pnp/powershell",
"url": "https://github.com/pnp/powershell/pull/2989",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1609734961
|
Resize table columns
As per https://github.com/pocketbase/pocketbase/discussions/1542
It uses a fairly generic svelte action, so it doesn't interfere much with the existing code. Took a bit of time to smooth out any bugs, but seems stable now.
Thanks again for your work on the project; the latest release was great 👍.
I'm not sure about this feature.
I haven't tested it locally, nor reviewed the code changes, but just from the screenshot it looks kindof strange since the table is no longer 100% width (maybe because of the fixed layout?).
I'm not sure about this feature.
I haven't tested it locally, nor reviewed the code changes, but just from the screenshot it looks kindof strange since the table is no longer 100% width (maybe because of the fixed layout?).
I did think this after submitting the pull request with the gif.
It makes the implementation slightly less elegant, but I can keep a minimum width for the table. I will revise and see what you think.
I don't think table-layout: fixed is necessary and it could cause some issues with adding for example a new column after the resizing.
Additionally, there should be something that will prevent resizing below some min-width threshold, because from the screenshot the id seems to be cropped and I'm not sure if this is a good idea.
I've updated the code to keep the minimum table width:
This implementation does require a fixed table layout. Trying to keep columns a specific width and the resize-handle tracking the cursor was problematic with a fluid layout.
The columns will clip as you resize. I personally prefer this to min widths.
Let me know what you think, feel free to close if it doesn't make sense for the project.
Sorry, I appreciate the work you've put into this but I don't want to rush it and merge something just because there is a PR for it.
I like the idea of resizable columns but the current implementation feels a little brittle and I'm not confident that we are handling all edge cases (eg. on windows small->big resize I guess we need also to bind to the resize event to recalculate the initial table width? I'm also not sure how it will behave if we do external layout changes, eg. via plain css in a future responsive version or just toggling sibling dom elements, etc.).
I've added it in my local todo to search for other options and eventually something similar could be implemented in the future, but I don't want to invest time into it right now and for now remains out of the scope.
|
gharchive/pull-request
| 2023-03-04T11:33:34 |
2025-04-01T06:40:02.230738
|
{
"authors": [
"ganigeorgiev",
"mjadobson"
],
"repo": "pocketbase/pocketbase",
"url": "https://github.com/pocketbase/pocketbase/pull/1966",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1137106077
|
Campaign page/ UI-UX elements are missing according to UI mockup (Design)
To Reproduce
Steps to reproduce the behavior:
1. Go to dev.podkrepi.bg
2. Navigate to the main menu "Дарителство"
3. Scroll down to 'Кампании'
4. Choose a random Кампания
5. Select the "Вижте повече" button
6. You should be successfully redirected to the new Campaign page
Expected behavior
All UX elements should match the UI mockups from Design.
Actual result:
Campaign page/ UI-UX elements are missing according to UI mockup (Design):
Missing Subheader
Slider for sums (too large according to the UI mockups from Design)
List of donors and donation sums missing (under the "Сподели" button)
Missing image carousel (under the Description of the Campaign)
Missing section "Последни новини"
Missing section "Коментари"
Screenshots
If applicable, add screenshots to help explain your problem.
Desktop (please complete the following information):
OS: Windows
Browser: Chrome
Version: 98.0.4758.82 (Official Build) (64-bit)
@ani-kalpachka
@kachar
If any of the @podkrepi-bg/softuni-bootcamp team wanna try solving this issue it would be great
|
gharchive/issue
| 2022-02-14T10:39:09 |
2025-04-01T06:40:02.242902
|
{
"authors": [
"Polina1985",
"kachar"
],
"repo": "podkrepi-bg/frontend",
"url": "https://github.com/podkrepi-bg/frontend/issues/489",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
982049532
|
[Feature] Use external database for Keycloak
Currently we deploy Keycloak with a Helm chart that also deploys PostgreSQL next to it. Since our own modules will also be using PostgreSQL it's best if we have 1 instance and point all software to it. This will save resources and will make it easier to maintain
I see most of the images and infrastructure are managed in the api repo. Is that infrastructure repo then mandatory?
|
gharchive/issue
| 2021-08-29T09:39:48 |
2025-04-01T06:40:02.244417
|
{
"authors": [
"dimitur2204",
"imilchev"
],
"repo": "podkrepi-bg/infrastructure",
"url": "https://github.com/podkrepi-bg/infrastructure/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
646332784
|
Crash occurring on Xcode 12 when attempting to do enum reflection on child action.
Describe the bug
Attempting to send an action via the store causes EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0) when combining reducers.
Given a child action without any cases
enum ChildAction: Equatable {}
And a parent action
enum ParentAction: Equatable {
case childAction(ChildAction)
case parentAction
}
And a main reducer
let parentReducer = Reducer<ParentState, ParentAction, ParentEnvironment>.combine(
childReducer.pullback(
state: \.childState,
action: /ParentAction.childAction,
environment: { _ in ChildEnvironment() }
),
Reducer<ParentState, ParentAction, ParentEnvironment> { state, action, _ in
switch action {
case .parentAction:
state.childState += 1
return .none
case .childAction:
return .none
}
}
)
As soon as you send viewStore.send(.parentAction) a crash occurs.
The crash happens on EnumReflection line 75 from the CasePaths package
This crash does not happen on Xcode 11.4 with the same setup.
A work around is to add a case to the child enum.
enum ChildAction {
case banana
}
To Reproduce
Sending any action, e.g. viewStore.send(.parentAction), with the above setup is sufficient to cause a crash.
Expected behavior
Give a clear and concise description of what you expected to happen.
Screenshots
Environment
Xcode 12 Beta 1
Swift 5.3
OS (if applicable): iOS 14
Additional context
ComposableCasePathCrash.zip
Hey @vibrazy, thanks for the detailed report. We've actually logged this issue on swift-case-paths here: https://github.com/pointfreeco/swift-case-paths/issues/11
We probably won't get around to fixing it for a bit (if you wanna take a pass at it, please do!), but in the meantime, another workaround is to use the .never case path:
childReducer.pullback(
state: \.childState,
- action: /ParentAction.childAction,
+ action: .never,
environment: { _ in ChildEnvironment() }
),
This has been fixed upstream in Case Paths 0.1.2. Be sure to update your package dependencies!
|
gharchive/issue
| 2020-06-26T15:15:22 |
2025-04-01T06:40:02.265009
|
{
"authors": [
"stephencelis",
"vibrazy"
],
"repo": "pointfreeco/swift-composable-architecture",
"url": "https://github.com/pointfreeco/swift-composable-architecture/issues/200",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1841886866
|
Support for Non-Regional Forms and Gender Differences
Was making my Living Dex and noticed the site does not support forms besides regional forms. Unown, Vivillon, Alcremie and gender differences are the big form changes I would be looking for. Perhaps have it as an option to display and/or have the option to display forms/gender differences as seperate boxes.
Hello, I'm already looking for it but it doesn't seem to be about it. please
In contrast to the previous post, I think all "other" forms should be in their own box. This is entirely because what happens when they add another hat for pikachu? You will need to move 1300+ pokemon in HOME to keep the order straight. If they are in a separate box, the worst case is you need to move the whole box one over to make room for another.
This appears to be another duplicate of #256?
(The checkboxes are probably kinda implied, due to being available for Gmax forms)
|
gharchive/issue
| 2023-08-08T19:09:58 |
2025-04-01T06:40:02.288373
|
{
"authors": [
"ALonleyBanana",
"HavocsCall",
"Vrontis",
"fuer-lo"
],
"repo": "pokedextracker/pokedextracker.com",
"url": "https://github.com/pokedextracker/pokedextracker.com/issues/508",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1395320582
|
[TECHDEBT] [P2P] Raintree scalability improvements
Objective
Tend to the TODOs:
// INVESTIGATE(olshansky/team): Does not scale to 1,000,000,000 nodes
Related to #222 and #246
Origin Document
"make it work ⏩ make it fast ⏩ make it pretty"
#222 improves a lot in terms of time complexity, see what we can do to improve even further.
Some research might be required
Goals
[ ] Ensure that the network can scale
[ ] Verify empirically
Deliverable
[ ] Optimizations
[ ] Tests / Benchmarks
Non-goals / Non-deliverables
_REPLACE_ME: List of things that are out of scope
...
General issue deliverables
[ ] Update the appropriate CHANGELOG
[ ] Update any relevant READMEs (local and/or global)
[ ] Update any relevant global documentation & references
[ ] If applicable, update the source code tree explanation
[ ] If applicable, add or update a state, sequence or flowchart diagram using mermaid
[Optional] Testing Methodology
_REPLACE_ME: Make sure to update the testing methodology appropriately_
Task specific tests: make ...
All tests: make test_all
LocalNet: verify a LocalNet is still functioning correctly by following the instructions at docs/development/README.md
Creator: @deblasis
Co-Owners: @Olshansk
Moving this to M4
Closing this out as the scope is too large.
|
gharchive/issue
| 2022-10-03T20:54:12 |
2025-04-01T06:40:02.296717
|
{
"authors": [
"Olshansk",
"deblasis"
],
"repo": "pokt-network/pocket",
"url": "https://github.com/pokt-network/pocket/issues/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2123656169
|
Select instrumentation tooling (assumes OpenTelemetry exporter for Azure Monitor)
As a project lead, I want to make a decision on what tooling to use to instrument telemetry data, so that telemetry data can be collected and sent to the centralized monitoring solution
Mentioned in #228
|
gharchive/issue
| 2024-02-07T18:46:01 |
2025-04-01T06:40:02.307092
|
{
"authors": [
"polatengin"
],
"repo": "polatengin/indiana",
"url": "https://github.com/polatengin/indiana/issues/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
414258043
|
user-home-test branch
As mentioned in neurostars, the last few versions of the fmriprep docker images fail to run when specifying a user (docker run -u myuser ...).
@oesteban created a new docker image (poldracklab/fmriprep:user-home-test).
This new image fixes the permissions problem with $TEMPLATEFLOW_HOME. However, now I get an error similar to the one in a different Neurostars post when processing the (GRE) fieldmap:
Node: fmriprep_wf.single_subject_Pilot005_wf.func_preproc_ses_day1_task_TASK_acq_normal_run_01_echo_1_wf.sdc_wf.phdiff_wf.meta
Working directory: /tmp/work/fmriprep_wf/single_subject_Pilot005_wf/func_preproc_ses_day1_task_TASK_acq_normal_run_01_echo_1_wf/sdc_wf/phdiff_wf/meta
Node inputs:
bids_dir = None
bids_validate = False
fields = <undefined>
in_file = /data/phelpslab/Linda/BIDSdata/sub-Pilot005/ses-day1/fmap/sub-Pilot005_ses-day1_acq-GRE_run-01_phasediff.nii.gz
undef_fields = False
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
result['result'] = node.run(updatehash=updatehash)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
result = self._run_interface(execute=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
return self._run_command(execute)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
result = self._interface.run(cwd=outdir)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 371, in run
outputs = self.aggregate_outputs(runtime)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 472, in aggregate_outputs
raise error
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 465, in aggregate_outputs
setattr(outputs, key, val)
File "/usr/local/miniconda/lib/python3.7/site-packages/traits/trait_handlers.py", line 172, in error
value )
traits.trait_errors.TraitError: The 'run' trait of a ReadSidecarJSONOutputSpec instance must be a unicode string, but a value of 1 <class 'int'> was specified.
This error was supposed to be fixed after v.1.3.0.post2, so I'm not sure from which version the user-home-test branch was created...
Thanks.
Hi @pvelasco, we've just released 1.3.0.post3 that should take care of both issues.
I'm going to close this one in favor of the neurostars thread (https://neurostars.org/t/singularity-fmriprep-permissionerror-errno-13-permission-denied-cache/3693). Please feel free to reopen if this is still an issue.
Hi @oesteban,
I tested 1.3.0.post3 (specifying a user) and I got a different error (also related to permissions inside the docker image):
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 755, in build_workflow
err_on_aroma_warn=opts.error_on_aroma_warnings,
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 218, in init_fmriprep_wf
err_on_aroma_warn=err_on_aroma_warn,
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 516, in init_single_subject_wf
num_bold=len(subject_data['bold']))
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/base.py", line 399, in init_func_preproc_wf
bold_reference_wf = init_bold_reference_wf(omp_nthreads=omp_nthreads)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/util.py", line 121, in init_bold_reference_wf
omp_nthreads=omp_nthreads, pre_mask=pre_mask)
File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/util.py", line 302, in init_enhance_and_skullstrip_bold_wf
'epi_atlasbased_brainmask.json')),
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/registration.py", line 935, in __init__
super(Registration, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/base.py", line 76, in __init__
super(ANTSCommand, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 645, in __init__
super(CommandLine, self).__init__(**inputs)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 182, in __init__
self.load_inputs_from_json(from_file, overwrite=True)
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 495, in load_inputs_from_json
with open(json_file) as fhandle:
PermissionError: [Errno 13] Permission denied: '/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/data/epi_atlasbased_brainmask.json'
The problem is that the package_data in fmriprep gets installed with the same permissions as in /src/fmriprep, which are -rw-rw---- (only root and its group have rw access).
I have a fix for it, and will be submitting a PR shortly.
(Note: I can only submit PRs to branches, not tags, so which branch do you want me to submit the PR to? To master, since the problem is still there?)
This is surprising, why tests would then even work?. Yes, send the PR to master, please.
Sorry, I got it wrong: in the master branch, the permissions are correct.
I think the problem is the --no-cache-dir in the pip install .[all]
I tried building the Docker image for 1.3.0.post3 with
pip install .[all]
(omitting --no-cache-dir) and it runs for a regular user.
Bottom line: tag 1.3.0.post3 is fine except for the --no-cache-dir.
So I'm closing the PR. Thanks a lot for your help!
Hi, the latest release 1.3.1 is out. Please let us know if that version resolves this problem!
Hi @oesteban,
Yes, it does. It works fine.
Thanks a lot!
|
gharchive/issue
| 2019-02-25T19:04:31 |
2025-04-01T06:40:02.316580
|
{
"authors": [
"oesteban",
"pvelasco"
],
"repo": "poldracklab/fmriprep",
"url": "https://github.com/poldracklab/fmriprep/issues/1517",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
320556257
|
Cowell method fails to converge with small perturbation accelerations
🐞 Problem
This code converges normally:
from astropy import units as u
from poliastro.twobody import Orbit
from poliastro.bodies import Earth
from poliastro.twobody.propagation import cowell
r0 = [-2384.46, 5729.01, 3050.46] * u.km
v0 = [-7.36138, -2.98997, 1.64354] * u.km / u.s
initial = Orbit.from_vectors(Earth, r0, v0)
def accel(t0, state, k):
v_vec = state[3:]
norm_v = (v_vec * v_vec).sum() ** .5
return 1e-5 * v_vec / norm_v
print(initial.propagate(3 * u.day, method=cowell, ad=accel))
But changing to 1e-6 * v_vec / norm_v fails to converge:
$ python ex0.py
/home/juanlu/.miniconda36/envs/poliastro36/lib/python3.6/site-packages/scipy/integrate/_ode.py:1095: UserWarning: dop853: larger nmax is needed
self.messages.get(istate, unexpected_istate_msg)))
Traceback (most recent call last):
File "ex0.py", line 20, in <module>
print(initial.propagate(3 * u.day, method=cowell, ad=accel))
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/orbit.py", line 271, in propagate
return propagate(self, time_of_flight, method=method, rtol=rtol, **kwargs)
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/propagation.py", line 209, in propagate
r, v = method(orbit, time_of_flight.to(u.s).value, rtol=rtol, **kwargs)
File "/home/juanlu/Development/poliastro/poliastro-library/src/poliastro/twobody/propagation.py", line 103, in cowell
raise RuntimeError("Integration failed")
RuntimeError: Integration failed
🖥 Please paste the output of following commands
pip freeze | grep astropy
astropy==3.0.2
pip freeze | grep poliastro
-e git+git@github.com:Juanlu001/poliastro.git@34a9e2c83cd77e918feb0182d2fa162ba06cbd07#egg=poliastro
🎯 Goal
I would expect a zero perturbation acceleration to be equivalent to a keplerian orbit.
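Concretely, the kind of check I have in mind looks something like this (a sketch reusing the setup above, assuming the 0.x API shown in the report; with the current solver the cowell call is exactly what fails to converge when the acceleration is zero or tiny):

```python
import numpy as np
from astropy import units as u

from poliastro.bodies import Earth
from poliastro.twobody import Orbit
from poliastro.twobody.propagation import cowell

r0 = [-2384.46, 5729.01, 3050.46] * u.km
v0 = [-7.36138, -2.98997, 1.64354] * u.km / u.s
initial = Orbit.from_vectors(Earth, r0, v0)

def zero_accel(t0, state, k):
    # No perturbation at all: the result should match pure keplerian motion.
    return np.zeros(3)

perturbed = initial.propagate(3 * u.day, method=cowell, ad=zero_accel)
keplerian = initial.propagate(3 * u.day)  # default two-body propagation

print(perturbed.r - keplerian.r)  # expected to be ~0 km
```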
💡 Possible solutions
📋 Steps to solve the problem
Comment below about what you've started working on.
Add, commit, push your changes
Submit a pull request and add this in comments - Addresses #<put issue number here>
Ask for a review in comments section of pull request
Celebrate your contribution to this project 🎉
The current API fails with this case:
def accel(t0, state, k):
v_vec = state[3:]
norm_v = (v_vec * v_vec).sum() ** .5
return 0.0 * v_vec / norm_v
Can you add a corresponding test to #368 to see if the new solvers pass?
|
gharchive/issue
| 2018-05-06T02:08:36 |
2025-04-01T06:40:02.324302
|
{
"authors": [
"Juanlu001"
],
"repo": "poliastro/poliastro",
"url": "https://github.com/poliastro/poliastro/issues/367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
407889796
|
Added option to configure tensorboard docker image
Should be something like that, @mouradmourafiq what do you think ?
Thank you @mouradmourafiq !
|
gharchive/pull-request
| 2019-02-07T20:46:17 |
2025-04-01T06:40:02.358030
|
{
"authors": [
"vfdev-5"
],
"repo": "polyaxon/polyaxon-chart",
"url": "https://github.com/polyaxon/polyaxon-chart/pull/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
455932711
|
Handle unset experiment in log_artifact(s) call
Not entirely sure why self.experiment is set to None in the Experiment class init, regardless I've added a simple if statement that will allow the use of the log_artifact(s) helper methods and shouldn't impact existing functionality.
Thank you for your submission, we really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
Ryan Armstrong seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.
I think the implementation was just wrong. There are also two other methods, log_output and log_outputs, on the base tracker. I am going to clean up the client before the v0.5 release, especially these artifact methods, since the platform will be providing annotations for images, dataframes, and models, among others, to specify the type of the artifacts.
Is the v0.5 release happening soon? Should I close this?
@rcarmstrong yes we are testing and pushing hard to have a RC soon.
Just an update: I fixed the docs to reference the log_output(s) methods and fixed log_artifact(s) for the next release, since the other methods are deprecated.
|
gharchive/pull-request
| 2019-06-13T20:15:41 |
2025-04-01T06:40:02.363398
|
{
"authors": [
"CLAassistant",
"mouradmourafiq",
"rcarmstrong"
],
"repo": "polyaxon/polyaxon-client",
"url": "https://github.com/polyaxon/polyaxon-client/pull/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
954437316
|
Polybar crashes when using i3-msg for moving containers quickly
/usr/include/c++/11.1.0/bits/stl_vector.h:1045: std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::operator[](std::vector<_Tp, _Alloc>::size_type) [with _Tp = std::__cxx11::basic_string<char>; _Alloc = std::allocator<std::__cxx11::basic_string<char> >; std::vector<_Tp, _Alloc>::reference = std::__cxx11::basic_string<char>&; std::vector<_Tp, _Alloc>::size_type = long unsigned int]: Assertion '__n < this->size()' failed.
This is the log it gives me when it executes this piece of code:
i3-msg "workspace 5; append_layout ~/.config/scripts/dummy_window.json"&
termite -e "nvim -u $HOME/.config/nvim/notes.vim -c startinsert $HOME/.notes/notes"&
pid="$!"
echo ${currentWsName}
i3-msg "workspace ${currentWsName}"
while : ; do
winid="`wmctrl -lp | awk -vpid=$pid '$3==pid {print $1; exit}'`"
[[ -z "${winid}" ]] || break
done
i3-msg '[id="'$winid'"] floating enable'
wmctrl -i -r $winid -e 0,$x,50,1000,1000
What it does: it gets a container window id, moves it to the 5th workspace, takes me back to workspace 1, and then brings the container back to where I am. It happens so quickly that it completely freezes polybar and then kills it.
I'm unable to reproduce this. Could you share the following:
Your polybar config
The output of polybar -vvv
The entire polybar output (if possible using trace logging -l trace)
Closing due to inactivity
|
gharchive/issue
| 2021-07-28T03:28:17 |
2025-04-01T06:40:02.368807
|
{
"authors": [
"CodEsteban",
"patrick96"
],
"repo": "polybar/polybar",
"url": "https://github.com/polybar/polybar/issues/2474",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
802768296
|
AIFF invalid FORM chunk size fix
Hello, this is a fix to a bug with AIFF I just noticed - I forgot to update the FORM chunk header size when the ID3 size changes.
I've also updated the test so it uses a temporary file, rather than writing to the testdata directory.
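For context, the invariant the fix restores (a generic sketch of the AIFF layout, not the Rust code in this PR): the outer FORM chunk carries a 32-bit big-endian size field that counts everything after itself, so growing or shrinking the embedded ID3 chunk without rewriting that field leaves the header inconsistent.

```python
import struct

def patch_form_size(aiff_bytes: bytes) -> bytes:
    """Recompute the FORM chunk size after an embedded chunk changed length."""
    assert aiff_bytes[:4] == b"FORM", "not an AIFF/AIFC file"
    # The size field covers the form type ("AIFF"/"AIFC") plus all sub-chunks,
    # i.e. the total length minus the 8 bytes of "FORM" + the size field itself.
    new_size = len(aiff_bytes) - 8
    return aiff_bytes[:4] + struct.pack(">I", new_size) + aiff_bytes[8:]
```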
Sorry for the issues, thanks for the work.
I could add ffprobe test, since that's how I found this issue, however it is present only in files with actual audio data, the sample AIFF I provided is just minimalistic hand made one without any data other than headers.
FFprobe is fine, there already is another test that uses it anyway ;) Would it be possible to transcode and commit quiet.mp3 so we can use that?
Hello, I've added the quiet.aiff file and updated the test to include ffprobe. Also I've changed the API - read_from_aiff - wants io::Read + io::Seek so it can be used for reading from memory for example. Now read_from_aiff_file does the same as the previous API did.
|
gharchive/pull-request
| 2021-02-06T19:09:35 |
2025-04-01T06:40:02.371277
|
{
"authors": [
"Marekkon5",
"polyfloyd"
],
"repo": "polyfloyd/rust-id3",
"url": "https://github.com/polyfloyd/rust-id3/pull/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1477302676
|
bringing the terraform state hack inside the cndi binary
Currently the way terraform state is maintained is within GitHub Actions. This works fine, but the code needs to be reimplemented within each supported CI system (GitLab, etc.). What if the calls to git checkout _state, gpg --symmetric ... git add terraform.tfstate.gpg etc. were made inside the binary?
PR: https://github.com/polyseam/cndi/pull/105
|
gharchive/issue
| 2022-12-05T19:45:38 |
2025-04-01T06:40:02.373032
|
{
"authors": [
"johnstonmatt"
],
"repo": "polyseam/cndi",
"url": "https://github.com/polyseam/cndi/issues/99",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1877006807
|
Update googleAnalytics.tsx to properly track goals
I think the messages are not being logged because they had to be set as the label, instead of the value which should be a float.
As per https://developers.google.com/tag-platform/devguides/events
|
gharchive/pull-request
| 2023-09-01T08:41:46 |
2025-04-01T06:40:02.373888
|
{
"authors": [
"rihp"
],
"repo": "polywrap/evo.ninja",
"url": "https://github.com/polywrap/evo.ninja/pull/110",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
623946616
|
should these 2 links take you to the same page?
both "view 255 genes...." links go to the same page. Is this intentional?
They don't go to the same page. The first is a table of genes. The second link is a table of single allele genotypes and has an Allele column.
Of course. I was not seeing the "allele". I don't use this link much.
I wonder if it would be better to present the allele column first in this view? (it should not affect anything because the download options are not available from here).
I wonder if it would be better to present the allele column first in this view?
Sounds sensible. Shall I go ahead and do it?
Just wait in case @mah11 thinks there is a good reason not to.
I think it would be useful for this page to be a bit more obvious that it's genotypes (and a bit different from other pages), so I'm for it
no objection; no strong preference
I wonder if it would be better to present the allele column first in this view?
Should we continue order by the product as we do for gene tables?
OK...
All done.
|
gharchive/issue
| 2020-05-24T21:08:46 |
2025-04-01T06:40:02.385720
|
{
"authors": [
"ValWood",
"kimrutherford",
"mah11"
],
"repo": "pombase/website",
"url": "https://github.com/pombase/website/issues/1553",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
416525477
|
Unique URL for every commit
I think to be able to share a git-history URL pointing to a specific commit in the file history would be pretty useful. At least it's definitely useful to me, personally.
That's something I want to add too. Check #42
|
gharchive/issue
| 2019-03-03T15:04:41 |
2025-04-01T06:40:02.386998
|
{
"authors": [
"areebbeigh",
"pomber"
],
"repo": "pomber/git-history",
"url": "https://github.com/pomber/git-history/issues/114",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2368642169
|
Ensure config file directory exists before creating config file
If the config file directory doesn't already exist, the application will crash on startup while trying to create the default configuration. Fix this by creating the directory.
Drive-by: Replace os.path functions by pathlib's functions
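For illustration, the shape of the fix being described (a minimal sketch; the actual path, file name, and default contents in Pocker are assumptions here):

```python
from pathlib import Path

CONFIG_FILE = Path.home() / ".config" / "pocker" / "config.yaml"  # hypothetical location

def ensure_default_config() -> None:
    # Create the parent directory first; with parents=True and exist_ok=True
    # this is a no-op when the directory already exists.
    CONFIG_FILE.parent.mkdir(parents=True, exist_ok=True)
    if not CONFIG_FILE.exists():
        CONFIG_FILE.write_text("")  # write whatever the default config should be
```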
LGTM. However, please use semantic commit messages.
In this case it would be "fix: ensure config file dir exists".
Correct the commit message and I will merge.
No problem, I've just updated it. Feel free to update the PR title to match if you prefer.
|
gharchive/pull-request
| 2024-06-23T15:38:50 |
2025-04-01T06:40:02.393278
|
{
"authors": [
"mathieu-lemay",
"pommee"
],
"repo": "pommee/Pocker",
"url": "https://github.com/pommee/Pocker/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1663269858
|
POND-983: Add support for torch_func [upstream]
What do these changes do?
[x] first commit message and PR title follow format outlined here
NOTE: If you edit the PR title to match this format, you need to add another commit (even if it's empty) or amend your last commit for the CI job that checks the PR title to pick up the new PR title.
[ ] passes flake8 modin/ asv_bench/benchmarks scripts/doc_checker.py
[ ] passes black --check modin/ asv_bench/benchmarks scripts/doc_checker.py
[ ] signed commit with git commit -s
[ ] Resolves #?
[ ] tests added and passing
[ ] module layout described at docs/development/architecture.rst is up-to-date
There are some incidental formatting changes from running black on numpy/arr.py. I tested that a few functions work locally on pushdown (torch.mul, torch.ge, etc.), but I haven't added test cases since that would require adding an extra dependency.
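For readers unfamiliar with what torch_func refers to: this is PyTorch's __torch_function__ override protocol, which lets a non-tensor object intercept calls such as torch.mul or torch.ge. A generic sketch of the mechanism (not the Modin implementation; how the intercepted call is pushed down is only hinted at above):

```python
import torch

class LazyArray:
    """Toy wrapper that intercepts torch.* calls via __torch_function__."""

    def __init__(self, tensor: torch.Tensor):
        self.tensor = tensor

    @classmethod
    def __torch_function__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        # Unwrap any LazyArray arguments, run the real torch op, re-wrap the
        # result. A query engine would record or push down the call here instead.
        unwrapped = [a.tensor if isinstance(a, LazyArray) else a for a in args]
        return cls(func(*unwrapped, **kwargs))

a = LazyArray(torch.tensor([1.0, 2.0, 3.0]))
b = torch.mul(a, 2.0)  # dispatches to LazyArray.__torch_function__
print(b.tensor)        # tensor([2., 4., 6.])
```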
|
gharchive/pull-request
| 2023-04-11T21:53:22 |
2025-04-01T06:40:02.397815
|
{
"authors": [
"noloerino"
],
"repo": "ponder-org/modin-public",
"url": "https://github.com/ponder-org/modin-public/pull/35",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
614830839
|
Citizen participation
The link takes you to the same landing page.
This didn't need a change; you only needed to publish it in plaza pública. Closing the issue.
|
gharchive/issue
| 2020-05-08T15:58:03 |
2025-04-01T06:40:02.413818
|
{
"authors": [
"almaosorio",
"ponentesincausa"
],
"repo": "ponentesincausa/politicadedatos",
"url": "https://github.com/ponentesincausa/politicadedatos/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2047840778
|
Update README.zh-CN.md
Fix the URL of the Pontx configuration guide.
Appreciate.
|
gharchive/pull-request
| 2023-12-19T03:16:37 |
2025-04-01T06:40:02.416778
|
{
"authors": [
"gaokun",
"yuu2lee4"
],
"repo": "pontjs/pontx",
"url": "https://github.com/pontjs/pontx/pull/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
242867282
|
Feature: Explicit partial calls.
This PR implements #1771.
As promised in #1771, I've prepared a script that can be used to automatically migrate your codebase to use explicit partial calls. You can find it here:
https://gist.github.com/jemc/95969e3e2b58ddb0dede138c737907f5
This is nearly done, but I'm running into issues with SEGVs in the JIT-using compiler tests. @Praetonus, I was hoping you could take a look, since you know the most about those JIT tests.
I don't understand how the SEGVs could be caused by this kind of change. My first thought was that f43d671 might be related since it changes a bit how token objects are freed in the parser, but reverting that commit and running the tests with all the other changes included still saw the same errors.
@jemc I'll take a look.
I'm not seeing any segfaults locally, do you have a minimal case for that?
Also, it looks like you didn't update the grammar file, the CI is failing because of that.
@Praetonus figured out why I was seeing SEGVs, and filed #2047 to fix it :+1:
As soon as this passes, it's ready to merge. However, I've left the DO NOT MERGE label on because I want to clear this with @SeanTAllen first and make sure we have whatever release notes or other text we need written up to make this a release.
@jemc what platforms has the migration script been tested on?
@SeanTAllen - only my own (Fedora 22 Linux).
I suspect it should work on any Posix-compliant system with bash. I'm not sure what we should do about the Windows crowd - I hear that the latest windows has bash support, so that might work okay.
I can test on OSX sometime in the next few days (maybe this weekend).
I think it would be good to get someone to test on Windows in some fashion or come up with a Windows solution if we can.
@kulibali Would you have a moment to look at this?
Sure, I can take a look this evening.
I have created an equivalent Windows PowerShell script at https://gist.github.com/kulibali/cd5caf3a32d510bb86412f3fd4d52d0f
I am running into an issue where I have a class with an add method that is partial, which I can use the + sugar to call. Compiling with this update gives the usual error message:
C:\Users\Gordon\Dev\Pony\kiuatan\src\kiuatan\_test.pony:192:24: call is not partial but the method is - a question mark is required after this call
let next = start + str.size()
^
The script changes the code to
let next = start +? str.size()
But this doesn't compile. Is there a way to call a partial add method using the + operator? Or do I have to use add explicitly?
Discussed this in the sync call.
I will update the PR to make the +? syntax work for @kulibali.
I will rebase to fix the merge conflicts.
I will get a thumbs-up from @kulibali and @SeanTAllen on the migration script working before merging.
We'll ignore the pony-stable compilation failures in the CI, then follow up with a PR to fix pony-stable.
After merging we'll want to initiate the 0.16.0 release fairly soon, so users of the "latest release" of pony will be able to have codebases that compile, which also compile for users of the "latest master revision" of pony.
Alright, @kulibali and @SeanTAllen - this is ready for your final testing on Windows and MacOS before we merge.
Works for me 👍
@SeanTAllen gave me permission offline to merge this and fix any possible MacOS issues with the migration script later.
Here we go...
|
gharchive/pull-request
| 2017-07-14T00:38:45 |
2025-04-01T06:40:02.463031
|
{
"authors": [
"Praetonus",
"SeanTAllen",
"jemc",
"kulibali"
],
"repo": "ponylang/ponyc",
"url": "https://github.com/ponylang/ponyc/pull/2039",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
244828794
|
Feature Request: Bind to IP
I was wondering if there is any way to support binding to a particular IP address or if you might be able to address that in a future release? I have multiple domains on a single dedicated server with 16 individual addresses. It appears that Ponzu will listen to all addresses on the HTTPS port. Thanks!
We definitely should support this - and I can try to get it added this weekend.
Would another CLI flag paired with the run command suit your needs?
Are you kidding? YES. Right now that's my biggest limitation to implementing Ponzu. Whenever you are able to get to it, please know it will be very much appreciated!
Sure thing! I'll ping you here once I have it complete. Out of curiosity, would you be able to share a bit about:
how you found Ponzu
what you're building
the deployment / hosting environment you're running
Feedback like this is super helpful to make it a better product & dev experience.
Thanks,
Steve
I have been searching for a headless CMS for some time. I have not been satisfied with the shoehorning of established CMSes into this area. I also believe that website development should be moving towards application-based designs. So I basically found Ponzu through Google.
My interest is two-fold: I have need of an API interface for a small chain of movie theatres where imdb/tmdb solutions for information is overkill and limits the owners ability to customize the results.
Second, I would like to build a frontend framework that could take information from Ponzu and generate complete web applications using Angular.
Currently I use NGHTTPX as a reverse proxy to Nginx. Why? Because of its HTTP/2 Push ability that Nginx still lacks (among other reasons). My server is an OVH dedicated server with 64 GB of RAM, a 2 TB RAID array, and a modern processor (I forget the exact model right now).
I was also curious about the References add-on for Ponzu. It reminds me of JSON-LD on the surface (I haven't had a chance to dive deeply into it).
Thanks, again, for your prompt response. I was stunned by the speed with which you responded!
Hey @webeau -
This is now available in the master branch. You can get it by running $ go get -u github.com/ponzu-cms/ponzu/... and then from inside your projects, you can run $ ponzu upgrade to make sure each project has the latest core code.
Let me know if this works for you. The new --bind option for the CLI run command is documented here: https://docs.ponzu-cms.org/CLI/General-Usage/#run
Thank you for the feedback -- that sounds like a great set up and I think Ponzu would be a perfectly suitable option for your CMS / API needs. If you need any other help or have other thoughts about Ponzu, feel free to file another issue or chat with the community on slack on the #ponzu channel at https://gophers.slack.com/messages/C3TBV356D/
I had not seen JSON-LD before, but you are right - the references concept in Ponzu is very similar! I think the added bonus of Ponzu's architecture is that since the JSON responses reference same-origin data URIs, you can easily push them down with HTTP/2 Server Push. You probably already saw Ponzu's Server Push integration, but if not, it's as easy as adding a Push() method to your Content types. Here are the docs for that: https://docs.ponzu-cms.org/Interfaces/Item/#itempushable
Second, I would like to build a frontend framework that could take information from Ponzu and generate complete web applications using Angular.
If you get around to starting this, please let me know -- I'd love to see if I could help or at least follow along :) You might find @natdm's Typewriter project interesting since it helps sync your Go content types (data models) with your front-end code. There is an example using Ponzu.
|
gharchive/issue
| 2017-07-22T06:47:24 |
2025-04-01T06:40:02.472855
|
{
"authors": [
"nilslice",
"webeau"
],
"repo": "ponzu-cms/ponzu",
"url": "https://github.com/ponzu-cms/ponzu/issues/177",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
121323880
|
Lack of removeEventListener in unmount event
As title, the eventListener would never be removed once added.
Added.
df238ef
|
gharchive/issue
| 2015-12-09T19:47:10 |
2025-04-01T06:40:02.474501
|
{
"authors": [
"Chibaheit",
"vuryleo"
],
"repo": "poooi/plugin-prophet",
"url": "https://github.com/poooi/plugin-prophet/issues/58",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2366079376
|
[BUG]: No checks applicable
Description
There are two bugs I would like to fix.
When the product is added to the cart, there is no requirement for size. Without the size being selected, it is being added to the cart. I want to add a validation check for that.
After checkout, the cart should be empty. Even after checkout, the cart shows all the items which were just purchased. I would like to add a validation check for that as well.
Kindly assign me this issue under GSSoC'24 with an appropriate level.
Screenshots
Bug1
Bug2
Any additional information?
No response
What browser are you seeing the problem on?
Chrome
Congratulations, @abckhush! 🎉 Thank you for creating your issue. Your contribution is greatly appreciated and we look forward to working with you to resolve the issue. Keep up the great work! We will promptly review your changes and offer feedback. Keep up the excellent work! Kindly remember to check our contributing guidelines
Hello @pooranjoyb I would like to work on this issue under gssoc'24
Assigned to @abckhush based on fcfs basis. @masterboy376 next on the line :)
|
gharchive/issue
| 2024-06-21T09:04:33 |
2025-04-01T06:40:02.479264
|
{
"authors": [
"abckhush",
"masterboy376",
"pooranjoyb"
],
"repo": "pooranjoyb/popShop",
"url": "https://github.com/pooranjoyb/popShop/issues/259",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2689021207
|
SingleSample VCF from VDS correction
Another day, another gVCF -> VDS -> MT -> VCF -> Validation hiccup. Here the issue is that MTs that originate from VDSs don't have the FILTERS field; it simply doesn't exist. The 'single sample VCF from MT' script expected a Filters column, so we run into a failure when running the mt.filters.length() == 0 test. When we write the MT to VCF it does generate an empty FILTERS field on all variant rows, so there are no other compatibility issues with downstream tools (VEP, Hap.py, VQSR).
Couple of changes:
Drop gvcf_info completely. We can put it back in later if we need it, but for now we can't export VCFs with this field present, so strip it out early on
Add a --clean flag to the VCF-from-MT script. If used this will remove all non-variant rows, AND filter on mt.filters (but only if filters exists in the MT)
Drop mt.variant_qc before writing the VCF. I think this would have been dropped anyway, but just to make sure
Replace repartition with naive_coalesce - there are genuinely empty partitions (at least in the validation dataset), so reducing the number of partitions can be done cheaply by removing empty ones completely. Computationally this is much cheaper than doing a full repartition. I'm not sure if we'll use this route anyway, as it would be better for us to feed exact partitions into the combiner up front. A rough sketch of these steps is given below the list.
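For readers less familiar with Hail, the changes roughly amount to calls like the following. This is an illustrative sketch against the public Hail API, not the actual diff; the function name, the partition count and out_path are placeholders.

import hail as hl

def clean_mt_for_vcf(mt: hl.MatrixTable, out_path: str, clean: bool = False) -> None:
    # gvcf_info cannot be exported to VCF cleanly, so strip it out early
    if 'gvcf_info' in mt.entry:
        mt = mt.drop('gvcf_info')

    if clean:
        # Remove rows where no sample carries a non-reference call
        mt = mt.filter_rows(hl.agg.any(mt.GT.is_non_ref()))
        # Only filter on FILTERS if the field exists (VDS-derived MTs lack it)
        if 'filters' in mt.row:
            mt = mt.filter_rows(hl.len(mt.filters) == 0)

    # Drop row annotations that should not end up in the VCF
    if 'variant_qc' in mt.row:
        mt = mt.drop('variant_qc')

    # Merge empty/small partitions cheaply instead of doing a full repartition
    mt = mt.naive_coalesce(100)
    hl.export_vcf(mt, out_path)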
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 26.62%. Comparing base (69e951c) to head (e54b7dc).
Additional details and impacted files
@@ Coverage Diff @@
## main #1011 +/- ##
=======================================
Coverage 26.62% 26.62%
=======================================
Files 9 9
Lines 1705 1705
=======================================
Hits 454 454
Misses 1251 1251
:umbrella: View full report in Codecov by Sentry.
|
gharchive/pull-request
| 2024-11-25T04:13:46 |
2025-04-01T06:40:02.526677
|
{
"authors": [
"MattWellie",
"codecov-commenter"
],
"repo": "populationgenomics/production-pipelines",
"url": "https://github.com/populationgenomics/production-pipelines/pull/1011",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2751605168
|
wip: annotated typedocs progress
Quite a few misc complications still with the typedocs. Not entirely sure if the problem is how we do things, how they assume people do things, or something in between.
I did discover that pkg-utils is blocking our ability to wholesale use all the typedocs tags. Should be able to manually add them via package.config.ts until we hit an inline tag. At that point the library will need an update since the type only supports block and modifier right now.
If the groups or categories will actually work, I think we'll be in good shape, but this implementation is still just "okay".
The createMarkdownBehaviors page is a good example of the bizarre way it handles some properties.
|
gharchive/pull-request
| 2024-12-19T23:26:18 |
2025-04-01T06:40:02.541136
|
{
"authors": [
"markmichon"
],
"repo": "portabletext/editor",
"url": "https://github.com/portabletext/editor/pull/631",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
256002720
|
Using https://my.host.name/v2/ for private repo chokes
I tried to configure a private registry using the host url I docker log'ed in to - which was like
https://my.host.name/v2/
It turns out that portainer didn't like that, but gave no immediate feedback. Instead it said:
"Image from container: invalid reference format"
when I tried to add a new container.
I recommend that when you add a registry portainer tries to validate it. If that's not possible then it'd be nice for the failure message to be way more explicit. Maybe including some technical error information.
You should probably try with https://my.host.name instead of https://my.host.name/v2/
I agree, we will try to test the connectivity before creating the registry.
Yes, that is the fix. My point was that it wasn't clear what the issue was. A connectivity test would have nailed it at registry creation time - if that's possible... Thanks!
I got the same issue.
My settings were like below:
Registry URL: https://njdocker1.nj.thundersoft.com
Name: harbor
Then I want to pull an image:
Name: njdocker1.nj.thundersoft.com/public/rsyslog:1.0
Registry: harbor
I set the auth info, and if I use docker pull njdocker1.nj.thundersoft.com/public/rsyslog:1.0 I can pull the image from my own registry
You need to add the port number to the registry url... port 5000 is Docker default..
Rgds,
Neil Cresswell
@ncresswell thanks, but we use Harbor and the default port is 80. Even after adding the port it still doesn't work;
it throws "Failure: invalid reference format"
@zwx168238 are you using njdocker1.nj.thundersoft.com as the registry URL and public/rsyslog:1.0 as the image you pull ?
@deviantony I use njdocker1.nj.thundersoft.com/public/rsyslog:1.0 as the image; it was the full path
@zwx168238 you need to create a registry first using njdocker1.nj.thundersoft.com and then select that registry when creating a container / pulling an image and use public/rsyslog:1.0 as the image name.
@deviantony actually I was doing it like this
@zwx168238 and this is not working? Feel free to ping me on Slack to discuss this.
@deviantony thanks for your hard work, now it works
Hello! I would like to start my contribution to here. If no one working on this, I would like to take it :)
@asasmoyo Nobody is working on this yet, feel free to open a PR :-)
Great :)
I am thinking of doing an HTTP request to URL/v2 and then checking whether it returns 200. Do you think it is enough to just do this?
@asasmoyo yes, that should be enough for a check.
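For reference, such a connectivity test boils down to a GET against the registry's /v2/ endpoint. A rough Go sketch follows; it is not the actual Portainer code, and the package and function names are arbitrary. Docker registries answer GET /v2/ with 200, or 401 when authentication is required, so both can be treated as reachable.

package registry

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// checkRegistry probes a Docker registry by hitting its /v2/ endpoint.
func checkRegistry(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(strings.TrimRight(url, "/") + "/v2/")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusUnauthorized {
		return fmt.Errorf("registry returned unexpected status %d", resp.StatusCode)
	}
	return nil
}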
@deviantony please take a look at my PR
|
gharchive/issue
| 2017-09-07T16:48:51 |
2025-04-01T06:40:02.556430
|
{
"authors": [
"asasmoyo",
"deviantony",
"kwerle",
"ncresswell",
"zwx168238"
],
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1182",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
280592693
|
LDAP Auto create Users
Hi, LDAP authentication is great, but is it possible to have a switch to enable auto-creating users?
Same opinion! Also add support to get other attributes from LDAP and attribute mapping like Name, Surname, Email and Team Membership in relation to LDAP attributes
We can, but you would still need to define which users can access which endpoints... unless we switch “teams” to be based on an LDAP group..
Rgds,
Neil Cresswell
@zeenlym See my comment, I think this feature is also needed. Don't use only LDAP groups. I would suggest making the relation to a team flexible with LDAP attribute mapping.
@xoxys I agree with you
@xoxys please open another issue for LDAP attribute mapping. We'll track the user auto-creation feature in this one.
@AlexJakeGreen feel free to open a PR :-)
@deviantony Here is the PR #1839
Thanks!
PS: Maybe it is better to not store LDAP users in the db at all, since Portainer already uses signed cookies and user data (including roles) can be kept there. But I don't know how this approach fits further development plans, and it looks like it should be a different story.
I just had a look at the PR @AlexJakeGreen
How do you address the fact that even if you automatically creates the user in LDAP they're still unable to access any endpoint? Thus, new users are still blocked at the authentication screen.
New user receives jwt token, but yes, he is not added into any group and thus has to be managed additionally via web ui and this PR solves only a part regarding user autocreation.
So, user's permissions have to be granted in some way, and I can see several strategies,
1 Add the new user into some preexisting 'default' team - this way we skip LDAP groups and still need to manually give permissions via the web UI. Possible, but not flexible for me because all my users live in LDAP
2 Add a third hardcoded ReadOnlyRole (the first two are Admin and StandardUser) and assign it to the user - the user will be able to see resources from the start, but an admin still needs to assign them to a proper team later. This starts to be ok for me, but the community may have a different opinion, so confirmation is needed. There was an issue for RO, but it is now closed and it seems it will be implemented in a different way
3 Implement mapping ldap group -> team, which is the best, but needs much more work in both the golang and js code + still needs something for readonly access...
IMO the best solution is to implement the LDAP group <> Portainer team mapping.
What's your thoughts @ncresswell ?
Regarding read-only access, this is going to be tackled in https://github.com/portainer/portainer/issues/1259
I agree with ldap - team mapping. When a user logs in, if they are a member of a ldap group that has a corresponding portainer team associated, then autocreate their account in that team
Hy every one,
I think LDAP mapping is the best solution, I can wait for it to be avalaible.
Thanks,
|
gharchive/issue
| 2017-12-08T19:29:07 |
2025-04-01T06:40:02.566228
|
{
"authors": [
"AlexJakeGreen",
"deviantony",
"ncresswell",
"xoxys",
"zeenlym"
],
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1483",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
290913836
|
Swarm visualizer - color by service
Could we implement a color by service model, like in https://github.com/dockersamples/docker-swarm-visualizer
I know the colors are for task status at the moment, but when we show only running tasks, it could be cool to identify that my service runs on all nodes!
For information same cluster in portainer swarm visualizer :
Thanx for reading
Hi, I have tried a solution like the image posted below.
The idea is to check service status using the task background color and status label, and use the border color to identify the service.
The function used to get the color for a service is the same as the one used in docker-swarm-visualizer, which uses the service id.
Could this example be a good solution for this feature?
@maocorte looks good, I'll have a look at the PR.
I'm sorry for the necro-posting, but this is the only thing I found about those frame colors that are literally driving me nuts.
From the PR it seems like they mean pretty much nothing (thanks for making me go nuts over this for weeks, btw 🥹)
Checking the current code, the function generating the random colour from the container ID is gone, and in its place I found this
visualizerTaskBorderColor
https://github.com/portainer/portainer/blob/develop/app/docker/views/swarm/visualizer/swarmvisualizer.html#L110
which I can't honestly find what it should mean.
So I'm pretty much back to square one.
What do the colored frames mean?
Kindly, my brain just goes spinning at 1000% and gets frustrated every time I drop into that view 😭, so many visual clues and not an explanation is just torture :(
|
gharchive/issue
| 2018-01-23T16:56:01 |
2025-04-01T06:40:02.572717
|
{
"authors": [
"WTFKr0",
"deviantony",
"maocorte",
"unlucio"
],
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/1597",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
541780120
|
Container not being listed in Container tab
Bug description
When I search for a container after I restarted it, it disappears in the container tab for a few minutes. docker ps seems to still find it.
Expected behavior
Portainer should always show every container that docker ps can show, given I have enough privilege for that container.
Steps to reproduce the issue:
Go to a container
Click on restart
Search it in the container tab
Can't find it
Technical details:
Portainer version: 1.23.0
Docker version (managed by Portainer): 18.09.1
Platform (windows/linux): Linux
Command used to start Portainer (docker run -p 9000:9000 portainer/portainer):
docker run -d -p 9000:9000 -p 8000:8000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
Browser: Firefox 71.0
Additional context
This problem appeared after the update to 1.23.0 and activating the Role Access Extension.
Update: It seems like Portainer has issues displaying containers with healthcheck in a starting state.
It immediately showed up after I stopped the container with the CLI.
Closing as I believe this is a duplicate of #3146
However I think you could mention that "This problem appeared after the update to 1.23.0 and activating the Role Access Extension.", as this is an interesting observation that may help to fix this bug.
|
gharchive/issue
| 2019-12-23T14:34:33 |
2025-04-01T06:40:02.578768
|
{
"authors": [
"itsconquest",
"qmager"
],
"repo": "portainer/portainer",
"url": "https://github.com/portainer/portainer/issues/3479",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
2587196278
|
Include SVG as available output format for saving table to file
Prework
[X ] Read and abide by the great_tables code of conduct and contributing guidelines.
[ X] Search for duplicates among the existing issues (both open and closed).
Proposal
The ability to save tables to the SVG file format would be very useful for including GreatTables into composite scientific figures! Although I am ignorant of how large an undertaking this may be.
SVG output would be great! I am using Typst and it cannot import pdfs only svg.
I added the following to _save_screenshot in _export.py; it does the trick. Kluge fix, so not creating a pull request.
if path.endswith(".svg"):
    # Get the table HTML content
    table_html = driver.find_element(by=By.TAG_NAME, value="table").get_attribute('outerHTML')

    # # Import svg converter
    # _try_import(name="svgwrite", pip_install_line="pip install svgwrite")
    # import svgwrite
    # # Create SVG drawing
    # dwg = svgwrite.Drawing(path, size=(f"{required_width}px", f"{required_height}px"))
    # # Create foreignObject element with proper namespace
    # foreign = dwg.add(dwg.g().add(svgwrite.container.SVG(
    #     insert=(0, 0),
    #     size=(required_width, required_height)
    # )))
    # foreign.set_desc(title='Table HTML', desc=table_html)
    # # Save the SVG
    # dwg.save()

    # Get any styles from the head
    styles = driver.execute_script("""
        var styles = '';
        // Get stylesheet rules
        var styleSheets = document.styleSheets;
        for(var i = 0; i < styleSheets.length; i++) {
            try {
                var rules = styleSheets[i].cssRules;
                for(var j = 0; j < rules.length; j++) {
                    styles += rules[j].cssText + '\\n';
                }
            } catch (e) {
                console.log('Error reading stylesheet:', e);
            }
        }
        // Get computed styles for each element
        var table = document.getElementsByTagName('table')[0];
        var elements = table.getElementsByTagName('*');
        var computedStyles = {};
        // Important style properties to capture
        var styleProps = [
            'font-family', 'font-size', 'font-weight', 'color',
            'background-color', 'border', 'border-color', 'border-width',
            'border-style', 'padding', 'margin', 'text-align', 'vertical-align',
            'width', 'height', 'display', 'position', 'top', 'left',
            'border-collapse', 'border-spacing', 'line-height',
            'border-top', 'border-bottom', 'border-left', 'border-right',
            'padding-top', 'padding-bottom', 'padding-left', 'padding-right',
            'background', 'white-space', 'text-decoration', 'font-style'
        ];
        // Capture styles for each element with a class
        for (var i = 0; i < elements.length; i++) {
            var el = elements[i];
            if (el.className) {
                var computed = window.getComputedStyle(el);
                var classStyles = '';
                styleProps.forEach(function(prop) {
                    var value = computed.getPropertyValue(prop);
                    if (value) {
                        classStyles += prop + ':' + value + ';';
                    }
                });
                if (classStyles) {
                    styles += '.' + el.className.replace(/ /g, '.') + '{' + classStyles + '}\\n';
                }
            }
        }
        return styles;
    """)

    # Function to escape XML attribute content
    def escape_xml_attr(s):
        return (s.replace("&", "&amp;")
                 .replace("<", "&lt;")
                 .replace(">", "&gt;")
                 .replace('"', "&quot;")
                 .replace("'", "&#39;"))

    # Process table HTML to escape attributes properly
    import re

    def escape_html_attrs(html):
        def replace_attr(match):
            attr_name = match.group(1)
            attr_value = match.group(2)
            escaped_value = escape_xml_attr(attr_value)
            return f'{attr_name}="{escaped_value}"'
        # Find and escape attribute values
        pattern = r'(\w+)="([^"]*)"'
        return re.sub(pattern, replace_attr, html)

    # Clean up table HTML first
    table_html = (
        table_html
        .replace("&nbsp;", "&#160;")  # Replace HTML entities with XML entities
        .replace("<em>", "<span style='font-style: italic'>")  # Convert em to styled span
        .replace("</em>", "</span>")  # Close styled span
    )
    # Then escape attributes
    table_html = escape_html_attrs(table_html)

    # Create SVG wrapper with embedded styles
    svg_content = f'''<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     width="{required_width}" height="{required_height}">
  <foreignObject width="100%" height="100%">
    <div xmlns="http://www.w3.org/1999/xhtml">
      <style>
        /* Base table styles */
        table {{
          border-collapse: collapse;
          border-spacing: 0;
          font-family: system-ui, -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Ubuntu, "Helvetica Neue", sans-serif;
          width: 100%;
        }}
        th, td {{
          padding: 8px;
          border: 1px solid #ddd;
        }}
        th {{
          background-color: #f8f9fa;
          font-weight: bold;
        }}
        {styles}
      </style>
      {table_html}
    </div>
  </foreignObject>
</svg>
'''

    # Final cleanup of any remaining issues
    svg_content = (
        svg_content
        .replace("&amp;#", "&#")        # Fix double-escaped numeric entities
        .replace("&amp;amp;", "&amp;")  # Fix double-escaped ampersands
    )

    # Save the SVG file
    with open(path, 'w', encoding='utf-8') as f:
        f.write(svg_content)
|
gharchive/issue
| 2024-10-14T22:36:30 |
2025-04-01T06:40:02.662237
|
{
"authors": [
"MarekOzana",
"claysmyth"
],
"repo": "posit-dev/great-tables",
"url": "https://github.com/posit-dev/great-tables/issues/494",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2036427119
|
Unable to retrieve the complete list of server APIs: certificates.k8s.io/v1 with default deployment
I'm using the default options in deploy/k8s and only overriding the KUBERNETES_SERVICE_{HOST,PORT} environment variables but getting the following error about a minute or so after the pod starts:
E1211 19:49:34.608041 1 leaderelection.go:332]
{
"level":"ERROR",
"ts":"2023-12-11T19:44:49.789Z",
"logger":"controller-runtime.source.EventHandler",
"caller":"source/kind.go:68",
"msg":"failed to get informer from cache",
"error":"failed to get API group resources:
unable to retrieve the complete list of server APIs:
certificates.k8s.io/v1:
Get \"https://API_SERVER:6443/apis/certificates.k8s.io/v1\":
dial tcp: lookup API_SERVER: i/o timeout"
}
I know that pods are able to talk to the api server because I have a running deployment of kube-state-metrics that also overrides the same env vars with the same values.
I'm running kubernetes 1.28.3 if that's helpful.
then I would assume you misconfigured your environment variables 😉
can you show me how you did that ?
also, it's typically not needed to customize the KUBERNETES_SERVICE_{HOST,PORT} envs, because K8s sets those automatically. can you try to run it again without modifying these envs ?
closing without further info
|
gharchive/issue
| 2023-12-11T19:58:37 |
2025-04-01T06:40:02.680034
|
{
"authors": [
"clementnuss",
"onetwopunch"
],
"repo": "postfinance/kubelet-csr-approver",
"url": "https://github.com/postfinance/kubelet-csr-approver/issues/211",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
193929390
|
Add Zesty.io
[x] verified that the CMS I'm adding is still maintained.
[x] read CONTRIBUTING.md.
[x] did not generate README.md.
Thank you, @shrunyan!
|
gharchive/pull-request
| 2016-12-07T00:57:28 |
2025-04-01T06:40:02.706100
|
{
"authors": [
"mutewinter",
"shrunyan"
],
"repo": "postlight/awesome-cms",
"url": "https://github.com/postlight/awesome-cms/pull/35",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1469178659
|
Backup fails if an artifact is missing
Hi,
just tried the backup and it fails, I think as I've currently no lists defined.
Run potatoqualitee/fossilize@v1
Run if ("true" -eq $true) {
VERBOSE: Running script
VERBOSE: Exporting all to ./backups
VERBOSE: Going to https://ruhr.social/api/v1/accounts/verify_credentials
VERBOSE: Processing follows
VERBOSE: Exporting following
VERBOSE: Going to https://ruhr.social/api/v1/accounts/109303980548626757/following?limit=80
Directory: /home/runner/work/TwitterInflux/TwitterInflux/backups
UnixMode User Group LastWriteTime Size
-------- ---- ----- ------------- ----
-rw-r--r-- runner docker 11/30/2022 07:23 1079
VERBOSE: Processing lists
VERBOSE: Exporting lists
VERBOSE: Going to https://ruhr.social/api/v1/lists
Get-ChildItem: /home/runner/work/_actions/potatoqualitee/fossilize/v1/main.ps1:138
Line |
138 | Get-ChildItem -Path $filepath
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| Cannot find path
| '/home/runner/work/TwitterInflux/TwitterInflux/backups/lists.csv'
| because it does not exist.
Error: Process completed with exit code 1.
I think that if I manually select "all without lists" in my config it would run, but then I'd have to remember to change this after I've started to use the list feature.
@potatoqualitee - agreed, having no lists causes an issue.
thank you! just fixed and rereleased v1. so you can just rerun the same workflow. sorry for the potato qualitee, I had too much test data apparently 😅
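For reference, the failure was simply a Get-ChildItem on an export file that was never written; a guard along these lines (a hypothetical sketch, not the actual patch in main.ps1) avoids it when an account has no lists:

# Hypothetical guard around the export listing
if (Test-Path -Path $filepath) {
    Get-ChildItem -Path $filepath
} else {
    Write-Verbose "Nothing was exported to $filepath (no items of this type), skipping"
}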
Worked perfectly for me - thank you! ⭐
Works as designed, Thank you.
|
gharchive/issue
| 2022-11-30T07:33:54 |
2025-04-01T06:40:02.731063
|
{
"authors": [
"Callidus2000",
"jpomfret",
"potatoqualitee"
],
"repo": "potatoqualitee/fossilize",
"url": "https://github.com/potatoqualitee/fossilize/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
106979008
|
Introduce metrics.context
Any properties bound to metrics.context will be merged into options for any calls to the service API
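For illustration, typical usage looks like this (a sketch based on the ember-metrics service API of the time; the context key and values are only examples):

import Ember from 'ember';

export default Ember.Route.extend({
  metrics: Ember.inject.service(),

  afterModel() {
    // anything bound to metrics.context is merged into the options
    // passed to every adapter call made through the service
    this.set('metrics.context.userName', 'Jimbo');
    this.get('metrics').trackPage({ page: '/posts', title: 'Posts' }); // userName is merged in
  }
});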
Thanks!
|
gharchive/pull-request
| 2015-09-17T13:02:28 |
2025-04-01T06:40:02.732175
|
{
"authors": [
"opsb",
"poteto"
],
"repo": "poteto/ember-metrics",
"url": "https://github.com/poteto/ember-metrics/pull/36",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
100482117
|
Package release dates added to README.md
Updated README.md to reflect the version releases.
Release version is taken from /releases page
What benefit is this ?
I'm not sure that this adds anything useful to the project, but I think we should consider making "releases" a bit more documented using the features on Github to do so - if you look at Bootstrap's Release page, this contains a lot of useful information. Maybe we can tag 0.14.1 as a fix for the bower version issue and then use that to draft a release?
This idea is from changelog files, which keep track of major things accomplished in chronological order and are used by many open-source projects.
Another way to achieve the same thing is to tag a version as described by @glenpike
I'm going to close this as I would favour using the releases page more.
|
gharchive/pull-request
| 2015-08-12T07:12:49 |
2025-04-01T06:40:02.827159
|
{
"authors": [
"exussum12",
"glenpike",
"nareshv"
],
"repo": "powmedia/backbone-forms",
"url": "https://github.com/powmedia/backbone-forms/pull/476",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
544284041
|
Broken Link in README
'developer guides' link in README.md, https://poynt.com/tag/guides/, is broken.
Should it link to https://poynt.github.io/developer-docs/ instead?
@mll11 yup good point, it should
|
gharchive/issue
| 2019-12-31T22:53:31 |
2025-04-01T06:40:02.841665
|
{
"authors": [
"charlesfeng",
"mll11"
],
"repo": "poynt/poynt-python",
"url": "https://github.com/poynt/poynt-python/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
849869107
|
Update kubernetes/master/golang/master build
This is an automated PR via build-bot
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: ltccci
To complete the pull request process, please assign after the PR has been reviewed.
You can assign the PR to them by writing /assign in a comment when ready.
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
|
gharchive/pull-request
| 2021-04-04T12:53:46 |
2025-04-01T06:40:02.855394
|
{
"authors": [
"ltccci"
],
"repo": "ppc64le-cloud/builds",
"url": "https://github.com/ppc64le-cloud/builds/pull/1918",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1284812516
|
string index out of range
Describe the bug
When I try to build, it gives this error
10:56:40 AM: Found section: 2-study/books
10:56:40 AM: Traceback (most recent call last):
10:56:40 AM: File "__site/convert.py", line 34, in <module>
10:56:40 AM: nodes[doc_path.abs_url] = doc_path.page_title
10:56:40 AM: File "/opt/build/repo/__site/utils.py", line 209, in page_title
10:56:40 AM: [
10:56:40 AM: File "/opt/build/repo/__site/utils.py", line 210, in <listcomp>
10:56:40 AM: item if item[0].isupper() else item.title()
10:56:40 AM: IndexError: string index out of range
10:56:41 AM: Building site...
Small Reproducible Example
https://github.com/AzadKshitij/My-Stuff
Steps to Reproduce the Bug or Issue
Check the link and repository
try cloning and hosting on Netlify
Expected behavior
Should build the pages properly.
Screenshots or Videos
What I got
What I have
Operating System Version
Windows.
Visual Studio Code Version
Latest.
Additional context
No response
Hi, I see that your site is up and running. What exactly was the issue? Did the build fail or was it some other error?
My build was failing; then I tried deleting the folders and recreating them, and it worked.
I would guess there were some symbolic link issues?
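For anyone who hits the same traceback: the crash comes from indexing an empty string (item[0]) while title-casing path components, which can happen with odd or empty path segments. A defensive tweak along these lines (a sketch only, not the actual fix in utils.py) would avoid the IndexError:

# Illustrative, standalone version of the list comprehension from utils.py
def page_title(components: list[str]) -> str:
    return " ".join(
        item if item and item[0].isupper() else item.title()
        for item in components
        if item  # skip empty path components, which caused the IndexError
    )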
|
gharchive/issue
| 2022-06-26T05:42:04 |
2025-04-01T06:40:02.865434
|
{
"authors": [
"AzadKshitij",
"ppeetteerrs"
],
"repo": "ppeetteerrs/obsidian-zola",
"url": "https://github.com/ppeetteerrs/obsidian-zola/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1101639031
|
What is the format of the custom Telegram forwarding address?
I tried https://forward.address/bot<key>, but it reports it as invalid
Thanks
Could it be that your reverse proxy only supports the GET form?
Could it be that your reverse proxy only supports the GET form?
I'm using a CF worker; I tried it and it should support POST
(By the way, a small complaint: on MIUI 12 20.6.18 / Xiaomi Mi 6, the TG API input box is a password control with no paste option, so it can only be typed by hand)
Try https://forward.address/bot/sendMessage
I'm not sure how your reverse proxy is configured. The current mechanism: when the address starts with http, the API path is no longer concatenated, so you need to fill in the complete API endpoint address
Try https://forward.address/bot/sendMessage
It worked
Thank you very much for your answer
It was indeed necessary to append ~/sendMessage
|
gharchive/issue
| 2022-01-13T12:20:41 |
2025-04-01T06:40:02.892888
|
{
"authors": [
"lincww",
"pppscn"
],
"repo": "pppscn/SmsForwarder",
"url": "https://github.com/pppscn/SmsForwarder/issues/104",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2675294934
|
Fix multiple issues related to mania key count stats population
Opening this for self testing / review
Closes #295
[x] Make sure this works as expected.
This should be good for review now. I'm running this in production as a final test run (after manually testing on users in both active and inactive state, confirming that active users see no-change and inactive users see their pp/acc getting updated).
I did take this opportunity to make the logic conform close to the main user stats processor, so the diff might be a bit hard to read. Just viewing the few files directly and ignoring the diff, treating as if it's a new submission, may work better.
|
gharchive/pull-request
| 2024-11-20T10:10:02 |
2025-04-01T06:40:02.894853
|
{
"authors": [
"peppy"
],
"repo": "ppy/osu-queue-score-statistics",
"url": "https://github.com/ppy/osu-queue-score-statistics/pull/302",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
152856811
|
Create main menu
For starters, it could just consist of a "press to start" message.
I will create the start menu.
I have begun creating the start menu. There are some issues with importing font and some features from the Screen class like SetScreen.
Did you get it working in the tutorial?
I am implementing a version of the tutorials so that it fits our project.
|
gharchive/issue
| 2016-05-03T19:36:29 |
2025-04-01T06:40:02.965656
|
{
"authors": [
"carllei",
"pqbyte"
],
"repo": "pqbyte/coherence",
"url": "https://github.com/pqbyte/coherence/issues/7",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
530490947
|
希望能支持 Rime -- 鼠须管/或其他中文输入法的输入法内中英文切换状态显示
安装 鼠须管 和 ShowyEdge 配合有逻辑矛盾,即 shwoedge 并不能显示鼠须管内中英文切换,实际上,rime 本身自动化在不同程序间切换,并替代系统英文输入法是简单又好用的,但为了配合 showyedge,强行出现了这样的用法:
https://pbeta.me/macos-from-zero-3/
关闭鼠须管的英文输入
把中英文输入法之间切换行为独特强调出来
这样才能够使用ShowyEdge的功能
如题,希望能有这样的功能
https://github.com/rime/squirrel/issues/146
我在使用搜狗拼音输入法,期望还能区别出搜狗拼音的中文状态和英文状态,谢谢。
目前只能给搜狗输入法整体设定一个色条,无法给搜狗拼音的中文状态和英文状态分别设定色条。
@Justsoos @yetangye
我用的搜狗输入法,我也想让其区分下搜狗输入法下按 shift 切换中英文的状态。
请这个问题你们有解决办法吗?
|
gharchive/issue
| 2019-11-29T21:28:03 |
2025-04-01T06:40:02.972948
|
{
"authors": [
"GanZhiXiong",
"Justsoos",
"yetangye"
],
"repo": "pqrs-org/ShowyEdge",
"url": "https://github.com/pqrs-org/ShowyEdge/issues/16",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
157336179
|
'iohyve create' doesn't work correctly
iohyve create doesn't work without an explicit pool when no guest is present.
That was my fault (introduced in #165), because I try to get the pool by looking for the first iohyve guest and take the same pool for the creation of the new one.
To fix this problem there are two (probably more) ways to fix it:
Use the old method in create (grep for the first appearance of 'iohyve' in zfs list)
Add a zfs attribute (like iohyve=yes) to all datasets used for iohyve and then search for that
The 'old' method should work in most cases, but can fail in edge cases: for example, someone has a dataset for iohyve backups on another pool, named anotherpool/iohyve-backups, which is not intended to be used for iohyve, but the old method would use it to create the guest on it.
The 'zfs attribute' method would require the attribute to be present on the iohyve dataset, so there must be some kind of migration to support existing installations.
Like when no iohyve:pool attribute is found, try to find the pool with the 'old' method and add the attribute to it.
So what is the way to go?
I, too, have fallen victim to this trap. No worries.
I say let's add the attribute. I have done similar things things in the past where we included a "fix me" function. I think there's still an archive of that on the wiki.
Input is highly appreciated from the peanut gallery.
What would be the expected behaviour if you would have multiple datasets with the iohyve=yes set?
I can imagine that this would result in an unexpected behaviour.
The example you name your self is already a point of attention, what if you have 3 pools on a machine which one is the backup. Which pool would you decide to use if trying to detect, the attributes will be copied to the backup too in our backup system.
One of the main reasons I prefer to manual define the pool.
"If you pay with peanuts, you get monkeys."
Good point. Haven't thought about that.
So we need a good way to define a default pool or we have to make the pool argument during create required.
If we are using a property to define a dataset, we must make sure that it is unique on a machine.
It would be a possibility to set iohyve=yes (or iohyve:default=yes to be more expressive) in the setup command. When it is set on one pool, it is removed from the other (all other) dataset(s).
Also the clone command must remove it from the clone.
I think it is a lot easier to have a default dataset than to specify it on every create.
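For illustration, marking and finding a default dataset with a ZFS user property could look roughly like this (a sketch of standard zfs commands; the pool name tank is a placeholder and iohyve:default is the property name proposed above):

# Mark the chosen dataset as the iohyve default during `iohyve setup`
zfs set iohyve:default=yes tank/iohyve

# Later, find the dataset that carries the property (ignore inherited/unset values)
zfs get -H -s local -o name iohyve:default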
The create command need some changes anyway, to support a guest to live in any dataset the user wants (see Roadmap).
We should keep that at the back of our minds.
The changes I made in 'ioh-guest' should already allow that for most of the other commands.
I say let's use a default pool. Most setups only have one pool anyway.
"If you pay with peanuts, you get monkeys."
Lol, I love it.
So, when the user runs iohyve setup pool=poolname the first time, it's assumed that it's going to be their primary pool.
Because of this, I added the following to the __guest_create() function:
if [ -z "$pool" ]; then
  local pool="$(zfs list -H | grep /iohyve/ISO | cut -f1 -d/)"
fi
I added this because I'm getting ready to submit v0.7.6 to ports.
This fix doesn't need to stay, but it gets the job done.
If there are no more comments from the monkeys in the gallery, I'll go ahead and close this issue out. If you feel this solution is not enough, I have no problem reopening.
For now the solution is fine, the over all create process could be nicer, I'll try to work out a concept first to demonstrate my idea.
:+1:
|
gharchive/issue
| 2016-05-28T12:20:17 |
2025-04-01T06:40:02.982620
|
{
"authors": [
"mariusvw",
"moogle19",
"pr1ntf"
],
"repo": "pr1ntf/iohyve",
"url": "https://github.com/pr1ntf/iohyve/issues/172",
"license": "bsd-2-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1569757320
|
feat: #536 added a reference about the service request deadline
I adjusted it to look like this:
|
gharchive/pull-request
| 2023-02-03T12:20:03 |
2025-04-01T06:40:02.991092
|
{
"authors": [
"andregiachini",
"zorteaadriano"
],
"repo": "practice-uffs/mural",
"url": "https://github.com/practice-uffs/mural/pull/542",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
802647428
|
Include subheadings in navigation tree
Describe the solution you'd like
Pages in my index toctree (with maxdepth > 1) should include subheadings in the navigation menu, similar to alabaster. Having this as optional behavior would be fine, as long as it's available to me. I've written longer pages, and would like the navigation tree to make it easier to jump to the right section without having to first go to that page, then click to expand the content menu, then click on the section.
Describe alternatives you've considered
Tried adding explicit toctree entries, but that does not work for linking to headings/anchors rather than documents, and the only other alternative is making more documents, but that seems like overkill, especially for things like autodoc content.
Additional context
Building docs for aioitertools: https://github.com/omnilib/aioitertools/tree/main/docs
index.rst includes api.rst in a toctree with maxdepth 2. alabaster includes the subheadings from api.rst in the nav tree, but furo doesn't.
With alabaster:
With furo:
Thanks for filing this issue!
I'm not going to be adding this since separating the "page structure" from "site structure" in Furo, is an explicit design choice. The left sidebar is for presenting site structure and the right sidebar is for presenting the page content hierarchy. I find the mixing of the two in other Sphinx themes to be a suboptimal experience when looking for information.
It seems you have since implemented this on your own documentation for Furo, but I see no way of activating it.
Those are shown because they are subpages, ie, TOC within the customisation/index.md page that was included in the main index.md page.
See https://github.com/pradyunsg/furo/discussions/318#discussioncomment-1706902.
|
gharchive/issue
| 2021-02-06T08:47:51 |
2025-04-01T06:40:02.997695
|
{
"authors": [
"jreese",
"pradyunsg",
"rlaphoenix"
],
"repo": "pradyunsg/furo",
"url": "https://github.com/pradyunsg/furo/issues/79",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
257737091
|
Validate against special characters in Message
If a message contains special characters it may be badly encoded when sent over SMS (depending on the carrier and device).
There are some characters which are frequently bad, easily replaceable with ascii characters and are hard to spot by eye.
This commit adds a validator to prevent them from being saved.
We said we'd do this in a separate ticket but it ended up being so straightforward I thought I'd go for it.
So, the reason why we wanted a separate ticket for this is that not all channel types have issues with special characters. So for say SMS, we probably don't want special characters, but for an IP messaging channel, we probably do.
So I don't think that we want an always on thing that we cannot disable.
Ah yeah that's true. I guess in this case I'd argue that the benefit to SMS users of rejecting these 4 characters is significantly greater than the benefit to IP users of including the characters (they all have equivalents that look very similar). Especially given that I think the vast majority of messages go over SMS at the moment (?)
@alexmuller Good point. Yeah I think it's fine because we're only targeting specific characters, not all special characters.
Thanks! I agree - I think when we've got the majority of messages not being sent over SMS we could probably remove this quite easily :)
|
gharchive/pull-request
| 2017-09-14T14:19:15 |
2025-04-01T06:40:03.000940
|
{
"authors": [
"alexmuller",
"rudigiesler"
],
"repo": "praekelt/seed-stage-based-messaging",
"url": "https://github.com/praekelt/seed-stage-based-messaging/pull/99",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1810253038
|
Moon Clip Animation[GSSOC'23]
Describe the project you want to add with tech stack
Hello Sir,
It is my new CSS animation project in which I have made an animation of a MOON CLIP with night views in it. I want to add this project to GSSOC'23.
Expected behavior
I have made it using HTML & CSS. Here we can see a moon clip.
Screenshots (optional)
Additional context (optional )
Please assign me this issue under GSSOC'23.
@MohitGupta121 @TusharKesarwani Please assign me this issue under GSSOC'23.
|
gharchive/issue
| 2023-07-18T16:06:44 |
2025-04-01T06:40:03.010868
|
{
"authors": [
"apu52"
],
"repo": "pranjay-poddar/Dev-Geeks",
"url": "https://github.com/pranjay-poddar/Dev-Geeks/issues/3810",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
979051590
|
./gdrive: No such file or directory
I have extracted gdrive_2.1.1_linux_amd64.tar.gz on my ubuntu 20.04 but when I try to run gdrive with executable permissions I get this error.
You can use i386 binary instead amd64. It works fine for me.
wget -q -O /tmp/gdrive.tar.gz "https://github.com/prasmussen/gdrive/releases/download/2.1.1/gdrive_2.1.1_linux_386.tar.gz"
mkdir /tmp/gdrive && tar -xf /tmp/gdrive.tar.gz -C /tmp/gdrive
chmod +x /tmp/gdrive/gdrive
/tmp/gdrive/gdrive download 1o1qjRgkJtnF_8uGB1z6MRsQUjWinHUsw --recursive --path ~/Downloads
(Possible reason) The docs say this program is statically linked, which is false for the amd64 build. It depends on... musl, and not glibc, which is the default on many Linux distros (including Ubuntu, Arch, etc...). Use the i386 build, which is statically linked.
I confirm that it needs musl, I had the same problem with Arch Linux
strings gdrive |egrep "[.]so$|[.]so[.]1"
yep: sudo apt install musl
strace gdrive help didn't help, but I think there is some command that lets us see the dependencies rather than forcing it through strings
ldd is the command you are looking for I think.
I spent nearly an hour finding this problem.
All those workarounds always tell me to "chmod +x gdrive", which leads to nothing.
THIS confirms my sanity, that indeed it's a real bug.
@ma3yta yes, thank you, the i386 binary works just fine. I'm temporarily using this; any downside compared to amd64?
@marhensa no downsides. the program just uses 32-bit (i386) CPU instructions instead of 64-bit (amd64) instructions which is OK for x86_64 processors.
I think we should keep the issue open, until the readme at least mentions this.
I have the same issue on 2.1.1
same issue
same issue
root@localhost:~# gdrive about
-bash: /usr/local/bin/gdrive: No such file or directory
root@localhost:~# ldd gdrive
ldd: ./gdrive: No such file or directory
root@localhost:~# ldd /usr/local/bin/gdrive
linux-vdso.so.1 (0x00007ffcaacbe000)
libc.musl-x86_64.so.1 => not found
root@localhost:~#
root@localhost:~# apt install -y musl
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
musl
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 418 kB of archives.
After this operation, 800 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian bullseye/main amd64 musl amd64 1.2.2-1 [418 kB]
Fetched 418 kB in 0s (2947 kB/s)
Selecting previously unselected package musl:amd64.
(Reading database ... 30457 files and directories currently installed.)
Preparing to unpack .../musl_1.2.2-1_amd64.deb ...
Unpacking musl:amd64 (1.2.2-1) ...
Setting up musl:amd64 (1.2.2-1) ...
Processing triggers for man-db (2.9.4-2) ...
root@localhost:~#
root@localhost:~#
root@localhost:~# gdrive about
Authentication needed
Go to the following url in your browser:
Confirm same issue, on a Linux 3.10 compute cluster. Musl didn't fix, using i386 seems fine.
Confirm same issue on Ubuntu x86_64 machine Linux 5.15.0-46-generic
Installing musl (via apt install musl) fixed the issue
|
gharchive/issue
| 2021-08-25T11:51:32 |
2025-04-01T06:40:04.487496
|
{
"authors": [
"Delta-dev-99",
"RemiDesgrange",
"ahmafi",
"dhairya137",
"duzhor",
"hiraksarkar",
"jonahpearl",
"ma3yta",
"marhensa",
"pcstuff",
"sergeken",
"t31k3",
"wxthss82"
],
"repo": "prasmussen/gdrive",
"url": "https://github.com/prasmussen/gdrive/issues/597",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2224296409
|
🛑 ChatGPT Beyond 2021 is down
In 1c0481a, ChatGPT Beyond 2021 (https://chatgpt-beyond-2021.onrender.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: ChatGPT Beyond 2021 is back up in 82d4722 after 22 minutes.
|
gharchive/issue
| 2024-04-04T02:12:09 |
2025-04-01T06:40:04.490283
|
{
"authors": [
"prateekralhan"
],
"repo": "prateekralhan/status",
"url": "https://github.com/prateekralhan/status/issues/707",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1013087590
|
Campus Placements Analysis & Prediction.
Define You:
[x] Hacktoberfest2021 Participant
[x] Contributor
Is your feature request related to a problem? Please describe.
To provide a script for campus placement analysis & prediction using a machine learning algorithm in Python.
Approach to be followed:
Implementing the Decision Tree Regressor algorithm for a campus placement analysis & prediction project, which will include (a minimal sketch is given after the list):
Data preprocessing & exploring,
Data visualization,
Data training &
Model creation.
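A minimal sketch of what such a pipeline could look like with scikit-learn (the file name and column names are placeholders, not from an actual dataset):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

# Data preprocessing & exploring
df = pd.read_csv("placement_data.csv")          # placeholder file name
df = df.dropna(subset=["salary"])               # keep only placed students with a salary
for col in df.select_dtypes(include="object"):  # encode categorical columns
    df[col] = LabelEncoder().fit_transform(df[col])

# Data training
X = df.drop(columns=["salary"])
y = df["salary"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Model creation
model = DecisionTreeRegressor(max_depth=5, random_state=42)
model.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))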
@prathimacode-hub Kindly assign me for this issue.
Issue assigned. @ayushi424
|
gharchive/issue
| 2021-10-01T08:56:55 |
2025-04-01T06:40:04.505465
|
{
"authors": [
"ayushi424",
"prathimacode-hub"
],
"repo": "prathimacode-hub/PyAlgo-Tree",
"url": "https://github.com/prathimacode-hub/PyAlgo-Tree/issues/161",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1478761996
|
re7 vr controls
oculus controllers with Oculus Touch Button Prompts:
how to do a 180 degree turn? this was necessary in beginning hour demo to get dirty coin
how to sort inventory in trunk? button mod prompt gives xbox view button
any others not mentioned on getting started guide png?
In order to do a 180 turn, you hit down on the left stick, and press circle if you’re on PS4, and B if you’re on Xbox One. On PC, the keyboard button is the X button. I cannot find the control combination on the oculus controllers.
While the cursor is in the item box, you press either the view (xbox one) or the touchpad button (ps4) to auto-sort items. Not sure on PC.
https://github.com/praydog/REFramework/discussions
|
gharchive/issue
| 2022-12-06T10:28:40 |
2025-04-01T06:40:04.513374
|
{
"authors": [
"Buzbee",
"praydog"
],
"repo": "praydog/REFramework",
"url": "https://github.com/praydog/REFramework/issues/601",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1615979852
|
Duplicate mosque listing on production-2.
The issue in words:
In production, mosque IDs 44488 and 66818 are for the same mosque: "Miami Beach Mosque" and "Ummah Of Miami (Miami Beach Masjid)", both with the address 7904 West Dr, North Bay Village, FL 33141. When searching for a mosque near Miami, FL, United States, the two listings named "Miami Beach Mosque" and "Ummah Of Miami (Miami Beach Masjid)" are the same mosque. The current address for "Miami Beach Mosque" is 7904 West Dr, North Bay Village, FL 33141, USA.
Steps to reproduce the issue:
Open prayersconnect.com on any browser.
Click on the search bar to find mosques near you.
Type in "Miami, FL, United States" and search.
Will be able to find two mosque listing with the names "Miami Beach Mosque", and "Ummah Of Miami (Miami Beach Masjid)"
There should be just one mosque "Miami Beach Mosque" with the address of "7904 West Dr, North Bay Village, FL 33141, USA"
Done
|
gharchive/issue
| 2023-03-08T21:36:15 |
2025-04-01T06:40:04.526339
|
{
"authors": [
"LeonSubhan"
],
"repo": "prayersconnect/qa",
"url": "https://github.com/prayersconnect/qa/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1643935141
|
Jordan's new test issue or old GPS issue
test text
#290
|
gharchive/issue
| 2023-03-28T13:18:38 |
2025-04-01T06:40:04.539963
|
{
"authors": [
"saseehav"
],
"repo": "precision-sustainable-ag/Hotline-Issues",
"url": "https://github.com/precision-sustainable-ag/Hotline-Issues/issues/291",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2283454104
|
pixi add arize-phoenix fails due to dependency conflict, where as plain pip install arize-phoenix works.
Checks
[X] I have checked that this issue has not already been reported.
[X] I have confirmed this bug exists on the latest version of pixi, using pixi --version.
Reproducible example
Here is my pixi.toml file:
[project]
name = "bla_bla"
version = "0.1.0"
description = "Add a short description here"
authors = ["Damian Barabonkov <damianb@alum.mit.edu>"]
channels = ["conda-forge"]
platforms = ["osx-arm64"]
[tasks]
[dependencies]
black = "*"
jupyterlab = "*"
notebook = "*"
pandas = "*"
pre-commit = "*"
pyarrow = "*"
python = "3.11.*"
pyright = "*"
pip = ">=24.0,<25"
[pypi-dependencies]
openai = "*"
dspy-ai = ">=2.4.9"
arize-phoenix = "*"
Version: pixi 0.21.0
Issue description
I first thought that this was an issue with the arize-phoenix library; however, seeing that pip install works leads me to think that this is a pixi issue.
I have a pixi environment set up as above. Doing pixi add arize-phoenix or pixi add --pypi arize-phoenix leads to a dependency conflict. However, if I do pixi add pip and then pip install arize-phoenix, it works out just fine.
Here is the arize-phoenix repo: https://github.com/Arize-ai/phoenix
Expected behavior
The installation works like pip install does.
Hey @DamianB-BitFlipper, what error do you receive, and could you run with a -vv flag to make the output verbose?
Sure! Here's the output:
× failed to solve the pypi requirements of 'default' 'osx-arm64'
├─▶ Failed to build: `scikit-learn==1.4.2`
├─▶ Failed to install requirements from build-system.requires (install)
├─▶ Failed to download and build distributions
├─▶ Failed to fetch wheel: scipy==1.13.0
├─▶ Failed to build: `scipy==1.13.0`
╰─▶ Build backend failed to build wheel through `build_wheel()` with exit status: 1
--- stdout:
+ meson setup /Users/damianb/Library/Caches/rattler/cache/uv-cache/built-wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz /Users/damianb/Library/Caches/rattler/cache/uv-cache/built-
wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz/.mesonpy-z78og_vx -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/Users/damianb/Library/Caches/rattler/cache/
uv-cache/built-wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz/.mesonpy-z78og_vx/meson-python-native-file.ini
The Meson build system
Version: 1.4.0
Source dir: /Users/damianb/Library/Caches/rattler/cache/uv-cache/built-wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz
Build dir: /Users/damianb/Library/Caches/rattler/cache/uv-cache/built-wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz/.mesonpy-z78og_vx
Build type: native build
Project name: scipy
Project version: 1.13.0
C compiler for the host machine: cc (clang 15.0.0 "Apple clang version 15.0.0 (clang-1500.3.9.4)")
C linker for the host machine: cc ld64 1053.12
C++ compiler for the host machine: c++ (clang 15.0.0 "Apple clang version 15.0.0 (clang-1500.3.9.4)")
C++ linker for the host machine: c++ ld64 1053.12
Cython compiler for the host machine: cython (cython 3.0.10)
Host machine cpu family: aarch64
Host machine cpu: aarch64
Program python found: YES (/Users/damianb/Library/Caches/rattler/cache/uv-cache/.tmp85jgea/.venv/bin/python)
Found pkg-config: YES (/opt/homebrew/bin/pkg-config) 0.29.2
Run-time dependency python found: YES 3.11
Program cython found: YES (/Users/damianb/Library/Caches/rattler/cache/uv-cache/.tmp85jgea/.venv/bin/cython)
Compiler for C supports arguments -Wno-unused-but-set-variable: YES
Compiler for C supports arguments -Wno-unused-function: YES
Compiler for C supports arguments -Wno-conversion: YES
Compiler for C supports arguments -Wno-misleading-indentation: YES
Library m found: YES
../meson.build:78:0: ERROR: Unknown compiler(s): [['gfortran'], ['flang'], ['nvfortran'], ['pgfortran'], ['ifort'], ['ifx'], ['g95']]
The following exception(s) were encountered:
Running `gfortran --version` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `gfortran -V` gave "[Errno 2] No such file or directory: 'gfortran'"
Running `flang --version` gave "[Errno 2] No such file or directory: 'flang'"
Running `flang -V` gave "[Errno 2] No such file or directory: 'flang'"
Running `nvfortran --version` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `nvfortran -V` gave "[Errno 2] No such file or directory: 'nvfortran'"
Running `pgfortran --version` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `pgfortran -V` gave "[Errno 2] No such file or directory: 'pgfortran'"
Running `ifort --version` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifort -V` gave "[Errno 2] No such file or directory: 'ifort'"
Running `ifx --version` gave "[Errno 2] No such file or directory: 'ifx'"
Running `ifx -V` gave "[Errno 2] No such file or directory: 'ifx'"
Running `g95 --version` gave "[Errno 2] No such file or directory: 'g95'"
Running `g95 -V` gave "[Errno 2] No such file or directory: 'g95'"
A full log can be found at /Users/damianb/Library/Caches/rattler/cache/uv-cache/built-wheels-v3/pypi/scipy/1.13.0/63ZSyGLCiC4j7AqyrZiy7/scipy-1.13.0.tar.gz/.mesonpy-z78og_vx/meson-logs/meson-log.txt
--- stderr:
---
I think it might be related to system-requirements. Can you add macos = 13 to the system requirements table?
https://pixi.sh/latest/reference/configuration/#the-system-requirements-table
Can you try 13.0. Sorry.
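Concretely, the suggestion amounts to adding a system-requirements table along these lines to the project manifest (version per the comment above; see the linked configuration reference for the exact syntax):

[system-requirements]
macos = "13.0"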
It works! Why did this fix it? Also, why the 13.0? Thanks in any case!
14.0 would also work. The thing is that oftentimes wheel files are not built for older osx versions anymore. Wheels make it much easier to install PyPI dependencies.
Could you please elaborate why you think it was failing in this case? This is a real learning moment for me with regards to Python packaging. Thanks!
If you look at the arm64 whl files listed here: https://pypi.org/project/scipy/#files you can see that they are all for macos_12.0. That means that they do require a higher "baseline". If we cannot find a wheel, we try to build the package from scratch (from the sdist).
That is why you are seeing the gfortran and other compilation issues from the attempt at compiling scipy.
If you want to speed up even more then you could also try to use the following two constraints in the conda dependencies (installing hdbscan was a bit slow for me):
hdbscan = ">=0.8.33"
joblib = "<1.4.0"
We (as in pixi) need to improve the system requirements stuff. Probably by using a higher default.
Thanks for the clarification.
|
gharchive/issue
| 2024-05-07T14:02:52 |
2025-04-01T06:40:04.561730
|
{
"authors": [
"DamianB-BitFlipper",
"ruben-arts",
"wolfv"
],
"repo": "prefix-dev/pixi",
"url": "https://github.com/prefix-dev/pixi/issues/1345",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2115080972
|
pixi run selection should only open up if task is not in default feature
[project]
name = "polarify"
description = "Simplifying conditional Polars Expressions with Python 🐍 🐻❄️"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64", "osx-64", "win-64"]
[dependencies]
python = ">=3.9"
polars = ">=0.14.24,<0.21"
[tasks]
postinstall = "pip install --no-build-isolation --no-deps --disable-pip-version-check -e ."
[feature.py39.dependencies]
python = "3.9.*"
[feature.py310.dependencies]
python = "3.10.*"
[feature.py311.dependencies]
python = "3.11.*"
[feature.py312.dependencies]
python = "3.12.*"
[host-dependencies]
python = "*"
pip = "*"
hatchling = "*"
[feature.test.dependencies]
pytest = "*"
pytest-md = "*"
pytest-emoji = "*"
hypothesis = "*"
[feature.test.tasks]
test = "pytest"
[feature.lint.dependencies]
pre-commit = "*"
[feature.lint.tasks]
lint = "pre-commit run --all"
[environments]
default = ["py312", "test"]
py39 = ["py39", "test"]
py310 = ["py310", "test"]
py311 = ["py311", "test"]
py312 = ["py312", "test"]
$ pixi run postinstall
✨ Pixi task (default): pip install --no-build-isolation --no-deps --disable-pip-version-check -e .
...
$ pixi run test
? The task 'test' can be run in multiple environments.
Please select an environment to run the task in: ›
❯ default
py39
py310
py311
py312
I'm actually not sure if that is a better user experience, as there is no auto-completion for the CLI -e option.
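For comparison, skipping the prompt by picking the environment explicitly works with the manifest above, e.g.:
$ pixi run -e py39 test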
The availability of the test command is basically the same as the postinstall command. I find it a bit irritating that they behave differently 😅
Maybe we should add all commands available to the default env to autocompletion?
Fixed by #772
|
gharchive/issue
| 2024-02-02T14:34:14 |
2025-04-01T06:40:04.565648
|
{
"authors": [
"pavelzw",
"ruben-arts"
],
"repo": "prefix-dev/pixi",
"url": "https://github.com/prefix-dev/pixi/issues/767",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2115466867
|
Add ability to not include host and/or build dependencies
Problem description
It would be nice to be able to not include host dependencies and build dependencies into the environment.
Maybe something like pixi install --no-include-host-dependencies
This would enable a use case where you want to have a prod environment without host dependencies
[dependencies]
python = ">=3.12"
pip = "*"
[tasks]
start = "uvicorn"
[host-dependencies]
hatchling = "*"
[feature.test.dependencies]
pytest = "*"
[feature.wheel.dependencies]
build = "*"
[feature.wheel.tasks]
build-wheel = "python -m build --no-isolation" # use hatchling from pixi env
[environments]
default = ["test"]
prod = []
in production docker container:
$ pixi run build-wheel
$ pixi install -e prod --no-include-host-dependencies
# not sure if we should still install this into the same location as if we would do with `pixi install -e prod`?
$ pixi run -e prod pip install dist/my-package.whl
$ pixi run -e prod start
Maybe we could also leverage rip to remove pip from the runtime dependencies and be able to install the wheel on top?
I might just be missing something, but couldn't this already be handled by multi-envs? Eg,
[dependencies]
python = ">=3.12"
pip = "*"
[tasks]
start = "uvicorn"
[feature.test.dependencies]
pytest = "*"
[feature.build.dependencies]
build = "*"
hatchling = "*"
[feature.build.tasks]
build-wheel = "python -m build --no-isolation" # use hatchling from pixi env
[environments]
default = ["test", "build"]
prod = []
Yes, you're right. But for my use-case, I want to keep the semantic information about what is a host-dependency so that other tools can also use it. (I'm planning a tool that keeps your recipe.yaml for rattler-build in sync with the pixi.toml.) Also, I would like to actually use the host-dependencies for what they were intended 😅
Ah, fair enough! 🙂
Closing this in favor of the pixi build work, which will most likely reignite the use of the tables.
|
gharchive/issue
| 2024-02-02T17:46:53 |
2025-04-01T06:40:04.570333
|
{
"authors": [
"msegado",
"pavelzw",
"ruben-arts"
],
"repo": "prefix-dev/pixi",
"url": "https://github.com/prefix-dev/pixi/issues/770",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1975698171
|
feat: add environ support
Honestly, I haven't found a bug with the code yet; I tried a few recipes with environ, but I definitely feel like I might be missing something. Are there specs or docs about environ and its behaviour?
Fixes #138
I think the most important thing is that we record the environment variables used to build so we can reproduce them. It would also be good to have a distinction between secrets and normal environment variables. Secrets should not be stored and/or outputted.
Also, can we please rename environ to either environment or env?
Yeah, I agree with Bas here. Although this "environ" is different from the script_env in that we just use it to template some values into the YAML.
Tracking them in the used variables might be good though.
Hmm. I think we should consider somehow avoiding storing the entire ENV; it seems like a security risk (considering people might use it with a production .env).
Let me think what we can do, though not including secrets might also hurt reproducibility...
Hmm, I am also considering just keeping it as is for now. We'll have the rendered recipe (from #246) that contains the values of the environment variables and which we would use to rebuild the package.
It might be nice to warn if an environment variable is not set, but even that can be deliberate, so it might be tricky. Maybe this is the way to go.
Some discussion:
To better capture the user intent, we could make env a minijinja object with two functions:
env.get("BLA") errors when the env variable isn't found
env.get_default("BLA", "BLUBB"), uses the BLUBB default when BLA is not set
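A purely illustrative sketch of how a recipe could use the proposed functions (variable names are made up here, and the usual ${{ }} template syntax is assumed):
package:
  name: mypkg
  version: ${{ env.get("MY_PKG_VERSION") }}
build:
  number: ${{ env.get_default("MY_BUILD_NUMBER", "0") }}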
Looks good to me.
Do you want to add a couple of unit tests?
|
gharchive/pull-request
| 2023-11-03T08:47:19 |
2025-04-01T06:40:04.575264
|
{
"authors": [
"baszalmstra",
"swarnimarun",
"wolfv"
],
"repo": "prefix-dev/rattler-build",
"url": "https://github.com/prefix-dev/rattler-build/pull/260",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1800251945
|
add support for custom service
fixes #244
Please update the readme env var section https://github.com/premAI-io/prem-app#environment-variables
|
gharchive/pull-request
| 2023-07-12T06:09:40 |
2025-04-01T06:40:04.591257
|
{
"authors": [
"Janaka-Steph",
"jigneshsolanki"
],
"repo": "premAI-io/prem-app",
"url": "https://github.com/premAI-io/prem-app/pull/250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1555777284
|
🛑 Beta is down
In fa3ee3f, Beta (https://beta.presquelabs.com) was down:
HTTP code: 400
Response time: 651 ms
Resolved: Beta is back up in dbe4c9c.
|
gharchive/issue
| 2023-01-24T22:44:02 |
2025-04-01T06:40:04.604222
|
{
"authors": [
"eugeneyng"
],
"repo": "presquelabs/upptime",
"url": "https://github.com/presquelabs/upptime/issues/465",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
263773052
|
Enable jwt middleware customization
When we want to create a custom JWT middleware, this is already possible, but besides registering the customized middleware, prest also registers the standard middleware, which often ends up cancelling out the effect of the customized middleware.
I'm very interested in this issue.
IMO the simplest solution is to add a config to disable prest jwt middleware.
So here instead of using
if !config.PrestConf.Debug {
MiddlewareStack = append(MiddlewareStack, JwtMiddleware(config.PrestConf.JWTKey))
}
Use something like
if !config.PrestConf.DisableJWTMiddleware {
MiddlewareStack = append(MiddlewareStack, JwtMiddleware(config.PrestConf.JWTKey))
}
What do you guys think?
May I propose a PR for that?
I guess we can do it a little bit differently:
if !config.PrestConf.Debug && config.PrestConf.EnableJWTDefault /*default true */ {
MiddlewareStack = append(MiddlewareStack, JwtMiddleware(config.PrestConf.JWTKey))
}
@franciscocpg feel free to send a PR
@felipeweb
If we have a separate config for enable/disable the default jwt, do we still need to check debug config?
I mean if config.PrestConf.EnableJWTDefault wouldn't be enough?
Yes
@franciscocpg to maintain the default behavior we need both: debug mode off enables JWT, and if we want to customize JWT we need to be able to disable the default JWT, so we need the other env var.
|
gharchive/issue
| 2017-10-09T02:28:38 |
2025-04-01T06:40:04.626858
|
{
"authors": [
"EltonSouza",
"felipeweb",
"franciscocpg"
],
"repo": "prest/prest",
"url": "https://github.com/prest/prest/issues/230",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
589034311
|
Fix predicate push-down accuracy
I think this is a coding error which will not affect the final result.
I think it's unreasonable for lower bound to have a BELOW bound type.
When the query condition is < 3, then it will change to <= 3 according to this code:
KuduPredicate.ComparisonOp op = (high.getBound() == BELOW) ? LESS : LESS_EQUAL;
With this erroneous code, the predicate push-down will return more data than necessary.
@Praveen2112 Would you like to take a look?
|
gharchive/pull-request
| 2020-03-27T10:28:49 |
2025-04-01T06:40:04.673085
|
{
"authors": [
"Crossoverrr",
"kokosing"
],
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/pull/3257",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
640533905
|
Add secrets documentation
Contributed from https://docs.starburstdata.com/latest/security/secrets.html
Thanks!
|
gharchive/pull-request
| 2020-06-17T15:25:16 |
2025-04-01T06:40:04.674188
|
{
"authors": [
"electrum",
"mosabua"
],
"repo": "prestosql/presto",
"url": "https://github.com/prestosql/presto/pull/4066",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1973770983
|
Timer: executes an AJAX request, even if the "listener" attribute is absent
Describe the bug
I only use pe:timer to execute javascript:
<pe:timer
widgetVar="myTimer"
global="false"
visible="false"
timeout="100"
ontimercomplete="myJavaScriptFunction();"
singleRun="true" />
So, the listener attribute is missing and everything works fine... but the AJAX request is still triggered when the timer completes.
IMHO it's not right.
Furthermore, I can see in the source code (timer.js) that this shouldn't really be the case:
stop: function (silent, end) {
if (!silent && this.cfg.listener) {
this.cfg.listener();
}
but it doesn't seem to be working the way it should.
FYI: workaround, which I found, for my use-case is:
$(document).ready(
function() {
PF('myTimer').stop = function(silent, end) {
PrimeFaces.widget.ExtTimer.prototype.stop.call(this, true, end);
}
}
);
Reproducer
If there is no listener attribute, no ajax requests should occur
Expected behavior
If there is no listener attribute, no ajax requests should occur
PrimeFaces Extensions version
13.0.2
JSF implementation
Mojarra
JSF version
2.3
Browser(s)
Chrome/118.0.0.0
You can probably also use <pe:timer onstart="return false"/> as a workaround; it's what I use in general when I don't need AJAX requests to be sent. The this.cfg.listener is not what you define on the component; it appears to be always set.
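Applied to the reproducer above, that workaround would look roughly like this (attributes carried over from the original example; treat it as a sketch):
<pe:timer
widgetVar="myTimer"
global="false"
visible="false"
timeout="100"
ontimercomplete="myJavaScriptFunction();"
onstart="return false"
singleRun="true" />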
I will take a look at this.
@Gmugra this component has been around a long time (since way before I joined this project) and I don't think it is quite right, as you mentioned. And as @blutorange mentioned, it's ALWAYS setting the listener for some reason?
Even weirder, I have no idea why it has a Handler: https://github.com/primefaces-extensions/primefaces-extensions/blob/master/core/src/main/java/org/primefaces/extensions/component/timer/TimerHandler.java
You can probably also use <pe:timer onstart="return false"/> as a workaround, it's what I use in general when I don't need AJAX requests to be sent. The this.cfg.listener is not what you define on the component, it appears to be always set
Thanks for the tip, this way is better :)
@melloware,
I also don't understand why it has a Handler, and why the listener is always set :)
(Furthermore, if it is always set, why check for it in the JavaScript?)
But I'm not a big enough JSF expert to be 100% sure. Sorry.
I'd say it's just some legacy with some old bugs :)
Yes, the component exists since version 3, i.e. ~10 years.
It would be useful to check the history of old changes; maybe that would clarify things.
But as far as I can see, the change history available here only goes back to 2020, not 2014. :(
|
gharchive/issue
| 2023-11-02T08:44:31 |
2025-04-01T06:40:04.776252
|
{
"authors": [
"Gmugra",
"blutorange",
"melloware"
],
"repo": "primefaces-extensions/primefaces-extensions",
"url": "https://github.com/primefaces-extensions/primefaces-extensions/issues/1345",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1879713960
|
Release/2.17.2
This release contains one change:
Fixes issue where Cardholder Name input field in headless would not accept spaces
Release checklist:
[x] Bump the version in .podspec file
[x] Commit and push change to release/2.17.2
[x] Add tag 2.17.2
[ ] Release via Spinnaker
[ ] Validate release on Cocoapods
|
gharchive/pull-request
| 2023-09-04T07:50:26 |
2025-04-01T06:40:04.859397
|
{
"authors": [
"NQuinn27"
],
"repo": "primer-io/primer-sdk-ios",
"url": "https://github.com/primer-io/primer-sdk-ios/pull/663",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1422412729
|
ActionMenu does not have hover styles in Dark high contrast theme
Description
https://user-images.githubusercontent.com/1863771/197777432-8afc5d79-a5ad-43b8-a3ec-00f92897e5b3.mov
Steps to reproduce
Go to https://primer.style/react/ActionMenu
Change theme to Dark high contrast
Hover on menu items
Version
v35.12.0
Browser
Chrome
@siddharthkp let's prioritize a fix for a11y's sake
Hi @factscub, this one is already assigned to a Primer React maintainer but if you'd like to give it a go, feel free to submit a PR for consideration.
Hey @joshblack, @tallys, @siddharthkp,
I started looking into this issue as it was open for some time now. Below are the following details.
Currently the background of the ActionMenu is set to ${get('colors.canvas.overlay')}, which is equivalent to overlay: get('scale.gray.8'), so the overlay value is scale.gray.8. Now if we look at the hover background of ActionListItem (which is what every element inside the ActionMenu is), we can see it is scale.gray.8 as well. Since both have the same background, the ActionMenu doesn't show hover styles. To fix this we can do a few things.
We can change the hoverBg of ActionListItem over here from scale.gray.8 to scale.gray.6. But doing this can change the default hover style behaviour of ActionListItem, and the change isn't needed where the component containing ActionListItem doesn't have a background of scale.gray.8.
We can add a new variant over here along with default and danger, which could be named actionMenu, and then we can add something like:
actionMenu: {
// this can be change to any other value but keeping it different here from bg which is scale.gray.8
// so that hover works
hoverBg: get('scale.gray.4'),
activeBg: get('scale.gray.6'),
}
Since activeBg also depends on the variant here, we need to add that as well. All the available dark_high_contrast scale gray colors can be found here.
And then we can add this new variant in ActionMenu.stories, ActionMenu.examples, and ActionMenuStories.features, and this way we can fix the hover of the ActionMenu component without affecting any other component that uses ActionListItem.
Please let me know what you think about this; if you have any better suggestions on how to handle it, that would be really great.
Thanks!
Hi @electron97 👋🏻
Thanks so much for taking a look at this issue and digging deep into it! We appreciate your work 🙏🏻
I humbly have a couple of thoughts regarding the solutions, and we can chat further about them to find the best way.
We can change the hoverBg of ActionListItem over here from scale.gray.8 to scale.gray.6. But doing this it can change the default hover style behaviour of the ActionListItem and also this change isn't required where the component containing ActionListItem doesn't have a background of scale.gray.8.
I agree with you. ActionListItem is used in multiple places without the ActionMenu context and changing it to scale.gray.6 would change all the other hover backgrounds too. This wouldn't be an isolated change and refraining from this sounds like a good idea 👍🏻
We can add a new variant over here along with default and danger which can be named as actionMenu and then we can add something like .
I would be hesitant about introducing a new variant and updating the variant of ActionList.Item components that are used in the ActionMenu context, mainly because this would be a breaking change. And this would require an update in all downstream repos.
One isolated solution I thought of is that maybe we can look into conditionally setting the hover background color depending on where the ActionList.Item is used; we might determine this in the code like we did for roles. It is inspired by your second solution; it just adds a pivot into it to gather the context in the code rather than explicitly setting it with a variant.
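As a rough illustration of that idea only (hypothetical names, not the actual Primer React internals), the container could expose a React context that the item reads when picking its hover colour:
// hypothetical sketch, not the real implementation
import React from 'react'
// ActionMenu would provide this so ActionList.Item knows it sits on the canvas.overlay background
const ContainerContext = React.createContext<'ActionMenu' | null>(null)
function useItemHoverBg(): string {
  const container = React.useContext(ContainerContext)
  // on the overlay background, pick a different gray step so hover stays visible in dark high contrast
  return container === 'ActionMenu' ? 'scale.gray.6' : 'scale.gray.8'
}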
Let me know what you think and if you have any questions or anything I misunderstood! Thanks so much again!
Hi, after some more digging into the issue, I asked @langermank for some advice on whether we should update a primitive for it. She pointed me to view_component's implementation, which includes a hover border that shows up for this particular theme.
Here's the link.
With this as a reference, I have a PR correcting this for primer-react. However, the storybook examples for ActionMenu in primitives also suffer from the same issue. Do we need to fix that as part of this issue? Maybe not?
|
gharchive/issue
| 2022-10-25T12:49:58 |
2025-04-01T06:40:04.874317
|
{
"authors": [
"broccolinisoup",
"electron97",
"lesliecdubs",
"pksjce",
"siddharthkp",
"tallys"
],
"repo": "primer/react",
"url": "https://github.com/primer/react/issues/2479",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|