Columns:
id: string (lengths 4 to 10)
text: string (lengths 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
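
For orientation, a minimal sketch of how rows with this schema might be loaded and filtered. It assumes the dump has been saved locally as a Parquet file; the file name and the use of pandas are assumptions for illustration, not anything stated in the schema above.

import pandas as pd

# Hypothetical local copy of the dump; adjust the path to wherever the rows live.
df = pd.read_parquet("gharchive_sample.parquet")

# "source" has exactly two classes: gharchive/issue and gharchive/pull-request.
issues = df[df["source"] == "gharchive/issue"]

# "metadata" is a dict per row; count rows per repository as a quick sanity check.
print(issues["metadata"].apply(lambda m: m.get("repo")).value_counts().head())
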
19673201
Zeus crash makes typing in console invisible I am running zeus 0.13.3 on OSX 10.8.5, iTerm1.0.0.20130624. Every single time zeus crashes, I am no longer able to see what I'm typing in the shell. I recover from the problem by using reset. Any suggestions as to what might be causing this? Here's one stacktrace, after which the problem appears: ↳ zeus start Starting Zeus server exit status 1hed] [running] [connecting] [waiting] [ready] [crashed] [running] [connecting] [waiting] boot └── default_bundle ├── development_environment │  └── prerake └── test_environment └── test_helper Available Commands: [waiting] [crashed] [ready] zeus destroy (alias: d) zeus server (alias: s) zeus rake zeus generate (alias: g) zeus console (alias: c) zeus runner (alias: r) zeus dbconsole zeus test (alias: rspec, testrb) slavenode.go:202: EOF panic: runtime error: invalid memory address or nil pointer dereference [signal 0xb code=0x1 addr=0x0 pc=0x73d27] goroutine 13 [running]: github.com/burke/zeus/go/unixsocket.(*Usock).WriteMessage(0x0, 0xf8400f96a0, 0x3a5300000010, 0xf84005d420, 0xe, ...) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/unixsocket/unixsocket.go:81 +0xba github.com/burke/zeus/go/processtree.(*SlaveNode).bootSlave(0xf8400870d0, 0xf8400871a0, 0x0, 0x5200000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:274 +0x10e github.com/burke/zeus/go/processtree.(*SlaveNode).doCrashedOrReadyState(0xf8400870d0, 0xf800000001, 0x10dadc, 0x4300000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:247 +0x142 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf8400870d0, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:142 +0x184 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 1 [select]: github.com/burke/zeus/go/zeusmaster.doRun(0x0, 0x229c) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/zeusmaster/zeusmaster.go:47 +0x377 github.com/burke/zeus/go/zeusmaster.Run() /Users/turadg/Code/Go/src/github.com/burke/zeus/go/zeusmaster/zeusmaster.go:22 +0x1c main.main() /Users/turadg/Code/Go/src/github.com/burke/zeus/go/cmd/zeus/zeus.go:40 +0x29c goroutine 2 [syscall]: created by runtime.main /usr/local/Cellar/go/1.0.3/src/pkg/runtime/proc.c:221 goroutine 3 [syscall]: os/signal.loop() /usr/local/Cellar/go/1.0.3/src/pkg/os/signal/signal_unix.go:20 +0x1c created by os/signal.init·1 /usr/local/Cellar/go/1.0.3/src/pkg/os/signal/signal_unix.go:26 +0x2f goroutine 4 [select]: github.com/burke/zeus/go/filemonitor.start(0xf840097000, 0xf84008a3c0, 0xf84008a410, 0x0, 0x0, ...) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:51 +0x148 created by github.com/burke/zeus/go/filemonitor.Start /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:21 +0x7e goroutine 5 [select]: github.com/burke/zeus/go/processtree._func_001(0xf84005e240, 0xf84005e250, 0xf84005e248, 0x0, 0x0, ...) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:56 +0x314 created by github.com/burke/zeus/go/processtree.StartSlaveMonitor /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:65 +0xb6 goroutine 6 [select]: github.com/burke/zeus/go/clienthandler._func_001(0xf84005e268, 0xf84005e260, 0xf84005e258, 0x0, 0x0, ...) 
/Users/turadg/Code/Go/src/github.com/burke/zeus/go/clienthandler/clienthandler.go:47 +0x23c created by github.com/burke/zeus/go/clienthandler.Start /Users/turadg/Code/Go/src/github.com/burke/zeus/go/clienthandler/clienthandler.go:56 +0xb6 goroutine 7 [runnable]: syscall.Syscall() /usr/local/Cellar/go/1.0.3/src/pkg/syscall/asm_darwin_amd64.s:34 +0x61 syscall.Write(0x1, 0xf840081240, 0x4000000022, 0xf800000000, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/zsyscall_darwin_amd64.go:1279 +0x78 os.(*File).write(0xf84005e008, 0xf840081240, 0x4000000022, 0xf800000000, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/os/file_unix.go:188 +0x69 os.(*File).Write(0xf84005e008, 0xf840081240, 0x4000000022, 0x0, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/os/file.go:139 +0x83 fmt.Fprintf(0xf840043a80, 0xf84005e008, 0xf84012cc00, 0xf800000022, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/fmt/print.go:214 +0xa4 fmt.Printf(0xf84012cc00, 0x22, 0x0, 0x0, 0x1135f4, ...) /usr/local/Cellar/go/1.0.3/src/pkg/fmt/print.go:222 +0x97 github.com/burke/zeus/go/statuschart.(*StatusChart).draw(0xf840096c00, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/statuschart/statuschart.go:131 +0x220 github.com/burke/zeus/go/statuschart._func_001(0xf84005e270, 0xf84005e280, 0xf84005e278, 0x0, 0x0, ...) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/statuschart/statuschart.go:76 +0x49f created by github.com/burke/zeus/go/statuschart.Start /Users/turadg/Code/Go/src/github.com/burke/zeus/go/statuschart/statuschart.go:83 +0xb6 goroutine 8 [finalizer wait]: created by runtime.gc /usr/local/Cellar/go/1.0.3/src/pkg/runtime/mgc0.c:882 goroutine 9 [syscall]: created by addtimer /usr/local/Cellar/go/1.0.3/src/pkg/runtime/ztime_amd64.c:72 goroutine 10 [syscall]: syscall.Syscall6() /usr/local/Cellar/go/1.0.3/src/pkg/syscall/asm_darwin_amd64.s:38 +0x5 syscall.kevent(0x11, 0x0, 0x0, 0xf84006ad88, 0xa, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/zsyscall_darwin_amd64.go:199 +0x88 syscall.Kevent(0xf800000011, 0x0, 0x0, 0xf84006ad88, 0xa0000000a, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/syscall_bsd.go:546 +0xa4 net.(*pollster).WaitFD(0xf84006ad80, 0xf840044d80, 0x0, 0x0, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd_darwin.go:96 +0x185 net.(*pollServer).Run(0xf840044d80, 0x0) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd.go:236 +0xe4 created by net.newPollServer /usr/local/Cellar/go/1.0.3/src/pkg/net/newpollserver.go:35 +0x382 goroutine 11 [chan receive]: net.(*pollServer).WaitRead(0xf840044d80, 0xf840064120, 0xf8400ea6c0, 0x23, 0x1, ...) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd.go:268 +0x73 net.(*netFD).ReadMsg(0xf840064120, 0xf84011d000, 0x40000000400, 0xf840045ba0, 0x2000000020, ...) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd.go:486 +0x2d5 net.(*UnixConn).ReadMsgUnix(0xf84005e488, 0xf84011d000, 0x40000000400, 0xf840045ba0, 0x2000000020, ...) 
/usr/local/Cellar/go/1.0.3/src/pkg/net/unixsock_posix.go:274 +0x144 github.com/burke/zeus/go/unixsocket.(*Usock).readFromSocket(0xf8400ec000, 0x0, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/unixsocket/unixsocket.go:186 +0xf4 github.com/burke/zeus/go/unixsocket.(*Usock).ReadFD(0xf8400ec000, 0xf800000000, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/unixsocket/unixsocket.go:109 +0xaa github.com/burke/zeus/go/processtree._func_002(0xf84005e318, 0xf84005e490, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:43 +0x28 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:49 +0x1a7 goroutine 12 [semacquire]: sync.runtime_Semacquire(0xf84005e148, 0xf84005e148) /usr/local/Cellar/go/1.0.3/src/pkg/runtime/zsema_amd64.c:146 +0x25 sync.(*Cond).Wait(0xf840081440, 0x1) /usr/local/Cellar/go/1.0.3/src/pkg/sync/cond.go:67 +0xaa github.com/burke/zeus/go/processtree.(*SlaveNode).WaitUntilReadyOrCrashed(0xf840087410, 0x51629) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:75 +0x133 github.com/burke/zeus/go/processtree.(*SlaveNode).doWaitingState(0xf8400874e0, 0xf800000001, 0x10e22c, 0x5700000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:163 +0x51 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf8400874e0, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:136 +0x2f2 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 14 [semacquire]: sync.runtime_Semacquire(0xf84005e5d8, 0xf84005e5d8) /usr/local/Cellar/go/1.0.3/src/pkg/runtime/zsema_amd64.c:146 +0x25 sync.(*Cond).Wait(0xf840081300, 0x1) /usr/local/Cellar/go/1.0.3/src/pkg/sync/cond.go:67 +0xaa github.com/burke/zeus/go/processtree.(*SlaveNode).WaitUntilReadyOrCrashed(0xf8400871a0, 0x51629) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:75 +0x133 github.com/burke/zeus/go/processtree.(*SlaveNode).doWaitingState(0xf840087270, 0xf800000001, 0x10e22c, 0x5700000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:163 +0x51 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf840087270, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:136 +0x2f2 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 15 [semacquire]: sync.runtime_Semacquire(0xf84005e4a8, 0xf84005e4a8) /usr/local/Cellar/go/1.0.3/src/pkg/runtime/zsema_amd64.c:146 +0x25 sync.(*Cond).Wait(0xf840081340, 0x1) /usr/local/Cellar/go/1.0.3/src/pkg/sync/cond.go:67 +0xaa github.com/burke/zeus/go/processtree.(*SlaveNode).WaitUntilReadyOrCrashed(0xf840087270, 0x51629) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:75 +0x133 github.com/burke/zeus/go/processtree.(*SlaveNode).doWaitingState(0xf840087340, 0xf800000001, 0x10e22c, 0x5700000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:163 +0x51 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf840087340, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:136 +0x2f2 created by github.com/burke/zeus/go/processtree._func_001 
/Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 16 [semacquire]: sync.runtime_Semacquire(0xf84005e5d8, 0x1915f) /usr/local/Cellar/go/1.0.3/src/pkg/runtime/zsema_amd64.c:146 +0x25 sync.(*Cond).Wait(0xf840081300, 0x1) /usr/local/Cellar/go/1.0.3/src/pkg/sync/cond.go:67 +0xaa github.com/burke/zeus/go/processtree.(*SlaveNode).WaitUntilReadyOrCrashed(0xf8400871a0, 0x51629) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:75 +0x133 github.com/burke/zeus/go/processtree.(*SlaveNode).doWaitingState(0xf840087410, 0xf800000001, 0x10e22c, 0x5700000001) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:163 +0x51 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf840087410, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:136 +0x2f2 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 17 [chan receive]: github.com/burke/zeus/go/processtree.(*SlaveNode).doUnbootedState(0xf8400871a0, 0xf84005d800, 0x0, 0x5500000000, 0x0, ...) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:184 +0x3d4 github.com/burke/zeus/go/processtree.(*SlaveNode).Run(0xf8400871a0, 0xf84005d800, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavenode.go:138 +0x294 created by github.com/burke/zeus/go/processtree._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/processtree/slavemonitor.go:52 +0x215 goroutine 19 [syscall]: syscall.Syscall() /usr/local/Cellar/go/1.0.3/src/pkg/syscall/asm_darwin_amd64.s:14 +0x5 syscall.Read(0x230000000b, 0xf840103000, 0x80000000800, 0x100000001, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/zsyscall_darwin_amd64.go:905 +0x78 os.(*File).read(0xf84005e3e0, 0xf840103000, 0x80000000800, 0x80000000800, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/os/file_unix.go:174 +0x58 os.(*File).Read(0xf84005e3e0, 0xf840103000, 0x80000000800, 0xf840103000, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/os/file.go:95 +0x83 github.com/burke/zeus/go/filemonitor._func_001(0xf84005e288, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:92 +0x87 created by github.com/burke/zeus/go/filemonitor.startWrapper /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:103 +0x22f goroutine 20 [syscall]: syscall.Syscall6() /usr/local/Cellar/go/1.0.3/src/pkg/syscall/asm_darwin_amd64.s:38 +0x5 syscall.wait4(0x7925, 0xf84005e4e8, 0x0, 0xf840064240, 0x1, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/zsyscall_darwin_amd64.go:32 +0x81 syscall.Wait4(0x7925, 0x226de34, 0x0, 0xf840064240, 0x0, ...) /usr/local/Cellar/go/1.0.3/src/pkg/syscall/syscall_bsd.go:136 +0x6a os.(*Process).wait(0xf840045d60, 0x0, 0x0, 0x0, 0x9e551, ...) 
/usr/local/Cellar/go/1.0.3/src/pkg/os/exec_unix.go:22 +0xe1 os.(*Process).Wait(0xf840045d60, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.0.3/src/pkg/os/doc.go:43 +0x25 os/exec.(*Cmd).Wait(0xf840069000, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.0.3/src/pkg/os/exec/exec.go:308 +0x1b7 github.com/burke/zeus/go/filemonitor._func_002(0xf84005e290, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:106 +0x29 created by github.com/burke/zeus/go/filemonitor.startWrapper /Users/turadg/Code/Go/src/github.com/burke/zeus/go/filemonitor/filemonitor.go:111 +0x246 goroutine 22 [chan receive]: net.(*pollServer).WaitRead(0xf840044d80, 0xf84011a000, 0xf8400ea6c0, 0x23, 0x1, ...) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd.go:268 +0x73 net.(*netFD).accept(0xf84011a000, 0x641a5, 0x0, 0xf840043480, 0xf84005e040, ...) /usr/local/Cellar/go/1.0.3/src/pkg/net/fd.go:622 +0x20d net.(*UnixListener).AcceptUnix(0xf8400455c0, 0x0, 0x0, 0x0) /usr/local/Cellar/go/1.0.3/src/pkg/net/unixsock_posix.go:350 +0x4d github.com/burke/zeus/go/clienthandler._func_002(0xf84005e3f8, 0xf84005e038, 0x0, 0x0) /Users/turadg/Code/Go/src/github.com/burke/zeus/go/clienthandler/clienthandler.go:37 +0x28 created by github.com/burke/zeus/go/clienthandler._func_001 /Users/turadg/Code/Go/src/github.com/burke/zeus/go/clienthandler/clienthandler.go:44 +0x15a Bilbo:dimitar user_service (master) The issue is very old now, probably not relevant.
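
A side note on the "reset" workaround mentioned above: the crash most likely leaves the terminal with echo disabled, and running reset simply restores sane terminal settings. A hedged Python equivalent, purely for illustration and not part of zeus:

import sys
import termios

# Re-enable echo and canonical mode on the controlling terminal, which is what a
# crashed status-chart/full-screen program typically leaves switched off.
fd = sys.stdin.fileno()
attrs = termios.tcgetattr(fd)
attrs[3] |= termios.ECHO | termios.ICANON  # index 3 holds the local-mode flags (lflag)
termios.tcsetattr(fd, termios.TCSANOW, attrs)
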
gharchive/issue
2013-09-18T10:40:39
2025-04-01T06:38:07.196324
{ "authors": [ "dalizard" ], "repo": "burke/zeus", "url": "https://github.com/burke/zeus/issues/404", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
209784276
Master CI build error Transpile... https://travis-ci.org/bustlelabs/shep/builds/204605328 will fix On Feb 23, 2017, at 6:55 AM, Steve Faulkner notifications@github.com wrote: Transpile... https://travis-ci.org/bustlelabs/shep/builds/204605328 I just restarted and it passed
gharchive/issue
2017-02-23T14:55:25
2025-04-01T06:38:07.240812
{ "authors": [ "southpolesteve", "zfoster" ], "repo": "bustlelabs/shep", "url": "https://github.com/bustlelabs/shep/issues/202", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
269252776
Single post, interesting people block should not appear The block "Interesting people" on right sidebar should not appear on single post when logged: Also there should be a bottom margin on sidebar right blocks Yes, was going to report this. Thanks for fixing it. Also noticed: on Recommended Posts, there's no context menu (i.e. I can't right-click), nor Cmd + Click to open in a new tab. I don't always want to navigate away from the page I'm on right away. I can file into another issue if you'd like. @ryanbaer yeah probably should put it in another issue Couldn't assign you, so here you go: https://github.com/busyorg/busy/issues/917 @ryanbaer thanks! Fixed in #916
gharchive/issue
2017-10-27T22:20:57
2025-04-01T06:38:07.243909
{ "authors": [ "bonustrack", "jm90m", "ryanbaer" ], "repo": "busyorg/busy", "url": "https://github.com/busyorg/busy/issues/915", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2249502540
Option to completely deactivate hitAreaMargins The whole hitAreaMargins is quite tricky and compote heavy as you state yourself in: https://github.com/bvaughn/react-resizable-panels/blob/638d0f6d3a9d1aeabae8333396ed725ecbeff513/packages/react-resizable-panels/src/PanelResizeHandleRegistry.ts#L169-L174 I propose we have an option to completely disable the behavior, getting back the old straight-forward behaviour. My problems with hitAreaMargins: clicking panning in the margin area triggers movement of the panel but also of a element behind it (a map that gets panned). This feels bad, because a user would expect either to pan the panel or the map but not both setting hitAreaMargins={{ coarse: 0, fine: 0}} still uses the tricky calculations you do in your code which is totally unnecessary and can lead to bugs. Reporting the bugs is one thing, but being able to disableable it would help a lot. I have some weird behaviour in combination with collapsible, which I can not reproduce in codesandbox but I suspect it has to do with the way hitAreaMargins works. What you're describing sounds like it's pretty specific to your website. If you can provide a Code Sandbox example, I'll take a look and see if I can recommend something to help. (Or if there is a bug in this library, that's also good to uncover.) Generally speaking, I don't want to support two separate mechanisms for resize handling based on hitAreaMargins though, so this is not a change I'm interested in making. This problem was related to https://github.com/bvaughn/react-resizable-panels/issues/342, so I am fine for the moment. I still think we should have the option to disable the "TRICKY" functionality from hitAreaMargins 😃 If you'd like to remove that feature, I suggest just forking this library. The license is very permissive so as to allow that. One of my problems ("clicking panning in the margin area triggers movement of the panel but also of a element behind it") should have actually been solved by https://github.com/bvaughn/react-resizable-panels/pull/338, correct? I still have it and will try to create a sandbox. Maybe this is related to how google maps api is also kind of greedy for panning events. For reference, here a screen recording: https://github.com/bvaughn/react-resizable-panels/assets/45362676/5f6d9679-3c67-49ae-8f15-1b14e5e461cd If you'd like to remove that feature, I suggest just forking this library. The license is very permissive so as to allow that. Would you allow a PR? No. This is not a change I'm interested in making. It's possible Google maps is also listening at the root of the window and intercepting the events before this library is. I don't know.
gharchive/issue
2024-04-18T00:35:06
2025-04-01T06:38:07.301470
{ "authors": [ "Fabioni", "bvaughn" ], "repo": "bvaughn/react-resizable-panels", "url": "https://github.com/bvaughn/react-resizable-panels/issues/341", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1046513358
🛑 SMR Vinay Cascade A Block is down In 6872cf7, SMR Vinay Cascade A Block (https://cloud.tymly.in/status/1094) was down: HTTP code: 521 Response time: 20349 ms Resolved: SMR Vinay Cascade A Block is back up in f963a8a.
gharchive/issue
2021-11-06T13:48:51
2025-04-01T06:38:07.304418
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/11425", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1051613440
🛑 Sowparnika Swastika 2 is down In f3dbd6b, Sowparnika Swastika 2 (https://cloud.tymly.in/status/1099) was down: HTTP code: 521 Response time: 232 ms Resolved: Sowparnika Swastika 2 is back up in 7d49a2d.
gharchive/issue
2021-11-12T06:00:39
2025-04-01T06:38:07.306854
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/11772", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1082820017
🛑 SMR Vinay Cascade A Block is down In eb9bb52, SMR Vinay Cascade A Block (https://cloud.tymly.in/status/1094) was down: HTTP code: 521 Response time: 20228 ms Resolved: SMR Vinay Cascade A Block is back up in b922c73.
gharchive/issue
2021-12-17T02:31:00
2025-04-01T06:38:07.309235
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/14712", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1141372229
🛑 Desai Grandeur 1 is down In 6b325f7, Desai Grandeur 1 (https://cloud.tymly.in/status/1133) was down: HTTP code: 521 Response time: 20248 ms Resolved: Desai Grandeur 1 is back up in 147c985.
gharchive/issue
2022-02-17T13:57:10
2025-04-01T06:38:07.311601
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/18536", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1160586557
🛑 Sowparnika Swastika 1 is down In 0500573, Sowparnika Swastika 1 (https://cloud.tymly.in/status/1101) was down: HTTP code: 521 Response time: 239 ms Resolved: Sowparnika Swastika 1 is back up in 0dadfe0.
gharchive/issue
2022-03-06T11:31:52
2025-04-01T06:38:07.314078
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/19543", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1221086780
🛑 SMR Vinay Cascade A Block is down In 0ba23dc, SMR Vinay Cascade A Block (https://cloud.tymly.in/status/1094) was down: HTTP code: 521 Response time: 229 ms Resolved: SMR Vinay Cascade A Block is back up in bf11fdc.
gharchive/issue
2022-04-29T15:27:47
2025-04-01T06:38:07.316674
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/22383", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1238229557
🛑 SBB Touchstone D Block is down In c3bd218, SBB Touchstone D Block (https://cloud.tymly.in/status/1085) was down: HTTP code: 521 Response time: 230 ms Resolved: SBB Touchstone D Block is back up in 9305ee5.
gharchive/issue
2022-05-17T07:56:59
2025-04-01T06:38:07.319026
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/23659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1278740366
🛑 PSR Aster 1 is down In 3292e42, PSR Aster 1 (https://cloud.tymly.in/status/1059) was down: HTTP code: 521 Response time: 20219 ms Resolved: PSR Aster 1 is back up in 40a9d12.
gharchive/issue
2022-06-21T17:11:03
2025-04-01T06:38:07.321392
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/25765", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1299914400
🛑 Dasta Concerto Clubhouse Gym is down In 797d729, Dasta Concerto Clubhouse Gym (https://cloud.tymly.in/status/1189) was down: HTTP code: 521 Response time: 20228 ms Resolved: Dasta Concerto Clubhouse Gym is back up in e2c96c8.
gharchive/issue
2022-07-10T12:52:53
2025-04-01T06:38:07.323783
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/26674", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1488979306
🛑 SBB Touchstone C Block is down In 1e7a523, SBB Touchstone C Block (https://cloud.tymly.in/status/1184) was down: HTTP code: 521 Response time: 20236 ms Resolved: SBB Touchstone C Block is back up in 795ac00.
gharchive/issue
2022-12-10T21:20:10
2025-04-01T06:38:07.326237
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/34336", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1493443262
🛑 Desai Grandeur 2 is down In f0a1978, Desai Grandeur 2 (https://cloud.tymly.in/status/1134) was down: HTTP code: 521 Response time: 20256 ms Resolved: Desai Grandeur 2 is back up in 4b2777c.
gharchive/issue
2022-12-13T05:55:59
2025-04-01T06:38:07.329458
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/34489", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1548534394
🛑 White Breeze 1 is down In 571c4f2, White Breeze 1 (https://cloud.tymly.in/status/1096) was down: HTTP code: 521 Response time: 258 ms Resolved: White Breeze 1 is back up in a43b21a.
gharchive/issue
2023-01-19T05:25:04
2025-04-01T06:38:07.331913
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/37006", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1571860629
🛑 SMD Altezz Block B is down In 750376f, SMD Altezz Block B (https://cloud.tymly.in/status/1176) was down: HTTP code: 521 Response time: 20247 ms Resolved: SMD Altezz Block B is back up in a8ff7b2.
gharchive/issue
2023-02-06T03:57:55
2025-04-01T06:38:07.334300
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/38410", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1621184058
🛑 SMD Altezz Block B is down In 9b5452d, SMD Altezz Block B (https://cloud.tymly.in/status/1176) was down: HTTP code: 521 Response time: 20213 ms Resolved: SMD Altezz Block B is back up in 52f43e5.
gharchive/issue
2023-03-13T10:17:37
2025-04-01T06:38:07.336753
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/40513", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1624013557
🛑 Trifecta Esplanade C Block is down In 7a89f71, Trifecta Esplanade C Block (https://cloud.tymly.in/status/1155) was down: HTTP code: 521 Response time: 225 ms Resolved: Trifecta Esplanade C Block is back up in a5db01c.
gharchive/issue
2023-03-14T17:48:32
2025-04-01T06:38:07.339327
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/40591", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1656168400
🛑 SMD Altezz Block C is down In 471d131, SMD Altezz Block C (https://cloud.tymly.in/status/1177) was down: HTTP code: 521 Response time: 20234 ms Resolved: SMD Altezz Block C is back up in 33bfb75.
gharchive/issue
2023-04-05T19:26:58
2025-04-01T06:38:07.341910
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/42184", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1659846082
🛑 DSR Sunrise Towers is down In adead4f, DSR Sunrise Towers (https://cloud.tymly.in/status/1025) was down: HTTP code: 521 Response time: 20238 ms Resolved: DSR Sunrise Towers is back up in 83a818c.
gharchive/issue
2023-04-09T11:38:42
2025-04-01T06:38:07.344333
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/42355", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1704226077
🛑 Nitesh Flushing Meadows Block B is down In bbc3483, Nitesh Flushing Meadows Block B (https://cloud.tymly.in/status/1047) was down: HTTP code: 521 Response time: 20247 ms Resolved: Nitesh Flushing Meadows Block B is back up in 0bbe81f.
gharchive/issue
2023-05-10T15:56:25
2025-04-01T06:38:07.346815
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/44627", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1747397011
🛑 SMD Altezz Block B is down In c2ecdd8, SMD Altezz Block B (https://cloud.tymly.in/status/1176) was down: HTTP code: 521 Response time: 224 ms Resolved: SMD Altezz Block B is back up in 9e28f8d.
gharchive/issue
2023-06-08T08:46:57
2025-04-01T06:38:07.349185
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/47001", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1758316910
🛑 SBB Touchstone C Block is down In 093271d, SBB Touchstone C Block (https://cloud.tymly.in/status/1184) was down: HTTP code: 521 Response time: 20249 ms Resolved: SBB Touchstone C Block is back up in 7c04d8a.
gharchive/issue
2023-06-15T08:30:14
2025-04-01T06:38:07.351638
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/47606", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1852355985
🛑 White Breeze 2 is down In 5e2d513, White Breeze 2 (https://cloud.tymly.in/status/1097) was down: HTTP code: 521 Response time: 241 ms Resolved: White Breeze 2 is back up in 977731e.
gharchive/issue
2023-08-16T01:18:45
2025-04-01T06:38:07.354229
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/51266", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1865013608
🛑 Madhuban Brindavan B Block is down In 9cfdd1c, Madhuban Brindavan B Block (https://cloud.tymly.in/status/1136) was down: HTTP code: 521 Response time: 242 ms Resolved: Madhuban Brindavan B Block is back up in fa32f0e after 875 days, 18 hours, 44 minutes.
gharchive/issue
2023-08-24T12:04:50
2025-04-01T06:38:07.356730
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/51872", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1873142906
🛑 Sowparnika Ananda is down In 0cf3bc2, Sowparnika Ananda (https://cloud.tymly.in/status/1036) was down: HTTP code: 521 Response time: 20232 ms Resolved: Sowparnika Ananda is back up in 48ebd7d after 26 minutes.
gharchive/issue
2023-08-30T08:00:29
2025-04-01T06:38:07.359113
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/52130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
929780444
🛑 SMR Vinay Cascade A Block is down In b04e0fe, SMR Vinay Cascade A Block (https://cloud.tymly.in/status/1094) was down: HTTP code: 521 Response time: 20235 ms Resolved: SMR Vinay Cascade A Block is back up in 4c230e4.
gharchive/issue
2021-06-25T02:54:24
2025-04-01T06:38:07.361598
{ "authors": [ "bvenkysubbu" ], "repo": "bvenkysubbu/tymlymonitor", "url": "https://github.com/bvenkysubbu/tymlymonitor/issues/681", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1679911003
Add EU smart plug mini MSS315 (Matter) {"uuid":"xxxxxxxxxxxxxxxxxxxxx","onlineStatus":1,"devName":"Reiskocher","devIconId":"device024","bindTime":1681137596,"deviceType":"mss315","subType":"eu","channels":[{}],"region":"eu","fmwareVersion":"9.3.26","hdwareVersion":"9.0.0","userDevIcon":"","iconType":1,"domain":"mqtt-eu-3.meross.com","reservedDomain":"mqtt-eu-3.meross.com","cluster":3,"hardwareCapabilities":[],"firmware":"9.3.26","hbDeviceId":"xxxxxxxxxxxxxxxxxxxxx","model":"MSS315" Hi @DeutscheMark Please install the beta version of the plugin https://github.com/bwp91/homebridge-meross/wiki/Beta-Version Hi @bwp91 Thanks for the fast response and the beta version with support for the MSS315. All the plugs I use are now found by the plugin (hybrid with simple confit) and I can see their current status in Home and turn them on/off. I also have some offline plugs that are ignored as intended. Thank you. 😊
gharchive/issue
2023-04-23T07:55:24
2025-04-01T06:38:07.401890
{ "authors": [ "DeutscheMark", "bwp91" ], "repo": "bwp91/homebridge-meross", "url": "https://github.com/bwp91/homebridge-meross/issues/519", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
302321001
Buttons "Create", "Edit" and "Delete" for template tags are absent for user Buttons "Create", "Edit" and "Delete" tags aren't available for role "User" in Images tab. Steps: Log in and go to Images; Go to Tags tab of any Template or ISO Actual result: Buttons for manage of tags aren't available for a role "User" Expected result: Buttons for manage of tags are available for a role "User" Connected feature: image_tag_create, image_tag_edit, image_tag_delete Screenshot: Test on: tamazlykar/1012-template-tags-action-buttons Regression: image_tag_create, image_tag_edit, image_tag_delete Tested on tamazlykar/1012-template-tags-action-buttons
gharchive/issue
2018-03-05T14:30:20
2025-04-01T06:38:07.405376
{ "authors": [ "rennervo", "tamazlykar" ], "repo": "bwsw/cloudstack-ui", "url": "https://github.com/bwsw/cloudstack-ui/issues/1012", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
793755106
Use with Tails Unfortunately Tails has not the recent Electrum version and since I cannot use pip I cannot either install from source. The only option is to use AppImage, but AFAIK this plugin cannot work with it. There is any workaround to this? (I cannot use the old Electrum, because my watch-only wallet has been created with a recent electrum version and is not compatible) I'm afraid that I'm not aware of a workaround for using the AppImage. It's not just that bwt can't work with it, it cannot be used with external electrum plugins at all. But I'll investigate some more and report back, I might have an idea that could work... I'm afraid that I'm not aware of a workaround for using the AppImage. It's not just that bwt can't work with it, it cannot be used with external electrum plugins at all. But I'll investigate some more and report back, I might have an idea that could work... Okay, so its actually pretty simple! # Extract AppImage (to a subdirectory named 'squashfs-root') $ ./electrum-x.y.z-x86_64.AppImage --appimage-extract # Copy the bwt plugin directory $ cp -r /path/to/bwt squashfs-root/usr/lib/python3.7/site-packages/electrum/plugins # Start Electrum, setup bwt, then run again without --offline ./squashfs-root/AppRun --offline (The --offline thing is unrelated to the AppImage, just a general recommendation to avoid acceidntly connecting to public servers.) Okay, so its actually pretty simple! # Extract AppImage (to a subdirectory named 'squashfs-root') $ ./electrum-x.y.z-x86_64.AppImage --appimage-extract # Copy the bwt plugin directory $ cp -r /path/to/bwt squashfs-root/usr/lib/python3.7/site-packages/electrum/plugins # Start Electrum, setup bwt, then run again without --offline ./squashfs-root/AppRun --offline (The --offline thing is unrelated to the AppImage, just a general recommendation to avoid acceidntly connecting to public servers.) I added some instructions and a small helper script to ease the setup. Thanks for getting me to look into this again! I added some instructions and a small helper script to ease the setup. Thanks for getting me to look into this again! Reopening until you confirm this works for you. Reopening until you confirm this works for you. It worked on various environments that I tried this on, closing.
gharchive/issue
2021-01-25T21:57:15
2025-04-01T06:38:07.411540
{ "authors": [ "shesek", "tiero" ], "repo": "bwt-dev/bwt-electrum-plugin", "url": "https://github.com/bwt-dev/bwt-electrum-plugin/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2633638339
Hitbox being offset and way too large I tried to make a boss that has a custom hitbox using this mod but the hitbox is really large and offset. I used the sample code from the repo's readme but whenever I returned a list with more than one element the hitbox became like the one in the image. I've tried to use extremely small numbers for the hitbox size but it didn't work. I've also customized and completely removed the setDimensions() method for the entity but it also didn't work The entity type I'm using is a PathAwareEntity Hi, can I see the code for your custom collider?
gharchive/issue
2024-11-04T19:17:09
2025-04-01T06:38:07.451391
{ "authors": [ "DevMC7", "byteManiak" ], "repo": "byteManiak/mecha", "url": "https://github.com/byteManiak/mecha/issues/4", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2015614708
Add Divya as recognized contributor I am nominating or self-nominating a Recognized Contributor. Name: Divya Mohan GitHub Username: @divya-mohan0209 Projects/SIGs: SIG-Documentation JCO Nomination Divya has made several contributions to documentation for component docs and JCO. Optional: Endorsements Bailey Hayes (@ricochet) [ ] I have read and understood the qualifications for a Recognized Contributor Thank you @ricochet :heart:
gharchive/pull-request
2023-11-29T01:18:17
2025-04-01T06:38:07.463136
{ "authors": [ "divya-mohan0209", "ricochet" ], "repo": "bytecodealliance/governance", "url": "https://github.com/bytecodealliance/governance/pull/59", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1732008081
aot/jit native stack bound check improvement summary: Move the native stack overflow check from the caller to the callee because the former doesn't work for call_indirect and imported functions. Make the stack usage estimation more accurate. Instead of making a guess from the number of wasm locals in the function, use the LLVM's idea of the stack size of each MachineFunction. The former is inaccurate because a) it doesn't reflect optimization passes and b) wasm locals are not the only reason to use stack. To use the post-compilation stack usage information without requiring 2-pass compilation or machine-code imm rewrites, introduce a global array to store stack consumption of each functions. for JIT, use a custom IRCompiler with an extra pass to fill the array. for AOT, use clang -fstack-usage equivalent instead because we support external llc. Re-implement function call stack usage estimation to reflect the real calling conventions better. (aot_estimate_stack_usage_for_function_call) Re-implement stack estimation logic (--enable-memory-profiling) based on the new machinery. discussions: https://github.com/bytecodealliance/wasm-micro-runtime/issues/2105 todo/known issues/open questions: implement 32-bit case fill the stack_sizes array for jit (use something similar to https://github.com/bytecodealliance/wasm-micro-runtime/pull/2216) fix jit tier up (or confirm it isn't broken) reading the code, i couldn't find anything broken. ensure appropriate jit partitioning (ensure to compile the function body before executing the corresponding wrapper) account caller-side stack consumption (cf https://github.com/bytecodealliance/wasm-micro-runtime/issues/2105#issuecomment-1543533575) what to do for native function calls? do nothing special, at least within this PR fix external llc. pass -fstack-usage to the external command? re-implement enable_stack_estimation based on the new machinary what to do for RtlAddFunctionTable? it seems broken regardless of this PR. but this PR might break it further. i'm not even sure how i can test it. is it for AddVectoredExceptionHandler? see also: https://github.com/bytecodealliance/wasm-micro-runtime/issues/2242 references: https://learn.microsoft.com/en-us/windows/win32/api/winnt/nf-winnt-rtladdfunctiontable https://learn.microsoft.com/en-us/windows/win32/debug/pe-format?redirectedfrom=MSDN#the-pdata-section test this test module consumes about 8MB of stack (wamr aot with llvm 14, amd64) https://github.com/yamt/toywasm/blob/1cc6d551b0fcd10cc8c8b3516c48ba08015e6ad6/wat/many_stack.wat.jinja#L37-L38 worked as expected investigate assertion failure seen with js.wasm investigate app heap corruptions seen with aot https://github.com/bytecodealliance/wasm-micro-runtime/issues/2275 non x86 archs benchmark. noinline can have severe implications for certain type of modules. however, as wasm is usually a compiler target, hopefully fundamental inlining has already been done before aot/jit compilation. https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244#issuecomment-1588769538 remove/disable debug code do something for func_ctx->debug_func. just disable for the wrapper func? add missing error checks reduce code dup. probably make aot_create_func_context use create_basic_func_context. fix errors caused by empty function look at x86_32 failure on the ci Segv https://github.com/bytecodealliance/wasm-micro-runtime/pull/2260 *Nan related issue: an i32.reinterpret_f32 test in conversions.wast was failing. it's x87 flds/fstp which doesn't preserve sNaN. 
the problem is not specific to this PR. it seems working on main branch just by luck. (it would fail if you disable optimizations.) i don't think there's a simple way to fix it w/o changing the aot ABI. https://github.com/bytecodealliance/wasm-micro-runtime/pull/2269 a quick benchmark with coremark # wamr versions # base: 7ec77598dd5c62eafdbe03eca883bc42781f097e # new: be166e8a4fdeef4b941a979d62ab81e62cdf4ddf (https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244) # wamrc options # bc0: --bounds-checks=0 # bc1: --bounds-checks=1 # bc1-emp: --bounds-checks=1 --enable-memory-profiling script: https://gist.github.com/yamt/78de859809694a893b7c7732a1025722 @yamt Do we need to add extra option if we want to enable the new stack overflow check? Or we just use it as normal, e.g. wamrc --target=i386 -o test.aot test.wasm? the same usage as before. @yamt Do we need to add extra option if we want to enable the new stack overflow check? Or we just use it as normal, e.g. wamrc --target=i386 -o test.aot test.wasm? the same usage as before. Got it, thanks. I just had a quick review, it looks good but it is a little complex, I need to read more carefully about the aot_llvm.c and aot_emit_function.c, and do some tests. while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit. while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit. there are at least two problems: with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it. it allocates the area of function call arguments as a part of caller's stack frame. unlike x86, it seems that it's already included the stack size reported by MFI->getStackSize(). while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit. there are at least two problems: with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it. it allocates the area of function call arguments as a part of caller's stack frame. unlike x86, it seems that it's already included by the stack size reported by MFI->getStackSize(). Thanks, do you mean changing size += 16; to size = align_uint(size, 16) doesn't work for xtensa, or this PR has issue for xtensa? while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit. there are at least two problems: with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it. it allocates the area of function call arguments as a part of caller's stack frame. unlike x86, it seems that it's already included by the stack size reported by MFI->getStackSize(). Thanks, do you mean changing size += 16; to size = align_uint(size, 16) doesn't work for xtensa, or this PR has issue for xtensa? this PR. the approach with a wrapper function somehow assumes efficient tail call. while this works for x86. it doesn't seem working well for xtensa. let me investigate a bit. there are at least two problems: * with its windowed abi, tail call elimination is difficult. the xtensa version of llvm doesn't implement it. as a tail call in general seems impossible in xtensa windowed abi, i suspect there is no simple solution. also, riscv seems to prevent tail call optimization in some cases. (when a function have too many parameters to pass via registers?) a possible workaround is to tweak our aot abi to make >N parameters via a pointer like the following. 
that way the stack consumption of the wrapper functions will not be too large even w/o tail call optimization. i feel it's a bit too intrusive though. struct func1_stack_params { arg3 arg4 }; func1(exec_env, arg1, arg2, struct func1_stack_params *) caller() { struct func1_stack_params params; func1(exec_env, arg1, arg2, &params); } * it allocates the area of function call arguments as a part of caller's stack frame. unlike x86, it seems that it's already included by the stack size reported by MFI->getStackSize(). this is just a matter of adding some target dependent code. (eg if (xtensa)) It is good to me if it is only for xtensa 32-bit. For xtensa 64-bit linux/macos, we can also use stack hw boundary check, right? i fixed xtensa case. it's still inefficient, but not broken. while i haven't tested on a real hardware yet, the wamrc output looks reasonable. i fixed xtensa case. it's still inefficient, but not broken. while i haven't tested on a real hardware yet, the wamrc output looks reasonable. lightly tested on esp32-devkitc. it worked as expected so far. i fixed xtensa case. it's still inefficient, but not broken. while i haven't tested on a real hardware yet, the wamrc output looks reasonable. lightly tested on esp32-devkitc. it worked as expected so far. OK, it seems there is no comment from other developers, let's merge this PR? i fixed xtensa case. it's still inefficient, but not broken. while i haven't tested on a real hardware yet, the wamrc output looks reasonable. lightly tested on esp32-devkitc. it worked as expected so far. OK, it seems there is no comment from other developers, let's merge this PR? i have no problem with it
gharchive/pull-request
2023-05-30T10:53:07
2025-04-01T06:38:07.494580
{ "authors": [ "wenyongh", "yamt" ], "repo": "bytecodealliance/wasm-micro-runtime", "url": "https://github.com/bytecodealliance/wasm-micro-runtime/pull/2244", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2645465756
GlobalValueSet was moved to IRPartitionLayer recently, but we have a … …local definition anyway This resolves a compilation error against recent revisions of LLVM GlobalValueSet was moved to IRPartitionLayer recently In which llvm release? WAMR is somewhat sensitive to the version of LLVM. Currently WAMR depends on LLVM 15.x It was moved about a month ago in https://github.com/llvm/llvm-project/commit/04af63b267c391a4b0a0fb61060f724f8b5bc2be. Internally at Google we build WAMR against LLVM at approximately HEAD. In the above change, GlobalValueSet is now using GlobalValueSet = std::set<const GlobalValue *>; inside IRPartitionLayer. I think my change is safe, to my non-expert eyes the definition of GlobalValueSet is the same before and after this change.
gharchive/pull-request
2024-11-09T01:47:38
2025-04-01T06:38:07.498619
{ "authors": [ "lum1n0us", "sjamesr" ], "repo": "bytecodealliance/wasm-micro-runtime", "url": "https://github.com/bytecodealliance/wasm-micro-runtime/pull/3899", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2376259543
What should the default behavior of wasmtime serve be with scheme/authority? My changes in https://github.com/bytecodealliance/wasmtime/pull/8861 introduced a change in the default behavior of wasmtime serve. Notably this program: use wasi::http::types::*; struct T; wasi::http::proxy::export!(T); impl wasi::exports::wasi::http::incoming_handler::Guest for T { fn handle(request: IncomingRequest, outparam: ResponseOutparam) { println!("request.method = {:?}", request.method()); println!("request.scheme = {:?}", request.scheme()); println!("request.authority = {:?}", request.authority()); let resp = OutgoingResponse::new(Fields::new()); ResponseOutparam::set(outparam, Ok(resp)); } } (compiled component) When run with wasmtime serve and hit with curl http://localhost:8080 it prints: Serving HTTP on http://0.0.0.0:8080/ stdout [0] :: request.method = Method::Get stdout [0] :: request.scheme = Some(Scheme::Http) stdout [0] :: request.authority = Some("localhost:8080") On main, however, it prints Serving HTTP on http://0.0.0.0:8080/ stdout [0] :: request.method = Method::Get stdout [0] :: request.scheme = None stdout [0] :: request.authority = None This regression is due to these changes because I didn't understand what they were doing. Now why wasn't this caught by the test suite? I tried writing a test for this and it passed, but apparently it's due to our usage of hyper::Request::builder().uri("http://localhost/") in the test suite. That creates an HTTP requests that looks like: GET http://localhost/ HTTP/1.1 ... where using curl on the command line generates: GET / HTTP/1.1 ... That leads me to this issue. What should scheme and authority report in these two cases for wasmtime serve by default? The previous behavior means that GET / could not be distinguished from GET http://localhost/ which naively seems like what scheme and authority are trying to map to. Is the previous behavior of wasmtime serve buggy? Is the current behavior buggy? Should the spec be clarified? cc @elliottt @pchickey I'll note that the difference can be seen with: $ curl -v --request-target http://localhost/ http://localhost:8080 vs $ curl -v http://localhost:8080 in terms of how the headers are set. The former is basically what our test suite does while the latter is what the curl command line does by default. The former (full URL in request) is incorrect; that form is only applicable to CONNECT methods. The former (full URL in request) is incorrect; that form is only applicable to CONNECT methods. Edit: looks like I might be wrong about it being incorrect per se; it might just be very uncommon. That makes sense, and means we should probably update our tests, but I guess I'm also curious still what the behavior here should be. For example why do scheme and authority return an Option at the WIT level? Are they intended to map to this or is it expected that they're effectively always Some? scheme is derived from out of band info: whether the request came in over TLS or not my take is that we should have wasi:http either always provide an authority, or provide the host header. Otherwise there's no standard way for content to learn about the authority it's called under. I know that @lukewagner concluded that we must never provide the host header. If that stands, I conversely think that we must continue providing the authority for incoming requests. 
In effect, that means that for incoming requests what content sees is always the absoluteURI form, which the RFC seems to indicate is the way forward, too: To allow for transition to absoluteURIs in all requests in future versions of HTTP, all HTTP/1.1 servers MUST accept the absoluteURI form in requests, even though HTTP/1.1 clients will only generate them in requests to proxies. Don't pay too much attention to the 1.1 spec for future direction. HTTP/2 makes authority even more special by splitting it into a "pseudo-header". Great question. First of all, from asking some HTTP folks, I believe it is the case that we could tighten the spec wording to say that for methods other than CONNECT and OPTIONS, there must always be an authority (i.e., the return value is some). Apparently, CONNECT and OPTIONS have an unfortunate * option that simply has no authority. Next, my understanding from RFC 9110 is that the authority either comes from the :authority pseudo-header in H/2/3 or the Host header in H/1, and if both are present, they are not allowed to disagree (and this is Web-compatible). Thus, I think what WASI HTTP should say is that: Host is in the definitely-forbidden headers list The request.authority field is derived in a transport-dependent manner (and required to be present for non-CONNECT/OPTIONS) This allows the host implementation to do the transport-appropriate thing for requests coming in or out over the wire. Ok so for the use case of wasmtime serve specifically: [method]incoming-request.scheme is always some(http) because we don't implement https yet [method]incoming-request.authority uses the incoming URI's host if it's there (probably only for CONNECT and OPTIONS). Otherwise it uses Host, otherwise it returns .... None? "127.0.0.1"? The value of --addr? This all indicates to me that new_incoming_request should take both a scheme and an authority argument rather than inferring these from the Request as well? (sorry I'm really looking for guidance/second opinions here, I don't know why things were originally constructed the way they are or if they're just how things got shaken out) otherwise it returns .... None? Do we want/need to support HTTP/1.0? afaik host is mandatory for 1.1 Ah ok I didn't realize that was a 1.0 thing. Sounds like it should search for Host as a header in the host-side hyper::Request for the authority if it's not present in the URI. I try to make a PR with these changes tomorrow. The impression that I got talking to an HTTP server maintainer is that it's web-compatible to require the Host field (rejecting if it's absent in HTTP 1.0 or 1.1) and that allowing an empty authority in cases other than CONNECT/OPTIONS can transitively lead to security issues (random googling found this e.g.). Similarly, RFC 9110 (which also intends to be Web-compatible) says that there MUST be a Host header (when there is no :authority pseudo-header) without making an exception for HTTP 1.0. Thus, I'd suggest making it an error in wasmtime serve if there is no Host in HTTP 1.0, at least to start with, and see if anyone complains.
gharchive/issue
2024-06-26T20:50:40
2025-04-01T06:38:07.516135
{ "authors": [ "alexcrichton", "lann", "lukewagner", "tschneidereit" ], "repo": "bytecodealliance/wasmtime", "url": "https://github.com/bytecodealliance/wasmtime/issues/8878", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1342775765
为什么不用透明的webm格式呢? 透明的webm格式大小占用非常小,为什么不采用这种方案呢? 急急急!!! 大佬,我们现在公司就需要用webm透明通道的视频播放,背景透明,,这个你有方案吗???
gharchive/issue
2022-08-18T08:54:21
2025-04-01T06:38:07.517784
{ "authors": [ "TeeMoYan", "ke112" ], "repo": "bytedance/AlphaPlayer", "url": "https://github.com/bytedance/AlphaPlayer/issues/83", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2092538955
fix: BufReader should not panic when used after cancellation If a BufReader::fill_buf() call is cancelled when the internal buffer is held by .read(), re-allocate the buffer when used in the future. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it. Closing this - simply re-allocating the buffer is incorrect because cancellation is asynchronous and this can result in lost data. The error message could be improved though...
gharchive/pull-request
2024-01-21T11:14:41
2025-04-01T06:38:07.520581
{ "authors": [ "CLAassistant", "losfair" ], "repo": "bytedance/monoio", "url": "https://github.com/bytedance/monoio/pull/226", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1780151232
Undefined symbols for architecture x86_64 Hello.When I use sonic which version is 1.9.2,I get an error 'Undefined symbols for architecture x86_64'.I test this problem in linux and mac.My go version is 1.17.11.I don't know how to solve this problem Please describe your question in more details. Maybe you mean decoder.SyntaxError, it works now on v1.10.0-rc
gharchive/issue
2023-06-29T05:21:00
2025-04-01T06:38:07.522790
{ "authors": [ "AsterDY", "chenzhuoyu", "zacharytse" ], "repo": "bytedance/sonic", "url": "https://github.com/bytedance/sonic/issues/472", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
225336601
Update to ActorDB 0.10.25. Updated ActorDB to 0.10.25 and dumb-init to 1.20 Reviewed and tested, merging. Thanks @VisualFox!
gharchive/pull-request
2017-04-30T16:09:31
2025-04-01T06:38:07.548630
{ "authors": [ "VisualFox", "ianmjones" ], "repo": "bytepixie/actordb-for-docker", "url": "https://github.com/bytepixie/actordb-for-docker/pull/4", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2055597140
🛑 byzocker.de is down In 1e5a18e, byzocker.de (https://byzocker.de) was down: HTTP code: 502 Response time: 653 ms Resolved: byzocker.de is back up in 1c17927.
gharchive/issue
2023-12-25T10:36:23
2025-04-01T06:38:07.551679
{ "authors": [ "ByZockerBot" ], "repo": "byzocker-de/status.byzocker.de", "url": "https://github.com/byzocker-de/status.byzocker.de/issues/633", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1452292597
SeaExplorer delayed mode time series data loss In testing the new code for this pull request, I found an issue with processing the delayed mode SeaExplorer time series data for missions for which certain sensors (oxygen in this case) are severely oversampled. These missions end up with delayed mode data files that contain fewer actual (non-nan) data points than the realtime files. In other words, we are losing data during the processing. Currently, the dropna function is used to remove the oversampled oxygen data when converting the raw data. The dropna function is working correctly, however note that the resulting data has many nan values in it, for both the CTD and optics. These nan values will often not co-occur. I think the problem in the processing is caused by using the GPCTD_TEMPERATURE as the default time base in seaexplorer.py. This variable contains nan values that are not all co-located with the nan values in the oxygen and optical variables. It's desirable to use the CTD as the time base, but we may need to do some interpolation to avoid losing data when the other variables are mapped onto this base.
We have had similar issues with our delayed mode datasets in the past. To get around it, we use NAV_LATITUDE as the timebase in all of our datasets, as this has a non-nan value for every line. Here's an example yml for one of our gliders with a GPCTD. https://github.com/voto-ocean-knowledge/deployment-yaml/blob/master/mission_yaml/SEA70_M15.yml Would this solution work for you? This also raises the broader point that perhaps we should have a test with delayed mode data, as currently all tests are run with nrt data.
We use NAV_LATITUDE as the timebase in conjunction with keep_variables that account for each of the sensors. In this way, all data pass the dropna function, but rows with no data from any of the sensors are dropped by the 'keep_variables' in the ncvar block.
I'm not sure if this is related, but I had previously noticed that the timestamps recorded for sensors that are known to be outputting at 1Hz (according to the sensor) are never exactly 1Hz, likely due to the time that they arrive at the payload computer -- in other words, timing information from the sensor is not recorded by the PLD but it assigns its own. The result is that two different sensors sampling at the same rate (e.g. a CTD and an optical O2 sensor) end up with times that are different by microseconds from one sample to the next, and even though we expect them to be "simultaneous", in terms of the recorded time stamps, they aren't.
For our data, we have the situation where indeed "Nav" is written every heartbeat; certainly though there is no new data in Nav. Some heartbeats have no data at all, some have CTD data (all variables), some have Optical, some have O2. Sometimes these line up, and currently we only save when the other sensors line up with the CTD sensors. I think what needs to happen is that for each instrument we need a timeseries, and then we need to align it with the timebase. For our SeaExplorers, that time base is most naturally the CTD. The other samples may be offset from that a bit, but the CTD is sampling at 2 Hz, and I don't think we care about the other sensors' slight phase errors at less than 2 Hz. I haven't explored how to do this with polars, but in xarray you would just do an interp operation before filling it into the parent.
So in pseudocode:
time, ctd_temp = decode('ctd_temp', drop_na=True)
ds['time'] = ('time', time)
ds['temperature'] = ctd_temp
# etc other ctd variables
time_O2, O2_sat = decode('O2_sat', drop_na=True)
ds['O2'] = np.interp(time, time_O2, O2_sat)
... # etc other O2 variables
I don't think this step would slow things down very much, and I think linear interpolation should be pretty OK from a data point of view. A second option would be just to save the three instruments as separate polars arrays as raw output and then merge as a second step. That would allow double checking the raw data. However, I think the raw SeaExplorer data is simple enough that it's pretty usable as-is for any debugging. I'll ping about this. Is there a consensus about how to proceed?
I think what needs to happen is that for each instrument we need a timeseries, and then we need to align it with the timebase. For our SeaExplorers, that time base is most naturally the CTD. The other samples may be offset from that a bit, but the CTD is sampling at 2 Hz, and I don't think we care about the other sensors' slight phase errors at less than 2 Hz.
I agree with this approach, with the clarification (not really important for the decision or discussion) that I'm 99% sure the GP-CTD samples internally at a max of 1Hz. That's not true for a legato, which can be programmed to sample much faster (up to 16Hz), though the SeaExplorer PLD can only be configured to sample at 1Hz or "as fast as possible" (which IIRC is something like 20Hz)
@callumrollo any objections to this? I can take a crack at doing it the next few days. @hvdosser, do we have a link to some data where this is problematic?
The delayed L0-timeseries (dfo-bb046-20200908_delayed.nc) for this mission is a nice example of the problem.
Is there a time, or ideally a set of raw files where this problem occurred? I couldn't replicate, though I couldn't get it to work at all with the setup in that directory.
Sorry for the delay on this, just got back from vacation. The idea of aligning timebases sounds good to me. Should we linearly interpolate, or do a nearest neighbour? I'm always a little cautious about interpolating data, especially as it would lead to some strange results with integer data like raw counts for chlorophyll etc. If we want to do it at the raw to raw nc step I can look at implementing it in polars tomorrow.
OK, I ran the whole data set, and can see the problem now. @callumrollo if you wanted to do this, happy to let you. I'm less nervous about linear interpolation - nearest neighbour leads to phase errors. However, if there is a difference for other data types, maybe we need an entry in the yml metadata that says which interpolation is used?
The tests are currently failing on the main branch. Looks like this was caused by something in #129. That PR was only for slocum data, so I'll work from the last commit where all the tests cleared for this PR. I've downloaded the dataset that @hvdosser indicated and will use that as a test case for timestamp alignment.
@callumrollo I think the tests should be OK on main now (I hope). I think we need to do some work on the test infrastructure. In particular, we should force rewriting of the files we compare with. Right now the way the processing works is it does incremental.
Thanks for working to get the tests passing again @jklymak. I'll start on resolving this Issue now. I agree on forced reprocessing in the tests. Would using the incremental=False flag suffice for this?
@hvdosser I think the keep_vars functionality present in pyglider already could solve this problem to first order. Is it something you're using? I've put a demo together of the difference between using CTD as a timebase and using NAV_LATITUDE as a timebase then cutting down to the rows where at least one of the sensors has a sample. It's not perfect, but has been working pretty well for us so far. https://github.com/callumrollo/keep_vars_experiment/blob/main/timebase_keep_experiment.ipynb Sorry, it's a bit of a rushed Friday afternoon job!
OK, I forgot about this, and I'm not sure it's documented. What does this do exactly?
I haven't had a chance to look in detail yet, but the first thing I'd check is whether this works for a delayed-mode dataset. We didn't see much of an issue with the realtime data.
Looking at it quickly, it seems to keep the data if any of the listed sensors are present in a line. I guess this is a philosophical thing - do we want all the raw data in a time series, which means any given sensor is riddled with NaN, or do we want time series where the sensors are time-aligned to one sensor's time base? I guess I'll argue for the latter. If someone needs the time series with raw O2, for instance, they can rerun the processing with just that variable, and that variable as the timebase. Or they can load the raw parquet files. I think by the time we make the first time series netcdf, it should be time-aligned, and not full of NaNs. This seems particularly apropos for the O2 and the optics on the SeaExplorer, which so far as I can tell are often ludicrously oversampled? But happy to discuss further.
I agree we need a way to reduce the size of the final timeseries while keeping as much science data as possible. I've been white-boarding this morning to represent the way pyglider currently does this for seaexplorer data, and what potential improvements could look like. Here's some ASCII art:
Key
Time: PLD_REALTIMECLOCK
nav: NAV_LATITUDE
ctd: LEGATO_TEMPERATURE or GPCTD_TEMPERATURE
oxy: e.g. AROD_FT_DO
nitrate: e.g. NITRATE_CONCENTRATION or other rarely sampling sensor
X: original data
-: no data
I: interpolated data
N: nearest neighbour data
Input pld1 file
Time nav ctd oxy nitrate
0    X   X   -   X
1    X   -   X   -
2    X   -   X   -
3    X   X   -   -
4    X   -   X   -
5    X   -   X   -
6    X   X   -   -
7    X   -   -   X
8    X   -   X   -
9    X   -   X   -
10   X   X   -   -
Current output options
timebase: ctd
Time nav ctd oxy nitrate
0    X   X   -   X
3    X   X   -   -
6    X   X   -   -
10   X   X   -   -
This illustrates @hvdosser's problem where oxygen is on a slightly offset timestamp to ctd so it is all dropped.
timebase: nav, keep_vars: [ctd, oxy]
Time nav ctd oxy nitrate
0    X   X   -   X
1    X   -   X   -
2    X   -   X   -
3    X   X   -   -
4    X   -   X   -
5    X   -   X   -
6    X   X   -   -
8    X   -   X   -
9    X   -   X   -
10   X   X   -   -
With keep_vars, as @jklymak identified, any line with a non-nan value for one of the sensors in keep_vars is kept. Resultant files are big, particularly if one of the keep_vars has a high sampling rate. This is the method we currently implement at VOTO, as the scientists we supply data to don't want any interpolation of data. Some of our delayed mode datasets are almost 10 GB though! Not ideal.
Potential improvements
timebase: ctd, linear interpolation
Time nav ctd oxy nitrate
0    X   X   I   X
3    X   X   I   I
6    X   X   I   I
10   X   X   I   I
This solves the problem nicely for oversampling sensors like oxygen. However, this would be a severe distortion of e.g. a methane sensor that records a sample every 10 minutes and now has an apparent sample every second.
timebase: ctd, nearest neighbour
Time nav ctd oxy nitrate
0    X   X   N   X
3    X   X   N   -
6    X   X   N   -
10   X   X   N   N
This avoids over-interpolation of slow sampling sensors, but the downsampling of faster sampling sensors may be less preferable. May also be more complex to implement. Whatever we decide to do going forward, I recommend that it is either controllable from the yaml, like the keep_vars solution, or operates on the l0 timeseries as an extra step, so that an end user can get the final l0 timeseries without any interpolation if they want. We should also explain these various options in the documentation. Perhaps using diagrams like the ones in this comment.
Folks who don't want any interpolation or alignment of sensors have two options (that I agree should be documented). I would argue that the raw merged parquet file is what the folks who don't want any interpolation are after. That, to my knowledge, doesn't drop any data or do any processing? They can make an oxy.yaml that aligns everything to oxygen instead of the ctd, and then they have the best of both worlds - pure oxygen data, and interpolated quantities of everything else. For interpolation options, totally fine with both linear and nearest. I'd never use nearest, but... The original reason for using the CTD for our data sets was that indeed oxygen had much more data, but it was all repeated and was really being sampled at 1 Hz or 2 Hz or something, and seemed a silly error of Alseamars to be sampling it at 48 Hz or whatever they were doing. Of course if someone has a real data set that needs sampling at higher frequency than the CTD, they should be using that as the timebase.
This sounds like a good way to implement the interpolation. I've started working on a PR.
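For readers who want to see what the two "potential improvements" above look like in code, here is a small, self-contained Python sketch. It is illustrative only - not pyglider's implementation - and the function and variable names are invented for the example.

# Illustrative sketch (not pyglider code) of aligning a sensor onto the CTD timebase,
# with the two options discussed above: linear interpolation or nearest neighbour.
import numpy as np

def align_to_timebase(time_ctd, time_sensor, values_sensor, method="linear"):
    """Map a sensor series onto the CTD timestamps.

    time_ctd, time_sensor : 1-D increasing arrays of timestamps (e.g. seconds since epoch)
    values_sensor         : sensor samples at time_sensor (NaNs already dropped)
    method                : "linear" interpolation or "nearest" neighbour
    """
    time_ctd = np.asarray(time_ctd, dtype=float)
    time_sensor = np.asarray(time_sensor, dtype=float)
    values_sensor = np.asarray(values_sensor)
    if method == "linear":
        # Reasonable for oversampled sensors, e.g. oxygen reported far faster than it updates.
        return np.interp(time_ctd, time_sensor, values_sensor)
    if method == "nearest":
        # Better suited to slow sensors, e.g. a nitrate sample every few minutes.
        idx = np.clip(np.searchsorted(time_sensor, time_ctd), 1, len(time_sensor) - 1)
        left, right = time_sensor[idx - 1], time_sensor[idx]
        idx -= (time_ctd - left) < (right - time_ctd)  # step back when the left sample is closer
        return values_sensor[idx]
    raise ValueError(f"unknown method: {method}")

For the linear case this is essentially the np.interp/xarray suggestion made earlier in the thread; inside a raw-to-raw step, something like an as-of join in polars could play the "nearest" role instead.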
gharchive/issue
2022-11-16T21:24:36
2025-04-01T06:38:07.601092
{ "authors": [ "callumrollo", "hvdosser", "jklymak", "richardsc" ], "repo": "c-proof/pyglider", "url": "https://github.com/c-proof/pyglider/issues/128", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
927241606
sciter.js using console.log cannot exceed 11 lines When debugging with usciter, console.log output cannot exceed 11 lines; otherwise it is not displayed. Fixed already.
gharchive/issue
2021-06-22T13:34:24
2025-04-01T06:38:07.607931
{ "authors": [ "c-smile", "zyxk" ], "repo": "c-smile/sciter-js-sdk", "url": "https://github.com/c-smile/sciter-js-sdk/issues/125", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
1443975313
JSON Agent This PR implements a fully-fledged JSON Agent deployment with HTTP and MQTT transports as #185 requested. Specifically, the following got implemented. Manifests to run JSON Agent in a K8s cluster. The agent comes with an internal endpoint for provisioning and two external endpoints to get JSON device data in, one over HTTP, the other over MQTT---see #181 for our current MQTT setup. Routing. We expose the HTTP endpoint at <kitt4sme-base-url>/jsonagent/ so devices can POST JSON data at <kitt4sme-base-url>/jsonagent/iot/json?k=<api-key>&i=<device-id>. To send JSON data over MQTT, devices use the MQTT WebSocket implemented in #181 and the topic: json/<api-key>/<device-id>/attrs. Security. Istio handles TLS termination for the HTTP endpoint and delegates security decisions to the existing FIWARE OPA policy. Security over MQTT works as explained in #181. In-memory configuration. The service pre-loads device mappings and other config in memory to speed up JSON to FIWARE data translation---see config.js. Persistence. No Mongo DB backend. Data sits in memory (it isn't much actually) but it's loaded from K8s configmaps, so that's the persistence backend. (If the pod restarts, config map data gets loaded again in memory; all service pods share the same config map data.) IaC management. The manifests of all agent-related resources are tied to an Argo CD app in the mesh infra project so we can easily manage deployments through a GUI too. Simplified config model. All the agent config sits in our repo, including devices and mappings. Argo CD automatically deploys this data to the configmaps the service uses as a persistence backend---see point above. No need to fiddle with awkward service calls to figure out what devices have been defined, what their mappings are, or to add new devices. It's all defined in config.js in our repo. Look at that file to figure out the lay of the land at a glance. Edit it to add or modify device defs, mappings etc. Argo CD takes care of propagating your changes to the cluster. Demo Here's a sum up of how to demo most of what this PR implemented. First off, build your own KITT4SME cluster in a Multipass VM as explained in the bootstrap procedure. Wait a bit until all services are ready. The commands below use the 192.168.64.20 IP address to reach the Multipass VM; replace 192.168.64.20 in each command with your VM's IP. IaC Log into Argo CD. There's a json-agent app among the platform infra services. Check out all the resources deployed and have a look at the logs. Open the config map, you should be able to see the exact same content as in the config.js included in this PR. Provisioning Provision a service with two devices, one sending data over MQTT and the other over HTTP. Notice it's best to do that in config.js, but we'll do that manually here to speed things up. So as a rule, there would be no need to expose JSON Agent's provisioning API. Since we're bending the rules here, port-forward the agent's provisioning port. $ kubectl port-forward svc/jsonagent 4041:4041 On to the provisioning business. Create a service with an API key of gr33t, entity type of Greeting and HTTP endpoint of /iot/d. $ curl -iX POST 'http://localhost:4041/iot/services' \ -H 'Content-Type: application/json' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' \ -d '{ "services": [ { "apikey": "gr33t", "entity_type": "Greeting", "resource": "/iot/json" } ] }' Create two devices to send a greeting message. 
The message is in the JSON format: { "w": data }, where data is a greeting string the device sends. The corresponding NGSI entity has type Greeting and a words attribute holding the actual greeting. The first device has an ID of greeter001 and sends its data over MQTT, whereas the second has ID greeter002 and sends data over HTTP. $ curl -iX POST \ 'http://localhost:4041/iot/devices' \ -H 'Content-Type: application/json' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' \ -d '{ "devices": [ { "device_id": "greeter001", "entity_name": "urn:ngsi-ld:Greeting:001", "entity_type": "Greeting", "protocol": "PDI-IoTA-JSON", "transport": "MQTT", "attributes": [ { "object_id": "w", "name": "words", "type": "Text" } ] } ] } ' $ curl -iX POST \ 'http://localhost:4041/iot/devices' \ -H 'Content-Type: application/json' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' \ -d '{ "devices": [ { "device_id": "greeter002", "entity_name": "urn:ngsi-ld:Greeting:002", "entity_type": "Greeting", "protocol": "PDI-IoTA-JSON", "transport": "HTTP", "attributes": [ { "object_id": "w", "name": "words", "type": "Text" } ] } ] } ' With this setup, greeter001 is expected to send its UL payload to the MQTT topic json/gr33t/greeter001/attrs whereas greeter002 is supposed to POST its JSON payload to the URL http://192.168.64.20/jsonagent/iot/d?k=gr33t&i=greeter002 since our Istio config routes /jsonagent/<rest> to /<rest> on port 7896 of the jsonagent service. Finally, check you can retrieve the service and devices you've just created. $ curl 'http://localhost:4041/iot/services' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' {"count":1,"services":[{"apikey":"gr33t","resource":"/iot/json","service":"greeting","subservice":"/","_id":1,"creationDate":1668087518563,"entity_type":"Greeting"}]} $ curl 'http://localhost:4041/iot/devices' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' {"count":2,"devices":[{"device_id":"greeter001","service":"greeting","service_path":"/","entity_name":"urn:ngsi-ld:Greeting:001","entity_type":"Greeting","transport":"MQTT","attributes":[{"object_id":"w","name":"words","type":"Text"}],"commands":[],"static_attributes":[],"protocol":"PDI-IoTA-JSON","explicitAttrs":false},{"device_id":"greeter002","service":"greeting","service_path":"/","entity_name":"urn:ngsi-ld:Greeting:002","entity_type":"Greeting","polling":true,"transport":"HTTP","attributes":[{"object_id":"w","name":"words","type":"Text"}],"commands":[],"static_attributes":[],"protocol":"PDI-IoTA-JSON","explicitAttrs":false}]} Sending device data over MQTT We're going to use an external WebSocket client to simulate device data coming in over MQTT. Browse to http://www.emqx.io/online-mqtt-client. Hit the "New Connection" button and enter the following data: name=kitt4sme, client-id=tasty, host=ws://192.168.64.20, path=/mqtt/, port=80, username=iot. You also need to enter the "iot" user's password, which I can't type here obviously. Hit connect, then send the following JSON message to the json/gr33t/greeter001/attrs topic: { "w": "howzit!" }. Check the "howzit!" greeting trekked all the way to Orion. It should be stored in the "Greeting" entity having an ID of urn:ngsi-ld:Greeting:001. $ curl \ 'http://192.168.64.20/orion/v2/entities/urn:ngsi-ld:Greeting:001/attrs/words/value' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' "howzit!" Sending device data over HTTP Let's also send a greeting from greeter002. 
This device sends its data over HTTP $ curl -iX POST \ 'http://192.168.64.20/jsonagent/iot/json?k=gr33t&i=greeter002' \ -H 'Content-Type: application/json' \ -d '{ "w": "ahoy, matey!" }' HTTP/1.1 403 Forbidden date: Thu, 10 Nov 2022 14:00:52 GMT server: istio-envoy content-length: 0 What?! Yep, that's right. Our FIWARE OPA policy checks you've got a valid JWT token, since we didn't have one, we got shown the door. How rude. Well, it's too much of a mission to get a token, so let's zap the security policy $ kubectl -n istio-system delete authorizationpolicy/fiware-opa and try again $ curl -iX POST \ 'http://192.168.64.20/jsonagent/iot/json?k=gr33t&i=greeter002' \ -H 'Content-Type: application/json' \ -d '{ "w": "ahoy, matey!" }' This time the POST goes through and Orion gets our friendly greeting $ curl \ 'http://192.168.64.20/orion/v2/entities/urn:ngsi-ld:Greeting:002/attrs/words/value' \ -H 'fiware-service: greeting' \ -H 'fiware-servicepath: /' "ahoy, matey!" As expected, the greeting got stored in the "Greeting" entity having an ID of urn:ngsi-ld:Greeting:002. Cool bananas!
gharchive/pull-request
2022-11-10T14:10:45
2025-04-01T06:38:07.627151
{ "authors": [ "c0c0n3" ], "repo": "c0c0n3/kitt4sme.live", "url": "https://github.com/c0c0n3/kitt4sme.live/pull/184", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1100813961
🛑 www.ozpsav.sk is down In 6dea76f, www.ozpsav.sk (https://www.ozpsav.sk/sk/Odborovy-zvaz.alej) was down: HTTP code: 403 Response time: 1184 ms Resolved: www.ozpsav.sk is back up in 51ed266.
gharchive/issue
2022-01-12T21:36:57
2025-04-01T06:38:07.636768
{ "authors": [ "c1rus" ], "repo": "c1rus/uptime", "url": "https://github.com/c1rus/uptime/issues/1348", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1161071606
🛑 cms5.alejtech.eu is down In 8cddacf, cms5.alejtech.eu (http://cms5.alejtech.eu/UserSystem/login/login.alej) was down: HTTP code: 403 Response time: 479 ms Resolved: cms5.alejtech.eu is back up in 6640b71.
gharchive/issue
2022-03-07T08:37:35
2025-04-01T06:38:07.639861
{ "authors": [ "c1rus" ], "repo": "c1rus/uptime", "url": "https://github.com/c1rus/uptime/issues/2162", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1190637415
🛑 www.leopoldov.sk is down In 2e3051d, www.leopoldov.sk (https://www.leopoldov.sk/) was down: HTTP code: 403 Response time: 932 ms Resolved: www.leopoldov.sk is back up in 5fa2d4a.
gharchive/issue
2022-04-02T13:42:30
2025-04-01T06:38:07.642932
{ "authors": [ "c1rus" ], "repo": "c1rus/uptime", "url": "https://github.com/c1rus/uptime/issues/2965", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
324668178
Hide private symbols (issue #33). Hi, this is a PR for patch in https://github.com/cacalabs/libcaca/issues/33 Thanks for the PR.
gharchive/pull-request
2018-05-19T23:25:50
2025-04-01T06:38:07.704314
{ "authors": [ "samhocevar", "yugr" ], "repo": "cacalabs/libcaca", "url": "https://github.com/cacalabs/libcaca/pull/34", "license": "WTFPL", "license_type": "permissive", "license_source": "github-api" }
1914422801
🛑 CBTB Coffee House Shop is down In 5fc484c, CBTB Coffee House Shop (https://shop.cbtb.coffee/) was down: HTTP code: 403 Response time: 865 ms Resolved: CBTB Coffee House Shop is back up in a175878 after 13 minutes.
gharchive/issue
2023-09-26T23:10:52
2025-04-01T06:38:07.706758
{ "authors": [ "rcwombat" ], "repo": "cachetech/service-status", "url": "https://github.com/cachetech/service-status/issues/3505", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1993560072
🛑 CBTB Coffee House Shop is down In e3c7417, CBTB Coffee House Shop (https://shop.cbtb.coffee/) was down: HTTP code: 403 Response time: 1383 ms Resolved: CBTB Coffee House Shop is back up in 0489b98 after 16 minutes.
gharchive/issue
2023-11-14T20:58:02
2025-04-01T06:38:07.709178
{ "authors": [ "rcwombat" ], "repo": "cachetech/service-status", "url": "https://github.com/cachetech/service-status/issues/3931", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2189861946
🛑 CBTB Coffee House Shop is down In 8317883, CBTB Coffee House Shop (https://shop.cbtb.coffee/) was down: HTTP code: 403 Response time: 1220 ms Resolved: CBTB Coffee House Shop is back up in 59c1bd6 after 12 minutes.
gharchive/issue
2024-03-16T08:35:59
2025-04-01T06:38:07.712469
{ "authors": [ "rcwombat" ], "repo": "cachetech/service-status", "url": "https://github.com/cachetech/service-status/issues/5399", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
53579515
[WIP] Notifications DO NOT MERGE THIS YET. IT'S TOTALLY UNREADY! @Crash-- You've misused migrations here. That belongs in a seeder. @GrahamCampbell OK but I absolutely need these data in the DB. Without, we'll have a problem, this is why I put it into a migration because migrations are mandatory and seeds are optional, right? @Crash-- you can still put the data into tables, but you need to look doing that in a seeder, not the migration. @jbrooksuk Yes, but in the Cachet's documentation we can read " then you may want to seed the database with some example data." and from Laravel's "with test data using seed classes". But in my case, these datas are needed at the beginning. I can ofc change this behavior, but I'll add more lines of code :( @GrahamCampbell Can you close this PR? I'll open a new one with several fixes and improvements. @Crash-- adding more lines of code isn't a problem if it's in the right place. If you're relying on default values to be inserted before the service can run, then you need to add error handling to only look for the service if the service is indeed setup. This needs rebasing and needs all language filled synced again. Travis is failing because of a package not being able to download: - Installing symfony/process (v2.5.8) Cloning 62c77d834c6cbf9cafa294a864aeba3a6c985af3 Failed to download symfony/process from source: Failed to clone git@github.com:symfony/Process.git via git, https, ssh protocols, aborting. Hmm. I see nothing on github/travis status pages about this. Also I saw "Could not authenticate github.com" Do we have to add an API key or something (I've never had to). No, not at all. Travis has just f***ed up somewhere. Bizarre. They need to run Cachet and let us know about this kind of shenanigans! Maybe we should tweet to them to let them know this is happening? Bizarre. They need to run Cachet and let us know about this kind of shenanigans! lol Will do from the Cachet account. Done :) Needs rebasing. Rebased but not squashed yet. @GrahamCampbell please tell me that I didn't !@#$ it up this time? @GrahamCampbell are you ok continuing work on this one? If not, what's left? @GrahamCampbell Maybe this will help https://github.com/dinkbit/notifyme @joecohens how do you want to do this? Close this PR and implement fresh with notifyme, or modify this one? I think more than half of this branch is reusable. I'll take it form here :) @cachethq/owners Closing this for now, I'm starting fresh. :+1:
gharchive/pull-request
2015-01-06T23:53:41
2025-04-01T06:38:07.721818
{ "authors": [ "Crash--", "GrahamCampbell", "jbrooksuk", "joecohens" ], "repo": "cachethq/Cachet", "url": "https://github.com/cachethq/Cachet/pull/317", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
583898123
pruebafinal kisto confirmar
gharchive/pull-request
2020-03-18T17:41:45
2025-04-01T06:38:07.723969
{ "authors": [ "cacoronel" ], "repo": "cacoronel/github-slideshow", "url": "https://github.com/cacoronel/github-slideshow/pull/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1696397086
Rationale of :443 in ":443, example.com" The doc says In the Caddyfile the addresses must start with :443 for the forward_proxy to work for proxy requests of all origins. Could you help further clarify? I thought example.com alone should have both 80 and 443 covered. The magic :443, example.com looks like a self-contradiction to me.
It's not a contradiction. (But this is a good question.) A site block name in the Caddyfile serves three purposes (somewhat regrettably):
1. To tell the web server what port to listen on
2. To tell the web server what domain name(s) to manage certs for
3. To tell the web server how to route HTTP requests
In most cases, these correlate and align identically as long as we assume the default port(s) of 80/443: you can tell the server you have example.com and it will listen on 443, get a cert for example.com, and serve HTTP requests with a Host header of example.com accordingly. But when you're running a forward proxy, the Host header can contain basically anything, so you need to listen on :443 to not black-hole those HTTP requests (no. 3). But without a domain name it can't get a cert (no. 2), so you need to tell it which certificate to serve in the TLS handshake. Hence, both :443, example.com.
gharchive/issue
2023-05-04T17:21:11
2025-04-01T06:38:07.755587
{ "authors": [ "Lingxi-Li", "mholt" ], "repo": "caddyserver/forwardproxy", "url": "https://github.com/caddyserver/forwardproxy/issues/101", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
53557099
Deploy vraptor-site Taking a look at the vraptor-site commits, if I'm not wrong vraptor-site hasn't been deployed since October, when the "articles and presentations" page was added. The download page still shows VRaptor version 4.1.1. Since we have had some improvements to the documentation, what about deploying it? Done! (actually, I forgot that it's done manually!) Thanks
gharchive/issue
2015-01-06T20:09:02
2025-04-01T06:38:07.770678
{ "authors": [ "Turini", "renanigt" ], "repo": "caelum/vraptor4", "url": "https://github.com/caelum/vraptor4/issues/922", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
214252053
Explicitly pass CXX to NCCL Makefile Necessary if CXX isn't set when cmake is called. The CXX variable will then be empty which prevents make from using its own default. cc Mr @slayton58 by the way - in case NV finds errors in other nccl clients. @Yangqing Thanks! Good catch @pietern!
gharchive/pull-request
2017-03-15T01:01:12
2025-04-01T06:38:07.774461
{ "authors": [ "Yangqing", "pietern", "slayton58" ], "repo": "caffe2/caffe2", "url": "https://github.com/caffe2/caffe2/pull/202", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
160403061
Fix broken Publish-GitHub-Release Task For both the 0.12.0 and 0.13.0 releases of Cake, we were met with an error similar to this: https://ci.appveyor.com/project/cakebuild/cake/build/0.12.0.build.2049#L506 when trying to Publish the GitHub Release. Initially, I thought this was an error with GitReleaseManager, but I have just run the following test: And as you can see here, the asset got added correctly: https://github.com/gep13/FakeRepository/releases/tag/untagged-694b79655a063bd1e6f7 So, if we get this https://github.com/cake-build/cake/issues/923 working, we can try to figure out what is going on. Looks like there is an issue in one of the parameters that are being passed in, but I can't replicate it on my test repository.
@gep13 Should be possible now that we can set the verbosity of the logger from the script.
In an update to this, it also failed for the 0.14.0 release: https://ci.appveyor.com/project/cakebuild/cake/build/0.14.0.build.2320#L575 This time, with the aid of the additional logging, I have come to the conclusion that the problem is the password that is being passed into the command line. I think it must contain a " or similar character that is breaking the input to GitReleaseManager. Going to change this password for the next release, and assume that everything is going to work :smile: Will re-open if required.
Tested this locally using a newly generated personal access token, and it seems to work :+1:
gharchive/issue
2016-06-15T11:49:58
2025-04-01T06:38:07.800274
{ "authors": [ "gep13", "patriksvensson" ], "repo": "cake-build/cake", "url": "https://github.com/cake-build/cake/issues/988", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
256446872
GH1625 Escape comma and semicolon in msbuild property values Replace commas and semicolons in msbuild property values with their hex equivalents to fix issue #1625.
@Julien-Mialon, Thanks for your contribution. To ensure that the project team has proper rights to use your work, please complete the Contribution License Agreement at https://cla2.dotnetfoundation.org. It will cover your contributions to all .NET Foundation-managed open source projects. You have signed the CLA already but the status is still pending? Let us recheck it. Thanks, .NET Foundation Pull Request Bot
@Julien-Mialon, thanks for signing the contribution license agreement. We will now validate the agreement and then the pull request. Thanks, .NET Foundation Pull Request Bot
@Julien-Mialon your changes have been merged, thanks for your contribution 👍
gharchive/pull-request
2017-09-09T15:43:35
2025-04-01T06:38:07.803822
{ "authors": [ "Julien-Mialon", "devlead", "dnfclas" ], "repo": "cake-build/cake", "url": "https://github.com/cake-build/cake/pull/1793", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2398882603
🛑 xmr-node.cakewallet.com:18081 is down In 111cb0f, xmr-node.cakewallet.com:18081 (xmr-node.cakewallet.com) was down: HTTP code: 0 Response time: 0 ms Resolved: xmr-node.cakewallet.com:18081 is back up in e95f794 after 52 minutes.
gharchive/issue
2024-07-09T18:32:37
2025-04-01T06:38:07.817335
{ "authors": [ "tuxpizza" ], "repo": "cake-tech/upptime-cakewallet", "url": "https://github.com/cake-tech/upptime-cakewallet/issues/1363", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1857487580
🛑 XMR USA Servers is down In 0f8a7a1, XMR USA Servers ($XMR_USA) was down: HTTP code: 0 Response time: 0 ms Resolved: XMR USA Servers is back up in f92bfe0 after 277 days, 9 hours, 57 minutes.
gharchive/issue
2023-08-19T02:18:32
2025-04-01T06:38:07.819764
{ "authors": [ "SamsungGalaxyPlayer" ], "repo": "cake-tech/upptime-cakewallet", "url": "https://github.com/cake-tech/upptime-cakewallet/issues/140", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2421154136
🛑 XMR USA Servers is down In cbceb8e, XMR USA Servers ($XMR_USA) was down: HTTP code: 0 Response time: 0 ms Resolved: XMR USA Servers is back up in e78f1ae after 32 minutes.
gharchive/issue
2024-07-21T01:02:36
2025-04-01T06:38:07.821961
{ "authors": [ "tuxpizza" ], "repo": "cake-tech/upptime-cakewallet", "url": "https://github.com/cake-tech/upptime-cakewallet/issues/2298", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2433313506
🛑 xmr-node.cakewallet.com:18081 is down In c4a2349, xmr-node.cakewallet.com:18081 (xmr-node.cakewallet.com) was down: HTTP code: 0 Response time: 0 ms Resolved: xmr-node.cakewallet.com:18081 is back up in 9c6b061 after 6 minutes.
gharchive/issue
2024-07-27T06:52:24
2025-04-01T06:38:07.825390
{ "authors": [ "tuxpizza" ], "repo": "cake-tech/upptime-cakewallet", "url": "https://github.com/cake-tech/upptime-cakewallet/issues/2827", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1421848717
🛑 node.moneroworld.com:18089 is down In a25f6e6, node.moneroworld.com:18089 (node.moneroworld.com) was down: HTTP code: 0 Response time: 0 ms Resolved: node.moneroworld.com:18089 is back up in 4ad0e99.
gharchive/issue
2022-10-25T04:57:02
2025-04-01T06:38:07.828588
{ "authors": [ "SamsungGalaxyPlayer" ], "repo": "cake-tech/upptime-monerocom", "url": "https://github.com/cake-tech/upptime-monerocom/issues/56", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
486217982
"Phinx\Console\Command\Init" cannot have an empty name (symfony/console >= 4.3.4) PHP Fatal error: Uncaught Symfony\Component\Console\Exception\LogicException: The command defined in "Phinx\Console\Command\Init" cannot have an empty name. in /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php:453 Stack trace: #0 /var/www/localhost/htdocs/vendor/robmorgan/phinx/src/Phinx/Console/Command/Init.php(47): Symfony\Component\Console\Command\Command->getName() #1 /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php(77): Phinx\Console\Command\Init->configure() #2 /var/www/localhost/htdocs/vendor/robmorgan/phinx/src/Phinx/Console/PhinxApplication.php(60): Symfony\Component\Console\Command\Command->__construct() #3 /var/www/localhost/htdocs/vendor/nartex/nx-phinx/bin/nx-phinx(12): Phinx\Console\PhinxApplication->__construct() #4 {main} thrown in /var/www/localhost/htdocs/vendor/symfony/console/Command/Command.php on line 453 With composer json: { "require": { "robmorgan/phinx": "~0.10" }, } composer info | grep console symfony/console v4.3.4 Symfony Console C... When symfony/console is downgraded to v4.3.3: { "require": { "robmorgan/phinx": "~0.10", "symfony/console": "=4.3.3" }, } It works. I guess symfony/console v4.3.4 introduces a breaking change with phinx. We released a fix today. see: https://github.com/cakephp/phinx/pull/1596
gharchive/issue
2019-08-28T07:57:29
2025-04-01T06:38:07.868738
{ "authors": [ "dereuromark", "tarnagas" ], "repo": "cakephp/phinx", "url": "https://github.com/cakephp/phinx/issues/1597", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
469400083
Cache names of created tables for exists check Fixes #1569: because running under dry-run means the table is never actually created, subsequent checks for the existence of the table would fail, and an extra invalid CREATE TABLE statement would end up getting inserted into the dry-run log. I'm not 100% satisfied with how this looks, but I'm unsure how I might improve things and reduce the amount of copy/paste. It'll also still fail in the case where someone directly calls execute() with a create/drop/rename query.
I think we can't fix all cases, but this sure looks already like an improvement for most standard cases 👍
Yeah, unless you put in a full SQL parser, the really complex cases around direct execute will never be fully captured. This will also not capture the case of doing, say, an insert and save, then a select over the table, and then using that to create additional queries. That case (assuming internal functions are used) could be captured by just caching the various inserts/updates, but that's probably a bridge to cross if people actually report it. Can also probably just put a note in the docs about dry-run not being able to fully generate all SQL for complex cases (e.g. using execute, or the above example) due to it not being hooked up to a real DB.
Thanks, this makes sense to me.
Can also probably just put a note in the docs about dry-run not being able to fully generate all SQL for complex cases (e.g. using execute, or the above example) due to it not being hooked up to a real DB.
Good idea.
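As a rough illustration of the caching idea described in this PR: Phinx itself is PHP and its real adapter classes are not shown here, so the class and method names below are invented; treat this purely as a Python sketch of the pattern, not Phinx's actual code.

# Language-agnostic sketch of "remember what the dry run pretended to create/drop,
# and consult that cache before asking the (unchanged) real database".
class DryRunAdapter:
    def __init__(self, real_adapter):
        self.real = real_adapter
        self.created = set()   # tables "created" during this dry run
        self.dropped = set()   # tables "dropped" during this dry run

    def create_table(self, name, sql):
        print(sql)                      # dry run: log the SQL instead of executing it
        self.created.add(name)
        self.dropped.discard(name)

    def drop_table(self, name, sql):
        print(sql)
        self.dropped.add(name)
        self.created.discard(name)

    def has_table(self, name):
        if name in self.created:
            return True                 # would exist by now in a real run
        if name in self.dropped:
            return False
        return self.real.has_table(name)   # otherwise fall back to the actual database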
gharchive/pull-request
2019-07-17T19:21:13
2025-04-01T06:38:07.871818
{ "authors": [ "MasterOdin", "dereuromark", "lorenzo", "raph1mm" ], "repo": "cakephp/phinx", "url": "https://github.com/cakephp/phinx/pull/1575", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
216424762
JWT Token Verification using RS256 I was having some problems getting token verification working if the token was signed using RS256 instead of HS256. In RS256 you need to pass in the public key in PEM format. My issue was that the PEM format is strict both in terms of encoding (base64) AND line breaking. Download your pem file; it should look like this:
-----BEGIN CERTIFICATE-----
MIIC+DC...
...j+NjK0Bjo=
-----END CERTIFICATE-----
Your PEM file needs to have a line break after -----BEGIN CERTIFICATE----- and then a line break every 64 characters. A quick way to do this on *nix/mac is this:
fold filename.pem > filename-wrapped.pem
Since it's pretty hard to pass things with line breaks, you should set an environment variable from it like this (this folds it automatically; if it's already wrapped it will do nothing):
export GRAPHQL_SECRET="$(cat filename.pem | fold -64)"
Then start up your graphql server passing --secret=${GRAPHQL_SECRET}
You still need to figure out how to pass audience until this issue closes
@angelosarto Nice auto-close to give documentation for others who may struggle in future 👍 It's really awesome to know RS256 works without any other modifications! Does signing work fine?
Thanks @natejenkins; I've modified the issue description with your fixes 👍
I was having some problems getting token verification working if the token was signed using RS256 instead of HS256. In RS256 you need to pass in the public key in PEM format. My issue was that the PEM format is strict both in terms of encoding (base64) AND line breaking. Download your pem file; it should look like this:
-----BEGIN CERTIFICATE-----
MIIC+DC...
...j+NjK0Bjo=
-----END CERTIFICATE-----
Your PEM file needs to have a line break after -----BEGIN CERTIFICATE----- and then a line break every 64 characters. A quick way to do this on *nix/mac is this:
fold filename.pem > filename-wrapped.pem
Since it's pretty hard to pass things with line breaks, you should set an environment variable from it like this (this folds it automatically; if it's already wrapped it will do nothing):
export GRAPHQL_SECRET="$(cat filename.pem | fold -64)"
Then start up your graphql server passing --jwt-secret="${GRAPHQL_SECRET}"
You still need to figure out how to pass audience until this issue closes
Could you please explain how RS256 works in this case? I'm using PostGraphile 4.6.0, with options set as:
createServer(
  postgraphile(env.DATABASE_URL, "public", {
    jwtSecret: "./publickey.pem",
    jwtVerifyAlgorithms: ["RS256"],
    jwtPgTypeIdentifier: "public.jwt_token",
    graphiql: true,
    enhanceGraphiql: true,
    graphqlRoute: env.POSTGRAPHILE_ROUTE + "/graphql",
    graphiqlRoute: env.POSTGRAPHILE_ROUTE + "/graphiql",
  })
).listen(port, () => { console.log("Listening at port:" + port); });
But when I send the Authorization header in Postman with an RS256-signed JWT token, I always get 'errors: invalid algorithm'. In fact, the JWT token I get back from this PostGraphile server is always HS256-signed. It appears that RS256 doesn't take effect. Could you show your working example of PostGraphile JWT verification with RS256?
It is treating the literal string you have passed it as the secret. Same as it would if you passed it "MY_SECRET_HERE". You need to read the file and send through the file contents.
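If you are building the PEM string programmatically rather than shelling out to fold, the same 64-character wrapping can be done in a few lines. Here is a small illustrative Python sketch; the function and argument names are assumptions for the example, not part of any library mentioned in this thread.

# Rough Python equivalent of `fold -64` for turning a single-line base64 body
# into a strictly formatted PEM string (illustrative only).
import textwrap

def to_pem(base64_body: str, label: str = "CERTIFICATE") -> str:
    lines = textwrap.wrap(base64_body.replace("\n", ""), 64)  # break every 64 characters
    return (f"-----BEGIN {label}-----\n"
            + "\n".join(lines)
            + f"\n-----END {label}-----\n")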
gharchive/issue
2017-03-23T13:19:56
2025-04-01T06:38:07.905489
{ "authors": [ "angelosarto", "benjie", "calebmer", "yleigh" ], "repo": "calebmer/postgraphql", "url": "https://github.com/calebmer/postgraphql/issues/404", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
271053887
Did something break with netlify? Hello, This isn't an issue with your repo per se. I had a working website for many months (forked your code and modified it considerably for my own purposes). But today, it looks really weird on all my browsers: http://rsangole.netlify.com Did netlify break? Why has this happened suddenly? Rahul
The https://bootswatch.com/solar/bootstrap.css URL gives a 404 error.
Thanks @Tafkas. They have a version 3 and version 4 now, which broke the link.
gharchive/issue
2017-11-03T17:16:46
2025-04-01T06:38:07.908528
{ "authors": [ "Tafkas", "rsangole" ], "repo": "calintat/minimal", "url": "https://github.com/calintat/minimal/issues/36", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
270156208
[Table Pagination] export LabelDisplayedRowArgs interface and improve label props types Export LabelDisplayedRowsArgs from TablePagination.d.ts. Change the labelDisplayedRows return type to JSX.Element | string so it is aligned with the implementation. Change the labelRowsPerPage return type to JSX.Element | string.
Hi @oliviertassinari, do you have any idea how to address the differences in the Argos test? I have no clue, and the regression tests run successfully on my environment.
@t49tran thanks
gharchive/pull-request
2017-11-01T00:08:20
2025-04-01T06:38:07.910622
{ "authors": [ "oliviertassinari", "t49tran" ], "repo": "callemall/material-ui", "url": "https://github.com/callemall/material-ui/pull/8930", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1319542777
🛑 TAIF is down In e537560, TAIF (https://taif.ru) was down: HTTP code: 0 Response time: 0 ms Resolved: TAIF is back up in 2010ccf.
gharchive/issue
2022-07-27T13:09:24
2025-04-01T06:38:07.913211
{ "authors": [ "callmeurpapa" ], "repo": "callmeurpapa/uptime", "url": "https://github.com/callmeurpapa/uptime/issues/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1737655399
Feat: change press enter action Changed the behavior when pressing Enter to the following specification.
Default insertBreak
- If Enter is pressed at the start of a line, insert a default Element above the line.
- If pressed in the middle of a line, move the trailing text down to a new line and make it a default Element.
- If pressed at the end of a line, insert a default Element below the line.
- If the text is empty, overwrite that line with a default Element.
toggle/body
- Copy the Element below.
list, order, todo, toggle, callout
- Copy the Element below.
- If the text is empty, delete the Element and move to the end of the next line.
Toolbar
- Do not hide the toolbar when Enter is pressed.
Behavior test ↓ https://www.notion.so/cyberagent-group/bento-8021ff081c644ed2b44e63b028390477
https://github.com/cam-inc/bento/pull/92/files#diff-1d9cd182bbb8fc009625fd0116ea08750087fd2d7cf35a701c5179de3361ec17R25-R28 endPath is of type Path, so the offset needs to be specified separately.
https://github.com/cam-inc/bento/pull/92/files#diff-1d9cd182bbb8fc009625fd0116ea08750087fd2d7cf35a701c5179de3361ec17R25-R28 endPath is of type Path, so the offset needs to be specified separately.
I'd like this comment to be on the relevant part of the code, please~
gharchive/pull-request
2023-06-02T07:59:57
2025-04-01T06:38:07.939515
{ "authors": [ "MasanobuRyuman", "hiteto218" ], "repo": "cam-inc/bento", "url": "https://github.com/cam-inc/bento/pull/92", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
93406900
Damn! This could be a powerful classroom tool for reading fluency - in fact, I found it on an educational website - but I would be very hesitant to use it in my third grade classroom when the word "damn" pops up every time they finish reading. Is there a way to get that changed? Cathi Palmer I agree, this should be changed. @notolaf, this project is no longer maintained (thanks to legal threats from Spritz), so I don't think this will be changed. If you're looking for something to use in class, jetzt works very well, although it's less polished. Good luck.
gharchive/issue
2015-07-07T00:24:10
2025-04-01T06:38:07.943949
{ "authors": [ "IBPX", "notolaf" ], "repo": "cameron/squirt", "url": "https://github.com/cameron/squirt/issues/174", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
542526705
Conditions for showing the modal when the user is not a premium user Change the conditions under which the modal is shown when the user is not a premium user. Currently it is shown again and again on reload, so show it only for operations that require premium.
In v3, it looks like it would be enough to show it only when a session is created.
gharchive/issue
2019-12-26T11:19:31
2025-04-01T06:38:07.981580
{ "authors": [ "atrn0", "dora1998" ], "repo": "camphor-/relaym-client", "url": "https://github.com/camphor-/relaym-client/issues/130", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
462763881
New release The Puppet Development Kit includes pinned gems and that includes facterdb 0.6.0. That is unfortunately a bit behind the times and leads to things like RHEL 6 and 7 not including the networking facts hash. It would be nice to see a newer version they could include in the next PDK release. @rnelson0 I'm working on closing out some open issues and updating factsets to get ready for a release :)
gharchive/issue
2019-07-01T15:00:23
2025-04-01T06:38:08.011655
{ "authors": [ "rnelson0", "rodjek" ], "repo": "camptocamp/facterdb", "url": "https://github.com/camptocamp/facterdb/issues/101", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
224649868
empty plain object causes "unsupported BodyInit type" error
fetch(url, { body: {}, method: 'POST' })
This code may cause an "unsupported BodyInit type" error because of the code below from the fetch-ie8 source:
this._initBody = function(body, options) {
  this._bodyInit = body
  if (typeof body === 'string') {
    this._bodyText = body
  } else if (support.blob && Blob.prototype.isPrototypeOf(body)) {
    this._bodyBlob = body
    this._options = options
  } else if (support.formData && FormData.prototype.isPrototypeOf(body)) {
    this._bodyFormData = body
  } else if (!body) {
    this._bodyText = ''
  } else if (support.arrayBuffer && ArrayBuffer.prototype.isPrototypeOf(body)) {
    // Only support ArrayBuffers for POST method.
    // Receiving ArrayBuffers happens via Blobs, instead.
  } else {
    throw new Error('unsupported BodyInit type')
  }
}
Fetch body does not support object. https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API/Using_Fetch#Body
Sorry, I didn't express myself clearly. An empty object is not supported, but your library throws an error outright instead of ignoring it.
On Win10 with Chrome 58.0.3029.81 and Firefox 53, fetch('./', { method: 'POST', body: {} }) still produces a request; the body part is simply ignored and the fetch request still proceeds normally.
gharchive/issue
2017-04-27T02:16:28
2025-04-01T06:38:08.015740
{ "authors": [ "camsong", "mingzepeng" ], "repo": "camsong/fetch-ie8", "url": "https://github.com/camsong/fetch-ie8/issues/9", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1604492702
Shortcuts override the search input in template selection modal Describe the bug If you open the template selector popup window, the search input field has the focus, and if you press the "p" key, everything works as expected. But if you press any key which is a shortcut, for example "n" or "a", it does not search but executes the command bound to that shortcut. The biggest problem is backspace. Press p, and then backspace. It will remove the Task element, but won't close the Template selector popup window. If you select a template after this, it will throw an error saying: Cannot read properties of undefined (reading 'children')
Steps to reproduce
1. Open editor
2. Create a new Task
3. Click on template selection button
4. Press "p" to start filtering the templates
5. Press backspace to delete the letter "p"
6. It will remove the Task
OR
1. Open editor
2. Create a new Task
3. Click on template selection button
4. Press "n" to start filtering the templates
5. It won't filter the templates as expected but it will open the Create element context menu
Expected behavior When the search input field has the focus, we should disable shortcuts.
Environment OS: Windows 11 Camunda Modeler Version: 5.9.0-nightly.20230227 Execution Platform: Camunda Platform Installed plug-ins: none Additional context Related to SUPPORT-21053
I can reproduce the issue in 5.21.0 but cannot reproduce the backspace-related error anymore. Closed via https://github.com/camunda/camunda-modeler/issues/4195.
gharchive/issue
2023-03-01T08:18:47
2025-04-01T06:38:08.029082
{ "authors": [ "barmac", "markfarkas-camunda", "nikku" ], "repo": "camunda/camunda-modeler", "url": "https://github.com/camunda/camunda-modeler/issues/3483", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1408126176
[BUG] Connection insecure on Operate with Keycloak enabled Describe the bug: Operate and Keycloak is exposed via a secure https endpoint. When I authenticate to Operate in the Keycloak UI, I have a "Connection insecure" warning. Tested via Google Chrome and Mozilla Firefox. Environment: Platform: IBM Cloud Kubernetes Service Chart version: 8.0.14 Values file: # Default values for Camunda Platform helm. # This is a YAML-formatted file. # Declare variables to be passed into your templates. # The values file follows helm best practices https://helm.sh/docs/chart_best_practices/values/ # # This means: # * Variable names should begin with a lowercase letter, and words should be separated with camelcase. # * Every defined property in values.yaml should be documented. The documentation string should begin with the name of the property that it describes, and then give at least a one-sentence description # # Furthermore, we try to apply the following pattern: # [VarName] [conjunction] [definition] # # VarName: # # * In the documentation the variable name is started with a big letter, similar to kubernetes resource documentation. # * If the variable is part of a subsection/object we use a json path expression (to make it more clear where the variable belongs to). # The root (chart name) is omitted (e.g. zeebe). This is useful for using --set in helm. # # Conjunction: # * [defines] for mandatory configuration # * [can be used] for optional configuration # * [if true] for toggles # * [configuration] for section/group of variables # Global configuration for variables which can be accessed by all sub charts global: # Annotations can be used to define common annotations, which should be applied to all deployments annotations: {} # Labels can be used to define common labels, which should be applied to all deployments labels: app: camunda-platform # Image configuration to be used in each sub chart image: # Image.tag defines the tag / version which should be used in the chart tag: 8.0.0 # Image.pullPolicy defines the image pull policy which should be used https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy pullPolicy: IfNotPresent # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed. Only useful if an ingress controller is available, like Ingress-NGINX. enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: cwa.xxxxxxxxxxxxxx.com # Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. 
enabled: true # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate secretName: xxxxxxxxxxxxxx.com # Elasticsearch configuration which is shared between the sub charts elasticsearch: # Elasticsearch.disableExporter if true, disables the elastic exporter in zeebe disableExporter: false # Elasticsearch.url can be used to configure the URL to access elasticsearch, if not set services fallback to host and port configuration url: # Elasticsearch.host defines the elasticsearch host, ideally the service name inside the namespace host: "elasticsearch-master" # Elasticsearch.port defines the elasticsearch port, under which elasticsearch can be accessed port: 9200 # Elasticsearch.clusterName defines the cluster name which is used by Elasticsearch clusterName: "elasticsearch" # Elasticsearch.prefix defines the prefix which is used by the Zeebe Elasticsearch Exporter to create Elasticsearch indexes prefix: zeebe-record # ZeebeClusterName defines the cluster name for the Zeebe cluster. All Zeebe pods get this prefix in their name and the brokers uses that as cluster name. zeebeClusterName: "{{ .Release.Name }}-zeebe" # ZeebePort defines the port which is used for the Zeebe Gateway. This port accepts the GRPC Client messages and forwards them to the Zeebe Brokers. zeebePort: 26500 # Identity configuration to configure identity specifics on global level, which can be accessed by other sub-charts identity: keycloak: # Identity.keycloak.fullname can be used to change the referenced Keycloak service name inside the sub-charts, like operate, optimize, etc. # Subcharts can't access values from other sub-charts or the parent, global only. # This is useful if the identity.keycloak.fullnameOverride is set, and specifies a different name for the Keycloak service. fullname: "" # Identity.auth configuration, to configure Identity authentication setup auth: # Identity.auth.enabled if true, enables the Identity authentication otherwise basic-auth will be used on all services. enabled: true # Identity.auth.publicIssuerUrl defines the token issuer (Keycloak) URL, where the services can request JWT tokens. # Should be public accessible, per default we assume a port-forward to Keycloak (18080) is created before login. # Can be overwritten if, ingress is in use and an external IP is available. publicIssuerUrl: "https://keycloak.xxxxxxxxxxxxxx.com/auth/realms/camunda-platform" # Identity.auth.operate configuration to configure Operate authentication specifics on global level, which can be accessed by other sub-charts operate: # Identity.auth.operate.existingSecret can be used to reference an existing secret. If not set, a random secret is generated. # The existing secret should contain an `operate-secret` field, which will be used as secret for the Identity-Operate communication. existingSecret: # Identity.auth.operate.redirectUrl defines the redirect URL, which is used by Keycloak to access Operate. # Should be public accessible, the default value works if port-forward to Operate is created to 8081. # Can be overwritten if, ingress is in use and an external IP is available. redirectUrl: "https://operate.xxxxxxxxxxxxxx.com" # Identity.auth.tasklist configuration to configure Tasklist authentication specifics on global level, which can be accessed by other sub-charts tasklist: # Identity.auth.tasklist.existingSecret can be used to use an own existing secret. If not set a random secret is generated. 
# The existing secret should contain an `tasklist-secret` field, which will be used as secret for the Identity-Tasklist communication. existingSecret: # Identity.auth.tasklist.redirectUrl defines the root (or redirect) URL, which is used by Keycloak to access Tasklist. # Should be public accessible, the default value works if port-forward to Tasklist is created to 8082. # Can be overwritten if, ingress is in use and an external IP is available. redirectUrl: "https://tasklist.xxxxxxxxxxxxxx.com" # Identity.auth.optimize configuration to configure Optimize authentication specifics on global level, which can be accessed by other sub-charts optimize: # Identity.auth.optimize.existingSecret can be used to use an own existing secret. If not set a random secret is generated. # The existing secret should contain an `optimize-secret` field, which will be used as secret for the Identity-Optimize communication. existingSecret: # Identity.auth.optimize.redirectUrl defines the root (or redirect) URL, which is used by Keycloak to access Optimize. # Should be public accessible, the default value works if port-forward to Optimize is created to 8082. # Can be overwritten if, ingress is in use and an external IP is available. redirectUrl: "https://optimize.xxxxxxxxxxxxxx.com" # Zeebe configuration for the Zeebe sub chart. Contains configuration for the Zeebe broker and related resources. zeebe: # Enabled if true, all zeebe related resources are deployed via the helm release enabled: true # Image configuration to configure the zeebe image specifics image: # Image.repository defines which image repository to use repository: camunda/zeebe # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # ClusterSize defines the amount of brokers (=replicas), which are deployed via helm clusterSize: "1" # PartitionCount defines how many zeebe partitions are set up in the cluster partitionCount: "1" # ReplicationFactor defines how each partition is replicated, the value defines the number of nodes replicationFactor: "1" # Env can be used to set extra environment variables in each zeebe broker container env: - name: ZEEBE_BROKER_DATA_SNAPSHOTPERIOD value: "5m" - name: ZEEBE_BROKER_DATA_DISKUSAGECOMMANDWATERMARK value: "0.85" - name: ZEEBE_BROKER_DATA_DISKUSAGEREPLICATIONWATERMARK value: "0.87" # ConfigMap configuration which will be applied to the mounted config map. configMap: # ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. # See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623 defaultMode: 0754 # Command can be used to override the default command provided by the container image. 
See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # LogLevel defines the log level which is used by the zeebe brokers logLevel: info # Log4j2 can be used to overwrite the log4j2 configuration of the zeebe brokers log4j2: '' # JavaOpts can be used to set java options for the zeebe brokers javaOpts: >- -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/usr/local/zeebe/data -XX:ErrorFile=/usr/local/zeebe/data/zeebe_error%p.log -XX:+ExitOnOutOfMemoryError # Service configuration for the broker service service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.httpPort defines the port of the http endpoint, where for example metrics are provided httpPort: 9600 # Service.httpName defines the name of the http endpoint, where for example metrics are provided httpName: "http" # Service.commandPort defines the port of the command api endpoint, where the broker commands are sent to commandPort: 26501 # Service.commandName defines the name of the command api endpoint, where the broker commands are sent to commandName: "command" # Service.internalPort defines the port of the internal api endpoint, which is used for internal communication internalPort: 26502 # Service.internalName defines the name of the internal api endpoint, which is used for internal communication internalName: "internal" # extraPorts can be used to expose any other ports which are required. Can be useful for exporters extraPorts: [] # - name: hazelcast # protocol: TCP # port: 5701 # targetPort: 5701 # ServiceAccount configuration for the service account where the broker pods are assigned to serviceAccount: # ServiceAccount.enabled if true, enables the broker service account enabled: true # ServiceAccount.name can be used to set the name of the broker service account name: "" # ServiceAccount.annotations can be used to set the annotations of the broker service account annotations: {} # CpuThreadCount defines how many threads can be used for the processing on each broker pod cpuThreadCount: "3" # IoThreadCount defines how many threads can be used for the exporting on each broker pod ioThreadCount: "3" # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 800m memory: 1200Mi limits: cpu: 960m memory: 1920Mi # PersistenceType defines the type of persistence which is used by Zeebe. Possible values are: disk, local and memory. # disk - means a persistence volume claim is configured and used # local - means the data is stored into the container, no volumeMount nor volume nor claim is configured # memory - means zeebe uses a tmpfs for the data persistence, be aware that this takes the limits into account persistenceType: disk # PvcSize defines the persistent volume claim size, which is used by each broker pod https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims pvcSize: "16Gi" # PvcAccessModes can be used to configure the persistent volume claim access mode https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes pvcAccessModes: ["ReadWriteOnce"] # PvcStorageClassName can be used to set the storage class name which should be used by the persistent volume claim. It is recommended to use a storage class, which is backed with a SSD. 
pvcStorageClassName: '' # ExtraVolumes can be used to define extra volumes for the broker pods, useful for additional exporters extraVolumes: [] # ExtraVolumeMounts can be used to mount extra volumes for the broker pods, useful for additional exporters extraVolumeMounts: [] # ExtraInitContainers can be used to set up extra init containers for the broker pods, useful for additional exporters extraInitContainers: [] # PodAnnotations can be used to define extra broker pod annotations podAnnotations: {} # PodLabels can be used to define extra broker pod labels podLabels: {} # PodDisruptionBudget configuration to configure a pod disruption budget for the broker pods https://kubernetes.io/docs/tasks/run-application/configure-pdb/ podDisruptionBudget: # PodDisruptionBudget.enabled if true a pod disruption budget is defined for the brokers enabled: false # PodDisruptionBudget.minAvailable can be used to set how many pods should be available. Be aware that if minAvailable is set, maxUnavailable will not be set (they are mutually exclusive). minAvailable: # podDisruptionBudget.maxUnavailable can be used to set how many pods should be at max. unavailable maxUnavailable: 1 # PodSecurityContext defines the security options the Zeebe broker pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the Zeebe broker container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the broker pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity # The default defined PodAntiAffinity allows constraining on which nodes the Zeebe pods are scheduled on https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity # It uses a hard requirement for scheduling and works based on the Zeebe pod labels affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app.kubernetes.io/component" operator: In values: - zeebe-broker topologyKey: "kubernetes.io/hostname" # PriorityClassName can be used to define the broker pods priority https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass priorityClassName: "" # ReadinessProbe configuration for the zeebe broker readiness probe readinessProbe: # ReadinessProbe.probePath defines the readiness probe route used on the zeebe brokers probePath: /ready # ReadinessProbe.periodSeconds defines how often the probe is executed periodSeconds: 10 # ReadinessProbe.successThreshold defines how often it needs to be true to be marked as ready, after failure successThreshold: 1 # ReadinessProbe.timeoutSeconds defines the seconds after the probe times out timeoutSeconds: 1 # Gateway configuration to define properties related to the standalone gateway zeebe-gateway: # Replicas defines how many standalone gateways are deployed replicas: 1 # Image configuration to configure the zeebe-gateway image specifics image: # Image.repository defines which image repository to use repository: camunda/zeebe # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: # Image.pullSecrets can be used to configure image pull secrets 
https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # PodAnnotations can be used to define extra gateway pod annotations podAnnotations: {} # PodLabels can be used to define extra gateway pod labels podLabels: {} # LogLevel defines the log level which is used by the gateway logLevel: info # Log4j2 can be used to overwrite the log4j2 configuration of the gateway log4j2: '' # JavaOpts can be used to set java options for the zeebe gateways javaOpts: >- -XX:+ExitOnOutOfMemoryError # Env can be used to set extra environment variables in each gateway container env: [] # ConfigMap configuration which will be applied to the mounted config map. configMap: # ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. # See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623 defaultMode: 0744 # Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # PodDisruptionBudget configuration to configure a pod disruption budget for the gateway pods https://kubernetes.io/docs/tasks/run-application/configure-pdb/ podDisruptionBudget: # PodDisruptionBudget.enabled if true a pod disruption budget is defined for the gateways enabled: false # PodDisruptionBudget.minAvailable can be used to set how many pods should be available. Be aware that if minAvailable is set, maxUnavailable will not be set (they are mutually exclusive). minAvailable: 1 # PodDisruptionBudget.maxUnavailable can be used to set how many pods should be at max. unavailable maxUnavailable: # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 400m memory: 450Mi limits: cpu: 400m memory: 450Mi # PriorityClassName can be used to define the gateway pods priority https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass priorityClassName: "" # PodSecurityContext defines the security options the gateway pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the gateway container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the gateway pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity # The default defined PodAntiAffinity allows constraining on which nodes the Zeebe gateway pods are scheduled on https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity # It uses a hard requirement for scheduling and works based on the Zeebe gateway pod labels affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: "app.kubernetes.io/component" operator: In values: - zeebe-gateway topologyKey: "kubernetes.io/hostname" # ExtraVolumeMounts can be used to mount extra volumes for the gateway pods, useful for enabling tls between gateway and broker extraVolumeMounts: [] # 
ExtraVolumes can be used to define extra volumes for the gateway pods, useful for enabling tls between gateway and broker extraVolumes: [] # ExtraInitContainers can be used to set up extra init containers for the gateway pods, useful for adding interceptors extraInitContainers: [] # Service configuration for the gateway service service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.loadBalancerIP defines public ip of the load balancer if the type is LoadBalancer loadBalancerIP: "" # Service.loadBalancerSourceRanges defines list of allowed source ip address ranges if the type is LoadBalancer loadBalancerSourceRanges: [] # Service.httpPort defines the port of the http endpoint, where for example metrics are provided httpPort: 9600 # Service.httpName defines the name of the http endpoint, where for example metrics are provided httpName: "http" # Service.gatewayPort defines the port of the gateway endpoint, where client commands (grpc) are sent to gatewayPort: 26500 # Service.gatewayName defines the name of the gateway endpoint, where client commands (grpc) are sent to gatewayName: "gateway" # Service.internalPort defines the port of the internal api endpoint, which is used for internal communication internalPort: 26502 # Service.internalName defines the name of the internal api endpoint, which is used for internal communication internalName: "internal" # Service.annotations can be used to define annotations, which will be applied to the zeebe-gateway service annotations: {} # ServiceAccount configuration for the service account where the gateway pods are assigned to serviceAccount: # ServiceAccount.enabled if true, enables the gateway service account enabled: true # ServiceAccount.name can be used to set the name of the gateway service account name: "" # ServiceAccount.annotations can be used to set the annotations of the gateway service account annotations: {} # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed with the Zeebe gateway deployment. Only useful if an ingress controller is available, like nginx. enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/backend-protocol: "GRPC" # Ingress.path defines the path which is associated with the operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules path: / # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: zeebe.xxxxxxxxxxxxxx.com # Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. 
enabled: true # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate secretName: xxxxxxxxxxxxxx.com # Operate configuration for the Operate sub chart. operate: # Enabled if true, the Operate deployment and its related resources are deployed via a helm release enabled: true # Image configuration to configure the Operate image specifics image: # Image.repository defines which image repository to use repository: camunda/operate # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # ContextPath can be used to make Operate web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain. # contextPath: "/operate" # PodAnnotations can be used to define extra Operate pod annotations podAnnotations: {} # PodLabels can be used to define extra Operate pod labels podLabels: {} # Logging configuration for the Operate logging. This template will be directly included in the Operate configuration yaml file logging: level: ROOT: INFO io.camunda.operate: DEBUG # Service configuration to configure the Operate service. service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.port defines the port of the service, where the Operate web application will be available port: 80 # Service.annotations can be used to define annotations, which will be applied to the Operate service annotations: {} # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 600m memory: 400Mi limits: cpu: 2000m memory: 2Gi # Env can be used to set extra environment variables in each Operate container env: [] # ConfigMap configuration which will be applied to the mounted config map. configMap: # ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. # See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623 defaultMode: 0744 # Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # ExtraVolumes can be used to define extra volumes for the Operate pods, useful for tls and self-signed certificates extraVolumes: [] # ExtraVolumeMounts can be used to mount extra volumes for the Operate pods, useful for tls and self-signed certificates extraVolumeMounts: [] # ServiceAccount configuration for the service account where the Operate pods are assigned to serviceAccount: # ServiceAccount.enabled if true, enables the Operate service account enabled: true # ServiceAccount.name can be used to set the name of the Operate service account name: "" # ServiceAccount.annotations can be used to set the annotations of the Operate service account annotations: {} # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed with the Operate deployment. Only useful if an ingress controller is available, like nginx. 
enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" # Ingress.path defines the path which is associated with the Operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules path: / # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: operate.xxxxxxxxxxxxxx.com # Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. enabled: true # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate secretName: xxxxxxxxxxxxxx.com # PodSecurityContext defines the security options the Operate pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the Operate container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the Operate pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity affinity: {} # Tasklist configuration for the tasklist sub chart. tasklist: # Enabled if true, the tasklist deployment and its related resources are deployed via a helm release enabled: true # Image configuration to configure the tasklist image specifics image: # Image.repository defines which image repository to use repository: camunda/tasklist # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # ContextPath can be used to make Tasklist web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain. # contextPath: "/tasklist" # Env can be used to set extra environment variables on each Tasklist container env: [] # PodAnnotations can be used to define extra Tasklist pod annotations podAnnotations: {} # PodLabels can be used to define extra tasklist pod labels podLabels: {} # ConfigMap configuration which will be applied to the mounted config map. configMap: # ConfigMap.defaultMode can be used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. # See https://github.com/kubernetes/api/blob/master/core/v1/types.go#L1615-L1623 defaultMode: 0744 # Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # Service configuration to configure the tasklist service. 
service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.port defines the port of the service, where the tasklist web application will be available port: 80 # GraphqlPlaygroundEnabled if true, enables the graphql playground graphqlPlaygroundEnabled: "" # GraphqlPlaygroundEnabled can be set to include the credentials in each request, should be set to "include" if graphql playground is enabled graphqlPlaygroundRequestCredentials: "" # ExtraVolumes can be used to define extra volumes for the Tasklist pods, useful for tls and self-signed certificates extraVolumes: [] # ExtraVolumeMounts can be used to mount extra volumes for the Tasklist pods, useful for tls and self-signed certificates extraVolumeMounts: [] # PodSecurityContext defines the security options the Tasklist pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the Tasklist container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the Tasklist pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity affinity: {} # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 400m memory: 1Gi limits: cpu: 1000m memory: 2Gi # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed with the tasklist deployment. Only useful if an ingress controller is available, like nginx. enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" # Ingress.path defines the path which is associated with the operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules path: / # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: tasklist.xxxxxxxxxxxxxx.com tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. enabled: true # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate secretName: xxxxxxxxxxxxxx.com # Optimize configuration for the Optimize sub chart. 
optimize: # Enabled if true, the Optimize deployment and its related resources are deployed via a helm release enabled: true # Image configuration to configure the Optimize image specifics image: # Image.repository defines which image repository to use repository: camunda/optimize # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: 3.9.0-preview-2 # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # ContextPath can be used to make Optimize web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain. # contextPath: "/optimize" # PodAnnotations can be used to define extra Optimize pod annotations podAnnotations: {} # PodLabels can be used to define extra Optimize pod labels podLabels: {} # PartitionCount defines how many Zeebe partitions are set up in the cluster and which should be imported by Optimize partitionCount: "1" # Env can be used to set extra environment variables in each Optimize container env: [] # Command can be used to override the default command provided by the container image. See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # ExtraVolumes can be used to define extra volumes for the Optimize pods, useful for tls and self-signed certificates extraVolumes: [] # ExtraVolumeMounts can be used to mount extra volumes for the Optimize pods, useful for tls and self-signed certificates extraVolumeMounts: [] # ServiceAccount configuration for the service account where the Optimize pods are assigned to serviceAccount: # ServiceAccount.enabled if true, enables the Optimize service account enabled: true # ServiceAccount.name can be used to set the name of the Optimize service account name: "" # ServiceAccount.annotations can be used to set the annotations of the Optimize service account annotations: {} # Service configuration to configure the Optimize service. service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.port defines the port of the service, where the Optimize web application will be available port: 80 # Service.annotations can be used to define annotations, which will be applied to the Optimize service annotations: {} # PodSecurityContext defines the security options the Optimize pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the Optimize container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the Optimize pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity affinity: {} # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 600m memory: 1Gi limits: cpu: 2000m memory: 2Gi # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed with the Optimize deployment. 
Only useful if an ingress controller is available, like nginx. enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" # Ingress.path defines the path which is associated with the operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules path: / # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: optimize.xxxxxxxxxxxxxx.com # Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. enabled: true # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate secretName: xxxxxxxxxxxxxx.com # RetentionPolicy configuration to configure the elasticsearch index retention policies retentionPolicy: # RetentionPolicy.enabled if true, elasticsearch curator cronjob and configuration will be deployed. enabled: false # RetentionPolicy.schedule defines how often/when the curator should run schedule: "0 0 * * *" # RetentionPolicy.zeebeIndexTTL defines after how many days a zeebe index can be deleted zeebeIndexTTL: 1 # RetentionPolicy.zeebeIndexMaxSize can be set to configure the maximum allowed zeebe index size in gigabytes. # After reaching that size, curator will delete that corresponding index on the next run. # To benefit from that configuration the schedule needs to be configured small enough, like every 15 minutes. zeebeIndexMaxSize: # RetentionPolicy.operateIndexTTL defines after how many days an operate index can be deleted operateIndexTTL: 30 # RetentionPolicy.tasklistIndexTTL defines after how many days a tasklist index can be deleted tasklistIndexTTL: 30 # Image configuration for the elasticsearch curator cronjob image: # Image.repository defines which image repository to use repository: bitnami/elasticsearch-curator # Image.tag defines the tag / version which should be used in the chart tag: 5.8.4 # PrometheusServiceMonitor configuration to configure a prometheus service monitor prometheusServiceMonitor: # PrometheusServiceMonitor.enabled if true then a service monitor will be deployed, which allows an installed prometheus controller to scrape metrics from the deployed pods enabled: false # PromotheuServiceMonitor.labels can be set to configure extra labels, which will be added to the servicemonitor and can be used on the prometheus controller for selecting the servicemonitors labels: release: metrics # PromotheuServiceMonitor.scrapeInterval can be set to configure the interval at which metrics should be scraped # Should be *less* than 60s if the provided grafana dashboard is used, which can be found here https://github.com/camunda/zeebe/tree/main/monitor/grafana, # otherwise it isn't able to show any metrics which is aggregated over 1 min. scrapeInterval: 10s # Identity configuration for the identity sub chart. 
identity: # Enabled if true, the identity deployment and its related resources are deployed via a helm release # # Note: Identity is required by Optimize. If Identity is disabled, then Optimize will be unusable. # If you don't need Optimize, then make sure to disable both: set global.identity.auth.enabled=false AND optimize.enabled=false. enabled: true # FirstUser configuration to configure properties of the first Identity user, which can be used to access all # web applications firstUser: # FirstUser.username defines the username of the first user, needed to log in into the web applications username: demo # FirstUser.password defines the password of the first user, needed to log in into the web applications password: demo # Image configuration to configure the identity image specifics image: # Image.repository defines which image repository to use repository: camunda/identity # Image.tag can be set to overwrite the global tag, which should be used in that chart tag: # Image.pullSecrets can be used to configure image pull secrets https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod pullSecrets: [] # FullURL can be used when Ingress is configured (for both multi and single domain setup). # Note: If the `ContextPath` is configured, then value of `ContextPath` should be included in the URL too. # fullURL: "https://camunda.example.com/identity" # ContextPath can be used to make Identity web application works on a custom sub-path. This is mainly used to run Camunda Platform web applications under a single domain. # contextPath: "/identity" # PodAnnotations can be used to define extra Identity pod annotations podAnnotations: {} # Service configuration to configure the identity service. service: # Service.type defines the type of the service https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types type: ClusterIP # Service.port defines the port of the service, where the identity web application will be available port: 80 # Service.annotations can be used to define annotations, which will be applied to the identity service annotations: {} # PodSecurityContext defines the security options the Identity pod should be run with podSecurityContext: {} # ContainerSecurityContext defines the security options the Identity container should be run with containerSecurityContext: {} # NodeSelector can be used to define on which nodes the Identity pods should run nodeSelector: {} # Tolerations can be used to define pod toleration's https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ tolerations: [] # Affinity can be used to define pod affinity or anti-affinity https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity affinity: {} # Resources configuration to set request and limit configuration for the container https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits resources: requests: cpu: 600m memory: 400Mi limits: cpu: 2000m memory: 2Gi # Env can be used to set extra environment variables in each identity container. See the documentation https://docs.camunda.io/docs/self-managed/identity/deployment/configuration-variables/ for more details. env: [] # Command can be used to override the default command provided by the container image. 
See https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/ command: [] # ExtraVolumes can be used to define extra volumes for the identity pods, useful for tls and self-signed certificates extraVolumes: [] # ExtraVolumeMounts can be used to mount extra volumes for the identity pods, useful for tls and self-signed certificates extraVolumeMounts: [] # Keycloak configuration, for the keycloak dependency chart which is used by identity. See the chart documentation https://github.com/bitnami/charts/tree/master/bitnami/keycloak#parameters for more details. keycloak: # Keycloak.service configuration, to configure the service which is deployed along with keycloak service: # Keycloak.service.type can be set to change the service type. # We use clusterIP for keycloak service, since per default LoadBalancer is used, which is not supported on all cloud providers. # This might prevent scheduling of the service. type: ClusterIP ## Keycloak authentication parameters ## ref: https://github.com/bitnami/bitnami-docker-keycloak#admin-credentials ## ## Identity uses the secrets generated by keycloak, to access keycloak. auth: # Keycloak.auth.adminUser defines the keycloak administrator user adminUser: admin # Keycloak.auth.existingSecret can be used to reuse an existing secret containing authentication information. # See https://docs.bitnami.com/kubernetes/apps/keycloak/configuration/manage-passwords/ for more details. # # Example: # # Keycloak.auth.existingSecret: # name: mySecret # keyMapping: # admin-password: myPasswordKey # management-password: myManagementPasswordKey # tls-keystore-password: myTlsKeystorePasswordKey # tls-truestore-password: myTlsTruestorePasswordKey existingSecret: "" # ServiceAccount configuration for the service account where the identity pods are assigned to serviceAccount: # ServiceAccount.enabled if true, enables the identity service account enabled: true # ServiceAccount.name can be used to set the name of the identity service account name: "" # ServiceAccount.annotations can be used to set the annotations of the identity service account annotations: {} # Ingress configuration to configure the ingress resource ingress: # Ingress.enabled if true, an ingress resource is deployed with the identity deployment. Only useful if an ingress controller is available, like nginx. enabled: true # Ingress.className defines the class or configuration of ingress which should be used by the controller className: public-iks-k8s-nginx # Ingress.annotations defines the ingress related annotations, consumed mostly by the ingress controller annotations: ingress.kubernetes.io/rewrite-target: "/" nginx.ingress.kubernetes.io/ssl-redirect: "true" # Ingress.path defines the path which is associated with the operate service and port https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules path: / # Ingress.host can be used to define the host of the ingress rule. https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules # If not specified the rules applies to all inbound http traffic, if specified the rule applies to that host. host: identity.xxxxxxxxxxxxxx.com # Ingress.tls configuration for tls on the ingress resource https://kubernetes.io/docs/concepts/services-networking/ingress/#tls tls: # Ingress.tls.enabled if true, then tls is configured on the ingress resource. If enabled the Ingress.host need to be defined. 
      enabled: true
      # Ingress.tls.secretName defines the secret name which contains the TLS private key and certificate
      secretName: xxxxxxxxxxxxxx.com

elasticsearch:
  enabled: true
  extraEnvs:
    - name: "xpack.security.enabled"
      value: "false"
  replicas: 1
  persistence:
    labels:
      enabled: true
  volumeClaimTemplate:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 16Gi
  esJavaOpts: "-Xmx1g -Xms1g"
  resources:
    requests:
      cpu: 1
      memory: 1Gi
    limits:
      cpu: 2
      memory: 2Gi

Hi @vctrmn,

How did you set up the TLS in the ingress? Also, the values file doesn't show how the ingress is set up for Keycloak. It should be under the identity key, like this:

identity:
  [...]
  keycloak:
    ingress:
      enabled: true
      ingressClassName: nginx
      hostname: "keycloak.camunda.example.com"
    extraEnvVars:
      - name: KEYCLOAK_PROXY_ADDRESS_FORWARDING
        value: "true"
      - name: KEYCLOAK_FRONTEND_URL
        value: "https://keycloak.camunda.example.com"

For more details on the setup, please take a look at the Ingress setup guide.

Hi @aabouzaid,

Thank you for your help! Your configuration effectively fixes the issue. Would it be possible to add this configuration (at least as a comment) to the default values.yaml? https://github.com/camunda/camunda-platform-helm/blob/main/charts/camunda-platform/values.yaml

Also, would it be possible to add the TLS configuration to the Keycloak ingress? (A possible sketch follows the generated manifest below.) Below is the generated ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-10-18T08:43:40Z"
  generation: 1
  labels:
    app.kubernetes.io/component: keycloak
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: keycloak
    helm.sh/chart: keycloak-7.1.6
  name: demo-keycloak
  namespace: default
  resourceVersion: "356570"
  uid: 466259b4-065c-471f-8cf7-3598deb09845
spec:
  ingressClassName: public-iks-k8s-nginx
  rules:
    - host: keycloak.xxxxxxxxxxxxxxxxxxx.com
      http:
        paths:
          - backend:
              service:
                name: demo-keycloak
                port:
                  name: http
            path: /
            pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
      - hostname: xxxxxxxxxxxxxxxxxxxxxx
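One way to experiment with TLS on the Keycloak ingress is through the Bitnami Keycloak sub-chart's own ingress values. The snippet below is only a sketch under assumptions: the tls and extraTls parameter names follow common Bitnami chart conventions and may differ between chart versions (the deployment above uses keycloak-7.1.6), and the hostname and secret name are placeholders copied from the redacted values, so check the chart documentation for the release actually in use.

identity:
  keycloak:
    ingress:
      enabled: true
      ingressClassName: public-iks-k8s-nginx
      hostname: "keycloak.xxxxxxxxxxxxxx.com"
      # tls: true asks the chart to add a tls: section to the generated
      # Ingress; many Bitnami charts then expect a secret named
      # "<hostname>-tls" (assumption - verify against the chart version).
      tls: true
      # Alternatively, extraTls can point at an existing certificate secret,
      # e.g. the same wildcard secret used by the other ingresses above.
      # Both the key and the secret name here are assumptions.
      extraTls:
        - hosts:
            - keycloak.xxxxxxxxxxxxxx.com
          secretName: xxxxxxxxxxxxxx.com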
gharchive/issue
2022-10-13T16:45:43
2025-04-01T06:38:08.049294
{ "authors": [ "aabouzaid", "vctrmn" ], "repo": "camunda/camunda-platform-helm", "url": "https://github.com/camunda/camunda-platform-helm/issues/443", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1395546237
Add GR 09: restrict public IPs for VMs and SQL instances via organization policy
IAM organization policy - restrict SQL public IPs - https://console.cloud.google.com/iam-admin/orgpolicies/sql-restrictPublicIp?organizationId=743091813895&supportedpurview=project
IAM organization policy - allowed external IPs for VMs - https://console.cloud.google.com/iam-admin/orgpolicies/compute-vmExternalIpAccess?organizationId=743091813895&supportedpurview=project
https://github.com/GoogleCloudPlatform/pubsec-declarative-toolkit/issues/155
https://github.com/GoogleCloudPlatform/pbmm-on-gcp-onboarding/issues/184
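As a sketch of how these two guardrails could be captured declaratively, the snippets below use the Organization Policy v2 YAML format accepted by gcloud org-policies set-policy. The constraint names sql.restrictPublicIp and compute.vmExternalIpAccess are the ones behind the console links above, and the organization ID is taken from those links; whether VM external IPs are denied outright or allow-listed per instance is an assumption to adapt to the GR 09 implementation.

# sql-restrict-public-ip.yaml (boolean constraint, applied as its own policy)
name: organizations/743091813895/policies/sql.restrictPublicIp
spec:
  rules:
    - enforce: true

# vm-external-ip-access.yaml (list constraint, applied as a separate policy;
# denyAll blocks external IPs on all VMs - swap in a values.allowedValues
# rule instead to allow-list specific instances)
name: organizations/743091813895/policies/compute.vmExternalIpAccess
spec:
  rules:
    - denyAll: true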
gharchive/issue
2022-10-04T01:34:52
2025-04-01T06:38:08.092323
{ "authors": [ "obriensystems" ], "repo": "canada-ca/accelerators_accelerateurs-gcp", "url": "https://github.com/canada-ca/accelerators_accelerateurs-gcp/issues/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2249527679
feat: initial charm
Applicable spec: ISD-143

Overview
A charm that periodically builds an Ubuntu image. It supports a relation to the GitHub runner charm and provides the ID of the image on OpenStack for consumption.

Rationale
To support the GitHub runner with ready-to-use images.

Juju Events Changes
On install: installs the packages needed to build custom images.
On config changed: builds a new image according to the config and resets the cron schedule if necessary.
On build-image action: Juju action to manually trigger a new image build.
On cron-trigger: custom internal hook that enables rebuilds from cron jobs, propagating new images to existing relations.
On image relation joined: provides any existing latest image to the relation, if available.

Module Changes
builder: responsible for building images
charm: main charm event handlers
chroot: module for handling chroot environments
cron: module for handling cron hooks
image: observer module for handling image relation event handlers
openstack_manager: module responsible for communicating with OpenStack
state: the charm state
utils: provides a retry helper

Library Changes
Uses operator-libs-linux.

Checklist
[x] The charm style guide was applied
[x] The contributing guide was applied
[x] The changes are compliant with ISD054 - Managing Charm Complexity
[x] The documentation is generated using src-docs
[ ] The documentation for charmhub is updated.
[x] The PR is tagged with appropriate label (urgent, trivial, complex)

@jdkandersson somehow the comments are left as comments not as conversation 😭, I just have cloud config modelling left!
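For illustration of the build-image action described above, a charm usually declares its actions in actions.yaml (or under actions: in charmcraft.yaml). The snippet below is a hypothetical sketch only; the description text is not taken from the PR, and the real action may declare parameters that are not shown here.

# Hypothetical actions.yaml sketch, not the file added by this PR.
build-image:
  description: >
    Manually trigger a build of the Ubuntu image outside the regular cron
    schedule. The resulting OpenStack image ID is provided to applications
    related over the image relation.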
gharchive/pull-request
2024-04-18T01:02:16
2025-04-01T06:38:08.201544
{ "authors": [ "yanksyoon" ], "repo": "canonical/github-runner-image-builder-operator", "url": "https://github.com/canonical/github-runner-image-builder-operator/pull/2", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2243766354
Charm is not in Blocked state when it cannot access Let's Encrypt

Describe the bug
I have deployed the charm, but I did not provide auth configs. However, juju shows my charm as green. Also, the firewall is not yet opened, so it cannot talk to the site. I think the charm could not properly reconcile events after I started relating it with nginx.

To Reproduce
10 2024-04-15 10:39:23 juju deploy nginx-ingress-integrator
11 2024-04-15 10:39:31 juju status
12 2024-04-15 10:39:34 juju status
13 2024-04-15 10:39:47 juju status
14 2024-04-15 10:39:50 juju status
15 2024-04-15 10:39:58 juju status
16 2024-04-15 10:40:00 juju status
17 2024-04-15 10:40:04 juju status
18 2024-04-15 10:40:27 juju status
19 2024-04-15 10:40:34 juju status
20 2024-04-15 10:40:39 watch -c juju status
21 2024-04-15 10:40:51 juju status
22 2024-04-15 10:41:02 juju trust nginx-ingress-integrator --scope=cluster
23 2024-04-15 10:41:04 juju status
24 2024-04-15 10:41:16 juju relate charmed-cla-checker nginx-ingress-integrator
25 2024-04-15 10:41:19 juju status
26 2024-04-15 10:41:22 juju status
27 2024-04-15 12:17:49 juju deploy httprequest-lego-k8s
28 2024-04-15 12:19:12 juju status
29 2024-04-15 12:19:30 juju relate httprequest-lego-k8s nginx-ingress-integrator
30 2024-04-15 12:19:35 juju status
31 2024-04-15 12:19:42 juju status
32 2024-04-15 12:52:04 juju config httprequest-lego-k8s
33 2024-04-15 12:52:26 juju config httprequest-lego-k8s | grep httpreq_endpoint -a 5
34 2024-04-15 12:52:34 grep --help
35 2024-04-15 12:52:40 juju config httprequest-lego-k8s | grep httpreq_endpoint -A 5
36 2024-04-15 12:53:08 juju config httprequest-lego-k8s httpreq_endpoint='https://lego-certs.canonical.com'
37 2024-04-15 12:53:31 juju config httprequest-lego-k8s email='is-admin@canonical.com'
38 2024-04-15 12:53:44 juju config httprequest-lego-k8s | grep username -A 5
39 2024-04-15 12:54:03 juju config httprequest-lego-k8s | grep timeout -A 5
40 2024-04-15 12:54:40 juju status
41 2024-04-15 12:55:17 juju config nginx-ingress-integrator | grep name -A 5
42 2024-04-15 12:57:40 juju config nginx-ingress-integrator service-hostname=cla-checker.canonical.com
43 2024-04-15 12:58:49 juju model-config juju-http-proxy = "http://squid.internal:3128"
44 2024-04-15 12:58:49 juju model-config juju-https-proxy = "http://squid.internal:3128"
45 2024-04-15 12:58:49 juju model-config juju-no-proxy = "127.0.0.1,localhost,::1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
46 2024-04-15 12:59:18 juju model-config juju-no-proxy="127.0.0.1,localhost,::1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
47 2024-04-15 12:59:48 juju model-config juju-http-proxy="http://squid.internal:3128"
48 2024-04-15 12:59:48 juju model-config juju-https-proxy="http://squid.internal:3128"
49 2024-04-15 12:59:49 juju model-config juju-no-proxy="127.0.0.1,localhost,::1,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.canonical.com,.launchpad.net,.internal,.jujucharms.com"
50 2024-04-15 12:59:59 juju model-config juju-http-proxy
51 2024-04-15 13:50:27 juju config httprequest-lego-k8s httpreq_endpoint
52 2024-04-15 13:50:30 juju status

Expected behavior
The charm should be blocked.

Logs
prod-cla-checker@enterprise-engineering-bastion-ps6:~$ juju status
Model             Controller    Cloud/Region              Version  SLA          Timestamp
prod-cla-checker  prodstack-is  k8s-prod-general/default  3.1.7    unsupported  13:57:36Z

App                       Version  Status  Scale  Charm                     Channel  Rev  Address       Exposed  Message
charmed-cla-checker                active      1  charmed-cla-checker       edge       1  10.87.244.2   no
httprequest-lego-k8s               active      1  httprequest-lego-k8s      stable    40  10.87.26.217  no
nginx-ingress-integrator  24.2.0   active      1  nginx-ingress-integrator  stable    84  10.87.128.91  no       Ingress IP(s): 10.141.14.128

Unit                         Workload  Agent  Address          Ports  Message
charmed-cla-checker/0*       active    idle   192.168.100.243
httprequest-lego-k8s/0*      active    idle   192.168.100.245
nginx-ingress-integrator/0*  active    idle   192.168.102.11          Ingress IP(s): 10.141.14.128

2024-04-15T12:53:38.012Z [container-agent] 2024-04-15 12:53:38 INFO juju-log Received Certificate Creation Request for domain charmed-cla-checker
2024-04-15T12:54:08.108Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log Exited with code 1. Stderr:
2024-04-15T12:54:08.112Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:53:38 No key found for account is-admin@canonical.com. Generating a P256 key.
2024-04-15T12:54:08.116Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:53:38 Saved key to /tmp/.lego/accounts/acme-v02.api.letsencrypt.org/is-admin@canonical.com/keys/is-admin@canonical.com.key
2024-04-15T12:54:08.120Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log 2024/04/15 12:54:08 Could not create client: get directory at 'https://acme-v02.api.letsencrypt.org/directory': Get "https://acme-v02.api.letsencrypt.org/directory": dial tcp 172.65.32.248:443: i/o timeout
2024-04-15T12:54:08.124Z [container-agent] 2024-04-15 12:54:08 ERROR juju-log Failed to execute lego command

Additional context
Another example: the server returns 500, but the charm is green.
ot pass JSON Schema validation
unit-nginx-ingress-integrator-0: 16:26:19 INFO juju.worker.uniter.operation ran "certificates-relation-changed" hook (via hook dispatching script: dispatch)
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: Exited with code 1.
Stderr:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] acme: Registering account for is-admin@canonical.com
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Obtaining bundled SAN certificate given a CSR
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] AuthURL: https://acme-v02.api.letsencrypt.org/acme/authz-v3/338846294087
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Could not find solver for: tls-alpn-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Could not find solver for: http-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: use dns-01 solver
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:19 [INFO] [cla-checker.canonical.com] acme: Preparing to solve DNS-01
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:26:49 [INFO] [cla-checker.canonical.com] acme: Cleaning DNS-01 challenge
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:15 [WARN] [cla-checker.canonical.com] acme: cleaning up failed: httpreq: unexpected status code: [status code: 500] body: <!doctype html>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <html lang="en">
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <head>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <title>Server Error (500)</title>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </head>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <body>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: <h1>Server Error (500)</h1><p></p>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </body>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: </html>
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:16 [INFO] Deactivating auth: https://acme-v02.api.letsencrypt.org/acme/authz-v3/338846294087
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: 2024/04/15 16:27:16 Could not obtain certificates:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: error: one or more domains had a problem:
unit-httprequest-lego-k8s-0: 16:27:16 ERROR unit.httprequest-lego-k8s/0.juju-log certificates:3: [cla-checker.canonical.com] [cla-checker.canonical.com] acme: error presenting token: httpreq: unable to communicate with the API server: error: Post "https://lego-certs.canonical.com/present": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Hello @beliaev-maksim, indeed the charm status does not depend on the endpoint being available. We simply validate that the configured value is valid. I will re-classify this issue as a "request for enhancement".

@gruyaume I am not sure how this should be treated. From my perspective, I expect that if the charm is Active and green, then everything has worked out and I got my certs: if everything is green, I should see TLS on my app. When I do not see it, I have to go and debug what is wrong. The charm status should be able to assist me in this.

Closing as this is the same concern as discussed in #154, the effort will be tracked there.
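For reference, the per-application configuration assembled above can also be captured as a Juju bundle overlay rather than individual juju config calls. The option names and values below are exactly the ones used in the reproduction steps; expressing them as an overlay file is simply one way to record the intended state (the juju model-config proxy settings would still be applied separately), so adapt it as needed.

# overlay.yaml - sketch built from the configuration applied above; apply
# with "juju deploy <bundle> --overlay overlay.yaml" or keep as documentation.
applications:
  httprequest-lego-k8s:
    options:
      email: is-admin@canonical.com
      httpreq_endpoint: https://lego-certs.canonical.com
  nginx-ingress-integrator:
    options:
      service-hostname: cla-checker.canonical.com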
gharchive/issue
2024-04-15T13:58:48
2025-04-01T06:38:08.208066
{ "authors": [ "beliaev-maksim", "gruyaume" ], "repo": "canonical/httprequest-lego-k8s-operator", "url": "https://github.com/canonical/httprequest-lego-k8s-operator/issues/123", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2516153223
Flutter snaps with 3.24.1 shows inverted colors Hey guys, I know this is probably a flutter bug again, just like earlier reported https://github.com/canonical/iot-example-graphical-snap/issues/31, but I think it still should be listed here, as this seems the only place where people talk about snapping flutter applications. Indeed on flutter versions >3.24.1 the vector shading issue reported in issue 31 is fixed. However, the colors displayed are not correct. It seems that the blue and red channel are swapped somehow. I see this issue when I build the snap on branch 22/Flutter-demo, I see the issue when I build my own flutter application (both running in Ubuntu Core and with frame-it on a system running an X session), and someone else reported it here. The last reporter also reported that the issue is fixed with core24, which makes me believe that there could be an issue in how flutter and the mesa libraries interact? Do you have any suggestions on how to move forward with this? Should I make a bug on the flutter project? Or on ubuntu-frame? Kind regards, Charlee Hey @CharleeSF, there's no point in reporting it as an issue with Frame, as the problem clearly has nothing to do with Frame. As the issue seems to be related to Flutter changes I would guess that's most likely where the problem lies. (But I don't know the Flutter linux embedder code well enough to give an informed opinion.) The best way forward would be try reproducing the problem on a standard 22.04 system (without snaps). If that also has colour issues, that's an easier scenario to report to the Flutter project. BW, Alan The normal Linux build on 22.04 on my intel machine has no color problems, only when running it as a snap. So there is something different between Ubuntu Desktop 22.04 and the way the flutter application on core22 snaps are packaged.. Is there any way you can recommend to get my standard 22.04 system to be closer to the snaps to see if I can reproduce? If I do this for the 22/Flutter-demo on a Ubuntu Desktop 22.04 amd64 system snapcraft --verbose sudo snap install iot-example-graphical-snap_0+git.dcf41bf_amd64.snap --dangerous frame-it iot-example-graphical-snap Then the application is pink, see image 2. Also runs without problems with correct colors, and the application is purple. (I know I have wayland by doing echo $WAYLAND_DISPLAY and it gives wayland-0) If instead of frame-it iot-example-graphical-snap You run directly on your desktop: iot-example-graphical-snap That doesn't work for me.. I tried: charlee@lpks0013-ubuntu:~/xs4$ iot-example-graphical-snap (flutterdemo:352541): Gtk-WARNING **: 16:30:13.430: cannot open display: this crashes, then I tried charlee@lpks0013-ubuntu:~/xs4$ sudo iot-example-graphical-snap Setting up watches. Watches established. This waits forever, then I tried setting WAYLAND_DISPLAY because I remember reading somewhere that that was necessary charlee@lpks0013-ubuntu:~/xs4$ WAYLAND_DISPLAY=99 iot-example-graphical-snap Setting up watches. Watches established. That waits forever as well. 
I don't know why; from what I can see the display should be available:
charlee@lpks0013-ubuntu:~/xs4$ ls -l $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
srwxrwxr-x 1 charlee charlee 0 sep 10 15:35 /run/user/1000/wayland-0
I have the interfaces connected:
charlee@lpks0013-ubuntu:~/xs4/iot-example-graphical-snap$ snap connections iot-example-graphical-snap
Interface Plug Slot Notes
content[graphics-core22] iot-example-graphical-snap:graphics-core22 mesa-core22:graphics-core22 -
opengl iot-example-graphical-snap:opengl :opengl -
wayland iot-example-graphical-snap:wayland ubuntu-frame:wayland manual
I also tried to use the system's wayland interface, by doing a disconnect on ubuntu-frame:wayland and sudo snap connect iot-example-graphical-snap:wayland :wayland, but this gives the same Watches established results. I am probably missing something obvious here... Sorry, missed
charlee@lpks0013-ubuntu:~/xs4$ iot-example-graphical-snap
(flutterdemo:352541): Gtk-WARNING **: 16:30:13.430: cannot open display:
That looks like it is failing to connect to X11. Try:
$ env -u DISPLAY iot-example-graphical-snap
Whoop that worked!! AND the application is pink (with the wrong colors), like it is on Ubuntu Core. I took the effort to update the iot-example-graphical-snap to core24; the snapcraft.yaml is here: https://github.com/CharleeSF/iot-example-graphical-snap/blob/24/Flutter-demo/snap/snapcraft.yaml Indeed, on core24 the application is purple again. So, in summary: running on Ubuntu Desktop as a snap without frame-it has the wrong colors, but doing the same thing when building the snap on core24 has the correct colors. Whoop that worked!! AND the application is pink (with the wrong colors), like it is on Ubuntu Core. Now try to eliminate snap: run the unsnapped version with env -u DISPLAY That does not work...
charlee@lpks0013-ubuntu:~/xs4/iot-example-graphical-snap/flutterdemo$ env -u DISPLAY ./build/linux/x64/debug/bundle/flutterdemo
(flutterdemo:155030): Gtk-WARNING **: 09:47:23.814: cannot open display:
That does not work... OK, so that build of flutter doesn't (or isn't configured to) support Wayland. Unfortunately, I don't know how to change that. Anyway, we can eliminate Frame from the stack as you see the same behaviour on desktop. I've hacked the demo (as follows) to run the core22 version on X11:
$ snap install --dangerous --devmode iot-example-graphical-snap_0+git.dcf41bf_amd64.snap
$ snap run --shell iot-example-graphical-snap
$ env -u WAYLAND_DISPLAY DISPLAY=:0 $SNAP/bin/flutterdemo
To summarise our current findings:
protocol | Unsnapped | Snapped/core22 | Snapped/core24
X11 | :white_check_mark: | :x: | n/a
Wayland | ?? | :x: | :white_check_mark:
That strongly suggests that there is something odd about the core22 snap builds. Does the move to core24 unblock you? Or do you need to investigate further? We'll be updating the tutorials to core24 in the coming weeks, so this might be a moot issue. I have yet to figure out how to run our own snap on core24, but I will be working on this today/tomorrow and then I will know if it unblocks me :) but I will be trying this. It is not ideal as I think we want to wait with updating our whole system to core24, but it might at least be an MVP solution. Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here? It is not ideal as I think we want to wait with updating our whole system to core24 For what it's worth, you can mix different base:s on Ubuntu Core, regardless of version.
UC24 really just means that the kernel and root filesystem (core24) are 24.04-based, but you can run older base:d snaps, and the other way around. It does add to the memory and disk usage, though. Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here? https://github.com/canonical/iot-example-graphical-snap/ I'll get the Flutter example updated in a bit. Actually, let me move this issue there. Where would be a good channel to get support on getting my flutter snap to work on core24? Just the snapcraft forum, or here? I guess that depends on the nature of your problems. It sounds like you've already got a Snap recipe for the Flutter side of the move. If your dependencies are available for 24.04, then the rest should be trivial. If not, then it depends on why the dependency isn't available and the options for resolving that. @CharleeSF I just pushed the 24 version of the example: https://github.com/canonical/iot-example-graphical-snap/compare/22/Flutter-demo...24/Flutter-demo That said, whether I build 22 or 24, env -u WAYLAND_DISPLAY or env -u DISPLAY, snapped or not, it's always purple for me…? You're probably on a GLES platform? I honestly don't know :') I know very little about graphics. It is happening on my laptop, but also on a NUC-based device for which we are trying to build the application. If you have any commands you want me to run on either device (bear in mind that on the target device we have Ubuntu Core available, although I could boot a live linux USB to try some commands) to get more details, let me know :smile: You're probably on a GLES platform? I'm running on my laptop (so EGL, but 24.04). A further datapoint is that running the snap content directly shows the correct colours:
env -u DISPLAY /snap/iot-example-graphical-snap/current/bin/flutterdemo
env -u WAYLAND_DISPLAY /snap/iot-example-graphical-snap/current/bin/flutterdemo
Even trying to load the mesa-core22 userspace gives the right result...
env -u DISPLAY __EGL_EXTERNAL_PLATFORM_CONFIG_DIRS="/snap/mesa-core22/current/usr/share/egl/egl_external_platform.d" __EGL_VENDOR_LIBRARY_DIRS="/snap/mesa-core22/current/usr/share/glvnd/egl_vendor.d" LIBGL_DRIVERS_PATH="/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu/dri/:/snap/mesa-core22/current/usr/lib/i386-linux-gnu/dri/" LD_LIBRARY_PATH="/var/lib/snapd/lib/gl:/var/lib/snapd/lib/gl32:/var/lib/snapd/void:/snap/iot-example-graphical-snap/x25/usr/lib:/snap/iot-example-graphical-snap/x25/usr/lib/x86_64-linux-gnu:/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu:/snap/mesa-core22/current/usr/lib/x86_64-linux-gnu/vdpau:/snap/mesa-core22/current/usr/lib/i386-linux-gnu:/snap/mesa-core22/current/usr/lib/i386-linux-gnu/vdpau" /snap/iot-example-graphical-snap/current/bin/flutterdemo
Ah! The snap has GDK_GL: gles, and that is what causes the colour problem. I think that worked!!! That's amazing. Do you have any docs I can read to understand what happened? I will now test this on my own snap and I will let you know if it fixes it, but I am assuming it will. Yessssssss it fixed my application! Thanks sooooo much! Do you have any docs I can read to understand what happened? I can't point to any docs with the details. GLES is a somewhat dated graphics API, but one that is simpler and better supported by some embedded devices (typically ARM with bespoke graphics stacks).
We have that line in the Snap recipe because forcing use of the better-supported API helps on a lot of embedded systems and should be harmless on more capable systems. Another datapoint relevant to further progress: with the base: core22 example on RPi3/Ubuntu Core 22, GDK_GL=gles is needed to render at all...
# env -u GDK_GL $SNAP/bin/flutterdemo
(flutterdemo:5092): Gdk-CRITICAL **: 16:07:37.480: gdk_gl_context_make_current: assertion 'GDK_IS_GL_CONTEXT (context)' failed
** (flutterdemo:5092): WARNING **: 16:07:37.481: Failed to initialize GLArea: Unable to create a GL context
However, the colours are still wrong. And a final datapoint: on the same setup, with the base: core24 example we no longer need GDK_GL=gles. Looking into the RGB/BGR logic (where I suspect a mismatch has caused the color switch shown above) in the Flutter Linux embedder shows some questionable code. I've made https://github.com/flutter/engine/pull/55121 to make this consistent with the Windows embedder. I wasn't able to reproduce the issue here, but if anyone can reproduce and try with the change in the PR, that would be very helpful! Interesting @robert-ancell, I would like to try this for you, but I am not sure how to bump the flutter engine to that specific commit? We get flutter like this:
parts:
  flutter-git:
    source: https://github.com/flutter/flutter.git
    source-tag: 3.24.1
    source-depth: 1
    plugin: nil
    override-build: |
      mkdir -p $CRAFT_PART_INSTALL/usr/bin
      mkdir -p $CRAFT_PART_INSTALL/usr/libexec
      cp -r $CRAFT_PART_SRC $CRAFT_PART_INSTALL/usr/libexec/flutter
      ln -s $CRAFT_PART_INSTALL/usr/libexec/flutter/bin/flutter $CRAFT_PART_INSTALL/usr/bin/flutter
      ln -s $SNAPCRAFT_PART_INSTALL/usr/libexec/flutter/bin/dart $SNAPCRAFT_PART_INSTALL/usr/bin/dart
      $CRAFT_PART_INSTALL/usr/bin/flutter doctor
I am assuming there is a step I can add in the override-build to get the flutter engine to follow your commit? (I am still new to flutter and learning how all the parts click together) @CharleeSF it is largely a mystery to me too, but I can offer some guidance. The above snippet is building the app (not Flutter) and Robert's changes are in the Flutter engine. Unfortunately, building the engine locally isn't something I know well enough to integrate into a snap recipe. But you shouldn't need a snap to verify the fix: on your Ubuntu 22.04, you should be able to reproduce the colour problem by prefixing the launch with GDK_GL=gles:
$ GDK_GL=gles ./build/linux/x64/debug/bundle/flutterdemo
If you succeed in building Robert's branch, then using that will, hopefully, fix the colours.
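As an illustration of the channel-swap hypothesis discussed in this thread, here is a minimal Python sketch (not from the original conversation) that checks whether a screenshot looks red/blue-swapped relative to an expected colour. It assumes Pillow is installed; the file name, sample coordinates and expected colour are made up for the example, and it assumes the sampled pixel sits inside a solid-colour area so exact comparison is meaningful.
# Hedged sketch: sample one pixel and see whether it matches the expected colour
# directly or only after swapping the red and blue channels, which is what an
# RGBA/BGRA mix-up would look like.
from PIL import Image

EXPECTED = (103, 58, 183)   # hypothetical "purple" reference colour (R, G, B)
SAMPLE_XY = (200, 200)      # hypothetical pixel inside a solid-colour region

def looks_channel_swapped(path: str) -> bool:
    r, g, b = Image.open(path).convert("RGB").getpixel(SAMPLE_XY)
    return (b, g, r) == EXPECTED and (r, g, b) != EXPECTED

if __name__ == "__main__":
    print(looks_channel_swapped("screenshot.png"))  # hypothetical file name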
gharchive/issue
2024-09-10T11:17:21
2025-04-01T06:38:08.238950
{ "authors": [ "AlanGriffiths", "CharleeSF", "Saviq", "robert-ancell" ], "repo": "canonical/iot-example-graphical-snap", "url": "https://github.com/canonical/iot-example-graphical-snap/issues/34", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2587986565
[DPE-5612] Update shared workflows version This PR updates the shared workflows we use in our CI. In addition, I also upgraded the GH actions checkout and download-artifact. This will prevent a nodejs version warning in the logs, and we were at risk of our CI breaking soon. I would have separated the project update (not the CI workflow update) into a different PR. Yes, I need to be mindful of keeping things in scope. In that case, however, I did not have much choice, since the charm-lib dependencies group is required for the workflows, and scenario 6 breaks with ops 2.17. Oh I see. That's fine. Again, it was more of a nitpick, but I'm happy to hear that we are on the same page :D
gharchive/pull-request
2024-10-15T08:09:11
2025-04-01T06:38:08.245114
{ "authors": [ "Batalex", "deusebio" ], "repo": "canonical/kafka-k8s-operator", "url": "https://github.com/canonical/kafka-k8s-operator/pull/143", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1173467088
Every time the constructor is called, an HTTP request is sent to Loki asking it for its version Bug Description Every time the constructor is called, an HTTP request is sent to Loki asking it for its version. This information is then logged into the debug logs, spamming them with unneeded log lines. To Reproduce juju deploy loki-k8s --channel edge and look in the debug logs. Environment charm version: latest/edge Relevant log output N/A Additional context No response The _provide_loki() method queries the Loki server version using an HTTP API call and raises different types of exceptions depending on the success or failure of this call. This method is only used by the Loki provider charm to set a status message depending on whether the Loki server is active and reachable or not. The docstring of the LokiPushApiProvider object also recommends using such a _provide_loki() method in this manner. However, neither the docstring nor the Loki charm uses the exceptions raised by _provide_loki() to conditionally instantiate the LokiPushApiProvider object only if the Loki server is active and reachable. The LokiPushApiProvider object does use the Loki HTTP API in its _check_alert_rules method. This method is invoked in response to any changes in the relation with the consumer object and in response to a pebble ready event (see PR 132). Hence instantiating the LokiPushApiProvider object can lead to a false negative result in _check_alert_rules just because the Loki HTTP API is not responsive. In view of these observations, it is proposed that:
1. The _provide_loki() method be renamed to something like _is_loki_active(). Instead of raising exceptions, this method may return a boolean that asserts whether the Loki server HTTP API is responsive or not.
2. Instantiation of the LokiPushApiProvider be guarded by a check for _is_loki_active().
There may be a few concerns here. For example, how should the following scenario be handled: the Loki charm receives a relation changed event but its workload container is not yet alive, so this event is ignored (as per PR 132). Subsequently, on pebble ready, the Loki charm fails to see an active Loki server because of the time lag between the workload container becoming active and the time it takes the Loki HTTP API to become responsive; hence the Loki charm does not instantiate the LokiPushApiProvider, and the pending relation changed event is still not handled. In fact it will never be handled. What is missing here is the functionality of a "Readiness Probe" and the Loki charm being informed through a suitable event when the status of such a readiness probe changes. In the absence of such functionality, one approach to solving the problem is to use the periodic update-status event to check for the active status of the Loki server (i.e. the responsiveness of its HTTP API) and update all related consumers. I think renaming the method and guarding the instantiation of LokiPushApiProvider is a good idea. Let's keep in mind that we will need to observe the on.loki_push_api_alert_rules_changed event inside the guard, for instance:
if self._is_loki_active():
    self.loki_provider = LokiPushApiProvider(
        self,
        address=external_url.hostname or "",
        port=external_url.port or self._port,
        scheme=external_url.scheme,
        path=f"{external_url.path}/loki/api/v1/push",
    )
    self.framework.observe(self.loki_provider.on.loki_push_api_alert_rules_changed, self._loki_push_api_alert_rules_changed)
Let's keep in mind that we will need to observe the on.loki_push_api_alert_rules_changed event inside the guard, for instance: if self._is_loki_active(): self.loki_provider = LokiPushApiProvider( self, address=external_url.hostname or "", port=external_url.port or self._port, scheme=external_url.scheme, path=f"{external_url.path}/loki/api/v1/push", ) self.framework.observe(self.loki_provider.on.loki_push_api_alert_rules_changed, self._loki_push_api_alert_rules_changed) As mentioned earlier, we have to be very, very careful with this. The actual Loki charm does not listen to relation_* events, but they will still be emitted as it is part of the relation. If there are no observers, they will simply disappear. In the case of, say, a bundle deployment, where a relation-joined event may occur before Loki is actually ready, this means that relation-joined simply disappears into the ether, never to be seen again. Any data which is expected to be set in relation_joined will not be. The alternative to this is quite literally the "common exit hook" which we have a number of issue about moving away from. If the charm/library looks at every relation data bag on every event to compare the state of the charm, we should probably try to hash absolutely everything. In order to protect the validity of relation-* events in this case, I'd suggest waiting until after container operations are moved into the charm itself, so the provider can manage relation data only. I honestly do not see why we need to have Loki active in order to create the rules in the container. For that, we need only Pebble. I am also very conflicted about providing the URL via LokiPushApiProvider ONLY if Loki is active, as relation changes have a lag to propagate to the other side. Interestingly enough, if Loki is not active, we do NOT remove the URL from LokiPushApiProvider, which is at very least least inconsistent. Interestingly enough, if Loki is not active, we do NOT remove the URL from LokiPushApiProvider, which is at very least least inconsistent. How can a Provider charm let the Consumer know that its workload endpoint is temporarily not in service (due to say upgrade or maintenance etc) ? This is to prevent errors in the Consumer charm should it try to connect to the workload endpoints of the Provider charm during this process. There was previously an attempt to cater to this issue using ready() and unready() methods in the Operator Framework through an relation management object that was removed as it was not being used as intended. Closed PR https://github.com/canonical/loki-k8s-operator/pull/135 since fix is moved to https://github.com/canonical/loki-k8s-operator/pull/151
gharchive/issue
2022-03-18T11:09:55
2025-04-01T06:38:08.258708
{ "authors": [ "Abuelodelanada", "balbirthomas", "mmanciop", "rbarry82", "simskij" ], "repo": "canonical/loki-k8s-operator", "url": "https://github.com/canonical/loki-k8s-operator/issues/112", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1629265588
Feature/tls runtime set Issue Slow and fragile TLS setup. DPE-1447 Solution Port TLS hot reload with the lib from merged PR#142 Should we delete the rolling ops lib since we are no longer using it? What a miss, thanks! Done in db310a1
gharchive/pull-request
2023-03-17T12:53:22
2025-04-01T06:38:08.284801
{ "authors": [ "paulomach" ], "repo": "canonical/mysql-k8s-operator", "url": "https://github.com/canonical/mysql-k8s-operator/pull/175", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1057003499
support python 3.5 in libs Note: @balbirthomas I converted JujuTopology from a dataclass to a regular class. Tests still pass, but I wanted to bring that to your attention. Bump LIBPATCH? Manually merging so Leon can continue work.
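For context, here is a hedged Python sketch (not the library's actual code) of the kind of conversion described above: dataclasses require Python 3.7+, so a plain class with an explicit __init__ keeps the module importable on Python 3.5. The field names are hypothetical.
# Before (requires Python 3.7+):
#
#   from dataclasses import dataclass
#
#   @dataclass
#   class JujuTopology:
#       model: str
#       application: str
#
# After (works on Python 3.5), with the generated methods written out by hand:
class JujuTopology:
    def __init__(self, model, application):
        self.model = model
        self.application = application

    def __repr__(self):
        return "JujuTopology(model={!r}, application={!r})".format(
            self.model, self.application
        )

    def __eq__(self, other):
        if not isinstance(other, JujuTopology):
            return NotImplemented
        return (self.model, self.application) == (other.model, other.application)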
gharchive/pull-request
2021-11-18T06:54:26
2025-04-01T06:38:08.301169
{ "authors": [ "dstathis", "sed-i" ], "repo": "canonical/prometheus-operator", "url": "https://github.com/canonical/prometheus-operator/pull/168", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2240115656
chore: Use the rustup snap to install latest stable rust Description This changes the way Rust and Cargo are installed for building the charm. Instead of using the deb packages available for the base, it uses the rustup snap to install the latest stable Rust and Cargo releases. This will solve build issues for modules like pydantic that now depend on newer releases of Rust to build. Checklist: [x] My code follows the style guidelines of this project [x] I have performed a self-review of my own code [ ] I have made corresponding changes to the documentation [ ] I have added tests that validate the behaviour of the software [x] I validated that new and existing unit tests pass locally with my changes [ ] Any dependent changes have been merged and published in downstream modules [ ] I have bumped the version of the library My feeling here was more or less "let's use what is in the latest Ubuntu". But I understand that if we really want the latest pydantic, we need this workaround.
gharchive/pull-request
2024-04-12T13:03:47
2025-04-01T06:38:08.306621
{ "authors": [ "ghislainbourgeois", "gruyaume" ], "repo": "canonical/sdcore-nssf-k8s-operator", "url": "https://github.com/canonical/sdcore-nssf-k8s-operator/pull/111", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
294632411
Controller tests for parent child This change is for https://github.com/canoo/dolphin-platform/issues/603 Coverage increased (+0.07%) to 57.736% when pulling 007da71d8ea45d03de3c493564522a4ee974a19d on ControllerTestsForParentChild into 5483d3fbcbb5f61ca999e0cba6f932f6b844d4e8 on master. Review comments fixed.
gharchive/pull-request
2018-02-06T04:56:19
2025-04-01T06:38:08.317563
{ "authors": [ "coveralls", "kunsingh" ], "repo": "canoo/dolphin-platform", "url": "https://github.com/canoo/dolphin-platform/pull/840", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1100755625
global refactoring and make it possible to access longs Hi! Sorry, there were long holidays in my country; that's why I kept silent all that time. Thank you very much for the hints you gave me in #1. I successfully adapted your code to longs, and now I'm happy to share the results. I made a minor refactoring, so now it's more like a library that, in theory, is ready for prod use. I would be more than happy to hear any thoughts about this work. Thank you once again. It looks awesome! And, very neat indeed. I once again understand how primitive I am at software engineering, particularly with respect to high-level design :) Thank you for your effort!
gharchive/pull-request
2022-01-12T20:22:22
2025-04-01T06:38:08.319329
{ "authors": [ "canozbey", "izemlyanskiy" ], "repo": "canozbey/finite-state-rice-decoder", "url": "https://github.com/canozbey/finite-state-rice-decoder/pull/2", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
124668952
When the WebsiteAgent receives Events, we do not need to require that they contain a url keyword I think it’s unfortunate that we wrote this Agent to use the url value from an event. The newer url_from_event is much better, and could just be set to {{ url }} to emulate the old behavior, but dropping this would be a breaking change. I’m not sure how to handle this, but it sucks that if an incoming event just contains a url keyword, even when you just wanted merging behavior, it changes the behavior of the Agent. @dsander, I've added a migration and removed usage of the Event's url payload value. Can you or @knu think of any other way that this could break existing users who expect this behavior? The migration looks good to me. Thanks!
gharchive/pull-request
2016-01-03T20:16:08
2025-04-01T06:38:08.339206
{ "authors": [ "cantino", "dsander" ], "repo": "cantino/huginn", "url": "https://github.com/cantino/huginn/pull/1205", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1372461514
Question: Extend the RestEaseGeneratedType? When using the RestEase.SourceGenerator, an internal class is generated: internal class Implementation_1_ITestApi : global::Test.Net.Client.ITestApi. I want to extend this class like:
public class MyClient : RestEaseGeneratedTypes.Implementation_1_ITestApi, ITestClient
{
    public Task<Result> DoSomethingAsync(string request, CancellationToken cancellationToken = default)
    {
        throw new System.NotImplementedException();
    }
}
But I encounter several issues: CS0060 Inconsistent accessibility: base class 'Implementation_1_ITestApi' is less accessible than class 'MyClient' CS7036 There is no argument given that corresponds to the required formal parameter 'requester' of 'Implementation_1_ITestApi.Implementation_1_ITestApi(IRequester)' Do you have a solution, or do you plan to change this behavior in a future version? No, that is explicitly not supported. Even if you declare your subclass internal and give it the right constructor, there's an [Obsolete] attribute which will cause a compiler error. If you manage to hack your way around that, RestClient.For doesn't know anything about your subclass and won't instantiate it. The name of a generated type is not stable (it may change from build to build), and the constructor syntax may change from release to release. The supported way to add extra methods is using extension methods, as documented here. Thank you.
gharchive/issue
2022-09-14T06:57:55
2025-04-01T06:38:08.343104
{ "authors": [ "StefH", "canton7" ], "repo": "canton7/RestEase", "url": "https://github.com/canton7/RestEase/issues/235", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2195447437
🛑 Keycloak is down In ce603eb, Keycloak (https://id.cantorgymnasium.de/) was down: HTTP code: 502 Response time: 412 ms Resolved: Keycloak is back up in 9a66f13 after 12 minutes.
gharchive/issue
2024-03-19T16:37:13
2025-04-01T06:38:08.345651
{ "authors": [ "denyskon" ], "repo": "cantorgymnasium/status", "url": "https://github.com/cantorgymnasium/status/issues/253", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
256631357
Validation changed in original source. I think you need to update it again; please look at the original source here: https://github.com/caouecs/Laravel-lang/blob/827aa1240855862582573495aa67b50835792bc5/script/en/validation.php#L44 :'( Headache? 😺 I opened an issue for all languages ( #752 )
gharchive/issue
2017-09-11T09:06:07
2025-04-01T06:38:08.355333
{ "authors": [ "caouecs", "idhamperdameian" ], "repo": "caouecs/Laravel-lang", "url": "https://github.com/caouecs/Laravel-lang/issues/751", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
318179531
not_regex translated and province attribute added This is my first pull request 😄 I'll work on the fa language and help to make it perfect. Thank you
gharchive/pull-request
2018-04-26T20:10:31
2025-04-01T06:38:08.356398
{ "authors": [ "caouecs", "mohsen4887" ], "repo": "caouecs/Laravel-lang", "url": "https://github.com/caouecs/Laravel-lang/pull/819", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
486334814
Source files for the web front end Any idea where we can see the source for the front end (askbayou.com)? I can see that there is a flask server running. Ports 8080, 8081 & 8084 are open, but 8084 seems to be accessible for API access only. Is there any documentation on how to call it (http://localhost:8084/apisynthesis)? Thanks a lot...
gharchive/issue
2019-08-28T11:55:32
2025-04-01T06:38:08.364280
{ "authors": [ "nnganesha" ], "repo": "capergroup/bayou", "url": "https://github.com/capergroup/bayou/issues/221", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
364200663
Add MilkDrop title text animation Text doesn't look great at the default size; not sure if there is something wrong or it's just small so it looks jaggy. I'm working on fixing the blurriness of the text in butterchurn. The text should be less aliased now (we were rendering at a smaller resolution and scaling it up before).
gharchive/pull-request
2018-09-26T20:34:54
2025-04-01T06:38:08.399441
{ "authors": [ "jberg" ], "repo": "captbaritone/webamp", "url": "https://github.com/captbaritone/webamp/pull/659", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
791450758
menu item doesn't show active state in hamburger menu in masthead Talked with Anna Wen about this issue. She was able to reproduce it on the current testing environment. When setting the last menu item link in the masthead to be selected, the item does not have a selected state in the mobile nav, only desktop. DESKTOP: MOBILE MENU: (You can see the selected menu item in desktop does not have a selected state in mobile.) @annawen1 Issue is ready to finish the work. Airtable updated, labels and release added.
gharchive/issue
2021-01-21T19:57:35
2025-04-01T06:38:08.409217
{ "authors": [ "RobertaJHahn", "yellowdragonfly" ], "repo": "carbon-design-system/carbon-for-ibm-dotcom", "url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/4971", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
902719681
[Content group] without expressive, update to use Carbon core Detailed description Describe in detail the issue you're having. With the expressive theme removed, Link and list within Content group should be updated to use the Link large and List large variants from Carbon core. I am hoping that updating the Link and list within Content group will cascade to all instances of Content group simple, with image, etc. Please let me know if we need additional issues to update the other stories individually. Assuming CTA will be updated through Link with icon in issue #6179. Is this a feature request (new component, new icon), a bug, or a general issue? Is this issue related to a specific component? What did you expect to happen? What happened instead? What would you like to see changed? What browser are you working in? What version of Carbon for IBM.com are you using? What offering/product do you work on? Any pressing ship or release dates we should be aware of? Additional information PR removing expressive Web components React Closing in favor of #6185; should be able to update both instances within the utility fix.
gharchive/issue
2021-05-26T17:27:45
2025-04-01T06:38:08.415609
{ "authors": [ "oliviaflory" ], "repo": "carbon-design-system/carbon-for-ibm-dotcom", "url": "https://github.com/carbon-design-system/carbon-for-ibm-dotcom/issues/6200", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
449361673
feat(tutorial): complete step 1 Closes # {{short description}} Changelog New {{new thing}} Changed {{change thing}} Removed {{removed thing}} Congratulations! 🥇 You have successfully completed part 1.
gharchive/pull-request
2019-05-28T16:19:28
2025-04-01T06:38:08.418061
{ "authors": [ "ibmer20", "jcharnetsky" ], "repo": "carbon-design-system/carbon-tutorial", "url": "https://github.com/carbon-design-system/carbon-tutorial/pull/63", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1185450703
Update usage.mdx Removed screen reader info; usual editorial tweaks with Keyboard. Made "group label" the consistent name by removing "heading" (as discussed). Most important change to scrutinize: the repositioning of the 2nd bullet of the label to the group label area. Took a guess at what was meant; may not be correct. The other change to confirm: I moved the last bullet of Group labels to Checkbox labels and modified one word. It otherwise made no sense to me. @mbgower Maybe this should be a discussion topic. Group labels should be concise and to the point. For other types of form inputs, after selecting something, you have the option to set it as a warning state in case you need to communicate more information based on the current selection, which could be something similar to what you are talking about? Checkbox does not currently have a warning state though. @laurenmrice I've removed the 'instruction' text, and added it to the next agenda. With that done, I think this is ready to go? bump @aledavila or @dakahn when you get a sec!
gharchive/pull-request
2022-03-29T21:49:51
2025-04-01T06:38:08.420524
{ "authors": [ "joshblack", "laurenmrice", "mbgower" ], "repo": "carbon-design-system/carbon-website", "url": "https://github.com/carbon-design-system/carbon-website/pull/2831", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
509305734
Design kit: add icon cell variants Carbon added additional icon cell variations to the design kit: 02 hover, 03 selected, 04 disabled. Acceptance criteria: [ ] white theme [ ] gray 10 theme [ ] gray 90 theme [ ] gray 100 theme not stale
gharchive/issue
2019-10-18T20:54:03
2025-04-01T06:38:08.447508
{ "authors": [ "oliviaflory" ], "repo": "carbon-design-system/ibm-dotcom-library-design-kit", "url": "https://github.com/carbon-design-system/ibm-dotcom-library-design-kit/issues/7", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2017235785
Clean up Datagrid stories Reorganize Datagrid stories so that base component features are exposed at the top level and "extensions" are included as extensions. The base component of the Datagrid includes the following features:
– Table headers
– Clickable row items
– Row action buttons
– Batch actions
– Empty state
– Frozen columns (scrolling)
– Responsiveness (scrolling)
– Column alignment
– Infinite scrolling
– Resizable columns
Some earlier proposals for reorganizing the stories exist, although batch actions may still better suit the base section for now. Moving to Later for now since some of the previous cleanup broke docs. May want to revisit this after we do some Storybook improvements. Done
gharchive/issue
2023-11-29T19:07:01
2025-04-01T06:38:08.450502
{ "authors": [ "elycheea", "ljcarot" ], "repo": "carbon-design-system/ibm-products", "url": "https://github.com/carbon-design-system/ibm-products/issues/3863", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }