id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
1090430584
|
🛑 Wedding HTTPS is down
In fb90c56, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 839e256.
|
gharchive/issue
| 2021-12-29T10:44:32 |
2025-04-01T04:56:04.589626
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/3960",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1157216701
|
🛑 Wedding HTTPS is down
In 35cf75c, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in eab7cc3.
|
gharchive/issue
| 2022-03-02T13:40:53 |
2025-04-01T04:56:04.591836
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/5204",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1168121318
|
🛑 Wedding HTTPS is down
In fac8470, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 81a041c.
|
gharchive/issue
| 2022-03-14T09:42:28 |
2025-04-01T04:56:04.593997
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/5410",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1226827205
|
🛑 Wedding HTTPS is down
In 0c76426, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in accdb2d.
|
gharchive/issue
| 2022-05-05T15:26:10 |
2025-04-01T04:56:04.595987
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/6325",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1313715158
|
🛑 Wedding HTTPS is down
In e02bbd5, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in 1ee2d8d.
|
gharchive/issue
| 2022-07-21T19:37:08 |
2025-04-01T04:56:04.598245
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/7596",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1510233962
|
🛑 Wedding HTTPS is down
In 845f8c3, Wedding HTTPS ($WEDDINGHTTPS) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Wedding HTTPS is back up in cf15fc1.
|
gharchive/issue
| 2022-12-25T02:45:43 |
2025-04-01T04:56:04.600267
|
{
"authors": [
"atatkin"
],
"repo": "atatkin/milos-uptime",
"url": "https://github.com/atatkin/milos-uptime/issues/9943",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
500963410
|
Could not find field action definition for action 'Split'
This occurs when there is a split in the integration and a batch update, for example:
DB -> Split -> Data Mapper -> DB (batch update)
Steps to reproduce: create the integration described above and publish it.
Edit the integration
exported integration:
asdf-export.zip
This looks like a duplicate of #1235 reported in a comment in issues.jboss.org/browse/ENTESB-11691 - should have been fixed in 1.42.4.
This is the expected behavior for one-to-many where the target is a collection, as far as I can see with upstream master.
|
gharchive/issue
| 2019-10-01T15:11:04 |
2025-04-01T04:56:04.648804
|
{
"authors": [
"igarashitm",
"mmelko"
],
"repo": "atlasmap/atlasmap",
"url": "https://github.com/atlasmap/atlasmap/issues/1252",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
411731906
|
MICROS-6976 PDB name should change if configuration changes
PodDisruptionBudget is immutable and can't be modified by smith, therefore if we change the config, we need to give the object a new name.
Note for future self: in the playground I tested adding a second PodDisruptionBudget with the same label selectors and discussed it with KITT; if a new one exists for a short period of time before the old one is deleted by smith, there won't be any major issues.
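As a rough illustration of the renaming idea (a sketch only; voyager/smith itself is written in Go, and these names are made up), one way is to derive the object's name from a short hash of its configuration, so any config change yields a new name:
    const crypto = require('crypto')

    // Derive a name suffix from the config: same config -> same name,
    // changed config -> new name, so the immutable PDB is replaced, not mutated.
    // Assumes the config serializes deterministically (stable key order).
    function pdbName (base, config) {
      const digest = crypto.createHash('sha256')
        .update(JSON.stringify(config))
        .digest('hex')
      return `${base}-${digest.slice(0, 8)}`
    }

    console.log(pdbName('my-app-pdb', { minAvailable: 2 }))  // "my-app-pdb-" plus 8 hex chars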
|
gharchive/pull-request
| 2019-02-19T03:46:17 |
2025-04-01T04:56:04.653985
|
{
"authors": [
"halcyonCorsair"
],
"repo": "atlassian/voyager",
"url": "https://github.com/atlassian/voyager/pull/183",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1077687429
|
⚠️ AtmoLive06 Server has degraded performance
In b89dfd5, AtmoLive06 Server ($ATMOLIVE06) experienced degraded performance:
HTTP code: 200
Response time: 12200 ms
Resolved: AtmoLive06 Server performance has improved in 56e49cf.
|
gharchive/issue
| 2021-12-12T00:57:35 |
2025-04-01T04:56:04.670334
|
{
"authors": [
"atmovantage"
],
"repo": "atmovantage/atmostatus",
"url": "https://github.com/atmovantage/atmostatus/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
165963360
|
Haddock experiment (WIP)
I would like to implement support for showing Haddock docstrings in tooltips alongside "info" or "type" information.
I'm hoping for some guidance, both on the overall design and on the specifics of using the API and interfacing with the existing code. I am totally new to CoffeeScript and Atom package development.
One of the outputs of the stack haddock --no-haddock-deps command is a text file for each package that says the following at the top:
-- Hoogle documentation, generated by Haddock
The contents of the file are pretty straightforward to parse. I wrote a function in haddock.js that generates a dictionary with symbol names as keys and haddock docstrings as values. Actually it's a nested dictionary with module names as the outer key.
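A rough sketch of that kind of parser (illustrative only; the function name and regexes here are mine, not the actual haddock.js code, and they assume the Hoogle output uses "module X" headers, "-- | ..." doc comment lines, and "name :: Type" signature lines):
    const fs = require('fs')

    function parseHoogleFile (path) {
      const docs = {}            // { moduleName: { symbolName: docstring } }
      let currentModule = null
      let pendingDoc = []

      for (const line of fs.readFileSync(path, 'utf8').split('\n')) {
        const moduleMatch = line.match(/^module\s+(\S+)/)
        const docMatch = line.match(/^--\s?\|?\s?(.*)/)
        const sigMatch = line.match(/^([\w'.]+)\s*::/)

        if (moduleMatch) {
          currentModule = moduleMatch[1]
          docs[currentModule] = docs[currentModule] || {}
          pendingDoc = []
        } else if (docMatch) {
          pendingDoc.push(docMatch[1])        // accumulate docstring lines
        } else if (sigMatch && currentModule) {
          docs[currentModule][sigMatch[1]] = pendingDoc.join('\n').trim()
          pendingDoc = []
        } else {
          pendingDoc = []                     // blank/other line resets the pending doc
        }
      }
      return docs
    }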
For starters, the plugin could request that the user manually run the stack haddock command before documentation can be shown. Having even this basic functionality would be valuable for users that just need to browse through a Haskell codebase. Perhaps later the plugin could automatically run stack in the background, but I've noticed that it takes a long time (~1:30 in my project, or ~30s with --fast option).
I was able to get the Haddock strings to print to the console when mousing over a symbol, but somehow I was not able to propagate the same string all the way to the tooltip display. I think a short chat on IRC or something could clear up a lot of details for me.
Thanks for your efforts, that's appreciated.
You can usually find me on #ghc-mod at freenode (irc.freenode.net) -- nick's Lierdakil. Ping me and if I'm near IRC client, I'll try to answer. Bear in mind I'm on UTC+3 and usually busy in the afternoon, so 16 to 19 UTC would probably be the best time to catch me.
|
gharchive/pull-request
| 2016-07-17T06:31:35 |
2025-04-01T04:56:04.678024
|
{
"authors": [
"kostmo",
"lierdakil"
],
"repo": "atom-haskell/haskell-ghc-mod",
"url": "https://github.com/atom-haskell/haskell-ghc-mod/pull/166",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
189074528
|
Atom 1.12.2 fails to build on Slackware Linux 14.2: permissions errors
Hi,
On Slackware 14.2 with Node.js 6.9.1 and NPM 3.10.8 I get the build error for Atom 1.12.2:
Node: v6.9.1
Npm: v3.10.8
Installing script dependencies
Installing apm
npm ERR! Linux 4.4.14
npm ERR! argv "/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/bin/node" "/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/node_modules/.bin/npm" "dedupe"
npm ERR! node v4.4.5
npm ERR! npm v3.10.5
npm ERR! path /tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/node_modules/npm/node_modules
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! syscall access
npm ERR! Error: EACCES: permission denied, access '/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/node_modules/npm/node_modules'
npm ERR! at Error (native)
npm ERR! { [Error: EACCES: permission denied, access '/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/node_modules/npm/node_modules']
npm ERR! errno: -13,
npm ERR! code: 'EACCES',
npm ERR! syscall: 'access',
npm ERR! path: '/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/node_modules/npm/node_modules' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.
npm ERR! Please include the following file with any support request:
npm ERR! /tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/npm-debug.log
module.js:327
throw err;
^
Error: Cannot find module 'node-gyp/bin/node-gyp'
at Function.Module._resolveFilename (module.js:325:15)
at Function.require.resolve (internal/module.js:16:19)
at new Install (/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/lib/install.js:53:38)
at Object.module.exports.run (/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/lib/apm-cli.js:226:18)
at Object.<anonymous> (/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/lib/cli.js:8:7)
at Object.<anonymous> (/tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/lib/cli.js:12:4)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
child_process.js:506
throw err;
^
Error: Command failed: /tmp/SBo/atom-1.12.2/apm/node_modules/atom-package-manager/bin/apm --loglevel=error install
at checkExecSyncError (child_process.js:483:13)
at Object.execFileSync (child_process.js:503:13)
at module.exports (/tmp/SBo/atom-1.12.2/script/lib/install-atom-dependencies.js:19:16)
at Object.<anonymous> (/tmp/SBo/atom-1.12.2/script/bootstrap:28:1)
at Module._compile (module.js:570:32)
at Object.Module._extensions..js (module.js:579:10)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
using this SlackBuild. The way I am building this package is via running sudo ./atom.SlackBuild where the file atom.SlackBuild is the previously linked SlackBuild. I also get the same exact error building Atom 1.12.0 using the SlackBuild provided in the official repositories here.
Thanks for your time,
Brenton
I can reproduce this problem with MarIuX64, a distribution based on Linux From Scratch using the Bee package manager.
Atom 1.10.2, and 1.12.2 show this problem.
sudo is used here, and looking at the permissions, it looks like the ID of the user running sudo are used to create the directories.
Installing Atom under the user, everything works fine.
@fusion809, are you using sudo?
@fusion809, I just saw that you also use sudo in sudo ./atom.SlackBuild.
Yeah, should have used su - c "./atom.SlackBuild". Just learnt that so I'm gonna close.
The problem seems to be with script/lib/install-apm.js.
$ nl -ba script/lib/install-apm.js
1 'use strict'
2
3 const childProcess = require('child_process')
4 const path = require('path')
5
6 const CONFIG = require('../config')
7
8 module.exports = function () {
9 console.log('Installing apm')
10 childProcess.execFileSync(
11 CONFIG.getNpmBinPath(),
12 ['--global-style', '--loglevel=error', 'install'],
13 {env: process.env, cwd: CONFIG.apmRootPath}
14 )
15 }
@fusion809, in my opinion the installation should work without any problems using sudo; it's a regression from 1.9.0. So I'd keep it open, but rephrase the title.
Builds fine for me now though, I even uploaded my tgz to https://github.com/fusion809/SlackBuilds/releases/tag/atom-1.12.2.
Do you have more information on how you got it to work with sudo?
Nope, as I said I just kept trying it and eventually it worked. Dunno why but it did.
We worked around this problem with
unset SUDO_USER SUDO_UID SUDO_COMMAND SUDO_GID
start_cmd script/build
in our package build script. Needed other workarounds, because the new atom build system doesn't honor DESTDIR etc.
|
gharchive/issue
| 2016-11-14T10:16:10 |
2025-04-01T04:56:04.691928
|
{
"authors": [
"buczek",
"fusion809",
"paulmenzel"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/13220",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
297953210
|
Uncaught TypeError: Cannot read property 'replace' of undefined
I find that in certain file types I get this exception thrown (Atom Beta). So far I've found it reliably does it in git commit message types (invoked by git-plus) and in Go files, but have not seen it with the same version in JS or Java files. It tends to keep throwing this error over and over as I type.
Worth noting this error was definitely not present until the latest Beta update.
Atom: 1.25.0-beta0 x64
Electron: 1.7.11
OS: Mac OS X 10.13.3
Thrown From: Atom Core
Stack Trace
Uncaught TypeError: Cannot read property 'replace' of undefined
At /Applications/Atom Beta.app/Contents/Resources/app/node_modules/autocomplete-plus/lib/autocomplete-manager.js:527
TypeError: Cannot read property 'replace' of undefined
at AutocompleteManager.getWordCharacterRegex (/Applications/Atom Beta.app/Contents/Resources/app/node_modules/autocomplete-plus/lib/autocomplete-manager.js:527:72)
at AutocompleteManager.getPrefix (/Applications/Atom Beta.app/Contents/Resources/app/node_modules/autocomplete-plus/lib/autocomplete-manager.js:507:43)
at AutocompleteManager.findSuggestions (/Applications/Atom Beta.app/Contents/Resources/app/node_modules/autocomplete-plus/lib/autocomplete-manager.js:260:31)
Commands
2x -0:09.1.0 core:move-down (input.hidden-input)
-0:07.3.0 editor:delete-to-beginning-of-word (input.hidden-input)
-0:06.6.0 core:backspace (input.hidden-input)
-0:05.8.0 editor:delete-to-beginning-of-word (input.hidden-input)
-0:04.3.0 core:move-up (input.hidden-input)
Non-Core Packages
Was able to reproduce in safe mode
This looks like an autocomplete-plus issue. Specifically, additionalWordChars is returning undefined in the following function.
getWordCharacterRegex (scopeDescriptor) {
  const additionalWordChars = getAdditionalWordCharacters(scopeDescriptor)
  let regex = wordCharacterRegexCache.get(additionalWordChars)
  if (!regex) {
    regex = new RegExp(`[${UnicodeLetters}${additionalWordChars.replace(']', '\\]')}]`)
    wordCharacterRegexCache.set(additionalWordChars, regex)
  }
  return regex
}
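A minimal defensive sketch of a fix (not necessarily the patch that actually landed in autocomplete-plus; UnicodeLetters below is a stand-in for the package's real character class): treat a missing additionalWordChars value as an empty string before building the regex.
    const wordCharacterRegexCache = new Map()
    const UnicodeLetters = 'a-zA-Z'  // stand-in; the real package uses a full Unicode letter class

    function getWordCharacterRegex (additionalWordChars) {
      const extra = additionalWordChars || ''  // guard: a scope may define no extra word characters
      let regex = wordCharacterRegexCache.get(extra)
      if (!regex) {
        regex = new RegExp(`[${UnicodeLetters}${extra.replace(']', '\\]')}]`)
        wordCharacterRegexCache.set(extra, regex)
      }
      return regex
    }

    console.log(getWordCharacterRegex(undefined))  // no longer throws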
Please close this issue and open one on the autocomplete-plus repo.
Oops 🤦♂️ ! Good call
See https://github.com/atom/autocomplete-plus/issues/956
|
gharchive/issue
| 2018-02-16T23:41:38 |
2025-04-01T04:56:04.697381
|
{
"authors": [
"Aerijo",
"drorata",
"tylerFowler"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/16767",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
455788159
|
Uncaught Error: The specified module could not be found./?~\AppData\Local\atom\app-1.38.1\res...
[Enter steps to reproduce:]
...
...
Atom: 1.38.1 x64
Electron: 2.0.18
OS: Microsoft Windows 10 Home
Thrown From: Atom Core
Stack Trace
Uncaught Error: The specified module could not be found.
\?\C:\Users\luci\AppData\Local\atom\app-1.38.1\resources\app.asar.unpacked\node_modules\keyboard-layout\build\Release\keyboard-layout-manager.node
At ELECTRON_ASAR.js:166
Error: The specified module could not be found.
\\?\C:\Users\luci\AppData\Local\atom\app-1.38.1\resources\app.asar.unpacked\node_modules\keyboard-layout\build\Release\keyboard-layout-manager.node
at process.module.(anonymous function) [as dlopen] (ELECTRON_ASAR.js:166:20)
at Object.Module._extensions..node (module.js:671:18)
at Object.module.(anonymous function) [as .node] (ELECTRON_ASAR.js:180:18)
at Module.load (module.js:561:32)
at tryModuleLoad (module.js:504:12)
at Function.Module._load (module.js:496:3)
at Module.require (/app.asar/static/index.js:60:45)
at require (internal/module.js:11:18)
at customRequire (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:1:688769)
at get_KeyboardLayoutManager (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:14:2692775)
at get_manager (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:14:2692855)
at Object.getCurrentKeyboardLayout (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:14:2692981)
at e.keystrokeForKeyboardEvent (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:14:1078253)
at KeymapManager.t.exports.KeymapManager.keystrokeForKeyboardEvent (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:11:1244475)
at KeymapManager.t.exports.KeymapManager.handleKeyboardEvent (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:11:1242460)
at WindowEventHandler.handleDocumentKeyEvent (~/AppData/Local/atom/app-1.38.1/resources/app/static/<embedded>:11:283624)
Commands
-1:06.5.0 core:select-all (input.hidden-input)
-0:38.9.0 tree-view:add-file (ol.tree-view-root.full-menu.list-tree.has-collapsable-children.focusable-panel)
Non-Core Packages
atom-beautify 0.33.4
atom-html-preview 0.2.6
atom-ternjs 0.19.1
autoclose-html 0.23.0
busy-signal 2.0.1
csslint 1.2.0
emmet 2.4.3
intentions 1.1.5
language-ejs 0.4.0
linter 2.3.0
linter-eslint 8.5.5
linter-jshint 3.1.16
linter-ui-default 1.7.1
pigments 0.40.2
Sublime-Style-Column-Selection 1.7.5
keyboard layout manager.node electron_asar.js:166
Thanks for taking the time to contribute!
We noticed that this is a duplicate of https://github.com/atom/atom/issues/14461. You may want to subscribe there for updates.
Because we treat our issues list as the Atom team's backlog, we close duplicates to focus our work and not have to touch the same chunk of code for the same reason multiple times. This is also why we may mark something as duplicate that isn't an exact duplicate but is closely related.
For information on how to use GitHub's search feature to find out if something is a duplicate before filing, see the How Can I Contribute? section of the Atom CONTRIBUTING guide.
|
gharchive/issue
| 2019-06-13T14:52:09 |
2025-04-01T04:56:04.703727
|
{
"authors": [
"Arcanemagus",
"lumarini"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/19524",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
54553602
|
Uncaught Error: connect ETIMEDOUT
[Enter steps to reproduce below:]
...
...
Atom Version: 0.169.0
System: Mac OS X 10.10.1
Thrown From: Atom Core
Stack Trace
Uncaught Error: connect ETIMEDOUT
At stream.js:94
Error: connect ETIMEDOUT
at exports._errnoException (util.js:746:11)
at Object.afterConnect [as oncomplete] (net.js:990:19)
Commands
-1:41.8 core:select-all (atom-text-editor.editor)
-1:41.5 core:paste (atom-text-editor.editor)
-1:23.5 settings-view:open (atom-text-editor.editor)
-0:18.9 editor:consolidate-selections (atom-text-editor.editor.mini)
-0:18.9 core:cancel (atom-text-editor.editor.mini)
-0:00.0 grammar-selector:show (atom-text-editor.editor)
Config
{
  "core": {
    "themes": [
      "atom-light-ui",
      "atom-light-syntax"
    ],
    "projectHome": "/Users/pbondt/Downloads"
  },
  "editor": {
    "fontSize": 13,
    "softWrap": true,
    "scrollPastEnd": true,
    "invisibles": {}
  }
}
Installed Packages
# User
atom-html-preview, v0.1.3
color-picker, v1.2.6
command-toolbar, v1.0.1
file-icons, v1.4.5
file-type-icons, v0.5.3
language-freebasic, v0.0.10
language-lua, v0.9.0
language-nagios, v0.2.0
linter, v0.10.0
linter-jshint, v0.1.0
linter-lua, v0.1.3
minimap, v3.5.5
open-recent, v2.0.0
# Dev
No dev packages
Please try updating to Atom v0.174.0; this should have been fixed in that version.
|
gharchive/issue
| 2015-01-16T09:04:39 |
2025-04-01T04:56:04.707686
|
{
"authors": [
"50Wliu",
"Pettrie-ilionx"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/5100",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
59642492
|
Large Files Cause Hangups
Hi,
Been using atom for a while now, love it and totally swapped full time from sublime, great work guys!
I have been having a few issues with large files.... I sometimes have to edit large files, a PHP file that has approx 5395 lines and another JS file with approx 9672 lines.... ;)
I currently have a custom theme in use (my own razor-atom-syntax-theme) and have a lot of other packages installed... linters, save-session, file-icons, highlight-selected, lesscompile, color-picker, minimap, open-recent.....
I seem to be getting really sluggish performance on these files, with the UI refresh update taking a good second or two on every key press.... The laptop is more than capable, new, plenty of ram, SSD so it is defo not the laptop as sublime is fine.
I just wondered what large file size support is like in Atom? What sizes should we be good for?
Turning the linters off most certainly helps, but there is still a slight lag. Any thoughts?
thanks
Paul
@smiffy6969 Thanks for the feedback. There's already an issue which discusses large files and such slowdown, so I'm going to close this in favor of that issue (best to have all feedback in one place so that it's not lost or forgotten). If you'd like to leave more feedback or ask more questions -- please feel free to do so there: https://github.com/atom/atom/issues/307
Also, I'm not sure if there's a clear answer for your question "What sizes should we be good for?". It kind of depends on what you consider sluggish and which packages you use. Personally, I don't even observe some slowness which other users looking at the same screen do, and as you noticed -- installing lots of packages will slow things down. Improving large file support is on the roadmap, and something we'd love to do.
|
gharchive/issue
| 2015-03-03T13:40:02 |
2025-04-01T04:56:04.711676
|
{
"authors": [
"izuzak",
"smiffy6969"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/5816",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
61324318
|
Difficulty building atom on Debian-like system
Ran script/build using latest version of atom. Build failed with output:
Node: v0.10.35
npm: v2.1.17
Installing build modules...
Installing apm...
Installing modules...
scrollbar-style@2.0.0 install /media/lin/atom/node_modules/scrollbar-style
node-gyp rebuild
keyboard-layout@0.10.0 install /media/lin/atom/node_modules/atom-keymap/node_modules/keyboard-layout
node-gyp rebuild
nslog@2.0.0 install /media/lin/atom/node_modules/nslog
node-gyp rebuild
runas@2.0.0 install /media/lin/atom/node_modules/runas
node-gyp rebuild
pathwatcher@3.3.3 install /media/lin/atom/node_modules/pathwatcher
node-gyp rebuild
oniguruma@4.0.0 install /media/lin/atom/node_modules/oniguruma
node-gyp rebuild
git-utils@3.0.0 install /media/lin/atom/node_modules/git-utils
node-gyp rebuild
npm WARN cannot run in wd atom@0.188.0 node -e 'process.exit(0)' (wd=/media/lin/atom)
npm WARN engine specificity@0.1.3: wanted: {"node":"~0.8.x"} (current: {"node":"0.10.35","npm":"2.5.1"})
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/scrollbar-style/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/scrollbar-style
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/atom-keymap/node_modules/keyboard-layout/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/atom-keymap/node_modules/keyboard-layout
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/nslog/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/nslog
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/runas/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/runas
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/pathwatcher/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/pathwatcher
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/oniguruma/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/oniguruma
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/media/lin/atom/node_modules/git-utils/.node-gyp"
gyp WARN install got an error, rolling back install
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: 404 status code downloading tarball
gyp ERR! stack at Request. (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/lib/install.js:246:14)
gyp ERR! stack at Request.emit (events.js:117:20)
gyp ERR! stack at Request.onRequestResponse (/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/request/request.js:1176:10)
gyp ERR! stack at ClientRequest.emit (events.js:95:17)
gyp ERR! stack at HTTPParser.parserOnIncomingClient (http.js:1693:21)
gyp ERR! stack at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:122:23)
gyp ERR! stack at Socket.socketOnData (http.js:1588:20)
gyp ERR! stack at TCP.onread (net.js:528:27)
gyp ERR! System Linux 3.9-1-mepis64
gyp ERR! command "node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /media/lin/atom/node_modules/git-utils
gyp ERR! node -v v0.10.35
gyp ERR! node-gyp -v v1.0.2
gyp ERR! not ok
npm ERR! Linux 3.9-1-mepis64
npm ERR! argv "/media/lin/atom/apm/node_modules/atom-package-manager/bin/node" "/media/lin/atom/apm/node_modules/atom-package-manager/node_modules/npm/bin/npm-cli.js" "--globalconfig" "/root/.atom/.apm/.apmrc" "--userconfig" "/root/.atom/.apmrc" "install" "--target=0.21.0" "--arch=x64"
npm ERR! node v0.10.35
npm ERR! npm v2.5.1
npm ERR! code ELIFECYCLE
npm ERR! scrollbar-style@2.0.0 install: node-gyp rebuild
npm ERR! Exit status 1
npm ERR!
This looks like it was a permissions issue with the build folders: EACCES user "root" does not have permission to access the dev dir "/root/.atom/.node-gyp/.node-gyp/0.21.0"
You can clean all build folders and reset the permissions using script/clean. You can also customize the build directory using the --build-dir option to script/build to build in a custom location.
Closing this out.
|
gharchive/issue
| 2015-03-14T00:52:45 |
2025-04-01T04:56:04.744437
|
{
"authors": [
"drbsci",
"kevinsawicki"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/issues/5966",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
134400829
|
Add Issue template and extra version info
/cc @atom/feedback for more :eyes:
Also, the whole file seems to be indented using 4 spaces and not 2...not sure if that was intentional or not.
I indent Markdown using 4 spaces habitually because some Markdown parsers require that. (It's one of the places where original Markdown isn't clear.) I can change it though.
:ok_hand:
:surfer: Will be super interesting to see how people interact with this!
Yes, it will! I'm all settled in for a long day :laughing:
|
gharchive/pull-request
| 2016-02-17T21:12:10 |
2025-04-01T04:56:04.747170
|
{
"authors": [
"50Wliu",
"benogle",
"lee-dohm",
"nathansobo"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/pull/10870",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
384768132
|
Support contentRegex for TextMate grammar
Requirements for Adding, Changing, or Removing a Feature
Fill out the template below. Any pull request that does not include enough information to be reviewed in a timely manner may be closed at the maintainers' discretion.
The pull request must contribute a change that has been endorsed by the maintainer team. See details in the template below.
The pull request must update the test suite to exercise the updated functionality. For guidance, please see https://flight-manual.atom.io/hacking-atom/sections/writing-specs/.
After you create the pull request, all status checks must pass before a maintainer reviews your contribution. For more details, please see https://github.com/atom/atom/tree/master/CONTRIBUTING.md#pull-requests.
Issue or RFC Endorsed by Atom's Maintainers
https://github.com/atom/first-mate/pull/109
Description of the Change
NOTE: Requires update on first-mate dependency to make the contentRegex property be picked up in the first place.
Adds support for the contentRegex property existing on TextMate grammars. Treats it as an Oniguruma regex, so that the TextMate grammar is consistent with itself.
Alternate Designs
Could make the contentRegex a regular JS regex, but then it would be the only regex like that for a TextMate grammar.
Possible Drawbacks
Currently it looks at the whole file, which is potentially dangerous for performance. The use case in the linked issue would be fine with 20 lines or less. Declaring how many lines it wants would require more design choices though.
Verification Process
Manually
Release Notes
Add support for contentRegex on TextMate grammars
I added some basic test coverage for the added logic. Waiting on a green build and then I will merge. Thanks for contributing this!
Argh. Got bit by a flaky build failure that I'm actively fighting on another thread. Rebuilding.
Re-triggered the build manually and it passed. Not sure why that is not indicated on the PR.
Thanks again for your contribution!
|
gharchive/pull-request
| 2018-11-27T12:17:24 |
2025-04-01T04:56:04.754206
|
{
"authors": [
"Aerijo",
"nathansobo"
],
"repo": "atom/atom",
"url": "https://github.com/atom/atom/pull/18499",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
119814383
|
Update needed
(electron) loadUrl is deprecated. Use loadURL instead
Fixed in #19 :sparkles:
|
gharchive/issue
| 2015-12-01T21:42:01 |
2025-04-01T04:56:04.759494
|
{
"authors": [
"jlord",
"x8core"
],
"repo": "atom/electron-quick-start",
"url": "https://github.com/atom/electron-quick-start/issues/12",
"license": "cc0-1.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
78387055
|
Documentation implies Squirrel checks for updates regularly, but it doesn't
The Electron documentation mentions that "Squirrel will check for an update again at the interval you specify." This doesn't seem to be true—the auto-updater seems to check for updates at launch, and again if manually invoked, but the Electron source code never calls Squirrel's startAutomaticChecksWithInterval to enable periodic checking.
I think it'd make sense to expose an API on the auto-updater module that calls through to startAutomaticChecksWithInterval, or update the docs. I'd be happy to submit a pull request if this is already on the wish list!
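For illustration, roughly what that proposal could look like from an application's point of view (a sketch only, in the current require('electron') style: setFeedURL and checkForUpdates exist on Electron's autoUpdater today, while startAutomaticChecks is the hypothetical addition being suggested):
    const { autoUpdater } = require('electron')

    // Existing API: configure the feed and perform a one-shot check at launch.
    autoUpdater.setFeedURL('https://updates.example.com/myapp')  // example URL
    autoUpdater.checkForUpdates()

    // Hypothetical addition proposed above: forward to Squirrel's
    // startAutomaticChecksWithInterval so checks repeat periodically.
    // autoUpdater.startAutomaticChecks(60 * 60)  // e.g. every hour (does not exist yet)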
The misleading description has already been removed.
|
gharchive/issue
| 2015-05-20T05:59:25 |
2025-04-01T04:56:04.761749
|
{
"authors": [
"bengotow",
"zcbenz"
],
"repo": "atom/electron",
"url": "https://github.com/atom/electron/issues/1735",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
111945590
|
[webContents.savePage] is not perfect; the downloaded file does not match Chrome's 'Ctrl+S'
The method appears to be very imperfect: data is missing, and the result is not normal.
As a comparison test, the url is 'www.baidu.com'
The result of [webContents.savePage] in Electron is:
Chrome's is:
The web page cannot be previewed offline
Could you load http://baidu.com in Electron and open the Sources panel in DevTools to recheck all the external resources? From the images, it seems that the url is loaded with a Baidu account in Chrome while it isn't in Electron.
The reason why web pages aren't previewed correctly is that the external resources are saved to the wrong directory, which is a bug in Electron.
Additionally I noticed that it will not save the content of an embedded webview... :(
I believe this has been fixed by #3128.
|
gharchive/issue
| 2015-10-17T05:37:50 |
2025-04-01T04:56:04.765430
|
{
"authors": [
"appelgriebsch",
"hokein",
"zcbenz",
"zg2013"
],
"repo": "atom/electron",
"url": "https://github.com/atom/electron/issues/3121",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124071734
|
HTML5 Canvas - Rendering Problems (Chart.js)
The current Chromium version (47.0.2526.73) causes problems while rendering charts within the HTML5 canvas element.
Does anyone have a workaround for this problem?
Please see this post for better illustration:
https://code.google.com/p/chromium/issues/detail?id=567308&q=status%3Aunconfirmed&sort=-id&colspec=ID Stars Area Feature Status Summary Modified OS
No, we don't have a workaround for this; the only thing we can do is wait for Chrome to fix the bug.
|
gharchive/issue
| 2015-12-28T14:02:31 |
2025-04-01T04:56:04.767342
|
{
"authors": [
"Bratkartoffl",
"zcbenz"
],
"repo": "atom/electron",
"url": "https://github.com/atom/electron/issues/3935",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
126305926
|
Add and update Japanese translated docs.
this change includes
add translation
fix typos
improve grammar
Thanks!
|
gharchive/pull-request
| 2016-01-12T23:49:44 |
2025-04-01T04:56:04.768590
|
{
"authors": [
"j5a",
"zcbenz"
],
"repo": "atom/electron",
"url": "https://github.com/atom/electron/pull/4075",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
191463542
|
CRLF showing as changes when git commits them in LF
I have a file with CRLFs.
I use the git setting that automatically takes care of it, and commits files with LF instead of CRLF.
When I type git diff on that file, I don't see any changes.
Whereas in the Atom Minimap, all the lines are orange, except for the last one.
This makes it impossible to properly use this package since all lines have changes.
Maybe this package should not consider CRLF vs. LF as a change.
What do you think?
I already submitted this issue here, and they directed me here.
I also have this issue (albeit inverted). My files are LF and committed as CRLF.
All lines are marked as modified while git says no files are modified :(
I'll add a "me too" here.
|
gharchive/issue
| 2016-11-24T08:53:31 |
2025-04-01T04:56:04.776223
|
{
"authors": [
"matthieuheitz",
"oskbor",
"the-j0k3r"
],
"repo": "atom/git-diff",
"url": "https://github.com/atom/git-diff/issues/116",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
364953535
|
memo_core filesystem sync status?
First thanks for Atom, Xray, Memo, and whatever's next.
At the moment I'm particularly interested in memo_core and how it plans to sync with the filesystem. The "Update" posts over the summer described lots of work related to filesystem sync. But looking at the memo README it seems like what is in memo_core right now is just the "light client" as described here:
Library: Memo provides a reference library implementation written in Rust that produces and consumes the Memo protocol messages to synchronize working trees. We plan to ship a "light client" version of the library that compiles to WebAssembly and exposes a virtual file system API
Is the next step code to sync this model with the filesystem? Any idea on how long until such code appears? Also will it depend on the underlying filesystem being a git repository? This seems to suggest that it might:
as well as a full version based on Libgit2 that synchronizes with a full replica on the local file system.
But I think it would be really useful to be able to sync memo with any local filesystem (git or not). In the case where git isn't present it wouldn't make sense to sync memo state with multiple clients, because there is no shared git commit... but it would still be useful to have a single client reading memo state... that is synced with filesystem changes made outside of the memo API.
Hi, thanks for your interest. I'm on my phone at a conference, so I can't grab you the commit right now, but we did have an implementation of file system sync working in a previous iteration. In timeline.rs you can find it randomly tested against a simulated OS. When we decided to ship the light client and build more explicitly on top of Git, we dropped that work temporarily. We should be set up to handle it though, and have confidence it will work based on that previous attempt. I can follow up with a more specific link later if needed.
Ah, thanks. I think I've found the latest version of timeline.rs.
We definitely intend to implement syncing once we deal with commits. I'm due for an update and will explain our present thinking. But in short, we shouldn't require Git; all paths will then need to be represented via operations.
|
gharchive/issue
| 2018-09-28T16:11:35 |
2025-04-01T04:56:04.793233
|
{
"authors": [
"jessegrosjean",
"nathansobo"
],
"repo": "atom/xray",
"url": "https://github.com/atom/xray/issues/138",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
302903412
|
Help wanted: Optimize fragment identifiers
This is an optimization that's in my head, but I haven't performed the measurements necessary to prove that it's really worth the complexity. If you want to work on this, helping with those measurements would be an important first step.
Background
Xray's buffer is a CRDT that stores every piece of text ever inserted in a giant copy-on-write B-tree. Insertions are immutable, and each insertion is initially added to the tree in its whole form. Insertions are subsequently split into fragments whenever new text is inserted within an existing piece of inserted text. As a result, the tree ends up containing a sequence of interleaved fragments, each originating from a different original insertion.
Retaining all of these fragments and their connection back to the original insertion allows us to describe positions in the buffer logically, in terms of an (insertion_id, offset) pair that uniquely describes a position inside the immutable insertions.
To determine where such a logical position lands in the document, we need to determine which fragment of the position's specified insertion contains the position's specified offset. To enable this, each insertion is associated with an auxiliary tree via the Buffer::insertions map. These trees contain only fragments from the original insertion, and map offset ranges to FragmentIds. Once you find a fragment id for a particular (insertion_id, offset) pair, you can use it to perform a lookup in the main fragments tree which contains the full text of the document.
See the presentation I gave at InfoQ 2017 for a clearer explanation.
Dense ids
In the teletype-crdt JS library, we map from an (insertion_id, offset) pair to a fragment in the global document by making our fragments members of two splay trees simultaneously. So you can look up the tree containing all the fragments of a single insertion and seek to the fragment containing a particular offset, then use different tree pointers on that same fragment belonging to the main document tree to resolve its location. It's kind of weird, and the talk may clarify this somewhat, but it may not be mandatory to understand. In Xray, we're storing fragments in a persistent B-tree rather than a splay tree, which means we really can't associate fragments with parent pointers. This means we'll need to find a new strategy for understanding where a particular insertion fragment lives inside the main tree.
In Xray, we instead rely on representing fragment ids as dense, ordered identifiers. "Dense" essentially means that between any two of these identifiers, there should always be room to allocate another identifier. You could think of infinite precision floating point numbers as satisfying this property. Between 1 and 2 there is 1.5, and between 1.5 and 2 there is 1.75, and so on. The inspiration for this approach came from a different text-based CRDT called Logoot.
Currently, we represent a FragmentId as a vector of 16 bit integers. To compare ids, we simply compare each element of the two vectors lexicographically. When we want to create a new id that is ordered in between two existing ids, we search for the first index corresponding to elements in either vector that have a difference of at least one. Once we find one, we allocate a new integer randomly between the two ids.
The LSEQ strategy, discussed in a followup paper, seems like it could be a good source of inspiration for optimizing these ids so as to minimize their length. The longer the ids, the more memory they consume and the longer they take to compare. The linked paper has a really good discussion of the performance profile of allocating these ids based on different insertion patterns in a sequence.
One key idea in the paper is to allocate ids closer to the left or right based on a random strategy at each index of the vector. Another idea is to increase the base of each number in the id, so that the key space expands exponentially as an id gets deeper. It's all in the paper.
The next valuable optimization would build on the LSEQ work and avoid allocation whenever ids are below a certain length. We can do this with unsafe code that implements a tagged pointer. Basically, if the id is short enough, we should be able to stuff it into the top 63 bits of a pointer. If it exceeds that quantity, we can interpret those bits as a pointer to a heap-allocated vector of u16s. The comparison code can then be polymorphic over the two representations, and hopefully most of the time we wouldn't need the allocation.
It will be an experiment. Tying it to some sort of benchmarks related to editing performance will be important to ensuring this idea warrants the complexity.
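For concreteness, here is a small sketch of the allocation scheme described above (illustrative JavaScript, not Xray's Rust implementation): ids are arrays of 16-bit integers compared lexicographically, and a new id is allocated strictly between two existing ones.
    const MAX = 1 << 16  // one above the largest 16-bit value

    function idBetween (left, right) {
      const id = []
      for (let i = 0; ; i++) {
        const lo = i < left.length ? left[i] : 0
        const hi = i < right.length ? right[i] : MAX
        if (hi - lo > 1) {
          // Room at this index: pick a random integer strictly between lo and hi.
          id.push(lo + 1 + Math.floor(Math.random() * (hi - lo - 1)))
          return id
        }
        // No room yet: copy the left element and keep descending,
        // which makes the id longer (the "growing id" cost LSEQ tries to bound).
        id.push(lo)
      }
    }

    // Example: an id ordered between [1] and [2], e.g. [1, 32768].
    console.log(idBetween([1], [2]))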
@chaintip
0.00029641 BCH| ~ 0.28 USD has been tipped to this issue by @DesWurstes.
To claim this bounty, get a pull request merged with @chaintip fixes #24 in the creation comment.
@chaintip
0.00714285 BCH| ~ 4.55 USD has now been tipped to this issue.
To claim this bounty, get a pull request merged with @chaintip fixes #24 in the creation comment.
0 BCH| ~ 0.00 USD has now been tipped to this issue.
To claim this bounty, get a pull request merged with @chaintip fixes #24 in the creation comment.
|
gharchive/issue
| 2018-03-06T23:09:23 |
2025-04-01T04:56:04.804755
|
{
"authors": [
"DesWurstes",
"chaintip",
"derianalanrojas",
"nathansobo"
],
"repo": "atom/xray",
"url": "https://github.com/atom/xray/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1375586968
|
#209 better drive switcher
PR Checklist:
[ ] Link to related issue
[ ] Add changelog entry linking to issue
[ ] Added tests (if needed)
[ ] (If new feature) added in description / readme
please rebase first on main, your PR contains many of the same changes
ok, good stuff! Switcher is definitely an improvement. But it's not done:
[ ] WebSocket shows disconnected on full page reload (add a test for this)
[ ] Some default drives (such as localhost) look a bit weird if they don't work. Perhaps we should hide these defaults if they are offline, or only show them if the user is in fact visiting from localhost (add a test for this)
[ ] The order of the list changes, the current one is always at the top. I think the order needs to be stable.
[ ] Maybe add the new button to the dropdown list?
[ ] If the user is not signed in, they should be told that they should, because then they can save their drives!
[ ] We need some sort of explanation on what a Drive is. This is also missing in the docs.
[ ] The button consistency is off. The drive switcher button is round with a border, the SideBarItem buttons are rounded and filled with an overlay.
[ ] The alignment of the rows is off. Top item has too much padding, text is not centrally aligned.
Localhost is only visible in development; localhost:5137 is just the origin, so it would not look weird in production. The only case where a drive would look weird (e.g., a red url) would be when a user manually adds a broken drive or other resource to their agent's drives property.
The order of the list changes because it is a history of drives with a max of 5 drives. Maybe this can be better indicated by using a clock icon instead of the checkbox.
I don't think making a new drive will be that common of a use-case to have a dedicated option in the dropdown.
4 & 5) Good point
Damnit forgot to fix it again 😅
Oops
|
gharchive/pull-request
| 2022-09-16T08:09:56 |
2025-04-01T04:56:04.813848
|
{
"authors": [
"Polleps",
"joepio"
],
"repo": "atomicdata-dev/atomic-data-browser",
"url": "https://github.com/atomicdata-dev/atomic-data-browser/pull/214",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2153679348
|
How to use the epos reader properly?
I am trying to use the epos reader.
when I create an epos object using the reader class, I correctly get the number of events
>>> epos.number_of_events
6596033
However, when I am trying to get values in the file, I get this error for all the available methods.
>>> m = epos.get_mass_to_charge_state_ratio()
>>> m.values
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'NxField' object has no attribute 'values'
Am I doing something wrong? Any help would be greatly appreciated.
Dear @skatnagallu:
Which version of the ifes_apt_tc_data_modeling library are you using? I suppose you are using one.
I have done heavy refactoring of this library last month and just today merged and pushed that new version on pypi.
Today is my first day after the parental leave vacation.
Using >=v0.2, reading should work like this; see an example of how it is done in paraprobe here:
https://gitlab.com/paraprobe/paraprobe-toolbox/-/blob/main/code/transcoder/src/python/paraprobe_transcoder.py?ref_type=heads line 134 and following.
If the issue persists, let's meet for a short zoom to get it working for you, and if necessary I can then fix it immediately.
As you will likely be using the multiplicities and so on, it would be good if you also had a critical look at this function
https://github.com/atomprobe-tc/ifes_apt_tc_data_modeling/blob/main/ifes_apt_tc_data_modeling/epos/epos_reader.py
and confirm or suggest changes where the description is incorrect.
Thank you very much, Markus
Indeed I was using the older version. I will use the new version and keep you posted.
|
gharchive/issue
| 2024-02-26T09:27:39 |
2025-04-01T04:56:04.820471
|
{
"authors": [
"mkuehbach",
"skatnagallu"
],
"repo": "atomprobe-tc/ifes_apt_tc_data_modeling",
"url": "https://github.com/atomprobe-tc/ifes_apt_tc_data_modeling/issues/4",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1097977618
|
Request and response going through well-known paths
Lead: @kalluriramkumar (please update if labeled incorrectly)
Is your feature request related to a problem? Please describe.
Right now the marshalling and unmarshalling of requests and responses do not happen through well-defined code paths.
Describe the solution you'd like
Define abstractions and provide the implementations.
Describe alternatives you've considered
None
Additional context
This task came as a result of the maintainability and testability of the SDK code
Subtasks
[x] Propose new abstractions for request and response parsers and design review
[x] Implement response parsing abstractions
[x] Implement request parsing abstractions
[x] Write unit tests
[x] Plug in the transformers in the SDK
Sub tasks
[ ] Propose new abstractions for request and response parsers and design review
[ ] Implement request parsing abstractions
[ ] Implement response parsing abstractions
Completed coding for Response Transformers and for getAtkeys method on at_client_impl
@kalluriramkumar can you update the subtasks section in the main issue description above please?
Added request/response transformers so that all requests and responses go through a single path. Integrated the code with the get and put methods. The get method changes are published to pub.dev in at_client 3.0.9; the put method changes are completed and yet to be published.
|
gharchive/issue
| 2022-01-10T14:38:22 |
2025-04-01T04:56:04.832447
|
{
"authors": [
"VJag",
"gkc",
"kalluriramkumar"
],
"repo": "atsign-foundation/at_client_sdk",
"url": "https://github.com/atsign-foundation/at_client_sdk/issues/372",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
176471934
|
flickr/slurp: Treat fail as an error
Reject the promise when we get a fail status.
This makes it clear why the cron job is failing... "Invalid auth token"!
LGTM
@aboodman as suspected
|
gharchive/pull-request
| 2016-09-12T20:10:50 |
2025-04-01T04:56:04.841491
|
{
"authors": [
"arv",
"kalman"
],
"repo": "attic-labs/noms",
"url": "https://github.com/attic-labs/noms/pull/2544",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
193309005
|
Sysex
handles sysex receiving
Thanks!
|
gharchive/pull-request
| 2016-12-03T19:35:50 |
2025-04-01T04:56:04.886532
|
{
"authors": [
"aure",
"eljeff"
],
"repo": "audiokit/AudioKit",
"url": "https://github.com/audiokit/AudioKit/pull/641",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
102571656
|
Whiteboard Enhancement
Resolved issue #12: Added Undo/Redo
Resolved issue #15: event.originalEvent.touches fails on event.type = touchend
Resolved issue #16: Whiteboard erase adds another color to image
Awesome, thanks @sumansaurabh !
|
gharchive/pull-request
| 2015-08-22T22:38:48 |
2025-04-01T04:56:04.889210
|
{
"authors": [
"aullman",
"sumansaurabh"
],
"repo": "aullman/opentok-whiteboard",
"url": "https://github.com/aullman/opentok-whiteboard/pull/17",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2152614283
|
Save output files into date-named folders
Live streaming platform
All platforms
Feature description
Generate date-named folders so that output files can be saved into the folder for the corresponding date, e.g. D:\output\date\***.mp4. Especially when the live stream keeps cutting in and out, there end up being far too many files for the same date, and keeping them all in one folder gets messy.
Steps (optional)
...
Screenshots (optional)
No response
File organization is not a feature that a live-stream recorder should have. If you really do have the special need to store recordings in date-based folders, please modify the following code yourself and add the date to the template string
https://github.com/auqhjjqdo/LiveRecorder/blob/4a83c9485487abff4ac36cbfdce7abe74bbf03dd/live_recorder.py#L149
|
gharchive/issue
| 2024-02-25T05:52:54 |
2025-04-01T04:56:04.910676
|
{
"authors": [
"auqhjjqdo",
"ripperX309"
],
"repo": "auqhjjqdo/LiveRecorder",
"url": "https://github.com/auqhjjqdo/LiveRecorder/issues/93",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
108479142
|
App bundle including Aurelia modules
So the latest blog post demonstrates how to create two bundles, one for the app and one for Aurelia, along the lines of this:
var config = {
...
bundles: {
"dist/app-build": {
includes: [
'*',
...
],
...
},
"dist/aurelia": {
includes: [
'aurelia-bootstrapper',
'aurelia-fetch-client',
...
],
...
}
}
};
What I'm running into is that when the app is bundled, the bundler tries to walk the Aurelia modules and fails with the following error:
Error on fetch for aurelia-framework at file:///Volumes/Data/Temp/aurelia-bundle-error/app/aurelia-framework.js
Error: ENOENT, open '/Volumes/Data/Temp/aurelia-bundle-error/app/aurelia-framework.js'
at Error (native)
Where the app.js looks like this:
var framework = require('aurelia-framework');
...
I found I could limit the scope of the app bundle by using the module syntax as described here:
var config = {
...
bundles: {
"dist/app-build": {
includes: [
'[*]',
...
],
...
},
...
};
Anyways, I'm wondering if I'm doing something wrong, if something has changed, or if the blog post is not right. I pushed a repo that demonstrates this here.
Thx for all the help BTW. Aurelia rocks!
Something that was confusing to me (And this is not necessarily an Aurelia thing) are the paths in config.js and the bundle includes. Originally I was thinking the paths in the config.js were globs but they are not. Then I thought the bundler includes were globs, and they kinda are but you can't include extensions (Which evidently is a new thing). And then what's the relationship between the config.js path and the bundler includes? It's starting to become clearer but it's hard to track down info on this stuff and on top of it things are changing so fast. So what you think you understand today no longer applies to tomorrow (Literally today and tomorrow... O_o).
Anyways, point is that although Aurelia has been really straight forward, the underlying bits (Like SystemJS/jspm) have not been. I think it would really help uptake if the upcoming Aurelia documentation covered some of these details in addition to the Aurelia specific stuff.
@mikeobrien The Aurelia bundler does not support the [..] pattern yet. But we do support excludes: [ ] just like includes.
So here is one way to do what you are trying to do:
var aureliaLibs = [
'aurelia-bootstrapper',
'aurelia-fetch-client',
...
];
var config = {
bundles: {
"dist/app-build": {
includes: [
'*',
],
excludes: aureliaLibs
},
"dist/aurelia": {
includes: aureliaLibs,
...
}
}
};
Hmmm, I'm using the module syntax with the latest bits and it's working. AFAICT the bundle config is passed directly to the SystemJS bundler so Aurelia is not involved at that point. I could be missing something tho.
Yes, some of the expressions will work. For example:
var config = {
bundles: {
"dist/app-build": {
includes: [
'app/**/*',
],
excludes: [ '[app/**/*]' ]
}
}
};
This will create an app bundle excluding any external libraries.
What I'm seeing with the latest bits is that the following is creating an app bundle excluding any external libraries:
var config = {
bundles: {
"dist/app-build": {
includes: [
'[app/**/*]',
]
}
}
};
No need for the exclude. In fact your config above will only include the external libraries, the opposite of what I'm trying to accomplish. This is consistent with what I'm seeing in the SystemJS docs.
Anyways, here's the point:
If I'm doing something wrong and you don't need to use the module syntax (As demonstrated in the blog post), I'd love to know what I'm doing wrong so I can fix it.
If you do need to use the module syntax above, the blog post needs to be updated otherwise people will get the error I mentioned.
If things have changed since the blog post and you now need to use the module syntax, again the blog post needs to be changed.
Bottom line is I'm trying to figure out if the blog post is wrong or if I'm doing something wrong, and one of the two needs to be fixed.
You are right.
This will create an app bundle excluding any external libraries.
That's a wrong conclusion on my part. Apologies. I was probably not in my right mind.
var config = {
bundles: {
"dist/app-build": {
includes: [
'app/**/*',
],
excludes: [ '[app/**/*]' ]
}
}
};
This configuration should technically produce an Aurelia bundle. app/**/* includes all the app modules including their dependencies. [app/**/*] includes just the app modules. So the above configuration should produce a bundle of the dependencies only (app/**/* - [app/**/*]).
But in the case of Aurelia it will not quite do that, because all the Aurelia dependencies cannot be detected by static analysis of import or require statements. In some cases Aurelia loads modules dynamically. As of now those modules must be explicitly included in the bundle configuration.
So to answer your original question, you have this bundle config:
var config = {
...
bundles: {
"dist/app-build": {
includes: [
'*',
...
],
...
},
"dist/aurelia": {
includes: [
'aurelia-bootstrapper',
'aurelia-fetch-client',
...
],
...
}
}
};
* here means everything in the app folder because you have a path config like:
paths: {
"*": "app/*",
Now, your app/app.js looks like:
var framework = require('aurelia-framework');
As you are requiring aurelia-framework, that means one of two things here as far as module resolution is concerned:
You have a map configuration in config.js that maps aurelia-framework to an installed module with a version, e.g. map : { "aurelia-framework" : "github:aurelia/framework@0.16.0" }, or
You have a module at app/aurelia-framework because you have the "*": "app/*" path configured.
As you don't have any file like app/aurelia-framework and the map configuration is not there either, the bundler throws an error.
What you probably want is to create a map entry in config.js for aurelia-framework.
It's not a bundler issue necessarily. Your app will not run with this configuration because SystemJS will fail to find aurelia-framework.
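For illustration, a map entry along these lines in config.js is the missing piece (the exact version strings below are placeholders for whatever jspm actually installed):
System.config({
  paths: {
    "*": "app/*"
  },
  map: {
    // without this alias, require('aurelia-framework') resolves to app/aurelia-framework.js
    "aurelia-framework": "github:aurelia/framework@0.16.0",
    "aurelia-bootstrapper": "github:aurelia/bootstrapper@0.17.0"
  }
});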
Aaaahhh, ok that makes sense now. So when I was referencing aurelia-framework my config.js looked like this with no alias mapped to aurelia-framework:
system.config({
...
map: {
"aurelia-bootstrapper": "github:aurelia/bootstrapper@0.17.0",
...
"github:aurelia/framework@0.16.0": { ...
}
});
I was thinking that since the Aurelia framework module was downloaded as a dependency of the bootstrapper, the aurelia-framework alias would be automatically mapped, but obviously that's not the case.
Also, makes sense now why you have to include the github:* modules in the bundle config. Didn't realize those were dynamically loaded.
Thanks for clarifying all that!
|
gharchive/issue
| 2015-09-26T16:35:32 |
2025-04-01T04:56:04.933772
|
{
"authors": [
"ahmedshuhel",
"mikeobrien"
],
"repo": "aurelia/bundler",
"url": "https://github.com/aurelia/bundler/issues/14",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
181894620
|
Scss require issue in view
I'm submitting a feature request
Aurelia Skeleton Version
skeleton-esnext-webpack 1.0.2
Framework Version:
aurelia-framework 1.0.6
Please tell us about your environment:
Operating System:
OSX 10.11.6
Node Version:
6.7.0
NPM Version:
3.10.8
JSPM OR Webpack AND Version
webpack 2.1.0-beta.22
Browser:
all
Language:
ESNext
Current behavior:
We have 2 versions of style import in view: one for SASS and one for SCSS (@easy-webpack/config-sass was added in webpack configuration)
<require from="./style1.sass"></require>
<require from="./style1.scss"></require>
webpack config:
require('@easy-webpack/config-css') ({ filename: 'styles.css', allChunks: true, sourceMap: false }),
require('@easy-webpack/config-sass') ({ filename: 'styles.css', allChunks: true, sourceMap: false }),
SASS import works successfully but for SCSS it doesn't work at all:
Unhandled rejection Error: Cannot find module './style1.scss.html'. at webpackContextResolve (http://localhost:9000/app.bundle.js:521:9) at webpackContext (http://localhost:9000/app.bundle.js:516:29) at http://localhost:9000/aurelia-bootstrap.bundle.js:15802:48
The reason is "*.scss.html" in file extension.
Expected/desired behavior:
Please advise why Aurelia adds the unnecessary "html" extension to the module path.
What do we need to do for valid SCSS import?
Thanks in advance.
Both imports (SASS and SCSS) don't work for now, right? What configuration is needed?
There is a workaround for this: you can switch to normal webpack instead of Easy Webpack and import the Sass inside JavaScript.
Details at #626
@bigopon You can import SASS inside JavaScript using Easy Webpack, no problem. Just use easy-webpack/config-sass, or add the loader yourself.
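As a minimal sketch of that (assuming easy-webpack/config-sass or an equivalent sass-loader rule is part of the webpack config, and a hypothetical view-model name), the stylesheet is simply imported from the JavaScript instead of required in the view:
// style1-consumer.js (hypothetical view-model)
import './style1.scss';
export class Style1Consumer {
  // component logic unchanged; webpack picks up and bundles the compiled CSS
}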
@niieani I realized my stupidity after hitting the send button and deleted that comment.
|
gharchive/issue
| 2016-10-09T16:02:54 |
2025-04-01T04:56:04.948544
|
{
"authors": [
"ApRoland",
"andlebed",
"bigopon",
"niieani"
],
"repo": "aurelia/skeleton-navigation",
"url": "https://github.com/aurelia/skeleton-navigation/issues/691",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
963014229
|
Update ExecutionResult to SubmitResult with the new binary format
Updates the schema to align with https://github.com/aurora-is-near/aurora-engine/pull/218
Includes a few sanity tests to make sure it works.
Related: https://github.com/aurora-is-near/aurora-relayer/pull/22
|
gharchive/pull-request
| 2021-08-06T20:06:29 |
2025-04-01T04:56:04.950436
|
{
"authors": [
"artob",
"birchmd"
],
"repo": "aurora-is-near/aurora.js",
"url": "https://github.com/aurora-is-near/aurora.js/pull/8",
"license": "CC0-1.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1818255059
|
Add CI jobs for Windows
The job builds a single-file version of the Au library, and runs a
simple test. This test is enough to reproduce the errors in #143:
https://github.com/aurora-opensource/au/actions/runs/5496101666/job/14882361539
It also catches another few compiler errors on other versions of MSVC,
which we fix here. The issue is that function parameters can't be used
in a constexpr context in general. These particular parameters are
monovalue types
(https://aurora-opensource.github.io/au/main/reference/detail/monovalue_types/),
so the compiler's being a little pedantic in enforcing this rule, but
this also means that it's easy to fix the error.
With this PR, we go from 0 to 1 in terms of CI jobs for "best effort"
platforms. This gives us a pattern we can use for future new platforms.
Helps #144.
Since these new jobs are the first in the "best effort" tier, technically they don't have to always be passing. However, I think we want to make, well, our best effort to keep them passing, so I'll make them blocking unless and until something forces us to let them fail.
|
gharchive/pull-request
| 2023-07-24T11:47:36 |
2025-04-01T04:56:04.953873
|
{
"authors": [
"chiphogg"
],
"repo": "aurora-opensource/au",
"url": "https://github.com/aurora-opensource/au/pull/151",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1245057690
|
Failed to build ublox_dgnss_node on Rolling/Humble
This package is failing to build on the buildfarm for Rolling and Humble, one example is: https://build.ros2.org/job/Hbin_uJ64__ublox_dgnss_node__ubuntu_jammy_amd64__binary/9/consoleFull .
The problem ends up being this:
00:01:22.492 CMake Error at /usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
00:01:22.492 Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
The containers that the ROS buildfarm uses only install things that are listed in the package.xml. In this case, the package.xml does not list a dependency on pkg-config, and so that executable is not available for CMake to find. If you add a <build_depend>pkg-config</build_depend> to your package.xml and do a new release, that should fix this problem.
It’s exactly the same as I used for galactic. Have no idea what is missing?
@clalancette its exactly the same as what I’m using to build it on galactic … where is this requirement listed?
@clalancette i don’t use pkg-config and it compiles/builds successfully on my local environment. This seems to be an build-farm configuration issue?
https://github.com/aussierobots/ublox_dgnss/blob/120c4c89db9e5bf993c9e091da5bce158b537d9e/ublox_dgnss_node/CMakeLists.txt#L28-L29
?
@clalancette that was to include libusb-1.0 only … it works everywhere else. Is the libusb-1.0 library still available on the buildfarm virtual machines for Humble on Ubuntu Jammy?
I would agree with @clalancette: this is more likely a problem with you not stating that dependency in your manifest than with the buildfarm config or whether Jammy introduced some sort of change.
Regardless of whether your package happened to build previously, you should list all your dependencies explicitly. Avoid transitively depending on something, as this can lead to problems when your immediate dependencies change their dependencies.
@gavanderhoorn it’s a cmake module https://cmake.org/cmake/help/latest/module/FindPkgConfig.html .. libusb-1.0 is the actual dependency.
So CMake tries to use Pkg-Config to find libusb, as you ask it to do here:
https://github.com/aussierobots/ublox_dgnss/blob/120c4c89db9e5bf993c9e091da5bce158b537d9e/ublox_dgnss_node/CMakeLists.txt#L28-L29
if Pkg-Config is not present on the system where CMake tries to do that, you'll get this error:
Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
which is the error you find in your buildfarm build log.
So even though your code depends on libusb, your build additionally depends on pkg-config.
If you don't state that dependency, your build will fail.
@gavanderhoorn @clalancette the change is made but I can't bloom-release it
==> Checking on GitHub for a fork to make the pull request from...
Could not find a fork of ros/rosdistro on the hortovanyi GitHub account.
Would you like to create one now?
Continue [Y/n]?
Aborting pull request: HTTP Error 401: Unauthorized (https://api.github.com/repos/ros/rosdistro/forks)
The release of your packages was successful, but the pull request failed.
Please manually open a pull request by editing the file here: 'https://raw.githubusercontent.com/ros/rosdistro/master/humble/distribution.yaml'
<== No pull request opened.
The project is building successfully now, thank you @gavanderhoorn @clalancette for your help
|
gharchive/issue
| 2022-05-23T12:03:10 |
2025-04-01T04:56:04.968345
|
{
"authors": [
"clalancette",
"gavanderhoorn",
"hortovanyi"
],
"repo": "aussierobots/ublox_dgnss",
"url": "https://github.com/aussierobots/ublox_dgnss/issues/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
236316762
|
config file not being read?
it appears the config file is not being read, even though I'm passing in a parameter for it at the command line.
$ flare_beacon -c configs/elasticsearch.ini
[SUCCESS] Connected to elasticsearch on localhost:9200
$ grep port configs/elasticsearch.ini
es_port=9201
shouldn't the first one connect on 9201?
I identified the bug. The connection was being established to 9201, but the message was not being displayed properly.
I have updated the code. Please pull down the new code and try again.
https://github.com/austin-taylor/flare/commit/68fa618085cda96f7339155cef68e8afea36b974
Confirmed fix on local instance. Thank you for submitting the bug!
|
gharchive/issue
| 2017-06-15T21:16:10 |
2025-04-01T04:56:04.971235
|
{
"authors": [
"austin-taylor",
"raleel"
],
"repo": "austin-taylor/flare",
"url": "https://github.com/austin-taylor/flare/issues/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1590218590
|
Create and plot temperature delta statistics
create data that contains general statistics of the temperature delta information
min, max, mean, median, std, var, etc.
plot this data as a function of time
start with yearly plots, then monthly, then daily
plot this as a function of ambient temperature once mirror cooling is off
Temperature delta data has been created and is stored in a pickled pandas DataFrame.
This can be found in py/desiforecast/data/temp_delta.pkl
Late update: I was able to produce the plots of the temperature delta statistics using the unfiltered telemetry data. It took a bit of tweaking because the unfiltered data has lots of null values.
The data can be found in doc/nb/time_plots/temp_statistics.
Plot time to thermalize as a function of start and end temperature delta
Update
Over the past couple of days I was able to get some plots of the response time as a function of the temperature delta variance. These plots have a colorbar which indicates the final ambient temperature, which is what it would be just before observations. An example of one such plot is shown below:
|
gharchive/issue
| 2023-02-18T05:37:53 |
2025-04-01T04:56:04.976657
|
{
"authors": [
"austinlake04"
],
"repo": "austinlake04/desiforecast",
"url": "https://github.com/austinlake04/desiforecast/issues/5",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
919676245
|
Add the LanguageIETF tag
Matroska v4 introduces a new language tag which contains an ASCII string.
I also wanted to say thanks for the crate! It has been very useful.
Thanks for the contribution! I'm glad it's been helpful.
|
gharchive/pull-request
| 2021-06-12T22:49:43 |
2025-04-01T04:56:04.978587
|
{
"authors": [
"austinleroy",
"robmikh"
],
"repo": "austinleroy/webm-iterable",
"url": "https://github.com/austinleroy/webm-iterable/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
839559170
|
Ability to invoke handleRedirectCallback() from the useAuth0 hook
Describe the problem you'd like to have solved
I am currently building a Capacitor app that generates a link using buildAuthoriseUrl, navigates to that link using a popover Capacitor Browser window, and then allows the user to sign in. An authorisation code is generated; however, I need to manually invoke handleRedirectCallback to authenticate the user.
It would be great if the handleRedirectCallback method could be accessed anywhere using the useAuth0 hook.
Thanks!
Describe the ideal solution
Allow the handleRedirectCallback method to be accessed through the useAuth0 hook (similar to how buildAuthoriseUrl can).
Alternatives and current work-arounds
None at this stage.
Add any other context or screenshots about the feature request here.
I have forked the repo to achieve the functionality I want (available here).
Would you be interested in merging this feature?
Hey @benandrew - loginWithPopup accepts a custom config.popup option, which looks compatible with the Capacitor Browser API - would this work for you?
Hey @adamjmcgrath, cheers for the response. Unfortunately, loginWithPopup doesn't appear to give the functionality I need. I currently have a listener set up that checks whether the Capacitor Browser navigates to the ./?code=XXX&state=XXX address (after successful authentication), and upon receiving that call, fires handleRedirectCallback() to finalise the authentication process.
I tried implementing it using loginWithPopup but unfortunately it seemed to stop after generating the ./?code=XXX&state=XXX address.
Appreciate the help!
Hey @benandrew - I was planning on having a go at using loginWithPopup with Capacitor myself, but I haven't gotten round to it yet. I should have some time next week
Hey @ben-hunter-andrew-robertson - we're going to do some work to expose the handleRedirectCallback for this use case. I'll update this thread when we have a PR ready
Fantastic, thank you very much @adamjmcgrath!
|
gharchive/issue
| 2021-03-24T10:10:57 |
2025-04-01T04:56:05.007405
|
{
"authors": [
"adamjmcgrath",
"benandrew",
"benhunterandrewrobertson"
],
"repo": "auth0/auth0-react",
"url": "https://github.com/auth0/auth0-react/issues/222",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
982953447
|
Application Sessions for Multiple organizations on client
Describe the problem
Hi,
Here is the situation I am dealing with--
Let's suppose that we have two organizations, org1 and org2. Both of these orgs share a common connection for logging in via Google, besides other connections. Let's also assume that user1 is already a part of both orgs.
Now when the user logs into org1, Auth0 takes the user successfully through the entire login flow and everything works smoothly. Once the user has successfully logged into org1, Auth0 saves a cookie called auth0.is.authenticated in the browser. Now whenever I visit any route, Auth0 in the background calls the /authorize endpoint with prompt=None.
When the user now tries to log into org2, the user is not taken through the login flow and is directly logged into the organization, and I also receive a token.
My question is: why is this behavior happening? How do I override this behavior? I would like the user to sign in every time they visit an organization for the first time (even if two orgs share a common connection and the user is already a member of both orgs). How do I maintain an application session on the client side for every organization the user is a part of?
Is it possible to dynamically change the organization parameter being passed into the Auth0Provider for different organizations?
I am new to using Auth0 and have tried a lot from my end, but can't seem to get the desired behavior of having the user log in to an org if they do not have a valid session.
I would highly appreciate some guidance on the above scenario. I have also tried to go through the library source code but cannot figure out why this is happening.
Thanks!
Environment
Please provide the following:
Version of auth0-react used: 1.6.0
Which browsers have you tested in? Google Chrome
Which framework are you using, if applicable (Angular, React, etc): React
Other modules/plugins/libraries that might be involved: auth0-spa-js(used internally by auth0-react)
When the user now tries to log into org2, the user is not taken through the login flow and is directly logged into the organization, and I also receive a token.
My question is: why is this behavior happening?
This is happening because you already have a session on auth0.com so the authorization server doesn't have to prompt you again to login. You can force the login prompt by passing prompt: 'login' on your login request (see https://auth0.github.io/auth0-react/interfaces/auth0_context.redirectloginoptions.html#prompt and https://auth0.com/docs/brand-and-customize/text-customization-new-universal-login/prompt-login)
Is it possible to dynamically change the organization parameter being passed into the Auth0Provider for different organizations?
You can't dynamically change this field on the fly, but you can dynamically initialise the Auth0Provider with an org id on page load - which should be ok for logging into different orgs with loginWithRedirect
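As a rough sketch of both suggestions together (option names as in auth0-react v1.x; reading the org id from a query parameter at page load is only an assumption for illustration):
import React from 'react';
import ReactDOM from 'react-dom';
import { Auth0Provider, useAuth0 } from '@auth0/auth0-react';
// Hypothetical: pick the organization on page load, e.g. from a query parameter.
const orgId = new URLSearchParams(window.location.search).get('organization') || undefined;
function LoginButton() {
  const { loginWithRedirect } = useAuth0();
  // prompt: 'login' forces the Universal Login prompt even when an auth0.com session exists.
  return (
    <button onClick={() => loginWithRedirect({ prompt: 'login', organization: orgId })}>
      Log in to this organization
    </button>
  );
}
ReactDOM.render(
  <Auth0Provider
    domain="YOUR_DOMAIN"
    clientId="YOUR_CLIENT_ID"
    redirectUri={window.location.origin}
    organization={orgId}
  >
    <LoginButton />
  </Auth0Provider>,
  document.getElementById('root')
);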
|
gharchive/issue
| 2021-08-30T16:04:49 |
2025-04-01T04:56:05.017711
|
{
"authors": [
"PuravManojShah",
"adamjmcgrath"
],
"repo": "auth0/auth0-react",
"url": "https://github.com/auth0/auth0-react/issues/267",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
353570221
|
AWS Load Balancer Auth
AWS recently added functionality to authenticate a user on the load balancer and provide authenticated and hydrated user details in a request header.
I wasn't able to decode the object that comes from the load balancer even though it decodes on jwt.io. The example AWS gives is in Python but it should be straightforward enough to decode the token.
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-authenticate-users.html
Has anyone attempted to decode the x-amzn-oidc-data header using jwt.decode?
Interesting!
Could you provide the code that you tried that did not work?
Off the top of my head it should be pretty easy, and would be something along the lines of:
// this should always work
console.log(jwt.decode(token));
const pubKey = fs.readFileSync('/path/to/publickey'); // replace this line with the HTTP request to get the key
jwt.verify(token, pubKey, (err, decoded) => {
console.log('err:', err);
console.log(decoded);
});
@MitMaro
This is what I was trying to do but I've even tried your example and it still didn't work. The only thing I added was the { algorithms: [ 'ES256' ]}.
const verifyJwt = async (token, db) => {
const encoded_jwt_header = token.split('.')[0];
const decoded_jwt = JSON.parse(base64.decode(encoded_jwt_header));
const request = await fetch(
`https://public-keys.auth.elb.us-west-2.amazonaws.com/${decoded_jwt.kid}`,
);
const cert = await request.text(); // -----BEGIN PUBLIC KEY----...
return new Promise((resolve, reject) => {
jwt.verify(
token,
cert,
{ algorithms: ['ES256'] },
async (err, decoded) => {
if (err) {
reject(err);
}
resolve(decoded);
},
);
});
};
@MitMaro
Same data added to jwt.io
@MitMaro figured it out and found this:
https://github.com/brianloveswords/node-jws/pull/84
That's a common mistake but Amazon should know better. Should be easy to do a simple string replace to work around the problem.
@jreeter ,
Amazon is using base64 to encode the tokens, which has a different character set than base64url, which is what the JWT specification requires.
Specifically, the + character is replaced with -, the / character is replaced with _ and the trailing =s are optional. You can do a basic string replace on the invalid tokens to make them valid base64url encoded tokens.
Untested Code, but the basic idea would be:
const correctToken = token.replace(/\+/g, '-').replace(/\//g, '_');
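A slightly fuller, still untested sketch of that workaround, reusing the key lookup shown earlier in the thread (whether jsonwebtoken's verify then accepts the token may also depend on padding handling, which is why the next comment falls back to jws directly):
const jwt = require('jsonwebtoken');
const fetch = require('node-fetch').default;
// Turn the base64-encoded ALB token into base64url so JWT tooling accepts it.
const toBase64Url = (token) => token.replace(/\+/g, '-').replace(/\//g, '_');
async function verifyAlbToken(rawToken, region) {
  const token = toBase64Url(rawToken);
  const { kid } = jwt.decode(token, { complete: true }).header;
  const response = await fetch(`https://public-keys.auth.elb.${region}.amazonaws.com/${kid}`);
  const publicKey = await response.text(); // -----BEGIN PUBLIC KEY-----...
  return new Promise((resolve, reject) => {
    jwt.verify(token, publicKey, { algorithms: ['ES256'] }, (err, decoded) =>
      err ? reject(err) : resolve(decoded)
    );
  });
}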
@daerion I found a way to make this work, but it's not pretty......
Basically, the verify method in this library won't work, but the signature can be verified using the underlying node-jwa library. Then you just have to check things like is the token still valid (I am only checking if token is not expired):
const base64Url = require("base64url");
const jwt = require("jsonwebtoken");
const jws = require("jws");
const fetch = require("node-fetch").default;
async function verifyToken(token) {
var base64UrlToken = base64Url.fromBase64(token);
const decoded = jwt.decode(base64UrlToken, { complete: true });
const { kid, signer } = decoded.header;
const region = signer.split(":")[3];
const uri = `https://public-keys.auth.elb.${region}.amazonaws.com/${kid}`;
console.log(`Fetching key at: ${uri}`);
const response = await fetch(uri);
const key = await response.text();
console.log(key);
try {
const verify = jws.verify(token, "ES256", key);
if (!verify) {
return null;
}
var clockTimestamp = Math.floor(Date.now() / 1000);
if (clockTimestamp >= decoded.header.exp) {
// Token expired.
return null;
}
} catch (err) {
console.error(err);
throw err;
}
return decoded.payload;
}
@morganabel Thanks for sharing your implementation :)
Just ran into the same issue...
|
gharchive/issue
| 2018-08-23T22:02:11 |
2025-04-01T04:56:05.043881
|
{
"authors": [
"MitMaro",
"daerion",
"delianides",
"morganabel",
"trallnag"
],
"repo": "auth0/node-jsonwebtoken",
"url": "https://github.com/auth0/node-jsonwebtoken/issues/514",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1032875145
|
Supporting Multi-tenant Use Cases
Summary
The purpose of this issue is to start a discussion around supporting multi-tenant use cases in spicedb. Spicedb today does not support a multi-tenant topology - meaning that there is no representation of independent tenants or any form of tenant isolation mechanisms.
Motivation
The motivation behind this work is to facilitate the administration of and deployment of a spicedb server that can serve more than one tenant at a time and respect/honor common asks from organizations in a multi-tenant architecture. These include but are not limited to:
Tenant isolation - it's common for organizations sharing infrastructure to request database instance, database within an instance, or schema level isolation.
Tenant authorization - facilitate operations over relations and permissions scoped to a single tenant and enforce/restrict actions against spicedb resources or APIs for those tenants. For example, restricting write/read access for relation tuples in a namespace within a specific tenant.
Multi-tenant development - support for one or more development environments to assist in the QA process of promoting software from a development environment to a production environment.
etc..
Use Cases
As an organization using spicedb within my platform, I want to be able to separate my customers'/tenants' permissions from one another to facilitate tenant isolation and to improve performance for a single tenant (e.g. reduce shared table contention by separating tenants via database-, schema-, or table-level constructs).
As a client I would like spicedb to enforce access controls on top of the spicedb API to ensure only certain parties can mutate the contents of a namespace or a namespace's configuration and to limit/restrict read access to relationships/permissions within a namespace within my tenant. This will help ensure that untrusted parties cannot erroneously or maliciously mutate the relationships pertaining to my namespace.
As a client integrating with spicedb on behalf of some tenant, I want to see relation tuple changes only for the namespaces within my tenant that I care about, not across the whole system.
As an administrator within an organization integrating with spicedb, I'd like to be able to manage all of the tenants in the spicedb system, including provisioning new tenants dynamically when new customers/tenants have signed up for my organization's platform.
I have some really good detailed design ideas around these things, but I wanted to start with an open discussion first. We can provide more concrete design details once we've settled on a strategy/plan for the degree of support we want to introduce.
Any update here? How much of this can be done with schema design and/or identifier prefixing (examples welcome)? Any where the "really good detailed design ideas" have been posted? Is this getting discussed somewhere else? Thanks!
In our experience, logical multi-tenancy as a solution for isolation is not that compelling and, often times, has significant performance impacts.
If you need fully isolated permission systems, it is highly recommended to run distinct clusters, which is easy to do via tools such as the SpiceDB Operator. This ensures that there is no chance of one system impacting the other, whether from security or performance perspectives.
If you need permission systems to share resources, this implies that you have, in fact, a single permission system, and it is recommended to use prefixes to provide logical identification of the resource types for each "portion" of the permission system. This also typically involves having each team submit their portion of the schema, and having tooling to combine the schema (usually via CI) to produce the final schema that will be used overall.
For this combined use case, we have work coming in the new year that will allow for fine-grained access, to ensure that API calls are restricted to the portion of the overall permission system that is necessary.
@josephschorr any progress on the "combined use case"?
Our use case is different from the above.
We have multiple products; each product designs its own permissions, but all products share the same account. Just as a Google account can be used across all Google products, these products share some account information. Some of the shared information is stored outside spicedb (such as username, password, avatar...), and some of it is stored within spicedb (such as organization structure, user groups, corporation membership...).
So if we deploy each product into a distinct spicedb, the shared data would need to be replicated across spicedb instances, and it is not easy to maintain data integrity that way.
So we want to adopt the "schema combination" method: each product designs its own schema and then all schemas are combined into a unified one. To avoid name collisions, a distinct prefix would be enforced for all object definitions. But the difficulty with this method is parsing the schema file syntax and correctly prefixing the objects. If authzed itself supported this kind of prefixing or schema combination, that would make it easy.
@josephschorr any progress on the "combined use case"?
Yes, the paid offering of SpiceDB has support for fine-grained authorization tokens: https://authzed.com/docs/spicedb-enterprise/fgam
But the difficulty with this method is parsing the schema file syntax and correctly prefixing the objects.
Presumably you'd have each team name it with a prefix matching the team. SpiceDB supports prefixes (with arbitrary nesting) on definitions, e.g. definition foo/bar/baz { ... }
|
gharchive/issue
| 2021-10-21T19:38:36 |
2025-04-01T04:56:05.079507
|
{
"authors": [
"enthal",
"jonwhitty",
"josephschorr",
"yyscamper"
],
"repo": "authzed/spicedb",
"url": "https://github.com/authzed/spicedb/issues/204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1581648840
|
🛑 Automat eshop § obchod.auto-mat.cz is down
In 8f26ea6, Automat eshop § obchod.auto-mat.cz (https://obchod.auto-mat.cz/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Automat eshop § obchod.auto-mat.cz is back up in 5df7b64.
|
gharchive/issue
| 2023-02-13T04:44:58 |
2025-04-01T04:56:05.087034
|
{
"authors": [
"timthelion"
],
"repo": "auto-mat/automat-statuspage",
"url": "https://github.com/auto-mat/automat-statuspage/issues/962",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2600975329
|
Bump version
Why are these changes needed?
Bump version to 0.3.2
Minor change in readme
Related issue number
Checks
[x] I've included any doc changes needed for https://autogenhub.github.io/autogen/. See https://autogenhub.github.io/autogen/docs/Contribute#documentation to build and test documentation locally.
[ ] I've added tests (if relevant) corresponding to the changes introduced in this PR.
[ ] I've made sure all auto checks have passed.
Shall we comment out the banner for the NuGet package and recover the # downloads/month banner for pyautogen? Also comment out the news for AutoGen.NET? They can be moved to a new separate repo for .NET.
Agreed on the nuget package and AutoGen.NET commenting out.
Are we going back to Aug as the date (instead of Oct)?
I am fine either way. The autogen-ai org was created in Aug, and in Oct we moved here.
|
gharchive/pull-request
| 2024-10-20T23:43:07 |
2025-04-01T04:56:05.092497
|
{
"authors": [
"marklysze",
"qingyun-wu"
],
"repo": "autogenhub/autogen",
"url": "https://github.com/autogenhub/autogen/pull/75",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1934389518
|
Exporting as data/URL failed in live website
Description of the bug
User was unable to import a project they exported to themselves.
This didn't work in testing and wasn't working until halfway through writing this issue. The tester used the same machine as the one I am using, on the same day, with the same browser. It is working now, but please keep a lookout for it. I don't know what happened.
To reproduce
Steps the user took:
Go to live website
Open/create any project, preferable add a state
Go to File -> Export -> Export as URL/Raw Data...
Copy either (link is easier)
Return to the projects page
Delete the shared project
Import the project
Expected behavior
I get the project that I exported.
Additional information
No response
Can confirm exporting from Firefox (123.0) to Edge (124.0.2478.51) works fine, and exporting from the same versions of Edge to Firefox also works fine. Is it a specific machine issue? If so, do you still have that machine?
I've installed a different OS on the machine so I wouldn't say I have the same machine anymore. I should've mentioned more clearly in the issue that it never happened again with the same system (Windows) with the same browser (whatever version of Firefox at the time).
There were no other reports of it happening for anyone else, but this was a thing that happened during user testing so I had to document it somehow in case it cropped up elsewhere.
Looking at this now it seems like this was the same issue as #489 where the root cause was the share URL/raw data not refreshing - this has been fixed in #490
|
gharchive/issue
| 2023-10-10T05:14:27 |
2025-04-01T04:56:05.097619
|
{
"authors": [
"anioncat",
"s3897720",
"tdib"
],
"repo": "automatarium/automatarium",
"url": "https://github.com/automatarium/automatarium/issues/453",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2158270939
|
Add tests
homepage performance
lead generation & management
login and signup
search functionality
shopping experience
checkout and purchase experience
cart sharing functionality
Checkout Test
// create a product with a price < $1 (declined payment)
import { addProductFromPage, checkCart, existingUserEmailLogin, fillForm, logOut } from "../utils"
const customer1 = {
full_name: "John Doe",
email: "johndoe@doejohn.com",
phone_number: "+50766887744"
}
const address1 = {
street_1: "coconut st, bldg 1, apt 2",
city: "Panama City",
state: "panama_ciudad",
zip: "00000",
country: "panama"
}
const customer2 = {
full_name: "Sally Smith",
email: "sally@smithsally.com",
phone_number: "+50766887744",
}
const address2 = {
address_1: "ocean st, bldg 2, apt 1",
address_2: "by the stairwell",
city: "Panama City",
state: "panama_ciudad",
zip: "00000",
country: "panama"
}
const validMasterCard = {
ccNumber: "5431111111111111",
ccExp: "10/25",
cvv: "999"
}
const validVisa = {
ccNumber: "4111111111111111",
ccExp: "10/25",
cvv: "999"
}
const invalidVisa = {
ccNumber: "4111111231111411",
ccExp: "10/25",
cvv: "999"
}
describe('A lead tests the functionality of the checkout process', () => {
before(() => {
Cypress.session.clearAllSavedSessions()
cy.clearAllCookies()
})
beforeEach(() => {
cy.session('lead', () => {
cy.visit('localhost:3000')
.wait(500)
cy.getCookie('leadId').then(leadId => {
cy.setCookie('leadId', leadId.value, {httpOnly: true})
})
}, {cacheAcrossSpecs: true})
})
it('Adds products to the cart', () => {
addProductFromPage('chair-gamer-prodigy-gr', 2)
addProductFromPage('frame-3stage-wh', 1)
})
/* it('tests lead signin during checkout', () => {
const customerInfo = 'informacion_de_contacto'
cy.log('Tests lead signin during checkout !!')
cy.visit('localhost:3000/checkout')
cy.get('#login').click()
cy.url().should('include', '/login')
cy.log('navigate to login page')
existingUserEmailLogin(customer1.email, 'checkout')
cy.url().should('include', '/checkout')
cy.get(`#${customerInfo}`).should('be.visible')
cy.get(`#${customerInfo}`).find(`input[name="full_name"]`)
.should('have.value', customer1.full_name)
cy.get(`#${customerInfo}`).find(`input[name="email"]`)
.should('have.value', customer1.email)
logOut()
}) */
it('tests the billing is same as shipping checkbox', () => {
const steps = [
'direccion_de_envio',
'direccion_de_facturacion',
'informacion_del_pago'
]
cy.visit('localhost:3000/checkout')
cy.get(`#${steps[0]}-btn`).click()
cy.get(`#${steps[0]}`).should('be.visible')
fillForm(steps[0], address1, ['state'])
cy.get(`#${steps[1]}`).should('be.visible')
cy.get(`#same_as_shipping`).click()
Object.keys(address1).map(k => {
cy.get(`#${steps[1]} input[name="${k}"]`).should('have.value', address1[k])
})
// the payment form should be visible after ticking the checkbox
//cy.get(`#${steps[2]}`).should('be.visible')
})
it('tests the application of a valid coupon', () => {})
})
describe('A new lead makes a successful order', () => {
before(() => {
Cypress.session.clearAllSavedSessions()
cy.clearAllCookies()
})
beforeEach(() => {
cy.session('lead', () => {
cy.visit('localhost:3000')
.wait(500)
cy.getCookie('leadId').then(leadId => {
cy.setCookie('leadId', leadId.value, {httpOnly: true})
})
}, {cacheAcrossSpecs: true})
})
it('Adds products to the cart', () => {
addProductFromPage('chair-gamer-prodigy-gr', 2)
addProductFromPage('frame-3stage-wh', 1)
})
it('Checks cart', () => {
cy.visit('localhost:3000/cart')
.wait(1000)
cy.get(`#lead_id`).should("be.visible").invoke('text')
.then(text => {
expect(text.trim().length).to.be.greaterThan(8);
});
const expectedStock = {
'chair-gamer-prodigy-gr': 2,
'frame-3stage-wh': 1
}
checkCart(Object.keys(expectedStock).length, expectedStock)
cy.get('#checkoutBtn').click()
cy.url().should('include', '/checkout')
})
it('Fills out checkout', () => {
const steps = [
'informacion_de_contacto',
'direccion_de_envio',
'direccion_de_facturacion',
'informacion_del_pago'
]
cy.visit('localhost:3000/checkout')
// validate subtotal, tax, delivery and total
cy.get(`#${steps[0]}`).should('be.visible')
steps.map((s, i) => cy.get(`#${s}`).should(i===0? 'be.visible': 'not.be.visible'))
fillForm(steps[0], customer1)
cy.get(`#${steps[1]}`).should('be.visible')
fillForm(steps[1], address1, ['state'])
cy.get(`#${steps[2]}`).should('be.visible')
fillForm(steps[2], address1, ['state'])
cy.get(`#${steps[3]}`).should('be.visible')
fillForm(steps[3], {...validMasterCard, name: customer1.full_name})
cy.get('#mastercard-logo').should('be.visible')
// validate subtotal, tax, delivery and total
cy.get('#checkout-btn').click()
//const success_msg = '¡Gracias por tu compra!'
//cy.contains(success_msg).should("be.visible")
cy.url().should('include', '/order_confirmation?order=')
cy.task('getLastEmail', customer1.email).then((email:{body:string, html:string})=> {
cy.log('EMAIL ', email)
const body = email.body.toString()
const orderId = body.split('identificador unico de tu orden es: ')[1]
expect(orderId).to.not.be.empty
expect(orderId).to.not.equal('undefined')
})
})
})
//ERRORS
// test form validations
describe('A lead tries to make a purchase without filling out all the required inputs', () => {})
//empty order
describe('A lead tries to submit an empty order', () => {})
// invalid coupon
describe('A lead tries to use an invalid coupon', () => {})
// payment declined
describe('A lead tries to pay with a credit card that is declined', () => {})
// invalid card
describe('A lead tries to pay with an invalid credit card', () => {})
// reduced stock
describe('A lead tries to submit an order but one of the products has less stock than the cartItem qty', () => {})
// invalid amount
describe('A lead tries to submit an order but the total does not match backend validation', () => {})
Test for Lead generation
import { clearLocalStorage } from '../utils'
before(() => {
cy.log('CLEARING COOKIES ...')
Cypress.session.clearAllSavedSessions()
cy.clearAllCookies()
})
beforeEach(() => {
//cy.clearCookies()
cy.session('lead', () => {
cy.visit('localhost:3000')
.wait(500)
cy.getCookie('leadId').then(leadId => {
cy.log('LEAD ID ****** ', leadId.value)
cy.setCookie('leadId', leadId.value, {httpOnly: true})
})
}, {cacheAcrossSpecs: true})
})
describe('Home Page Loads with Acceptable Performance', () => {
it('loads succesfully', () => {
cy.visit('localhost:3000')
})
//add stuff for performance testing
})
describe('New Person Enters the Site', () => {
it(`should:
1. create a new lead
2. set the local lead id
3. create a shopping cart w/ the correct lead id`,
() => {
cy.clearCookies()
cy.visit('localhost:3000')
.wait(1500)
cy.getCookie('leadId').then(leadId => {
expect(leadId).to.not.be.null
cy.request('http://localhost:3000/api/trpc/lead.getOne?input='+encodeURIComponent(`{"json":"${leadId}"}`))
.then(response => expect(response.body).to.not.be.empty)
})
/* cy.getLocalStorage('ergonomica_cart_id').then(cartId => {
expect(cartId).to.not.be.null
cy.request('http://localhost:3000/api/trpc/cart.getOne?input='+encodeURIComponent(`{"json":"${cartId}"}`))
.then(response => expect(response.body).to.not.be.empty)
}) */
})
})
/* describe('Person Enters the Site with a Fake Lead ID', () => {
clearLocalStorage()
localStorage.setItem('ergonomica_lead_id', 'random_lead_id')
it(`should:
1. create a new lead
2. overwrite the current local lead id
3. create a shopping cart w/ the correct lead id`,
() => {
cy.visit('localhost:3000')
.wait(1500)
cy.getLocalStorage('ergonomica_lead_id').then(leadId => {
expect(leadId).to.not.be.null
expect(leadId).to.not.be.equal('random_lead_id')
cy.request('http://localhost:3000/api/trpc/lead.getOne?input='+encodeURIComponent(`{"json":"${leadId}"}`))
.then(response => expect(response.body.result.data.json).to.not.be.null)
})
cy.getLocalStorage('ergonomica_cart_id').should('not.be.null')
})
}) */
describe('Returning Person Enters the Site with a real lead ID and an Active Cart', () => {
it(`should setup local lead_id and cart_id for an existing lead`, () => {
cy.clearCookies()
cy.session('leadId', ()=>{
cy.request('http://localhost:3000/api/trpc/lead.getFirst')
.then(response => {
expect(response.body.result.data.json).to.not.be.null
const lead = response.body.result.data.json
cy.setCookie('leadId', lead.id, {httpOnly: true})
cy.request('http://localhost:3000/api/trpc/cart.getActiveCartByUser?input='+encodeURIComponent(`{"json":"${lead.id}"}`))
.then(response => {
expect(response.body.result.data.json).to.not.be.null
})
})
})
})
it(`should:
1. maintain local lead_id and cart_id
2. not create a new lead
3. not create a new shopping cart`,
() => {
cy.request('http://localhost:3000/api/trpc/lead.getCount')
.then(response => {
expect(response.body.result.data.json).to.not.be.null
const firstLeadCount = response.body.result.data.json
cy.request('http://localhost:3000/api/trpc/cart.getCount')
.then(response => {
expect(response.body.result.data.json).to.not.be.null
//const firstCartCount = response.body.result.data.json
cy.visit('localhost:3000')
.wait(1500)
cy.getCookie('leadId').then(leadId => {
expect(leadId).to.not.be.null
cy.request('http://localhost:3000/api/trpc/lead.getCount').then(response => {
expect(response.body.result.data.json).to.equal(firstLeadCount)
})
})
/* cy.getLocalStorage('ergonomica_cart_id').then(cartId => {
expect(cartId).to.not.be.null
cy.request('http://localhost:3000/api/trpc/cart.getCount').then(response => {
expect(response.body.result.data.json).to.equal(firstCartCount)
})
}) */
})
})
})
})
// needs seeded user data with purchased carts but no active cart
describe('Returning Person Enters the Site with a real lead ID and no Active Carts', () => {
it('loads successfully', () => {
cy.visit('localhost:3000')
})
//add stuff for performance testing
})
Test search functionality
describe('Home Page Loads with Acceptable Performance', () => {
it('loads successfully', () => {
cy.visit('localhost:3000')
})
//add stuff for performance testing
})
describe('Search functionality works as expected', () => {
it('performs the search query on large screen', ()=> {
cy.viewport(1300, 800)
cy.visit('localhost:3000')
.wait(1500)
cy.get('#search')
.type('sillas')
.should('have.value', 'sillas')
.type('{enter}')
cy.url().should('include', '/products')
})
it('performs the search query on small screen', ()=> {
cy.visit('localhost:3000')
.wait(1500)
cy.get('#sidemenu-collapse').click()
cy.get('#search')
.type('sillas')
.should('have.value', 'sillas')
.type('{enter}')
cy.url().should('include', '/products')
})
})
describe('Search for an existing product with spaces', () => {
it('performs the search query on large screen', ()=> {
cy.viewport(1300, 800)
cy.visit('localhost:3000')
.wait(1500)
cy.get('#search')
.type('silla phaser')
.should('have.value', 'silla phaser')
.type('{enter}')
cy.url().should('include', '/products')
cy.get('#products').children().should('have.lengthOf', 1)
cy.get('#chair-phaser-bl').click()
cy.url().should('include', '/products/chair-phaser-bl')
})
})
|
gharchive/issue
| 2024-02-28T07:12:01 |
2025-04-01T04:56:05.106555
|
{
"authors": [
"gkpty"
],
"repo": "automate-sales/automate-commerce",
"url": "https://github.com/automate-sales/automate-commerce/issues/24",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
497372556
|
Update Issue - 44629374
default description
URL tested: http://ci.bsstag.com/welcome
Open URL on Browserstack
Browser: Chrome 76.0
Operating System: Windows 7
Resolution: 1024x612
Screenshot Attached
Screenshot URL
Click here to reproduce the issue on Browserstack
Updated Screenshot Attached
|
gharchive/issue
| 2019-09-23T23:13:52 |
2025-04-01T04:56:05.114741
|
{
"authors": [
"automationbs"
],
"repo": "automationbs/testbugreporting",
"url": "https://github.com/automationbs/testbugreporting/issues/1155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2544994215
|
False positive with new style gradle test suite definition
Build scan link
Plugin version
2.0.2
Gradle version
8.10.1
JDK version
17
(Optional) Kotlin and Kotlin Gradle Plugin (KGP) version
NA
(Optional) Android Gradle Plugin (AGP) version
NA
(Optional) reason output for bugs relating to incorrect advice
warns that junit-jupiter is not needed by testImplementation, but that dep is not explicitly present, nor is it shown by dependency-tree output.
Advice for root project
Unused dependencies which should be removed:
testImplementation 'org.junit.jupiter:junit-jupiter:5.11.0'
Describe the bug
Gradle now offers a "suite" concept for configuring JUnit with
testing {
    suites {
        configureEach {
            useJUnitJupiter(junitVersion)
        }
    }
}
To Reproduce
Steps to reproduce the behavior:
1. run buildHealth
Expected behavior
health passes
Additional context
The plugin doesn't currently have any support for the jvm test suites stuff.
Duplicates https://github.com/autonomousapps/dependency-analysis-gradle-plugin/issues/1273
Do you have a minimal reproducer?
|
gharchive/issue
| 2024-09-24T10:24:39 |
2025-04-01T04:56:05.129694
|
{
"authors": [
"autonomousapps",
"gregallen"
],
"repo": "autonomousapps/dependency-analysis-gradle-plugin",
"url": "https://github.com/autonomousapps/dependency-analysis-gradle-plugin/issues/1269",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1555391935
|
Retrigger timing
Inconsistencies in behaviour of retrigger mode
Choose the delay more carefully. Maybe trigger the delay from the conductor?
Answer: let the user choose.
|
gharchive/issue
| 2023-01-24T17:37:07 |
2025-04-01T04:56:05.132833
|
{
"authors": [
"autonym8"
],
"repo": "autonym8/LiveScaler",
"url": "https://github.com/autonym8/LiveScaler/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1567671769
|
feat(interpolation): add curvature calculation
Signed-off-by: Takayuki Murooka takayuki5168@gmail.com
Description
updated interpolation package for the refactoring PR of obstacle_avoidance_planner https://github.com/autowarefoundation/autoware.universe/pull/2796
feat: Curvature calculation is added with a unit test.
for point
https://github.com/autowarefoundation/autoware.universe/blob/b2715d051a419724d1308a3c6272c5cb06e0f91b/common/interpolation/src/spline_interpolation_points_2d.cpp#L172-L190
for points
https://github.com/autowarefoundation/autoware.universe/blob/b2715d051a419724d1308a3c6272c5cb06e0f91b/common/interpolation/src/spline_interpolation_points_2d.cpp#L192-L200
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
@rej55 @kosuke55 Could you approve the PR? I confirmed that the unit test for curvature calculation has passed.
|
gharchive/pull-request
| 2023-02-02T09:37:25 |
2025-04-01T04:56:05.182166
|
{
"authors": [
"takayuki5168"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/2801",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2112943467
|
feat(map): add launch_pointcloud_map_loader
Description
Loading the PCD map and running map_height_fitter with the point cloud are time-consuming (e.g. 3 sec for the initial pose).
They are not necessary for the planning simulator, so add launch_pointcloud_map_loader to disable them.
(I also feel this is not the best way.)
Tests performed
Set the initial pose in psim with the following:
ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=lexus sensor_model:=aip_xx1 map_path:=/home/kosuke55/data/map/odaiba launch_pointcloud_map_loader:=false
ros2 launch autoware_launch planning_simulator.launch.xml vehicle_model:=lexus sensor_model:=aip_xx1 map_path:=/home/kosuke55/data/map/odaiba launch_pointcloud_map_loader:=true
Effects on system behavior
Not applicable.
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
@KYabuuchi will think of a better way!!
|
gharchive/pull-request
| 2024-02-01T16:40:17 |
2025-04-01T04:56:05.188169
|
{
"authors": [
"kosuke55"
],
"repo": "autowarefoundation/autoware.universe",
"url": "https://github.com/autowarefoundation/autoware.universe/pull/6290",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2051644948
|
Adoption of Changelog Enforcer GitHub Action in Autoware Repositories
Checklist
[X] I've read the contribution guidelines.
[X] I've searched other issues and no duplicate issues were found.
[X] I've agreed with the maintainers that I can plan this task.
Description
To improve the management and tracking of changes across the Autoware repositories, I propose the adoption of the Changelog Enforcer GitHub Action.
This action will ensure that every pull request includes a changelog update, helping us maintain a clear and consistent record of changes.
We will also enforce the https://keepachangelog.com/en/1.0.0/ format.
We aim to start this with the autoware.universe repository and, if successful, expand to other repositories like autoware_launch and autoware_msgs.
Purpose
The primary goals of implementing this action are:
Ensuring Consistency: Each PR should contribute to the changelog, making it easier to track changes over time.
Improving Transparency: Clear documentation of changes enhances transparency for both developers and users.
Simplifying Release Process: With quarterly releases planned using CalVer, a well-maintained changelog will streamline the release process.
Possible approaches
Gradual Implementation: Start with autoware.universe, assess the effectiveness, and then proceed to other repositories.
Automate Changelog Entries: Utilize the Changelog Enforcer to automate the inclusion of changelog updates in PRs.
Quarterly Sync with CalVer: Align the changelog updates with our quarterly CalVer release cycle, ensuring that each release has a comprehensive changelog.
Centralized Changelog for Main Autoware Folder: Compile changes from the child repositories' changelogs into the main Autoware folder's changelog for each release.
Definition of done
[ ] Agreement on the adoption of the Changelog Enforcer GitHub Action.
[ ] Successful implementation and testing in the autoware.universe repository.
[ ] Development of a clear guideline for contributors on how to update the changelog.
[ ] Establishment of a process for syncing the main Autoware folder's changelog with child repositories.
[ ] Review and adjust the process after the first quarterly release cycle.
cc. @mitsudome-r @yukkysaito @isamu-takagi @TakaHoribe
There is also an automated way of doing it using github-changelog-generator.
It generates CHANGELOG.md files like this.
There is also Github Releases.
automatically-generated-release-notes
I will investigate these options more and also share my findings. Please don't hesitate to add your inputs.
https://github.com/googleapis/release-please#release-please
Release Please automates CHANGELOG generation, the creation of GitHub releases, and version bumps for your projects.
https://github.com/google-github-actions/release-please-action
@xmfcx I strongly support this activity. Moreover, it should be linked with future releases of Autoware.
One question: is it required to modify the changelog file in all PRs? Or, the changelog is updated automatically from the PR's description?
@TakaHoribe thank you!
is it required to modify the changelog file in all PRs? Or, the changelog is updated automatically from the PR's description?
For the https://github.com/marketplace/actions/changelog-enforcer mentioned in the first issue, you must manually edit the changelog.
But if we use https://github.com/googleapis/release-please#release-please then it is completely automated (driven by the commit messages).
It generates ChangeLogs like this: https://github.com/googleapis/gapic-generator/blob/master/CHANGELOG.md
release-please seems very popular and is also maintained by Google(?); for now it is the most promising option, and it requires the least additional effort from the contributors.
@xmfcx Interesting investigation!
Actually, we at TIER IV have already automated the changelog here.
They are created by generate-changelog CI which wraps git-cliff.
I'm curious whether changelog-enforcer and the other changelog generators are better than our current automated changelog.
|
gharchive/issue
| 2023-12-21T03:56:45 |
2025-04-01T04:56:05.202951
|
{
"authors": [
"TakaHoribe",
"shmpwk",
"xmfcx"
],
"repo": "autowarefoundation/autoware",
"url": "https://github.com/autowarefoundation/autoware/issues/4079",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1321716392
|
fix: use sim time option
Signed-off-by: tanaka3 ttatcoder@outlook.jp
Description
Without this PR, we cannot set use_sim_time via ros2 launch.
With this PR, the use_sim_time option is fixed for the planning simulator and logging simulator.
I confirmed use_sim_time is correctly set with the simulation clock rviz plugin.
https://user-images.githubusercontent.com/65527974/181677348-ada9f356-d719-4f0f-b884-0012e23a630b.mp4
Pre-review checklist for the PR author
The PR author must check the checkboxes below when creating the PR.
[x] I've confirmed the contribution guidelines.
[x] The PR follows the pull request guidelines.
In-review checklist for the PR reviewers
The PR reviewers must check the checkboxes below before approval.
[ ] The PR follows the pull request guidelines.
Post-review checklist for the PR author
The PR author must check the checkboxes below before merging.
[ ] There are no open discussions or they are tracked via tickets.
After all checkboxes are checked, anyone who has write access can merge the PR.
LGTM.
@kenji-miyake Could you check this PR?
Hmm? I guess it already exists.
diff
I've confirmed the current change works with use_sim_time.
https://user-images.githubusercontent.com/65527974/182082442-9be6bc07-007a-4a9f-9559-9a5a9241e7e3.mp4
|
gharchive/pull-request
| 2022-07-29T03:34:58 |
2025-04-01T04:56:05.210897
|
{
"authors": [
"kenji-miyake",
"taikitanaka3",
"takayuki5168"
],
"repo": "autowarefoundation/autoware_launch",
"url": "https://github.com/autowarefoundation/autoware_launch/pull/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1640427495
|
Change the path to the settings
Add a method/constructor parameter for the project, to which a directory/path to the settings file can be passed.
And the settings themselves too, yes.
|
gharchive/issue
| 2023-03-25T08:15:32 |
2025-04-01T04:56:05.213298
|
{
"authors": [
"Nivanchenko",
"nixel2007"
],
"repo": "autumn-library/autumn",
"url": "https://github.com/autumn-library/autumn/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
140180545
|
Fix gitcop link to contribution guidelines
As pointed out in #79, the link to the contribution guidelines is broken since it links to the develop branch.
The correct link should be:
https://github.com/autumnai/leaf/blob/master/CONTRIBUTING.md#git-commit-guidelines
Gitcop is updated with the link to the master branch
|
gharchive/issue
| 2016-03-11T13:15:33 |
2025-04-01T04:56:05.214713
|
{
"authors": [
"MichaelHirn",
"hobofan"
],
"repo": "autumnai/leaf",
"url": "https://github.com/autumnai/leaf/issues/80",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
480992914
|
I fixed it for Python 3; the code follows. I couldn't push it to git because it keeps denying me.
In __init__.py the lib name is lowercase and needs to be changed to Correios, and it is also necessary to install BeautifulSoup4.
# -*- coding: utf-8 -*-
"""
correios.py
API for using Correios data
"""
from urllib.request import urlopen

__version__ = '0.1.0'
__author__ = {
    'Thiago Avelino': 'thiagoavelinoster@gmail.com',
    'Dilan Nery': 'dnerylopes@gmail.com',
}

from urllib import parse, request, response, error
import re
from xml.dom import minidom

try:
    from bs4 import BeautifulSoup
except ImportError:
    raise Exception('Você não tem o modulo BeautifulSoup execute pip install beautifulsoup4', ImportError)


class Correios(object):
    PAC = 41106
    SEDEX = 40010
    SEDEX_10 = 40215
    SEDEX_HOJE = 40290
    E_SEDEX = 81019
    OTE = 44105
    NORMAL = 41017
    SEDEX_A_COBRAR = 40045

    def __init__(self):
        self.status = 'OK'

    def _getDados(self, tags_name, dom):
        dados = {}
        for tag_name in tags_name:
            try:
                dados[tag_name] = dom.getElementsByTagName(tag_name)[0]
                dados[tag_name] = dados[tag_name].childNodes[0].data
            except:
                dados[tag_name] = ''
        return dados

    # Several fields became mandatory for the shipping cost calculation:
    # http://www.correios.com.br/webServices/PDF/SCPP_manual_implementacao_calculo_remoto_de_precos_e_prazos.pdf (pages 2 and 3)
    def frete(self, cod, GOCEP, HERECEP, peso, formato,
              comprimento, altura, largura, diametro, mao_propria='N',
              valor_declarado='0', aviso_recebimento='N',
              empresa='', senha='', toback='xml'):
        base_url = "http://ws.correios.com.br/calculador/CalcPrecoPrazo.aspx"
        fields = [
            ('nCdEmpresa', empresa),
            ('sDsSenha', senha),
            ('nCdServico', cod),
            ('sCepOrigem', HERECEP),
            ('sCepDestino', GOCEP),
            ('nVlPeso', peso),
            ('nCdFormato', formato),
            ('nVlComprimento', comprimento),
            ('nVlAltura', altura),
            ('nVlLargura', largura),
            ('nVlDiametro', diametro),
            ('sCdMaoPropria', mao_propria),
            ('nVlValorDeclarado', valor_declarado),
            ('sCdAvisoRecebimento', aviso_recebimento),
            ('StrRetorno', toback),
        ]
        url = base_url + "?" + parse.urlencode(fields)
        dom = minidom.parse(urlopen(url))
        tags_name = ('MsgErro',
                     'Erro',
                     'Codigo',
                     'Valor',
                     'PrazoEntrega',
                     'ValorMaoPropria',
                     'ValorValorDeclarado',
                     'EntregaDomiciliar',
                     'EntregaSabado',)
        return self._getDados(tags_name, dom)

    def cep(self, numero):
        url = 'http://cep.republicavirtual.com.br/web_cep.php?formato=' \
              'xml&cep=%s' % str(numero)
        dom = minidom.parse(urlopen(url))
        tags_name = ('uf',
                     'cidade',
                     'bairro',
                     'tipo_logradouro',
                     'logradouro',)
        resultado = dom.getElementsByTagName('resultado')[0]
        resultado = int(resultado.childNodes[0].data)
        if resultado != 0:
            return self._getDados(tags_name, dom)
        else:
            return {}

    def encomenda(self, numero):
        # Used Guilherme Chapiewski's code as a reference
        # https://github.com/guilhermechapiewski/correios-api-py
        url = 'http://websro.correios.com.br/sro_bin/txect01$.QueryList?' \
              'P_ITEMCODE=&P_LINGUA=001&P_TESTE=&P_TIPO=001&P_COD_UNI=%s' % \
              str(numero)
        html = urlopen(url).read().decode('utf-8', errors='ignore')  # decode bytes for Python 3
        table = re.search(r'<table.*</TABLE>', html, re.S).group(0)
        parsed = BeautifulSoup(table)
        dados = []
        for count, tr in enumerate(parsed.table):
            if count > 4 and str(tr).strip() != '':
                if re.match(r'\d{2}/\d{2}/\d{4} \d{2}:\d{2}',
                            tr.contents[0].string):
                    dados.append({
                        'data': str(tr.contents[0].string),
                        'local': str(tr.contents[1].string),
                        'status': str(tr.contents[2].font.string)
                    })
                else:
                    dados[len(dados) - 1]['detalhes'] = str(
                        tr.contents[0].string)
        return dados
@nospelon can you send a PR with the fix?
|
gharchive/issue
| 2019-08-15T04:49:46 |
2025-04-01T04:56:05.277581
|
{
"authors": [
"avelino",
"nospelon"
],
"repo": "avelino/pycorreios",
"url": "https://github.com/avelino/pycorreios/issues/25",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1069783569
|
Support page and page_size parameter in Text Analysis Export
Is your feature request related to a problem? Please describe.
The product API has the two fields page and pageSize that can be used to limit the results. Currently, these arguments are not accepted by our API.
Describe the solution you'd like
The function export_text_analysis(self, annotation_types: str = None) -> dict: should accept these two parameters.
Describe alternatives you've considered
None
Should be page_size?
I added an implementation of page and page_size and tested it on 50 documents in a live product.
client = Client("localhost:8800")
project = client.get_project("test")
collection = project.get_document_collection("Codex_50Train")
process = collection.get_process("discharge")

print("Batchwise export documents (page_size=4):")
for page in range(1, 15):
    print(f"Page (Batch) number {page}")
    out = process.export_text_analysis(page_size=4, page=page)
    for d in out['textAnalysisResultDtos']:
        print("Document name: " + d["documentName"])
It works as it should, only the order of the documents does not seem to follow any pattern.
Batchwise export documents (page_size=4):
Page (Batch) number 1
Document name: Arztbrief (10).txt
Document name: Arztbrief (14).txt
Document name: Arztbrief (11).txt
Document name: Arztbrief (17).txt
Page (Batch) number 2
Document name: Arztbrief (21).txt
Document name: Arztbrief (20).txt
Document name: Arztbrief (18).txt
Document name: Arztbrief (2).txt
Page (Batch) number 3
Document name: Arztbrief (29).txt
Document name: Arztbrief (31).txt
Document name: Arztbrief (26).txt
Document name: Arztbrief (23).txt
Page (Batch) number 4
Document name: Arztbrief (33).txt
Document name: Arztbrief (28).txt
Document name: Arztbrief (4).txt
Document name: Arztbrief (34).txt
Page (Batch) number 5
Document name: Arztbrief (41).txt
Document name: Arztbrief (43).txt
Document name: Arztbrief (47).txt
Document name: Arztbrief (44).txt
Page (Batch) number 6
Document name: Arztbrief (48).txt
Document name: Arztbrief (46).txt
Document name: Arztbrief (51).txt
Document name: Arztbrief (45).txt
Page (Batch) number 7
Document name: Arztbrief (59).txt
Document name: Arztbrief (53).txt
Document name: Arztbrief (6).txt
Document name: Arztbrief (52).txt
Page (Batch) number 8
Document name: Arztbrief (60).txt
Document name: Arztbrief (69).txt
Document name: Arztbrief (7).txt
Document name: Arztbrief (66).txt
Page (Batch) number 9
Document name: Arztbrief (71).txt
Document name: Arztbrief (73).txt
Document name: Arztbrief (74).txt
Document name: Arztbrief (72).txt
Page (Batch) number 10
Document name: Arztbrief (8).txt
Document name: Arztbrief (76).txt
Document name: Arztbrief (79).txt
Document name: Arztbrief (78).txt
Page (Batch) number 11
Document name: Arztbrief (90).txt
Document name: Arztbrief (85).txt
Document name: Arztbrief (84).txt
Document name: Arztbrief (83).txt
Page (Batch) number 12
Document name: Arztbrief (95).txt
Document name: Arztbrief (94).txt
Document name: Arztbrief (98).txt
Document name: Arztbrief (91).txt
Page (Batch) number 13
Document name: Arztbrief (99).txt
Document name: Arztbrief (96).txt
Page (Batch) number 14
Traceback (most recent call last):
File "/home/huebner/src/python/averbis-python-api/tests/david2.py", line 18, in <module>
out = process.export_text_analysis(page_size=4, page=page)
File "/home/huebner/src/python/averbis-python-api/averbis/core/_rest_client.py", line 794, in export_text_analysis
return self.project.client._export_text_analysis(
File "/home/huebner/src/python/averbis-python-api/averbis/core/_rest_client.py", line 2047, in _export_text_analysis
response = self.__request_with_json_response(
File "/home/huebner/src/python/averbis-python-api/averbis/core/_rest_client.py", line 1412, in __request_with_json_response
self.__handle_error(raw_response)
File "/home/huebner/src/python/averbis-python-api/averbis/core/_rest_client.py", line 2489, in __handle_error
raise RequestException(error_msg)
requests.exceptions.RequestException: 400 Server Error: 'Bad Request' for url: 'http://ecstasy.averbis.intern:8702/health-discovery/rest/v1/textanalysis/projects/test/documentSources/Codex_50Train/processes/discharge/export?page=14&pageSize=4'.
Endpoint error message is: 'The requested page '14' is bigger than the last page '13''
Closed after merge from https://github.com/averbis/averbis-python-api/pull/99
|
gharchive/issue
| 2021-12-02T17:03:07 |
2025-04-01T04:56:05.283026
|
{
"authors": [
"DavidHuebner",
"reckart"
],
"repo": "averbis/averbis-python-api",
"url": "https://github.com/averbis/averbis-python-api/issues/98",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
97560647
|
Fixed use of '++' operator
Hello,
In my investigation for JuliaLang/julia#11686 I found a usage of ++ in your package, which I'm pretty sure was an incorrect translation from C/C++. Now that the change is merged, it causes a syntax error, so hopefully this is the correct functionality you were hoping to have.
Thanks @phobon ! Appreciate this.
|
gharchive/pull-request
| 2015-07-27T21:43:04 |
2025-04-01T04:56:05.292096
|
{
"authors": [
"aviks",
"phobon"
],
"repo": "aviks/Ito.jl",
"url": "https://github.com/aviks/Ito.jl/pull/15",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
784283493
|
Rock, Paper & Scissors Game
Aim
Rock, Paper & Scissors Game
Details
The goal is to create a command-line game where the user chooses between rock, paper, and scissors; if the user wins, the score is incremented, and when the user finishes the game, the final score is shown.
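A minimal sketch of the game loop (file layout and exact scoring in the final script may differ):

```python
import random

CHOICES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def play() -> None:
    score = 0
    while True:
        user = input("rock, paper, scissors (or quit): ").strip().lower()
        if user == "quit":
            break
        if user not in CHOICES:
            print("Invalid choice, try again.")
            continue
        computer = random.choice(CHOICES)
        print(f"Computer chose {computer}")
        if user == computer:
            print("It's a tie.")
        elif BEATS[user] == computer:
            score += 1
            print("You win this round!")
        else:
            print("You lose this round.")
    print(f"Final score: {score}")

if __name__ == "__main__":
    play()
```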
Please assign me this issue @avinashkranjan sir
Assigned to you @Ayush7614
Okk No Problem sir @avinashkranjan
|
gharchive/issue
| 2021-01-12T14:25:00 |
2025-04-01T04:56:05.298972
|
{
"authors": [
"Ayush7614",
"avinashkranjan"
],
"repo": "avinashkranjan/Amazing-Python-Scripts",
"url": "https://github.com/avinashkranjan/Amazing-Python-Scripts/issues/238",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
738138351
|
Accept CompletionStage class as until method parameter
In JDK 8 you can use CompletionStage to work on other non-blocking threads, as in the Play! Framework.
It would be very good to implement this functionality to help with unit tests over this interface.
No importance at all.
Closing this issue.
|
gharchive/issue
| 2020-11-07T02:19:47 |
2025-04-01T04:56:05.353595
|
{
"authors": [
"felipebonezi"
],
"repo": "awaitility/awaitility",
"url": "https://github.com/awaitility/awaitility/issues/190",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1940624069
|
curl: fix CVE-2023-38545
see https://curl.se/docs/CVE-2023-38545.html
Thanks for getting to this so quickly!
|
gharchive/pull-request
| 2023-10-12T19:09:07 |
2025-04-01T04:56:05.354783
|
{
"authors": [
"baloo",
"jsoo1"
],
"repo": "awakesecurity/nixpkgs",
"url": "https://github.com/awakesecurity/nixpkgs/pull/164",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
71871204
|
npm version bump
Looks like the latest version available on npm is still 1.1.2, while 1.1.3 and 1.1.4 have been released.
1.1.5 is on npm now :+1:
|
gharchive/issue
| 2015-04-29T13:12:24 |
2025-04-01T04:56:05.434478
|
{
"authors": [
"davidvanleeuwen",
"jimf"
],
"repo": "awkward/backbone.modal",
"url": "https://github.com/awkward/backbone.modal/issues/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1425595531
|
Dynamic group subscriptions
Description of changes
Add support for dynamic groups in subscriptions with enhanced filtering
Issue #, if available
Description of how you validated changes
E2E, unit, manual testing
Checklist
[ ] PR description included
[ ] yarn test passes
[ ] Tests are changed or added
[ ] Relevant documentation is changed or added (and PR referenced)
[ ] New AWS SDK calls or CloudFormation actions have been added to relevant test and service IAM policies
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Closing this PR in favor of https://github.com/aws-amplify/amplify-category-api/pull/944, which is on a branch multiple folks on the team can iterate on.
|
gharchive/pull-request
| 2022-10-27T12:57:45 |
2025-04-01T04:56:05.440482
|
{
"authors": [
"alharris-at",
"marcvberg"
],
"repo": "aws-amplify/amplify-category-api",
"url": "https://github.com/aws-amplify/amplify-category-api/pull/933",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
738466773
|
Removing @searchable causes unintended impacts on unrelated Elasticsearch endpoints
Describe the bug
I had 2 projects using amplify-cli (Project1 and Project2), both using the @searchable keyword on a schema object and so both having an Elasticsearch domain. I removed the @searchable tag from the schema on Project1 and pushed the change. Subsequently, not only was the Elasticsearch domain for Project1 deleted, but also the Elasticsearch domain for Project2 was impacted with loss of all data and loss of a customized index mapping.
Amplify CLI Version
4.10.0
Expected behavior
Expected no impact on the Elasticsearch domain for Project2
Desktop (please complete the following information):
OS: Mac Catalina 10.15.7
Node Version. 12.19.0
@scowie This seems concerning. How was the other ES domain configured?
When you remove the @searchable directive, only the ElasticSearch domain configured using the CloudFormation stack by the CLI should be removed and impacted.
@kaustavghosh06 The ES domain was configured to allow searching for activities by location (map bounding box) as follows: 1) Created a new mapping type property to have "type": "geo-point", 2) Deleted the index "activity" which had been created by the amplify cli for the searchable graphql type "Activity", 3) Re-created the "activity" index on the elasticsearch domain to include use of the new "geo-point" property.
I will also note that the elasticsearch domain that was deleted from Project1 was configured the same way with a "activity" index having been created from a searchable graphql "Activity" type. Both projects were working and had activities in elasticsearch domains. Then I removed the searchable from Project1 because it was no longer needed. The next time I loaded Project2 I noticed all of the activities were gone and I had to repeat the 3 steps that I mentioned above to get it working again.
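For context, that kind of reconfiguration can be scripted roughly as below; the endpoint, index, and field names are placeholders, request signing is omitted, and depending on the Elasticsearch version the mapping body may also need a doc type:

```python
import requests

ES_ENDPOINT = "https://search-example-domain.us-east-1.es.amazonaws.com"  # placeholder

# 1) Delete the index that was created for the searchable type
requests.delete(f"{ES_ENDPOINT}/activity")

# 2) Re-create it with a geo_point property for location (bounding box) queries
mapping = {"mappings": {"properties": {"location": {"type": "geo_point"}}}}
requests.put(f"{ES_ENDPOINT}/activity", json=mapping)
```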
re-adding pending triage until repro steps are provided.
Hey @scowie :wave: unforunately I'm unable to reproduce this using the following steps on the latest version of the CLI, 4.50.2:
in two separate directories, project1/ and project2/:
initialize Amplify project with amplify init -y
add a GraphQL API with amplify add api and accepting defaults
modify the Todo schema with
```graphql
type Todo @model @key(fields: ["name"]) @searchable {
name: String!
description: String
}
```
run amplify push and follow the prompts
Using the GraphQL API, add a few sample data items
Confirm searchable queries are functional
Remove @searchable from schema in project1
run amplify push and follow the prompts
confirm searchable queries are not valid in project1
confirm searchable queries still function in project2
Please let me know if there are any differences between what you experienced and the procedure noted above (with the updated CLI), and if applicable provide a sample of the GraphQL schema used when this issue occurred.
|
gharchive/issue
| 2020-11-08T13:34:57 |
2025-04-01T04:56:05.449331
|
{
"authors": [
"attilah",
"josefaidt",
"kaustavghosh06",
"scowie"
],
"repo": "aws-amplify/amplify-cli",
"url": "https://github.com/aws-amplify/amplify-cli/issues/5809",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1327443695
|
feat: support for cta labels
Issue #, if available:
Updates FormDefinition and StudioForm to accept StudioFormButtons to customize CTA label names, visibility, and row position.
Description of changes:
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
This change shouldn't affect the other components in the integ tests. Maybe a rebase should address it?
|
gharchive/pull-request
| 2022-08-03T15:49:46 |
2025-04-01T04:56:05.451395
|
{
"authors": [
"SwaySway",
"scottyoung"
],
"repo": "aws-amplify/amplify-codegen-ui",
"url": "https://github.com/aws-amplify/amplify-codegen-ui/pull/554",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
784654577
|
AWS Console has broken pages for projects with unsuccessful builds
Describe the bug
For the past couple weeks, I have noticed many of our amplify builds (Gatsby projects) getting stuck at the "Provisioning" stage. When that happens, I would need to click on the "Build" link, then click the "Cancel" button, and then click the "Redeploy This Version" button.
I'm still seeing many builds stuck in the provisioning stage, but now when I click on the "Build" button, the AWS Console page is broken, and doesn't render all the way. I've reproduced this on Safari and Chrome. There's an error in the browser's javascript console that says:
TypeError: null is not an object (evaluating 'r.summary.startTime.getTime') in main.js
To Reproduce
See above
Expected behavior
You should see a page like this that has the indicated button:
Screenshots
Here's a screenshot of the pages with the error above:
Desktop (please complete the following information):
OS: Mac OS 10.15.7
Browser: Chrome and Safari
Thank you for reporting this. In regard to the many builds being stuck in the provisioning stage, are you potentially hitting the limit of 5 concurrent builds documented here: https://docs.aws.amazon.com/general/latest/gr/amplify.html
Thanks, perhaps so. Is there a way to increase that quota? Still need the UI bug addressed, though, so I can restart the builds manually.
@aaroncampos We will prioritize fixing this bug so that you can manually restart builds. Also note that the builds will eventually run without intervention once the number of concurrent builds is less than 5.
|
gharchive/issue
| 2021-01-12T22:56:53 |
2025-04-01T04:56:05.459117
|
{
"authors": [
"aaroncampos",
"hsspain",
"nimacks"
],
"repo": "aws-amplify/amplify-console",
"url": "https://github.com/aws-amplify/amplify-console/issues/1452",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1295178884
|
AWS amplify not rendering image while local build render image correctly
Please describe which feature you have a question about?
I'm currently using AWS amplify to run one of my react app. I added one image in my react public folder so I can use that as a background. When I test it in my local, the image shows correctly. However, after I push my code to git and use AWS amplify to build it, I found the image is missing. Based on the build log, seems the image is never copy to the build directory. Is there anything I need to do to make sure the image is correct uploaded?
Provide additional details
My git branch: https://github.com/andreling01/mltd-ranking
My app id: d3azsbdqn01xd0
What AWS Services are you utilizing?
AWS amplify
Provide additional details e.g. code snippets
I also make sure the manifest file is changed and added with my new image information.
Hi @andreling01 👋🏽 thanks for raising this question. I was able to reproduce and find a solution for you. 🙂 We have a rewrite rule for SPA applications that routes all files to index.html. We use regular expressions to identify the different file extensions that we rewrite:
Your background image is a jpeg, which we did not add to the regex. If you add |jpeg| to the pattern, then you should be able to see your background image.
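To see why the missing extension matters, the pattern can be checked locally; this is just an illustration with Python's re module (the "old" pattern below is assumed to be the same rule without jpeg), and a match means the request gets rewritten to index.html:

```python
import re

# SPA rewrite source pattern without (old) and with (new) the jpeg extension
old = re.compile(r"^[^.]+$|\.(?!(css|gif|ico|jpg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)")
new = re.compile(r"^[^.]+$|\.(?!(css|gif|ico|jpg|jpeg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)")

print(bool(old.search("static/background.jpeg")))  # True  -> rewritten, image breaks
print(bool(new.search("static/background.jpeg")))  # False -> served as a file
print(bool(new.search("todos/123")))               # True  -> still rewritten for the SPA
```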
Workaround:
In the console, navigate to your app and select Rewrites and redirects
Select Edit and replace the source address with </^[^.]+$|\.(?!(css|gif|ico|jpg|jpeg|js|png|txt|svg|woff|ttf|map|json)$)([^.]+$)/>
Save and refresh your hosted page.
Please let us know that this works for you.
Thanks! That solved my problem!
|
gharchive/issue
| 2022-07-06T05:36:34 |
2025-04-01T04:56:05.514667
|
{
"authors": [
"andreling01",
"hloriana"
],
"repo": "aws-amplify/amplify-hosting",
"url": "https://github.com/aws-amplify/amplify-hosting/issues/2849",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1212582207
|
chore(datastore): Temporal performance enhancements
Issue #, if available:
Addresses:
#1734
Partially addresses:
#1579
#1428
Description of changes:
Performance Improvements
Re-write of Temporal implementation details relating to String <-> Date conversion to cache DateFormatters. The public API surface remains mostly the same (with a few minor exceptions documented below).
Current situation:
Every String -> Date conversion currently creates up to 4 separate DateFormatters. This is due to this API supporting the conversion of String -> Date when the exact format isn't known, but still limited to a maximum of 4 different formats per TemporalSpec conforming type (Temporal.Date, Temporal.DateTime, Temporal.Time).
DateFormatter instantiation isn't a particularly cheap process.
With this change:
These changes cache DateFormatters for reuse in a thread safe way, up to a maximum of 14, using a Random Replacement (RR) eviction policy. It also adds lower level utility converters (Date -> String and String -> Date) leveraged by the new implementation.
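Purely as an illustration of the caching idea (the real implementation is Swift and is not reproduced here), a rough Python sketch of a bounded, lock-guarded cache with Random Replacement eviction; all names are made up:

```python
import random
import threading

class FormatterCache:
    """Bounded cache with Random Replacement (RR) eviction, guarded by a lock."""

    def __init__(self, max_size: int = 14):
        self._max_size = max_size
        self._lock = threading.Lock()
        self._cache: dict[str, object] = {}  # date format string -> formatter

    def formatter(self, date_format: str, make_formatter):
        with self._lock:
            if date_format in self._cache:
                return self._cache[date_format]
            if len(self._cache) >= self._max_size:
                # Random Replacement: evict an arbitrary cached formatter
                self._cache.pop(random.choice(list(self._cache)))
            new_formatter = make_formatter(date_format)
            self._cache[date_format] = new_formatter
            return new_formatter
```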
API Changes
Temporal was previously a namespace struct with a private init. It's now a namespace (caseless) enum. This change has no potential to break client code.
~The TemporalSpec protocol requirement init(iso8601String: String) throws is now init(iso8601String: String, format: Format) throws. Every concrete type conforming to TemporalSpec (Temporal.Date, Temporal.DateTime, Temporal.Time) has a default argument for format to prevent any possibility of breakage. This is an improvement in the API, because while there are cases where we don't know the exact dateFormat, there are other cases where we do know it. This eliminates the need to attempt to convert based on various different formats if the format is known.~ Added a separate init(iso8601String:format:) with a default implementation, so no breaking.
TemporalFormat in the previous implementation was an enum. This has been changed to a struct with static members. This change does have the possibility to break client code, although the likelihood is exceedingly low. The only situation this could happen is if client code is switching over all cases exhaustively; the change to a struct means that this check would no longer be exhaustive. Switching over the formats as they exist today would serve no purpose. Even if this is happening somewhere, the fix is adding a default case to the end of the switch. The main reason behind this change is to support the future expansion of further date formats as documented in #1583 and #1713, without breaking client code.
Deprecates TemporalUnit, a public protocol that was not used anywhere within Amplify.
~Adds AnyTemporalSpec, a type-erasing existential that provides access to iso8601String and is needed due to the introduction of an associatedtype in the TemporalSpec protocol.~ Change reverted
Performance Wins:
Micro and macro performance testing was conducted to accurately measure any improvements or regressions these changes introduce.
Micro testing consisted of calling Temporal.Date(iso8601String:), Temporal.DateTime(iso8601String:), and Temporal.Time(iso8601String:) 50 times for each of the applicable iso8601String supported formats.
Macro testing consisted of decoding an array of 100 Model conforming objects, each with 50 child models. The parent contained (among other primitives) two Temporal.x properties, and each child contained one.
Results:
*All times displayed in seconds.
Micro

Temporal.Date

| | short (yyyy-MM-dd) | medium / long / full (yyyy.MM.ddZZZZ) |
| --- | --- | --- |
| Old | 0.027 | 0.007 |
| New | 0.005 | 0.003 |
| Δ Absolute | 0.022 | 0.004 |
| Δ % | 440% | 133% |

Temporal.DateTime

| | short (yyyy-MM-dd'T'HH:mm) | medium (yyyy-MM-dd'T'HH:mm:ss) | long (yyyy-MM-dd'T'HH:mm:ssZZZZZ) | full (yyyy-MM-dd'T'HH:mm:ssSSSZZZZZ) |
| --- | --- | --- | --- | --- |
| Old | 0.027 | 0.021 | 0.013 | 0.007 |
| New | 0.009 | 0.007 | 0.004 | 0.002 |
| Δ Absolute | 0.018 | 0.014 | 0.009 | 0.005 |
| Δ % | 200% | 200% | 225% | 250% |

Temporal.Time

| | short (HH:mm) | medium (HH:mm:ss) | long (HH:mm:ss.SSS) | full (HH:mm:ss.SSSZZZZZ) |
| --- | --- | --- | --- | --- |
| Old | 0.025 | 0.020 | 0.015 | 0.010 |
| New | 0.007 | 0.006 | 0.005 | 0.004 |
| Δ Absolute | 0.018 | 0.014 | 0.010 | 0.006 |
| Δ % | 257% | 233% | 200% | 150% |

Macro

| | Time |
| --- | --- |
| Old | 1.517 |
| New | 0.553 |
| Δ Absolute | 0.964 |
| Δ % | 174% |
Check points: (check or cross out if not relevant)
[ ] ~Added new tests to cover change, if needed~
[x] Build succeeds with all target using Swift Package Manager
[x] All unit tests pass
[x] All integration tests pass
[x] Security oriented best practices and standards are followed (e.g. using input sanitization, principle of least privilege, etc)
[x] Documentation update for the change if required
[x] PR title conforms to conventional commit style
[ ] If breaking change, documentation/changelog update with migration instructions
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
@lawmicha can you provide any insights into the failing DataStore tests? I can't seem to reproduce them failing locally and from what I can tell the tests have nothing to do with these temporal changes.
Despite the tests passing, there's an underlying issue here. In local tests, running testUpdatedModelNoLongerMatchesPredicateRemovedFromSnapshot repeatedly 100 times results in failures. main passes, as does the commit before these changes. So this requires some more investigation before approving / merging.
@lawmicha can you provide any insights into the failing DataStore tests? I can't seem to reproduce them failing locally and from what I can tell the tests have nothing to do with these temporal changes.
I added a commit onto this PR https://github.com/aws-amplify/amplify-ios/pull/1760/commits/c384882a1ff99050c6f5ea756faf0ec838a03d26 for fixing the test that failed on "run repeatedly". The race condition was adding the operation to the operation queue while publishing events to the publisher. Not really sure why it was not appearing before the changes in this PR, as the only code path that changed was the initialization of createdAt field in MutationEvent using .now(). The test started failing when the events were published first, then a thread picked up the operation from the operation queue. Since the observe query operation started running after the events that were published from the test logic, it would not receive them.
Locally I'm going to run a few API integration tests to exercise the decoding of strings to Temporal types, and also enabled the integration test workflow in this separate PR to get more visibility into the test results: https://github.com/aws-amplify/amplify-ios/pull/1827
Will investigate the flaky test https://app.circleci.com/pipelines/github/aws-amplify/amplify-ios/4953/workflows/23e03ca6-e27c-4a90-9f43-7b5aa9c13d93/jobs/42782 testSentModelWithNilVersion_Reconciled
|
gharchive/pull-request
| 2022-04-22T16:59:18 |
2025-04-01T04:56:05.542717
|
{
"authors": [
"atierian",
"lawmicha"
],
"repo": "aws-amplify/amplify-ios",
"url": "https://github.com/aws-amplify/amplify-ios/pull/1760",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1309846988
|
fix(datastore): Fix ModelSyncedEventEmitter, add lifecycle events logging
Issue #, if available:
Description of changes:
Check points: (check or cross out if not relevant)
[ ] Added new tests to cover change, if needed
[ ] Build succeeds with all target using Swift Package Manager
[ ] All unit tests pass
[ ] All integration tests pass
[ ] Security oriented best practices and standards are followed (e.g. using input sanitization, principle of least privilege, etc)
[ ] Documentation update for the change if required
[ ] PR title conforms to conventional commit style
[ ] If breaking change, documentation/changelog update with migration instructions
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Codecov Report
Merging #2022 (13e3c90) into feat/datastore-custom-pk (b6171b1) will decrease coverage by 0.60%.
The diff coverage is 54.54%.
@@ Coverage Diff @@
## feat/datastore-custom-pk #2022 +/- ##
============================================================
- Coverage 59.52% 58.92% -0.61%
============================================================
Files 719 660 -59
Lines 22544 20845 -1699
============================================================
- Hits 13420 12282 -1138
+ Misses 9124 8563 -561
| Flag | Coverage Δ |
| --- | --- |
| API_plugin_unit_test | 62.47% <ø> (ø) |
| AWSPluginsCore | ? |
| Amplify | 48.23% <ø> (ø) |
| Analytics_plugin_unit_test | 71.09% <ø> (ø) |
| Auth_plugin_unit_test | 78.33% <ø> (ø) |
| DataStore_plugin_unit_test | 75.65% <54.54%> (-0.07%) :arrow_down: |
| Geo_plugin_unit_test | ? |
| Predictions_plugin_unit_test | 26.81% <ø> (ø) |
| Storage_plugin_unit_test | 58.04% <ø> (ø) |
Flags with carried forward coverage won't be shown. Click here to find out more.
| Impacted Files | Coverage Δ |
| --- | --- |
| ...WSDataStoreCategoryPlugin/AWSDataStorePlugin.swift | 75.17% <0.00%> (-1.64%) :arrow_down: |
| ...gin/Sync/InitialSync/InitialSyncOrchestrator.swift | 75.19% <0.00%> (-0.59%) :arrow_down: |
| ...goryPlugin/Sync/InitialSync/SyncEventEmitter.swift | 92.10% <0.00%> (-2.49%) :arrow_down: |
| ...gin/Sync/InitialSync/ModelSyncedEventEmitter.swift | 90.72% <100.00%> (+1.13%) :arrow_up: |
| ...Engine+IncomingEventReconciliationQueueEvent.swift | 51.02% <100.00%> (+1.02%) :arrow_up: |
| ...ataStoreCategoryPlugin/Sync/RemoteSyncEngine.swift | 94.50% <100.00%> (+0.02%) :arrow_up: |
| ...tionSync/AWSIncomingEventReconciliationQueue.swift | 57.50% <0.00%> (-1.67%) :arrow_down: |
| ...gin/Subscribe/DataStoreObserveQueryOperation.swift | 74.64% <0.00%> (-0.94%) :arrow_down: |
| ...LocationGeoPlugin/AWSLocationGeoPlugin+Reset.swift | |
| ...e/AWSPluginsCore/Auth/AWSAuthServiceBehavior.swift | |
| ... and 59 more | |
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
|
gharchive/pull-request
| 2022-07-19T17:39:31 |
2025-04-01T04:56:05.564451
|
{
"authors": [
"codecov-commenter",
"lawmicha"
],
"repo": "aws-amplify/amplify-ios",
"url": "https://github.com/aws-amplify/amplify-ios/pull/2022",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2050149266
|
Having issue with refresh token, getting Nil while calling awsmobileclient.default().gettokens in swift
Describe the bug
While calling the method "awsmobileclient.default().gettokens" in Swift in Xcode, I am getting a nil value.
I am calling it after a successful login with Amplify.Auth.signIn.
After that, when I try to call any mutation, I get an Auth error.
Steps To Reproduce
Steps to reproduce the behavior:
1. Log in with Amplify.Auth.signIn
2. After successful login, call awsmobileclient.default().gettokens
3. The result is nil instead of a token
Expected behavior
There should be a token in the return value.
Amplify Framework Version
2.25.0
Amplify Categories
API, Auth
Dependency manager
Swift PM
Swift version
5.8
CLI version
10.8.1
Xcode version
15.0.1
Relevant log output
Error: notSignedIn(message: "User is not signed in to Cognito User Pool, please sign in to use this API.")
Is this a regression?
Yes
Regression additional context
No response
Platforms
iOS
OS Version
iOS 17.0
Device
iPhone 14 Pro Max
Specific to simulators
No response
Additional context
No response
I'm also experiencing this issue
I see you're using Amplify version 2.25.0 - AWSMobileClient is no longer used in Amplify v2.
After calling Amplify.Auth.signIn(), you may fetch the credentials/tokens by following the documentation here:
https://docs.amplify.aws/swift/build-a-backend/auth/accessing-credentials/
After calling the documented method, I'm unable to call any mutation with this method:
appDelegate.appSyncClient?.fetch(query: query, cachePolicy: .fetchIgnoringCacheData) { [weak self] (result, error) in
After getting the token, AWSMobileClient was setting the auth token; can we do that in Amplify v2?
After calling Amplify.Auth.signIn(), Amplify automatically fetches tokens from Cognito UserPool and Identity Pool and saves it in Keychain.
To get the aws credentials/cognito user tokens, you can call like this:
import AWSPluginsCore

do {
    let session = try await Amplify.Auth.fetchAuthSession()

    // Get user sub or identity id
    if let identityProvider = session as? AuthCognitoIdentityProvider {
        let usersub = try identityProvider.getUserSub().get()
        let identityId = try identityProvider.getIdentityId().get()
        print("User sub - \(usersub) and identity id \(identityId)")
    }

    // Get AWS credentials
    if let awsCredentialsProvider = session as? AuthAWSCredentialsProvider {
        let credentials = try awsCredentialsProvider.getAWSCredentials().get()
        // Do something with the credentials
    }

    // Get cognito user pool token
    if let cognitoTokenProvider = session as? AuthCognitoTokensProvider {
        let tokens = try cognitoTokenProvider.getCognitoTokens().get()
        // Do something with the JWT tokens
    }
} catch let error as AuthError {
    print("Fetch auth session failed with error - \(error)")
} catch {
}
I'm facing a similar issue, when doing:
// Get cognito user pool token
if let cognitoTokenProvider = session as? AuthCognitoTokensProvider {
    let tokens = try cognitoTokenProvider.getCognitoTokens().get()
    // Do something with the JWT tokens
}
the cast in the `if let cognitoTokenProvider` is nil so it won't work; the interesting thing is that if I debug it and run the same cast in the debugger, it works
Hi @IvanSolaris, if you have enabled guest users in the Cognito Identity Pool and no user is signed in, you will only be able to access the identityId and AWS credentials. All other session details will give you an error.
|
gharchive/issue
| 2023-12-20T09:14:11 |
2025-04-01T04:56:05.665423
|
{
"authors": [
"IvanSolaris",
"Montchat",
"abdurrazax",
"thisisabhash"
],
"repo": "aws-amplify/amplify-swift",
"url": "https://github.com/aws-amplify/amplify-swift/issues/3431",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1247024631
|
Update aws_signature to 0.3.1
It looks like only AWS.Signature uses the aws_signature package, and only the sign_v4 function, which seems to maintain its function signature between the versions.
However it would be nice to be able to use this package along with the new options in sign_v4_query_params/8.
@0urobor0s thank you! :purple_heart:
|
gharchive/pull-request
| 2022-05-24T19:19:43 |
2025-04-01T04:56:05.675284
|
{
"authors": [
"0urobor0s",
"philss"
],
"repo": "aws-beam/aws-elixir",
"url": "https://github.com/aws-beam/aws-elixir/pull/140",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1678417742
|
Parsing error handling rule file
General Issue
yes
The Question
I am running the cfn-guard validate command to test a CloudFormation template, but I am getting an error with multiple rules. Attaching a screenshot with the "Parsing error handling rule file" error message. Am I doing this the wrong way, or is there an issue with how cfn-guard handles these rule files?
using command: cfn-guard validate -v --data cfn-template --rules ./aws-guard-rules-registry-1.0.2/rules/aws
where I have my template file in YAML format inside the cfn-template folder.
CloudFormation Guard Version
2.1.3
OS
Amazon Linux
OS Version
No response
Other information
Running the command during a build in AWS CodeBuild. I was testing it for IAM policies, and all the rules related to IAM policy were PASS, but I am not sure why this parsing error occurs with the other rule files.
Can anyone help on this issue?
Hi, have you downloaded the release rule sets and used the rule set files in there? Check out the releases and download the aggregated rule set files from there.
Take a look at Using Guard Rules Registry docs for examples.
> Hi, have you downloaded the release rule sets and used the rule set files in there? Check out the releases and download the aggregated rule set files from there.

Yes, did the same.

> Take a look at Using Guard Rules Registry docs for examples.

Yes, followed the same.
@grolston, Any other suggestion. I am not sure if I am the only one facing this issue.
It appears you are using the cfn-guard command not as documented:
You have cfn-guard validate -v --data cfn-template --rules ./aws-guard-rules-registry-1.0.2/rules/aws
The rules you are pointing at are the Guard rules source directory, not the compiled rules (it was not tested nor intended to be used like that in raw form). When you download the release rules from here and unzip the folder, you will see files you can use in the output directory. For example, there is a file named NIST800-53Rev5.guard; if I use the command:
cfn-guard validate -v --data cfn-template --rules ./NIST800-53Rev5.guard
it will test the template against the rules in the NIST800-53Rev5.guard file.
If you are looking to test against every rule (not recommended, as you should have a plan for what your rule set should include), you can use the guard-rules-registry-all-rules.guard located in the output directory of the release.
|
gharchive/issue
| 2023-04-21T12:08:01 |
2025-04-01T04:56:05.683057
|
{
"authors": [
"grolston",
"nitinitare"
],
"repo": "aws-cloudformation/aws-guard-rules-registry",
"url": "https://github.com/aws-cloudformation/aws-guard-rules-registry/issues/245",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1289660503
|
Revert "Update Otel Version15.0 dependencies and licenses"
Reverts aws-observability/aws-otel-java-instrumentation#180
Codecov Report
Merging #189 (0d4e2f3) into main (4654c1e) will not change coverage.
The diff coverage is n/a.
@@ Coverage Diff @@
## main #189 +/- ##
=========================================
Coverage 73.91% 73.91%
Complexity 15 15
=========================================
Files 3 3
Lines 46 46
Branches 5 5
=========================================
Hits 34 34
Misses 8 8
Partials 4 4
:mega: Codecov can now indicate which changes are the most critical in Pull Requests. Learn more
|
gharchive/pull-request
| 2022-06-30T06:28:53 |
2025-04-01T04:56:05.693612
|
{
"authors": [
"codecov-commenter",
"vasireddy99"
],
"repo": "aws-observability/aws-otel-java-instrumentation",
"url": "https://github.com/aws-observability/aws-otel-java-instrumentation/pull/189",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2671585862
|
Bug: Examples should serialize under parameters when using Path parameters
Expected Behaviour
Original report: https://github.com/aws-powertools/powertools-lambda-python/pull/5575/files#r1846670069
The examples should be under parameters, not under schema.
https://swagger.io/specification/#parameter-object
I was not able to change the output location of the examples. In this PR, it will only enable the definition of examples in the Schema Object.
https://swagger.io/specification/#schema-object
wrong
"parameters": [
{
"required": false,
"schema": {
"type": "integer",
"exclusiveMaximum": 100.0,
"exclusiveMinimum": 0.0,
"title": "Count",
"default": 1,
"examples": [
{
"summary": "Example 1",
"description": null,
"value": 10,
"externalValue": null
}
]
},
"name": "count",
"in": "query"
}
],
expected
"parameters": [
{
"required": false,
"schema": {
"type": "integer",
"exclusiveMaximum": 100.0,
"exclusiveMinimum": 0.0,
"title": "Count",
"default": 1
},
"examples": [
{
"summary": "Example 1",
"description": null,
"value": 10,
"externalValue": null
}
],
"name": "count",
"in": "query"
}
],
Current Behaviour
This is being serialized under schema and not showing in SwaggerUI.
"parameters": [
{
"required": false,
"schema": {
"type": "integer",
"exclusiveMaximum": 100.0,
"exclusiveMinimum": 0.0,
"title": "Count",
"default": 1,
"examples": [
{
"summary": "Example 1",
"description": null,
"value": 10,
"externalValue": null
}
]
},
"name": "count",
"in": "query"
}
],
Code snippet
from typing import Annotated, List

import requests
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext
from aws_lambda_powertools.event_handler.openapi.params import Path
from aws_lambda_powertools.event_handler.openapi.models import Example
from pydantic import BaseModel, Field
from dataclasses import dataclass

app = APIGatewayRestResolver(enable_validation=True)
app.enable_swagger(path="/swagger")  # (1)!

@app.get("/todos/<todo_id>")
def get_todo_by_id(todo_id: Annotated[str, Path(examples=Example(summary='a', value="1"))]) -> str:
    todo = requests.get("https://jsonplaceholder.typicode.com/todos")
    todo.raise_for_status()
    return "test"

def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)

if __name__ == "__main__":
    print(app.get_openapi_json_schema())
Possible Solution
No response
Steps to Reproduce
Run the code above
Powertools for AWS Lambda (Python) version
latest
AWS Lambda function runtime
3.13
Packaging format used
Lambda Layers, PyPi
Debugging logs
In FastAPI, the example definitions for JSON Schema and OpenAPI are handled separately.
I think it would be better to follow this approach in Powertools as well and prepare openapi_examples separately.
openapi_examples: Optional[Dict[str, Example]] = None
https://fastapi.tiangolo.com/tutorial/schema-extra-example/#using-the-openapi_examples-parameter
https://github.com/fastapi/fastapi/blob/0.115.5/fastapi/params.py#L69
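For illustration, a rough sketch of what that separation might look like in Powertools, reusing the route from the snippet above; the openapi_examples parameter is hypothetical here (it mirrors FastAPI and does not exist in Powertools today):

```python
from aws_lambda_powertools.event_handler.openapi.models import Example
from aws_lambda_powertools.event_handler.openapi.params import Path

# Hypothetical API: examples for the OpenAPI Parameter Object, kept out of the JSON Schema
@app.get("/todos/<todo_id>")
def get_todo_by_id(
    todo_id: Annotated[
        str,
        Path(openapi_examples={"first": Example(summary="Example 1", value="1")}),
    ],
) -> str:
    ...
```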
|
gharchive/issue
| 2024-11-19T10:11:12 |
2025-04-01T04:56:05.702369
|
{
"authors": [
"leandrodamascena",
"tonsho"
],
"repo": "aws-powertools/powertools-lambda-python",
"url": "https://github.com/aws-powertools/powertools-lambda-python/issues/5587",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2134490899
|
Ack addon must support inline policies for EKS
Description of changes:
The EKS ack controller doesn't have a recommended managed policy, only an inline one, and the currently provided managed policy in blueprints doesn't seem to give any eks permissions, nor does any other managed policy seem to be appropriate. The best option is to support inline policies. There may be other ack controllers with similar issues, but EKS is the one I'm aware of.
Also, this file appeared to have CRLF, so I've converted it to LF to match the rest of the project - I suggest viewing the diff without whitespace
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
I don't observe tests for this, how did you validate it?
I validated by running the EKS ACK controller, which does not work without this change as it needs eks permissions which were not present before. You can simply add the following addon to a blueprint to test:
const addOn = new blueprints.addons.AckAddOn({
serviceName: AckServiceName.EKS,
}),
And then you can try applying a resource such as:
apiVersion: eks.services.k8s.aws/v1alpha1
kind: PodIdentityAssociation
metadata:
name: test
namespace: default
spec:
clusterName: your-cluster-name
namespace: default
roleARN: some-role-arn-in-this-account
serviceAccount: default
and running kubectl describe to see that the ack controller had enough permissions to attempt to create it (most likely it will fail if you didn't set the trust policy on the role appropriately, but that still proves the permissions work)
@Pjv93 that is weird. If you put the following in examples/blueprint-construct/index.ts:
import * as cdk from 'aws-cdk-lib';
import {Construct} from "constructs";
import * as blueprints from '../../lib';
const blueprintID = 'blueprint-construct-dev';
export default class BlueprintConstruct {
constructor(scope: Construct, props: cdk.StackProps) {
const addOns: Array<blueprints.ClusterAddOn> = [
new blueprints.addons.AckAddOn({
serviceName: blueprints.AckServiceName.EKS
}),
];
blueprints.EksBlueprint.builder()
.version("auto")
.addOns(...addOns)
.build(scope, blueprintID, props);
}
}
And run npx cdk synth, you should see the inline policy as an attachment to the role:
blueprintconstructdevekschartsaRoleB7051B6A:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
        - Action: sts:AssumeRoleWithWebIdentity
          Condition:
            StringEquals:
              Fn::GetAtt:
                - blueprintconstructdevekschartsaConditionJson6E37C7DB
                - Value
          Effect: Allow
          Principal:
            Federated:
              Ref: blueprintconstructdevOpenIdConnectProvider3733B8DD
      Version: "2012-10-17"
    ManagedPolicyArns:
      - Fn::Join:
          - ""
          - - "arn:"
            - Ref: AWS::Partition
            - :iam::aws:policy/AmazonEKSClusterPolicy
...
inlinepolicy168DC373:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      Statement:
        - Action:
            - eks:*
            - iam:GetRole
            - iam:PassRole
          Effect: Allow
          Resource: "*"
      Version: "2012-10-17"
    PolicyName: inlinepolicy168DC373
    Roles:
      - Ref: blueprintconstructdevekschartsaRoleB7051B6A
  Metadata:
    aws:cdk:path: blueprint-construct-dev/inline-policy/Resource
And when I run npx cdk deploy --require-approval=any-change, it does show the inline policy:
So I am not sure what could be the problem
As a sidenote, I think I should remove the EksClusterPolicy for EKS as it is unnecessary and confusing
@jackkleeman Thanks for that, the mistake was actually on my end. After running npx cdk synth && npx cdk deploy --require-approval=any-change, I see the inline policy and it looks good! I can also confirm the s3 inline policy pattern is also working. Great work.
@elamaran11 your review is marked as "requested changes". can't merge with that one, please review.
aa923ee
+1. Thankyou for the Contribution
|
gharchive/pull-request
| 2024-02-14T14:33:43 |
2025-04-01T04:56:05.709430
|
{
"authors": [
"Pjv93",
"elamaran11",
"jackkleeman",
"shapirov103"
],
"repo": "aws-quickstart/cdk-eks-blueprints",
"url": "https://github.com/aws-quickstart/cdk-eks-blueprints/pull/929",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
608654237
|
Docker packaging improvements
Overview
Fixes package output when using docker in docker (codebuild/dockerized taskcat) and fixes cases where package zips could be owned by root.
/do-e2e-tests
/do-e2e-tests
|
gharchive/pull-request
| 2020-04-28T22:24:20 |
2025-04-01T04:56:05.712883
|
{
"authors": [
"jaymccon"
],
"repo": "aws-quickstart/taskcat",
"url": "https://github.com/aws-quickstart/taskcat/pull/564",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2325606526
|
Adding Converse API notebooks
Issue #, if available: None.
Description of changes: Adding 2 notebooks for the Converse API in Bedrock, one in the 'introduction-to-bedrock' folder, and another in the 'function-calling' folder.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Intro notebook works
1. Can we simplify this:
```python
tools_list_obj = ToolsList()
function_calling = ''
tool_args = {}
for c in output['output']['message']['content']:
    if 'toolUse' in c:
        function_calling = c['toolUse']

#Check if function calling is triggered:
if (function_calling):
```
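One possible simplification, as an untested sketch (it assumes the same `output` structure used in the notebook):

```python
# Hypothetical simplification: grab the first toolUse block, if any
function_calling = next(
    (c["toolUse"] for c in output["output"]["message"]["content"] if "toolUse" in c),
    None,
)
if function_calling:
    pass  # handle the tool call as before
```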
2. Error in the function calling example, full trace
```
---------------------------------------------------------------------------
ParamValidationError Traceback (most recent call last)
Cell In[10], line 7
1 prompts = [
2 "What is the weather like in Queens, NY?",
3 "What is the capital of France?"
4 ]
6 for prompt in prompts:
----> 7 converse(
8 system = "You're provided with a tool that can get the weather information for a specific locations 'get_weather'; \
9 only use the tool if required. Don't make reference to the tools in your final answer.",
10 prompt = prompt
11 )
Cell In[9], line 18, in converse(prompt, system)
15 print(f"\n{datetime.now().strftime('%H:%M:%S')} - Initial prompt:\n{json.dumps(messages, indent=2)}")
17 #Invoke the model the first time:
---> 18 output = converse_with_tools(messages, system)
19 print(f"\n{datetime.now().strftime('%H:%M:%S')} - Output so far:\n{json.dumps(output['output'], indent=2)}")
21 #Add the intermediate output to the prompt:
Cell In[7], line 3, in converse_with_tools(messages, system, toolConfig)
2 def converse_with_tools(messages, system='', toolConfig=toolConfig):
----> 3 response = bedrock.converse(
4 modelId=modelId,
5 system=system,
6 messages=messages,
7 toolConfig=toolConfig
8 )
9 return response
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:565, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
561 raise TypeError(
562 f"{py_operation_name}() only accepts keyword arguments."
563 )
564 # The "self" in this scope is referring to the BaseClient.
--> 565 return self._make_api_call(operation_name, kwargs)
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:974, in BaseClient._make_api_call(self, operation_name, api_params)
970 if properties:
971 # Pass arbitrary endpoint info with the Request
972 # for use during construction.
973 request_context['endpoint_properties'] = properties
--> 974 request_dict = self._convert_to_request_dict(
975 api_params=api_params,
976 operation_model=operation_model,
977 endpoint_url=endpoint_url,
978 context=request_context,
979 headers=additional_headers,
980 )
981 resolve_checksum_context(request_dict, operation_model, api_params)
983 service_id = self._service_model.service_id.hyphenize()
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:1048, in BaseClient._convert_to_request_dict(self, api_params, operation_model, endpoint_url, context, headers, set_user_agent_header)
1039 def _convert_to_request_dict(
1040 self,
1041 api_params,
(...)
1046 set_user_agent_header=True,
1047 ):
-> 1048 request_dict = self._serializer.serialize_to_request(
1049 api_params, operation_model
1050 )
1051 if not self._client_config.inject_host_prefix:
1052 request_dict.pop('host_prefix', None)
File /opt/conda/lib/python3.10/site-packages/botocore/validate.py:381, in ParamValidationDecorator.serialize_to_request(self, parameters, operation_model)
377 report = self._param_validator.validate(
378 parameters, operation_model.input_shape
379 )
380 if report.has_errors():
--> 381 raise ParamValidationError(report=report.generate_report())
382 return self._serializer.serialize_to_request(
383 parameters, operation_model
384 )
ParamValidationError: Parameter validation failed:
Invalid type for parameter system, value: You're provided with a tool that can get the weather information for a specific locations 'get_weather'; only use the tool if required. Don't make reference to the tools in your final answer., type: <class 'str'>, valid types: <class 'list'>, <class 'tuple'>
```
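On the first point, one possible simplification (just a sketch, assuming the usual Converse response shape where a toolUse content block carries name and input fields) is to pull the block out with next() instead of the manual loop:
```
# Grab the first toolUse content block, or None if the model didn't request a tool
tool_use = next(
    (c['toolUse'] for c in output['output']['message']['content'] if 'toolUse' in c),
    None,
)
if tool_use:
    tool_name = tool_use['name']   # tool the model wants to call
    tool_args = tool_use['input']  # arguments supplied by the model
```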
updated boto3 and botocore to latest as of now. @markproy @rodzanto
works without passing in the system message FYI
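For reference, the ParamValidationError above comes from passing system as a plain string; converse expects a list of content blocks. A minimal sketch of the call — the model ID below is a placeholder, and the client is assumed to be a bedrock-runtime client with region and credentials already configured:
```
import boto3

bedrock = boto3.client("bedrock-runtime")

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    system=[{"text": "You're provided with a tool that can get the weather for a location; only use it if required."}],
    messages=[{"role": "user", "content": [{"text": "What is the weather like in Queens, NY?"}]}],
)
print(response["output"]["message"]["content"])
```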
|
gharchive/pull-request
| 2024-05-30T12:50:43 |
2025-04-01T04:56:05.716557
|
{
"authors": [
"rodzanto",
"w601sxs"
],
"repo": "aws-samples/amazon-bedrock-samples",
"url": "https://github.com/aws-samples/amazon-bedrock-samples/pull/177",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1468168747
|
How to make a rewrite?
The docs say that it is possible to use a rewrite, but there isn't any example of how to achieve that
CloudFront Functions Docs
I tried to change the request; it works in the test tab but not in the CloudFront distribution.
Code:
function handler(event) {
    var request = event.request;
    var host = request.headers.host.value;
    if (host.startsWith("www.")) {
        host = host.replace("www.", "");
        request.headers.host.value = host;
        return request;
    }
    return request;
}
Test in cloudfront Function:
Test against distribution URL:
502 ERROR
The request could not be satisfied.
The CloudFront function tried to add, delete, or change a read-only header. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
I ended up with this code for all my needs:
function handler(event) {
var shouldRedirect = false;
var request = event.request;
var queryString = queryStringObjectToString(request.querystring);
var uri = request.uri;
var regex = /[A-Z]/g;
var headers = request.headers;
var host = headers.host.value;
var last_uri_character = uri.slice(-1);
// If the URI has a dot we assume it is a file and we don't redirect or rewrite
if (uri.includes(".")) {
return request;
}
// If the URI contains a capital letter, rewrite to the lowercase version
if (regex.test(uri)) {
uri = request.uri.toLowerCase();
request.uri = uri;
}
// If the host is www we redirect to the naked domain
if (host.startsWith("www.")) {
host = host.replace("www.", "");
delete headers.host;
shouldRedirect = true;
}
// If the URI doesn't end with a slash, redirect to the version with a trailing slash
if (last_uri_character !== "/") {
uri = uri + "/";
shouldRedirect = true;
}
// If we should redirect, do it
if (shouldRedirect) {
var url = "https://" + host + uri + queryString;
var response = {
statusCode: 301,
statusDescription: "Permanently Moved",
headers: { location: { value: url } },
cookies: request.cookies,
};
return response;
}
return request;
}
/**
* @description: Converts an event.request.querystring object back to a query string
* @date 2022-11-28
* @param {object} queryStringObject: Query string object as provided by event.request
* @return {string}: A normalised query string in the form ?key1=valn&key2=val2...
*/
function queryStringObjectToString(queryStringObject) {
// Convert the query string object to an array of entries and reduce to a single string
return (
Object.entries(queryStringObject)
.reduce((p, q) => {
if (!q[1].multiValue) {
// Process a single key/value property
return (p += q[0] + "=" + q[1].value + "&");
} else {
// Process multiValue properties. E.g arrays
return (p +=
q[1].multiValue.map((v) => q[0] + "=" + v.value).join("&") + "&");
}
}, "?")
// Remove the trailing ampersand
.slice(0, -1)
);
}
Thanks.
|
gharchive/issue
| 2022-11-29T14:32:44 |
2025-04-01T04:56:05.720905
|
{
"authors": [
"renzit"
],
"repo": "aws-samples/amazon-cloudfront-functions",
"url": "https://github.com/aws-samples/amazon-cloudfront-functions/issues/31",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1478887944
|
Codeserver Icon doesn't appear
So I've followed this guide to set up code-server on SageMaker Studio.
The codeserver icon doesn't appear but I can still access the codeserver by slightly changing the URL -
[domain-id].studio.[aws-region].sagemaker.aws/jupyter/default/codeserver. Screenshot below.
I suspect the icon fails to appear because of the new SageMaker Studio UI. Screenshot below.
I noticed the new UI on 5th December 22 which is when I tried to set up the codeserver on SageMaker Studio.
Hi @nskrjabins, thanks for reaching out and for opening this issue.
We are aware of this behavior, and we will provide an update soon.
In the meantime, the workaround you have identified is what we would suggest doing to continue accessing code-server.
I'll keep this issue open for updates.
@giuseppeporcelli Any ETA on when this will be fixed?
Any ETA?
It appears for me now with the latest version of the studio install script.
This didn't work for me, the code server icon still doesn't appear in the UI
I also don't see the icon, changing the url works (JL3).
After updating the domain and setting the default SageMakerImageArn to "arn:aws:sagemaker:eu-central-1:936697816551:image/jupyter-server-3" the icon shows up...
Hi,
I'm happy to share that this issue is solved by upgrading to the latest version (5.1000.0) of Amazon SageMaker Studio (JupyterServer). Then, you just need to re-install code-server as explained in this repo.
The code-server icon will appear in the launcher as expected, as shown below:
Instructions on how to upgrade Amazon SageMaker Studio are here: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-tasks-update-studio.html
Thanks for your patience.
@giuseppeporcelli I think https://github.com/jupyterlab/jupyterlab/releases/tag/v3.6.1 might have broken the icons in studio again.
|
gharchive/issue
| 2022-12-06T11:36:12 |
2025-04-01T04:56:05.727704
|
{
"authors": [
"Almenon",
"giuseppeporcelli",
"massi-ang",
"nskrjabins",
"orangewise",
"tonyszhang",
"wiliscavalcante"
],
"repo": "aws-samples/amazon-sagemaker-codeserver",
"url": "https://github.com/aws-samples/amazon-sagemaker-codeserver/issues/12",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
481686208
|
Log downloads NoRegionError.
I am trying to download the logs with cw_utils but I keep getting NoRegionError. Does anyone know how I can fix this?
stream_name = 'XXXX'
fname = 'logs/deepracer-%s.log' %stream_name
cw_utils.download_log(fname, stream_prefix=stream_name)
---------------------------------------------------------------------------
NoRegionError Traceback (most recent call last)
<ipython-input-48-da5f8f73ddad> in <module>
----> 1 cw_utils.download_log(fname, stream_prefix=stream_name)
~/projects/deepracer-models/aws-deepracer-workshops/log-analysis/cw_utils.py in download_log(fname, stream_name, stream_prefix, log_group, start_time, end_time)
59 end_time=end_time
60 )
---> 61 for event in logs:
62 f.write(event['message'].rstrip())
63 f.write("\n")
~/projects/deepracer-models/aws-deepracer-workshops/log-analysis/cw_utils.py in get_log_events(log_group, stream_name, stream_prefix, start_time, end_time)
12
13 def get_log_events(log_group, stream_name=None, stream_prefix=None, start_time=None, end_time=None):
---> 14 client = boto3.client('logs')
15 if stream_name is None and stream_prefix is None:
16 print("both stream name and prefix can't be None")
/anaconda3/lib/python3.7/site-packages/boto3/__init__.py in client(*args, **kwargs)
89 See :py:meth:`boto3.session.Session.client`.
90 """
---> 91 return _get_default_session().client(*args, **kwargs)
92
93
/anaconda3/lib/python3.7/site-packages/boto3/session.py in client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)
261 aws_access_key_id=aws_access_key_id,
262 aws_secret_access_key=aws_secret_access_key,
--> 263 aws_session_token=aws_session_token, config=config)
264
265 def resource(self, service_name, region_name=None, api_version=None,
/anaconda3/lib/python3.7/site-packages/botocore/session.py in create_client(self, service_name, region_name, api_version, use_ssl, verify, endpoint_url, aws_access_key_id, aws_secret_access_key, aws_session_token, config)
837 is_secure=use_ssl, endpoint_url=endpoint_url, verify=verify,
838 credentials=credentials, scoped_config=self.get_scoped_config(),
--> 839 client_config=config, api_version=api_version)
840 monitor = self._get_internal_component('monitor')
841 if monitor is not None:
/anaconda3/lib/python3.7/site-packages/botocore/client.py in create_client(self, service_name, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, api_version, client_config)
84 client_args = self._get_client_args(
85 service_model, region_name, is_secure, endpoint_url,
---> 86 verify, credentials, scoped_config, client_config, endpoint_bridge)
87 service_client = cls(**client_args)
88 self._register_retries(service_client)
/anaconda3/lib/python3.7/site-packages/botocore/client.py in _get_client_args(self, service_model, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, client_config, endpoint_bridge)
326 return args_creator.get_client_args(
327 service_model, region_name, is_secure, endpoint_url,
--> 328 verify, credentials, scoped_config, client_config, endpoint_bridge)
329
330 def _create_methods(self, service_model):
/anaconda3/lib/python3.7/site-packages/botocore/args.py in get_client_args(self, service_model, region_name, is_secure, endpoint_url, verify, credentials, scoped_config, client_config, endpoint_bridge)
45 final_args = self.compute_client_args(
46 service_model, client_config, endpoint_bridge, region_name,
---> 47 endpoint_url, is_secure, scoped_config)
48
49 service_name = final_args['service_name']
/anaconda3/lib/python3.7/site-packages/botocore/args.py in compute_client_args(self, service_model, client_config, endpoint_bridge, region_name, endpoint_url, is_secure, scoped_config)
115
116 endpoint_config = endpoint_bridge.resolve(
--> 117 service_name, region_name, endpoint_url, is_secure)
118
119 # Override the user agent if specified in the client config.
/anaconda3/lib/python3.7/site-packages/botocore/client.py in resolve(self, service_name, region_name, endpoint_url, is_secure)
400 region_name = self._check_default_region(service_name, region_name)
401 resolved = self.endpoint_resolver.construct_endpoint(
--> 402 service_name, region_name)
403 if resolved:
404 return self._create_endpoint(
/anaconda3/lib/python3.7/site-packages/botocore/regions.py in construct_endpoint(self, service_name, region_name)
120 for partition in self._endpoint_data['partitions']:
121 result = self._endpoint_for_partition(
--> 122 partition, service_name, region_name)
123 if result:
124 return result
/anaconda3/lib/python3.7/site-packages/botocore/regions.py in _endpoint_for_partition(self, partition, service_name, region_name)
133 region_name = service_data['partitionEndpoint']
134 else:
--> 135 raise NoRegionError()
136 # Attempt to resolve the exact region for this partition.
137 if region_name in service_data['endpoints']:
NoRegionError: You must specify a region.
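For anyone else hitting this: the error only means boto3 could not find a default region when cw_utils created its CloudWatch Logs client. A minimal sketch of one fix — setting a default session region before calling download_log (the region name below is an assumption; use the region your DeepRacer logs live in):
import boto3
boto3.setup_default_session(region_name='us-east-1')  # assumed region, adjust to yours
# alternatively, set the AWS_DEFAULT_REGION environment variable before starting Jupyter
cw_utils.download_log(fname, stream_prefix=stream_name)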
I will close this as the problem got fixed.
|
gharchive/issue
| 2019-08-16T16:23:50 |
2025-04-01T04:56:05.730762
|
{
"authors": [
"decarvalhohenrique"
],
"repo": "aws-samples/aws-deepracer-workshops",
"url": "https://github.com/aws-samples/aws-deepracer-workshops/issues/34",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
998096443
|
Why MUSL?
Lambda Rust with provided.al2 can run on native linux binaries: https://github.com/umccr/s3-rust-noodles-bam/tree/s3-server
See: https://github.com/awslabs/aws-lambda-rust-runtime/discussions/306
For now, Lambda Adapter is used in container images, not custom runtime.
We use musl to build Lambda Adapter as a statically linked binary, so that it works in any Linux base image that customers may choose. It should even work in a blank base image such as SCRATCH.
|
gharchive/issue
| 2021-09-16T11:22:25 |
2025-04-01T04:56:05.733565
|
{
"authors": [
"bnusunny",
"brainstorm"
],
"repo": "aws-samples/aws-lambda-adapter",
"url": "https://github.com/aws-samples/aws-lambda-adapter/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1692971559
|
Fix stack cleanup
In some cases TemplateDescription can have '\n' encoded inside it, and when we process it through bash echo it causes a line break, resulting in invalid JSON.
Here is sample output I received from list-stacks that broke cleanup.sh (notice the \n in the JSON response):
➜ aws-saas-factory-ref-solution-serverless-saas git:(adil/fix_stack_cleanup) ✗ saas aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE ROLLBACK_COMPLETE UPDATE_COMPLETE UPDATE_ROLLBACK_COMPLETE IMPORT_COMPLETE IMPORT_ROLLBACK_COMPLETE
...
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
},
...
@suhussai
Hi @adilhafeez ,
I'm having trouble replicating the issue.
I copied the output in your comment and ran jq to parse it, and it didn't throw any error:
❯ cat jsonNewLine
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
❯ cat jsonNewLine | jq
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
❯ cat jsonNewLine | jq '.TemplateDescription'
"Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n"
I might be missing something. Can you elaborate on the exact error you are seeing and how I can reproduce it?
here is how to repro
➜ ~ cat > json
[
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
➜ ~ X=`cat json`
➜ ~ echo $X | jq .
parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 6, column 1
and the fix
➜ ~ Y=$(cat json | sed -e 's/\\n//g')
➜ ~ echo $Y
[
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
➜ ~ echo $Y | jq .
[
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
This is really interesting. I have been using this cleanup.sh script for a while, and have never seen this error before. I'll have to do some digging to figure out why.
FWIW I am using mac m1 max (13.1 (22C65))
This was a real head scratcher, but I think I got it. It seems to be a shell issue. I created this small script to mimic what the script was doing:
cat <<EOT >> jsonNewLine.txt
[
{
"StackId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas-APIs-17O4B7RH955WQ/da23b660-e53e-11ed-8b82-12cd17ee3641",
"StackName": "serverless-saas-APIs-17O4B7RH955WQ",
"TemplateDescription": "Template to setup api gateway, apis, api keys and usage plan as part of bootstrap\n",
"CreationTime": "2023-04-27T21:02:43.084000+00:00",
"StackStatus": "CREATE_COMPLETE",
"ParentId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"RootId": "arn:aws:cloudformation:us-east-1:264380604816:stack/serverless-saas/ebd2a610-e53d-11ed-a021-12e60651c6c3",
"DriftInformation": {
"StackDriftStatus": "NOT_CHECKED"
}
}
]
EOT
response=$(cat jsonNewLine.txt)
name=$(echo "$response" | jq -r '.[0].StackName')
echo "$name"
rm jsonNewLine.txt
Now, when I run that script using bash, it works:
❯ bash script.sh
serverless-saas-APIs-17O4B7RH955WQ
But, when I run it using sh, I get the error you posted:
❯ sh script.sh
parse error: Invalid string: control characters from U+0000 through U+001F must be escaped at line 6, column 1
So, I think the reason I never saw this was because I typically run scripts using bash directly.
Regardless, I don't think we need to do anything else here.
Thanks again for the contribution!
That resolves the mystery - I didn't know bash and sh treated escape characters differently. Thanks for reporting back.
|
gharchive/pull-request
| 2023-05-02T19:20:33 |
2025-04-01T04:56:05.745571
|
{
"authors": [
"adilhafeez",
"suhussai"
],
"repo": "aws-samples/aws-saas-factory-ref-solution-serverless-saas",
"url": "https://github.com/aws-samples/aws-saas-factory-ref-solution-serverless-saas/pull/52",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2539761354
|
Improvements/#427
Issue #, if available: https://github.com/aws-samples/awsome-distributed-training/issues/427
Description of changes: This PR has 3 commits, each addressing separate issues.
Commit 92d1b0c to fix an incorrect config param in config.py, a previously undocumented issue.
Commit abd677e to modify order which Docker / Enroot / Pyxis is called in lifecycle scripts, to mitigate chance of encountering race condition documented in issue 427
Commit 3c9a655 to further address 427 by adding a while loop that will poll (max 120s) dlami-nvme.service for active and execStart messages. This provides assurance that /opt/dlami/nvme is mounted to node prior to executing enroot configuration which will use /opt/dlami/nvme. This commit also updates the order of if/elif statement to first try /opt/dlami/nvme before /opt/sagemaker.
This PR has been tested successfully on a HyperPod cluster with 1 p5 and 4 c5.4xlarge instances to verify the intended outcome, and logs were analyzed to confirm the while loop functions properly.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
resolved comment from mhugueaws
|
gharchive/pull-request
| 2024-09-20T22:15:23 |
2025-04-01T04:56:05.750020
|
{
"authors": [
"nghtm"
],
"repo": "aws-samples/awsome-distributed-training",
"url": "https://github.com/aws-samples/awsome-distributed-training/pull/438",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
867142086
|
SG update to be able to use the template
Opening up the security group to avoid an error in the UI when you try to open the EMR Studio. The SG permission should be updated to the latest definition in the AWS documentation; this is just a patch for the people trying to use this template.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution under the terms of your choice.
Feel free to reject it :) this is just a "patch work" to be able to use the current template

On Tue, 27 Apr 2021, 02:39 emrnotebooks wrote:
Thanks for submitting this PR. Actually there are more to be updated. Let us update the service role policy, which should be present in both full and min dependency YAML files.

Yep. We have updated the full and min dependency template files. Please let us know if the latest version is working for you.
Again, thanks for bringing this to our attention.
Best regards, Ray

You're welcome and thanks for the update
|
gharchive/pull-request
| 2021-04-25T23:28:02 |
2025-04-01T04:56:05.774049
|
{
"authors": [
"albertocubeddu",
"emrnotebooks"
],
"repo": "aws-samples/emr-studio-samples",
"url": "https://github.com/aws-samples/emr-studio-samples/pull/1",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2581955042
|
Event Model
Model Description
The event model should provide a way to receive events so they can be sent to an event backbone.
It should also provide the ability to manage subscriptions so that different capabilities can subscribe to events of interest.
Required Actions
Receive Event - a way to create a new event.
Create/Update/Delete Subscriptions - a way to manage subscriptions. Use callbacks/webhooks for delivery of matched events.
@gchagnon check out the feature branch called /feat/event-support to see a candidate model and README I added for event support. With this model we would have a dedicated /events endpoint that would be the place events are created. The server for the events model would have a connector just like any other service. The first connector we'd probably create is to Event Bridge. We can also consider using asyncapi to model this, probably requires more discussion.
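To make the Receive Event and subscription actions concrete, here is a rough sketch of what calling such an /events endpoint could look like; the URL, field names, and payload shapes below are illustrative assumptions, not part of the candidate model:
import requests

# Hypothetical event payload posted to the /events endpoint
event = {
    'eventType': 'order.created',
    'source': 'orders-service',
    'detail': {'orderId': '123'},
}
requests.post('https://api.example.com/events', json=event, timeout=5)

# A subscription pairs an event type filter with a webhook/callback for delivery of matched events
subscription = {
    'eventType': 'order.created',
    'callbackUrl': 'https://consumer.example.com/hooks/orders',
}
requests.post('https://api.example.com/subscriptions', json=subscription, timeout=5)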
|
gharchive/issue
| 2024-10-11T18:27:00 |
2025-04-01T04:56:05.776444
|
{
"authors": [
"paulfryer"
],
"repo": "aws-samples/industry-reference-models",
"url": "https://github.com/aws-samples/industry-reference-models/issues/11",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1697484397
|
Error with logging permissions - needs redoing with BucketPolicy
When creating the CloudFront distribution with the YAML file it gives this error:
Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting (Service: Amazon S3; Status Code: 400; Error Code: InvalidBucketAclWithObjectOwnership;
Access logs are now correctly pushed to the S3 logging bucket
|
gharchive/issue
| 2023-05-05T11:47:38 |
2025-04-01T04:56:05.778072
|
{
"authors": [
"jeanbaptisteguillois",
"scdba"
],
"repo": "aws-samples/react-cors-spa",
"url": "https://github.com/aws-samples/react-cors-spa/issues/12",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
796581491
|
The source code for lambda functions is not present in simple-ssr/edge-build folder
The lambda code referenced in the CDK part doesn't exist in the respective folders
This code is generated by webpack during build phase.
You need to build simple-ssr project before deploying it with AWS CDK.
You should run the next commands
cd ../simple-ssr
npm install
npm run build-all
cd ../cdk
cdk deploy SSRAppStack --parameters mySiteBucketName=
|
gharchive/issue
| 2021-01-29T05:05:20 |
2025-04-01T04:56:05.780899
|
{
"authors": [
"roman-boiko",
"sachin10101998"
],
"repo": "aws-samples/react-ssr-lambda",
"url": "https://github.com/aws-samples/react-ssr-lambda/issues/1",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1635007117
|
add lora fine tuning
Issue #, if available:
Description of changes:
support safetensor/hf model path/ckpt format
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
It's good for me !
|
gharchive/pull-request
| 2023-03-22T03:46:02 |
2025-04-01T04:56:05.782411
|
{
"authors": [
"qingyuan18",
"stevensu1977"
],
"repo": "aws-samples/sagemaker-stablediffusion-quick-kit",
"url": "https://github.com/aws-samples/sagemaker-stablediffusion-quick-kit/pull/23",
"license": "MIT-0",
"license_type": "permissive",
"license_source": "github-api"
}
|
801437140
|
Vue Warning deleting vocabulary: "customVocabularyName" was assigned to but it has no setter.
Transcribe -> Save Vocabulary -> Delete Vocabulary
From stack overflow:
https://stackoverflow.com/questions/46106037/vuex-computed-property-name-was-assigned-to-but-it-has-no-setter
I'm not sure when this was fixed, but it's not occurring anymore so I'm closing the issue.
|
gharchive/issue
| 2021-02-04T16:23:02 |
2025-04-01T04:56:05.803788
|
{
"authors": [
"aburkleaux-amazon",
"ianwow"
],
"repo": "aws-solutions/aws-media-insights-content-localization",
"url": "https://github.com/aws-solutions/aws-media-insights-content-localization/issues/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1973749308
|
How to turn off monitoring ?
Hello,
Looking at the doc, I can't find how to turn off monitoring completely. I'd like to turn off cloudwatch + X-ray.
When I get to the lambda function settings ("monitoring tools" section) I found:
Thanx for your help.
Hi @smknstd
You can try following the answer in this stackoverflow question, to prevent the function from creating logs.
Let me know if you have any issues,
Simon
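For the X-Ray part specifically, active tracing can be switched off on the image handler's Lambda function. A minimal sketch using boto3 — the function name below is an assumption, and a future stack update may re-enable tracing:
import boto3

lam = boto3.client('lambda')
lam.update_function_configuration(
    FunctionName='serverless-image-handler-ImageHandlerFunction',  # assumed name, check your stack
    TracingConfig={'Mode': 'PassThrough'},  # turn off active X-Ray tracing
)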
Thanx for your answer. Will try the workaround even if it seems brutal though (ideally I'd like to keep logging errors I guess).
In the meantime it seems I managed to switch off some logging set up by default by the API Gateway:
It seems those stopped immediately:
|
gharchive/issue
| 2023-11-02T08:29:57 |
2025-04-01T04:56:05.808536
|
{
"authors": [
"simonkrol",
"smknstd"
],
"repo": "aws-solutions/serverless-image-handler",
"url": "https://github.com/aws-solutions/serverless-image-handler/issues/517",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
867237138
|
Fix the number of secondary IP (aliases)
Issue #, if available:
None
Description of changes:
When the number of secondary IPs is greater than 1, ${aliases} only gets the first element (same as ${aliases[0]}).
By modifying it to ${aliases[@]}, all elements can be traversed.
Changes:
Patch rewrite_aliases()
${aliases} -> ${aliases[@]}
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
Thanks for your accurate diagnosis and fix.
The test suite recently introduced would also have caught this, but didn't actually test >2 (1 primary, 1 secondary) IPv4 addresses on an interface. I will add an additional commit to cover that scenario.
|
gharchive/pull-request
| 2021-04-26T04:15:33 |
2025-04-01T04:56:05.854195
|
{
"authors": [
"nmeyerhans",
"yhr123"
],
"repo": "aws/amazon-ec2-net-utils",
"url": "https://github.com/aws/amazon-ec2-net-utils/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
782234370
|
Updating OTA_README.md
Description
Added comments to OTA_README.md regarding supporting Python requirements and compilation flags.
Checklist:
[ ] I have tested my changes. No regression in existing tests.
[ ] My code is Linted.
No code has been changed.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
/bot run checks
/bot run checks
|
gharchive/pull-request
| 2021-01-08T16:19:12 |
2025-04-01T04:56:05.869458
|
{
"authors": [
"aggarw13",
"keithmwheeler"
],
"repo": "aws/amazon-freertos",
"url": "https://github.com/aws/amazon-freertos/pull/2928",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1057651864
|
No "--force" argument to automate delete
Is your feature request related to a problem? Please describe.
When automated, dotnet dotnet-aws delete-deployment does not offer an argument to force the delete and avoid the interactive confirmation prompt of "Are you sure you want to delete ...?"
Describe the solution you'd like
Add the argument --force and -f to improve automation reliability.
Describe alternatives you've considered
The workaround echo "y" | dotnet dotnet-aws delete-deployment ... is possible, but fragile since the use of piping "y" isn't explicit.
Additional context
Similar to #403
This is a :rocket: feature request
@ericis Thank you for trying out our tooling and providing feedback.
We added the functionality to delete applications without any user prompts as part of https://github.com/aws/aws-dotnet-deploy/pull/445.
Closing this issue as it has been resolved.
|
gharchive/issue
| 2021-11-18T18:10:16 |
2025-04-01T04:56:06.198987
|
{
"authors": [
"96malhar",
"ericis"
],
"repo": "aws/aws-dotnet-deploy",
"url": "https://github.com/aws/aws-dotnet-deploy/issues/404",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
963162279
|
feat: Save last used stack and order existing deployments by MRU stack
Issue #, if available:
Description of changes:
feat: Save last used stack and order existing deployments by MRU stack
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
General question: what is the rationale behind storing the timestamp in the aws-deployments.json file rather than using LastUpdatedTime to order all the stacks deployed through the deploy tool?
Using LastUpdatedTime would be more accurate in case a stack gets updated outside the lifecycle of the deploy tool, and it requires less maintenance on the client side.
Another downside of this approach is that, since the aws-deployments.json file is source controlled, as soon as multiple developers commit their changes, this logic will not work.
Keeping stack tracking per system (somewhere in a temp location) would solve this problem.
|
gharchive/pull-request
| 2021-08-07T05:49:30 |
2025-04-01T04:56:06.201980
|
{
"authors": [
"ganeshnj",
"philasmar"
],
"repo": "aws/aws-dotnet-deploy",
"url": "https://github.com/aws/aws-dotnet-deploy/pull/288",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1143665789
|
Revert "Integ-tests: Temporarily disable DCV test."
This reverts commit cc97913e40f30fcd39bec7253c853e977bc123ca.
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Closed as duplicate of: https://github.com/aws/aws-parallelcluster/pull/3757
|
gharchive/pull-request
| 2022-02-18T19:47:48 |
2025-04-01T04:56:06.231277
|
{
"authors": [
"enrico-usai",
"hanwen-pcluste"
],
"repo": "aws/aws-parallelcluster",
"url": "https://github.com/aws/aws-parallelcluster/pull/3794",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
873184640
|
g++-11 compile error (Ubuntu 20.04 x86_64)
Confirm by changing [ ] to [x] below to ensure that it's a bug:
[x] I've gone though Developer Guide and API reference
[x] I've searched for previous similar issues and didn't find any solution
Describe the bug
Seeing the following compiler error when trying to build with g++-11
[ 20%] Building C object crt/aws-crt-cpp/crt/s2n/CMakeFiles/s2n.dir/pq-crypto/bike_r1/converts_portable.c.o
/home/ubuntu/Crypto/build/release/externalprojects/awssdk-src/crt/aws-crt-cpp/crt/aws-c-cal/source/unix/openssl_platform_init.c: In function 'aws_cal_platform_init':
/home/ubuntu/Crypto/build/release/externalprojects/awssdk-src/crt/aws-crt-cpp/crt/aws-c-cal/source/unix/openssl_platform_init.c:386:43: error: the comparison will always evaluate as 'false' for the address of 's_locking_fn' will never be NULL [-Werror=address]
386 | if (CRYPTO_get_locking_callback() == s_locking_fn) {
| ^~
/home/ubuntu/Crypto/build/release/externalprojects/awssdk-src/crt/aws-crt-cpp/crt/aws-c-cal/source/unix/openssl_platform_init.c: In function 'aws_cal_platform_clean_up':
/home/ubuntu/Crypto/build/release/externalprojects/awssdk-src/crt/aws-crt-cpp/crt/aws-c-cal/source/unix/openssl_platform_init.c:402:39: error: the comparison will always evaluate as 'false' for the address of 's_locking_fn' will never be NULL [-Werror=address]
402 | if (CRYPTO_get_locking_callback() == s_locking_fn) {
| ^~
/home/ubuntu/Crypto/build/release/externalprojects/awssdk-src/crt/aws-crt-cpp/crt/aws-c-cal/source/unix/openssl_platform_init.c:411:34: error: the comparison will always evaluate as 'false' for the address of 's_id_fn' will never be NULL [-Werror=address]
411 | if (CRYPTO_get_id_callback() == s_id_fn) {
| ^~
cc1: all warnings being treated as errors
make[5]: *** [crt/aws-crt-cpp/crt/aws-c-cal/CMakeFiles/aws-c-cal.dir/build.make:146: crt/aws-crt-cpp/crt/aws-c-cal/CMakeFiles/aws-c-cal.dir/source/unix/openssl_platform_init.c.o] Error 1
make[5]: *** Waiting for unfinished jobs....
SDK version number
1.8.186, 1.9, and the latest
Platform/OS/Hardware/Device
g++-11 Ubuntu 20.04 x86_64
COLLECT_GCC=g++-11
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/11/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none:amdgcn-amdhsa
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 11.1.0-1ubuntu1~20.04' --with-bugurl=file:///usr/share/doc/gcc-11/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++,m2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-11 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --enable-bootstrap --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-plugin --enable-default-pie --with-system-zlib --enable-libphobos-checking=release --with-target-system-zlib=auto --enable-objc-gc=auto --enable-multiarch --disable-werror --disable-cet --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none=/build/gcc-11-2V7zgg/gcc-11-11.1.0/debian/tmp-nvptx/usr,amdgcn-amdhsa=/build/gcc-11-2V7zgg/gcc-11-11.1.0/debian/tmp-gcn/usr --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu --with-build-config=bootstrap-lto-lean --enable-link-serialization=2
Thread model: posix
Supported LTO compression algorithms: zlib zstd
gcc version 11.1.0 (Ubuntu 11.1.0-1ubuntu1~20.04)
To Reproduce (observed behavior)
Ran the build
Expected behavior
A clear and concise description of what you expected to happen.
Hi @bjkaria ,
Thanks for bringing this up to us, we'll take a look.
In the mean time though, if possible I'd suggest downgrading to gcc-10.
I tried gcc-10 too, I get the same error. gcc-9 does work.
https://github.com/awslabs/aws-c-cal/pull/93 fixes the specific warning for GCC11, and a related commit in aws-c-common has adjusted the posture of -Werror and /WX to off by default and can be enabled with an option for all aws-c-* builds.
|
gharchive/issue
| 2021-04-30T19:45:48 |
2025-04-01T04:56:06.237551
|
{
"authors": [
"DavidOgunsAWS",
"KaibaLopez",
"bjkaria"
],
"repo": "aws/aws-sdk-cpp",
"url": "https://github.com/aws/aws-sdk-cpp/issues/1635",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
938097485
|
Memory leak if s2n_cleanup is not called
Confirm by changing [ ] to [x] below to ensure that it's a bug:
[x] I've gone though Developer Guide and API reference
[x] I've searched for previous similar issues and didn't find any solution
Describe the bug
When upgrading from 1.8.x version to 1.9.49 we see memory leaks reported by the ASAN tool. See ASAN log below.
SDK version number
1.9.49
Platform/OS/Hardware/Device
Linux Debian
To Reproduce (observed behavior)
We create Aws::S3::S3Client objects on several threads. When program is terminated, several leaks are reported.
When we add a call to s2n_cleanup, https://github.com/aws/s2n-tls/blob/main/docs/USAGE-GUIDE.md#s2n_cleanup, during shutdown for each thread created there are no leaks reported.
Expected behavior
No memory leaks.
Do we really need to call s2n_cleanup explicitly just because the implementation creates a UUID?
Logs/output
1  malloc  /usr/local/bin/engine
2  CRYPTO_zalloc  /usr/local/bin/engine
3  s2n_defend_if_forked  /home/vagrant/jws/qix-pc3-build-ws/build/lib/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:144:9
4  s2n_get_private_random_data  /home/vagrant/jws/qix-pc3-build-ws/build/lib/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:175:5
5  s2n_openssl_compat_rand  /home/vagrant/jws/qix-pc3-build-ws/build/lib/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:295:29
6  …::SecureRandomBytes_OpenSSLImpl::GetBytes  /home/vagrant/jws/qix-pc3-build-ws/build/lib/aws-cpp-sdk-core/source/utils/crypto/openssl/CryptoImpl.cpp:142:31
7  …::UUID::RandomUUID  /home/vagrant/jws/qix-pc3-build-ws/build/lib/aws-cpp-sdk-core/source/utils/UUID.cpp:79:27
8  …::STSAssumeRoleWebIdentityCredentialsProvider::STSAssumeRoleWebIdentityCredentialsProvider  /home/vagrant/jws/qix-pc3-build-ws/build/lib/aws-cpp-sdk-core/source/auth/STSCredentialsProvider.cpp:91:25
9  …::__1::__compressed_pair_elem<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2196:46
10  …::__1::__compressed_pair<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2289:42
11  …::__1::__shared_ptr_emplace<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:3562:12
12  …::__1::shared_ptr<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:4195:9
13  …::__1::enable_if<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:4415:12
14  …::__1::shared_ptr<…>  /home/vagrant/jws/qix-pc3-build-ws/build/lib/aws-cpp-sdk-core/include/aws/core/utils/memory/stl/AWSAllocator.h:51:16
15  …::DefaultAWSCredentialsProviderChain::DefaultAWSCredentialsProviderChain  /home/vagrant/jws/qix-pc3-build-ws/build/lib/aws-cpp-sdk-core/source/auth/AWSCredentialsProviderChain.cpp:41:17
16  …::__1::__compressed_pair_elem<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2196:46
17  …::__1::__compressed_pair<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2289:42
18  …::__1::__shared_ptr_emplace<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:3562:12
19  …::__1::shared_ptr<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:4195:9
20  …::__1::enable_if<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:4415:12
21  …::__1::shared_ptr<…>  /home/vagrant/jws/engine-common-ws/build/HEAD-release/../../Packages/aws-sdk-cpp/include/aws/core/utils/memory/stl/AWSAllocator.h:51:16
22  GetCredentialProvider  /home/vagrant/jws/engine-common-ws/build/HEAD-release/../../src/qaws/src/S3Client.cpp:115:12
23  …::S3Client::S3Client  /home/vagrant/jws/engine-common-ws/build/HEAD-release/../../src/qaws/src/S3Client.cpp:131:9
24  …::__1::__compressed_pair_elem<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2196:46
25  …::__1::__compressed_pair<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:2289:42
26  …::__1::__shared_ptr_emplace<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:3562:12
27  …::__1::enable_if<…>  /usr/lib/llvm-10/bin/../include/c++/v1/memory:4400:26
28  …::S3ObjectStream::S3ObjectStream
The first frame is our own:
Client = Aws::MakeShared<Aws::S3::S3Client>(
"S3Client",
Aws::MakeShared<Aws::Auth::DefaultAWSCredentialsProviderChain>("DefaultAWSCredentialsProviderChain"),
Config,
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::RequestDependent,
false
);
Additional context
Memory leaks were not reported on several 1.8.x versions we have used.
Could you try this workaround (not the fix) to suppress a specific leak in S2N by:
export LSAN_OPTIONS=suppressions=scripts/suppressions.txt
Exporting the above (or should I have the file present with some content?) doesn't change anything.
==261254== 6,336 bytes in 24 blocks are indirectly lost in loss record 6 of 6
==261254== at 0x4842839: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==261254== by 0x4A726DD: CRYPTO_zalloc (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)
==261254== by 0x4A63754: EVP_CipherInit_ex (in /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1)
==261254== by 0x50B9251: s2n_drbg_instantiate (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/crypto/s2n_drbg.c:162)
==261254== by 0x50AE15D: s2n_defend_if_forked (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:143)
==261254== by 0x50AE210: s2n_get_private_random_data (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:175)
==261254== by 0x50AE544: s2n_openssl_compat_rand (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_random.c:295)
==261254== by 0x4F9DF69: Aws::Utils::Crypto::SecureRandomBytes_OpenSSLImpl::GetBytes(unsigned char*, unsigned long) (aws-sdk-cpp/aws-cpp-sdk-core/source/utils/crypto/openssl/CryptoImpl.cpp:142)
==261254== by 0x4F88337: Aws::Utils::UUID::RandomUUID() (aws-sdk-cpp/aws-cpp-sdk-core/source/utils/UUID.cpp:79)
and
==261254== 1,560 bytes in 20 blocks are still reachable in loss record 2 of 6
==261254== at 0x4842839: malloc (in /usr/libexec/valgrind/vgpreload_memcheck-amd64-linux.so)
==261254== by 0x50AD85C: s2n_mem_malloc_no_mlock_impl (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_mem.c:126)
==261254== by 0x50ACC25: s2n_realloc (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_mem.c:195)
==261254== by 0x50AC9B2: s2n_alloc (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_mem.c:157)
==261254== by 0x50AD3D5: s2n_dup (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_mem.c:240)
==261254== by 0x5104172: s2n_cipher_suites_init (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/tls/s2n_cipher_suites.c:1029)
==261254== by 0x50AB632: s2n_init (aws-sdk-cpp/crt/aws-crt-cpp/crt/s2n/utils/s2n_init.c:44)
==261254== by 0x505C657: aws_tls_init_static_state (aws-sdk-cpp/crt/aws-crt-cpp/crt/aws-c-io/source/s2n/s2n_tls_channel_handler.c:132)
==261254== by 0x5050CF2: aws_io_library_init (aws-sdk-cpp/crt/aws-crt-cpp/crt/aws-c-io/source/io.c:213)
==261254== by 0x4FDAF58: aws_mqtt_library_init (aws-sdk-cpp/crt/aws-crt-cpp/crt/aws-c-mqtt/source/mqtt.c:186)
==261254== by 0x4FBB422: Aws::Crt::s_initApi(aws_allocator*) (aws-sdk-cpp/crt/aws-crt-cpp/source/Api.cpp:39)
==261254== by 0x4FBB3E1: Aws::Crt::ApiHandle::ApiHandle(aws_allocator*) (aws-sdk-cpp/crt/aws-crt-cpp/source/Api.cpp:52)
Kind Reminder of this issue.
Aren't there any other people out there experiencing this issue?
Kind monthly reminder of this issue :)
We are facing the same issue. We make a lot of s3-crt GetObjectAsync calls, each one of which spins up a new thread (as we are using the DefaultExecutor) and adds to the memory usage.
@theShmoo how exactly did you get the export LSAN_OPTIONS hack working? It isn't helping me.
Hoi! Nice to hear that we are not alone!
We use LSAN as part of ASAN and if you use a suppression list you need to configure ASAN to recover from failures:
-fsanitize-recover=address -fsanitize-address-use-after-scope
these are the flags we use.
Certainly you are not alone. Suppressing the warnings does not feel so right :| We ended up using thread_local to call s2n_cleanup() whenever a thread is terminated. Now valgrind does not complain any more. We are happy again.
File: S2nCleanup.h
#include <s2n.h>
class S2nCleanup {
public:
~S2nCleanup() {
s2n_cleanup();
}
};
Any where we use s3Client.
thread_local S2nCleanup s2nCleanup{};
...
Aws::S3::Model::GetObjectOutcome outcome = s3Client->GetObject(request);
...
Nice! Thanks for your tipp with the thread_local variable. This also works for my use case.
Maybe it will help others as well:
I use s3 in combination with https://github.com/yhirose/cpp-httplib
To get your fix working I needed to create a custom thread pool and add the thread_local s2n cleanup:
proof of concept:
struct thread_cleanup
{
~thread_cleanup()
{
s2n_cleanup();
}
};
class task_queue : public httplib::TaskQueue
{
public:
explicit task_queue(size_t n) : m_pool(n) {}
void enqueue(std::function<void()> fn) override
{
m_pool.enqueue(
[f = std::move(fn)]
{
f();
thread_local thread_cleanup cleanup{};
});
}
void shutdown() override
{
m_pool.shutdown();
}
private:
httplib::ThreadPool m_pool;
};
// when configuring the server:
m_server->new_task_queue = []
{
// NOLINTNEXTLINE
return new task_queue(12);
};
This PR fixed a memory leak. Please let me know if you are still seeing any more leaks with the current version of this sdk.
|
gharchive/issue
| 2021-07-06T17:07:55 |
2025-04-01T04:56:06.262280
|
{
"authors": [
"databucketio",
"dhananjays",
"jmklix",
"kwach",
"magnushakansson",
"theShmoo",
"wps132230"
],
"repo": "aws/aws-sdk-cpp",
"url": "https://github.com/aws/aws-sdk-cpp/issues/1706",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|