id (string, 4 to 10 chars) | text (string, 4 chars to 2.14M) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
365883256
|
truncate_with_folder_marker shorten strategy does not work in $HOME
When the truncate_with_folder_marker strategy is used and POWERLEVEL9K_DIR_PATH_ABSOLUTE is not enabled, the dir segment is broken.
When I go to $HOME/.config, the prompt is ~~/.config.
This is because of the ${current_path#${last_marked_folder}*} at the end of the truncate_with_folder_marker case (powerlevel9k.zsh-theme:889). current_path is ~/.config and last_marked_folder is /home/myuser/.config.
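To make the mismatch concrete, here is a small Python sketch (illustrative only, not the theme's actual zsh) of the prefix strip: nothing matches when the marked folder is stored as an absolute path while the current path is abbreviated with ~, so the segment keeps the full ~ path and later picks up the extra ~.

import os

def strip_marker_prefix(current_path: str, last_marked_folder: str) -> str:
    # Mimics zsh's ${current_path#${last_marked_folder}*}: remove the marked
    # folder prefix if it matches, otherwise leave the path untouched.
    if current_path.startswith(last_marked_folder):
        return current_path[len(last_marked_folder):]
    return current_path

current_path = "~/.config"                   # abbreviated with ~ by the theme
last_marked_folder = "/home/myuser/.config"  # absolute path of the marked folder

# No match, so nothing is stripped and the segment ends up showing ~~/.config:
print(strip_marker_prefix(current_path, last_marked_folder))  # '~/.config'

# Comparing like-for-like works, e.g. abbreviating the marker path with ~ too:
home = "/home/myuser"
print(strip_marker_prefix(current_path,
                          last_marked_folder.replace(home, "~", 1)))  # ''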
Thx for the report @CedricFinance .
For me it is even worse:
Could you try #893 and see if that helps in your case? Planning to merge this PR into the next branch soon.
It's a bit better with #893 (the name of the folder containing the marker is missing).
You can take the unit test I've added in #1005 to make sure that the prompt is correct.
Thanks for the feedback. I improved #893 now and incorporated your test. Could you try again?
Btw, I rebased #893 onto the next branch, so you'll need to port your configuration over (plus side is that you are already prepared for the next release ;) )
Nice. It's better.
The path is not shortened when we are in the folder with the marker.
I have written a test to highlight the bug.
function testTruncateWithFolderMarkerInMarkedFolder() {
typeset -a P9K_LEFT_PROMPT_ELEMENTS
P9K_LEFT_PROMPT_ELEMENTS=(dir)
local P9K_DIR_SHORTEN_STRATEGY="truncate_with_folder_marker"
local BASEFOLDER=/tmp/powerlevel9k-test
local FOLDER=$BASEFOLDER/1/12
mkdir -p $FOLDER
# Setup folder marker
touch $FOLDER/.shorten_folder_marker
cd $FOLDER
assertEquals "%K{004} %F{000}/…/12 %k%F{004}%f " "$(__p9k_build_left_prompt)"
cd -
rm -fr $BASEFOLDER
}
Thanks for the test. I incorporated it in my PR, fixed it, and already merged it into next. Could you test it again?
Fixed with #893 . If there is still a problem, feel free to reopen.
It's working fine now. Thanks!
|
gharchive/issue
| 2018-10-02T12:53:24 |
2025-04-01T04:56:09.378674
|
{
"authors": [
"CedricFinance",
"dritter"
],
"repo": "bhilburn/powerlevel9k",
"url": "https://github.com/bhilburn/powerlevel9k/issues/1004",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
354584335
|
GetLowestPricedOffersForASIN returns xml value instead of object
Thank you for developing this module. I looked up a bunch of modules out there and this is by far the best I have tried so far. It helped my workflow a lot. Thank you.
However, I had some issues with specific APIs and was wondering if you could help me. Thanks in advance.
The GetLowestPricedOffersForASIN result returns XML instead of a clean object; same with offerforSKU.
`{
"data": [
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <GetLowestPricedOffersForASINResult MarketplaceID="ATVPDKIKX0DER" ItemCondition="New" ASIN="B0792JK6K6" status="Success">"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " ATVPDKIKX0DER"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " B0792JK6K6"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " New"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 2018-08-28T04:31:46.412Z"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 7"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <OfferCount condition="new" fulfillmentChannel="Amazon">2"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <OfferCount condition="new" fulfillmentChannel="Merchant">5"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <LowestPrice condition="new" fulfillmentChannel="Amazon">"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <LowestPrice condition="new" fulfillmentChannel="Merchant">"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 20.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 4.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <BuyBoxPrice condition="New">"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 25"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <OfferCount condition="new" fulfillmentChannel="Amazon">2"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <OfferCount condition="new" fulfillmentChannel="Merchant">5"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 91.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 15875"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="24" maximumHours="48" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 20.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 4.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " TX"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " US"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 100.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 254402"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="0" maximumHours="0" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.40"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 98.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 7883"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="24" maximumHours="24" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 24.95"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 99.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 107221"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="0" maximumHours="0" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 25.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 94.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 7617"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="24" maximumHours="48" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 25.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " UT"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " US"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 90.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 292"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="24" maximumHours="48" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 27.55"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " GB"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " new"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 97.0"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 73"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " <ShippingTime minimumHours="144" maximumHours="240" availabilityType="NOW"/>"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 61.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " USD"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 0.00"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " AZ"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " US"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " false"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " true"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": "
"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " 886afe0c-82c1-411a-8438-9d765d37a28f"
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": " "
},
{
"<GetLowestPricedOffersForASINResponse xmlns="http://mws.amazonservices.com/schema/Products/2011-10-01\">": ""
}
],
"Headers": {
"x-mws-quota-max": "200.0",
"x-mws-quota-remaining": "197.0",
"x-mws-quota-resetson": "2018-08-28T06:20:00.000Z",
"x-mws-timestamp": "2018-08-28T05:37:57.796Z"
}
}`
@yortrosal Would you please share a sample of the API request that you made?
@yortrosal It's working as expected. You might be making a wrong API call.
|
gharchive/issue
| 2018-08-28T06:10:03 |
2025-04-01T04:56:09.624692
|
{
"authors": [
"bhushankumarl",
"yortrosal"
],
"repo": "bhushankumarl/amazon-mws",
"url": "https://github.com/bhushankumarl/amazon-mws/issues/46",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1160195657
|
What about multi-line comments?
What about multi-line comments? Would you like one in SLAP?
Maybe something like this?
##
This is a multi-line comment.
##
Alternatively this?
#{
Comment!
}#
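Purely as an illustration of this second form, a minimal Python sketch (not SLAP's actual scanner) of how a lexer could skip a #{ ... }# block:

def skip_block_comment(src: str, start: int) -> int:
    # Assumes SLAP-style "#{ ... }#" block comments; returns the index just
    # past the closing "}#". Illustrative only.
    assert src.startswith("#{", start)
    end = src.find("}#", start + 2)
    if end == -1:
        raise SyntaxError("unterminated block comment")
    return end + 2

code = "a = 1 #{ a multi-line\ncomment }# b = 2"
i = code.index("#{")
print(code[:i] + code[skip_block_comment(code, i):])  # "a = 1  b = 2"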
Well, there's no response, so I decided to use the second one. See here f4f96ba.
I guess the second one is better.
|
gharchive/issue
| 2022-03-05T02:36:22 |
2025-04-01T04:56:09.654115
|
{
"authors": [
"GalaxianMonster",
"bichanna"
],
"repo": "bichanna/slap",
"url": "https://github.com/bichanna/slap/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
124362068
|
Suggestion: don't use Maven modules; use git submodules instead!
Split each module out on its own.
@biezhi
I looked at the current project structure: the modules don't actually depend on each other, and some modules even duplicate the contents of another. Why not split these modules out and maintain them separately? Also, my own development environment doesn't support Maven modules very well :smile:
The project is currently a standard Maven multi-module setup; I've never used git submodules. I actually did what you propose at first, but that means creating a lot of repositories. Besides, most projects do it this way, with inter-module dependencies centered on core. Which IDE are you using? IDEA?
Besides, most projects do it this way
Most projects don't manage modules with Maven, do they? They basically manage modules with git or svn. I use Vim with plugins.
https://github.com/wsdjeg/DotFiles
|
gharchive/issue
| 2015-12-30T15:33:23 |
2025-04-01T04:56:09.661875
|
{
"authors": [
"biezhi",
"wsdjeg"
],
"repo": "biezhi/blade",
"url": "https://github.com/biezhi/blade/issues/38",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
272665625
|
Send the hash to tendermint
description will be added soon
@sbellem @vrde The tendermint docs here describe that the commit should return a byte array which would be included in the header of the next block. So, I guess we could send the merkle root of the transactions included in the previous block?
Sounds good to me. Should we also add in the hash the merkle root of the previous block?
@vrde yes, I think that is good idea. Do we store the hash of the previous block ourselves or query it from tendermint?
I guess we should add a new collection to store this information. We will need to store something like:
{
"height": "<int>",
"app_hash": "<string>"
}
This information should be exposed by the info method.
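For illustration, a rough Python sketch of that idea (helper names are assumptions, not BigchainDB's actual code): the app hash combines the merkle root of the previous block's transactions with the previous app hash, and the resulting value is what would be stored next to the height.

import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tx_ids: list) -> bytes:
    # Minimal pairwise merkle root over transaction ids (hex strings).
    level = [bytes.fromhex(t) for t in tx_ids] or [b""]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def app_hash(prev_block_tx_ids: list, prev_app_hash: bytes) -> bytes:
    # Combine the previous block's tx merkle root with the previous app hash.
    return sha256(merkle_root(prev_block_tx_ids) + prev_app_hash)

# The value that would be stored as {"height": <int>, "app_hash": "<string>"}:
print(app_hash(["aa" * 32, "bb" * 32], b"\x00" * 32).hex())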
|
gharchive/issue
| 2017-11-09T18:10:50 |
2025-04-01T04:56:09.740908
|
{
"authors": [
"kansi",
"sbellem",
"vrde"
],
"repo": "bigchaindb/bigchaindb",
"url": "https://github.com/bigchaindb/bigchaindb/issues/1829",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
314971030
|
Problem: we don't have integration tests
Solution: have a simple way to start a node and run scripts against it.
For major fun I've also added some documentation, try to run make integration-test-doc :gift:
I noticed sometimes make integration-test-doc fails. It should be fixed once https://github.com/pycco-docs/pycco/pull/101 is merged.
|
gharchive/pull-request
| 2018-04-17T09:10:55 |
2025-04-01T04:56:09.742460
|
{
"authors": [
"vrde"
],
"repo": "bigchaindb/bigchaindb",
"url": "https://github.com/bigchaindb/bigchaindb/pull/2216",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2363187744
|
🤗 [REQUEST] - LLaMa2 7B HHRLHF QLoRA
Model introduction
This model is created by Sharan as an experiment for my thesis. I am working on testing different Quantization and PEFT combinations.
Model URL
https://huggingface.co/Sharan1712/llama2_7B_hhrlhf_qlora_4bit_1d
Additional instructions (Optional)
The model is quantized and merged with LoRA after finetuning.
Author
Yes
Security
[X] I confirm that the model is safe to run which does not contain any malicious code or content.
Integrity
[X] I confirm that the model comes from unique and original work and does not contain any plagiarism.
Hi @Sharan1712, is this a base model but with further finetuning?
Hi @terryyz, yes I took https://huggingface.co/meta-llama/Llama-2-7b-hf, quantized it, added LoRA layers, and finetuned the LoRA layers using HHRLHF dataset.
I am working on a small project of Quantization & PEFT methods.
Thanks for the clarification! So I guess this will be an instruction-tuned model as you used hh-rlhf. However, I saw you didn't include the chat template in the tokenizer config, which would make the evaluation use direct completion. In addition, missing the chat template means that you can't evaluate the model on the BigCodeBench-Instruct.
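For reference, a minimal sketch of how a chat template could be attached with transformers; the Jinja template below is only illustrative, and the real Llama-2 chat format should be taken from the chat model if that is what the finetuning followed:

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Sharan1712/llama2_7B_hhrlhf_qlora_4bit_1d")

# Illustrative Jinja template: wrap each user turn in [INST] ... [/INST].
tok.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}[INST] {{ message['content'] }} [/INST]"
    "{% else %}{{ message['content'] }}{{ eos_token }}{% endif %}"
    "{% endfor %}"
)

messages = [{"role": "user", "content": "Write a function that reverses a string."}]
print(tok.apply_chat_template(messages, tokenize=False))

# tok.save_pretrained(...) or tok.push_to_hub(...) would persist the template
# in tokenizer_config.json so evaluation harnesses can pick it up.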
Do you want me to add your results to the leaderboard? Or do you just want to do the eval? If it's for the eval only, you can also do it by following the steps documented in the README :)
Closed the issue for now. Feel free to reopen it if you have further questions :)
|
gharchive/issue
| 2024-06-19T22:07:35 |
2025-04-01T04:56:09.746814
|
{
"authors": [
"Sharan1712",
"terryyz"
],
"repo": "bigcode-project/bigcodebench",
"url": "https://github.com/bigcode-project/bigcodebench/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
158472098
|
overlap query reflects new formats
Resolves #1042
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/ADAM-prb/1254/
Test PASSed.
+1
Thanks, @erictu!
|
gharchive/pull-request
| 2016-06-03T23:20:48 |
2025-04-01T04:56:09.756816
|
{
"authors": [
"AmplabJenkins",
"erictu",
"heuermh"
],
"repo": "bigdatagenomics/adam",
"url": "https://github.com/bigdatagenomics/adam/pull/1043",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
224618675
|
Tutorial link in Realm website is broken
Hi, the tutorial link on the Realm website is broken: https://realm.io/news/building-an-ios-clustered-map-view-in-objective-c/
I remember visiting this link before. Was it removed intentionally?
Looks like the hostname on the link should be news.realm.io. There's also a Swift version
https://news.realm.io/news/building-an-ios-clustered-map-view-in-objective-c
https://news.realm.io/news/building-an-ios-clustered-map-view-in-swift
|
gharchive/issue
| 2017-04-26T22:32:23 |
2025-04-01T04:56:09.761627
|
{
"authors": [
"phatblat",
"rsacchi"
],
"repo": "bigfish24/ABFRealmMapView",
"url": "https://github.com/bigfish24/ABFRealmMapView/issues/41",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
434865933
|
Increase coverage report percentage #15
Increasing unit test coverage.
Still working on this. Modules that have low coverage:
response_retriever.py
request_director.py
url_director.py
method_creator.py
model_creator.py
singleton.py
Chatted with @tristan-fickes-bfg about this and learned that #15 is just about the SonarCloud link... The tests added for the Singleton and some other classes should bring lines tested over 95%.
PyCharm is showing 100% line coverage for the gamebench_api_client package when running the entire suite.
Ready for review!
|
gharchive/pull-request
| 2019-04-18T16:56:45 |
2025-04-01T04:56:09.766029
|
{
"authors": [
"kawika-brennan-bfg"
],
"repo": "bigfishgames/GameBenchAPI-PyClient",
"url": "https://github.com/bigfishgames/GameBenchAPI-PyClient/pull/26",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
246498216
|
17.07.29 - About deep learning / machine learning
Recurrent Neural Networks - Juejin
A full look at deep learning RNNs: why are they the fit for speech recognition, NLP, and machine translation? - Juejin
What is machine learning? What changes will it bring to our lives?
In today's applications of machine learning algorithms, we need to extract features that represent the business, train models on them with algorithms, and use those models to predict outcomes on a prediction set whose results are unknown.
In the 1970s, as the theoretical algorithms gradually matured, the development of artificial intelligence hit a bottleneck in computing resources. With computational complexity growing exponentially, the large machines of the 1970s could not keep up. At the same time, the Internet was still in its early days and data accumulation had only just begun, so scientists often did not have enough data to train models.
AlphaGo's success not only validated the practicality of deep learning and Monte Carlo search algorithms; it also confirmed once again that humans are no longer the only carriers capable of producing intelligence. Any machine that can receive, store, and analyze information can produce intelligence, and the key factors are the scale of the information and the depth of the algorithms.
The history of artificial intelligence is the history of continually evolving ways to collect and analyze past experience. Before machines existed, humans could only judge things through other people's sharing and their own practice, over a very small amount of information, and this understanding of the outside world was limited by human brainpower and knowledge. Unlike the human brain, a machine in the abstract can be treated as an information black hole: it absorbs all information and can analyze, summarize, and reason over that data in high dimensions around the clock. If humans then share the understanding obtained from this machine learning, artificial intelligence is formed. So as human society develops, the accumulation of data and the iteration of algorithms will further drive the development of artificial intelligence as a whole.
As of today, the data saved worldwide each year amounts to less than one percent of the total data produced, and of that, less than ten percent can be labeled and analyzed. This situation creates two bottlenecks: one between data production and data collection, and another between the data that is collected and the data that can be analyzed.
For the bottleneck between data production and data collection, one reason is the cost of hardware storage, but as disk technology and production capacity improve, this limitation is gradually fading. In my view, the main reason for the current imbalance between data collection and data generation is the lack of standards for data collection (especially for non-Internet companies).
The data currently available for analysis is still only a small fraction. There are two main reasons for this. First, the mainstream machine learning algorithms today are supervised learning algorithms, which need labeled data, and labeling often depends on manual annotation. Second, the ability to process unstructured data (text, images, audio, video) is still weak.
Some high-frequency machine learning scenarios:
Clustering: user segmentation, product category segmentation, and so on.
Classification: ad impression prediction, website click prediction, and so on.
Regression: rainfall prediction, purchase volume prediction, stock turnover prediction, and so on.
Text analysis: news tag extraction, automatic text classification, key information extraction, and so on.
Graph algorithms: social network (Social Network Site, SNS) relationship mining, financial risk control, and so on.
Pattern recognition: speech recognition, image recognition, handwriting recognition, and so on.
|
gharchive/issue
| 2017-07-29T04:27:15 |
2025-04-01T04:56:09.770386
|
{
"authors": [
"bighuang624"
],
"repo": "bighuang624/bighuang624.github.io",
"url": "https://github.com/bighuang624/bighuang624.github.io/issues/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1844806393
|
🛑 Jefferson Real is down
In 86f78e0, Jefferson Real (https://jeffersonreal.uk) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Jefferson Real is back up in 9c1d518.
|
gharchive/issue
| 2023-08-10T09:36:30 |
2025-04-01T04:56:09.812461
|
{
"authors": [
"bigupjeff"
],
"repo": "bigupjeff/my-site-uptime-monitor",
"url": "https://github.com/bigupjeff/my-site-uptime-monitor/issues/751",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1219208877
|
Migrate devcontainer to a straight docker image
Currently the devcontainer installs nix onto a debian machine. This is a problem because
Nix can be slow inside a container
Launching a new container takes a long time
VSCode has to use the nix environment selector otherwise the language server doesn't work. This is buggy as heck, and will not support flakes.
We use the same versions of things inside both the devcontainer and when developing locally, but we still have a disconnect with production which uses a different Docker container
It's possible to get nix/flakes to spit out a Docker image for us - but then we would have to figure out how to ship the image to Codespaces, which expects Dockerfiles, not Docker images.
I do think the above is the ideal state of things, but it's also very cutting edge and would take a lot of effort to get there. As a short/medium term solution let's stop messing about with putting userland nix inside docker containers and instead build a devcontainer as a docker image directly.
We can follow the Dockerfile we have for deploying to production. Ideally we would share the same file (or rather the devcontainer docker image is stage 1 of the 2 stage production docker). Baring that, we should at least use the same versions of elixir/erlang. If they have to be separate files, they should at least share a config option with the same name(s).
Note: sourcing env.local is done inside the flake.nix. Creating the file and other setup stuff are done in the [postcreate](https://github.com/bikebrigade/dispatch/blob/456442668df0efd13477a247c8332d1645db5dfa/.devcontainer/devcontainer.json#L43) hook.
Both need to happen for the dev environment. I'm not sure how env var configuration is intended to work in devcontainers -- this nix hook relies on the shell being started. You could source the env file in the Dockerfile, or perhaps there's a happy path Github intends you to take here.
I'll start working on it.
The project root already has the docker-compose and Dockerfile that are used in prod; I'll switch the devcontainer to use those instead of a separate dev config.
Note: if we want the dev env to have more features (Ubuntu instead of Alpine, zsh, etc.), separate configs will be needed.
The project root already has the docker-compose and Dockerfile that are used in prod; I'll switch the devcontainer to use those instead of a separate dev config.
I found that I had to do this for the devcontainer to launch postgres right:
https://github.com/bikebrigade/dispatch/blob/456442668df0efd13477a247c8332d1645db5dfa/.devcontainer/docker-compose.yml#L4-L5
You may need to move it to the other docker compose to get things to work.
It might not work, because the dev env needs the vscode user, which we don't want in prod.
Note: if we want the dev env to have more features (Ubuntu instead of Alpine, zsh, etc.), separate configs will be needed.
This is a very good point! I guess they should be separate envs and we just want to share elixir & OTP versions
This is a "fast-load" dev image; it does not run mix deps, etc. I can add that in if we want.
env.local has:
PHONE_NUMBER=647-555-5555
NEW_PHONE_NUMBER=647-555-5555
but I'm getting
18:48:14.253 [info] == Running 20210404181506 BikeBrigade.Repo.Migrations.PopulateRiderIds.up/0 forward
** (RuntimeError) Missing environment variable PHONE_NUMBER
I get the same error on master; getenv() is not picking up .env.local.
|
gharchive/issue
| 2022-04-28T19:53:43 |
2025-04-01T04:56:09.822434
|
{
"authors": [
"ShayMatasaro",
"mveytsman"
],
"repo": "bikebrigade/dispatch",
"url": "https://github.com/bikebrigade/dispatch/issues/94",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1988539542
|
filechooser: Support current_folder with OpenFile
<flatpak/xdg-desktop-portal#1045>
Removed the changes to the demo as requested
Usually we comment the new options that were added in a specific version so we won't forget about handling them in the future once we support only serializing supported options. But the changes as is are good, thanks!
|
gharchive/pull-request
| 2023-11-10T23:06:11 |
2025-04-01T04:56:09.833221
|
{
"authors": [
"bilelmoussaoui",
"sophie-h"
],
"repo": "bilelmoussaoui/ashpd",
"url": "https://github.com/bilelmoussaoui/ashpd/pull/173",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
568818178
|
Error message when downloading and installing the latest 0.33
Development environment
macOS: Catalina 10.15.3 (19D76)
golang: ~ go version go1.12.6 darwin/amd64
I previously installed kratos version 0.2.3
and want to upgrade to the latest version.
The upgrade steps are as follows:
$ GO111MODULE=on && go get -u github.com/bilibili/kratos/tool/kratos
After running it, the following error appears:
➜ ~ GO111MODULE=on && go get -u github.com/bilibili/kratos/tool/kratos
# github.com/bilibili/kratos/tool/kratos
go/src/github.com/bilibili/kratos/tool/kratos/env.go:25:14: undefined: os.UserConfigDir
What is undefined: os.UserConfigDir? How do I configure this config directory?
Upgrading the Go version to
go version go1.13.8 darwin/amd64
solved the problem above (os.UserConfigDir was added in Go 1.13, so it is undefined on Go 1.12).
|
gharchive/issue
| 2020-02-21T08:50:23 |
2025-04-01T04:56:09.835934
|
{
"authors": [
"wangleihd"
],
"repo": "bilibili/kratos",
"url": "https://github.com/bilibili/kratos/issues/508",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1680721300
|
Blog app - Views
Hellooo 😄
In this PR I have:
Implemented the design from the sneak peek wireframes.
Focused on making sure everything functions first with just basic HTML structuring without CSS styling.
Used methods that I have created in "processing data in models".
@brytebee Thanks
|
gharchive/pull-request
| 2023-04-24T08:28:15 |
2025-04-01T04:56:09.837764
|
{
"authors": [
"bill7pearl"
],
"repo": "bill7pearl/DevJournal",
"url": "https://github.com/bill7pearl/DevJournal/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
218792693
|
Improvements to player.get_operator(operator)
If a stat's value is 0, it isn't included in data, and a KeyError is raised. This changes it to use .get() instead, to avoid the issue.
Also uses a predetermined mapping for operator "location," their n:m "ID" that identifies their stats. These never change, so it is better to do it this way. It solves cases where a player has 0 of an operator's special stat, which means it isn't included in the results, and everything else breaks as a result.
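In Python terms, the change boils down to the pattern below (field names and mapping values here are illustrative, not the wrapper's exact ones):

# Stats with a value of 0 are simply absent from the API payload.
data = {"operatorpvp_kills:2:3": 17}           # e.g. no "...death..." key at all

# Direct indexing raises KeyError for the missing stat:
# deaths = data["operatorpvp_death:2:3"]       # KeyError

# .get() with a default treats "absent" as 0 instead:
deaths = data.get("operatorpvp_death:2:3", 0)  # 0

# A predetermined operator "location" mapping avoids deriving the n:m id from
# the payload; these ids never change, so hard-coding them is safe.
OPERATOR_LOCATIONS = {"smoke": "2:3", "mute": "3:3"}  # hypothetical values
op = "smoke"
kills = data.get("operatorpvp_kills:%s" % OPERATOR_LOCATIONS[op], 0)  # 17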
Closing this because I implemented a slightly more future-proof solution using the API "definitions" call.
I had had this code implemented for a while now, didn't realize I hadn't pushed it to the github yet, sorry!
|
gharchive/pull-request
| 2017-04-02T20:51:35 |
2025-04-01T04:56:09.839406
|
{
"authors": [
"billy-yoyo",
"tmercswims"
],
"repo": "billy-yoyo/RainbowSixSiege-Python-API",
"url": "https://github.com/billy-yoyo/RainbowSixSiege-Python-API/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
312378682
|
Can't use my own trained network
Hi, I'm currently reproducing the SiamFC network, but I can never reproduce the results under your framework.
I first trained a SiamFC written in PyTorch myself, substituted it for the network in the original author's released TensorFlow SiamFC, and could reproduce the results. Code: https://github.com/huanglianghua/siamfc-pytorch
The main network part is as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):

    def __init__(self):
        super(SiameseNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 96, 11, 2),
            nn.BatchNorm2d(96),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2)
        )
        self.conv2 = nn.Sequential(
            nn.Conv2d(96, 256, 5, 1, groups=2),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(3, 2)
        )
        self.conv3 = nn.Sequential(
            nn.Conv2d(256, 384, 3, 1),
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True)
        )
        self.conv4 = nn.Sequential(
            nn.Conv2d(384, 384, 3, 1, groups=2),
            nn.BatchNorm2d(384),
            nn.ReLU(inplace=True)
        )
        self.conv5 = nn.Sequential(
            nn.Conv2d(384, 32, 3, 1, groups=2)
        )
        self.branch = nn.Sequential(
            self.conv1,
            self.conv2,
            self.conv3,
            self.conv4,
            self.conv5
        )
        self.bn_adjust = nn.BatchNorm2d(1)

    def xcorr(self, z, x):
        # Not shown in the original snippet: the standard SiamFC cross-correlation,
        # using each exemplar embedding as a conv kernel over its search embedding.
        out = [F.conv2d(x[i:i + 1], z[i:i + 1]) for i in range(x.size(0))]
        return torch.cat(out, dim=0)

    def forward(self, z, x):
        z = self.branch(z)   # exemplar embedding
        x = self.branch(x)   # search-region embedding
        out = self.xcorr(z, x)
        out = self.bn_adjust(out)
        return out
Then I modified your framework and replaced the network in inference.tracker. Specifically:
I modified the inference_wrapper.inference_step(self, sess, input_feed) function so that it returns self.search_images and self.exemplar_images (I inspected the returned images; they are fine).
In inference_wrapper.tracker, using the code:
outputs, metadata = self.siamese_model.inference_step(sess, input_feed)
I take search_images and exemplar_images from outputs, compute the response with my own network, upscale it 8x with tf.image.resize_images to get the final output response, and substitute it for the original response.
Training uses the data generated by the datasets part; the labels are a 0/1 classification, and binary_cross_entropy_with_logits is used for the final loss.
However, after running run_tracking.py the results are very poor and the target is almost never tracked. I wonder whether this is caused by a different range of the response? I'd appreciate some pointers, thanks!
hi,
Is the response map reasonable?
Does the upsampling use bicubic upsampling, and align corners?
Shouldn't the upscale factor be 16?
I took the response out and checked it; it looks fine.
The parameters are the same as in the code:
tf.image.resize_images(inputs, [272, 272],
method=tf.image.ResizeMethod.BICUBIC,
align_corners=True)
I got these mixed up because I have two versions of the network, but in the end both are upscaled to [272, 272].
https://github.com/huanglianghua/siamfc-pytorch
This link is a PyTorch version of SiamFC. Its data preprocessing and tracking logic use the original author's code; only the network was changed, and it can directly load baseline-conv5_e55.mat to track targets. I tested it on the VOT dataset without problems.
I embedded that network into your code using the method above, and the results are likewise very poor.
I just watched the videos a few more times; it feels like the coordinates are not being updated, since the detected box barely moves.
Then could you save the search images for each frame and check whether the search image is being updated?
I roughly found where the problem is. After the target drifts away from the center of the search image, the maximum of the response in your network drifts off-center with it, whereas in my network the maximum of the response always stays at the center.
After I set exemplar_images to always be the first frame's exemplar_images, everything worked. So is there some issue in the step that updates exemplar_images? But then why does your network run correctly?
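To make that concrete, here is a rough PyTorch sketch (names assumed from the snippet above, not the actual framework code) of a tracking loop where the exemplar embedding is computed once from the first frame and reused afterwards:

import torch

def track(model, exemplar_patch, search_patches):
    # exemplar_patch: crop from frame 0; search_patches: crops from later frames.
    model.eval()
    responses = []
    with torch.no_grad():
        z_feat = model.branch(exemplar_patch)      # computed once, never updated
        for x_patch in search_patches:
            x_feat = model.branch(x_patch)
            # The peak of each response map gives the new target position.
            responses.append(model.xcorr(z_feat, x_feat))
    return responses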
Thank you so much! I hadn't fully understood your code; I thought the exemplar image in your code was being updated all the time.
You're welcome :)
Hi, I made modifications based on the cfnet baseline (the input changed and the network stride was reduced). I only changed the exemplar image input size in the configuration to 255x255x3, but it doesn't seem right; could you give me some advice? Also, I think that if the exemplar image were updated every frame, occlusion cases would be tracked much better.
|
gharchive/issue
| 2018-04-09T03:15:21 |
2025-04-01T04:56:09.849076
|
{
"authors": [
"Mabinogiysk",
"bilylee",
"zhangyujia0223"
],
"repo": "bilylee/SiamFC-TensorFlow",
"url": "https://github.com/bilylee/SiamFC-TensorFlow/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
415605068
|
Add example quines and ouroboros functions
Manually tested (with the included test script)
Please note linting is deliberately not done as it impacts the code.
@rylandg Thanks! Further improved the readme. if it can still be improved, please don't hesitate to open an issue and assign it to me :-)
|
gharchive/pull-request
| 2019-02-28T12:42:37 |
2025-04-01T04:56:09.876641
|
{
"authors": [
"michaeladda"
],
"repo": "binaris/functions-examples",
"url": "https://github.com/binaris/functions-examples/pull/47",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
312364781
|
npm run serve reports an error
D:\test-demo>npm run serve
bui-weex-template@1.0.7 serve D:\test-demo
node build/init.js && serve -p $npm_package_ports_serverPort
net.js:1483
throw new RangeError('"port" argument must be >= 0 and < 65536');
^
RangeError: "port" argument must be >= 0 and < 65536
at Server.listen (net.js:1483:13)
at Function.app.listen (D:\test-demo\node_modules\serve\node_modules\connect\lib\proto.js:229:24)
at Object. (D:\test-demo\node_modules\serve\bin\serve:132:8)
at Module._compile (module.js:652:30)
at Object.Module._extensions..js (module.js:663:10)
at Module.load (module.js:565:32)
at tryModuleLoad (module.js:505:12)
at Function.Module._load (module.js:497:3)
at Function.Module.runMain (module.js:693:10)
at startup (bootstrap_node.js:188:16)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! bui-weex-template@1.0.7 serve: node build/init.js && serve -p $npm_package_ports_serverPort
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the bui-weex-template@1.0.7 serve script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
This points to a configuration problem with $npm_package_ports_serverPort in package.json. The serverPort parameter must be in the range >= 0 and < 65536; check whether setting it within that range works for you.
It doesn't work even with the default; I've changed it several times, and I get the same error no matter what I set it to.
The port number is within that range; no matter how I set it, it makes no difference, and the port is not occupied either.
I don't know which versions of weex and weex create demo you are using.
For me, changing the port in demo/configs/config.js under the project folder is enough.
@gendseo
Are you using the bui-weex UI framework?
My weex is the latest, v1.2.9.
I wrote my own UI.
Frankly, bui-weex still has quite a few bugs; be careful about using it in a production environment.
The $npm_package_ports_serverPort syntax is not supported in a Windows environment; you also need to upgrade the related dependency packages.
The latest version has already fixed your problem @yin2017168
|
gharchive/issue
| 2018-04-09T01:46:29 |
2025-04-01T04:56:09.960623
|
{
"authors": [
"gendseo",
"liyueqing",
"yin2017168"
],
"repo": "bingo-oss/bui-weex",
"url": "https://github.com/bingo-oss/bui-weex/issues/44",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
554577947
|
Instructions incorrect in readme
Server listening on 0.0.0.0 port 22.
User bastion authorized keys /var/lib/bastion/authorized_keys is not a regular file
Connection closed by authenticating user bastion 172.19.0.1 port 38242 [preauth]
This is because authorized_keys is a directory, and not a file.
Thanks for reporting! Would you describe in more detail the error that you have?
By default your docker-compose example creates a folder for the auth keys. It should be a file.
Per your example config:
volumes:
- $PWD/authorized_keys:/var/lib/bastion/authorized_keys:ro
This is incorrect. It will attempt to mount a directory when it needs to be a file.
Per the rest of the documentation:
Add rsa public key to .bastion_keys file
$ cat $HOME/.ssh/id_rsa.pub > $PWD/.bastion_keys
".bastion_keys" is never referenced elsewhere.
A "working" config (working is in quotes because you still cannot SSH due to #7) would be:
AUTHORIZED_KEYS: "/var/lib/bastion/config/authorized_keys"
volumes:
- $PWD/config:/var/lib/bastion/config:ro
Assuming the "authorized_keys" file was placed in $PWD/config.
|
gharchive/issue
| 2020-01-24T07:08:07 |
2025-04-01T04:56:09.966247
|
{
"authors": [
"Slyke",
"binlab",
"swichers",
"xXcoronaXx"
],
"repo": "binlab/docker-bastion",
"url": "https://github.com/binlab/docker-bastion/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
96300810
|
problems syncing to maven central
Hey guys. I love Bintray and this plugin. It's really awesome. We are using it for happy continuous delivery in Mockito project.
A few weeks ago we started having issues with uploads related to central sync. Here's the relevant line from the log:
> Could not sync 'szczepiq/maven/mockito/2.0.29-beta' to Maven Central: HTTP/1.1 400 Bad Request [messages:[There are no published artifacts to sync for this version.], status:Validation Failed]
Full log can be seen in Travis here or here.
This worked about 3 weeks ago (on 7/1 we had successful uploads), about 2 weeks ago it stopped working (7/8 first failure).
Although the automatic sync fails, I am able to sync to central via Bintray UI which seems to indicate that there is a problem with the bintray gradle plugin (or we are not using it correctly :).
Can you help out diagnosing this issue?
Hmmm, we seem to have a different problem now:
groovy.json.JsonException: Unable to determine the current character, it is not a string, number, array, or object
full log: https://travis-ci.org/mockito/mockito/builds/71930126#L283
@szczepiq , we might need your help to troubleshoot the issue you reported.
Our Support Team will soon contact you.
We're happy to hear you're enjoying Bintray! Thank you!
|
gharchive/issue
| 2015-07-21T12:07:00 |
2025-04-01T04:56:09.976556
|
{
"authors": [
"eyalbe4",
"szczepiq"
],
"repo": "bintray/gradle-bintray-plugin",
"url": "https://github.com/bintray/gradle-bintray-plugin/issues/79",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
131013435
|
Improve readme
Improving the README page and also adding qiita-philosophy to the tutorials.
@antgonza the docs are not building correctly here - not sure what the error is, but one guess is that the folder "qiita-philosophy" needs to be added to setup.py
:+1:
|
gharchive/pull-request
| 2016-02-03T12:43:37 |
2025-04-01T04:56:10.135453
|
{
"authors": [
"ElDeveloper",
"antgonza",
"josenavas"
],
"repo": "biocore/qiita",
"url": "https://github.com/biocore/qiita/pull/1635",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
203844149
|
WIP: Fix 1805
Based on https://github.com/biocore/qiita/pull/2059, so merge that one first.
The idea is, as defined in the issue, to download all file paths from the BIOM artifacts that the user has access to via nginx zip.
ToDo:
[x] figure out how to test failures, place holders in place
[ ] test/tweak in the test-env until it actually works as expected
The new button is shown here:
Coverage increased (+0.07%) to 91.611% when pulling 1626859cca8c196a63ee2ee0b5849358524161db on antgonza:fix-1805 into 4e380e04488c353a741c0766bf478778aa30ffbc on biocore:master.
Coverage increased (+0.08%) to 91.625% when pulling ade8cbe7844e47dcbee8f1123936b65f5d62f583 on antgonza:fix-1805 into 4e380e04488c353a741c0766bf478778aa30ffbc on biocore:master.
Coverage increased (+0.06%) to 91.64% when pulling 890d465779da653b9f332c3f15e993b4b7e6f07f on antgonza:fix-1805 into 27950465f44a5e945bf4001ddb40bde805789f8d on biocore:master.
Closing in favor of https://github.com/biocore/qiita/pull/2087
|
gharchive/pull-request
| 2017-01-29T01:34:47 |
2025-04-01T04:56:10.141617
|
{
"authors": [
"antgonza",
"coveralls",
"josenavas"
],
"repo": "biocore/qiita",
"url": "https://github.com/biocore/qiita/pull/2062",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
140062357
|
Doesn't work in the production server
Here is what happens if I run this locally in my workstation:
bundle exec rake maintenance:start
Created tmp/maintenance.yml
Run `rake maintenance:end` to stop maintenance mode
And this is what happens when I run it in the production server:
RAILS_ENV=production bundle exec rake maintenance:start
rake aborted!
Don't know how to build task 'maintenance:start'
(See full trace by running task with --trace)
All the other rake tasks work in production, so it has to be something related to the environment. Could you shed some light here so I can research a little bit?
That is strange. It could be a naming conflict with a gem you have in your production group I suppose.
If you put load 'tasks/maintenance.rake' in your Rakefile does it work?
I assume you're using Rails. I see RAILS_ENV in your example. Which version?
If I try to load it from the Rakefile it doesn't work:
RAILS_ENV=production bundle exec rake maintenance:start
rake aborted!
LoadError: cannot load such file -- tasks/maintenance.rake
....
Some info about the stack:
bundle exec rake about
About your application's environment
Rails version 4.2.0
Ruby version 2.1.2-p95 (x86_64-linux)
RubyGems version 2.2.2
Rack version 1.5
JavaScript Runtime Node.js (V8)
Middleware Rack::Sendfile, ActionDispatch::Static, Rack::Lock, #<ActiveSupport::Cache::Strategy::LocalCache::Middleware:0x0055ee707318b0>, Rack::Runtime, Rack::MethodOverride, ActionDispatch::RequestId, Rails::Rack::Logger, ActionDispatch::ShowExceptions, ActionDispatch::DebugExceptions, ActionDispatch::RemoteIp, ActionDispatch::Reloader, ActionDispatch::Callbacks, ActiveRecord::Migration::CheckPending, ActiveRecord::ConnectionAdapters::ConnectionManagement, ActiveRecord::QueryCache, ActionDispatch::Cookies, ActionDispatch::Session::CookieStore, ActionDispatch::Flash, ActionDispatch::ParamsParser, Rack::Head, Rack::ConditionalGet, Rack::ETag, Warden::Manager
I don't see Turnout in your middleware list either so it seems like it's more than just a rake issue.
This is a dumb question, I know, but are you sure you don't have the turnout gem in just your development group in your Gemfile?
It really looks like it is just not installed in production.
Exactly. You can close the issue. I just found the problem in capistrano failing the last time it was deployed!
That explains it. Thanks for the update.
|
gharchive/issue
| 2016-03-11T01:51:40 |
2025-04-01T04:56:10.157532
|
{
"authors": [
"adamcrown",
"lepek"
],
"repo": "biola/turnout",
"url": "https://github.com/biola/turnout/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1633751188
|
send_transaction mutation returns non-encoded txid
The following graphl query:
mutation SendTx {
send_transaction(
request: {
encoded_hex: "0100000001f17ebad1a9324ff82faf9b5054066bc5172bc6cf3ea78219400648a61015eb07000000006441cbed835372fca913a22b41fa1f54a62fcf2d66de86b59444bd8fbf3d4ba4056c2ceb58e34fa5d7222535a4759d465fa7fc4bcf474e1d938f32ccfe5b5816de8141210277ca380aff06f4cb50047c14a4ec192b0c9d30d5caad7fc9b4002bc0a69ecd59feffffff0239590000000000001976a914fe0d9c981cd67b0181d5f994ac77b391fe74244988ac97c57900000000001976a9146aef70d3eb17970780647492589deb34b18eab6f88ac87f90b00"
node_internal_id: 3
}
) {
transaction_hash
validation_success
transmission_success
transmission_error_message
transmission_error_message
}
}
returns:
{
"data": {
"send_transaction": {
"transaction_hash": "8d800122aa869bb081f8308aebdad74fc96ae8e565af600c6dace2454533e237",
"validation_success": true,
"transmission_success": true
}
}
}
but for consistency I think it should return:
{
"data": {
"send_transaction": {
"transaction_hash": "\\x8d800122aa869bb081f8308aebdad74fc96ae8e565af600c6dace2454533e237",
"validation_success": true,
"transmission_success": true
}
}
}
Thanks for the issue!
Yes, there are several places where the returned result is just a hex string rather than Postgres' bytea hex encoding. My preferred solution would be to fix https://github.com/bitauth/chaingraph/issues/48 such that all hex-encoded values are returned without the escape sequence.
I'm not able to focus on this currently, but I would love to merge a PR if you're interested!
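Until that's unified upstream, a small client-side normalization keeps downstream code consistent. This is just a sketch; the helper below is not part of Chaingraph's API and its name is made up:
```
// Hypothetical helper (not part of Chaingraph): make hex values uniform whether or
// not they carry the Postgres bytea escape prefix ("\x").
const normalizeHex = (value: string): string =>
  value.startsWith('\\x') ? value : `\\x${value}`;

// Both shapes seen in this issue normalize to the same string:
normalizeHex('8d800122aa869bb081f8308aebdad74fc96ae8e565af600c6dace2454533e237');
normalizeHex('\\x8d800122aa869bb081f8308aebdad74fc96ae8e565af600c6dace2454533e237');
// => "\x8d800122aa869bb081f8308aebdad74fc96ae8e565af600c6dace2454533e237"
```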
|
gharchive/issue
| 2023-03-21T11:52:01 |
2025-04-01T04:56:10.228233
|
{
"authors": [
"bitjson",
"elderapo"
],
"repo": "bitauth/chaingraph",
"url": "https://github.com/bitauth/chaingraph/issues/58",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2108197822
|
Beehave doesn't output print statements (help needed)
I will leave this issue here, mainly because I want it indexed by Google, so it can help other new users.
My issue is that there's no proper documentation so far on how to get signals working.
The tutorial seems to miss the setup stages, so I was unable to understand how to set up the tree so the leaves can do their job.
So far what I understand is that I need a root node, and then a sequencer node or a selector node.
Then I need to create a new node and a new script with a class derived from the leaf nodes.
But my issue is that I don't get any output if I just implement the tick function.
Any help or explanation on how to set up the trees and leaves would be greatly appreciated.
EDIT : Added a screenshot of my tree.
What's the source code of IDLE CHECK and IDLE Action?
IDLE CHECK
extends ConditionLeaf
func tick (action, blackboard):
return RUNNING
IDLE ACTION
extends ActionLeaf
func tick(actor, blackboard):
print("IDLE TEST")
return SUCCESS
The reason why IDLE TEST isn't printing is that your previous node is RUNNING. If your sequence is atomic, it would work as expected.
Ah, seems I got it.
SUCCESS and FAILURE are condition checks; they are done once.
RUNNING is a loop condition, and should be used to keep the leaf running.
Thanks!!!!!
Added final advice to people lurking here.
You need a selector node, which will be the parent of a sequence node.
The selector will pick the sequences based on their failure return check.
|
gharchive/issue
| 2024-01-30T15:52:26 |
2025-04-01T04:56:10.234023
|
{
"authors": [
"LeoDog896",
"ca3gamedev"
],
"repo": "bitbrain/beehave",
"url": "https://github.com/bitbrain/beehave/issues/303",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
986834651
|
Is this expected Behaviour in /v0/send-bitclout?
I was running BitClout on my local machine in TESTNET mode, and I am trying out different APIs from the Backend API List through Postman. Everything is working great when I call
api/v0/send-bitclout
I get a response, but immediately when I try to call the next API
api/v0/submit-transaction
with transactionhex as input
I am getting
{
"error": "SubmitTransaction: Problem processing transaction: VerifyAndBroadcastTransaction: Problem validating txn: ValidateTransaction: Problem validating transaction: : ConnectTransaction: : _connectBasicTransfer: Problem verifying txn signature: : RuleErrorInvalidTransactionSignature"
}
When I try to log some transaction data to my console, I notice the output below gets printed:
< TxHash: f285f5199b27eac5f40e9b2d194628c3d3d2612b667e4bd5a496e7e264c95c2e, TxnType: BC1YLgJxJq5BHAMhmcWjAxKjphgXE7fogodt1RDk9jhGWTLPz1j3qHP, PubKey: BASIC_TRANSFER >
Here you can see that
PubKey -> has the transaction type value and TxnType has the public key
It just swapped the data, which is strange. I don't understand how this is happening; can someone shed some light here please?
The log above is printed when calling the "/send-bitclout" API, inside the "SendBitClout" method call -> just above line #987.
File: routes/transaction.go#L987
Did you sign the transaction hex before sending it to submit-transaction, or just lift it directly off the send-bitclout response?
That's the exact issue. Thanks for pointing it out. :)
I just figured it out by running the frontend on my local machine. Everything works smoothly.
But I still don't get why the keys are swapped?
My guess is that's a typo in the logging code somewhere, it definitely shouldn't be like that.
I am not doing anything extra here! I am just printing txnn:
glog.Infof("========raw data Just before Sanity check=========: %v", txnn)
The transaction String() method has these parameters swapped. It's an easy fix in the core repo if you want to try it :)
|
gharchive/issue
| 2021-09-02T14:51:00 |
2025-04-01T04:56:10.240839
|
{
"authors": [
"andyboyd",
"jagathks",
"maebeam"
],
"repo": "bitclout/backend",
"url": "https://github.com/bitclout/backend/issues/137",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333725783
|
Synchronizing
Bitcoin Core has been synchronizing for months. I need help; I have transactions that need to be done.
No need to open more than one issue https://github.com/bitcoin/bitcoin/issues/13504#issuecomment-398443803
This should be closed
Hello, this repository is for the website, not for the software.
How do I get support for the website?
|
gharchive/issue
| 2018-06-19T15:33:23 |
2025-04-01T04:56:10.245472
|
{
"authors": [
"MarcoFalke",
"doone57",
"karel-3d"
],
"repo": "bitcoin-core/bitcoincore.org",
"url": "https://github.com/bitcoin-core/bitcoincore.org/issues/566",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2234992571
|
Create docker-image.yml
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Reviews
See the guideline for information on the review process.
A summary of reviews will appear here.
#816
|
gharchive/pull-request
| 2024-04-10T08:01:10 |
2025-04-01T04:56:10.249030
|
{
"authors": [
"DrahtBot",
"Thales-de-Milet"
],
"repo": "bitcoin-core/gui",
"url": "https://github.com/bitcoin-core/gui/pull/816",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2357724009
|
Replaced popbackStack with navigateUp
Replaced popbackStack with navigateUp to avoid more than one pop of screens.
Tested ACK 99dfd8f00ed5f5ed3ff8331ab3ea92954e4d9d96. Thanks for the work!
The only thing I'd need before merging is a refactor of the commit messages (the bdk project has a convention we try to enforce, even though I personally also prefer the style you are currently using!). The bdk commit messages loosely use the conventional commits style with an all-lowercase messages. In this case, they would be something like:
fix: replace popbackStack with navigateUp to avoid more than one pop of screen
feat: add modifier to click copy bitcoin address
Thanks @thunderbiscuit
I hope my new commit messages are ok now.
|
gharchive/pull-request
| 2024-06-17T16:11:40 |
2025-04-01T04:56:10.393147
|
{
"authors": [
"ItoroD",
"thunderbiscuit"
],
"repo": "bitcoindevkit/bdk-kotlin-example-wallet",
"url": "https://github.com/bitcoindevkit/bdk-kotlin-example-wallet/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
509150256
|
Image update: Testframework moved to notebook repository.
Fixes #118.
Looks good. Thanks @jachiang . A few points:
I find the grey and blue backgrounds a little difficult to distinguish when they're small.
It's confusing that the bitcoinops/bitcoin/tree/optech-taproot header is in blue, when that's supposed to denote Jupyter notebooks.
I know I keep saying this, but capitalization is inconsistent. "Jupyter Notebook Chapters" has every word capitalized, but "Notebook dependencies" only has the first word capitalized. Please pick a style and be consistent with it.
Thanks @jnewbery, fixes have been applied.
|
gharchive/pull-request
| 2019-10-18T15:24:09 |
2025-04-01T04:56:10.427270
|
{
"authors": [
"jachiang",
"jnewbery"
],
"repo": "bitcoinops/taproot-workshop",
"url": "https://github.com/bitcoinops/taproot-workshop/pull/128",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2684144948
|
null value in column "data" of relation "changes" violates not-null constraint
While running the application the library works, but in tests it fails with this error even when we turn on capture: :ignore.
** (Postgrex.Error) ERROR 23502 (not_null_violation) null value in column "data" of relation "changes" violates not-null constraint
table: changes
column: data
Failing row contains (154, 6012496, insert, public, merchants, null, null, {}, 5993165, null).
Good day to you, too, @krainboltgreene
Carbonite's trigger procedures shouldn't be setting the changes.data column to anything but a jsonb value. Can you give me a little more information about your scenario? Trigger config, table schema, test setup, etc. Ideally a minimal working example reproducing the crash.
Absolutely I can get you those details.
So I have no idea what we did differently, maybe it was updating elixir, erlang, and postgres, but now it completely works in tests.
Unfortunately it's happening again, so let me try and get you as much information as possible:
** (Postgrex.Error) ERROR 23502 (not_null_violation) null value in column "data" of relation "changes" violates not-null constraint
table: changes
column: data
Failing row contains (155, 6430169, insert, public, merchants, null, null, {}, 6410293, null).
stacktrace:
(ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:1096: Ecto.Adapters.SQL.raise_sql_call_error/1
(ecto 3.12.4) lib/ecto/repo/schema.ex:834: Ecto.Repo.Schema.apply/4
(ecto 3.12.4) lib/ecto/repo/schema.ex:415: anonymous fn/15 in Ecto.Repo.Schema.do_insert/4
(ecto 3.12.4) lib/ecto/multi.ex:897: Ecto.Multi.apply_operation/5
(elixir 1.17.3) lib/enum.ex:2531: Enum."-reduce/3-lists^foldl/2-0-"/3
(ecto 3.12.4) lib/ecto/multi.ex:870: anonymous fn/5 in Ecto.Multi.apply_operations/5
(ecto_sql 3.12.1) lib/ecto/adapters/sql.ex:1400: anonymous fn/3 in Ecto.Adapters.SQL.checkout_or_transaction/4
(db_connection 2.7.0) lib/db_connection.ex:1756: DBConnection.run_transaction/4
(ecto 3.12.4) lib/ecto/repo/transaction.ex:18: Ecto.Repo.Transaction.transaction/4
(core 1.0.0) test/support/fixtures/users_fixtures.ex:104: Dashboard.UsersFixtures.active_merchant_fixture/1
test/dashboard/transactions/refund_demand_test.exs:5: Dashboard.Transactions.RefundDemandTest.__ex_unit_setup_1/1
test/dashboard/transactions/refund_demand_test.exs:1: Dashboard.Transactions.RefundDemandTest.__ex_unit__/2
Interestingly there is a change happening in this transaction.
Here's the log of SQL:
16:25:32.312 [debug] QUERY OK db=0.1ms
begin []
16:25:32.320 [debug] QUERY OK source="transactions" db=4.7ms
INSERT INTO "carbonite_default"."transactions" ("meta","inserted_at") VALUES ($1,$2) ON CONFLICT ("id") DO UPDATE SET "id" = EXCLUDED."id" RETURNING "inserted_at","meta","xact_id","id" [%{}, ~U[2024-12-03 00:25:32.315714Z]]
16:25:32.322 [debug] QUERY OK source="accounts" db=1.2ms
INSERT INTO "public"."accounts" (...,"inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10) [..., ~U[2024-12-03 00:25:32.320959Z], ~U[2024-12-03 00:25:32.320959Z], "8bf14116-9ebf-4a1e-a60e-ae3cd95b0a46"]
16:25:32.323 [debug] %Postgrex.Query{ref: nil, name: "ecto_insert_merchants_0", statement: "INSERT INTO \"public\".\"merchants\" (...\"inserted_at\",\"updated_at\",\"id\") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35)", param_oids: nil, param_formats: nil, param_types: nil, columns: nil, result_oids: nil, result_formats: nil, result_types: nil, types: nil, cache: :statement} uses unknown oid(s) 1982332forcing us to reload type information from the database. This is expected behaviour whenever you migrate your database.
16:25:32.334 [debug] QUERY ERROR source="merchants" db=10.1ms
INSERT INTO "public"."merchants" (...,"inserted_at","updated_at","id") VALUES ($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25,$26,$27,$28,$29,$30,$31,$32,$33,$34,$35) [..., ~U[2024-12-03 00:25:32.322866Z], ~U[2024-12-03 00:25:32.322866Z], "bfa228b2-1b2e-4fdb-a2e1-ae87681d0d1f"]
16:25:32.334 [debug] QUERY OK db=0.1ms
rollback []
That doesn't tell me much, unfortunately. I guess the most noticeable line in the logs is this one:
16:25:32.323 [debug] %Postgrex.Query{...} uses unknown oid(s) 1982332 forcing us to reload type information from the database. This is expected behaviour whenever you migrate your database.
I think I'd try debugging this first. Are you testing migrations by any chance?
Again, it would be superb if you could try to isolate the error in a MWE, ideally something like this: https://github.com/bitcrowd/carbonite/issues/112#issuecomment-2301295061
|
gharchive/issue
| 2024-11-22T18:21:31 |
2025-04-01T04:56:10.434318
|
{
"authors": [
"krainboltgreene",
"maltoe"
],
"repo": "bitcrowd/carbonite",
"url": "https://github.com/bitcrowd/carbonite/issues/121",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1452447082
|
Variables from OnTime
Would like to request that the Companion module be able to get variables from Ontime. Things like:
OnAir/OffAir status
Current running time
Current Title and Subtitle
Current Cue Start, End, Duration
This would be super helpful as we build out our Companion buttons!
Thanks!
Hi @tcconway , have you got a chance to check the changes in the latest beta?
Hi @cpvalente -- I'm running companion (3.0.0+5393-beta-c304f676), but don't see the variables @kellhogs mentioned...perhaps I'm running the wrong ver? Would I need to get beta code for this module directly? Thanks all!
Hey @tcconway,
unfortunately my changes haven't been merged to the official repository yet. So to test the changes you would need to clone my repository (https://github.com/kellhogs/companion-module-getontime-ontime) and run it on a local development instance of companion.
I haven't gotten around to making the module v3-ready yet. Some features work in v3 and some don't. So it would be best if you test it with the latest stable version of Companion.
Thanks @kellhogs for the clarification.
@kellhogs can this be closed?
I've implemented the variables as requested by @tcconway
I would like to get feedback from @tcconway on the implementation before we close this.
Closing the issue as implemented. No more actionable points here
Thanks for working on this! I do only see "undefined" variable data though.
Hi @tcconway, great to hear from you
There have been some breaking changes in the API recently. With Ontime 2.9.0 you need module version 2.3.0
|
gharchive/issue
| 2022-11-16T23:34:56 |
2025-04-01T04:56:10.447586
|
{
"authors": [
"cpvalente",
"josephdadams",
"kellhogs",
"tcconway"
],
"repo": "bitfocus/companion-module-getontime-ontime",
"url": "https://github.com/bitfocus/companion-module-getontime-ontime/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1462394950
|
Sony 120DH Commands
I have a Sony SRG 120DH and would like to work with some specific commands. Any chance that some of those get added?
I would be very grateful for those or some option to send custom commands.
Digital Zoom ON/OFF
CAM_DZoomModeInq:
Command Packet: 8x 09 04 06 FF
D-Zoom On: y0 50 02 FF
D-Zoom Off: y0 50 03 FF
Information Display
Command Packet: 8x 09 7E 01 18 FF
On: y0 50 02 FF
Off: y0 50 03 FF
The NR (Noise Reduction) function removes noise (both random and non-random) to provide clearer images.
"This function has six steps: levels 1 to 5, plus off.
The NR effect is applied in levels based on the gain, and this setting value determines the limit of the effect. In bright conditions, changing the NR level will not have an effect."
CAM_NR 8x 01 04 53 0p FF // p: NR Setting (0: OFF, Level 1 to 5)
Video Latency setting
When the Video Latency is set to LOW, the latency until the shot image is output from the camera is shortened.
CAM_LowLatencyInq
Command Packet: 8x 097E 015AFF
Low: y0 50 02 FF
Normal: y0 50 03 FF
UDP Commands under:
https://pro.sony/support/res/manuals/AES7/81296da1a8a41043dc27edd9a72b7558/AES71001M.pdf
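For context, these VISCA commands are plain byte strings: the leading 8x is the address byte (0x81 for camera 1) and FF terminates the packet. A rough sketch of assembling the packets listed above; the helper names are made up, this is not taken from companion-module-sony-visca, and the UDP framing Sony uses around these packets is not shown:
```
// Sketch only - hypothetical helpers, not part of the module.
// CAM_NR: 8x 01 04 53 0p FF, where p is the NR level (0 = off, levels 1-5).
function buildNoiseReductionCommand(cameraAddress: number, level: number): Buffer {
  if (level < 0 || level > 5) throw new RangeError('NR level must be 0..5');
  return Buffer.from([0x80 + cameraAddress, 0x01, 0x04, 0x53, level, 0xff]);
}

// CAM_DZoomModeInq: 8x 09 04 06 FF (reply y0 50 02 FF = on, y0 50 03 FF = off).
function buildDZoomInquiry(cameraAddress: number): Buffer {
  return Buffer.from([0x80 + cameraAddress, 0x09, 0x04, 0x06, 0xff]);
}

// buildNoiseReductionCommand(1, 3) => <Buffer 81 01 04 53 03 ff>
```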
Those requests have been added to the module and should show up in the beta (Companion 3.x) as soon as they get merged into a new build.
My cameras don't have the low latency setting so I wasn't able to test that action. I did put in the command sequence from the link you provided so it is expected to work on cameras that do support it.
|
gharchive/issue
| 2022-11-23T20:53:29 |
2025-04-01T04:56:10.452096
|
{
"authors": [
"goseid",
"sonarkon"
],
"repo": "bitfocus/companion-module-sony-visca",
"url": "https://github.com/bitfocus/companion-module-sony-visca/issues/36",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2204567349
|
GET on API /mpi_led_color returns error
Returns:
ERROR: Operation not readable.
Supports POST but not GET.
|
gharchive/issue
| 2024-03-24T21:37:13 |
2025-04-01T04:56:10.453175
|
{
"authors": [
"phillipivan"
],
"repo": "bitfocus/companion-module-telestream-prism",
"url": "https://github.com/bitfocus/companion-module-telestream-prism/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
779793130
|
with EQMac blocked via outbound firewall, app does not work
I know there's a similar bug elsewhere about the UI being pulled from the internet.
I will not allow EQMac or other such local apps to access the internet.
On Catalina, the app appears to install, okay but i have a privacy concern.
If i start the latest EQMac app with my outbound firewall disabled - or if internet is not available,
the app appears with a broken UI, just a blank white space as the remote UI skin is not loaded.
On Mojave, i can usually let the remote UI load by one time opening the firewall for a second or two until the UI is downloaded, and then blocking the app, and re-starting EQMac as if Mojave is caching the UI and Catalina isn't.
I guess this maybe is to do with the Catalina disc volume permissions perhaps.
Please can EQMac be made to work with all files locally ?
It does work with no internet at all. The UI loads from local files if there's no access to the web.
Hi
Thanks for the reply.
It does not work for me :(
I even did a full OS re-install of Mojave 10.14.6 and I either get a blank white UI
or the big 'EQMac quit unexpectedly' dialogue with loads of debug info in it.
It's a totally fresh install of Mojave from the App Store.
EVERY TIME - if I disable my firewall, then start EQMac - it all loads, UI and helper app, and works perfectly.
If I disallow EQMac from connecting outside, every single time I get a blank UI.
I've been using computers since the early 80s and I was coding in 6502a assembler, so I'm pretty well aware
of how to use and correctly install a clean OS on my rMBP. The OS is stock & updated with no weird kernel
modules etc.
I tried High Sierra and the experience was the same; again, the UI only populated when the web was accessible :(
I would not waste your time if I was a *nix newbie or using a Hackintosh or weird hardware or whatever.
I can send you the dump if needed.
Thanks
Yeah, I'm on a corporate machine with an enforced incoming FW, and I'm completely unable to use the app.
Having to pull a UI down from the internet is a pretty silly requirement for an audio EQ.
STILL not working when firewalled off.
I agree with the above comment from 12th Feb: having to pull the UI off the web is ridiculous and
likely could be a potential security issue too.
Every time, on Mojave with a fresh install, the UI is a plain white box unless allowed internet access, which isn't going to happen.
On Catalina, the driver completely fails to load and has never worked on any of my 3 Macs.
I'm rating this as unusable, because of the ongoing breakage of the app.
I'm so disappointed, as I'm an electronic musician and bass guitarist and enjoy audio a great deal,
but on my 3 Macs, a MacBook Pro 15 retina, a MacBook Air 13 and an iMac 24" this app fails, always.
the UI should be in the code already. until it is, i cant use it. Sorry, I like the concept.
Guys, the UI should load fine without internet; it's shipped with the app. It will only load the UI from the web if it can access it. Tens of thousands of people use eqMac every day and it works fine with and without internet. There is just some specific issue with your setup; we can get to the bottom of this, I'm sure. Please run Console.app, filter the system log by "eqMac", run the app, and post your logs here. Also please make sure to use the latest version that you can download from the website.
On a side note, I went with a Web UI because it would take me twice as long to develop the UI in Swift... I'm the only one working on the app and for me using Web technologies for the UI is a massive time saver. There are no security implications to that at all, as I know what I'm doing when it comes to Web :)
I'm going to close this issue as it's related to #346
As I mentioned there is not requirement for the internet by design, but there's just some bug that I think I've fixed in v1.0.0 that's coming soon.
Thanks for being patient with me.
I have just released v1.0.0. You can download it here: https://github.com/bitgapp/eqMac/releases/tag/v1.0.0
Or from the website: https://eqmac.app
Could you please let me know if this issue is still present.
Thanks again.
Roman
|
gharchive/issue
| 2021-01-06T00:35:37 |
2025-04-01T04:56:10.477121
|
{
"authors": [
"4fthawaiian",
"nodeful",
"zaphodd"
],
"repo": "bitgapp/eqMac",
"url": "https://github.com/bitgapp/eqMac/issues/437",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1579927794
|
Bug: eqMac fails to load correctly when MacBook is restarted and eqMac is set to start on login
[x] I have checked for a similar issue and sure it hasn't been reported before.
Describe the bug
When eqMac is set to startup on login and I restart my computer, when the application tries to load, the UI gets caught in various bad states (screenshot of one such state attached–the other was an all-white Electron window with the typical Mac three buttons at the top left, but no content).
Oddly enough, the app will still connect to an audio device and function, the UI is just in a bad state and unresponsive.
Steps to Reproduce
Steps to reproduce the behaviour (feel free to change the placeholder as you need):
Make sure eqMac is set to start on login
Restart your M2 MacBook
Click on eqMac icon in the dock
See that the application UI is in a bad state.
Expected behaviour
I expect that eqMac's UI will not crash on startup
Setup information:
Audio device used for playback: AirPods Max
Audio transmission interface: Bluetooth
macOS Version: 12.6.3
eqMac Version 1.7.4
eqMac UI Version 4.6.0
Screenshots or Console.app logs
Additional information
The app functions normally if I restart but didn't have it set to start up on login.
Hi @benvenker, is this still an issue on the latest eqMac version?
|
gharchive/issue
| 2023-02-10T15:48:28 |
2025-04-01T04:56:10.483105
|
{
"authors": [
"benvenker",
"superjeng1"
],
"repo": "bitgapp/eqMac",
"url": "https://github.com/bitgapp/eqMac/issues/809",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
341666416
|
TypeError: winston.Logger is not a constructor
I think this is because, since winston 3.0.0, the way of building the logger changed.
already being discussed in https://github.com/bithavoc/express-winston/issues/175
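Concretely, winston 3 removed the winston.Logger constructor in favour of a factory function, so anything still doing new winston.Logger(...) (as express-winston did at the time) throws this error. Roughly, and showing only the winston-side change rather than the express-winston fix itself:
```
import winston from 'winston';

// winston 2.x style - what express-winston expected; throws on winston >= 3:
// const logger = new winston.Logger({ transports: [new winston.transports.Console()] });

// winston 3.x style:
const logger = winston.createLogger({
  transports: [new winston.transports.Console()],
});
```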
|
gharchive/issue
| 2018-07-16T20:26:34 |
2025-04-01T04:56:10.484632
|
{
"authors": [
"bithavoc",
"ghidorsi"
],
"repo": "bithavoc/express-winston",
"url": "https://github.com/bithavoc/express-winston/issues/181",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2181187231
|
UI issues:
Not showing amount on send confirmation screen.
Not showing balance when we switch the currency to Bitcoin.
Not reproducible
|
gharchive/issue
| 2024-03-12T09:54:33 |
2025-04-01T04:56:10.487931
|
{
"authors": [
"Deveshshankar",
"cakesoft-swati"
],
"repo": "bithyve/bitcoin-keeper",
"url": "https://github.com/bithyve/bitcoin-keeper/issues/4089",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2482238066
|
CTA priority needs to be reversed
https://github.com/user-attachments/assets/228f59c5-7273-42f8-8ea9-d74a28372c6a
Confirm Access should be primary CTA.
Confirm Later should be secondary CTA.
Completed
|
gharchive/issue
| 2024-08-23T04:21:04 |
2025-04-01T04:56:10.489476
|
{
"authors": [
"ASN-BitHyve",
"Deveshshankar"
],
"repo": "bithyve/bitcoin-keeper",
"url": "https://github.com/bithyve/bitcoin-keeper/issues/5030",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1581836922
|
Rebranding modal Assets
Need to change modal Assets
https://xd.adobe.com/view/f11ee4be-fef5-4c9b-9e22-4ae72872c584-a44d/specs/
Fixed
|
gharchive/issue
| 2023-02-13T07:54:05 |
2025-04-01T04:56:10.490523
|
{
"authors": [
"Deveshshankar",
"Pawan2792"
],
"repo": "bithyve/bitcointribe",
"url": "https://github.com/bithyve/bitcointribe/issues/6362",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1682951635
|
Import wallet Keeper side alignment
Text cutting at side Device Galaxy F25 5G dev App
Still having the same issue in keeper app v 1.0.3(158)
not resolved @Pawan2792
Completed
|
gharchive/issue
| 2023-04-25T11:07:51 |
2025-04-01T04:56:10.492668
|
{
"authors": [
"Deveshshankar",
"cakesoft-swati"
],
"repo": "bithyve/bitcointribe",
"url": "https://github.com/bithyve/bitcointribe/issues/6488",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
993199459
|
Put a 'Beta' tag.
Put a ‘Beta’ tag here like Coming Soon in Add Accounts.
Verified this issue on dev app v2.0(345)
|
gharchive/issue
| 2021-09-10T12:42:39 |
2025-04-01T04:56:10.494304
|
{
"authors": [
"cakesoft-swati"
],
"repo": "bithyve/hexa",
"url": "https://github.com/bithyve/hexa/issues/4289",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714141646
|
Fix build failure on macosx
The building of bk on macOS has failed for more than a year; this patch series fixes that :-)
I don't find any instructions on contributing directly to the bk repository; you are free to edit my patches to commit them upstream.
Hi. Just a note to let you know that we got your fix and we are trying to figure out how to integrate it. Thanks!
FWIW, this looks good to me and I tried building it on macOS Catalina latest with Xcode 12.0.1 and it builds.
Thank you @Dieken for your contribution!
|
gharchive/pull-request
| 2020-10-03T17:54:47 |
2025-04-01T04:56:10.495967
|
{
"authors": [
"Dieken",
"ob",
"wscott"
],
"repo": "bitkeeper-scm/bitkeeper",
"url": "https://github.com/bitkeeper-scm/bitkeeper/pull/4",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
322639158
|
Building oauth2_proxy
@ploxiln suggests for building:
install the "dep" tool from https://github.com/golang/dep (download a stand-alone binary from Releases, put it somewhere in your PATH e.g. /usr/local/bin/, name it dep, make it executable)
checkout the oauth2_proxy repo into your GOPATH, e.g. git clone https://github.com/bitly/oauth2_proxy $HOME/go/src/github.com/bitly/oauth2_proxy
cd into that repo
run dep ensure
run go build
the result is a single stand-alone binary named oauth2_proxy
But there seem to be issues with the dependencies on a pretty vanilla go environment:
$ # cloned to $HOME/src/oauth2_proxy
$ export GOPATH=$HOME
$ cd $GOPATH/src/oauth2_proxy
$ dep ensure
Warning: the following project(s) have [[constraint]] stanzas in Gopkg.toml:
✗ github.com/18F/hmacauth
However, these projects are not direct dependencies of the current project:
they are not imported in any .go files, nor are they in the 'required' list in
Gopkg.toml. Dep only applies [[constraint]] rules to direct dependencies, so
these rules will have no effect.
Either import/require packages from these projects so that they become direct
dependencies, or convert each [[constraint]] to an [[override]] to enforce rules
on these projects, if they happen to be transitive dependencies,
Solving failure: No versions of gopkg.in/fsnotify.v1 met constraints:
v1.2.11: unable to update checked out version: fatal: reference is not a tree: 836bfd95fecc0f1511dd66bdbf2b5b61ab8b00b6
: command failed: [git checkout 836bfd95fecc0f1511dd66bdbf2b5b61ab8b00b6]: exit status 128
A colleague @wm notes: https://github.com/go-fsnotify/fsnotify and "Change your dependencies from github.com/go-fsnotify/fsnotify to github.com/fsnotify/fsnotify"
Alright . . . that seems to have compiled and accepts the newer options (--provider oidc, --oidc-issuer-url)
$ git diff origin/master
diff --git a/Gopkg.lock b/Gopkg.lock
index 5a3758a..1294a90 100644
--- a/Gopkg.lock
+++ b/Gopkg.lock
@@ -131,7 +131,7 @@
version = "v1.0.0"
[[projects]]
- name = "gopkg.in/fsnotify.v1"
+ name = "gopkg.in/fsnotify/fsnotify.v1"
packages = ["."]
revision = "836bfd95fecc0f1511dd66bdbf2b5b61ab8b00b6"
version = "v1.2.11"
@@ -149,6 +149,6 @@
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
- inputs-digest = "b502c41a61115d14d6379be26b0300f65d173bdad852f0170d387ebf2d7ec173"
+ inputs-digest = "cfdd05348394cd0597edb858bdba5681665358a963356ed248d98f39373c33b5"
solver-name = "gps-cdcl"
solver-version = 1
diff --git a/Gopkg.toml b/Gopkg.toml
index c4005e1..422bd43 100644
--- a/Gopkg.toml
+++ b/Gopkg.toml
@@ -4,7 +4,7 @@
#
[[constraint]]
- name = "github.com/18F/hmacauth"
+ name = "github.com/mbland/hmacauth"
version = "~1.0.1"
[[constraint]]
@@ -36,7 +36,7 @@
name = "google.golang.org/api"
[[constraint]]
- name = "gopkg.in/fsnotify.v1"
+ name = "gopkg.in/fsnotify/fsnotify.v1"
version = "~1.2.0"
[[constraint]]
diff --git a/watcher.go b/watcher.go
index bedb9f8..9888fe5 100644
--- a/watcher.go
+++ b/watcher.go
@@ -8,7 +8,7 @@ import (
"path/filepath"
"time"
- "gopkg.in/fsnotify.v1"
+ "gopkg.in/fsnotify/fsnotify.v1"
)
func WaitForReplacement(filename string, op fsnotify.Op,
Should this have been closed?
Nope, don't think so. Just bumped into the same issue. The diff provided by @jgn works though. See PR above.
|
gharchive/issue
| 2018-05-14T00:08:10 |
2025-04-01T04:56:10.508035
|
{
"authors": [
"jgn",
"kautsig",
"sporkmonger"
],
"repo": "bitly/oauth2_proxy",
"url": "https://github.com/bitly/oauth2_proxy/issues/592",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
60741802
|
Add support for Foxtrot Sockets
As part of handling different sockets in https://github.com/bitpay/bitcore-p2p/issues/36 (FirefoxOS and ChromeOS) it may be possible to additionally connect through a Foxtrot socket. The only issue is that addr messages only include v4 and v6 IP addresses and not a publickey as an address.
Separating into two modules may help with implementing Peers that implement connecting via other networks: https://github.com/bitpay/bitcore/issues/1260
|
gharchive/issue
| 2015-03-11T22:36:22 |
2025-04-01T04:56:10.613162
|
{
"authors": [
"braydonf"
],
"repo": "bitpay/bitcore-p2p",
"url": "https://github.com/bitpay/bitcore-p2p/issues/49",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1072474757
|
🛑 Plex Media Server is down
In 4152b02, Plex Media Server (http://plex.bitpushr.net) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Plex Media Server is back up in cf160cd.
|
gharchive/issue
| 2021-12-06T18:36:20 |
2025-04-01T04:56:10.624348
|
{
"authors": [
"bitpushr"
],
"repo": "bitpushr/Upptime",
"url": "https://github.com/bitpushr/Upptime/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1753230113
|
🛑 Luogu CDN is down
In 7e2ce87, Luogu CDN (https://cdn.luogu.com.cn/fe/loader.js?ver=20230228-2) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Luogu CDN is back up in a3fa7ec.
|
gharchive/issue
| 2023-06-12T17:34:25 |
2025-04-01T04:56:10.644431
|
{
"authors": [
"bitsstdcheee"
],
"repo": "bitsstdcheee/luogu-status",
"url": "https://github.com/bitsstdcheee/luogu-status/issues/625",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
804021921
|
Error: TypeError: Cannot read property 'filter' of null
Workflow this happens on:
on:
  push:
    branches: [dockerx]
jobs:
  push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - uses: bjerkio/kopier@main
        with:
          repos: |
            indivorg/service-user
          github-token: ${{ secrets.GH_PRIVATE_TOKEN }}
I'm testing adding another repo, just in case.
No dice.
|
gharchive/issue
| 2021-02-08T22:28:40 |
2025-04-01T04:56:10.688976
|
{
"authors": [
"cobraz"
],
"repo": "bjerkio/kopier",
"url": "https://github.com/bjerkio/kopier/issues/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
711996761
|
ReferenceError: document is not defined; SSR.
const doc = document, win = window, div = doc.createElement('div'), { filter, indexOf, map, push, reverse, slice, some, splice } = Array.prototype;
I render it server side first, then later hydrate on the client side, which is probably why your component fails.
If you are using Next.js, you just have to tell Next to import and render the component only on the client and not apply SSR for that component; this is for libraries that, for example, run only in the browser. I recommend these links:
https://nextjs.org/docs/advanced-features/dynamic-import
https://stackoverflow.com/questions/53139884/next-js-disable-server-side-rendering-on-some-pages
For Next.js, this seems to work for me:
const ResizePanel = dynamic(() => import('react-resize-panel'), { ssr: false });
Would be nice to support SSR I think? Even if it's non-functional and just does the basic DOM element wrapping or whatever.
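To expand on the dynamic-import workaround above, a minimal sketch of a Next.js page using it; the direction prop and layout here are illustrative, not a full example of the component's API:
```
// pages/editor.tsx - render react-resize-panel only on the client.
import dynamic from 'next/dynamic';

// Skip SSR so the library's top-level document/window access only runs in the browser.
const ResizePanel = dynamic(() => import('react-resize-panel'), { ssr: false });

export default function Editor() {
  return (
    <div style={{ display: 'flex' }}>
      <ResizePanel direction="e">
        <aside>sidebar</aside>
      </ResizePanel>
      <main>content</main>
    </div>
  );
}
```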
|
gharchive/issue
| 2020-09-30T14:27:09 |
2025-04-01T04:56:10.692009
|
{
"authors": [
"MorpheusFT",
"chriscoyier",
"willydavid1"
],
"repo": "bjgrosse/react-resize-panel",
"url": "https://github.com/bjgrosse/react-resize-panel/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1790403310
|
Frustum culling;
Adds Pass:setViewCull to enable/disable frustum culling.
Renames Pass:setCullMode to Pass:setFaceCull (with backcompat).
Adds drawsCulled pass stat.
Some stuff currently missing:
Text is not culled, but should be.
VR view frusta are not merged yet.
Optimizations I tried that weren't worth it:
Detecting infiniteZ and reducing plane count to 5 made it slower (branching?)
Putting transforms and bounds in a separate array from the draws (to avoid ever touching memory for culled draws, and reduce size of Draw struct) did have a measurable impact of 5% in the "hot" loop that actually does draws. But draw calls are not actually a bottleneck of render pass recording right now, and so this change doesn't seem worth the extra code complexity.
Pre-transforming bounding boxes by the draw's transform (in lovrPassDraw) makes culling twice as fast, but makes overall overhead slightly higher when draws are only submitted once. This would probably still be a worthwhile optimization, because you get faster pass submission in the case where you're saving a pass and submitting it many times (~20% assuming lots of its draws are culled).
Closes #86
|
gharchive/pull-request
| 2023-07-05T21:49:27 |
2025-04-01T04:56:10.695168
|
{
"authors": [
"bjornbytes"
],
"repo": "bjornbytes/lovr",
"url": "https://github.com/bjornbytes/lovr/pull/678",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2211960802
|
serviceMonitor missing job labels and namespace selectors
Details
What steps did you take and what happened:
I upgraded to app-template 3.0.4 recently and started having problems with any ServiceMonitor created via app-template. I tracked down the problem and it seems to be because it doesn't have the Job label and Namespace Selector fields. As soon as I added them manually to the ServiceMonitors, they started scraping in VictoriaMetrics. Before that, there were 0/0 service scrapes.
What did you expect to happen:
I expect serviceMonitor to have job label and namespace selector
Anything else you would like to add:
A quick comparison between, for example, the one created by Grafana (external-secrets as well, etc):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: grafana
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2024-03-25T20:50:08Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: grafana
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: grafana
    app.kubernetes.io/version: 10.4.0
    helm.sh/chart: grafana-7.3.7
    helm.toolkit.fluxcd.io/name: grafana
    helm.toolkit.fluxcd.io/namespace: monitoring
  name: grafana
  namespace: monitoring
  resourceVersion: "217766523"
  uid: 11d28831-6914-4c42-8c93-028a4d5143f3
spec:
  endpoints:
    - honorLabels: true
      interval: 30s
      path: /metrics
      port: service
      scheme: http
      scrapeTimeout: 30s
  jobLabel: grafana
  namespaceSelector:
    matchNames:
      - monitoring
  selector:
    matchLabels:
      app.kubernetes.io/instance: grafana
      app.kubernetes.io/name: grafana
The one created by app-template for nut-exporter:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    meta.helm.sh/release-name: nut-exporter
    meta.helm.sh/release-namespace: monitoring
  creationTimestamp: "2024-03-27T20:54:25Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: nut-exporter
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: nut-exporter
    helm.sh/chart: app-template-3.0.4
    helm.toolkit.fluxcd.io/name: nut-exporter
    helm.toolkit.fluxcd.io/namespace: monitoring
  name: nut-exporter
  namespace: monitoring
  resourceVersion: "219358039"
  uid: 32fb2710-2baf-4a5d-8784-800c2d367825
spec:
  endpoints:
    - interval: 1m
      path: /ups_metrics
      port: http
      scheme: http
      scrapeTimeout: 30s
    - interval: 1m
      path: /metrics
      port: http
      scheme: http
      scrapeTimeout: 30s
  selector:
    matchLabels:
      app.kubernetes.io/instance: nut-exporter
      app.kubernetes.io/name: nut-exporter
      app.kubernetes.io/service: main
Additional Information:
Full config is here.
https://github.com/nklmilojevic/home/blob/main/kubernetes/apps/monitoring/nut-exporter/app/helm-release.yaml
hi, thank you for raising this issue! This feature was never implemented, so that explains why the fields aren't available. I'll add them to the upcoming release 👍
Thank you so much for the fast answer!
|
gharchive/issue
| 2024-03-27T21:46:11 |
2025-04-01T04:56:10.703195
|
{
"authors": [
"bjw-s",
"nklmilojevic"
],
"repo": "bjw-s/helm-charts",
"url": "https://github.com/bjw-s/helm-charts/issues/309",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
84186170
|
Triangle becomes a square when I change 'static' to 'const'
Hi there!
Recently I ran into an issue when I did some research on this amazing platform (Rust).
I used slightly modified code from static_triangle.rs to match the most recent (1.0) version of Rust and got it running.
But when I tried to switch the allocation model from static to const (line 33), everything went wrong: my triangle at the center of the screen became a square at the top right corner.
Using static (.png)
Using const (.png)
Has anyone faced this? Is it just my misunderstanding or a library bug? Or even a compiler bug?
Thanks.
P.S.: I am sorry for misspellings if any, I am not a good friend with English :(
@bohdan-shulha This is a classic case of a "dangling pointer". When you use a const, there is no fixed location that the data is stored at, as opposed to a static, which does have a fixed location in the read-only section of the executable. So when you write mem::transmute(&VERTEX_DATA[0]), the pointer will be to a "random" place on the stack.
Don't use const for this!
(I recommend using glium or gfx-rs or another of the higher-level wrappers around gl-rs instead of raw gl-rs.)
Ohh.
I've learned that Rust was designed to prevent dangling pointers, so I was not even close in my mind to the source of this issue (which is caused by unsafe{}, am I right?).
I've seen gfx-rs a bit earlier but I didn't give much attention to it; now I will look closer.
Thank you for the super-fast response and sorry for the inconvenience. I must go deeper into the language before asking such questions :(
Good luck!
No worries, happy to help. The issue isn't only the unsafe (you can easily make a dangling raw pointer in safe code, just with casts), but what the functions are doing with the raw pointer.
|
gharchive/issue
| 2015-06-02T20:00:54 |
2025-04-01T04:56:10.709688
|
{
"authors": [
"bohdan-shulha",
"cmr"
],
"repo": "bjz/gl-rs",
"url": "https://github.com/bjz/gl-rs/issues/344",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
65124479
|
Support Oculus SDK versions through 0.5.0.1-beta
Tested against the following SDKs:
0.5.0-beta
0.4.4-beta
0.4.3-beta
0.4.2-beta
0.4.1-beta
0.4.0-beta
Ah, I missed that. In 5.0 they've added a new top-level $OVR_DIR/LibOVRKernel folder containing the bulk of the headers.
The main OVR.h contains the following cross includes:
#include "Kernel/OVR_Types.h"
#include "Kernel/OVR_RefCount.h"
#include "Kernel/OVR_Std.h"
#include "Kernel/OVR_Alg.h"
#include "Extras/OVR_Math.h"
Should the same trampoline approach be taken for these headers?
As a side note, they've moved the CAPI headers in to $OVR_DIR/LibOVR/Include in 5.0
Looking a bit deeper into this, the LibOVRKernel/src and LibOVR/src dependencies are only needed for the C++ types in the SDK. bgfx does not use these types, and we can detect this and include the new <OVR_CAPI.h> file instead of <OVR.h> in SDK 5.0 and later (see updated commit)
The trampoline header is just needed for those graphics API files, nothing else. I was hoping they would fix that on their end. Their code contains shared pointers and I want to avoid any possibility of contamination. :)
|
gharchive/pull-request
| 2015-03-30T03:01:40 |
2025-04-01T04:56:10.718875
|
{
"authors": [
"bkaradzic",
"mendsley"
],
"repo": "bkaradzic/bgfx",
"url": "https://github.com/bkaradzic/bgfx/pull/312",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1881866
|
Duplicate collections
Not sure if this is qu related or not, but I thought I'd share it in case. I just updated to the latest version of qu-mongo and deployed to Heroku. All of a sudden all the qu collections are duplicate!
These are the collections in my qu database:
qu.workers
qu.queue:sync_friends
qu.queues
qu.queue:sync_account
qu.workers
qu.queue:sync_friends
qu.queues
qu.queue:sync_account
Any ideas?
Odd, I have no clue. I don't think anything has changed with the collections in qu
|
gharchive/issue
| 2011-10-11T22:40:22 |
2025-04-01T04:56:10.720982
|
{
"authors": [
"bkeepers",
"fbjork"
],
"repo": "bkeepers/qu",
"url": "https://github.com/bkeepers/qu/issues/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1989530020
|
Enabling Proxy-Kernel Execution
This PR makes the following changes so we can run PK programs on Zynq-Parrot:
Adds HTIF code that runs on PS and performs syscall handshakes with the PL.
Adds a block RAM at 0x10000 for tohost/fromhost handshake with HTIF code.
Adds a DMA backdoor to the simulation DRAM so HTIF can read memory directly in both simulation and FPGA.
Adds an extra PL2PS FIFO for extracting the syscall trace information.
Changes to submodules:
blackparrot: Adding L2 manipulation commands to cce2cache module.
black-parrot-subsystems: Fixing width gearboxing in bp_me_axil_master
basejump_stl: Adding bsg_nonsynth_axi_mem_dma
Is there a PR for the subsystems change?
Is there a PR for the subsystems change?
This PR is based on the profiling PR, so I pushed the submodule changes to their profiling branch, and we can create the PRs when we want to merge to master.
|
gharchive/pull-request
| 2023-11-12T19:09:30 |
2025-04-01T04:56:10.743156
|
{
"authors": [
"dpetrisko",
"farzamgl"
],
"repo": "black-parrot-hdk/zynq-parrot",
"url": "https://github.com/black-parrot-hdk/zynq-parrot/pull/82",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1234445093
|
Usage example for proposed copy target
The copy target from this PR could be used here to spare manual copying while developing a test program in the sdk.
hi, thanks for the PRs. I believe
make bp-tests_manual
handles the use case you’re talking about
Ah i missed that, thanks for pointing it out!
Updated the README to describe this behavior
|
gharchive/pull-request
| 2022-05-12T20:23:27 |
2025-04-01T04:56:10.745472
|
{
"authors": [
"dpetrisko",
"vsteltzer"
],
"repo": "black-parrot-sdk/black-parrot-sdk",
"url": "https://github.com/black-parrot-sdk/black-parrot-sdk/pull/43",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1980882653
|
st_pages deployment on streamlit cloud not working
I have been working on my app (two pages) here https://github.com/gloriamacia/pixarstar/blob/main/genai/.streamlit/pages.toml
This was my pages.toml
[[pages]]
path = "streamlit_app.py"
name = "Home"
icon = "🎨"
[[pages]]
path = "pages/stability_ai_app.py"
name = "Draw"
icon = "🖼️"
When I deployed, I specified genai/streamlit_app.py as the entry point of the app. I am getting this error:
https://pixarstar.streamlit.app/stability_ai_app
I tried changing pages.toml to
[[pages]]
path = "genai/streamlit_app.py"
name = "Home"
icon = "🎨"
[[pages]]
path = "genai/pages/stability_ai_app.py"
name = "Draw"
icon = "🖼️"
It still does not work. What I do not understand is that in local development, running the first version with poetry run python -m streamlit run streamlit_app.py was working just fine.
What am I missing here? Maybe the deployment instructions should be improved because it does not look obvious to me.
Thanks a lot in advance.
I could not really figure it out and I decided not to import the config. This worked just fine after changing the paths. I believe part of the reason may be that setting up a symbolic link may be necessary depending on where the .streamlit directory is. In any case, great library - a couple of things should be clarified a bit more.
|
gharchive/issue
| 2023-11-07T09:11:50 |
2025-04-01T04:56:10.760929
|
{
"authors": [
"gloriamacia"
],
"repo": "blackary/st_pages",
"url": "https://github.com/blackary/st_pages/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1768596947
|
Conditional logic for using agent's proxy configuration
This enhancement adds conditional logic to use the Agent's proxy configuration when the separate BD Proxy Service Endpoint is not used. Inheriting the Agent's proxy configuration is the approach recommended by Microsoft, and it mitigates the need for users to worry about proxy configurations in each pipeline. The conditional logic uses the BD Proxy endpoint details by default if configured in the pipeline, and otherwise uses the Agent's proxy configuration if the agent has a proxy configured. That way there should be no impact on existing customers.
The Microsoft documentation for this approach can be found -
https://github.com/Microsoft/azure-pipelines-task-lib/blob/master/node/docs/proxy.md
as well as here - https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/proxy?view=azure-devops&tabs=windows#how-the-agent-handles-the-proxy-within-a-build-or-release-job
"Task authors need to use azure-pipelines-task-lib methods to retrieve proxy configuration and handle the proxy within their task"
I tested this change in our ADO instance for various scenarios like running on cloud agents, on prem agents, different OS, with/without the BD Proxy endpoint selected, and everything was working well. Please let me know if you have any questions or there is anything I can help test with, etc.
closing as this was handled per #69
|
gharchive/pull-request
| 2023-06-21T22:24:56 |
2025-04-01T04:56:10.766761
|
{
"authors": [
"pbcahill"
],
"repo": "blackducksoftware/detect-ado",
"url": "https://github.com/blackducksoftware/detect-ado/pull/68",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
544056960
|
[Gally]: master <- dev
Automatically created by Git-Ally
:tada: This PR is included in version 1.10.145 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-12-31T01:13:24 |
2025-04-01T04:56:10.784983
|
{
"authors": [
"MrsFlux"
],
"repo": "blackflux/lambda-example",
"url": "https://github.com/blackflux/lambda-example/pull/1093",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
435045865
|
[Gally]: master <- dev
Automatically created by Git-Ally
:tada: This PR is included in version 2.7.4 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-04-19T04:31:24 |
2025-04-01T04:56:10.790038
|
{
"authors": [
"MrsFlux"
],
"repo": "blackflux/lambda-tdd",
"url": "https://github.com/blackflux/lambda-tdd/pull/394",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
481429946
|
[Gally]: master <- dev
Automatically created by Git-Ally
:tada: This PR is included in version 1.4.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2019-08-16T03:53:17 |
2025-04-01T04:56:10.792462
|
{
"authors": [
"MrsFlux"
],
"repo": "blackflux/node-tdd",
"url": "https://github.com/blackflux/node-tdd/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
642830559
|
[Gally]: master <- dev
Automatically created by Git-Ally
:tada: This PR is included in version 3.0.47 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2020-06-22T07:33:18 |
2025-04-01T04:56:10.797115
|
{
"authors": [
"MrsFlux"
],
"repo": "blackflux/s3-cached",
"url": "https://github.com/blackflux/s3-cached/pull/1657",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1651163257
|
NotFound when publishing package
I have created a publisher account and I've run nyssa init, which has left me with the following:
{
"name": "miniroute",
"version": "1.0.0",
"description": "Tiny and simple HTTP router.",
"homepage": "https://github.com/BenStigsen/miniroute",
"tags": [
"http",
"router"
],
"author": "Benjamin Stigsen",
"license": "ISC",
"sources": [
"https://nyssa.bladelang.com"
],
"deps": {}
}
If I run nyssa info I correctly get:
PACKAGE INFORMATION
~~~~~~~~~~~~~~~~~~~
• Name: miniroute
• Version: 1.0.0
• Description: Tiny and simple HTTP router.
• Homepage: https://github.com/BenStigsen/miniroute
• Author: Benjamin Stigsen
• License: ISC
• Tags: http, router
DEPENDENCIES
~~~~~~~~~~~~
None
SOURCES
~~~~~~~
• https://nyssa.bladelang.com
However, when I do any of the following:
nyssa publish
nyssa publish -r .
nyssa publish -r https://github.com/BenStigsen/miniroute
nyssa publish -r https://nyssa.bladelang.com/
It results in:
Checking for valid publisher account
Checking for valid Nyssa package
Packaging miniroute@1.0.0...
NotFound -> no such file or directory
I have reinstalled Nyssa and tried cleaning the cache.
Please share a link to the package.
It's a bug in the zip module of Blade. I'm raising a ticket to fix it now. In the meantime, in your Blade installation path, open the file libs/zip.b and change line 587 from:
if npath.starts_with('/')
To
if npath.starts_with(os.path_separator)
You should be able to publish after that.
Yes this made me able to publish! https://nyssa.bladelang.com/view/miniroute
One thing does surprise me though, how come it does not show 1.0.1 by default, but instead shows version 1.0.0?
That's a bug. I'll fix now.
I was just wondering if it was a design choice.
I don't mean to stress you or take your focus away from other tasks 😄
Fixed in v0.1.7
|
gharchive/issue
| 2023-04-03T00:55:29 |
2025-04-01T04:56:10.863081
|
{
"authors": [
"BenStigsen",
"mcfriend99"
],
"repo": "blade-lang/nyssa",
"url": "https://github.com/blade-lang/nyssa/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
773343833
|
Update README.md
Fix Editing Articles link
thank you
|
gharchive/pull-request
| 2020-12-23T01:00:22 |
2025-04-01T04:56:10.864350
|
{
"authors": [
"blakadder",
"ghoost82"
],
"repo": "blakadder/zigbee",
"url": "https://github.com/blakadder/zigbee/pull/156",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1353489821
|
Fix pages with broken front matter
Fixes various yaml syntax errors.
Also adds config to prevent building with front matter errors.
thanks a lot for the fixes
|
gharchive/pull-request
| 2022-08-28T22:45:27 |
2025-04-01T04:56:10.865554
|
{
"authors": [
"blakadder",
"lukashass"
],
"repo": "blakadder/zigbee",
"url": "https://github.com/blakadder/zigbee/pull/738",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
688640149
|
1.16.2 crash
crash-2020-08-29_19.33.42-client.txt
forge: 33.0.20
version: 1.16.2-9.1.0
Duplicate of #465
|
gharchive/issue
| 2020-08-30T02:34:56 |
2025-04-01T04:56:10.876193
|
{
"authors": [
"Warbringer12",
"blay09"
],
"repo": "blay09/CookingForBlockheads",
"url": "https://github.com/blay09/CookingForBlockheads/issues/469",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1819258219
|
Why are the tokens counted differently than OpenAI?
Hi @blaze-Youssef !
First, thanks for the tool, very useful, as I'm stuck in openlimit by similar issues that you met in https://github.com/shobrook/openlimit/issues/4 .
However, I'm a bit stuck trying to understand. What does the argument max_tokens correspond to, please, in https://github.com/blaze-Youssef/openai-ratelimiter/blob/main/openai_ratelimiter/defs.py#L9 ?
I am trying to understand it, but the way you count tokens, which is the same as in openlimit https://github.com/shobrook/openlimit/blob/master/openlimit/utilities/token_counters.py#L14 , is different than in OpenAI's cookbook https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb (see section 6).
Would you or @shobrook be able to help clarify this counting, please?
Ah! Found max_tokens in the OpenAI API, as an optional parameter: https://platform.openai.com/docs/api-reference/completions/create#completions/create-max_tokens
Do I understand correctly that n * max_tokens is here to preemptively count into the rate limit the maximum possible number of output tokens?
Ungh, now I see that OpenAI has a token counter in their batch-API script that differs from the one in their tutorial Notebook: https://github.com/openai/openai-cookbook/blob/main/examples/api_request_parallel_processor.py#L339
The former does indeed take max_tokens into account, but it is also less recent than their notebook, which has different token increments for roles' names. So I don't know which version of their token counter to believe 😅
Shouldn't a rate limiter, in any case, be updated after the actual completion is returned by the API, to account for the actual number of output tokens?
Hi,
I will dig into the OpenAI notebooks and update the implementation if necessary.
As far as I know, each request's tokens are calculated this way:
Prompt tokens + Max tokens = request total tokens.
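For reference, a minimal sketch of that estimate in Python, assuming tiktoken is installed; the per-message overhead constants roughly follow the OpenAI cookbook notebook and may vary between model versions:
import tiktoken

def estimate_request_tokens(messages, max_tokens, model="gpt-3.5-turbo"):
    # Count prompt tokens roughly the way the cookbook does, then reserve
    # max_tokens for the completion so the limiter books the worst case.
    enc = tiktoken.encoding_for_model(model)
    num_tokens = 0
    for message in messages:
        num_tokens += 3  # per-message overhead (assumed; differs per model)
        for key, value in message.items():
            num_tokens += len(enc.encode(value))
            if key == "name":
                num_tokens += 1  # extra token when a name is supplied (assumed)
    num_tokens += 3  # priming tokens for the assistant reply (assumed)
    return num_tokens + max_tokens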
Thanks. It looks like there are two contradictory implementations from OpenAI: one in the notebook, the other in their batch call. They differ not just by the accounting for max_tokens, but also by how they handle the role names depending on the model (admittedly, a rather smaller factor!)
|
gharchive/issue
| 2023-07-24T22:47:55 |
2025-04-01T04:56:10.885876
|
{
"authors": [
"blaze-Youssef",
"jucor"
],
"repo": "blaze-Youssef/openai-ratelimiter",
"url": "https://github.com/blaze-Youssef/openai-ratelimiter/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2562588372
|
feat: add local langfuse tracing option
why
The purpose of this PR is to integrate local Langfuse tracing into the project to enhance debugging and monitoring capabilities. Tracing allows developers to observe the flow of execution and diagnose issues more effectively.
what
Exchange Package:
- Defined observe decorator wrapper observe_wrapper to use the observe decorator only if Langfuse local env variables are set
- Add observe decorator to tool calling function and to providers' completion functions
Goose:
- Modifications in to set up Langfuse tracing upon CLI initialization.
- Updates in to trace session-level information.
- Add observe decorator to reply
usage
Developers can use locally hosted Langfuse tracing by applying the observe_wrapper decorator to functions for automatic integration with Langfuse, providing detailed execution insights. Docker is a requirement.
To enable tracing, launch the CLI with the --tracing flag, which will initialize Langfuse with the set env variables
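As a rough illustration (not the exact code in this PR), a conditional decorator along the lines of observe_wrapper could look like the sketch below, assuming the langfuse Python SDK's observe decorator and the usual LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY environment variables:
import os
import functools

from langfuse.decorators import observe

def observe_wrapper(*observe_args, **observe_kwargs):
    # Apply langfuse's @observe only when local credentials are configured;
    # otherwise return the function unchanged so tracing stays a no-op.
    def decorator(func):
        if os.environ.get("LANGFUSE_PUBLIC_KEY") and os.environ.get("LANGFUSE_SECRET_KEY"):
            return observe(*observe_args, **observe_kwargs)(func)

        @functools.wraps(func)
        def passthrough(*args, **kwargs):
            return func(*args, **kwargs)

        return passthrough
    return decorator
A provider completion function could then be annotated with, e.g., @observe_wrapper(as_type="generation") without forcing every user to run a Langfuse server.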
setting up locally hosted Langfuse
Run setup_langfuse.sh script (in packages/exchange/src/exchange/langfuse) to download and deploy the Langfuse docker container with default initialization variables found in the .env.langfuse.local file.
Go to http://localhost:3000/ and log in with the default email/password output by the shell script (or found in the .env.langfuse.local file).
Run Goose with the --tracing flag enabled i.e., goose session start --tracing
When Goose starts you should see the log INFO:exchange.langfuse:Langfuse credentials found. Find traces, if enabled, under http://localhost:3000.
Go to http://localhost:3000/ and you should be able to see your traces
Sample trace viewing:
Hey @ahau-square,
The Langfuse UI looks good for users to view the traces! Below is what I found about tracing with Langfuse
Pros
UI is clear and easy to browse
API is easy to use and clean
Built-in LLM-specific tracing
Cons
Manual steps to set up the Langfuse account
We could copy the docker compose file into our repo to start the docker container instead of asking the user to clone the langfuse repo. (The drawback is that the docker compose file could be out of date; however, it does not contain much customised logic, it mainly defines the image and basic configurations.)
However, I found it is hard to automate the steps to sign up as a user, create a project and get the api keys
Alternative
Open telemetry can integrated with a few tools that visualise the tracing. For example, export to local Zipkin or using otel-tui as @codefromthecrypt suggested. These tools can be started as docker container via scripts without manual steps.
For your reference, this is the PR @codefromthecrypt created for tracing.
When the user passes --tracing but the langfuse server is not up, it shows the error "ERROR:langfuse:Unexpected error occurred. Please check your request and contact support: https://langfuse.com/support."
Maybe we can validate whether the server is up when the user passes the --tracing option and remind the user to start the server
ironically just got off a call with @marcklingen. So marc, "goose" as you can read in its lovely readme is an inference backed SWE capable buddy. It uses its own LLM abstraction called exchange. I've thought about instrumenting this to use the genai conventions as tagged above. I actually haven't gotten around to spiking that, either.
My thinking is that we can maybe do both (langfuse and otel) until langfuse handles OTLP? Then compare until the latter works as well. wdyt? Do you have an issue to follow-up on that?
Also, if you can help @ahau-square meanwhile with any pointers or any notes about comments above. We've been using python VCR tests at times, so we can probably test things work even without accounts. Depending on how things elaborate I don't mind contributing help on that as it is a good way to learn.
Meanwhile, I think let's see where we get and keep a debug story going, even if it is work in progress
thanks for tagging me @codefromthecrypt, happy to help, "goose" seems cool and I'll try to give it a proper spin later this week
My thinking is that we can maybe do both (langfuse and otel) until langfuse handles OTLP? Then compare until the latter works as well. wdyt? Do you have an issue to follow-up on that?
This seems very reasonable
I am generally very excited about standardization on the instrumentation side via opentelemetry. We are tracking support for OTLP in Langfuse here, feel free to subscribe to the discussion or add your thoughts: https://github.com/orgs/langfuse/discussions/2509
However, I found it is hard to automate the steps to sign up as a user, create a project and get the api keys. It is a once-off step for the user. If we could automate it, the user will have smoother experiences
Also, if you can help @ahau-square meanwhile with any pointers or any notes about comments above. We've been using python VCR tests at times, so we can probably test things work even without accounts. Depending on how things elaborate I don't mind contributing help on that as it is a good way to learn.
Langfuse supports initialization via environment variables which would overcome the issue of having to create an account via the local langfuse server. I will add this to the default docker compose configuration as well and update this message once done.
Side note: I think interesting here would be to return a deep link to the trace in the local langfuse ui on the cli output if tracing is enabled to make it easier to jump to the trace from any invocation of goose for the debugging use case.
Added support for headless initialization via Docker Compose here: https://github.com/langfuse/langfuse/pull/3568/files.
This change required explicitly handling empty string values for these environment variables, as Docker Compose does not allow adding environment variables that may not exist without setting a default value. Therefore, this depends on the next langfuse release which will include this fix (scheduled for tomorrow).
we just released the change. let me know if you have any questions, @ahau-square. You can now pass the init envs to the Langfuse docker compose deployment to initialize default credentials.
There's no way to bypass it. They'd need to sign in with the default credentials to view the data in the UI.
In addition to the above @ahau-square, if this is local anyway, you could make all traces public. Thereby users could directly jump from a trace url in their console output to viewing the trace in their local langfuse instance without having to sign in. This will limit the features that are available though.
more on this here: https://langfuse.com/docs/tracing-features/url#share-trace-via-url
nit: is this a chore or a feature? the PR title shows chore, but I suspect the result is more a feature than say, reformat files. wdyt?
Nice one! @ahau-square
A couple of suggestions:
Instead of checking out the langfuse repository, how about in the script we copy the docker-compose.yaml file from the latest langfuse git repo and then start the container? In this case, the user has one step less to set up the langfuse local server and we don't need to have Langfuse code in our local goose project.
The langfuse folder has files with mixed purposes.
langfuse.py: used in the goose (with exchange) application.
setup_langfuse.sh: used to start Langfuse server
env.langfuse.local: used for both goose and Langfuse server
Since the Langfuse server is a self-contained application, we could create a folder outside of src/exchange (could be under goose, such as langfuse-server); it would contain setup_langfuse.sh, the env file containing the variables for starting Langfuse, and the docker-compose file
@ahau-square was asking what kinds of tests should be written for this PR.
In terms of testing, I feel like we can keep it simple and just write unit tests
observe_wrapper function. Test cases:
langfuse_context observe function is called (in addition to executing the function) if the credential env variables exist
if the credentials do not exist, it executes the function only
maybe we can also write tests on openai.py… to make sure the decorator is applied when the credentials are present.
I found it is a bit hard to test at the integration level. Although we can run the tests and start the Langfuse server, we still have to manually log in to verify the traced data unless there is an existing Langfuse api we can use to verify it
Another alternative is to test the integration with Langfuse, but that requires us to know the implementation details of langfuse_context.observe. I feel it is a bit overkill for us to test in this way too. FYI these look like the relevant implementation and tests in Langfuse python.
It would be great if you could give us some advice @marcklingen :) Thank you!
Usually testing this is overkill for most teams as you mentioned.
If you however want to test it, I've seen some teams run an example and then fetch the same trace id (fetch_trace) to check that it includes all of the observations that it should include. This however necessitates that you (1) run langfuse in ci via docker compose, (2) use flush to make sure the events are sent immediately at the end of the test, and (3) wait e.g. 2-5sec in CI as Langfuse does not have read after write consistency on the apis.
Usually testing this is overkill for most teams as you mentioned.
If you however want to test it, I've seen some teams run an example and then fetch the same trace id (fetch_trace) to check that it includes all of the observations that it should include. This however necessitates that you (1) run langfuse in ci via docker compose, (2) use flush to make sure the events are sent immediately at the end of the test, and (3) wait e.g. 2-5sec in CI as Langfuse does not have read after write consistency on the apis.
Thanks for that @marcklingen! I feel that we could skip this kind of integration test until tracing becomes a critical feature for our application, but it is good to know about how people test!
Even if we don't run tests in CI right now, probably worth describing them or even having a mechanism with default config in the repo, mentioned in CONTRIBUTING.md? Usually fixtures like this are quite tricky to setup the first time. If we punt them, then someone potentially not on this issue yet may have a large task which is to figure out what's wrong, how to test it locally, etc while under pressure. "An ounce of prevention is worth a pound of cure." sort of thing
Seems like this is roughly an hour to set up. You can have a look at how we run Langfuse via docker compose in the Langfuse/Langfuse main CI pipeline (public GH Actions)
@marcklingen thanks. Here's a suggestion forward: if we don't add an integrated test or an integrated ad-hoc capability in this PR, add something to CONTRIBUTING.md like this.
Langfuse is experimental, and we don't currently have integration tests. To try this locally with docker, use steps similar to Langfuse CI pipeline.
We could still investigate VCR tests to have some sort of test working. For example, record requests made to a saas or ad-hoc install of langfuse. If decide to do no tests, maybe someone should sign up to manually verify before merge?
hope the suggestions help!
Thanks all for the feedback! I've moved the langfuse-related code into a new package and also updated to only pull the docker compose file from the langfuse repo.
I'm going to skip the integration tests for now as this is still experimental. If it gets more traction we can revisit and left a note in the README in the langfuse-wrapper package to that effect. @codefromthecrypt if you want to take a stab at something more please feel free (tbh I'm not totally sure I understand what you're suggesting). I've implemented a couple tests per @lifeizhou-ap 's guidance.
Hi @ahau-square The PR looks good!
Just wondering why we create a project for langfuse_wrapper instead of leaving it in exchange? I guess it might be reused in other projects in addition to exchange or goose and would be published as a package? If it is not going to be reused, we might not need to have a separate project for langfuse_wrapper
I think both way works. I am just curious about it. :)
this is awesome! pretty excited to try this out :)
I think both way works. I am just curious about it. :)
We need the wrapper in the block plugins repo also since we want to wrap the block provider completions there as well.
Nice @ahau-square, does it make sense to add a quick get-started to the docs? Happy to help
|
gharchive/pull-request
| 2024-10-02T20:30:10 |
2025-04-01T04:56:11.000782
|
{
"authors": [
"ahau-square",
"codefromthecrypt",
"lamchau",
"lifeizhou-ap",
"marcklingen"
],
"repo": "block-open-source/goose",
"url": "https://github.com/block-open-source/goose/pull/106",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2127820227
|
TheHeader disabled icons improvement
closes #80
Hey @eduramme !
I updated the PR with the latest updates from PR #100 and changed the base branch to merge 'feat/header-adjustments' to develop
|
gharchive/pull-request
| 2024-02-09T20:42:15 |
2025-04-01T04:56:11.014428
|
{
"authors": [
"eduramme",
"heronlancellot"
],
"repo": "blockful-io/swaplace-dapp",
"url": "https://github.com/blockful-io/swaplace-dapp/pull/105",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
280227731
|
I have no idea about the newsletter in the admin dashboard.
How do I manage the newsletter in the admin dashboard?
When someone subscribes to the newsletter, a new email is added to the list.
In the admin, click "Packages" and then click "Newsletter". It should load list of subscribers. When you publish new post, subscribers get email notification.
Currently, you can only view this list and remove emails from the list. But there will be template edit and other improvements coming soon.
In the admin, click "Packages" and then click "Newsletter". It should load list of subscribers.
the admin path is as follows:
/blogifier/widgets/Newsletter/settings
It is ok, I saw the subscribers, and I can remove emails from the list.
many thanks.
|
gharchive/issue
| 2017-12-07T18:13:22 |
2025-04-01T04:56:11.043117
|
{
"authors": [
"forconz",
"rxtur"
],
"repo": "blogifierdotnet/Blogifier",
"url": "https://github.com/blogifierdotnet/Blogifier/issues/75",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2596397809
|
🛑 Maxthon API is down
In 7d03e16, Maxthon API (https://api.maxthon.com) was down:
HTTP code: 500
Response time: 494 ms
Resolved: Maxthon API is back up in 51e2da2 after 7 minutes.
|
gharchive/issue
| 2024-10-18T04:43:24 |
2025-04-01T04:56:11.046797
|
{
"authors": [
"bloodchen"
],
"repo": "bloodchen/upptime",
"url": "https://github.com/bloodchen/upptime/issues/1630",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2635266908
|
🛑 Maxthon API is down
In 13f668a, Maxthon API (https://api.maxthon.com) was down:
HTTP code: 500
Response time: 709 ms
Resolved: Maxthon API is back up in bd29932 after 27 minutes.
|
gharchive/issue
| 2024-11-05T12:01:31 |
2025-04-01T04:56:11.049275
|
{
"authors": [
"bloodchen"
],
"repo": "bloodchen/upptime",
"url": "https://github.com/bloodchen/upptime/issues/2155",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2434650651
|
🛑 Maxthon API is down
In 3b1807b, Maxthon API (https://api.maxthon.com) was down:
HTTP code: 500
Response time: 462 ms
Resolved: Maxthon API is back up in 2e7b9ae after 7 minutes.
|
gharchive/issue
| 2024-07-29T07:20:56 |
2025-04-01T04:56:11.051505
|
{
"authors": [
"bloodchen"
],
"repo": "bloodchen/upptime",
"url": "https://github.com/bloodchen/upptime/issues/349",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2540485877
|
🛑 Maxthon API is down
In bb8f473, Maxthon API (https://api.maxthon.com) was down:
HTTP code: 500
Response time: 516 ms
Resolved: Maxthon API is back up in cd98cf5 after 7 minutes.
|
gharchive/issue
| 2024-09-21T19:46:46 |
2025-04-01T04:56:11.053734
|
{
"authors": [
"bloodchen"
],
"repo": "bloodchen/upptime",
"url": "https://github.com/bloodchen/upptime/issues/901",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
166620590
|
Create HBase GC log directory and fix HBASE_REGIONSERVER_OPTS
This PR has two fixes
Create /var/log/hbase/gc directory. Even though hbase processes run with -Xloggc option gc logs are not written due to the absence of this directory.
Fix the bug in the current code so that HBASE_REGIONSERVER_OPTS is set with all the JVM related parameters so that they take in effect when hbase region server processes start.
LGTM
|
gharchive/pull-request
| 2016-07-20T16:09:23 |
2025-04-01T04:56:11.076834
|
{
"authors": [
"amithkanand",
"bijugs"
],
"repo": "bloomberg/chef-bach",
"url": "https://github.com/bloomberg/chef-bach/pull/613",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1683935302
|
Build musl x86-64 wheels
#352 Set up CI to build and test musl libc wheels
Describe your changes
Enable musl builds in the pyproject file. Dependencies for memray are supplied using comparable libraries from the Alpine main repository.
Testing performed
Running against GitHub actions is producing long build times. Further changes may be necessary.
Hummmm, seems that lldb is missing from the alpine repo?
I have updated the PR skipping 32 bit muslinux wheels
Hummmm, seems that lldb is missing from the alpine 32 bits repo?
We could make that install optional. We already do that for the manylinux builds. The test suite just skips that memray attach test when lldb isn't available.
We probably don't want 32 bits muslc wheels in any case
But that's fine, too.
There were still a few failing tests even for musl x86-64. Looks like the old musl libc version in the musllinux_1_1 container implements aligned_alloc and realloc in terms of the malloc symbol in the PLT, so we needed to set the recursion guard in a few more places in the hooks to prevent double counting of allocations with those allocators.
|
gharchive/pull-request
| 2023-04-25T21:59:49 |
2025-04-01T04:56:11.081820
|
{
"authors": [
"godlygeek",
"pablogsal",
"salticus"
],
"repo": "bloomberg/memray",
"url": "https://github.com/bloomberg/memray/pull/379",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1570032472
|
Bug Report: Robot reported as offline but is OK in official app
Describe The Bug:
Plug-in says in the log that iRobot went offline again and again but if I look in the official app it is ok and I can start the cleaning process
To Reproduce:
Connect iRobot account with iRobot 976 on it. Then wait a couple of hours/days. Sometimes it works ok, sometimes it does not.
Expected behavior:
iRobot should be always available in HomeBridge if it is ok in official app.
Logs:
Attempting To Reconnect To Roomba
[03/02/2023, 06:32:38] *
Roomba went offline, disconnecting...
[03/02/2023, 06:32:38] *
Roomba connection closed, reconnecting in 5 seconds
[03/02/2023, 06:32:43] *
Attempting To Reconnect To Roomba
Config:
{
"name": "*",
"email": "*",
"password": "*",
"roombas": [
{
"autoConfig": true,
"info": {
"ver": 2
}
}
],
"manualDiscovery": false,
"lowBattery": 10,
"offAction": "stop:dock",
"status": "phase:run",
"eveStatus": "inverted:phase:run",
"bin": "filter:motion",
"ignoreMultiRoomBin": false,
"hideStuckSensor": false,
"disableMultiRoom": true,
"platform": "iRobotPlatform"
Node Version:
16.4.0
NPM Version:
8.5.1
Homebridge Version:
1.6.0
Plugin Version:
Latest
Operating System:
Raspberian
Same problem here too. Plugin works on a raspi but not on my Ubuntu VM for some reason…
|
gharchive/issue
| 2023-02-03T15:34:11 |
2025-04-01T04:56:11.085714
|
{
"authors": [
"frankb-CZ",
"mgoeppl"
],
"repo": "bloomkd46/homebridge-iRobot",
"url": "https://github.com/bloomkd46/homebridge-iRobot/issues/187",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1198436530
|
Missing transactions?
I am kinda new to this, so if I do something wrong please let me know:
backendService.getAddressService().getTransactions("<RECEIVE_ADDR>",100,1, OrderEnum.desc)
.getValue()
gives 19 transactions back, whilst ccvault reports 25.
I am using 0.2.0-beta3 version
@chmod I tried with an address. It works for me.
You can verify the total transaction count for the address in cardanoscan.io
Can you please check in ccvault if the additional 6 transactions belong to another address at a different index ? Usually, wallet uses addresses at different indices for the same account.
If you want to simulate that in your java code, then you need to fetch transaction separately for address at each index.
Thanks. Using cardanoscan.io I found out that my wallet doesn't have just one address, but plenty. 🤯 Is there some way one can fetch all addresses given the stake hash?
|
gharchive/issue
| 2022-04-09T08:39:18 |
2025-04-01T04:56:11.090769
|
{
"authors": [
"chmod",
"satran004"
],
"repo": "bloxbean/cardano-client-lib",
"url": "https://github.com/bloxbean/cardano-client-lib/issues/104",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2685095473
|
[BUG]
Acknowledgement of preliminary instructions
[X] I have read the preliminary instructions, and I am certain that my problem has not already been addressed.
[X] I have thoroughly looked through the available Wiki articles and could not find a solution to my problem.
[X] I am using the latest version of Bloxstrap.
[X] I did not answer truthfully to all the above checkboxes.
Bloxstrap Version
v2.5.4
What problem did you encounter?
Bloxstrap_20241123T013623Z.log
Bloxstrap Log
System.AggregateException: One or more errors occurred. (The given key 'content-platform-dictionaries.zip' was not present in the dictionary.)
[Inner Exception]
System.Collections.Generic.KeyNotFoundException: The given key 'content-platform-dictionaries.zip' was not present in the dictionary.
Update.
Update Bloxstrap.
|
gharchive/issue
| 2024-11-23T01:47:50 |
2025-04-01T04:56:11.093994
|
{
"authors": [
"RAFAkok",
"bluepilledgreat",
"ihatewoodenspoons"
],
"repo": "bloxstraplabs/bloxstrap",
"url": "https://github.com/bloxstraplabs/bloxstrap/issues/3818",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1997422031
|
Change in memory limit calculation
We've hit a memory limit issue while bumping Revm from 3.5.0 to 1609e07c68048909ad1682c98cf2b9baa76310b5 (https://github.com/bluealloy/revm/compare/v26...1609e07c68048909ad1682c98cf2b9baa76310b5) in https://github.com/foundry-rs/foundry/pull/6281 while testing pcaversaccio/snekmate's Multicall.
Some calls to Multicall::multicall_self in these tests use about 60MiB in total, but split across multiple internal calls. Foundry by default uses 32MiB memory limit, so these calls failed after updating Revm with MemoryLimitOOG.
I believe the root cause is in this commit: https://github.com/bluealloy/revm/commit/b5aa4c9f2868f0101bc567baacbcbe0707193371
Since it's the only one that changed this logic.
Before:
new_size > ($interp.memory_limit as usize)
After:
(self.last_checkpoint + new_size) as u64 > self.memory_limit
I'm not sure if this is correct. Should this just be checking new_size only again?
cc @rakita @lorenzofero @pcaversaccio
Hi Dani! Yes I probably had a misunderstanding about memory limit when I worked on this, I'm sorry. Right now it checks that, between all contexts, the amount of memory allocated does not exceed memory_limit.
Looking at the original code it seems that you're allowed to allocate up to 32MiB in every context, which is different than the behavior of the current shared memory.
If you agree I can make a little PR to fix this: as you said we should be checking new_size only.
memory_limit was introduced for foundry. I noticed this change in behaviour; it felt nicer that the limit applies to all memory allocated, since if it were 32mb per stack then the maximum is 32mb*1024. I like this better but wouldn't mind reverting back to a per-stack limit
@rakita It would be awesome to revert this change as since https://github.com/foundry-rs/foundry/pull/6281 is now merged and all my commits in snekmate will now fail (I use foundry nightly builds) with these 2 specific failures. Maybe as a background why I do such crazy tests: Snekmate is a Vyper library and in Vyper you statically allocate memory at compile time for e.g. dynamic arrays. Since snekmate should be a generic module essentially I put high placeholders there in multicall for the memory allocation. People using this contract can reduce it according to their needs. But I test kind of the upper limit.
@rakita It would be awesome to revert this change as since https://github.com/foundry-rs/foundry/pull/6281 is now merged and all my commits in snekmate will now fail (I use foundry nightly builds) with these 2 specific failures. Maybe as a background why I do such crazy tests: Snekmate is a Vyper library and in Vyper you statically allocate memory at compile time for e.g. dynamic arrays. Since snekmate should be a generic module essentially I put high placeholders there in multicall for the memory allocation. People using this contract can reduce it according to their needs. But I test kind of the upper limit.
Solution would be for foundry to expose the memory limit as a config or just bump the current value to 256mb or 512mb; that would solve the problem.
I'm indifferent to how it's solved and leave that to you guys; I just need a solution that works :)
We believe the new behavior makes more sense, so we're bumping the default memory limit to 128MiB in https://github.com/foundry-rs/foundry/pull/6338.
|
gharchive/issue
| 2023-11-16T17:44:31 |
2025-04-01T04:56:11.118694
|
{
"authors": [
"DaniPopes",
"lorenzofero",
"pcaversaccio",
"rakita"
],
"repo": "bluealloy/revm",
"url": "https://github.com/bluealloy/revm/issues/865",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2595054647
|
fix: Fix bug if inner sub set is empty causing NPE [tentative]
(I haven't tried but I think this might be the issue)
Tries to fix https://github.com/flame-engine/flame/issues/3347 (needs to validate)
Pull Request Test Coverage Report for Build 11388053958
Details
2 of 2 (100.0%) changed or added relevant lines in 1 file are covered.
No unchanged relevant lines lost coverage.
Overall coverage increased (+0.02%) to 97.345%
Totals
Change from base Build 10766990276:
0.02%
Covered Lines:
110
Relevant Lines:
113
💛 - Coveralls
|
gharchive/pull-request
| 2024-10-17T15:20:50 |
2025-04-01T04:56:11.143100
|
{
"authors": [
"coveralls",
"luanpotter"
],
"repo": "bluefireteam/ordered_set",
"url": "https://github.com/bluefireteam/ordered_set/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
233083859
|
Fix error with gulp task minifyHtml
Please see https://github.com/blueimp/JavaScript-MD5/pull/17#issuecomment-251092848
|
gharchive/pull-request
| 2017-06-02T05:54:20 |
2025-04-01T04:56:11.145579
|
{
"authors": [
"blueimp",
"clementbirkle"
],
"repo": "blueimp/JavaScript-MD5",
"url": "https://github.com/blueimp/JavaScript-MD5/pull/21",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
543412817
|
Is it possible to get progress on delayed data.submit() ?
I have a case where I have to save the form first - to get an id - which the uploaded files will be related to.
It works fine - but I cannot get any progress data.
is it possible?
Here is my code: first collect the data-objects in a pending list.
var pendingList = [];
$('#fileupload').fileupload({
dropZone: $('#drop-area'),
uploadTemplateId: null,
downloadTemplateId: null,
url: multi_upload_action,
autoUpload: false,
add: function (e, data)
{
$.each(data.files, function (index, file)
{
var row = $('<div class="template-upload">' +
'<div class="table-cell">' +
'<div class="name">' + file.name + '</div>' +
'<div class="error"></div>' +
'</div>' +
'<div class="table-cell">' +
'<div class="size">Processing...</div>' +
'</div>' +
'<div class="table-cell">' +
'<div class="progress" style="width: 100px;"></div>' +
'</div>' +
'</div>');
var file_size = formatFileSize(file.size);
row.find('.size').text(file_size);
data.context = row.appendTo($(".content_upload_download"));
});
pendingList.push(data);
},
limitConcurrentUploads: 1,
maxChunkSize: 8388000
});
Then - later on - send the files with extra info.
sendAllFiles = function (id, redirect_action)
{
var total_files = pendingList.length;
var n = 0;
pendingList.forEach(function (data)
{
data.formData = {id: id};
data.submit()
.done(function (data, status)
{
n++;
$.each(data.files, function (index, file)
{
if (typeof file.error != 'undefined' && file.error)
{
alert(file.name + ': ' + file.error);
}
});
if (n == total_files)
{
window.location.href = redirect_action;
}
});
});
};
Regards
Hi @sigurdne, please see #2190.
|
gharchive/issue
| 2019-12-29T14:07:55 |
2025-04-01T04:56:11.148105
|
{
"authors": [
"blueimp",
"sigurdne"
],
"repo": "blueimp/jQuery-File-Upload",
"url": "https://github.com/blueimp/jQuery-File-Upload/issues/3546",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2059202742
|
mavp2p not routing ParamExt commands
Hi @aler9
When testing a newer version of mavp2p we found that it seems to not route ParamExt commands
[E] at D:\a\auterion-qgroundcontrol\auterion-qgroundcontrol\src\FactSystem\ParameterExt.cc:207 - "No response for param set: "CAM_NEUTRAL""
[D] at D:\a\auterion-qgroundcontrol\auterion-qgroundcontrol\src\FactSystem\ParameterExt.cc:211 - "Param set retry: "CAM_STREAM" 2"
[D] at D:\a\auterion-qgroundcontrol\auterion-qgroundcontrol\src\FactSystem\ParameterExt.cc:211 - "Param set retry: "CAM_STREAM" 3"
[E] at D:\a\auterion-qgroundcontrol\auterion-qgroundcontrol\src\FactSystem\ParameterExt.cc:207 - "No response for param set: "CAM_STREAM""
So AMC is sending out the parameter change to switch between EO/IR and to set the gimbal mode. Payload manager doesn't receive any param change. Looking at the mavp2p configurations sounds like a good next step
Could that be related to this commit?
https://github.com/bluenviron/mavp2p/commit/217a5ca1d2af1f5b28ad5db34833a18e2660064e
@aler9
I dove a little deeper and it seems it does route the messages, but it appears to truncate the payload.
Using an older mavp2p the message comes through with many bytes populated towards the end of the payload; on the new mavp2p only the first 20 or so are populated, the rest is 0x0
@sanderux @aler9 I have also seen that when QGC or Mission Planner does not negotiate stream rates, and mavp2p is configured with --streamreqdisable I can no longer receive parameters or send commands.
Hello, since #18 and #29, the router forwards messages, addressed to a certain system ID and component ID, only to the node which is broadcasting that specific system ID and component ID.
Therefore, this bug may be caused by one of the following things:
the router's inability to route PARAM_EXT_SET messages to the desired target; this might be caused by the fact that the target doesn't broadcast its system ID or component ID and therefore the router doesn't send it the message
the router's inability to correctly re-encode PARAM_EXT_SET messages. In order to obtain TargetSystem and TargetComponent of a message, the router has to decode the message, and then to re-encode it.
Unfortunately i don't have ready-to-use SITLs that supports PARAM_EXT_SET, therefore i was not able to replicate the issue. In order to debug further, we need network dumps of the data exchanged between the control station and the router, and between the router and the drone. They can be generated in this way:
Download wireshark (https://www.wireshark.org/)
Start capturing on the interface used for exchanging packets (if the router and the external hardware or software are both installed on your pc, the interface is probably "loopback", otherwise it's the one of your network card)
Start the router and replicate the issue
Stop capturing, save the result in .pcap format
Attach
2024/05/03 12:34:45 mavp2p v0.0.0
2024/05/03 12:34:45 router started with 2 endpoints
2024/05/03 12:34:45 channel opened: tcp:172.20.110.10:5799
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=2 cid=101
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=2 cid=154
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=2 cid=1
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=194
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=191
2024/05/03 12:34:45 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=236
2024/05/03 12:34:46 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=51
2024/05/03 12:34:46 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=192
2024/05/03 12:34:46 node appeared: chan=tcp:172.20.110.10:5799 sid=1 cid=155
2024/05/03 12:32:40 &frame.V2Frame{IncompatibilityFlag:0x0, CompatibilityFlag:0x0, SequenceNumber:0xc0, SystemID:0xff, ComponentID:0xbe, Message:(*common.MessageParamExtSet)(0xc000258ba0), Checksum:0x870f, SignatureLinkID:0x0, SignatureTimestamp:0x0, Signature:(*frame.V2Signature)(nil)}, &common.MessageParamExtSet{TargetSystem:0x2, TargetComponent:0x65, ParamId:"CAM_STREAM", ParamValue:"", ParamType:0x1}
Seems like ParamValue should not be empty
|
gharchive/issue
| 2023-12-29T08:40:55 |
2025-04-01T04:56:11.165908
|
{
"authors": [
"Autonomost",
"aler9",
"sanderux"
],
"repo": "bluenviron/mavp2p",
"url": "https://github.com/bluenviron/mavp2p/issues/59",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1505576997
|
Add support for value localization + several bugfixes and refactorings
These changes add support for value localization, and also fixes several bugs around the multi-select dropdown relating to the asynchronous row pipeline.
Looks like the version of ngx-grid-core package needs to be updated.
This is pushed now, looks like I did update it but didn't push the change 😆
|
gharchive/pull-request
| 2022-12-21T01:41:11 |
2025-04-01T04:56:11.175048
|
{
"authors": [
"EmanH"
],
"repo": "blueshiftone/ngx-grid",
"url": "https://github.com/blueshiftone/ngx-grid/pull/35",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2703598998
|
UX: Post composer unintuitively expects animated GIFs as video files, offers no warning to users attempting to upload animated GIFs as images
I'm already seeing users in the wild bump into this issue, so I want to ensure there is an issue explicitly regarding this.
Related: #1047, #6433, #6797
#6433 has brought the ability to upload animated GIFs directly to Bluesky, with caveats.
The post composer doesn't appear to know the difference between a static image and an animated image. As such, it doesn't warn users attempting to upload animated GIFs as images, nor attempt to automatically handle animated GIFs so that they can upload correctly.
Animated GIFs are treated and processed as a video format. While this is an expected consequence of media storage and content delivery on the modern web, its implementation in the UI is counter-intuitive for users.
GIF uploads are (currently?) restricted to the Web client, with no warning whatsoever for mobile users.
To smooth out these issues, I would like to request:
A way for the Post composer to automatically detect animated GIFs when uploading as an image attachment. (Crucially, this needs to leave static GIFs alone. #6797)
Silently redirect animated GIFs to the video upload process.
A notice in the Post composer underneath attached GIFs, so that users understand that their animated GIFs will be processed to fit a maximum resolution, framerate, and video bitrate. (This will also greatly benefit users uploading actual videos, as users are often left guessing what the "optimal" video settings are.)
Warn mobile users that animated GIFs can only be uploaded via the Web client, until this is no longer the case.
More detailed information below.
As an example throughout this post, here is my fancy animated GIF. It is a 1080px square image, and plays at an effective 50fps (2ms delay per frame).
https://lollie.me/gifs/LollieLogoGif.gif
Many users know GIF as an image format, and so they will understandably upload animated GIFs as images. A user will see their GIF animating without issue in the post composer, and expect it to upload successfully.
https://github.com/user-attachments/assets/697f2488-9ca2-4d46-9273-1840ee25a70a
There is no warning to direct the user to upload their animated GIF as a video, nor is there any attempt by the post composer to identify the animated GIF and silently redirect it to the video upload process. As a result, the user will unwittingly post the first frame of their animated GIF as a static image, and likely be left with the impression that animated GIF support is broken.
If the user does know to upload their animated GIF as a video, they are beholden to Bluesky's video processing: A maximum "720p" resolution, 30fps, 5000kbps. These limits are not presented to the end user in the UI (even when uploading actual videos), and so an animated GIF created without these limits in mind will likely end up experiencing issues with the resulting conversion. For example:
Low image quality or resolution (anything higher than "720p" in video terms)
Skipped frames (if the animated GIF contains frames with less than 4ms delay)
https://github.com/user-attachments/assets/72f68a6c-7351-413b-a438-651623568108
In cases where an animated GIF has a consistent frame pacing (eg: every frame is 2ms), the conversion to a video framerate of 30fps will be relatively stable. But this can't be guaranteed for animated GIFs with variable frame pacing, which is far more common when dealing with traditional/keyframed animation content.
I'd like to add this situation here as well, the GIF to video file conversion is not optimal for certain kinds of animation such as this one:
https://bsky.app/profile/lordfuckpuppy.social/post/3lc22r6abzk2f
I suspect it is due to compression algorithms working better with frames that are generally similar from one to another, but being bad when it has too much difference between them. Still, the first few frames look decent, and the quality degrades over time so it would be nice for artists if it at least the quality could keep up throughout the video.
When will this be fixed?
bump
|
gharchive/issue
| 2024-11-29T01:41:26 |
2025-04-01T04:56:11.184143
|
{
"authors": [
"HeyItsLollie",
"SpriterGors",
"Zero3K"
],
"repo": "bluesky-social/social-app",
"url": "https://github.com/bluesky-social/social-app/issues/6837",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
562066755
|
dotted_name property should include the obj
fixes #796
Here's what the fix looks like in practice:
In [14]: dotted_name??
Signature: dotted_name(obj)
Source:
def dotted_name(obj):
"""Return the dotted name"""
names = []
while obj.parent is not None:
names.append(obj.attr_name)
obj = obj.parent
names.append(obj.name)
return '.'.join(names[::-1])
File: ~/.ipython/user/mining.py
Type: function
In [15]: dotted_name(m1.low_limit_switch)
Out[15]: 'm1.low_limit_switch'
We intentionally did not include the name of the root object in the dotted name.
If you do, then things like getattr(obj.parent, obj.dotted_name) is obj don't round trip.
Another issue is that the rest of the names in the chain are attr_names, which are extracted from the classes and which python ensures are unique and immutable for us (should check that, but they should be thought of as immutable). On the other hand the parent.name attribute is mutable by the user at run time and there is no guarantee that the name reported by obj.name is the name the object has in the enclosing namespace.
While the default name of a given sub-object is self.parent.name + '_' + self.attr_name, there should be a clear separation between the "names python has a say over because the language needs to work" and the "names the scientists get a say over because the ophyd Device structure is an implementation detail of the collection system that should not leak into analysis". It makes life mildly easier for those of us in the middle if those two sets of names roughly match, but we should not bake that convenience into the code anyplace.
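A small illustrative sketch of that round-trip concern, using a hypothetical resolve helper rather than anything in ophyd:
from functools import reduce

def resolve(root, dotted):
    # Walk attribute access along a dotted name, e.g. "low_limit_switch"
    # or "stage.x.user_readback" (names here are purely illustrative).
    return reduce(getattr, dotted.split("."), root)

# Without the root name in dotted_name, resolve(m1, m1.low_limit_switch.dotted_name)
# returns m1.low_limit_switch. If the root name "m1" were included, the lookup
# would have to be made against the enclosing namespace instead of the device itself.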
|
gharchive/pull-request
| 2020-02-08T19:31:28 |
2025-04-01T04:56:11.193850
|
{
"authors": [
"prjemian",
"tacaswell"
],
"repo": "bluesky/ophyd",
"url": "https://github.com/bluesky/ophyd/pull/797",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
2439810928
|
🛑 Skytalks SSL Website (v4) is down
In eee7adb, Skytalks SSL Website (v4) (https://skytalks.info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Skytalks SSL Website (v4) is back up in 6b4bdc6 after 30 minutes.
|
gharchive/issue
| 2024-07-31T11:25:57 |
2025-04-01T04:56:11.217458
|
{
"authors": [
"bluknight"
],
"repo": "bluknight/skytalks-monitor",
"url": "https://github.com/bluknight/skytalks-monitor/issues/546",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2445968126
|
🛑 Skytalks SSL Website (v4) is down
In e0c27f1, Skytalks SSL Website (v4) (https://skytalks.info) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Skytalks SSL Website (v4) is back up in ecb3b8b after 55 minutes.
|
gharchive/issue
| 2024-08-03T02:41:49 |
2025-04-01T04:56:11.219836
|
{
"authors": [
"bluknight"
],
"repo": "bluknight/skytalks-monitor",
"url": "https://github.com/bluknight/skytalks-monitor/issues/728",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1129587347
|
Support import assertions
import { foo } from './client-module' assert { only: 'client' }
import { bar } from './server-module' assert { only: 'server' }
is it possible to use .{client|server}.{ts|js} instead of ?{server|client}? Or would that be a rewrite?
Playing around with a Nuxt 3 app and want to have a data loader/action like what Remix does.
<script setup lang="ts">
import { useServerData } from '@/lib/loader'
import { prisma } from '@/lib/prisma?server'
const todos = await useServerData(async () => {
const todos = await prisma.todo.findMany()
return todos
})
</script>
<template>
<div>
<ul v-if="todos">
<li v-for="t in todos" :key="t.id">{{ t.title }}</li>
</ul>
<div v-else>No todos</div>
</div>
</template>
Was able to implement a basic loader https://github.com/wobsoriano/nuxt-data-playground
It's likely not a lot of work to support that, but that would only work for files since you can't use that for importing packages. It could also conflict if you actually want to load a file with the .client or .server name, so it would need some fs checks for each import.
|
gharchive/issue
| 2022-02-10T07:47:03 |
2025-04-01T04:56:11.227516
|
{
"authors": [
"bluwy",
"wobsoriano"
],
"repo": "bluwy/vite-plugin-iso-import",
"url": "https://github.com/bluwy/vite-plugin-iso-import/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2255070140
|
Add ability to remove close button
Description
I don't update my EdgeFrfox frequently so I was stuck with v24.2.4. In this version and before, Linux support for the hide tabs toolbar tweak wasn't there. My Firefox window looked like this:
In v24.2.13, the next version, where Linux support was properly added for the hide tabs toolbar tweak, my Firefox window now looks like this:
It would be nice to have a feature to hide the close button and the drag space after it. I don't want to delete the entire Linux/GTK part of hide-tabs.css just to remove my close button.
Now the window controls can be hidden using the uc.tweak.hide-tabs-bar.no-window-controls tweak (either enabled with the existing tweak or on its own):
|
gharchive/issue
| 2024-04-21T13:20:12 |
2025-04-01T04:56:11.232610
|
{
"authors": [
"bmFtZQ",
"m3957"
],
"repo": "bmFtZQ/edge-frfox",
"url": "https://github.com/bmFtZQ/edge-frfox/issues/144",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2035138389
|
How to continue using this plugin,the official undo usage is poor
The official undo is too torturous. Ctrl+Z does not respond at all. Even selecting something gets recorded as an undo step, and it seems to conflict with Efficientnode. I don't know why, but I used this plugin well before.
According to what you said, "Optionally, if you wish to keep using the extension, you will have to remove the undoRedo.js from the web\extensions\core directory."
I found that there is still the same reaction as with the official undo.
In addition, when pressing Ctrl+Z with the official undo, the preview image always disappears, which is very troublesome. It would be great if this problem could be solved.
Hi,
I have to address your issue with some additional questions.
You mentioned the official is 'too torturous', did you disable this extension and delete the ZZZ-Bmad-DirtyUndoRedo folder prior to using it?
If so, and you prefer this extension behavior, but it seems broken now, my first guess is that one of the following (or both) apply:
You are updating Comfy and "undoRedo.js" is placed within the folder again, conflicting with this extension.
I've updated this extension near the time the official undo was added, and I might have broken some previously working behavior.
I doubt I can address any of these soon, but, with respect to the potential problem number 2, I would need to better understand the problems you mentioned.
Can you elaborate more (ignoring the last point about the previews)?
From what extension is the Efficient node (or is that the name of the extension???), what is the current and the desired behavior, etc...
Depending on the nature of the problem:
I could try to bypass (or delete) the undoRedo.js when the extension is active. Preferably, bypass; deleting could lead to confusion when disabling this extension.
I would have to verify older versions of this extension to check when it stopped behaving correctly and why.
Best Regards
|
gharchive/issue
| 2023-12-11T08:50:05 |
2025-04-01T04:56:11.237340
|
{
"authors": [
"AlexanderDash",
"bmad4ever"
],
"repo": "bmad4ever/ComfyUI-Bmad-DirtyUndoRedo",
"url": "https://github.com/bmad4ever/ComfyUI-Bmad-DirtyUndoRedo/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
916327242
|
Fix data source values for mysqld-exporter
Description
As a title.
Fixed!
|
gharchive/pull-request
| 2021-06-09T15:16:29 |
2025-04-01T04:56:11.272860
|
{
"authors": [
"bmf-san"
],
"repo": "bmf-san/gobel-example",
"url": "https://github.com/bmf-san/gobel-example/pull/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1034602834
|
Add schemer version to schema
Adding the version number of schemer itself to the encoded schemas will allow even more compatibility as schemer evolves
I always thought adding a small frame to the schemer data makes sense, such as:
[uniqueID][schemerVersion][payload length][schemer payload][checksum]
that would only add several bytes, and would allow us to verify that some bytes actually represent schemer data
it is just one idea that maybe makes sense, as long as we are adding the version number anyway?
I think we should discuss further. IMO payload length and checksum are not strictly needed.
The notion of an unique ID is interesting, but I think it's a separate issue.
The JSON document also needs the schemer version number. Should follow semver (i.e. major.minor.revision)
I was just thinking some way to verify that an arbitrary slice of bytes was actually schemer encoded data
My thinking was that common programmer off-by-one errors or similar could crash the Schemer lib if incomplete data was passed to it. Maybe it doesn't matter. I was thinking of some way for schemer decode to verify that the passed-in data is OK.
A simple verification step [to make sure library users are passing correct byte slices by having a simple checksum for schemer data] at our level can help prevent/identify bugs for users of this library.
Right now, according to my understanding, the library will crash and/or return garbage if the passed-in schemer schema / encoded data has issues.
So we will shelve the idea of adding a data frame to the encoded schemer data. (The reason for this is to keep the data as small as possible.) However, our discussions have decided that since schemas themselves are only transmitted over the wire occasionally, it is OK to add extra data to them.
For a JSON encoded schema, we will need to modify the function DecodeSchemaJSON() and then MarshalJSON() in each datatype. The only change should be to add an additional field called: "schemerversion".
The only problem with this approach is that we support nested structures, so writing the version number in each MarshalJSON() would lead to redundancy. I think the way around this is to use a mutex.
The other type of schema we support is the binary format. For this, I think an easy way to implement this is to add a magic (64-bit) number to the beginning of the byte slice, which is followed by the Schema version number.
Like the JSON schemas, a similar use of a mutex can be used to ensure nested data types do not also add these bytes.
I'd just call the additional field version. The version number will only appear at the root level, so I don't think we need a mutex.
For the binary format, I'm good with adding some framing, magic number, checksum, etc.... although I'm slightly worried it would add too much additional complexity to the encoding/decoding code. Similarly, the magic number, framing, and checksum will only appear at the root level. It should not be nested.
Ya, I might just be thinking wrong. But for example, we could have a schema for an integer [by itself]. So if we did, it seems like we would have to have the version number.
But if we had a struct with an integer in it, how would the integer know not to include the version number?
It seems like we would need a mutex [i.e. global variable] to prevent redundancy.
In any case, I think I have enough to start work on this.
It doesn't seem like too big of a task, and we can double check the implementation to make sure it doesn't add to much complexity.
And if I am not thinking right about the struct fields, I will just leave that out.
The version number would not be stored in the Schema instance itself. In other words, there is no struct field for the schema version. It would only be produced when encoded. I think that the encoding function would have to know when it's encoding the root schema, so that it only produces the version number at the root level.
If, for example, you had a struct with an integer field, when the struct is encoded, the version number would only be added to the struct, not to the individual fields. To be honest, I'm not exactly sure how this would work. Perhaps, every type would encode the version, but then for nested types (i.e. object, array), I believe that the MarshalJSON function can unmarshal the field / element and remove its version field before re-marshalling.
For MarshalSchemer, if the first few bytes are reserved for the version number, these could be dropped before appending to the resulting byte slice.
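For illustration, a rough sketch of the root-only version field for JSON schemas might look like the following (hypothetical Go, not schemer's actual types or MarshalJSON implementation; the struct and helper names are assumptions):

```go
// Hypothetical sketch only: the schema structs and helper below are
// placeholders to illustrate the idea, not schemer's actual types.
package main

import (
	"encoding/json"
	"fmt"
)

const schemerVersion = "1.0.0"

type fieldSchema struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

type objectSchema struct {
	Type   string        `json:"type"`
	Fields []fieldSchema `json:"fields"`
}

// marshalRootSchema writes the version field once, at the root level only;
// nested field schemas are marshalled without it.
func marshalRootSchema(s objectSchema) ([]byte, error) {
	raw, err := json.Marshal(s) // nested types carry no version
	if err != nil {
		return nil, err
	}
	var m map[string]interface{}
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, err
	}
	m["version"] = schemerVersion // attach version at the root only
	return json.Marshal(m)
}

func main() {
	s := objectSchema{Type: "object", Fields: []fieldSchema{{Name: "count", Type: "int"}}}
	out, err := marshalRootSchema(s)
	fmt.Println(string(out), err)
}
```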
Thoughts?
I think we are on the same page re: the JSON. But actually I think it will be clearer once I start working with the code and get schemer back into the front of my brain!
Re: your point about MarshalSchemer, yes, I see what you mean, I think.
But for sure we both are in agreement that somehow we want to make sure the schema version number is only encoded once into a JSON/binary schema.
|
gharchive/issue
| 2021-10-25T02:15:20 |
2025-04-01T04:56:11.288753
|
{
"authors": [
"BenjaminPritchard",
"bminer"
],
"repo": "bminer/schemer",
"url": "https://github.com/bminer/schemer/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
103596242
|
Fix for problem with IERS workaround
I noticed a problem with the astropy/astroplan#71 approach: download_IERS_A wasn't working if you tried to use Time in the same session. The underlying problem is with how the table is applied at the end of download_IERS_A. This PR fixes that, but more importantly adds a test to catch this. Of course it requires downloading, so it uses the remote_data decorator.
Wow, I can't believe I botched that.
|
gharchive/pull-request
| 2015-08-27T21:13:15 |
2025-04-01T04:56:11.291882
|
{
"authors": [
"bmorris3",
"eteq"
],
"repo": "bmorris3/astroplan",
"url": "https://github.com/bmorris3/astroplan/pull/3",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
104149109
|
Have Tron download tools?
Would it be possible to have Tron download the external tools that it uses, rather than packaging them together with the bat files?
Frequently asked, see here
The closest pre-existing thing I can think of would be something like chocolatey.org, but I don't think all the tools are available there. Also, involving something like Chocolatey assumes that all machines support it (it requires PowerShell), and the same would apply to pretty much anything that isn't custom built. I think building something specifically for Tron would end up relying on either wget or curl, or possibly btsync, so those would be the minimum required to include.
In theory this is a neat goal for the project, but after a lot of thought I agree mostly with your FAQ note that the current setup is standalone and can be used offline. Having any kind of tools download would require you to maintain at least two versions of Tron: one that downloads as needed, and another that includes the tools and maybe downloads updates as needed (when a network connection is available, and if this behaviour is accepted by the user running it).
For now I'll close this issue, but I'll definitely help out if I think of something that might work for this.
Yeah, I like your thoughts on it. I kind of ended up at the standalone method after thinking for quite a while on it.
One possible option I thought of was to bundle the utilities like we're doing currently, then automatically try to grab the latest versions at launch based on the use of a flag (or use a flag to NOT try to get new versions). The problem I came to was back to the downloader code - I'd have to maintain an update script whenever the various URLs change. Of course, it could fall back to the individual local copy of whatever tool it couldn't find, but that's a decent-size undertaking to build.
If you want to build an update script that will handle failing back to the internal version of a tool and can put it together as a sub-script that Tron calls, I could test integrating it. It really would be ideal, especially for those times when I'm away for an extended period and can't push out a refresh release with the latest versions of everything.
I like the mentality of having the tools able to update and having the flag for it, and I agree that if you're unavailable this would keep things a little more current and reduce the work you have to do to keep things updated between releases. If I can come up with some time and an idea about how to make something like this possible I'll certainly bring it to you. I will say don't hold your breath though, because I don't consider myself nearly the level of guru that you are with putting together batch files (thus tron! ty!), but if there is something I can do and I find the time then I'll do what I can.
Sounds good
|
gharchive/issue
| 2015-08-31T22:51:04 |
2025-04-01T04:56:11.297073
|
{
"authors": [
"nemchik",
"vocatus"
],
"repo": "bmrf/tron",
"url": "https://github.com/bmrf/tron/issues/20",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
313162303
|
Problems about scalable offline map matching
Hello:
Recently, I have been trying to move this wonderful program onto a Spark cluster. I mainly changed your example code to load the road map from an existing file, for example xx.bfmap. So I changed this
"val matcher = sc.broadcast(new BroadcastMatcher(host, port, database, user, pass, config))"
into
"val matcher = sc.broadcast(new BroadcastMatcher(bfmappath))"
where bfmappath is the file path of xx.bfmap, and I changed the BroadcastMatcher object as follows:
val map: RoadMap = Loader.roadmap(bfmapPath,true).construct
For example “root/data/xx.bfmap” works correctly in local mode, which means I can get the road map object. However, when I use
spark-submit --master yarn-cluster --class com.bmwcarit.barefoot.taxi.PointMM test.jar
it seems it cannot find the xx.bfmap path and returns a null road map. I am considering putting xx.bfmap onto HDFS, but how should I pass the bfmappath parameter then? I am new to Spark and Hadoop, so please forgive me if I don't explain it clearly.
Looking forward to your reply.
Thanks.
Well, in fact, I wonder about
"as an alternative map data can be stored/loaded as serialized object via HDFS"
as you write in this section. How do I do this?
Since you have the file serialized locally, you can just hadoop fs -put it into HDFS. Add the HDFS URI as a parameter to your Spark job.
https://github.com/jongiddy/barefoot-hdfs-reader.git has a class that you can add to your Spark map-matching code to read the map file from HDFS using:
val map = RoadMap.Load(new HdfsMapReader(new URI(map_uri))).construct()
where map_uri is the HDFS URI for the serialized map.
Thank you for your HdfsMapReader. I managed to read the map file from HDFS, but I have another problem now. The output of the Spark job is different from what I get when I run in IntelliJ IDEA. The first picture is the log of the Spark job; the second is the log on my PC (Win10, IntelliJ IDEA).
As you can see, the Spark job is full of HMM breaks, while on my local PC it works well. Since the number and index of the roads are created the same, I think it is not caused by the bfmap. So what's the problem? Could you give me some advice?
Thanks a lot.
After you've read the road data, you should construct the map with map.construct() which kind of finalizes the read step and creates the spatial index and the routing graph.
Thanks for your reply. In fact, I did call map.construct() after reading the road data. In detail, the code I use for Spark is as follows (the Spark version I use is 1.6.2).
In the main class, I changed your example code on Spark like this:
val matcher = sc.broadcast(new BroadcastMatcher(bfmapPath)).value
// ... some code to read trace data into tracerdd
val matches = tracerdd.groupBy(x => x._1).map(x => {
  val trip = x._2.map({
    x => new MatcherSample(x._1, x._2, x._3, x._4)
  }).toList
  matcher.mmatch(trip, 10, 500)
})
In the BroadcastMatcher object, the code is like this:
object BroadcastMatcher {
  private var instance = null: Matcher

  private def initialize(bfmappath: String) {
    if (instance != null) return
    this.synchronized {
      if (instance == null) { // initialize map matcher once per Executor (JVM process/cluster node)
        val map = RoadMap.Load(new HdfsMapReader(new URI(bfmappath))).construct
        val router = new Dijkstra[Road, RoadPoint]()
        val cost = new TimePriority()
        val spatial = new Geography()
        instance = new Matcher(map, router, cost, spatial)
      }
    }
  }
}

@SerialVersionUID(1L)
class BroadcastMatcher(bfmappath: String) extends Serializable {
  def mmatch(samples: List[MatcherSample]): MatcherKState = {
    mmatch(samples, 0, 0)
  }

  def mmatch(samples: List[MatcherSample], minDistance: Double, minInterval: Int): MatcherKState = {
    BroadcastMatcher.initialize(bfmappath)
    BroadcastMatcher.instance.mmatch(new ArrayList[MatcherSample](samples.asJava), minDistance, minInterval)
  }
}
As you can see, I did call map.construct(), and it works well when I run locally in IntelliJ IDEA. But when I move to Spark (YARN cluster), it returns HMM breaks.
If you can think of other reasons I might be getting the HMM breaks, I'd really appreciate your thoughts. If there's further debug information I can enable to help find the problem, I'm happy to do that.
Thanks a lot
Sorry, I hadn't seen that map construction is actually shown in your log. Anyway, from remote it's difficult to see what could be the problem. With regard to the "IntelliJ" vs. "Spark" execution, do you use the same reading/parsing of your trace data? (Usually it's different, because in Spark you also read your data from HDFS.) Have you checked if there is some (typical) error, e.g. swapped lat/lon or cut-off decimals of floating-point values due to erroneous implicit number types (integer vs. float)?
Thank you. Finally I managed to run it on Spark. In fact, the "IntelliJ" and "Spark" executions are intrinsically the same judging from the results they generate. The only difference is in the logs, which, in fact, doesn't matter.
Thanks again. It is really nice of you to provide this wonderful program.
Hey,
can I ask you a question? I downloaded some of the Spark code, but I get some errors in IDEA. How can I solve this?
Thanks
|
gharchive/issue
| 2018-04-11T03:36:54 |
2025-04-01T04:56:11.312985
|
{
"authors": [
"XiaoXiCanNotFly",
"biallen",
"jongiddy",
"smattheis",
"wxlreis"
],
"repo": "bmwcarit/barefoot",
"url": "https://github.com/bmwcarit/barefoot/issues/104",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
56975559
|
Better nomenclature.
For the glory.
Nice, thanks!
:+1:
|
gharchive/pull-request
| 2015-02-09T00:14:00 |
2025-04-01T04:56:11.431229
|
{
"authors": [
"bnjbvr",
"jankeromnes"
],
"repo": "bnjbvr/kresus",
"url": "https://github.com/bnjbvr/kresus/pull/26",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|