| id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict) |
---|---|---|---|---|---|
1020072870
|
🛑 API is down
In 753be1d, API (https://api.staging.greyfinch.com/healthz) was down:
HTTP code: 503
Response time: 74 ms
Resolved: API is back up in 75afc22.
|
gharchive/issue
| 2021-10-07T13:53:00 |
2025-04-01T06:40:34.564577
|
{
"authors": [
"neil-buckley"
],
"repo": "teamgreyfinch/staging-status",
"url": "https://github.com/teamgreyfinch/staging-status/issues/513",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
333097854
|
Middleware should be able to take 0-arity functions as arguments
For instance:
defmodule MyApp.Endpoint do
  use Tesla
  plug(Tesla.Middleware.BaseUrl, &hostname/0)
  def hostname, do: MyApp.Config.api_hostname()
  ...
end
Yes, but with a Config module that pulls values dynamically from the environment, it will use the value from the first module evaluation.
Are you 100% sure? :)
Yes
That's not true. It takes the Elixir AST, not an evaluated value. Your function will be evaluated at runtime.
Example
defmodule Middleware do
  @behaviour Tesla.Middleware
  @impl true
  def call(env, next, opts) do
    IO.inspect(opts, label: "TEST")
    Tesla.run(env, next)
  end
end

defmodule Test do
  use Tesla
  plug Middleware, test()
  defp test do
    Process.get(:test)
  end
end
iex(1)> Test.get("http://example.com") |> elem(0)
TEST: nil
:ok
iex(2)> Process.put(:test, 2)
nil
iex(3)> Test.get("http://example.com") |> elem(0)
TEST: 2
:ok
iex(4)> Process.put(:test, :ok)
2
iex(5)> Test.get("http://example.com") |> elem(0)
TEST: :ok
:ok
Technically, the issue I'm running into is with the adapter macro (which more or less looks the same as the plug macro), using:
defmodule Repo do
  use Tesla
  adapter Tesla.Adapter.Hackney, ssl_options()
  def ssl_options do
    [ssl_options: [
      certfile: Config.api_ssl_certfile(),
      password: Config.api_ssl_password(),
      versions: [:"tlsv1.2"]]]
  end
end
@sitch I still don't know what exactly your issue is.
As my example shows and @amatalai's proved, you can specify a function call and it will be evaluated on every request.
Compile-time evaluation is also not what happens with the adapter macro:
defmodule Test do
  use Tesla
  adapter Tesla.Adapter.Hackney, test()
  defp test do
    IO.inspect(Process.get(:test), label: "RUNTIME")
    []
  end
end
iex(1)> Test.get("http://example.com") |> elem(0)
RUNTIME: nil
:ok
iex(2)> Process.put(:test, 1)
nil
iex(3)> Test.get("http://example.com") |> elem(0)
RUNTIME: 1
:ok
iex(4)> Process.put(:test, :ok)
1
iex(5)> Test.get("http://example.com") |> elem(0)
RUNTIME: :ok
:ok
Are you sure that this Config module is working properly?
The Config calls are simple Application.get_env/3 calls (see the sketch below).
I was migrating an already-working production implementation from HTTPoison to Tesla with Hackney, and this breaks non-async tests. I'll look into this more on my end, but after the examples you gave you should be able to close this ticket.
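For reference, a minimal sketch of what such a Config module looks like under that assumption (the :my_app key and option names here are hypothetical, not from the original report):
defmodule Config do
  # Looked up at runtime on every call; nothing is baked in at compile time.
  def api_hostname, do: Application.get_env(:my_app, :api_hostname)
  def api_ssl_certfile, do: Application.get_env(:my_app, :api_ssl_certfile)
end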
|
gharchive/issue
| 2018-06-17T22:24:34 |
2025-04-01T06:40:34.569970
|
{
"authors": [
"amatalai",
"sitch",
"teamon"
],
"repo": "teamon/tesla",
"url": "https://github.com/teamon/tesla/issues/213",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
560966106
|
Make Fuse return original error when circuit is closed
Problem
The Fuse middleware returns {:error, :unavailable} when the circuit is open (expected):
https://github.com/teamon/tesla/blob/master/lib/tesla/middleware/fuse.ex#L58
But it also does so when the call itself has failed, hiding the original error:
https://github.com/teamon/tesla/blob/master/lib/tesla/middleware/fuse.ex#L73
I think it would be interesting to return the original error, or maybe include a third element in the tuple containing the original error.
For me, it's important to distinguish when the circuit is open and the request never takes place from when the call has actually been performed and resulted in an error.
Timeout middleware integration
I'm asking for this after using the Timeout and Fuse middlewares together. For me, it makes sense to keep the order Fuse -> Timeout, so that if there is a timeout error, Fuse is aware of it when calculating its circuit status. But doing so, the original {:error, :timeout} is hidden by Fuse, which returns {:error, :unavailable}. And I'd like to know that the timeout was the original problem.
Another point I've realised is that if Tesla gets a 5xx error it returns {:ok, %Tesla.Env{status: 5xx}}, and it seems this is taken by Fuse as a successful response. So it's not considered to melt the fuse, that is, to open the circuit.
Is that right? If so, is it the expected behaviour for a circuit breaker?
And maybe even http status 429 (Too many requests) should be considered by Fuse?
Thanks!
It's impossible to have a one-size-fits-all answer for "to melt or not to melt".
For the same reasons the Retry middleware exposes a [should_retry function](https://github.com/teamon/tesla/blob/master/lib/tesla/middleware/retry.ex#L32), the Fuse middleware could do the same.
Yes! That's just what I was about to propose: having an optional configuration to let the client decide which HTTP statuses should be considered to melt the fuse.
If you agree, I'll submit an MR containing this proposal.
Not sure whether to do exactly the same as Retry or just pass a list of HTTP status codes 🤔
Let's do the same as in Retry for consistency (also, a list of HTTP status codes might not be enough).
Sure! It was just to make the clients simpler. But let's do it the same way as Retry for consistency, yes :)
BTW, one thing is when the fuse should be melted according to the response (solved by doing the same as Retry). A different one is what we should return when that happens. That's the original question of this issue.
For me, it makes sense to return the original error even if it has been selected to melt, and to only return the specific circuit breaker error when the circuit is open.
Having yet another function as a param to decide when to return the original response would be too much.
What do you think?
To clarify, this existing test would be:
test "unavailable endpoint" do
assert {:error, :econnrefused} = Client.get("/unavailable")
assert_receive :request_made
assert {:error, :econnrefused} = Client.get("/unavailable")
assert_receive :request_made
assert {:error, :econnrefused} = Client.get("/unavailable")
assert_receive :request_made
assert {:error, :unavailable} = Client.get("/unavailable")
refute_receive :request_made
assert {:error, :unavailable} = Client.get("/unavailable")
refute_receive :request_made
end
The only drawback I see here is that it wouldn't be backward compatible, would it?
Unless we add a parameter to decide whether to keep the original response when the circuit is closed, keeping the current behaviour as the default.
IMHO, returning the original error is the right behaviour: I want my response unless you don't perform the request because the circuit is open.
I don't see this as a huge backwards compatibility issue - let's go with this change.
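A rough sketch of what the proposed Retry-style option could look like on the client side (the :should_melt name and exact semantics are assumptions here, not the final API):
defmodule MyApp.Client do
  use Tesla

  plug Tesla.Middleware.Fuse,
    # Melt on transport errors and on selected HTTP statuses,
    # mirroring how Retry's :should_retry function decides.
    should_melt: fn
      {:error, _reason} -> true
      {:ok, %Tesla.Env{status: status}} when status in [429, 500, 503] -> true
      {:ok, _env} -> false
    end
end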
|
gharchive/issue
| 2020-02-06T11:58:37 |
2025-04-01T06:40:34.580296
|
{
"authors": [
"asniaire",
"teamon"
],
"repo": "teamon/tesla",
"url": "https://github.com/teamon/tesla/issues/351",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
53997565
|
'Timed out waiting for response' since Rails 4.2.0
While all my tests pass with Rails 4.1.9, upgrading (only) Rails to 4.2.0 makes all my tests fail with the following error:
Timed out waiting for response to {"name":"visit","args":["http://127.0.0.1:33290/"]}. It's possible that this happened because something took a very long time (for example a page load was slow). If so, setting the Poltergeist :timeout option to a higher value will help (see the docs for details). If increasing the timeout does not help, this is probably a bug in Poltergeist - please report it to the issue tracker. (Capybara::Poltergeist::TimeoutError)
I have tried to analyse which processes are running or the open ports in both cases but I can't figure out why it's not working.
Is there any way how to reproduce this?
I guess a fresh Rails application should be enough, as I encounter this issue just by upgrading Rails.
Check it's not asset compilation taking ages: try doing a
RAILS_ENV=test rake assets:precompile
before running your tests. Worked for us.
@chuckd it was a clever trick but it doesn't fix my issue. I have also tried removing the public/assets folder in order to check whether it is recreated while running my tests, but it isn't.
I hardly think Poltergeist is related.
I'm trying to reproduce this with fresh rails applications (Version 4.1.9 and 4.2.0).
You may be running into a concurrency issue - rails 4.2 changed to not allowing concurrency in the default test environment setup (by injecting Rack::Lock into the middleware stack https://github.com/rails/rails/commit/112077c255879351edf4530791cc4bcc7bd4005b ), you can set config.allow_concurrency = true in your test.rb to get back to the previous behavior although there are potential constant loading issues.
Thank you @twalpole for the suggestion but it's not working :(
@zedtux what about barebone application? Does it work?
@route I tried for half an hour to build fresh Rails 4.1.9 and 4.2.0 applications and wasn't able to reproduce the issue. It obviously needs more time in order to integrate more and more of the tools I'm using... like Devise first.
I should try to find some time to keep extending this from-scratch project in order to reproduce the bug.
@chuckd you may be right. I have the capybara-screenshot gem installed, and when it fails with the timeout error and I open the screenshot, I see the home page but with a lot of missing CSS.
As I'm using Docker (and Fig), the command should be integrated into my Dockerfile, so I'm trying this...
Adding the following to the env.rb file makes it work:
Capybara.register_driver :poltergeist do |app|
  Capybara::Poltergeist::Driver.new(app, timeout: 10000)
end
It takes quite a long time on the first visit call and then works. I have tried calling rake assets:precompile in my Dockerfile but it doesn't help: the assets are generated but it still takes a long time to start.
So based on my previous comment, I guess we should close this issue and I should open a new one in the Rails project, right?
(To try to determine what has changed that has increased the boot time.)
Sounds correct to me. Also try to run the application in the test env in order to debug, or maybe play with the environments/test.rb settings, because we have a 4.2 application and it just works.
I did. I've tried the solution from @twalpole, I tried disabling the asset compilation, and so on, but nothing worked.
I have also copy/pasted the environments/test.rb file from a fresh Rails 4.2 application.
Do you have an example of a Rails 4.2 environments/test.rb file?
Well, adding the following to config/environments/test.rb makes it stop waiting:
config.assets.compile = false
But then all scenarios fail with:
No route matches [GET] "/assets/application.css" (ActionController::RoutingError)
So it's really about the assets...
My conf is standard. What if you precompile assets and set compile = false?
What if you precompile assets
As I'm using Docker (with Fig) I need to figure out how to do it. I will keep you informed.
@zedtux too many abstraction layers nowadays :)
@route it's fine for me... RVM or Docker is the same : )
|
gharchive/issue
| 2015-01-11T17:12:55 |
2025-04-01T06:40:34.592184
|
{
"authors": [
"chuckd",
"route",
"simi",
"twalpole",
"zedtux"
],
"repo": "teampoltergeist/poltergeist",
"url": "https://github.com/teampoltergeist/poltergeist/issues/571",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
181914482
|
Convert README badge to SVG
Because raster badges make eyes bleed :sunglasses:
Thanks
|
gharchive/pull-request
| 2016-10-09T21:54:50 |
2025-04-01T06:40:34.593408
|
{
"authors": [
"rmm5t",
"twalpole"
],
"repo": "teampoltergeist/poltergeist",
"url": "https://github.com/teampoltergeist/poltergeist/pull/816",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1493573580
|
API has been offline since yesterday evening: "ERR_TUNNEL_CONNECTION_FAILED"
@lukvxx The API has been shut down due to lack of funds. I've created an alternative that runs on free hosting here: https://github.com/nfelger/gesetze-aus-dem-internet (hosted at https://gadi.netlify.app/)
@nfelger Sad to hear. I will look into hosting my own service. Thank you :)
|
gharchive/issue
| 2022-12-13T07:26:40 |
2025-04-01T06:40:34.602007
|
{
"authors": [
"lukvxx",
"nfelger"
],
"repo": "tech4germany/rechtsinfo_api",
"url": "https://github.com/tech4germany/rechtsinfo_api/issues/9",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2027705674
|
Bug while creating new transaction from customer to credit note
[x] creating a new credit note from a customer: error (reason id is required)
ISSUE FIXED!
|
gharchive/issue
| 2023-12-06T05:43:29 |
2025-04-01T06:40:34.623519
|
{
"authors": [
"AnjaliVellookkaran"
],
"repo": "techgebra/polosys-books",
"url": "https://github.com/techgebra/polosys-books/issues/373",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
203034634
|
Copy update to Thank you page + add Fb buttons
sub-copy update:
Thank you for making calls for change. You have made a difference and your representatives will take notice. Now, spread the word!
[fb buttons]
https://github.com/technologists-for-progress/callsforchange/pull/82
|
gharchive/issue
| 2017-01-25T07:53:00 |
2025-04-01T06:40:34.644953
|
{
"authors": [
"daybowbow",
"jontg"
],
"repo": "technologists-for-progress/callsforchange",
"url": "https://github.com/technologists-for-progress/callsforchange/issues/81",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
725770748
|
Request for 'circledsteps.sty' package
The rather new package circledsteps.sty (https://ctan.org/pkg/circledsteps) is missing in the bundle.
It would be nice if this could be added to the next version.
Thanks for asking! I'm currently working on updating to TeXLive 2020.0, so I'll see if I can add it as part of that. (Part of that update is/was the creation of this new repo for managing the bundle creation.)
|
gharchive/issue
| 2020-10-16T20:51:06 |
2025-04-01T06:40:34.716925
|
{
"authors": [
"pkgw",
"valkum"
],
"repo": "tectonic-typesetting/tectonic-texlive-bundles",
"url": "https://github.com/tectonic-typesetting/tectonic-texlive-bundles/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
278597274
|
default location overrides location specified via drag and drop
Hi,
First of all, a very nice and simple extension, but I believe I've found a little bug.
When I drag a tab into the bookmarks sidebar and drop it on some folder "x", the bookmark always appears in the specified default bookmark location "y".
Drag and drop works as expected when "Override default location" setting is turned off.
FF 57.0 linux
Hello! Thank you for reporting this problem! And sorry I took a while to respond, I have been busy with life recently. 😃
You are not the first one to experience this issue, and it actually also happens when using the "Bookmarks All Tabs..." feature. In both cases, this behavior was due to the way I was handling the newly created bookmarks.
I have fixed this problem in a recent commit (ec106f5). I am planning to publish a new version on AMO (probably this week-end) that will include this fix. I will post here again when I do so!
Again, thank you for your help! 👍
I have just published the version 2.1.0 on AMO that should fix this problem. Please let me know if you are still experiencing issues after the update to the new version!
I am currently able to reproduce this bug in Firefox 71.0 running Default Bookmark Folder 2.10.1
Thanks for reporting this @SebiderSushi!
I was able to reproduce the issue, but only with the "Bookmarks Menu" or the "Other Bookmarks" folder. The other folders were behaving as expected.
After more digging, I have created a dedicated issue #136 regarding this problem. Unfortunately, this is not something I am able to fix for now (please refer to the issue for all the details).
While investigating this, I have also found another similar bug with manually created bookmarks via the bookmark sidebar or library context menu. They were being moved to the default bookmark location configured in the add-on settings. This has been solved by the recent commit cb1e8f7 and will be released with the next version of the add-on (which should hopefully be available next week).
If you are experiencing issues with the drag and drop on the bookmark sidebar with other folders than "Bookmarks Menu" or "Other Bookmarks", please feel free to open a new issue. 🙂
Thank you for your time and your explanations!
|
gharchive/issue
| 2017-12-01T21:09:43 |
2025-04-01T06:40:34.733051
|
{
"authors": [
"SebiderSushi",
"teddy-gustiaux",
"tmalch"
],
"repo": "teddy-gustiaux/default-bookmark-folder",
"url": "https://github.com/teddy-gustiaux/default-bookmark-folder/issues/5",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2224186542
|
Current and future state of the project
It has been some time since I shared news about this project, and since the last release. This post intends to provide you with a quick update on both.
TL;DR:
The project is not abandoned. A new major version 4.0.0 is coming soon, with a number of fixes and improvements. :rocket:
Development efforts on the project will continue (albeit at a pace that is compatible with my available time as the sole maintainer), and the priority will be given to bug fixes and quality-of-life improvements.
The last release of the add-on was back in 2021, on the heels of the internal bookmarking changes that happened back then in Firefox (see issue 399 for more details). This project is something I maintain in my spare time, and since that last release, it was difficult to find time to work on it. This was not made any easier by the fact that trying to resolve the issues introduced with these changes would be quite an involved task.
Thankfully, I eventually reached a solution that I considered reliable enough to bring to a public release. While the new system is not perfect, and never will be, a lot of work has been put into rewriting the internal add-on logic to try and address issues introduced with the 3.x branch of the add-on (themselves related to the aforementioned Firefox internal bookmarking changes).
I consider the project to be mostly feature-complete at this point. While I may consider new features, my priority will be on bug fixes and quality-of-life improvements. I would also like to offer some of the capabilities of the add-on via their own standalone extensions, as not everybody is interested in all of the features (I started with quick-bookmarking last year). All of this will be happening at a pace that is compatible with my available time as the sole maintainer, hopefully with more regular releases.
Finally, I hope you will find this new version to be a positive improvement. Thank you for your continued interest and support! I will be updating this when the 4.0.0 version is submitted for review to Mozilla and published.
Update: version 4.0.0 has been submitted to Mozilla for review.
Update: version 4.0.0 was reviewed and approved by Mozilla, and has been published on AMO. The update should gradually happen automatically for users, or you can force it manually.
The drag-and-drop bug still requires me to remain on version 2.13.0, the last version that does not contain it. Apparently it's been decided that people who drag and drop into folders to organize their bookmarks are space aliens, and there's a limit to how much I can complain about something free, so I guess I'll just hope someone eventually forks the working version and backports any important updates.
|
gharchive/issue
| 2024-04-04T00:49:08 |
2025-04-01T06:40:34.739969
|
{
"authors": [
"nuenuewei",
"teddy-gustiaux"
],
"repo": "teddy-gustiaux/default-bookmark-folder",
"url": "https://github.com/teddy-gustiaux/default-bookmark-folder/issues/553",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1425522545
|
Remote plan with an invalid URL gives a traceback
Reproducible easily:
plan:
  import:
    url: https://some.weird.url/
Exploring the tree with tmt gives:
Traceback (most recent call last):
File "/home/psss/.virtualenvs/tmt/bin/tmt", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "/home/psss/git/tmt/bin/tmt", line 20, in <module>
tmt.cli.main()
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/core.py", line 1053, in main
rv = self.invoke(ctx)
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/core.py", line 1637, in invoke
super().invoke(ctx)
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "/home/psss/.virtualenvs/tmt/lib/python3.10/site-packages/click/decorators.py", line 26, in new_func
return f(get_current_context(), *args, **kwargs)
File "/home/psss/git/tmt/tmt/cli.py", line 148, in main
tmt.Plan.overview(tree)
File "/home/psss/git/tmt/tmt/base.py", line 1218, in overview
style(str(plan), fg='red') for plan in tree.plans()]
File "/home/psss/git/tmt/tmt/base.py", line 2120, in plans
return [plan.import_plan() or plan for plan in plans]
File "/home/psss/git/tmt/tmt/base.py", line 2120, in <listcomp>
return [plan.import_plan() or plan for plan in plans]
File "/home/psss/git/tmt/tmt/base.py", line 1616, in import_plan
node = fmf.Tree.node(
File "/home/psss/git/fmf/fmf/base.py", line 634, in node
tree = utils.fetch_tree(
File "/home/psss/git/fmf/fmf/utils.py", line 653, in fetch_tree
repository = fetch_repo(url, ref)
File "/home/psss/git/fmf/fmf/utils.py", line 774, in fetch_repo
raise FetchError("{0}".format(error), error)
fmf.utils.FetchError: Command 'git clone https://some.weird.url/ /home/psss/.cache/fmf/https:__some.weird.url_' returned non-zero exit status 128.
@adiosnb, interested to look into this?
Is this issue about gracefully skipping a plan with an invalid URL?
I'd say correctly terminating run with a nice error message.
|
gharchive/issue
| 2022-10-27T12:02:53 |
2025-04-01T06:40:34.759670
|
{
"authors": [
"adiosnb",
"lukaszachy",
"psss"
],
"repo": "teemtee/tmt",
"url": "https://github.com/teemtee/tmt/issues/1650",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
238438239
|
Unexpected event 'FlagsChanged' when using complex-modifications for vim mode
This is my config:
{
"global": {
"check_for_updates_on_startup": true,
"show_in_menu_bar": false,
"show_profile_name_in_menu_bar": false
},
"profiles": [
{
"complex_modifications": {
"rules": [
{
"manipulators": [
{
"description": "Change left_control + hjkl to arrow key",
"from": {
"key_code": "h",
"modifiers": {
"mandatory": [
"left_control"
],"optional":["any"]
}
},
"to": [
{
"key_code": "left_arrow"
}
],
"type": "basic"
},
{
"description": "Change left_control + hjkl to arrow key",
"from": {
"key_code": "j",
"modifiers": {
"mandatory": [
"left_control"
],"optional":["any"]
}
},
"to": [
{
"key_code": "down_arrow"
}
],
"type": "basic"
},
{
"description": "Change left_control + hjkl to arrow key",
"from": {
"key_code": "k",
"modifiers": {
"mandatory": [
"left_control"
],"optional":["any"]
}
},
"to": [
{
"key_code": "up_arrow"
}
],
"type": "basic"
},
{
"description": "Change left_control + hjkl to arrow key",
"from": {
"key_code": "l",
"modifiers": {
"mandatory": [
"left_control"
],"optional":["any"]
}
},
"to": [
{
"key_code": "right_arrow"
}
],
"type": "basic"
}
]
}
]
},
"devices": [
{
"disable_built_in_keyboard_if_exists": true,
"identifiers": {
"is_keyboard": true,
"is_pointing_device": false,
"product_id": 34050,
"vendor_id": 2652
},
"ignore": false
},
{
"disable_built_in_keyboard_if_exists": true,
"identifiers": {
"is_keyboard": true,
"is_pointing_device": false,
"product_id": 256,
"vendor_id": 2131
},
"ignore": false
}
],
"fn_function_keys": {
"f1": "display_brightness_decrement",
"f10": "mute",
"f11": "volume_decrement",
"f12": "volume_increment",
"f2": "display_brightness_increment",
"f3": "mission_control",
"f4": "launchpad",
"f5": "illumination_decrement",
"f6": "illumination_increment",
"f7": "rewind",
"f8": "play_or_pause",
"f9": "fastforward"
},
"name": "hyper + arrow",
"selected": true,
"simple_modifications": {
"caps_lock": "left_control"
},
"virtual_hid_keyboard": {
"caps_lock_delay_milliseconds": 0,
"keyboard_type": "ansi"
}
}
]
}
When I use CTRL-J/CTRL-K, things go fine. Event log:
eventType:FlagsChanged code:0x3b name:left_control flags:Ctrl misc:
eventType:FlagsChanged code:0x3b name:left_control flags: misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7d name:down_arrow flags:NumPad Fn misc:
eventType:KeyUp code:0x7d name:down_arrow flags:NumPad Fn misc:
But when I use CTRL-H/CTRL-L, things go differently:
eventType:FlagsChanged code:0x3b name:left_control flags:Ctrl misc:
eventType:FlagsChanged code:0x3b name:left_control flags: misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:FlagsChanged code:0x3b name:left_control flags:Ctrl misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyUp code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:FlagsChanged code:0x3b name:left_control flags: misc:
A FlagsChanged event unexpectedly happened and tagged the follow-up events with the 'Ctrl' modifier.
I don't know whether it's a config mistake or a bug in Karabiner. I need help...
It seems an unexpected 'Ctrl' flag change takes place after the 'H' key-down, which makes long-pressing the 'H' key carry an unexpected 'Ctrl' modifier.
Another problem:
When I type and hold CTRL-J or something similar to perform arrow actions, if I stop pressing CTRL, it continues to perform arrow actions instead of inputting 'J':
eventType:FlagsChanged code:0x3b name:left_control flags:Ctrl misc:
eventType:FlagsChanged code:0x3b name:left_control flags: misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:FlagsChanged code:0x3b name:left_control flags:Ctrl misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:Ctrl NumPad Fn misc:
eventType:FlagsChanged code:0x3b name:left_control flags: misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyDown code:0x7b name:left_arrow flags:NumPad Fn misc:
eventType:KeyUp code:0x7b name:left_arrow flags:NumPad Fn misc:
I think this makes vim mode absolutely unusable. I hope this gets fixed.
Ctrl + HJKL are default macOS shortcuts. Would it help if you used Fn + HJKL as arrow keys, and then made the letter S be recognized as Fn when pressed alone?
This way you can use S + HJKL as arrow keys. This is actually the vi mode of the old Karabiner.
Sweet, I was having the exact same problem and this seems to have fixed it for me.
Now that version 0.91.5 has shipped, I've implemented a Vi mode for KE that can be directly imported into the app, which works fine.
http://yige.ch/vi-mode-for-karabiner-elements/
|
gharchive/issue
| 2017-06-26T04:18:16 |
2025-04-01T06:40:34.781634
|
{
"authors": [
"apm1467",
"johnlindquist",
"kyangc"
],
"repo": "tekezo/Karabiner-Elements",
"url": "https://github.com/tekezo/Karabiner-Elements/issues/795",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
}
|
511420027
|
Add structural OpenAPI schema to Tekton CRDs
Kubernetes 1.15 introduced structural OpenAPI schemas, which help with validation, support kubectl explain, and make it easier to build tooling around CRDs.
A structural schema should be added for all CRDs introduced by Tekton.
This PR is related: https://github.com/tektoncd/pipeline/pull/1179
Is this something that can be easily generated using existing tooling? I'm a bit worried we'll end up with structs and YAML validation out of sync, and I'm also a bit uncomfortable with maintaining code to generate one from the other. Is there something OpenAPI provides for this?
/kind feature
/kind question
/remove-lifecycle rotten
/remove-lifecycle stale
/reopen
/reopen
/lifecycle frozen
It would be still nice to be able to tackle this.
knative does this now, one try is available here : https://github.com/tektoncd/pipeline/compare/main...vdemeester:schema-gen.
The limitation, as of today, is that it doesn't work with multi-version CRDs (such as ours); it panics.
I've started playing around with this, and narrowed down the panic to something thinking the package github.com/tektoncd/pipeline/pkg/apis/resource/v1alpha1 should have a kind Pipeline in it. I haven't yet figured out why the heck it's thinking that, though.
I'm pretty sure there's something weird about our CRDs and/or our packages resulting in this...
EDIT AGAIN: My description of the problem may not be right, but I just tested and verified that if I just put config/300-task.yaml in config/300-resources/ and remove the v1alpha1 version from it, hack/update-schemas.sh works, but if I instead remove v1beta1 from 300-task.yaml, it fails. So it's not the multiple versions of a particular CRD that's the problem, it's definitely something with there being multiple v1alpha1 packages, because I can create the same failure by copying config/300-run.yaml (which only has a v1alpha1 version) in instead. I will continue digging tomorrow.
Argh, I don't like that 😝… Seems like it's a bit too "tightly" coupled to the assumption that you put all your types in the same package. 😅
PR opened for that fix - https://github.com/kubernetes-sigs/controller-tools/pull/627 - I'm now trying to determine if we need to filter some fields out, like the knative-specific branch does. I think at a minimum we need to only take the Spec field of PersistentVolumeClaim (used in the VolumeClaimTemplate field of WorkspaceBinding), since we definitely don't need the TypeMeta, ObjectMeta, or Status of PersistentVolumeClaim in the schema. The other corev1 types that could possibly need filtering are corev1.Container and corev1.Volume, but we need many fields in corev1.Volume, and I think we're ok with everything in corev1.Container.
So yeah, if we do need that filtering, we'll need a change to schemapatch to do what the knative-specific branch does in terms of allowing you to specify "only use these fields (or exclude these fields) in the schema for this type". I'll get going on that tomorrow.
/assign @abayer
Well, https://github.com/kubernetes-sigs/controller-tools/pull/627#issuecomment-931355857 - our layout is awfully nonstandard, and the not-unreasonable response from the maintainer is asking if we can restructure things rather than make a change to schemapatch. I think we could solve this by moving everything in pkg/apis/resource/v1alpha1 and pkg/apis/run/v1alpha1 into pkg/apis/pipeline/v1alpha1 - no changes to CRDs needed, just rearranging the packages. That said, I'm not sure what other possible ramifications could come out of doing that.
Started playing around with what would be involved in moving pkg/apis/resource/v1alpha1 and pkg/apis/run/v1alpha1 and...well, so far, I don't think it's viable without mucking up our types pretty thoroughly.
They are in separate packages because of a "dependency loop". We may want to go ahead and reorganize that by duplicating some code and making v1alpha1 and v1beta1 completely independent of each other (code-wise), which could be beneficial for the future anyway 😇
I want to add that the OpenAPI schema is also really handy for users. I tend to use kubectl explain for CRDs to understand what fields are available and what they mean. For Tekton CRs the result is rather uninformative.
$ kubectl explain tasks
KIND: Task
VERSION: tekton.dev/v1beta1
DESCRIPTION:
<empty>
Compare the above to one that has a OpenAPI schema. You can drill down into the spec as far as you like.
$ kubectl explain podmonitors
KIND: PodMonitor
VERSION: monitoring.coreos.com/v1
DESCRIPTION:
PodMonitor defines monitoring for a set of pods.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object> -required-
Specification of desired Pod selection for target discovery by Prometheus.
if we're not going to get common recipes for Tekton, we really need the autocomplete in our editors
/unassign
OpenAPI schemas will also help with managing Tekton through Terraform - without them Terraform doesn't know how to ignore the release annotation in a taskrun which causes Terraform to constantly want to destroy and create the manifest.
Just want to second this. Managing Tekton resources with kubernetes_manifest in Terraform is a world of pain due to the lack of a schema. kubectl_manifest is a workaround, but it has its own problems.
Adding my voice here: today I was looking for this documentation and discovered that not only is the structural schema missing, but there's also no standard OpenAPI-generated schema documentation, only by-hand documentation which doesn't document (for example) status fields in a consistent way.
|
gharchive/issue
| 2019-10-23T15:53:23 |
2025-04-01T06:40:34.851775
|
{
"authors": [
"ImJasonH",
"abayer",
"doctorpangloss",
"evankanderson",
"jbg",
"ktarplee",
"siamaksade",
"vdemeester"
],
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/issues/1461",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2190709075
|
TaskRun Fails to Create Pod When ServiceAccount References Non-Existent Secret
Expected Behavior
When a TaskRun references a ServiceAccount that includes a reference to a non-existent secret, I expect the TaskRun to issue a warning and proceed with the creation of the Pod, similar to Kubernetes' behavior of handling missing imagePullSecrets within a Pod's service account configuration:
Log a warning if an ImagePullSecret does not exist
Actual Behavior
Currently, if a ServiceAccount is referenced by a TaskRun and contains a non-existent secret, the TaskRun fails to create the associated Pod and results in an error.
Steps to Reproduce the Problem
Execute the following script:
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: taskrun-sa
secrets:
- name: not-exist
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
name: test-sa-has-not-found-secret
spec:
serviceAccountName: taskrun-sa
taskSpec:
description: |
A simple task that prints the date.
steps:
- name: echo
image: bash:latest
script: |
#!/usr/bin/env bash
echo "Hello, world!"
status:
conditions:
- message: 'failed to create task run pod "test-sa-has-not-found-secret": translating
TaskSpec to Pod: secrets "not-exist" not found. Maybe invalid TaskSpec'
reason: PodCreationFailed
status: "False"
type: Succeeded
EOF
Additional Info
Kubernetes version:
Output of kubectl version:
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.8
Tekton Pipeline version:
Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'
Client version: 0.35.1
Pipeline version: v0.57.0
/assign @l-qing
@l-qing I understand the idea of keeping it "consistent" with Kubernetes, but I think it may also be valid to error out. If the user specified a secret and the secret is not present, then the "credential" handling (via service account) will fail or not work, and a warning (through events) is usually harder for users to find than having their TaskRun or PipelineRun fail…
I wonder what use case this would cover (there are probably valid ones).
cc @tektoncd/core-maintainers
In our environment, there are other controllers that synchronize secrets across multiple clusters and associate them with related service accounts. Sometimes it involves adding secrets, and sometimes it involves deleting them. I am not sure if the sequence of operations ensures that the secret exists before updating the service account, or if it removes the secret from the service account before deleting the secret.
However, the phenomenon I have encountered is that pipeline TaskRuns intermittently fail due to the absence of certain credentials on some service accounts. For plain Kubernetes Pods, this kind of problem is just a warning on the created Pod and does not lead to an actual failure. I am also not sure if this issue is caused by the Tekton cache client not updating in a timely manner.
|
gharchive/issue
| 2024-03-17T14:55:29 |
2025-04-01T06:40:34.859994
|
{
"authors": [
"l-qing",
"vdemeester"
],
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/issues/7760",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
505299850
|
Bump knative.dev/pkg to 528ad1c 🔥
Changes
This brings some changes to how the webhook is created, alongside import changes related to the injection packages.
One of the reasons to do this is to be able to update the k8s dependency in a cleaner way :angel: (otherwise it's going to be hell to review, I think :sweat_smile:)
Signed-off-by: Vincent Demeester vdemeest@redhat.com
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
[ ] Includes tests (if functionality changed/added)
[ ] Includes docs (if user facing)
[x] Commit messages follow commit message best practices
See the contribution guide for more details.
Double check this list of stuff that's easy to miss:
If you are adding a new binary/image to the cmd dir, please update
the release Task to build and release this image.
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.
/cc @afrittoli @abayer
Will remove the hold once 0.8 is released :wink:
/hold cancel
|
gharchive/pull-request
| 2019-10-10T14:13:00 |
2025-04-01T06:40:34.867643
|
{
"authors": [
"vdemeester"
],
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/pull/1412",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1933017724
|
fix: panic may occur when calculating the final task timeout waiting time
While I was troubleshooting issue #7185, I also found another problem: determining the finally task timeout waiting time may cause a panic.
If you set the PipelineRun finally timeout and also set the pipeline timeout to 0s, the program will panic.
Changes
This commit contains no functional changes; it avoids the panic caused by the above scenario.
Submitter Checklist
As the author of this PR, please check off the items in this checklist:
[x] Has Docs if any changes are user facing, including updates to minimum requirements e.g. Kubernetes version bumps
[x] Has Tests included if any functionality added or changed
[x] Follows the commit message standard
[x] Meets the Tekton contributor standards (including functionality, content, code)
[x] Has a kind label. You can add one by adding a comment on this PR that contains /kind <type>. Valid types are bug, cleanup, design, documentation, feature, flake, misc, question, tep
[ ] Release notes block below has been updated with any user facing changes (API changes, bug fixes, changes requiring upgrade notices or deprecation warnings). See some examples of good release notes.
[ ] Release notes contains the string "action required" if the change requires additional action from users switching to the new release
Release Notes
NONE
/kind bug
/kind bug
/remove bug
/assign @jerop
|
gharchive/pull-request
| 2023-10-09T13:00:41 |
2025-04-01T06:40:34.873664
|
{
"authors": [
"cugykw"
],
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/pull/7188",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
436405806
|
Fix typo in auth.md
Changes
Fixed typo in auth configuration example
Submitter Checklist
These are the criteria that every PR should meet, please check them off as you
review them:
[ ] Includes tests (if functionality changed/added)
[x] Includes docs (if user facing)
[ ] Commit messages follow commit message best practices
See the contribution guide
for more details.
Release Notes
release-note
/lgtm
/ok-to-test
@dtkachenko you're gonna need to sign the CLA :pray: :angel:
Any chance you can sign the CLA @dtkachenko ?
I'm going to close this out for now, feel free to reopen if you can sign the CLA!
|
gharchive/pull-request
| 2019-04-23T21:50:28 |
2025-04-01T06:40:34.879086
|
{
"authors": [
"chmouel",
"dlorenc",
"dtkachenko",
"vdemeester"
],
"repo": "tektoncd/pipeline",
"url": "https://github.com/tektoncd/pipeline/pull/788",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
440281370
|
Tracking Issue Inline Mode
[X] Initial Support (#119)
[ ] rest of answerInlineQuery fields (https://core.telegram.org/bots/api#answerinlinequery)
[ ] rest of InlineQueryResult variants (https://core.telegram.org/bots/api#inlinequeryresult)
[ ] rest of InlineQueryResultArticle fields (https://core.telegram.org/bots/api#inlinequeryresultarticle)
[ ] rest of InputMessageContent implementation (https://core.telegram.org/bots/api#inputmessagecontent)
I'll pick this up. Also looking into implementing ChosenInlineResult in the near future.
|
gharchive/issue
| 2019-05-04T03:53:51 |
2025-04-01T06:40:34.916852
|
{
"authors": [
"goweiwen",
"gugahoa"
],
"repo": "telegram-rs/telegram-bot",
"url": "https://github.com/telegram-rs/telegram-bot/issues/121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1265882444
|
No hints while extension is active
Just installed it and I don't see any hints (but keymaps like 'gg' seem to work).
It says that the extension is enabled.
Any ideas why I don't see any hints?
I'm on a Mac mini, macOS Monterey 12.3.1.
It seems they don't appear automatically. Solved :)
|
gharchive/issue
| 2022-06-09T09:52:29 |
2025-04-01T06:40:34.998760
|
{
"authors": [
"xbladesub"
],
"repo": "televator-apps/vimari",
"url": "https://github.com/televator-apps/vimari/issues/269",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2460163478
|
telink: b92: update ble lib
Update the 2m/3m/3.5m flash protect interface.
The following west manifest projects have been modified in this Pull Request:
| Name | Old Revision | New Revision | Diff |
|---|---|---|---|
| hal_telink | https://github.com/telink-semi/hal_telink/commit/34700f71d3c69326595484c631853ca61fabf0af (develop) | https://github.com/telink-semi/hal_telink/commit/445c736548c2cd0a0f2553942fcc569f93072b72 (develop_b92_prot) | telink-semi/hal_telink@34700f71..445c7365 |
Note: This message is automatically posted and updated by the Manifest GitHub Action.
OK, I have updated to the latest west.yml and am waiting for CI completion. Thank you!
|
gharchive/pull-request
| 2024-08-12T06:41:21 |
2025-04-01T06:40:35.002263
|
{
"authors": [
"andriy-bilynskyy",
"fengtai-telink"
],
"repo": "telink-semi/zephyr",
"url": "https://github.com/telink-semi/zephyr/pull/292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2264293591
|
docs: minor updates to readme
status badges;
document supported Ruby and GDAL versions.
@tindron @turboladen
May I ask to merge this one?
|
gharchive/pull-request
| 2024-04-25T18:58:16 |
2025-04-01T06:40:35.017867
|
{
"authors": [
"oleksii-leonov"
],
"repo": "telus-agcg/ffi-gdal",
"url": "https://github.com/telus-agcg/ffi-gdal/pull/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1860814666
|
🛑 Secret Site2 is down
In ddf94b6, Secret Site2 ($SECRET_SITE_2) was down:
HTTP code: 404
Response time: 31 ms
Resolved: Secret Site2 is back up in 7afc446 after 58 days, 42 minutes.
|
gharchive/issue
| 2023-08-22T07:43:27 |
2025-04-01T06:40:35.020762
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/1286",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1898560973
|
🛑 Secret Site2 is down
In 0c48f32, Secret Site2 ($SECRET_SITE_2) was down:
HTTP code: 404
Response time: 113 ms
Resolved: Secret Site2 is back up in 0fad8d9 after 50 minutes.
|
gharchive/issue
| 2023-09-15T14:39:18 |
2025-04-01T06:40:35.022895
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/1919",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1899654452
|
🛑 Secret Site2 is down
In cb4ebba, Secret Site2 ($SECRET_SITE_2) was down:
HTTP code: 404
Response time: 92 ms
Resolved: Secret Site2 is back up in 164eead after 10 minutes.
|
gharchive/issue
| 2023-09-17T04:40:08 |
2025-04-01T06:40:35.025054
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/1960",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2025257493
|
🛑 QRCODE VIEWER is down
In 0bd86f4, QRCODE VIEWER (https://qrcode-viewer.onrender.com/qrcode-viewer/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: QRCODE VIEWER is back up in da334f8 after 21 minutes.
|
gharchive/issue
| 2023-12-05T04:24:44 |
2025-04-01T06:40:35.027465
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/2780",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2353581172
|
🛑 MYSITE2 is down
In 04edeec, MYSITE2 ($SECRET_SITE_OBOERU) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MYSITE2 is back up in cfc299c after 18 minutes.
|
gharchive/issue
| 2024-06-14T15:09:30 |
2025-04-01T06:40:35.029811
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/3350",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2385448415
|
🛑 MYSITE2 is down
In 5a8bf91, MYSITE2 ($SECRET_SITE_OBOERU) was down:
HTTP code: 0
Response time: 0 ms
Resolved: MYSITE2 is back up in 692b92d after 9 minutes.
|
gharchive/issue
| 2024-07-02T07:21:33 |
2025-04-01T06:40:35.031916
|
{
"authors": [
"templain"
],
"repo": "templain/mywatcher",
"url": "https://github.com/templain/mywatcher/issues/3431",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2577870797
|
Versioning3 proto updates for data plane APIs
What changed?
Added "deployment_name" field to WorkerVersionStamp and WorkerVersionCapabilities so this info is sent in the poll requests and task completion commands coming from SDK.
Added versioning_behavior and default_versioning_behavior fields to RespondWorkflowTaskCompletedRequest so SDK can sent the workflow versioning annotation to server when a workflow task completes.
Added build_id, versioning_behavior, and deployment_name fields to WorkflowExecutionInfo (Mutable State) so server can use the info for routing tasks.
Why?
Needed for versioning-3 API.
Breaking changes
None.
Server PR
None.
This all looks good to me from a syntax/consistency POV, but will defer SDK approval to @antlai-temporal.
If at all possible, would like to recommend that at least most of the implementation be done server side before this PR is merged to ensure there isn't anything missing.
Thanks @cretz, this is being merged to a feature branch right now. We'll merge it to main once the server implementation is ready.
|
gharchive/pull-request
| 2024-10-10T07:12:27 |
2025-04-01T06:40:35.036062
|
{
"authors": [
"ShahabT",
"cretz"
],
"repo": "temporalio/api",
"url": "https://github.com/temporalio/api/pull/463",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
572272271
|
[Test] Speed up moving_average_test
Increase error: 1e-3 -> 5e-3
Reduce number of samples: 50000 -> 5000
Use a simpler optimizer: adam -> sgd
Reduce epochs: 10 -> 5
CPU Test time: 35s -> 8s
Related to #1143
Failure is unrelated. Thanks for the pull request!
|
gharchive/pull-request
| 2020-02-27T18:30:06 |
2025-04-01T06:40:35.113469
|
{
"authors": [
"Squadrick",
"gabrieldemarmiesse"
],
"repo": "tensorflow/addons",
"url": "https://github.com/tensorflow/addons/pull/1170",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1059343342
|
Download from http://mirror.tensorflow.org/github.com/tensorflow/runtime/archive/b570a1921c9e55ac53c8972bd2bfd37cd0eb510d.tar.gz failed:
bazel: 3.7.2
pip: 21.2.4
python: 3.9.9
Running setup.py install for tensorflow-gnn: started
Running setup.py install for tensorflow-gnn: finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f4zi6o4f/setup.py'"'"'; file='"'"'/tmp/pip-req-build-f4zi6o4f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-wlsd6gr1/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.9/tensorflow-gnn
cwd: /tmp/pip-req-build-f4zi6o4f/
Complete output (46 lines):
running install
running build
running bazel_build
Loading:
Loading: 0 packages loaded
WARNING: Download from http://mirror.tensorflow.org/github.com/tensorflow/runtime/archive/b570a1921c9e55ac53c8972bd2bfd37cd0eb510d.tar.gz failed: class com.google.devtools.build.lib.bazel.repository.downloader.UnrecoverableHttpException GET returned 404 Not Found
DEBUG: /root/.cache/bazel/_bazel_root/b4d19525ca49b6ef2846f4f7e84a4744/external/tf_runtime/third_party/cuda/dependencies.bzl:51:10: The following command will download NVIDIA proprietary software. By using the software you agree to comply with the terms of the license agreement that accompanies the software. If you do not agree to the terms of the license agreement, do not use the software.
Analyzing: target //package:move_generated_files (0 packages loaded, 0 targets configured)
ERROR: /tmp/pip-req-build-f4zi6o4f/package/BUILD:10:10: no such target '//tensorflow_gnn/sampler:sampling_spec_pb2.py': target 'sampling_spec_pb2.py' not declared in package 'tensorflow_gnn/sampler' defined by /tmp/pip-req-build-f4zi6o4f/tensorflow_gnn/sampler/BUILD and referenced by '//package:move_generated_files'
ERROR: Analysis of target '//package:move_generated_files' failed; build aborted: Analysis failed
INFO: Elapsed time: 0.114s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
ERROR: Build failed. Not running target
FAILED: Build did NOT complete successfully (0 packages loaded, 0 targets configured)
Traceback (most recent call last):
File "", line 1, in
File "/tmp/pip-req-build-f4zi6o4f/setup.py", line 152, in
setup(
File "/usr/local/lib/python3.9/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.9/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.9/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.9/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/usr/local/lib/python3.9/distutils/command/install.py", line 546, in run
self.run_command('build')
File "/usr/local/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.9/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.9/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.9/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-req-build-f4zi6o4f/setup.py", line 86, in run
subprocess.check_call(
File "/usr/local/lib/python3.9/subprocess.py", line 373, in check_calld)
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/bin/bazel', 'run', '-c', 'opt', '--experimental_repo_remote_exec', '//package:move_generated_files']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/local/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-f4zi6o4f/setup.py'"'"'; file='"'"'/tmp/pip-req-build-f4zi6o4f/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(file) if os.path.exists(file) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-wlsd6gr1/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.9/tensorflow-gnn Check the logs for full command output.
WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.
NoSuchKey: The specified key does not exist.
Encountering the same problem, running on Ubuntu.
See the comment on how to resolve this in #7. We're working on a fix soon.
This should now be fixed with the latest update!
|
gharchive/issue
| 2021-11-21T10:28:17 |
2025-04-01T06:40:35.132195
|
{
"authors": [
"littleforce163",
"nodiz",
"sibonli"
],
"repo": "tensorflow/gnn",
"url": "https://github.com/tensorflow/gnn/issues/12",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1636270704
|
TensorFlow 0.4.0 docs say it should be compatible with Java 8, but it is not
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04 x86_64): Windows x86_64, Windows 10
TensorFlow installed from (source or binary): binary (maven)
TensorFlow version: 2.7.0 (Tf Java 0.4.0)
Java version (i.e., the output of java -version): 1.8.0_271 (Java 8)
Installed from Maven Central?: NO, downloaded from maven link directly
CUDA/cuDNN version: No cuda
GPU model and memory: NO gpu
Describe the problem
I am trying to load a Java model using the TF Java API 0.4.0 for TF 2.7.0, but I encounter an error saying that it has been compiled for Java 11 and higher only (I am using Java 8).
Provide the exact sequence of commands / steps that you executed before running into the problem
The error happens when I am trying to load with reflection:
Class<?> c = engineClassloader.loadClass(className);
c.newInstance();
a class that uses the TF2 Java 0.4.0 dependencies.
The class that I want to load is: https://github.com/bioimage-io/tensorflow-2-java-interface/blob/main/src/main/java/io/bioimage/modelrunner/tensorflow/v2/api030/Tensorflow2Interface.java
And I am loading the classes with a URLClassLoader.
Any other info / logs
This is the error:
Exception in thread "main" java.lang.UnsupportedClassVersionError: org/tensorflow/ndarray/Shaped has been compiled by a more recent version of the Java Runtime (class file version 55.0), this version of the Java Runtime only recognizes class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:756)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
Solved: I was using the wrong NDArray version.
Sorry for the inconvenience!
|
gharchive/issue
| 2023-03-22T18:11:08 |
2025-04-01T06:40:35.157087
|
{
"authors": [
"carlosuc3m"
],
"repo": "tensorflow/java",
"url": "https://github.com/tensorflow/java/issues/490",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
960487751
|
Support Anaconda package
Push TF Similarity to Anaconda after the official announcement.
Moving this to 0.15
@owenvallis Hey, I am new here. Can you provide me some resources to solve this issue?
|
gharchive/issue
| 2021-08-04T13:52:17 |
2025-04-01T06:40:35.197419
|
{
"authors": [
"AKACHI1",
"ebursztein",
"owenvallis"
],
"repo": "tensorflow/similarity",
"url": "https://github.com/tensorflow/similarity/issues/113",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
429130457
|
Adding GlobalMaxPooling 1D, 2D, 3D
I've added the max operation to operators, since it wasn't defined earlier.
@dan-zheng I'm working on the test cases now; I will have a separate PR for them. There are a lot of layers which do not have test cases, and I will make a list of them in #77; it would be easier to track them that way.
@rxwei can this be merged?
Could you pull from master? I wanted to trigger a rebuild.
@rxwei the docker summary fails because it says the max function is not differentiable. Is there any way to correct that?
We could either add a VJP for the max op to the standard library, or just use the actual "max pooling" operation inside the global max pooling layers (set the pool size to the width & height of the input).
@tanmayb123 I was trying to do that when I initially added the max function to the batch normalization VJP; I just put it in the wrong place. I think we should rather put in the max function because it is actually used in multiple places; it might come in as a handy operation.
@tanmayb123 any idea about how I could go about adding the VJP for the max function?
@dan-zheng Could you guide me, maybe, as to how I can add a max VJP function to the operators file?
@rxwei could you tell me how I could go about defining a VJP for the max function?
I think the VJP is the way to go since we'd need a max function for hinge loss as well.
I'm sorry for the delayed response! Recently the team has been busy with releasing v0.3 and getting some important core pieces like RNNs into the library. I'll definitely help you with defining derivatives for min and max later this week!
In the meantime, it would be useful to explore our custom differentiation tutorial, and see how other derivatives are defined, e.g. mean(alongAxes:). Feel free to post any questions you have.
@rxwei do you mind triggering a build on this? Thanks to @eaplatanios for adding the max and min VJPs. With this, we are done with all the pooling layers.
@rxwei could you take a quick glance and trigger the build once more? This does pass locally now.
Also is there any way to keep updating the toolchain directly, instead of continuously downloading nightlies xD.
Also is there any way to keep updating the toolchain directly, instead of continuously downloading nightlies xD.
You can always build a toolchain from the tensorflow branch on apple/swift using the utils/build-toolchain-tensorflow script. However, it often takes quite a while.
@eaplatanios sorry to bother you with this, but could you tell me if the max function call I wrote for this is wrong? The error that pops up is that the function is not differentiable.
@Shashi456 Could you post the full error trace? I can clone and take a look in a bit.
[17/36] Compiling TensorFlow Pooling.swift
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:375:22: error: expression is not differentiable
return input.max(squeezingAxes: [1, 2])
^
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:375:22: note: cannot differentiate functions that have not been marked '@differentiable' and that are defined in other files
return input.max(squeezingAxes: [1, 2])
^
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:359:22: error: expression is not differentiable
return input.max(squeezingAxes: 1)
^
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:359:22: note: cannot differentiate functions that have not been marked '@differentiable' and that are defined in other files
return input.max(squeezingAxes: 1)
^
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:391:22: error: expression is not differentiable
return input.max(squeezingAxes: [1, 2, 3])
^
/swift-apis/Sources/TensorFlow/Layers/Pooling.swift:391:22: note: cannot differentiate functions that have not been marked '@differentiable' and that are defined in other files
return input.max(squeezingAxes: [1, 2, 3])
I see. The Tensor.max(squeezingAxes:) function is not currently differentiable when the axes are int arrays. I'll make a PR to fix that now.
Done in #159.
#159 has been merged. Could you do a pull?
@rxwei done. Pretty happy that we're finally done with pooling layers xD
@rxwei I'll take care of both the PRs, in a while. Sorry!
@rxwei built and tested with nightly. This PR passes as is. The build now takes substantially more time though somehow xD.
I'm not sure about the test since the Linux error you've mentioned popped up, but I rectified the error you mentioned about GlobalMaxPooling1D.
@eaplatanios could you trigger a build on this?
Thank you.
@rxwei could you trigger a build on this?
@rxwei The errors aren't from these additions. I think this can be merged.
@rxwei I think this is ready to be merged.
|
gharchive/pull-request
| 2019-04-04T07:47:37 |
2025-04-01T06:40:35.208494
|
{
"authors": [
"Shashi456",
"eaplatanios",
"rxwei",
"tanmayb123"
],
"repo": "tensorflow/swift-apis",
"url": "https://github.com/tensorflow/swift-apis/pull/76",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
377056265
|
t2t_datagen / translate_enzh_wmt32k - How long does it take? (attached log)
How long is this supposed to take? It's been 8 hours now...
Attached log.txt
How many GPUs are you using?
I am using 8 GPUs; it ended up also taking 10-13 hours.
How much memory did you use for t2t-datagen?
My memory is 68 GB, but it isn't enough for the data of translate_enzh_wmt32k.
|
gharchive/issue
| 2018-11-03T13:50:36 |
2025-04-01T06:40:35.211062
|
{
"authors": [
"Sc-TuDou",
"echan00",
"lowblung"
],
"repo": "tensorflow/tensor2tensor",
"url": "https://github.com/tensorflow/tensor2tensor/issues/1199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
490634738
|
Cannot load fashion_mnist dataset
Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub. tag:build_template
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10 Home 64bit
Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
TensorFlow installed from (source or binary): binary
TensorFlow version: 1.12
Python version: 3.6.6
Installed using virtualenv? pip? conda?: pip
Bazel version (if compiling from source):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: 9.0/7.5
GPU model and memory: GeForce 940M
Describe the problem
When I try to use the fashion_mnist dataset, it starts to read the data from the website, but after a while the website shuts down my connection because I am connecting too often (WinError 10054).
Provide the exact sequence of commands / steps that you executed before running into the problem
import tensorflow as tf
from tensorflow import keras
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels),(test_images, test_labels) = fashion_mnist.load_data()
Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
(it's loading...)
Traceback (most recent call last):
File "C:\Users\admin\Desktop\working as intended\python pls\love_copying_code_instead_of_writing_it_urself.py", line 6, in <module>
(train_images, train_labels),(test_images, test_labels) = fashion_mnist.load_data()
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\datasets\fashion_mnist.py", line 52, in load_data
paths.append(get_file(fname, origin=base + fname, cache_subdir=dirname))
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\utils\data_utils.py", line 249, in get_file
urlretrieve(origin, fpath, dl_progress)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\urllib\request.py", line 277, in urlretrieve
block = fp.read(bs)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\http\client.py", line 449, in read
n = self.readinto(b)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\http\client.py", line 493, in readinto
n = self.fp.readinto(b)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\ssl.py", line 1009, in recv_into
return self.read(nbytes, buffer)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\ssl.py", line 871, in read
return self._sslobj.read(len, buffer)
File "C:\Users\admin\AppData\Local\Programs\Python\Python36\lib\ssl.py", line 631, in read
v = self._sslobj.read(len, buffer)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host.
Was able to execute the given code without any error on Colab with TensorFlow 1.12.0. Please see the gist here. Thanks!
It's very weird because I can run the code in Colab but not on my own computer, so do I just turn to Colab from now on?
This looks like a proxy configuration issue where you are trying to connect from behind a corporate firewall.
The workaround can be:
You can download the data from this link:
https://www.kaggle.com/zalando-research/fashionmnist
Later you can use tf.keras.utils.get_file to load the dataset.
See:
https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file
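For instance, a minimal sketch of loading the downloaded training images that way (the origin URL below is a placeholder for any mirror you can reach; the 16-byte offset skips the IDX image-file header):
import gzip
import numpy as np
import tensorflow as tf
# Placeholder mirror URL - substitute a host reachable from your network.
path = tf.keras.utils.get_file(
    "train-images-idx3-ubyte.gz",
    origin="https://example.com/fashion-mnist/train-images-idx3-ubyte.gz")
with gzip.open(path, "rb") as f:
    # Skip the 16-byte IDX header, then reshape to (num_images, 28, 28).
    train_images = np.frombuffer(f.read(), np.uint8, offset=16).reshape(-1, 28, 28)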
Has this issue been solved?
No, when I run the code above with the newest version of TensorFlow, it gives a different error:
OSError: Not a gzipped file (b'\x00\x00')
|
gharchive/issue
| 2019-09-07T13:16:47 |
2025-04-01T06:40:35.252192
|
{
"authors": [
"gadagashwini",
"matdadi",
"ongoing0217",
"ymodak"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/32314",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
520563244
|
logits and labels must have the same first dimension
System information
MAC OS 10.14.6
Tensorflow 1.14
Working model
input_layer=Input(shape=(X.shape[1],))
model=Embedding(input_dim=len(vocab)+1,output_dim=32,input_length=X.shape[1])(input_layer)
model = Bidirectional(LSTM(units = 50, return_sequences=True, recurrent_dropout=0.2))(model)
output_layer= Dense(3, activation="softmax")(model)
model = Model(input_layer,output_layer)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["acc"])
model.summary()
Trying to create same model with tf-lite layers of bi-directional LSTM
import os
os.environ['TF_ENABLE_CONTROL_FLOW_V2'] = '1'
import tensorflow as tf
import numpy as np
from tensorflow.lite.experimental.examples.lstm.rnn import bidirectional_dynamic_rnn
def build_LSTM_layer(num_layers):
lstm_layers=[]
for i in range(num_layers):
lstm_layers.append(tf.lite.experimental.nn.TFLiteLSTMCell(num_units=50,name='rnn{}'.format(i),forget_bias=1.0))
final_lstm_layer=tf.keras.layers.StackedRNNCells(lstm_layers)
return final_lstm_layer
def build_bidirectional(inputs,num_layers,use_dynamic_rnn=True):
lstm_inputs=transposed_inp=tf.transpose(inputs,[1,0,2])
outputs,output_states=bidirectional_dynamic_rnn(build_LSTM_layer(num_layers),build_LSTM_layer(num_layers),lstm_inputs,dtype="float",time_major=True)
fw_lstm_output,bw_lstm_output=outputs
final_out=tf.concat([fw_lstm_output,bw_lstm_output],axis=2)
final_out=tf.unstack(final_out,axis=0)
resultant_out=final_out[-1]
return resultant_out
tf.reset_default_graph()
model_tf = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(X.shape[1],), name='input'),
tf.keras.layers.Embedding(input_dim=len(vocab)+1,output_dim=32,input_length=X.shape[1]),
tf.keras.layers.Lambda(build_bidirectional, arguments={'num_layers' : 2, 'use_dynamic_rnn': True}),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(3, activation=tf.nn.softmax, name='output')
])
model_tf.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
model_tf.summary()
Inputs are token sequences and outputs should be NER tags, which I get from the Keras model but not from the above model.
X.shape = (30, 16)
y.shape = (30, 16, 1)
I/P = array([[15., 10., 38., 4., 32., 57., 39., 0., 0., 0., 0., 0., 0., 0., 0., 0.],...])
O/P = array([[[1.],[1.],[1.],[1.],[2.],[1.],[1.],[0.],[0.],[0.],
[0.],[0.],[0.],[0.],[0.],[0.]],...])
Output logs
Epoch 1/10
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-503-89e5191da57e> in <module>
2 train_x,test_x,train_y,test_y = train_test_split(X,y,test_size=0.2)
3
----> 4 history = model_tf.fit(train_x,train_y,epochs=10,batch_size=3)
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
778 validation_steps=validation_steps,
779 validation_freq=validation_freq,
--> 780 steps_name='steps_per_epoch')
781
782 def evaluate(self,
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_arrays.py in model_iteration(model, inputs, targets, sample_weights, batch_size, epochs, verbose, callbacks, val_inputs, val_targets, val_sample_weights, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq, mode, validation_in_fit, prepared_feed_values_from_dataset, steps_name, **kwargs)
361
362 # Get outputs.
--> 363 batch_outs = f(ins_batch)
364 if not isinstance(batch_outs, list):
365 batch_outs = [batch_outs]
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
3290
3291 fetched = self._callable_fn(*array_vals,
-> 3292 run_metadata=self.run_metadata)
3293 self._call_fetch_callbacks(fetched[-len(self._fetches):])
3294 output_structure = nest.pack_sequence_as(
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1456 ret = tf_session.TF_SessionRunCallable(self._session._session,
1457 self._handle, args,
-> 1458 run_metadata_ptr)
1459 if run_metadata:
1460 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
InvalidArgumentError: logits and labels must have the same first dimension, got logits shape [3,3] and labels shape [48]
[[{{node loss/output_loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]
Hi @gadagashwini ,
I tried your two codes, and I found out that there is no error under tensorflow==1.14.0. Could you give me more information?
Hi There,
We are checking to see if you still need help on this, as you are using an older version of TensorFlow which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help.
This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.
|
gharchive/issue
| 2019-11-10T07:00:14 |
2025-04-01T06:40:35.258092
|
{
"authors": [
"BhavinSuthar25",
"jaeyoo",
"tensorflowbutler"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/34133",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
619825343
|
TimeDistributed(Dropout()) with the same dropout mask
System information
TensorFlow version (you are using): 1.14
Are you willing to contribute it (Yes/No): Yes
Describe the feature and the current behavior/state.
Here is an example block of my code. I am trying to apply a time-distributed dropout to the output of a many-to-many GRU, and I would like the dropout to use the same dropout mask for all time steps. However, I did not find a solution for this purpose based on the current API. Did I miss anything, or is it a new feature on the roadmap? Thanks a lot!
from tensorflow.keras.layers import Dense, Input, GRU, Dropout, TimeDistributed
from tensorflow.keras.regularizers import l2  # needed for the l2(...) calls below
x= TimeDistributed(Dense(512, activation='relu', kernel_regularizer=l2(1e-5), \
bias_regularizer=l2(1e-5), name='cam_fc'))(input_tensor)
out = GRU(
512,
dropout=0.1,
recurrent_dropout=0.1,
activation='relu',
kernel_regularizer=l2(1e-5),
bias_regularizer=l2(1e-5),
return_sequences=True,
name='intentNet_gru')(x, training=self.is_train)
out = TimeDistributed(Dropout(0.1))(out, training=self.is_train)
I think what TimeDistributed does is slice the inputs along the timestep axis and feed each timestep one by one to the wrapped layer. Note that when feeding multiple inputs to a dropout layer, each of them will get a different dropout mask, which is aligned with the current behavior.
If you want to apply the same mask across time steps, you can just use one dropout layer with the noise_shape param specified. See https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout for more details.
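For instance, a minimal sketch (the timestep axis of noise_shape is set to 1 so one mask per sample is broadcast across all time steps; the shapes here are placeholders, not your model's):
import tensorflow as tf
inputs = tf.keras.Input(shape=(16, 512))  # (batch, timesteps, features)
# Mask of shape (batch, 1, features): sampled once, reused for every timestep.
outputs = tf.keras.layers.Dropout(0.1, noise_shape=(None, 1, 512))(inputs)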
|
gharchive/issue
| 2020-05-17T23:49:28 |
2025-04-01T06:40:35.262092
|
{
"authors": [
"bzhong2",
"qlzh727"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/39631",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
172662436
|
Clang host compiler support?
Hi all,
I couldn't help but notice that currently TensorFlow is tightly coupled with GCC. Is there a plan to make it more flexible in the choice of host compiler, like Clang?
Is there any effort or plans for that?
Thanks,
Luke
This is a general question best suited to StackOverflow. Could you please re-ask there?
Actually this question is probably not suited to StackOverflow as written...
However, ./configure does allow you to specify the choice of the host compiler, and on Macs it uses clang anyway, so I suspect it already works.
|
gharchive/issue
| 2016-08-23T10:18:27 |
2025-04-01T06:40:35.264088
|
{
"authors": [
"lukeiwanski",
"prb12",
"vrv"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/3981",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
742104503
|
How to pass the seq length in the LSTMs in tfLite
Is there an option to pass sequence length to LSTMs in TFLite?
Does the batch size option suffice for sequence length?
Is sequence length required during training?
Can we pass variable sequence lengths during inference? If we can, how can we do that?
@pranathibl,
Could you please provide a minimal code sample of the use case you are trying to implement? Thanks!
I am trying to support variable sequences in TFLite.
This is the script I created.
I am getting issues while trying variable sequences.
lstminput_var_sequencemode.txt
Currently we don't support seq_len; by default the LSTM will consume all your sequences.
Then why is it working in this particular case?
When I created a model with just an LSTM layer, it works.
Could you explain the difference, and why it is not working in the above case but works when I used just an LSTM layer?
lstminput_var_sequencemode_layer.txt
Our current kernel does not support seq_len yet (we will process all sequences).
Also, since you're using return_sequences=True, you're consuming all the sequences (which causes dynamic shapes, and it's hard for the following dense layer to figure out the dims).
If you disable return_sequences it should be fine.
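For instance, a minimal sketch of that suggestion (the layer sizes and input shape are placeholders, not the values from the attached script):
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(28, 28)),
    tf.keras.layers.LSTM(32),  # return_sequences defaults to False: last state only
    tf.keras.layers.Dense(10, activation="softmax"),
])
# The final Dense layer now sees a static (batch, 32) shape, so conversion works.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()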
Is there anything to check for support? Which cases will work and which will not?
Will you add support in the future?
We will support this in the future, but it's unlikely to happen in the near term.
@pranathibl
Could you please let us know if this is still an issue in the latest stable TF v2.6.0? Thank you!
|
gharchive/issue
| 2020-11-13T03:46:05 |
2025-04-01T06:40:35.269740
|
{
"authors": [
"Saduf2019",
"amahendrakar",
"pranathibl",
"renjie-liu"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/44819",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1690730488
|
error message of tf.eye is inconsistent with doc
Issue Type
Bug
Have you reproduced the bug with TF nightly?
Yes
Source
source
Tensorflow Version
tf 2.12.0
Custom Code
Yes
OS Platform and Distribution
win11
Mobile device
No response
Python version
No response
Bazel version
No response
GCC/Compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current Behaviour?
According to the doc, the param num_rows should be a non-negative int32 scalar Tensor. But snippet code 1 below indicates that the param num_rows cannot be zero, which is inconsistent with the doc. On the other hand, the param num_rows shouldn't be a bool Tensor, but when given a bool value, tf.eye works, as snippet code 2 below shows.
Standalone code to reproduce the issue
snippet code 1:
import tensorflow as tf
results={}
try:
num_rows = "1"
results["res"] = tf.eye(num_rows=num_rows)
except Exception as e:
results["err"] = "Error:"+str(e)
print(results)
# results = Error:Arguments `num_rows` and `num_columns` must be positive integer values. Received: num_rows=1, num_columns=1
snippet code 2:
import tensorflow as tf
results={}
try:
num_rows = True
results["res"] = tf.eye(num_rows=num_rows,)
except Exception as e:
results["err"] = "Error:"+str(e)
print(results)
# results = {'res': <tf.Tensor: shape=(1, 1), dtype=float32, numpy=array([[1.]], dtype=float32)>}
Relevant log output
No response
Hi @cheyennee ,
Thanks for reporting. I need to cross-check the implementation; I will update you and make any necessary changes. Thanks!
Hi @cheyennee ,
For the code snippet 1:
Since you are passing num_rows = "1" as a string, it is raising the error as intended, and in the error description it is printing num_rows=1, where this 1 is a string, not a number. By default, if you don't provide any value for num_columns, then num_columns=num_rows as per the API; hence you are getting the same value for both num_rows and num_columns.
For example if I pass num_rows = "anything" then the error description will be like below:
TypeError: Arguments num_rows and num_columns must be positive integer values. Received: num_rows=anything, num_columns=anything
I hope this will clarify your query and there is no need to change any description here.
For the code snippet 2:
If you pass num_rows = True, True will be converted to 1, hence num_rows=1 and num_columns=1, and the output will be a (1,1)-shaped tensor.
Hope this clarifies your queries. Thanks!
@SuryanarayanaY I see many other APIs that don't convert type internally. So I am a little confused, which APIs will automatically convert type internally, and which ones will not?
Hi @cheyennee ,
I hope I have answered your query for code snippet 1, right?
I am assuming your question is about code snippet 2, where True is converted to '1'. Correct me if I am wrong.
The tf.eye API calls tf.ones internally, and this is where booleans are converted into integers. Please refer to the source code and the gist for an explanation.
I think in both APIs we need to change the description of the shape argument to say that it also accepts booleans and converts them into the integers 1 or 0.
I think this documentation change will suffice for the purpose of this issue, right? Please confirm.
@SuryanarayanaY Yeah, you're right. I think the documentation should be changed since the argument shape can accept both booleans and integers.
You didn't pass a boolean tensor, you passed a Python boolean, which automatically gets converted to an integer when you do math on it. The function fails if you do actually try to pass a tensor of type tf.bool. It's not useful to document that tf.ones(True) happens to work - it has no valid semantic meaning.
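A quick illustration of that distinction (the second call is commented out because, as noted above, it fails):
import tensorflow as tf
print(tf.eye(True))          # Python bool: arithmetic coerces True to the integer 1
# tf.eye(tf.constant(True))  # tf.bool tensor: raises, there is no implicit cast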
|
gharchive/issue
| 2023-05-01T12:03:21 |
2025-04-01T06:40:35.282283
|
{
"authors": [
"SuryanarayanaY",
"cantonios",
"cheyennee"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/60457",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
196633923
|
Saver can't handle filename only
Hey everyone,
it seems to me like - at least on Windows - the tf saver can't save model files whose path consists only of the file's name with no parent path, relative nor absolute. The issue lies at or around saver.py:1363 where it tries to check whether the parent directory of the file is actually a directory. There is no parent directory given (i.e. an empty string) and as such gFile.IsDirectory can't check anything. It fails and raises a ValueError that the parent directory does not exist.
Expected behavior in my opinion would be that the current working directory is used as the parent path when using no path/just a filename (i.e. a relative path).
Some details about my system specs:
Windows 10
Python 3.5.2
TF 0.12.0 (in a virtual environment; pip install tensorflow --upgrade just executed; issue persists)
So, I'm wondering whether this is an expected behavior and how to deal with it or if it is an actual bug that needs to be addressed.
This may be a known limitation. I think most people are using fully qualified paths in batch systems to avoid current-directory surprises. @mrry, do you have any insight? @concretevitamin, do you have any insight?
I think this isn't OS-specific, but agree that it's a bit strange. As a workaround, it might be possible to prepend "./" to the save path, and perhaps we should do that in the cases when os.path.dirname() returns ""?
I think it is the usual, and as such expected, behavior that when you don't provide an absolute/fully qualified path, it is treated as relative to the current working directory. So... yes, I think this should also be handled like this by TF.
Sounds fair to me. I'll defer to @concretevitamin on how best to fix that, since he wrote the most recent version of the Saver code.
Awesome! Thanks a lot!
@mrry or @concretevitamin Could you give a quick status update on this as soon as there's something new to mention?
That particular check was contributed from the community -- @PuchatekwSzortach, could you fix that?
Someone spotted the same problem a week ago and I already fixed it - but the code is still awaiting review.
Here's the original post that mentions the issue: https://github.com/tensorflow/tensorflow/pull/6601#issuecomment-271063210
And here's a pull request fixing the issue:
https://github.com/tensorflow/tensorflow/pull/6601
Great! I've got to wait for it to get included into the native Windows version, though... haven't managed yet to build TF on my Windows 10 from sources...
gfile.IsDirectory('') returns True. @HWiese1980 are you sure this is the function that causes the issue?
Change the path to be an exact directory path.
For example:
checkpoint_name = "C:\\Users\\Minkun\\Desktop\\VTIS_Project\\model.ckpt"
Note that you use '\\' instead of '\'.
I am seeing the same issue as noted in the original post, in Tensorflow-gpu version 1.4.0 with Python 3.6.2, on Windows 7.
The restore function does not have a problem with just a string of a filename, but the save function can't handle it. If I put './filename.ckpt', as suggested by another poster, then the saver is fine.
Nagging Awaiting TensorFlower: It has been 14 days with no activity and the awaiting tensorflower label was assigned. Please update the label and/or status accordingly.
Yeah, we need to update saver.py. A simple way is to just follow @mrry's advice: prepend "./" to the save path in the cases when os.path.dirname() returns "".
Community contributions are welcomed.
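In the meantime, a sketch of the user-side workaround in TF 1.x (the variable and path names are placeholders):
import os
import tensorflow as tf
v = tf.Variable(0.0, name="v")  # something for the Saver to save
save_path = "model.ckpt"
# Give the bare filename an explicit parent so the directory check passes.
if os.path.dirname(save_path) == "":
    save_path = os.path.join(".", save_path)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, save_path)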
Please remove the assignee, as this issue is inviting external contributions. Otherwise, remove the contributions welcome label. Thank you.
@gunan It seems there was a good amount of activity on this over a year ago (Jan 2017), in particular in pull request 6601. But reading that PR, towards the bottom a problem was discovered and I think the changes were reverted. Then nothing since?
So is this something that likely isn't going to ever be "fixed" and we just need to plan on attaching './' to the front of file path string, for saver, going forward?
Hi @tylerlekang,
We currently are working on other issues, but this issue staying open means it is worth fixing.
Also, the issue is marked contributions welcome, this means we do not have the time to look into this now, but we would appreciate any fixes.
One potential path forward is to pick up that PR and modify it with the last suggestion on the PR thread.
@HWiese1980 ,
We see that you are using an older version of TensorFlow (1.x) which is not actively supported. We recommend that you upgrade to the latest stable version of TensorFlow 2.6.0 and let us know if the issue still persists in newer versions. Thanks!
I think this issue has been resolved in the meantime. It's already been quite a while since I opened it, and I must confess, I totally forgot about it. Thanks for waking me up from my slumber again. :-) I guess, considering the last activity is from 2018, I can simply close it.
@google-ml-butler
Seriously, can't tell... (I'm aware it's a bot)
|
gharchive/issue
| 2016-12-20T10:13:40 |
2025-04-01T06:40:35.296669
|
{
"authors": [
"HWiese1980",
"PuchatekwSzortach",
"aselle",
"concretevitamin",
"gunan",
"mingxingtan",
"minkoon",
"mrry",
"ppwwyyxx",
"tensorflowbutler",
"tilakrayal",
"tylerlekang"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/6417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2671429357
|
model.fit fails when the number of rows exceeds Int32.MaxValue
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
2.19.0-dev20241117
Custom code
Yes
OS platform and distribution
MacOS 15.1.0
Mobile device
No response
Python version
3.10
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
I would expect model.fit to handle training on extremely large NumPy arrays without limitations.
Standalone code to reproduce the issue
import numpy as np
from keras import Sequential
from keras.layers import Dense
n = 2_147_483_648
x = np.zeros(n).astype(np.float32)
y = x
model = Sequential([
Dense(64, activation="relu", input_shape=(1,)),
Dense(1, activation="sigmoid")
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x=x,y=y, epochs=1, batch_size=1024, verbose=1)
Relevant log output
ValueError: Invalid value in tensor used for shape: -2147483648
Hi @github-clement-schiano, maybe TensorFlow uses an int32 value for calculating the dimension of the data; as you are passing a dimension above the int32 range (int32 max + 1), this causes the error. I tried to use the max of int32, which is 2_147_483_647, but the Colab crashes due to high RAM utilization. So I tried using a data generator to train the model on batches of data and was able to train the model. Please refer to this gist. Thank you.
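For reference, a rough sketch of that generator approach (it synthesizes the same all-zeros data as the repro and reuses the model from the snippet above; untested at this scale):
import numpy as np
import tensorflow as tf
n = 2_147_483_648
batch_size = 1024
def batches():
    # Yield lazily so no single tensor dimension exceeds the int32 range.
    for start in range(0, n, batch_size):
        size = min(batch_size, n - start)
        x = np.zeros(size, dtype=np.float32)
        yield x, x  # labels equal inputs, as in the original repro
dataset = tf.data.Dataset.from_generator(
    batches,
    output_signature=(
        tf.TensorSpec(shape=(None,), dtype=tf.float32),
        tf.TensorSpec(shape=(None,), dtype=tf.float32),
    ),
)
model.fit(dataset, epochs=1, verbose=1)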
Hi @kiransair, I’m working directly with NumPy arrays and don't want to use data generator. There shouldn’t be any size limitations from NumPy’s perspective. A dataset with 2 billion rows isn’t excessively large, and the machine I’m using has plenty of RAM to handle it. Do you have any insights into what might be causing this issue?
|
gharchive/issue
| 2024-11-19T09:24:51 |
2025-04-01T06:40:35.303087
|
{
"authors": [
"github-clement-schiano",
"kiransair"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/80241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
127468905
|
relu_layer doesn't appear in tensorflow.org/versions/master/api_docs
relu_layer has a docstring in the code, but it doesn’t appear in the tensorflow online documentation for some reason.
In case it is not public yet, it should be removed from the cifar example.
I'm not sure whether it is supposed to be public or not.
It is not supposed to be public, and it will likely be deprecated (and removed) in favor of layers.fully_connected_relu once that's matured.
I agree using unstable functions in the tutorials is suboptimal, and since relu_layer is so simple, a simplified version of it should probably be inlined in cifar.
What about xw_plus_b?
I can make a PR replacing relu_layer and xw_plus_b with public ops in the tutorials. Is someone already working on it?
Are there any other ops that should be replaced in the tutorials?
Nobody is working on it; if you wrote a pull request, we'd be grateful for it.
Looks like it was fixed in #828
|
gharchive/issue
| 2016-01-19T15:19:58 |
2025-04-01T06:40:35.307814
|
{
"authors": [
"cesarsalgado",
"josh11b",
"martinwicke",
"vrv"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/811",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2702096013
|
What Is The Cheapest Day To Book A Flight On Delta Airlines?
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
#2323w
Custom code
Yes
OS platform and distribution
No response
Mobile device
No response
Python version
No response
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
𝔗𝔥𝔢 𝔠𝔥𝔢𝔞𝔭𝔢𝔰𝔱 𝔡𝔞𝔶 𝔱𝔬 𝔟𝔬𝔬𝔨 Delta Airlines 𝔣𝔩𝔦𝔤𝔥𝔱𝔰 𝔦𝔰 𝔱𝔶𝔭𝔦𝔠𝔞𝔩𝔩𝔶 𝔗𝔲𝔢𝔰𝔡𝔞𝔶 𝔬𝔯 𝔚𝔢𝔡𝔫𝔢𝔰𝔡𝔞𝔶. 𝔓𝔯𝔦𝔠𝔢𝔰 𝔞𝔯𝔢 𝔩𝔬𝔴𝔢𝔯 𝔪𝔦𝔡-𝔴𝔢𝔢𝔨 𝔡𝔲𝔢 𝔱𝔬 𝔞𝔦𝔯𝔩𝔦𝔫𝔢 𝔣𝔞𝔯𝔢 𝔞𝔡𝔧𝔲𝔰𝔱𝔪𝔢𝔫𝔱𝔰, 𝔭𝔯𝔬𝔳𝔦𝔡𝔦𝔫𝔤 𝔟𝔢𝔱𝔱𝔢𝔯 𝔡𝔢𝔞𝔩𝔰 𝔣𝔬𝔯 𝔢𝔞𝔯𝔩𝔶 𝔟𝔬𝔬𝔨𝔦𝔫𝔤𝔰. 𝔉𝔬𝔯 𝔪𝔬𝔯𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢, 𝔠𝔞𝔩𝔩 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 .
𝔇𝔬𝔢𝔰 Delta Airlines 𝔥𝔞𝔳𝔢 𝔣𝔯𝔢𝔢 𝔠𝔞𝔫𝔠𝔢𝔩𝔩𝔞𝔱𝔦𝔬𝔫?
𝔜𝔢𝔰, Delta Airlines 𝔬𝔣𝔣𝔢𝔯𝔰 𝔞 24-𝔥𝔬𝔲𝔯 𝔯𝔦𝔰𝔨-𝔣𝔯𝔢𝔢 𝔠𝔞𝔫𝔠𝔢𝔩𝔩𝔞𝔱𝔦𝔬𝔫 𝔭𝔬𝔩𝔦𝔠𝔶 +1(844) 619-5030 {{𝔒𝔗𝔄}}. 𝔜𝔬𝔲 𝔠𝔞𝔫 𝔠𝔞𝔫𝔠𝔢𝔩 𝔞𝔫𝔶 𝔣𝔩𝔦𝔤𝔥𝔱 𝔴𝔦𝔱𝔥𝔦𝔫 24 𝔥𝔬𝔲𝔯𝔰 𝔬𝔣 𝔟𝔬𝔬𝔨𝔦𝔫𝔤 𝔣𝔬𝔯 𝔞 𝔣𝔲𝔩𝔩 𝔯𝔢𝔣𝔲𝔫𝔡 +1(844) 619-50-30 {{𝔒𝔗𝔄}}, 𝔞𝔰 𝔩𝔬𝔫𝔤 𝔞𝔰 𝔱𝔥𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔴𝔞𝔰 𝔟𝔬𝔬𝔨𝔢𝔡 𝔞𝔱 𝔩𝔢𝔞𝔰𝔱 7 𝔡𝔞𝔶𝔰 𝔟𝔢𝔣𝔬𝔯𝔢 𝔡𝔢𝔭𝔞𝔯𝔱𝔲𝔯𝔢 𝔞𝔱 +1(844) 619-5030 {{𝔒𝔗𝔄}}.
𝔇𝔬𝔢𝔰 Delta Airlines 𝔞𝔩𝔩𝔬𝔴 𝔫𝔞𝔪𝔢 𝔠𝔥𝔞𝔫𝔤𝔢𝔰?
𝔉𝔬𝔯 𝔞 𝔣𝔲𝔩𝔩 𝔫𝔞𝔪𝔢 𝔠𝔥𝔞𝔫𝔤𝔢, 𝔠𝔬𝔫𝔱𝔞𝔠𝔱 Delta Airlines 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔰𝔢𝔯𝔳𝔦𝔠𝔢 𝔞𝔱 +1(844) 619-5030, +1(844) 619-50-30 (𝕆𝕋𝔸), 𝔭𝔯𝔬𝔳𝔦𝔡𝔦𝔫𝔤 𝔶𝔬𝔲𝔯 𝔪𝔞𝔯𝔯𝔦𝔞𝔤𝔢 𝔠𝔢𝔯𝔱𝔦𝔣𝔦𝔠𝔞𝔱𝔢 𝔞𝔫𝔡 𝔞𝔫𝔶 𝔯𝔢𝔮𝔲𝔦𝔯𝔢𝔡 𝔡𝔬𝔠𝔲𝔪𝔢𝔫𝔱𝔞𝔱𝔦𝔬𝔫. 𝔖𝔦𝔫𝔠𝔢 𝔣𝔢𝔢𝔰 𝔪𝔞𝔶 𝔞𝔭𝔭𝔩𝔶, 𝔦𝔱'𝔰 𝔟𝔢𝔰𝔱 𝔱𝔬 𝔪𝔞𝔨𝔢 𝔱𝔥𝔢𝔰𝔢 𝔠𝔥𝔞𝔫𝔤𝔢𝔰 𝔢𝔞𝔯𝔩𝔶 𝔱𝔬 𝔞𝔳𝔬𝔦𝔡 𝔱𝔯𝔞𝔳𝔢𝔩 𝔡𝔦𝔰𝔯𝔲𝔭𝔱𝔦𝔬𝔫𝔰.
ℌ𝔬𝔴 ℭ𝔞𝔫 𝔜𝔬𝔲 ℭ𝔥𝔞𝔫𝔤𝔢 𝔜𝔬𝔲𝔯 𝔑𝔞𝔪𝔢 𝔒𝔫 𝔞𝔫 Delta Airlines 𝔗𝔦𝔠𝔨𝔢𝔱?
𝔏𝔬𝔠𝔞𝔱𝔢 𝔶𝔬𝔲𝔯 𝔟𝔬𝔬𝔨𝔦𝔫𝔤 𝔲𝔫𝔡𝔢𝔯 "𝔐𝔞𝔫𝔞𝔤𝔢 𝔐𝔶 𝔗𝔯𝔦𝔭," 𝔦𝔫𝔭𝔲𝔱 𝔶𝔬𝔲𝔯 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫 𝔦𝔫𝔣𝔬𝔯𝔪𝔞𝔱𝔦𝔬𝔫, 𝔞𝔫𝔡 𝔠𝔥𝔬𝔬𝔰𝔢 𝔱𝔥𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔑𝔞𝔪𝔢 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 𝔠𝔥𝔞𝔫𝔤𝔢 𝔬𝔭𝔱𝔦𝔬𝔫. 𝔈𝔫𝔰𝔲𝔯𝔢 𝔶𝔬𝔲 𝔰𝔢𝔱𝔱𝔩𝔢 𝔞𝔫𝔶 𝔯𝔢𝔩𝔢𝔳𝔞𝔫𝔱 𝔑𝔞𝔪𝔢 𝔞𝔡𝔧𝔲𝔰𝔱𝔪𝔢𝔫𝔱 𝔠𝔥𝔞𝔯𝔤𝔢𝔰 𝔞𝔫𝔡 𝔞𝔴𝔞𝔦𝔱 𝔠𝔬𝔫𝔣𝔦𝔯𝔪𝔞𝔱𝔦𝔬𝔫 𝔣𝔯𝔬𝔪 Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔲𝔭𝔬𝔫 𝔠𝔬𝔪𝔭𝔩𝔢𝔱𝔦𝔫𝔤 𝔱𝔥𝔢 𝔭𝔯𝔬𝔠𝔢𝔰𝔰.
ℭ𝔞𝔫 ℑ 𝔠𝔥𝔞𝔫𝔤𝔢 𝔞𝔫 Delta Airlines 𝔣𝔩𝔦𝔤𝔥𝔱 𝔣𝔬𝔯 𝔣𝔯𝔢𝔢?
Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔪𝔞𝔡𝔢 𝔱𝔥𝔢 𝔭𝔯𝔬𝔠𝔢𝔰𝔰 𝔢𝔞𝔰𝔦𝔢𝔯 𝔣𝔬𝔯 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯𝔰 𝔱𝔬 𝔠𝔥𝔞𝔫𝔤𝔢 𝔬𝔯 𝔠𝔞𝔫𝔠𝔢𝔩 𝔱𝔥𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔴𝔦𝔱𝔥 𝔫𝔬 𝔭𝔢𝔫𝔞𝔩𝔱𝔶 𝔣𝔢𝔢𝔰 𝔦𝔣 𝔱𝔥𝔢𝔶 𝔪𝔞𝔨𝔢 𝔠𝔥𝔞𝔫𝔤𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔯𝔢𝔮𝔲𝔢𝔰𝔱𝔰 𝔴𝔦𝔱𝔥𝔦𝔫 24 𝔥𝔬𝔲𝔯𝔰 𝔬𝔣 𝔱𝔥𝔢 𝔭𝔲𝔯𝔠𝔥𝔞𝔰𝔢 𝔬𝔣 𝔱𝔥𝔢 𝔱𝔦𝔠𝔨𝔢𝔱 𝔞𝔫𝔡 𝔤𝔢𝔱𝔱𝔦𝔫𝔤 𝔠𝔬𝔫𝔣𝔦𝔯𝔪𝔢𝔡 𝔱𝔥𝔢 𝔰𝔱𝔞𝔱𝔲𝔰 𝔬𝔣 𝔱𝔥𝔢 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫. 𝔉𝔬𝔯 𝔦𝔪𝔪𝔢𝔡𝔦𝔞𝔱𝔢 𝔥𝔢𝔩𝔭, Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯𝔰 𝔠𝔞𝔫 𝔞𝔩𝔰𝔬 𝔰𝔭𝔢𝔞𝔨 𝔴𝔦𝔱𝔥 𝔢𝔵𝔭𝔢𝔯𝔱𝔰 𝔞𝔱 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 𝔣𝔬𝔯 𝔦𝔪𝔪𝔢𝔡𝔦𝔞𝔱𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢.
𝔇𝔬𝔢𝔰 Delta Airlines 𝔞𝔩𝔩𝔬𝔴 𝔠𝔥𝔞𝔫𝔤𝔢𝔰?
𝔜𝔢𝔰, Delta Airlines 𝔞𝔩𝔩𝔬𝔴𝔰 𝔠𝔥𝔞𝔫𝔤𝔢𝔰 𝔱𝔬 𝔣𝔩𝔦𝔤𝔥𝔱 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫𝔰, 𝔟𝔲𝔱 𝔱𝔥𝔢𝔶 𝔱𝔶𝔭𝔦𝔠𝔞𝔩𝔩𝔶 𝔠𝔥𝔞𝔯𝔤𝔢 𝔞 𝔣𝔢𝔢 𝔣𝔬𝔯 𝔪𝔬𝔡𝔦𝔣𝔦𝔠𝔞𝔱𝔦𝔬𝔫𝔰. 𝔜𝔬𝔲 𝔠𝔞𝔫 𝔠𝔥𝔞𝔫𝔤𝔢 𝔶𝔬𝔲𝔯 𝔣𝔩𝔦𝔤𝔥𝔱 𝔲𝔭 𝔱𝔬 7 𝔡𝔞𝔶𝔰 𝔟𝔢𝔣𝔬𝔯𝔢 𝔡𝔢𝔭𝔞𝔯𝔱𝔲𝔯𝔢. 𝔉𝔬𝔯 𝔪𝔬𝔯𝔢 𝔡𝔢𝔱𝔞𝔦𝔩𝔰, 𝔠𝔞𝔩𝔩 Delta Airlines 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔰𝔢𝔯𝔳𝔦𝔠𝔢 𝔞𝔱 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 𝔣𝔬𝔯 𝔦𝔪𝔪𝔢𝔡𝔦𝔞𝔱𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢.
𝔜𝔢𝔰, 𝔶𝔬𝔲 𝔠𝔞𝔫 𝔠𝔥𝔞𝔫𝔤𝔢 𝔬𝔯 𝔪𝔬𝔡𝔦𝔣𝔶 𝔞 𝔣𝔩𝔦𝔤𝔥𝔱 𝔟𝔬𝔬𝔨𝔢𝔡 𝔱𝔥𝔯𝔬𝔲𝔤𝔥 Delta Airlines , 𝔟𝔲𝔱 𝔦𝔱 𝔡𝔢𝔭𝔢𝔫𝔡𝔰 𝔬𝔫 𝔱𝔥𝔢 𝑨𝒊𝒓𝒍𝒊𝒏𝒆'𝔰 𝔭𝔬𝔩𝔦𝔠𝔦𝔢𝔰 𝔞𝔫𝔡 𝔭𝔬𝔱𝔢𝔫𝔱𝔦𝔞𝔩 𝔣𝔢𝔢𝔰. 𝔉𝔬𝔯 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢 𝔴𝔦𝔱𝔥 𝔶𝔬𝔲𝔯 𝔟𝔬𝔬𝔨𝔦𝔫𝔤, 𝔯𝔢𝔞𝔠𝔥 𝔬𝔲𝔱 𝔱𝔬 Delta Airlines 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔠𝔞𝔯𝔢 𝔞𝔱 +1(844) 619-5030 𝔬𝔯 +1(844) 619-50-30 (𝕆𝕋𝔸).
ℭ𝔞𝔫 ℑ 𝔠𝔥𝔞𝔫𝔤𝔢 𝔱𝔥𝔢 𝔫𝔞𝔪𝔢 𝔬𝔫 𝔪𝔶 𝔣𝔩𝔦𝔤𝔥𝔱 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫?
Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔤𝔢𝔫𝔢𝔯𝔞𝔩𝔩𝔶 𝔡𝔬𝔢𝔰 𝔫𝔬𝔱 𝔞𝔩𝔩𝔬𝔴 𝔫𝔞𝔪𝔢 𝔠𝔥𝔞𝔫𝔤𝔢𝔰 𝔬𝔫 𝔣𝔩𝔦𝔤𝔥𝔱 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫𝔰. ℑ𝔣 𝔶𝔬𝔲 𝔫𝔢𝔢𝔡 𝔱𝔬 𝔠𝔬𝔯𝔯𝔢𝔠𝔱 𝔞 𝔫𝔞𝔪𝔢, 𝔶𝔬𝔲 𝔪𝔞𝔶 𝔫𝔢𝔢𝔡 𝔱𝔬 𝔠𝔞𝔫𝔠𝔢𝔩 𝔞𝔫𝔡 𝔯𝔢𝔟𝔬𝔬𝔨. 𝔉𝔬𝔯 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢, 𝔠𝔬𝔫𝔱𝔞𝔠𝔱 Delta Airlines 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯 𝔰𝔢𝔯𝔳𝔦𝔠𝔢 𝔞𝔱 +1(844) 619-5030, +1(844) 619-50-30 (𝕆𝕋𝔸) 𝔣𝔬𝔯 𝔤𝔲𝔦𝔡𝔞𝔫𝔠𝔢.
𝔇𝔬𝔢𝔰 Delta Airlines 𝔬𝔣𝔣𝔢𝔯 𝔞 𝔯𝔢𝔣𝔲𝔫𝔡?
#𝔄𝔦𝔯™𝔓𝔬𝔩𝔦𝔠𝔶-Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔱𝔶𝔭𝔦𝔠𝔞𝔩𝔩𝔶 𝔡𝔬𝔢𝔰 𝔫𝔬𝔱 𝔦𝔰𝔰𝔲𝔢 𝔯𝔢𝔣𝔲𝔫𝔡𝔰 𝔣𝔬𝔯 𝔠𝔞𝔫𝔠𝔢𝔩𝔢𝔡 𝔣𝔩𝔦𝔤𝔥𝔱𝔰 𝔲𝔫𝔩𝔢𝔰𝔰 𝔶𝔬𝔲 𝔥𝔞𝔳𝔢 𝔰𝔢𝔩𝔢𝔠𝔱𝔢𝔡 𝔞 𝔯𝔢𝔣𝔲𝔫𝔡𝔞𝔟𝔩𝔢 𝔣𝔞𝔯𝔢 𝔬𝔯 𝔞𝔡𝔡𝔢𝔡 𝔱𝔥𝔢 𝔗𝔯𝔦𝔭 𝔉𝔩𝔢𝔵 𝔬𝔭𝔱𝔦𝔬𝔫 +1(844) 619-5030 𝔬𝔯 ++1(844) 619-50-30 (𝔒𝔗𝔄)
Standalone code to reproduce the issue
𝔏𝔬𝔠𝔞𝔱𝔢 𝔶𝔬𝔲𝔯 𝔟𝔬𝔬𝔨𝔦𝔫𝔤 𝔲𝔫𝔡𝔢𝔯 "𝔐𝔞𝔫𝔞𝔤𝔢 𝔐𝔶 𝔗𝔯𝔦𝔭," 𝔦𝔫𝔭𝔲𝔱 𝔶𝔬𝔲𝔯 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫 𝔦𝔫𝔣𝔬𝔯𝔪𝔞𝔱𝔦𝔬𝔫, 𝔞𝔫𝔡 𝔠𝔥𝔬𝔬𝔰𝔢 𝔱𝔥𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔑𝔞𝔪𝔢 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 𝔠𝔥𝔞𝔫𝔤𝔢 𝔬𝔭𝔱𝔦𝔬𝔫. 𝔈𝔫𝔰𝔲𝔯𝔢 𝔶𝔬𝔲 𝔰𝔢𝔱𝔱𝔩𝔢 𝔞𝔫𝔶 𝔯𝔢𝔩𝔢𝔳𝔞𝔫𝔱 𝔑𝔞𝔪𝔢 𝔞𝔡𝔧𝔲𝔰𝔱𝔪𝔢𝔫𝔱 𝔠𝔥𝔞𝔯𝔤𝔢𝔰 𝔞𝔫𝔡 𝔞𝔴𝔞𝔦𝔱 𝔠𝔬𝔫𝔣𝔦𝔯𝔪𝔞𝔱𝔦𝔬𝔫 𝔣𝔯𝔬𝔪 Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔲𝔭𝔬𝔫 𝔠𝔬𝔪𝔭𝔩𝔢𝔱𝔦𝔫𝔤 𝔱𝔥𝔢 𝔭𝔯𝔬𝔠𝔢𝔰𝔰.
Relevant log output
Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔪𝔞𝔡𝔢 𝔱𝔥𝔢 𝔭𝔯𝔬𝔠𝔢𝔰𝔰 𝔢𝔞𝔰𝔦𝔢𝔯 𝔣𝔬𝔯 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯𝔰 𝔱𝔬 𝔠𝔥𝔞𝔫𝔤𝔢 𝔬𝔯 𝔠𝔞𝔫𝔠𝔢𝔩 𝔱𝔥𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔴𝔦𝔱𝔥 𝔫𝔬 𝔭𝔢𝔫𝔞𝔩𝔱𝔶 𝔣𝔢𝔢𝔰 𝔦𝔣 𝔱𝔥𝔢𝔶 𝔪𝔞𝔨𝔢 𝔠𝔥𝔞𝔫𝔤𝔢 𝔣𝔩𝔦𝔤𝔥𝔱 𝔯𝔢𝔮𝔲𝔢𝔰𝔱𝔰 𝔴𝔦𝔱𝔥𝔦𝔫 24 𝔥𝔬𝔲𝔯𝔰 𝔬𝔣 𝔱𝔥𝔢 𝔭𝔲𝔯𝔠𝔥𝔞𝔰𝔢 𝔬𝔣 𝔱𝔥𝔢 𝔱𝔦𝔠𝔨𝔢𝔱 𝔞𝔫𝔡 𝔤𝔢𝔱𝔱𝔦𝔫𝔤 𝔠𝔬𝔫𝔣𝔦𝔯𝔪𝔢𝔡 𝔱𝔥𝔢 𝔰𝔱𝔞𝔱𝔲𝔰 𝔬𝔣 𝔱𝔥𝔢 𝔯𝔢𝔰𝔢𝔯𝔳𝔞𝔱𝔦𝔬𝔫. 𝔉𝔬𝔯 𝔦𝔪𝔪𝔢𝔡𝔦𝔞𝔱𝔢 𝔥𝔢𝔩𝔭, Delta Airlines 𝔄𝔦𝔯𝔩𝔦𝔫𝔢𝔰 𝔠𝔲𝔰𝔱𝔬𝔪𝔢𝔯𝔰 𝔠𝔞𝔫 𝔞𝔩𝔰𝔬 𝔰𝔭𝔢𝔞𝔨 𝔴𝔦𝔱𝔥 𝔢𝔵𝔭𝔢𝔯𝔱𝔰 𝔞𝔱 +1(844) 619-5030 (𝕆𝕋𝔸), +1(844) 619-50-30 𝔣𝔬𝔯 𝔦𝔪𝔪𝔢𝔡𝔦𝔞𝔱𝔢 𝔞𝔰𝔰𝔦𝔰𝔱𝔞𝔫𝔠𝔢.
Closing this issue as spam.
|
gharchive/issue
| 2024-11-28T13:06:54 |
2025-04-01T06:40:35.317456
|
{
"authors": [
"Venkat6871",
"garrylims"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/81180",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
219349659
|
ValueError("No variables provided.") for apply_gradients
In optimizer.py, the first part of code segment is
def apply_gradients(self, grads_and_vars, global_step=None, name=None):
grads_and_vars = tuple(grads_and_vars) # Make sure repeat iteration works.
if not grads_and_vars:
raise ValueError("No variables provided.")
Running my program, I got the error message caused by this specific error. I then printed out tuple(grads_and_vars), part of which is. I don't know why it can cause the error of no variables provided.
((<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_0:0' shape=(3, 3, 3, 64) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2afc746b5c50>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_1:0' shape=(64,) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2affd48189b0>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_2:0' shape=(3, 3, 64, 64) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2affd486d940>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_3:0' shape=(64,) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2affd488cf98>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_4:0' shape=(3, 3, 64, 128) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2afc746b5d68>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_5:0' shape=(128,) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2affd48f4278>), (<tf.Tensor 'Optimizer/training/clip_by_global_norm/Optimizer/training/clip_by_global_norm/_6:0' shape=(3, 3, 128, 128) dtype=float32>, <tensorflow.python.ops.variables.Variable object at 0x2affd4915e10>),
Please provide details about what platform you are using (operating system, architecture). Also include your TensorFlow version. Also, did you compile from source or install a binary? Make sure you also include the exact command if possible to produce the output included in your test case. We ask for this in the issue submission template, because it is really difficult to help without that information. Thanks!
|
gharchive/issue
| 2017-04-04T18:58:59 |
2025-04-01T06:40:35.325936
|
{
"authors": [
"asimshankar",
"surfreta"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/issues/8961",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
235899668
|
Improve docs for parallel_stack
Partly solves #10036
Moves #10593 to master
Can one of the admins verify this patch?
Jenkins, test this please.
|
gharchive/pull-request
| 2017-06-14T14:31:30 |
2025-04-01T06:40:35.327227
|
{
"authors": [
"Androbin",
"girving",
"tensorflow-jenkins"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/10704",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
316835886
|
Ensure Java Session closes the JNI on finalize
Currently the JNI Session object does not close
the native library when the JVM deallocates it.
This ensures that when the JVM deallocates this
object from the system, it also closes the native
code correctly.
Obviously the user should call close explicitly or use try blocks, but this catches the cases where they don't and can prevent massive memory leaks on services.
Thanks for the contribution.
We have debated this before, and while I admit that the decision isn't set in stone, the feeling was that the finalizer should be avoided in this case. There is some discussion in the Effective Java book. In particular, since the native peers of these classes can hold on to a significant chunk of resources (e.g., large amounts of memory) - encouraging cleanup on the finalizer may seem convenient but actually makes it harder to reason about and debug the memory footprint of a program (for example, if the memory footprint goes up and down as the GC runs, making it hard to associate with the code that is missing the close() calls).
So I'd suggest that we do not merge this PR, but I say so with the humility that I could be wrong :)
If the consensus is not to clean up on finalize, perhaps a better solution might be an assertion that the object has been closed on finalize? I do think something has to be done to ensure that native components and their Java wrappers don't go out of sync and cause hard-to-debug memory leaks.
@asimshankar I take your point; I don't know the performance implications of doing this on Tensors, for example. I will fall back to thinking that this technique should be applied for Graph or Session, though. Having said that, if consistency is king then I can close this PR.
@8W9aG : Thanks for your understanding. Let's close this for now, with the understanding that it may make sense to revisit this in the future (e.g., if there is a lot of compelling feedback around this). Thanks!
|
gharchive/pull-request
| 2018-04-23T14:24:19 |
2025-04-01T06:40:35.331273
|
{
"authors": [
"8W9aG",
"asimshankar"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/18796",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
435719158
|
Added Support for SoftPlus operator in tflite.
This is part of issue #27822.
Do you have a specific model which uses this operator?
We're trying to raise the bar for adding new operators to TFLite, preferring a smaller set of core operators where possible, as it makes it more difficult to maintain parity with our delegate/accelerator pipeline.
@jdduke, thanks for the comments; this implementation is in line with issue #27822. But I understand your point. So should we use a graph transform to convert the softplus operator, or leave it for custom ops?
As I feel a graph transform might not be so efficient, as it will again create two nodes (when the value is within the range). Let me know your take on this.
Regards
Amit
I'm going to defer to @miaout17 for further review. I agree that the graph transform won't be quite as important. Can you tell if the SoftPlus operator in that graph is used sparingly? Or all throughout?
@jdduke , thanks for the response, I think the model uses softplus sparingly, but softplus seems to be picking up momentum.
@miaout17 , could you please review and provide the feedback.
Regards
Amit
@amitsrivastava78 could you please resolve the conflicts?
@gbaned, thanks for pointing this out, I have resolved the conflicts.
Regards
Amit
While the code itself looks good, the following questions remain unanswered:
What models use SoftPlus op?
Should we do graph transformation or define a new op? (tradeoff between efficiency versus fewer builtin ops).
@jdduke are you comfortable adding this as a builtin op? If yes, I can follow up the code review process
@miaout17 thanks for the review, I have updated all the comments as per your suggestions; kindly check.
Regards
Amit
Hi @amitsrivastava78, we're working on some guidelines for new operators, could you give us a few days to get back to you?
As noted previously, it would be good to know if other models are using this operator, and whether a different activation would suffice.
@jdduke, thanks for your response. Sure, I will wait for the conclusion from your side; also, I will check which models are using this operator and update you.
Regards
Amit
@amitsrivastava78 could you please resolve the conflicts? Thanks!
@gbaned, thanks for pointing this out, I have resolved the conflicts.
Regards
Amit
@amitsrivastava78 could you please resolve the conflicts? Thanks!
@gbaned, I have rebased the code and resolved the merge conflicts.
@jdduke can you please have a look at the PR and let me know your feedback
Regards
Amit
Until we find another model which requires this op, I'd rather we rely on using the select TF ops, or we add this as a custom op that users can optionally link into their app.
See also https://github.com/tensorflow/tensorflow/commit/03195f13456354deea8b81c9e583621b1337b952#diff-2b45693b554369bde8c98e9a76b80036
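For illustration, a sketch of the select-TF-ops fallback (current converter API; the saved-model path is a placeholder):
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_saved_model("softplus_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ops with native TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF kernels for ops like Softplus
]
tflite_model = converter.convert()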
@amitsrivastava78 Could you please address jdduke's comments. Thanks!
I'm going to go ahead and close this PR, because it seems to have stalled. If you're still interested in pursuing this (and responding to my comments), please feel free to reopen!
Does TFLite support the SoftPlus operator now? If not, how can I include it as a custom one? @jdduke
|
gharchive/pull-request
| 2019-04-22T13:02:53 |
2025-04-01T06:40:35.341434
|
{
"authors": [
"amitsrivastava78",
"divyajaincs",
"gbaned",
"jdduke",
"miaout17",
"rthadur"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/28042",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
446281757
|
Make vscode's pylint integration work on TensorFlow source
If you point a copy of Visual Studio Code at a TensorFlow source tree, the editor's built-in linter displays a large number of spurious errors and misses errors that would show up in CI builds. This PR adds a symbolic link to the pylintrc file for CI builds at the root of the source code tree. After this change, the default linter output of Visual Studio Code is much closer to what one sees when running tensorflow/tools/ci_build/ci_sanity.sh.
Can one of the admins verify this patch?
Is there another way to specify to vscode which file to use for linting? I'd like to avoid adding a file to the root directory if possible. I'd expect vscode to have a configuration option somewhere.
Per the VS Code documentation, you can configure the editor's pylint integration in two ways:
Set the python.linting.pylintArgs parameter in VS Code's settings.json file
Put a file called either .pylintrc or pylintrc in the root directory of the workspace
I'm not aware of any other undocumented ways of configuring the linter.
@aaudiber Can you please take a look at this PR? Thanks!
Thanks @frreiss! This PR will need a manual import. Let me do that.
Thanks @yifeif!
|
gharchive/pull-request
| 2019-05-20T19:53:10 |
2025-04-01T06:40:35.345529
|
{
"authors": [
"aaudiber",
"frreiss",
"gbaned",
"mellanox-github",
"yifeif"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/28876",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
496028449
|
[XLA:GPU][ROCm] Enabling amdgpu backend in XLA unit tests
This PR enables the amdgpu XLA backend for unit tests, in preparation for setting up ROCm XLA community support builds.
@jerryyin Could you please check failed build errors? Thanks!
@gbaned Judging from the build failures: external/local_config_mlir/include/mlir/Dialect/QuantOps/UniformSupport.h:148:14: error: 'clamp' is not a member of 'std'
I don't think the failures have anything to do with this PR. std::clamp is a C++17 addition and shouldn't be used in that header file.
|
gharchive/pull-request
| 2019-09-19T21:10:50 |
2025-04-01T06:40:35.347215
|
{
"authors": [
"gbaned",
"jerryyin"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/32673",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
555625216
|
Symmetric quantization with activations 16-bit and weights 8-bit: interface
In this PR we add a new option, TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8, to enable quantization with 16-bit activations and 8-bit weights.
When it is set, we do post-training symmetric quantization with 16-bit activations and 8-bit weights. The bias is 64-bit in this case. It behaves the same way as TFLITE_BUILTINS_INT8.
Example of usage:
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8]
The name is quite long, even though it is explanatory.
Maybe we should use something like TFLITE_BUILTINS_INT16x8.
Any suggestions are welcome.
Implementations of reference kernels are submitted in other PRs.
@suharshs Thanks for the review!
I will correct according to the suggestions and re-test internally.
Hi @suharshs, I added "non-strict" mode to this PR in the last commit. Now all changes to introduce the 16x8 mode are here. Could you please take a look when you have time? Thanks
This overall looks good now.
Before submitting this change, we need to ensure that the kernels are in, and then in one change we need to bump the version number of all the 16 bit ops, and ensure that this tool uses that new version when applying 16 bit operations.
Let's first get the rest of the kernels submitted, then send a single PR that updates the version for all the kernels, and finally update this PR with those version numbers.
Thanks for the review! The plan sounds good to me.
@wwwind Can you please resolve conflicts? Thanks!
@gbaned conflicts are resolved
This overall looks good now.
Before submitting this change, we need to ensure that the kernels are in, and then in one change we need to bump the version number of all the 16 bit ops, and ensure that this tool uses that new version when applying 16 bit operations.
Let's first get the rest of the kernels submitted, then send a single PR that updates the version for all the kernels, and finally update this PR with those version numbers.
@suharshs are the dependent changes submitted internally, or is there a PR?
Adding Feng to speak to how to add support in the mlir code path too.
Hi @liufengdb, I tested all my models with the flag experimental_new_converter = True (default),
and all models converted without problems to the 16x8 (activations int16, weights int8) mode, with the description "MLIR converted", etc.
I know that there is the _experimental_new_quantizer flag, but this path is in active development, so I keep an eye on it; it looks like it is not needed for now.
All our essential changes are in quantize_model.cc.
Please let me know if something need to be done in addition.
*There is a merge conflict on this PR; changes to resolve it are under testing right now.
Thanks!
Hi @wwwind, it is great to see the patch works well. Internally we have decided to migrate to the new quantizer, and, of course, we want to support this 16-bit activation feature in the new quantizer. My only concern is how to verify the results, because most of the tests in this patch seem tied to the old quantizer.
Thanks @liufengdb for the support in the new quantizer.
Yes, we have not added a lot of tests here, but I am ready to add unit tests with models that cover all our operators, like in quantize_model_test.cc. Is that okay? Perhaps I should do this in an additional PR, because this one is too big already.
Internally we have a set of end2end dummy models for all reference kernels that we have implemented:
we create one layer model and quantize it to 16x8 and do some checks
proper accuracy testing on classic models with 16x8 mode.
In all these tests we use a patched TF install.
Is there any place in tensorflow where I can add similar tests?
What is the timeline for the migration to the new quantizer? When does it make sense for us to start testing it?
Thanks!
add unit tests with models that cover all our operators.
Internally we have a set of end2end dummy models for all reference kernels that we have implemented. We create one layer model and quantize it to 16x8 and do some checks
A good place for these per-op tests can be extended from these tests:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/testing/op_tests
Is there any place in tensorflow where I can add similar tests?
There are some tests that use import tf directly: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/testing/generate_examples.py#L32
What is the timeline for the migration to the new quantizer? When does it make sense for us to start testing it ?
My plan is to switch the flag internally by the end of this quarter and make it public externally in the middle of next quarter. We have some internal users with the new converter, and most of the features are ready except this 16-bit mode. I can add the 16-bit support next week, and later on, you can add the tests to tensorflow/lite/testing/op_tests?
Thanks! This sounds good. I will prepare PR with these tests.
Hi @liufengdb ! I created a set of tests for 16x8 quantization in ops_test:
https://github.com/tensorflow/tensorflow/pull/39543
My concern is that these tests are for the old converter:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/testing/toco_convert.py#L123
If the function QuantizeModel stays the same for the new quantizer, then
the tests in this PR, which I have parametrized with int16, should still apply.
This should give good coverage of the 16x8 case.
Thanks!
I will switch the zip tests to use the new converter. Then you can submit #39543.
@wwwind can you please resolve conflicts and check sanity errors
@liufengdb I updated - please take a look
There is a failure in CI with
//tensorflow/tools/api/tests:api_compatibility_test
File "/home/elezhe01/.local/lib/python3.6/site-packages/tensorflow_estimator/python/estimator/canned/dnn.py", line 23, in <module>
from tensorflow.python.feature_column import dense_features
ImportError: cannot import name 'dense_features'
But I checked that the same error is in master, so it's not specific to this PR.
I tested this PR on our set of models - everything is working as it should when
experimental_new_converter = True.
Inference is now broken when experimental_new_converter = False.
Do we need to support this case?
No, but we should:
Document this restriction in the API docs (and eventual guide)
Throw an exception if trying to use this flag when the new converter is disabled
Hi @rthadur Thanks! I pushed a fix for these errors.
@jdduke, the option is renamed as suggested.
Hi @rthadur
There is a failure of the test in Ubuntu CPU checks:
//tensorflow/tools/api/tests:api_compatibility_test
I checked master and this test fails there as well
Thanks
@wwwind Can you please address Ubuntu Sanity errors? Thanks!
Hi @rthadur, Could you please re-approve this PR ?
I found a problem and fixed this CI failure finally.
Thanks!
Hi @jdduke I added description to API as requested.
Please take a look.
Thanks!
Hi @rthadur Sorry to bother you, but could you please re-approve this PR?
I had to push fixes to pylint errors.
Thanks!
Hi @jdduke ! Thanks for the review! Documentation is merged into this PR and comment is corrected, + small fix for pylint.
Could you please re-approve ?
Thanks!
For @tensorflow/api-owners, this looks good.
Hi @jdduke Could you please help with this PR ?
There are failures but they don't look relevant to my PR.
I tried these targets locally and they are green:
//tensorflow/lite/tools/optimize/calibration:logging_op_resolver (Linux GPU)
//tensorflow/tools/ci_build/builds:gen_win_out (Windows bazel)
Is it possible to re-run CI somehow ?
I ran the failing target locally:
bazel test //tensorflow/lite/tools/optimize/calibration:logging_op_resolver
and it's green locally.
Right, I think there was a broken build at head, so probably just needed a rebase.
Hi @jdduke I pushed a fix. Could you please re-approve ?
Thanks!
The build error in "MacOS CPU Python3" is reproducible in the master branch
@rthadur can you help push this through internal migration?
Hi @rthadur I have corrected. Could you please re-approve ? Thanks!
Hi @jdduke Could you please re-approve this PR? I pushed a small change as requested.
Thanks!
Hi @rthadur, there is again a failure on this PR in "MacOS CPU Python3".
It is the same as before, and it is reproducible in the master branch for me locally.
thanks
@wwwind will you be able to fix the error ?
Hi, I will try to patch this internally to resolve the tests. Thanks.
|
gharchive/pull-request
| 2020-01-27T14:49:49 |
2025-04-01T06:40:35.374911
|
{
"authors": [
"gbaned",
"jdduke",
"jsimsa",
"liufengdb",
"rthadur",
"suharshs",
"wwwind"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/36251",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
580360001
|
change tf.image.decode_image to tf.io.decode_image
Changed tf.image.decode_image to tf.io.decode_image for API migration and consistency; this also covers decode_jpeg, decode_gif, and decode_png.
Please see https://www.tensorflow.org/api_docs/python/tf/io/decode_image
Afaik, tf.io.decode_image is currently just an alias for tf.image.decode_image. But since the latter might get removed in the future, it's ok to replace usage with the more consistent alias.
Oh... Sorry, I accidentally dismissed mihaimaruseac's review approval and added four more commits to this PR while experimenting with the VS Code pull request extension. Is there any way I can withdraw these four changes?
git rebase and then git push -f. But it seems copybara merged some of the PR, so let's close this one and create another to fix it.
@mihaimaruseac Thank you. I will create another one later.
|
gharchive/pull-request
| 2020-03-13T04:15:41 |
2025-04-01T06:40:35.378864
|
{
"authors": [
"Angus-Luo",
"mihaimaruseac"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/37558",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
172611660
|
Install numpy from pip
Pip is installed using get-pip
The version of numpy in apt repos is ancient (1.8) AND a newer version seems to be pulled in from pypi anyway.
Can one of the admins verify this patch?
@tensorflow-jenkins test this please.
@cancan101 , can you please make the same changes to Dockerfile, Dockerfile.devel and Dockerfile.gpu in the same directory? Thanks.
@caisq done
@tensorflow-jenkins , test this please.
Can you squash the two commits into one, @cancan101 ?
done
@tensorflow-jenkins , test this please.
Merged. Thanks!
|
gharchive/pull-request
| 2016-08-23T04:49:54 |
2025-04-01T06:40:35.381970
|
{
"authors": [
"caisq",
"cancan101",
"tensorflow-jenkins"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/3975",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
685260528
|
Fix dlpack device for int32
Using BackingDeviceName instead of DeviceName, to set the correct device for int32 tensors
Fix https://github.com/tensorflow/tensorflow/issues/41307
I just fixed the lint error, but I have no idea why the Windows build failed.
Fixed the lint problem and merged with the latest master. Hopefully this fixes the CI errors.
|
gharchive/pull-request
| 2020-08-25T07:53:22 |
2025-04-01T06:40:35.383588
|
{
"authors": [
"VoVAllen"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/42646",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
175920585
|
Master
Install numpy from pip
Can one of the admins verify this patch?
|
gharchive/pull-request
| 2016-09-09T03:39:10 |
2025-04-01T06:40:35.384387
|
{
"authors": [
"cyrilfurtado",
"tensorflow-jenkins"
],
"repo": "tensorflow/tensorflow",
"url": "https://github.com/tensorflow/tensorflow/pull/4292",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
453194337
|
EncodeBase64 and DecodeBase64 ops
The TF implementation of the pix2pix model fails conversion because of
`Unsupported Ops: DecodeJpeg, EncodePng, DecodeBase64`
The Open NSFW model also fails conversion with some of the same ops (https://github.com/tensorflow/tfjs/issues/433).
I wanted to try to implement some of these ops in TensorFlow.js, starting with this pull request for DecodeBase64 and EncodeBase64.
along with this tfjs-core PR, there is a corresponding PR in tfjs-converter (https://github.com/tensorflow/tfjs-converter/pull/376)
To see the logs from the Cloud Build CI, please join either
our discussion
or announcement mailing list.
Can you fix the failing errors: https://console.cloud.google.com/gcr/builds/b214fa07-9b38-4194-909d-f5bf95a89811?project=834911136599
Just curious, have you tried this in Node? Does it work everywhere? Why aren't you using the built-in methods to do base64 conversion?
By the way thanks for the PR!
Can you fix the failing errors: https://console.cloud.google.com/gcr/builds/b214fa07-9b38-4194-909d-f5bf95a89811?project=834911136599
@nsthorat my apologies, the errors should be resolved now.
Just curious, have you tried this in Node? Does it work everywhere? Why aren't you using the built-in methods to do base64 conversion?
I have added it to the Node kernel (https://github.com/tensorflow/tfjs-node/pull/259) and I tried it out.
Regarding built-in methods, are you asking about btoa()/atob()? If so, browsers fail if a character exceeds the range of an 8-bit byte (0x00~0xFF). For example, this would not work:
btoa('add emphasis— with em dash')
because of the em dash/long dash unicode character (i.e., 0x2014).
If you are referring to some other built-in methods, please let me know and I can review to make sure I didn't miss something.
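To make the limitation concrete, here is a small sketch (plain browser JavaScript, not tied to this PR's implementation) of why btoa alone is not enough and how going through the UTF-8 bytes avoids the problem:

// btoa('add emphasis— with em dash') throws an InvalidCharacterError:
// the em dash (U+2014) is outside the 0x00~0xFF range btoa accepts.

function encodeBase64(text) {
  const bytes = new TextEncoder().encode(text); // UTF-8 bytes of the string
  let binary = '';
  for (let i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);    // each byte fits in 0x00~0xFF
  }
  return btoa(binary);
}

function decodeBase64(b64) {
  const binary = atob(b64);
  const bytes = Uint8Array.from(binary, (c) => c.charCodeAt(0));
  return new TextDecoder().decode(bytes);
}

console.log(decodeBase64(encodeBase64('add emphasis— with em dash')));
// -> 'add emphasis— with em dash'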
Hi @vabarbosa!
So @dsmilkov and I just did a deep dive and we have the following conclusions:
We are actually about to change the internal representation of string tensors to hold onto to the underlying byte array
This means that for you, you will just need to take that byte array and generate the base64 encoded version (as well as the reverse). This means just the method arrayBufferToString
We will let you know once string stuff is done!
thank you! i'll be on the look out for your string tensor updates. and if you have any questions for me in the meantime, feel free to ask.
Here is the PR if you want to follow: https://github.com/tensorflow/tfjs-core/pull/1816
Hi @nsthorat, I have pulled in all the latest updates (including string tensor PR #1816)
and I have made my changes accordingly. Let me know if you have any questions.
thanks.
|
gharchive/pull-request
| 2019-06-06T19:12:43 |
2025-04-01T06:40:35.398255
|
{
"authors": [
"nsthorat",
"vabarbosa"
],
"repo": "tensorflow/tfjs-core",
"url": "https://github.com/tensorflow/tfjs-core/pull/1779",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2671846939
|
18++Imsha Rehman Video Original Video Link Imsha Rehman Viral Video On Social Media X Trending Now
|
gharchive/issue
| 2024-11-19T11:27:38 |
2025-04-01T06:40:35.490424
|
{
"authors": [
"Addsdsdcc",
"fatafati2",
"johnjbeyer",
"kawserislam00c",
"namulhossen",
"vuterbasko"
],
"repo": "tensorflow/tflite-micro",
"url": "https://github.com/tensorflow/tflite-micro/issues/2788",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2182202725
|
Test Infra: enable arbitrary/random delay introduction to noc calls (semaphore_inc, async write, async read, etc.)
To expand test variability and increase the likelihood of catching hangs during testing (particularly for running determinism tests), allow noc APIs to introduce artificial delays under the hood. These delays should be lightly configurable from the host side.
For example, the host can provide a fixed delay value, a delay per API entrypoint, or a small set of random delays (maybe it can store this sequence of delays in L1 to loop through over time).
Here's an example to convey the idea (note I put the delay at the start, but I think there are use cases for having it at both the beginning and end of the function):
static uint32_t i = 0; // could be shared across all noc api calls that need random delays. For worker cores, needs to be threadsafe
constexpr uint32_t rand_delay_list_size = 32;
std::array<uint32_t, rand_delay_list_size> rand_delays; // can be populated by host

inline void noc_semaphore_inc(uint64_t addr, uint32_t incr) {
#ifdef SYNTHETIC_DELAYS
    uint32_t delay = rand_delays[i];
    i = increment_wraparound(i, rand_delay_list_size);
    for (uint32_t j = 0; j < delay; j++) {
        asm volatile(""); // empty inline asm keeps the busy-wait from being optimized away
    }
#endif
    noc_fast_atomic_increment(noc_index, NCRISC_AT_CMD_BUF, addr, NOC_UNICAST_WRITE_VC, incr, 31 /*wrap*/, false /*linked*/);
}
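On the host side, generating the delay table could be as simple as the sketch below; the actual mechanism for writing it into L1 (and the address used) is left out because it is device-API specific and would be an assumption here:

#include <array>
#include <cstdint>
#include <random>

constexpr uint32_t kRandDelayListSize = 32;

// Build a table of random delays to copy into the kernel's rand_delays array.
// A fixed seed keeps failing runs reproducible.
std::array<uint32_t, kRandDelayListSize> make_delay_table(uint32_t max_delay, uint32_t seed) {
    std::array<uint32_t, kRandDelayListSize> delays{};
    std::mt19937 rng(seed);
    std::uniform_int_distribution<uint32_t> dist(0, max_delay);
    for (auto& d : delays) {
        d = dist(rng);
    }
    return delays;
}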
FYI @jliangTT @pgkeller - not sure where the right ownership is for this but I figured you guys would be a good starting point. This is for improved testing methodology but requires some lower level improvements to get the benefit.
I don't really know which project board to add this to, but multi-device seems to be a good place for this to start.
|
gharchive/issue
| 2024-03-12T16:56:49 |
2025-04-01T06:40:35.498019
|
{
"authors": [
"SeanNijjar",
"jliangTT"
],
"repo": "tenstorrent-metal/tt-metal",
"url": "https://github.com/tenstorrent-metal/tt-metal/issues/6303",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2439526556
|
#9908: Use mm tiles for reduce sum on w dim
The LLK impl for reduce row has precision issues.
Reduce column doesn't seem to have the same issue.
For reduce sum on the row (w) dim, the issue is
worked around by using the LLK for mm tiles
instead of reduce to achieve the same result.
Reduce w impl of LLK doesn't allow for 32 bit acc
in addition to the precision issues.
32 bit acc is exposed as an option for all reduce
ops.
Math fidelity is exposed for all reduce ops as well, giving developers a way to trade precision for perf with math fidelity,
in similar fashion to matmul ops.
The current state is that math fidelity is hard-coded
to HiFi4, which is the most expensive one.
Green post-commit pipeline
https://github.com/tenstorrent/tt-metal/actions/runs/10177325891
Is compute_kernel_config used only for reduce sum along w and ignored in all other cases? If that's the case, I don't think we should propagate this to all of the reduce APIs.
It's used for all codepaths.
I see it now. Very nice 👍
|
gharchive/pull-request
| 2024-07-31T09:06:13 |
2025-04-01T06:40:35.502246
|
{
"authors": [
"TT-BrianLiu",
"pavlejosipovic"
],
"repo": "tenstorrent/tt-metal",
"url": "https://github.com/tenstorrent/tt-metal/pull/10933",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2382496865
|
#0: Make prefetcher early exit after fetching/reading exec_buf
Ticket
Problem description
Multi-Device Trace tests using all-gather were hanging after the removal of enqueue_record_event when sending trace commands.
The issue was with prefetch_h fetching a trace command, inserting a read barrier (thus setting the pending read size to 0) and then stalling indefinitely, until a subsequent command was issued by host. With the enqueue_record_event in place, a subsequent command was always issued.
For all-gather, a subsequent command for the r-chip (a read in this case) could not be issued until the read for chip 0 was complete (thus the all-gather kernel on the r-chip did not even start). However, for the chip 0 read to complete, the all-gather op must have been started and completed on all chips... deadlock.
What's changed
Early exit in fetch_q_get_cmds after reading exec_buf command, so that the trace can run on all chips, regardless of whether a subsequent command is enqueued or not.
Checklist
[ ] Post commit CI passes
[ ] Model regression CI testing passes (if applicable)
[ ] New/Existing tests provide coverage for changes
is there a directed test we should add (preferably to test_prefetcher) to catch the bug that requires the "return" below?
I think this only gets exposed in multi-chip environments where we have a setup that looks like this:
chip 0: exec_buf (Trace on chip 0 runs a program that requires chip 1 to complete)
chip 1: exec_buf (Trace on chip 1 runs a program that requires chip 0 to complete)
chip 0: blocking command depending on exec_buf (ex: read) --> Exec buf actually start running once read is sent. This will hang, since chip 1 never started running exec buf.
chip 1: blocking command depending on exec_buf ---> This will never actually be sent to device, because the previous blocking command hung.
Detecting if this case is broken, will require us to likely have a test with data dependencies between chips. I'm not sure if we can add something like this to test_prefetcher. I'll think of a simpler test case, probably with fewer chips and running a single all-gather op to ensure that this bug is being regressed on.
Thanks for the details. Yeah, this is beyond the scope of test_prefetcher but would be great to capture in a fast dispatch unit test (non-trivial data dependencies across chips).
|
gharchive/pull-request
| 2024-06-30T23:27:22 |
2025-04-01T06:40:35.510145
|
{
"authors": [
"pgkeller",
"tt-asaigal"
],
"repo": "tenstorrent/tt-metal",
"url": "https://github.com/tenstorrent/tt-metal/pull/9856",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2389469324
|
ttrt testing
Since ttrt is going to be a core development tool, let's brainstorm ways we can test its APIs so eventually we can put it in CI.
These should be push commit level tests, so quick checks to make sure things aren't broken. Ideally we can run these tests on non-silicon machines
Blocked by: https://github.com/tenstorrent/tt-mlir/issues/217
blocked by: https://github.com/tenstorrent/tt-mlir/issues/286
Tons of silicon + ttrt api testing has gone into tip. Closing issue.
|
gharchive/issue
| 2024-07-03T21:00:57 |
2025-04-01T06:40:35.512664
|
{
"authors": [
"nsmithtt",
"tapspatel"
],
"repo": "tenstorrent/tt-mlir",
"url": "https://github.com/tenstorrent/tt-mlir/issues/104",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
630784351
|
Config/setup prettier
Prettier and editor config settings to address #28
Rather than enforcing defaults on the project for prettier, it will be incorporated in the lint script command.
|
gharchive/pull-request
| 2020-06-04T12:36:57 |
2025-04-01T06:40:35.513564
|
{
"authors": [
"rohni"
],
"repo": "tenzir/ui-component-library",
"url": "https://github.com/tenzir/ui-component-library/pull/29",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
177593582
|
Side menu does not recognize gestures in Xcode 8 release
I recently updated to the Xcode 8 release, but the side menu no longer recognizes gestures. It doesn't open or close by swiping anymore.
There seems to be an issue with iPhone 7 simulator, the others before 7 work just fine. Are you testing on simulator or on device? If device, then which one is it?
I've tested on a real device iPad 2 (the old one, iOS 9.5) and iPhone 5s simulator (iOS 9.3)
By the way, the problem is only with underCenterPanelRight mode;
with overCenterPanelLeft everything works fine.
First of all, for underCenterPanel there is no swiping. There is only panning, which has to start from the edge of the screen to be recognised.
Oh. But before the Xcode 8 release I could just start panning from the middle of the screen to open the side menu with underCenterPanel. By panning from the edge of the screen, I could go to the previous ViewController (the default in Storyboard).
Isn't it available anymore?
|
gharchive/issue
| 2016-09-17T17:35:02 |
2025-04-01T06:40:35.517462
|
{
"authors": [
"FNet92",
"teodorpatras"
],
"repo": "teodorpatras/SideMenuController",
"url": "https://github.com/teodorpatras/SideMenuController/issues/43",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2684117864
|
:bug: fix: change the path of the PHPSTORM executable
This pull request includes a small change to the phpstorm-nautilus.py file. The change updates the path used to locate the PhpStorm script installed by JetBrains Toolbox.
phpstorm-nautilus.py: Updated the path to the PhpStorm script from scripts/phpstorm to apps/phpstorm/bin/phpstorm.sh to reflect the new directory structure used by JetBrains Toolbox.
Instead of replacing the path, I think it would be better to add a check for whether the previous path exists and use it if so; if not, check whether the new path exists and use that one. If neither exists, print an error to the console indicating that the paths do not exist.
Added a new condition to detect PhpStorm installations in the JetBrains Toolbox apps directory.
|
gharchive/pull-request
| 2024-11-22T18:06:40 |
2025-04-01T06:40:35.617896
|
{
"authors": [
"PabloHBMiranda",
"terciotales"
],
"repo": "terciotales/phpstorm-nautilus",
"url": "https://github.com/terciotales/phpstorm-nautilus/pull/2",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
1447914062
|
🛑 Login LR is down
In ca36101, Login LR (https://www.la-razon.com/login) was down:
HTTP code: 503
Response time: 6 ms
Resolved: Login LR is back up in 87cf803.
|
gharchive/issue
| 2022-11-14T11:48:16 |
2025-04-01T06:40:35.681713
|
{
"authors": [
"terorero"
],
"repo": "terorero/monitor",
"url": "https://github.com/terorero/monitor/issues/210",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714767690
|
Support for EC2 metadata_options as map
Description
Support for metadata_options block. Required for IMDSv2 implementation and AWS Security guidelines.
Motivation and Context
In new AWS Security guidelineas, IMDSv2 (http_tokens) is required to be enabled in all instances.
Breaking Changes
No AFAIK
How Has This Been Tested?
Executed in our environment, attached TF plan and apply.
included in ec2.tf
metadata_options = {
http_tokens = "required"
}
plan:
~ metadata_options {
http_endpoint = "enabled"
http_put_response_hop_limit = 1
~ http_tokens = "optional" -> "required"
}
applied with no error.
When can we expect this to be merged in and released?
closed in favor of #193
|
gharchive/pull-request
| 2020-10-05T11:45:29 |
2025-04-01T06:40:35.687099
|
{
"authors": [
"bryantbiggs",
"jayolmos",
"svetozar02"
],
"repo": "terraform-aws-modules/terraform-aws-ec2-instance",
"url": "https://github.com/terraform-aws-modules/terraform-aws-ec2-instance/pull/182",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1929304684
|
feat: new param instance_state
Description
introduce new param instance_state to control state of instance https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance#instance_state
Motivation and Context
we want to turn off the instance without impacting the Terraform state run
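For illustration, usage of the proposed variable would look something like this (hypothetical, since the PR was closed without this change being merged):

module "ec2_instance" {
  source = "terraform-aws-modules/ec2-instance/aws"

  name = "example"
  # ... other arguments ...

  # Proposed variable: keep the instance stopped without removing it from state.
  instance_state = "stopped"
}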
Breaking Changes
How Has This Been Tested?
[x] I have updated at least one of the examples/* to demonstrate and validate my change(s)
[x] I have tested and validated these changes using one or more of the provided examples/* projects
[ ] I have executed pre-commit run -a on my pull request
The change is not correct; closing.
|
gharchive/pull-request
| 2023-10-06T01:42:51 |
2025-04-01T06:40:35.690655
|
{
"authors": [
"anaye1997"
],
"repo": "terraform-aws-modules/terraform-aws-ec2-instance",
"url": "https://github.com/terraform-aws-modules/terraform-aws-ec2-instance/pull/366",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1599653219
|
Merging of launch_template_tags from self_managed_node_group_defaults with the ones inside self_managed_node_groups does not work
Description
It is possible to add "launch_template_tags" to both the "self_managed_node_group_defaults" sections and inside a self-managed node group node group like the following:
self_managed_node_group_defaults = {
…
tag_specifications = [
"instance",
"volume",
"network-interface"
]
launch_template_tags = {
auto-delete = "no",
}
}
self_managed_node_groups = {
smng-ond = {
name = "smng-ond"
…
tag_specifications = [
"instance",
"volume",
# ENI incurs no cost!
# "network-interface"
]
launch_template_tags = {
cost-bu = "smng-ond"
cost-center = "08101965"
}
But currently only the ones inside the node group definition are used and the default ones are overwritten/lost.
The idea here is to have common tags (like auto-delete) that apply by default to all resources like EC2 instances in all node groups, and node-group-specific tags like cost-center that apply to only a particular node group.
Versions
Module version [Required]: 19.10
Terraform version: 1.3.9
Provider version(s): AWS provider 4.56
Reproduction Code [Required]
See TF snippet above
Expected behavior
All tags from both sections are applied to the respective resources defined in " tag_specifications"
Actual behavior
Only the tags from within the self-managed node group definition are applied.
After looking at the module, I believe merging is not possible because of the usage of the "try" function for this variable here: https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/node_groups.tf#L453. Can someone please confirm, thx!
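For reference, the behavior being asked for corresponds to merging the two maps rather than picking one with try(); a sketch of the expression (the variable paths are illustrative, not the module's actual internals):

# Defaults first, so node-group-specific tags can override them.
launch_template_tags = merge(
  try(var.self_managed_node_group_defaults.launch_template_tags, {}),
  try(each.value.launch_template_tags, {})
)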
|
gharchive/issue
| 2023-02-25T10:36:04 |
2025-04-01T06:40:35.694780
|
{
"authors": [
"youwalther65"
],
"repo": "terraform-aws-modules/terraform-aws-eks",
"url": "https://github.com/terraform-aws-modules/terraform-aws-eks/issues/2491",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
591086609
|
docs: Fixed ssl_certificate_id argument name
Description
Documentation update
Motivation and Context
Spotted wrong argument name for ssl in README.md
Breaking Changes
None
How Has This Been Tested?
No tests needed
@antonbabenko Thanks!
ps. Some contribution doc would be great, got confused when test failed. Cheers!
Thanks @gstlt !
Yes, it should be coming in the near future (I hope). There is a meta-repository for such things across all terraform-aws-modules repositories - https://github.com/terraform-aws-modules/meta . I have just created an issue there.
|
gharchive/pull-request
| 2020-03-31T12:56:58 |
2025-04-01T06:40:35.697656
|
{
"authors": [
"antonbabenko",
"gstlt"
],
"repo": "terraform-aws-modules/terraform-aws-elb",
"url": "https://github.com/terraform-aws-modules/terraform-aws-elb/pull/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2011872966
|
fix(iam-group-with-policies): Related resources shouldn't be created when group creation is disabled
Description
Related resources shouldn't be created when group creation is disabled
Motivation and Context
In some scenarios the iam-group-with-policies submodule attempts to create aws_iam_group_membership, aws_iam_policy and aws_iam_group_policy_attachment resources even though the create_group variable is set to false. Workarounds do exist, but ultimately this behaviour is undesired.
Breaking Changes
None.
How Has This Been Tested?
[X] I have updated at least one of the examples/* to demonstrate and validate my change(s)
[ ] I have tested and validated these changes using one or more of the provided examples/* projects
[X] I have executed pre-commit run -a on my pull request
Not stale, just updated.
Not stale.
Hello @bryantbiggs, sorry to bother you, but could you take a quick look at this PR? It's nothing controversial, just a simple fix for something that got overlooked. Thank you!
|
gharchive/pull-request
| 2023-11-27T09:37:30 |
2025-04-01T06:40:35.701537
|
{
"authors": [
"pawelpesz"
],
"repo": "terraform-aws-modules/terraform-aws-iam",
"url": "https://github.com/terraform-aws-modules/terraform-aws-iam/pull/440",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
561825630
|
Any common feature libraries out there?
** Question : **
I arrived here and am interested in leveraging this platform in addition to, and as a complement to, terratest, but I don't want to start from scratch writing features, and I can't find any reference at all in the readme to a sample library or folder of already-existing tests/features. I could copy-paste all the examples from the .md example files, but that seems like more work than it should be. Is there any shared library of common features/tests out there? If not, is there any interest in starting such a library?
As a new user, or as someone trying to pitch this to a peer, I would love to be able to run pip install and then run the code. It seems the new user experience is hampered right now by having to start from scratch with features. (Apologies in advance if this is available somewhere obvious and I just missed it.)
Thanks!
Spot on! I was just working on this :)
You are so right and this becomes a common request from many other people.
I will give this priority and start small, then grow bigger and more diverse with the tests
FANTASTIC! I think this would be an amazing resource!
I find that one of the most awesome things about Terraform is also one of its most dangerous: people can use libraries of Terraform scripts (open and closed source) to deploy infrastructures that they themselves don't need to fully understand. I love that deploying and inspecting other people's terraform architectures is also a great way to learn those architectures. But you know as they say, with great power comes great ... um... need for automated testing. :)
Seriously, though, I love this concept and we're all better at this stuff when we're learning from each other. I'm morbidly excited to give this a spin and see what tests my scripts will fail! :)
@aaronsteers you can pull features from a git repository, see https://github.com/eerkunt/terraform-compliance/pull/283. I don't have a library to offer you right now, but this would at least save you from copying Markdown if you found one!
@aaronsteers
We do have a place to share commonly used features now! Please check out user-friendly-features.
@Kudbettin - This is fantastic! Exactly what I was hoping for. (Closing as resolve.)
Thanks!!
|
gharchive/issue
| 2020-02-07T19:25:22 |
2025-04-01T06:40:35.708258
|
{
"authors": [
"Kudbettin",
"aaronsteers",
"byarbrough",
"eerkunt"
],
"repo": "terraform-compliance/cli",
"url": "https://github.com/terraform-compliance/cli/issues/206",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2547592712
|
Essential Security - Observability Event Notifications configuration name is confusing
I suggest removing Observability from the term, as it implies this is deploying the observability services, when this is actually not the case.
"Essential Security - Observability Event Notifications" -> "Essential Security - Event Notifications"
@vburckhardt It was put there for ordering actually - cc @in-1911
Yes, that was entirely to place the EN DA after logging. So if you can find an "alternative spelling" that would do that - we can change it.
Renaming to Essential Security - Event Notifications since it no longer has a dependency on Observability. In fact, Observability has a dependency on it now with the Cloud Logs integration.
Part of https://github.com/terraform-ibm-modules/dev-rag/releases/tag/v0.4.3
|
gharchive/issue
| 2024-09-25T10:23:38 |
2025-04-01T06:40:35.717338
|
{
"authors": [
"in-1911",
"ocofaigh",
"vburckhardt"
],
"repo": "terraform-ibm-modules/dev-rag",
"url": "https://github.com/terraform-ibm-modules/dev-rag/issues/53",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1463679049
|
chore(deps): update common-dev-assets digest to 0b3e06d
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
4998976 -> 0b3e06d
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.2.12 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-11-24T18:04:15 |
2025-04-01T06:40:35.723064
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/ibmcloud-terratest-wrapper",
"url": "https://github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper/pull/204",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1893123558
|
fscloud: Validation for zone id is missing for existing_serviceref_zone and existing_cbr_zone_vpcs
Need to add zone id validation for both existing_serviceref_zone and existing_cbr_zone_vpcs.
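A minimal sketch of what such input validation could look like (the 32-character hex format for CBR zone IDs is an assumption for illustration):

variable "existing_serviceref_zone" {
  type        = string
  description = "ID of an existing CBR zone for service references"

  validation {
    condition     = can(regex("^[0-9a-fA-F]{32}$", var.existing_serviceref_zone))
    error_message = "Value must be a valid 32-character CBR zone ID."
  }
}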
issue closed
|
gharchive/issue
| 2023-09-12T19:25:54 |
2025-04-01T06:40:35.724147
|
{
"authors": [
"Ak-sky",
"Khuzaima05"
],
"repo": "terraform-ibm-modules/terraform-ibm-cbr",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-cbr/issues/287",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2458805636
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
minor
v1.35.4 -> v1.36.0
go (source)
toolchain
patch
1.22.5 -> 1.22.6
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.36.0
Compare Source
Features
simplify testprojects TearDown logic (#845) Simplified the logic around when resources or projects should get deleted during TestTearDown in the testprojects package, also added unit tests to verify all permutations. (380757f)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
:tada: This issue has been resolved in version 1.24.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-08-10T00:00:34 |
2025-04-01T06:40:35.733520
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-cbr",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-cbr/pull/499",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2291043977
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
2a961d3 -> 2015ae9
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
patch
v1.31.7 -> v1.31.8
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.31.8
Compare Source
Bug Fixes
deps: update gomod (#809) (5f06800)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
:tada: This PR is included in version 1.1.6 :tada:
The release is available on:
GitHub release
v1.1.6
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-05-11T19:48:25 |
2025-04-01T06:40:35.742782
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-code-engine",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-code-engine/pull/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1522955794
|
chore(deps): update common-dev-assets digest to e11eb25
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
9fe7626 -> e11eb25
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 5.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-01-06T18:07:44 |
2025-04-01T06:40:35.748592
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-cos",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-cos/pull/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1675108937
|
docs: update deploy arch doc
Description
Replace this text with a summary of the changes in this PR. Include why the changes are needed and context about the changes. List required dependencies. If there is a Git issue for the change, please link to it.
Types of changes in this PR
No release required
[ ] Examples or tests (addition or updates of examples or tests)
[x] Documentation update
[ ] CI-related update (pipeline, etc.)
[ ] Other changes that don't affect Terraform code
Release required
[ ] Bug fix (patch release (x.x.X): Change that fixes an issue and is compatible with earlier versions)
[ ] New feature (minor release (x.X.x): Change that adds functionality and is compatible with earlier versions)
[ ] Breaking change (major release (X.x.x): Change that is likely incompatible with previous versions)
Release notes content
Replace this text with information that users need to know about the bug fixes, features, and breaking changes. This information helps the merger write the commit message that is published in the release notes for the module.
Checklist for reviewers
[ ] If relevant, a test for the change is included or updated with this PR.
[ ] If relevant, documentation for the change is included or updated with this PR.
Merge actions for mergers
Merge by using "Squash and merge".
Use a relevant conventional commit message that is based on the PR contents and any release notes provided by the PR author.
The commit message determines whether a new version of the module is needed, and if so, which semver increment to use (major, minor, or patch).
@huayuenh - just change lastupdated: "2023-03-31" to the target release date ("2023-04-21")??
:tada: This PR is included in version 1.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-04-19T15:03:32 |
2025-04-01T06:40:35.755279
|
{
"authors": [
"huayuenh",
"michaelbowler",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-devsecops-alm",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-devsecops-alm/pull/142",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2449523404
|
fix: workaround provider idempotent issue
Description
Provider 1.68.0 has an idempotent issue with ibm_database. https://github.com/IBM-Cloud/terraform-provider-ibm/issues/5546
Release required?
[ ] No release
[x] Patch release (x.x.X)
[ ] Minor release (x.X.x)
[ ] Major release (X.x.x)
Release notes content
This works around an idempotent issue in the 1.68.0 provider.
Run the pipeline
If the CI pipeline doesn't run when you create the PR, the PR requires a user with GitHub collaborators access to run the pipeline.
Run the CI pipeline when the PR is ready for review and you expect tests to pass. Add a comment to the PR with the following text:
/run pipeline
Checklist for reviewers
[ ] If relevant, a test for the change is included or updated with this PR.
[ ] If relevant, documentation for the change is included or updated with this PR.
For mergers
Use a conventional commit message to set the release level. Follow the guidelines.
Include information that users need to know about the PR in the commit message. The commit message becomes part of the GitHub release notes.
Use the Squash and merge option.
/run pipeline
:tada: This issue has been resolved in version 1.11.4 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-08-05T21:37:02 |
2025-04-01T06:40:35.780561
|
{
"authors": [
"shemau",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icd-rabbitmq",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icd-rabbitmq/pull/219",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1966329365
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
b1e90d1 -> 12f782d
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
patch
v1.23.13 -> v1.23.14
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.23.14
Compare Source
Bug Fixes
deps: update gomod (#687) (01a6c0c)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
:tada: This PR is included in version 1.4.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-10-28T00:31:59 |
2025-04-01T06:40:35.789551
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icd-redis",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icd-redis/pull/240",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1468865857
|
chore(deps): update common-dev-assets digest to c46827e
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
c3fd48d -> c46827e
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.0.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-11-30T00:08:28 |
2025-04-01T06:40:35.795158
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icse-subnet-module",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icse-subnet-module/pull/69",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1551609820
|
chore(deps): update terraform github.com/terraform-ibm-modules/terraform-ibm-resource-group to v1.0.5
This PR contains the following updates:
Package
Type
Update
Change
github.com/terraform-ibm-modules/terraform-ibm-resource-group
module
patch
v1.0.4 -> v1.0.5
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Renovate will not automatically rebase this PR, because other commits have been found.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox. ⚠ Warning: custom changes will be lost.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 1.0.0 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-01-21T00:11:13 |
2025-04-01T06:40:35.802094
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-icse-vpn-gateway-module",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-icse-vpn-gateway-module/pull/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2367336392
|
chore(deps): update terraform-module
This PR contains the following updates:
Package
Type
Update
Change
terraform-ibm-modules/kms-all-inclusive/ibm (source)
module
patch
4.13.2 -> 4.13.4
terraform-ibm-modules/observability-instances/ibm (source)
module
patch
2.13.1 -> 2.13.2
Release Notes
terraform-ibm-modules/terraform-ibm-kms-all-inclusive (terraform-ibm-modules/kms-all-inclusive/ibm)
v4.13.4
Compare Source
Bug Fixes
deps: update terraform-module (#502) (1cb586a)
v4.13.3
Compare Source
Bug Fixes
remove upper limit for required terraform version (#500) (55443ca)
terraform-ibm-modules/terraform-ibm-observability-instances (terraform-ibm-modules/observability-instances/ibm)
v2.13.2
Compare Source
Bug Fixes
allow more permitted_target_regions to be used in the global_event_routing_settings. Full list of supported regions is now: us-south, eu-de, us-east, eu-es, eu-gb, au-syd, br-sao, ca-tor, eu-es, jp-tok, jp-osa, in-che (#522) (f27e865)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 3.1.4 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-06-21T22:53:56 |
2025-04-01T06:40:35.815947
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-ocp-all-inclusive",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-ocp-all-inclusive/pull/276",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1531160700
|
chore(deps): update common-dev-assets digest to f8b5283
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
497c14b -> f8b5283
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 4.0.0 :tada:
The release is available on:
GitHub release
v4.0.0
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-01-12T18:13:01 |
2025-04-01T06:40:35.822432
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-powervs-sap",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-powervs-sap/pull/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2044686292
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
common-dev-assets
digest
ef9143b -> ab267b7
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
minor
v1.25.6 -> v1.26.1
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.26.1
Compare Source
Bug Fixes
deps: update gomod (#726) (a42f8cf)
v1.26.0
Compare Source
Features
expose checkConsistency with public function (#724) (f9ab5b8)
v1.25.8
Compare Source
Bug Fixes
deps: update gomod (#723) (7596e10)
v1.25.7
Compare Source
Bug Fixes
deps: update gomod (#721) (addd7eb)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
/run pipeline
:tada: This PR is included in version 1.2.1 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2023-12-16T09:43:05 |
2025-04-01T06:40:35.836625
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-powervs-workspace",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-powervs-workspace/pull/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1878517679
|
chore(deps): update ci dependencies
This PR contains the following updates:
Package
Type
Update
Change
github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper
require
minor
v1.10.8 -> v1.11.3
terraform-ibm-modules/common-pipeline-assets
action
patch
v1.17.0 -> v1.17.4
Release Notes
terraform-ibm-modules/ibmcloud-terratest-wrapper (github.com/terraform-ibm-modules/ibmcloud-terratest-wrapper)
v1.11.3
Compare Source
Bug Fixes
bug setting the pr dir (#617) (00131bb)
v1.11.2
Compare Source
Bug Fixes
lock into vpc-go-sdk v0.41.0 (#616) (ef9cacf)
v1.11.1
Compare Source
Bug Fixes
add logging (#615) (beb86f4)
v1.11.0
Compare Source
Features
addition of configurable base repo or branch for upgrade test (4fd1e34)
v1.10.19
Compare Source
Bug Fixes
deps: update gomod (#607) (3101f35)
v1.10.18
Compare Source
Bug Fixes
deps: update module github.com/ibm/platform-services-go-sdk to v0.45.0 (#605) (b4091f7)
v1.10.17
Compare Source
Bug Fixes
deps: update module github.com/ibm/platform-services-go-sdk to v0.44.0 (#602) (d141e39)
v1.10.16
Compare Source
Bug Fixes
deps: update gomod (#600) (7ac75cc)
v1.10.15
Compare Source
Bug Fixes
deps: update module golang.org/x/crypto to v0.12.0 (#595) (f861dfe)
v1.10.14
Compare Source
Bug Fixes
deps: update gomod (#590) (b608f5f)
v1.10.13
Compare Source
Bug Fixes
deps: update gomod (#584) (faa3056)
v1.10.12
Compare Source
Bug Fixes
deps: update module github.com/ibm/platform-services-go-sdk to v0.41.0 (#582) (77a309d)
v1.10.11
Compare Source
Bug Fixes
deps: update module github.com/gruntwork-io/terratest to v0.43.8 (#581) (5b1468a)
v1.10.10
Compare Source
Bug Fixes
fixed a bug in upgrade test (#579) (8d18919)
v1.10.9
Compare Source
Bug Fixes
deps: update module golang.org/x/crypto to v0.11.0 (#578) (7d3a4ef)
terraform-ibm-modules/common-pipeline-assets (terraform-ibm-modules/common-pipeline-assets)
v1.17.4
Compare Source
Bug Fixes
remove collab check - no longer required (#523) (be1c5de)
v1.17.3
Compare Source
Bug Fixes
Debug logging (#522) (0fe31b8)
v1.17.2
Compare Source
Bug Fixes
bug with trim (60c8a95)
v1.17.1
Compare Source
Bug Fixes
add debug and trim comment (#520) (e0bb379)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, check this box
This PR has been generated by Renovate Bot.
/run pipeline
/run pipeline
|
gharchive/pull-request
| 2023-09-02T09:23:21 |
2025-04-01T06:40:35.874974
|
{
"authors": [
"terraform-ibm-modules-dev",
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-secrets-manager-secret",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-secrets-manager-secret/pull/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1507328146
|
chore(deps): update common-dev-assets digest to e39789c
This PR contains the following updates:
Package
Update
Change
common-dev-assets
digest
859d6c0 -> e39789c
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
[ ] If you want to rebase/retry this PR, click this checkbox.
This PR has been generated by Renovate Bot.
:tada: This PR is included in version 2.0.2 :tada:
The release is available on GitHub release
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2022-12-22T06:12:33 |
2025-04-01T06:40:35.880931
|
{
"authors": [
"terraform-ibm-modules-ops"
],
"repo": "terraform-ibm-modules/terraform-ibm-transit-gateway",
"url": "https://github.com/terraform-ibm-modules/terraform-ibm-transit-gateway/pull/63",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1252942226
|
updateGlobalAssemblyNum called twice during loadState
loadState has an input parameter that is supposed to allow you to choose whether to call updateGlobalAssemblyNum or not. That parameter is called updateGlobalAssemNum:
https://github.com/terrapower/armi/blob/f6bf598525efaac72f0391296af832b556d415ad/armi/bookkeeping/db/database3.py#L398-L445
Towards the end of that method there is a check as to whether updateGlobalAssemNum is True or False, and updateGlobalAssemblyNum is called depending on the check.
However, further up in the method, loadDB.load is called, which itself calls updateGlobalAssemblyNum without regard to the parameter that was passed into loadState:
https://github.com/terrapower/armi/blob/f6bf598525efaac72f0391296af832b556d415ad/armi/bookkeeping/db/database3.py#L1082-L1147
So I'm pretty sure it is being called twice.
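A minimal Python sketch of the control flow (schematic with simplified stand-in functions, not armi's actual code):
calls = []

def updateGlobalAssemblyNum():
    # stand-in for the real counter update
    calls.append("update")

def load():
    # stands in for Database3.load(), which updates the counter unconditionally
    updateGlobalAssemblyNum()

def loadState(updateGlobalAssemNum=True):
    load()  # first call happens in here, regardless of the flag
    if updateGlobalAssemNum:
        updateGlobalAssemblyNum()  # second call, gated by the flag

loadState(updateGlobalAssemNum=True)
print(len(calls))  # 2 -- the counter is updated twice for a single load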
I believe this was introduced in #615 @john-science .
I'll take this.
|
gharchive/issue
| 2022-05-30T16:27:15 |
2025-04-01T06:40:35.937479
|
{
"authors": [
"keckler"
],
"repo": "terrapower/armi",
"url": "https://github.com/terrapower/armi/issues/690",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
450325945
|
Uploading BETY trait CSVs from Google Drive
@kimberlyh66 during our last NCSA meeting I told @dlebauer I would attempt to upload these BETYdb CSVs (uploaded by @ZongyangLi) to BETY, and he suggested asking you for assistance if there were issues with uploading them.
https://drive.google.com/drive/folders/1Y-Qdxe1GgCgXSxR0KFEeyIVyoQyv-tCX
There are 3 directories in this Google Drive folder with .tar files containing daily CSVs for BETY upload:
ARCHIVE -- TRAIT_COLUMN_NAME(S)
s4_98th_height.tar.gz -- 98th_quantile_canopy_height
s6_98th_height.tar.gz -- 98th_quantile_canopy_height
s4Panicle_BETY.zip -- panicle_counting, panicle_volumn_median,
panicle_surface_are_median
S6PanicleBETY.tar.gz -- panicle_counting, panicle_volumn_median,
panicle_surface_are_median
s4_leaf_angle.tar.gz -- leaf_angle_alpha_src, leaf_angle_beta_src,
leaf_angle_alpha_fit, leaf_angle_beta_fit,
leaf_chi_src, leaf_chi_fit
s6_leaf_angle.tar.gz -- leaf_angle_alpha_src, leaf_angle_beta_src,
leaf_angle_alpha_fit, leaf_angle_beta_fit,
leaf_chi_src, leaf_chi_fit
I wrote a small Python script to iterate over the daily CSVs and push them to BETY with some key snippets here:
import requests

BETY_URL = "https://terraref.ncsa.illinois.edu/bety/api/v1/traits"
BETY_KEY = "<SECRET>"

def submit_traits(csv):
    # POST the raw CSV bytes to the BETY traits endpoint
    resp = requests.post("%s.%s" % (BETY_URL, 'csv'),
                         params={'key': BETY_KEY},
                         data=open(csv, 'rb').read(),  # open() rather than the Python 2 file() builtin
                         headers={'Content-type': 'text/csv'})
    return resp
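For context, the surrounding loop was roughly the sketch below (the directory layout is inferred from the log lines; the exact logging format is an assumption):
import glob

for csv_path in sorted(glob.glob('BETYdbUploads/*/*.csv')):
    resp = submit_traits(csv_path)
    if resp.status_code >= 400:
        # record the file alongside BETY's error message, as in the log lines below
        print('%s,%s' % (csv_path, resp.text))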
...however, none of the CSVs were successfully uploaded. A line from my logfile for each file:
/Users/mburnette/Downloads/BETYdbUploads/s4_98th_height/2017-05-06_98th_quantile.csv,No trait variable was found in the CSV file.
/Users/mburnette/Downloads/BETYdbUploads/s6_98th_height/2018-05-18_98th_quantile.csv,No trait variable was found in the CSV file.
/Users/mburnette/Downloads/BETYdbUploads/s4BetyLeafAngle/2017-08-15_betaD.csv,No trait variable was found in the CSV file.
/Users/mburnette/Downloads/BETYdbUploads/s6leafAngleBety/2018-05-19_betaD.csv,No trait variable was found in the CSV file.
/Users/mburnette/Downloads/BETYdbUploads/s4Panicle_BETY/2017-07-20_panicle.csv,No trait variable was found in the CSV file.
/Users/mburnette/Downloads/BETYdbUploads/s6PanicleBETY/2018-06-16_panicle.csv,No trait variable was found in the CSV file.
I'm assuming we need some trait defined in BETY corresponding to the column names I listed above that doesn't exist yet? We've successfully uploaded other BETY data such as CanopyCover with similar CSVs, and the "No trait variable..." error message was coming from BETY with a 400 response on the post.
Please let me know if you might be able to look into this and how I can help.
@ZongyangLi can you please define the variables and methods associated with these data?
Added methods by the following link: https://terraref.ncsa.illinois.edu/bety/methods/new
Scanner 3d ply data to 98th quantile height
Scanner 3d ply data to leaf angle distribution
Scanner 3d ply data to panicle counting
Added variables by the following link: https://terraref.ncsa.illinois.edu/bety/variables/new
98th_quantile_canopy_height
leaf_angle_alpha_src
leaf_angle_beta_src
leaf_angle_alpha_fit
leaf_angle_beta_fit
leaf_chi_src
leaf_chi_fit
panicle_counting
panicle_volumn_median
panicle_surface_area_median
Erroneous operation:
I added 98th_quantile_canopy_height to methods by mistake; please delete it from methods.
@ZongyangLi thanks for doing this ... should the trait associated with 'Scanner 3D ply to 98th quantile height' be associated with the trait 'canopy_height'? More specifically, if using the 98th quantile of the point cloud is intended to reflect the actual canopy height, then do we need a separate variable?
Similarly, if the best estimate of the panicle_volume is the median, then it would make sense to call the trait 'panicle_volume' and describe the method of estimation in the methods (same for surface_area). And I am not sure what the difference is between _src and _fit, but I suspect that these can also be differentiated in the methods rather than in the variable itself.
And to clarify - are you requesting that I delete the 98th_quantile_canopy_height method? I can do that although if you added it you should be able to delete it (as long as there aren’t any data already associated with the method).
We've proposed a new naming scheme, listed as [Variable Name, Method].
@dlebauer Does this fit your naming convention?
Change 98th_quantile_canopy_height to [ Canopy Height, method == 3D_scanner_98th_quantile]
Change Leaf angle variables from:
leaf_angle_alpha_src
leaf_angle_beta_src
leaf_chi_src
leaf_angle_alpha_fit
leaf_angle_beta_fit
leaf_chi_fit
to:
[ Leaf Angle Mean, 3D_scanner_leaf_angle_distribution]
[ Leaf Angle Variance, 3D_scanner_leaf_angle_distribution]
[ Leaf Angle Alpha, 3D_scanner_leaf_angle_distribution]
[ Leaf Angle Beta, 3D_scanner_leaf_angle_distribution]
[ Leaf Angle Chi, 3D_scanner_leaf_angle_distribution]
And for panicles change from:
panicle_counting
panicle_volume_median
panicle_surface_area_median
to:
[Panicle Count, 3D_scanner_panicle_count]
[Panicle Volume, 3D_scanner_panicle_volume_median]
[Panicle Surface Area, 3D_scanner_surface_area_median]
Additionally, the leaf length and width parameters would have the following variables and methods:
[leaf_length, 3D_scanner_geodesic_kalman]
[leaf_length, 3D_scanner_geodesic_unfiltered]
[leaf_width, 3D_scanner_geodesic_kalman]
[leaf_width, 3D_scanner_geodesic_unfiltered]
Do those naming conventions for variables and methods seem to be more consistent?
Hi Abby - this is definitely on the right track, but I have a few thoughts, and it will be easier to flesh this out in this spreadsheet where we can capture the other information like descriptions, units, citations, etc.
A few notes -
Method names: It might make sense to include something about the algorithm used (like where 'kalman' is used) rather than just saying '3D Scanner Panicle Volume', which doesn't allow it to be differentiated from another algorithm.
Variable names: The variable naming convention loosely follows how CF (Climate Forecast) standard names are constructed ... you can see examples here. They are thus snake_case. Method names don't have such a constraint, so they can be typed like the title of a protocol.
Statistics
The leaf_angle_mean and leaf_angle_variance present a special case, since BETYdb is designed to store the mean values alongside (optionally) the sample size and a statistic, so the appropriate name for the mean leaf angle would be leaf_angle, and each of these values can either stand alone or be stored with a statistic. It would still be okay to have leaf_angle_variance alongside leaf_angle_beta etc., but there is also the option of including columns 'stat', 'statname' and 'n'. For now let's ignore n because that gets confusing. Unfortunately we only store one statistic for each record, or else we could treat alpha and beta in the same way.
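For illustration, a hypothetical row with the mean stored in the trait column and its statistic alongside (column names from this discussion; the values are invented):
leaf_angle,stat,statname
45.2,6.1,SD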
Also, on the topic of variance: does the variance you are computing have the same units as the mean? Would it make sense to call this 'Standard Deviation'?
As a footnote, I'll reference this lengthy discussion where I think we concluded that we would fit the normal and beta distributions separately, such that, e.g., mean != alpha/(alpha+beta); if these values end up being equal then we should reconsider storing only one or the other set of parameters, or else analyses that include both traits might have numerical issues.
Hi David - I don't currently have permission to edit that google sheet. If you grant it, I can fill things out there, but in the meantime, I'll reply in line here. I've gone through and edited our variables and methods to reflect your comments (snake case for variables, descriptive for methods, adding in algorithm details where appropriate). If you're on board with these changes, then @ZongyangLi can implement them.
Change 98th_quantile_canopy_height to [ canopy_height, 3D scanner to 98th quantile height]
Change Leaf angle variables from:
leaf_angle_alpha_src
leaf_angle_beta_src
leaf_chi_src
leaf_angle_alpha_fit
leaf_angle_beta_fit
leaf_chi_fit
to:
[ leaf_angle_mean (+ leaf_angle_variance stored alongside as a statistic), 3D scanner to leaf angle distribution]
[ leaf_angle_alpha, 3D scanner to leaf angle distribution]
[ leaf_angle_beta, 3D scanner to leaf angle distribution]
[ leaf_angle_chi, 3D scanner to leaf angle distribution]
And for panicles change from:
panicle_counting
panicle_volume_median
panicle_surface_area_median
to:
[panicle_count, 3D scanner to panicle count faster_rcnn + roughness treshold + convex hull]
[panicle_volume, 3D scanner to panicle volume faster_rcnn + roughness treshold + convex hull]
[panicle_surface_area, 3D scanner to panicle surface area faster_rcnn + roughness treshold + convex hull]
Additionally, the leaf length and width parameters would have the following variables and methods:
[leaf_length, 3D scanner to leaf measurements kalman]
[leaf_length, 3D scanner to leaf measurements unfiltered]
[leaf_width, 3D scanner to leaf measurements kalman]
[leaf_width, 3D scanner to leaf measurements unfiltered]
Regarding the leaf angle variance, @ZongyangLi is currently saving the variance, but we could obviously compute the standard deviation if that were the preferred measurement?
@dlebauer @abby621
Files updated in the sub-directory here: https://drive.google.com/open?id=1Y-Qdxe1GgCgXSxR0KFEeyIVyoQyv-tCX
Example leaf angle csv file: https://drive.google.com/open?id=10awD6-suq49L_TGI0x5Q3L-jSJvFmlBX
If we all agree with the current definition of methods and variables, I could add those to BETY.
@abby621 you should have access to the google doc if you want to update the records there. then @kimberlyh66 can upload the data and we will be on our way!
@dlebauer We have the spreadsheet almost entirely filled, but have a question regarding the min/max values. Should that be the min/max that we've ever seen, or some sort of bound on the possible reported values? I'm not sure that we know what that should be -- our algorithms don't specify particular min/max values beyond what's specified by the datatype (so a leaf could technically be hundreds of meters long, even if we would never expect to observe that).
@abby621 consider these to be very broad uniform priors that set upper and lower bounds on what data should be considered 'valid'. If they fall outside of the range they will be rejected. Then we can always update the min/max values if they should not be rejected.
So, these should be set so that they provide a high level constraint on valid values - most variables have a lower bound at 0; some have upper bounds at 1 or 100 by definition. The longest leaf in the world is 25m long so we could set max at 25000mm, or we could go with something like 2m which is more reasonable for Sorghum (and wheat). For leaf angle, if in degrees then I think the valid range would be [0,90]? In many cases we have -inf,inf, but these aren't very useful.
I have already filled in the sheet and updated the new method names and variables in the CSV files; can we go ahead and get them uploaded now?
OK, I will try to upload in the morning after downloading new CSV files. We must make sure they are in BETY as well. We can ask @kimberlyh66 to add the new / updated names to BETY and I can upload the trait data.
Is this the spreadsheet (https://docs.google.com/spreadsheets/d/1nDVti2uj2cWboAmsqzQGyXidZFnqi5jPmBw23nGKH9E/edit#gid=1676929050) with new method names and variables? If @dlebauer approves, I can add to BETY.
@max-zilla I can also help with uploading the trait data if you would like.
@kimberlyh66 if you can update the method names and descriptions then Max can upload the trait data.
All new methods and variables have been added to BETY.
@ZongyangLi You mentioned in the spreadsheet that the method 3D scanner to leaf length and width should be the same method that I used to upload Zeyu's data. If this is the case, you should use the name Scanner 3d ply data to leaf length and width.
@kimberlyh66 The method in the spreadsheet 3D scanner to leaf length and width was actually made for Zeyu; if you have already uploaded his data, then you can skip it.
Downloaded the rewritten version of all files, but I'm still getting an error on the citation:
/Users/mburnette/Downloads/BETYdbUploadsV2/s4_98th_height_rewrite/2017-05-06_98th_quantile.csv,{:
lookup_errors=>[
"No citation could be found matching {\"author\"=>\"ZongyangLi\", \"year\"=>\"2018\", \"title\"=>\"Maricopa Field Station Data and Metadata\"}",
"No citation could be found matching {\"author\"=>\"ZongyangLi\", \"year\"=>\"2018\", \"title\"=>\"Maricopa Field Station Data and Metadata\"}",
"No citation could be found matching {\"author\"=>\"ZongyangLi\", \"year\"=>\"2018\", \"title\"=>\"Maricopa Field Station Data and Metadata\"}",
...
I think the other fields are the same besides the 2018; is this another citation entry we need to add to BETY first?
@max-zilla I guess here year should be 2016.
Could you change it from 2018 to 2016 and try again?
If it works I can update all the csv files.
@ZongyangLi changing it to 2016 results in Success!
@max-zilla Should be all right this time, please find all collections here:
https://drive.google.com/open?id=1fDGakYulkLjLSAG0e_H-MEmjT69Bg2zF
Uploading these now, will close this once finished.
@ZongyangLi the LeafAngle and 98th height CSVs uploaded successfully, but the panicle CSVs encountered an error:
No method could be found matching {"name"=>"3D scanner to panicle count faster_rcnn + roughness threshold + convex hull"}
@kimberlyh66 @ZongyangLi can we update this method in BETY so i can upload panicle data and close this? thanks!
@kimberlyh66
I think there was a typo in the spreadsheet previously: the word 'roughness threshold' was missing the letter 'h'. Could you change it to the correct one? Thanks.
@ZongyangLi the method has been updated to be
3D scanner to panicle count faster_rcnn + roughness threshold + convex hull
@kimberlyh66 thanks much! this is now uploaded & complete.
|
gharchive/issue
| 2019-05-30T14:07:55 |
2025-04-01T06:40:35.976565
|
{
"authors": [
"ZongyangLi",
"abby621",
"dlebauer",
"kimberlyh66",
"max-zilla"
],
"repo": "terraref/computing-pipeline",
"url": "https://github.com/terraref/computing-pipeline/issues/582",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
352453764
|
Parse Halo and Rotation of TextSymbolizer
Title says it all...
In with this! Thanks again
|
gharchive/pull-request
| 2018-08-21T09:18:09 |
2025-04-01T06:40:36.003914
|
{
"authors": [
"jansule",
"marcjansen"
],
"repo": "terrestris/geostyler-sld-parser",
"url": "https://github.com/terrestris/geostyler-sld-parser/pull/70",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
1203315461
|
The known vulnerability in the shared library zstd which blacklite depends on. Can you help upgrade to patch versions?
Hi @wsargent, @shipkit-org, I'd like to report a vulnerability issue in com.tersesystems.blacklite:blacklite-codec-zstd:1.1.0.
Issue Description
I noticed that com.tersesystems.blacklite:blacklite-codec-zstd:1.1.0 directly depends on com.github.luben:zstd-jni:v1.4.5-6 in the pom. However, as shown in the following dependency graph, com.github.luben:zstd-jni:v1.4.5-6 suffers from the vulnerability exposed by the C library zstd (version 1.4.5): CVE-2021-24032.
Dependency Graph between Java and Shared Libraries
Suggested Vulnerability Patch Versions
com.github.luben:zstd-jni:v1.4.9-1 (>=v1.4.9-1) has upgraded this vulnerable C library zstd to the patch version 1.4.9.
Java build tools cannot report vulnerable C libraries, which may introduce security issues into many downstream Java projects. Could you please upgrade this vulnerable dependency?
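For example, a minimal sketch of the suggested bump in the module's pom (coordinates taken from this report; the exact version string published to Maven Central may differ):
<dependency>
  <groupId>com.github.luben</groupId>
  <artifactId>zstd-jni</artifactId>
  <version>1.4.9-1</version>
</dependency>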
Thanks for your help~
Best regards,
Helen Parr
Filing https://github.com/tersesystems/blacklite/pull/25
Fixed in https://github.com/tersesystems/blacklite/releases/tag/v1.1.1
|
gharchive/issue
| 2022-04-13T13:37:19 |
2025-04-01T06:40:36.009705
|
{
"authors": [
"HelenParr",
"wsargent"
],
"repo": "tersesystems/blacklite",
"url": "https://github.com/tersesystems/blacklite/issues/24",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
665368945
|
AAPT: error: style attribute 'attr/sv_animationDuration (aka com.haris.kareem:attr/sv_animationDuration)' not found.
Could someone help me with this error? (Android Studio)
Don't see how this relates to this library. Sorry.
Hello friend, I am facing the same problem. How did you solve it?
@Alouis62 please provide steps to reproduce or any indication that this library is the cause.
|
gharchive/issue
| 2020-07-24T19:25:28 |
2025-04-01T06:40:36.012138
|
{
"authors": [
"Alouis62",
"matheussimao59",
"scarlac"
],
"repo": "teslamotors/react-native-camera-kit",
"url": "https://github.com/teslamotors/react-native-camera-kit/issues/301",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1703268658
|
iOS: App crash on iPad when pressed on “Don’t Allow” to camera service modal alert.
Describe the bug
As the title says, my app just crashes when "Don't Allow" is pressed on the camera service modal alert. Is anyone having the same problem?
Screenshots
This is my App Store review report
Smartphone:
Device: [iPad]
OS: [iOS 16.4.1]
I believe you should use react-native-permissions to handle permissions.
@nkqdev Have you been able to fix this bug now? I have this same issue too.
Well, I just disabled my camera function for iPad until I find a solution.
|
gharchive/issue
| 2023-05-10T07:10:00 |
2025-04-01T06:40:36.015492
|
{
"authors": [
"lehaidangdev",
"nkqdev",
"tosinakerele"
],
"repo": "teslamotors/react-native-camera-kit",
"url": "https://github.com/teslamotors/react-native-camera-kit/issues/540",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
176548311
|
T2 Example code does not run.
Attempting to follow these instructions:
https://tessel.github.io/t2-start/modules/rfid.html
INFO Looking for your Tessel...
INFO Connected to athena.
INFO Tessel [athena] CLI version: 0.0.28
INFO Tessel [athena] Firmware version: 0.0.14
INFO Tessel [athena] Node version: 4.4.3
Hardware is connected properly, running code:
// Any copyright is dedicated to the Public Domain.
// http://creativecommons.org/publicdomain/zero/1.0/
/*********************************************
This basic RFID example listens for an RFID
device to come within range of the module,
then logs its UID to the console.
*********************************************/
var tessel = require('tessel');
var rfidlib = require('rfid-pn532');
var rfid = rfidlib.use(tessel.port['A']);
rfid.on('ready', function (version) {
console.log('Ready to read RFID card');
rfid.on('data', function(card) {
console.log('UID:', card.uid.toString('hex'));
});
});
rfid.on('error', function (err) {
console.error(err);
});
Expected
Expect card uid to be logged to console.
Observed
Orange LED indicator flashes in the presence of the card, and is steady when no card is present.
Nothing is logged to console.
Note: version is undefined in the 'ready' callback.
@OR13 this was fixed this week with https://github.com/tessel/t2-firmware/pull/214.
It's expected to be released within the next few days as version 0.0.15. Of course, you're welcome to build and flash firmware yourself with these instructions.
I'm not sure if the version returned in the ready callback will be fixed with this upcoming firmware patch. That might be an issue with this driver.
Closing this as it was fixed in the last version of firmware.
|
gharchive/issue
| 2016-09-13T04:47:18 |
2025-04-01T06:40:36.019864
|
{
"authors": [
"OR13",
"johnnyman727"
],
"repo": "tessel/rfid-pn532",
"url": "https://github.com/tessel/rfid-pn532/issues/32",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
237585171
|
refactor(lib/log.js)
eliminate char-spinner dependency
use simplified internal spinner adapted from char-spinner
coverage at 100%
Signed-off-by: Rick Waldron waldron.rick@gmail.com
r? @tcr
(tessel-highfive has picked a reviewer for you, use r? to override)
@tcr the tessel-highfive bot seems a bit intrusive
Coverage decreased (-0.09%) to 79.565% when pulling 6824aaac22a22fc5caca339274f09b312c8b6de3 on refactor-log into d06d1b7b830b43797a51d137ff8d3cd706d4a2b7 on master.
Coverage increased (+0.09%) to 79.746% when pulling b6d47932d6734039cf59253bacec9ba76ba527f3 on refactor-log into d06d1b7b830b43797a51d137ff8d3cd706d4a2b7 on master.
I'll smoke test this tonight.
My initial feedback is a preference to see the simplified internal spinner moved to its own file for the sake of reducing noise in the lib/log.js exports and isolating the testing logic. That's purely a preference, not a blocker.
|
gharchive/pull-request
| 2017-06-21T15:59:17 |
2025-04-01T06:40:36.024639
|
{
"authors": [
"HipsterBrown",
"coveralls",
"rwaldron",
"tessel-highfive"
],
"repo": "tessel/t2-cli",
"url": "https://github.com/tessel/t2-cli/pull/1252",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
194454340
|
Tesseract security considerations
I am integrating Tesseract into an application, but I have some questions before keep going with the process.
I think every application should have security filters and considerations in order to avoid malicious and bad input data, so my questions are:
Does Tesseract have special code to handle bad or malicious input data?
Or just have a few validations to tell the user the correct input data?
Releases are performed after doing some security reviews and testing?
Or just functional testing?
I will appreciate your answers.
Thanks a lot!
Posted on
https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/tesseract-ocr/bVq_mGnPsN4/LqYyZg1ICQAJ
|
gharchive/issue
| 2016-12-08T21:52:14 |
2025-04-01T06:40:36.027341
|
{
"authors": [
"joshimar",
"zdenop"
],
"repo": "tesseract-ocr/tesseract",
"url": "https://github.com/tesseract-ocr/tesseract/issues/548",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
333017672
|
DO NOT MERGE: add unit test to test loading of languages
Please ignore the change to the training scripts. Thanks,
Please do not merge this as it includes some additional commits. I will resubmit along with any other suggested changes.
@stweil Is this ok as a test for loading ALL languages?
There does not seem to be much difference in load times between the three tessdata repos.
Line 67: [ OK ] Tessdata_fast/LoadLanguage.eng/0 (237 ms)
Line 317: [ OK ] Tessdata_best/LoadLanguage.eng/0 (242 ms)
Line 566: [ OK ] Tessdata/LoadLanguage.eng/0 (267 ms)
Complete log is attached.
loadlang_test.log.txt
New PR https://github.com/tesseract-ocr/tesseract/pull/1677
|
gharchive/pull-request
| 2018-06-16T21:53:31 |
2025-04-01T06:40:36.030123
|
{
"authors": [
"Shreeshrii"
],
"repo": "tesseract-ocr/tesseract",
"url": "https://github.com/tesseract-ocr/tesseract/pull/1676",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1758159155
|
Syntax highlighting
Syntax highlighting should be aligned a little bit. See attached PDF.
Findings.pdf
solved and already merged with #6
second issue (dot-syntax has a different color) is not solved => reopened
solved and already merged with #7
|
gharchive/issue
| 2023-06-15T06:51:13 |
2025-04-01T06:40:36.037740
|
{
"authors": [
"HolQue",
"test-fullautomation"
],
"repo": "test-fullautomation/vscode-jsonp",
"url": "https://github.com/test-fullautomation/vscode-jsonp/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1556784997
|
[Enhancement]: Allow a option to increase timeout more than 120 seconds
Module
MySQL
Proposal
I am using Testcontainers in one of my build steps. My laptop has an HDD, not an SSD. Starting a MySQL 8 container takes more than 120 seconds; Testcontainers kills the container after 120 seconds and retries 3 times, but it still fails because startup takes more than 120 seconds each time. I searched the internet and even read the source code, and came to the conclusion that there is no option to increase the timeout. I desperately need this feature.
Have you tried using withConnectTimeoutSeconds()?
I am starting the MySQL container from my pom.xml.
I don't have any option to use the above method there.
https://github.com/testcontainers/testcontainers-java/blob/main/modules/jdbc/src/main/java/org/testcontainers/containers/JdbcDatabaseContainer.java contains the timeout at line number 40. It is a private variable and cannot be overridden.
I am looking for an option to override it using a property file or environment variable.
As mentioned before, the value can be set using the withStartupTimeoutSeconds() method.
I am starting the MySQL container from my pom.xml.
It is unclear what this means, can you share a reproducer example?
I am using Testcontainers like the following in my pom.xml to generate jOOQ classes with the jooq-codegen-maven plugin:
<jdbc>
  <driver>org.testcontainers.jdbc.ContainerDatabaseDriver</driver>
  <url>jdbc:tc:mysql:8.0.23:///price_service?TC_INITSCRIPT=file:scripts/merged_schema.sql</url>
</jdbc>
The above usage fails due to a timeout while creating the Docker container. I have implemented a solution for myself at https://github.com/jasvant6thstreet/testcontainers-java-jasvant
|
gharchive/issue
| 2023-01-25T14:51:06 |
2025-04-01T06:40:36.197198
|
{
"authors": [
"jasvant6thstreet",
"kiview"
],
"repo": "testcontainers/testcontainers-java",
"url": "https://github.com/testcontainers/testcontainers-java/issues/6436",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|