Dataset columns:
id: string (length 4 to 10)
text: string (length 4 to 2.14M)
source: string (2 classes)
created: timestamp[s] (range 2001-05-16 21:05:09 to 2025-01-01 03:38:30)
added: string date (range 2025-04-01 04:05:38 to 2025-04-01 07:14:06)
metadata: dict
178526812
tagFormatter() not called on add The tagFormatter works fine when the tags: option gets read on Taggle instantiation, but it does not get called when new tags are added. How are you adding new tags? If you're calling add(), that should internally call _add, which calls _createTag, which calls tagFormatter. I'm using add() and it's not calling tagFormatter :( Seems to be working fine here: https://jsfiddle.net/okcoker/s5yx2yc1/2/ Yeap, traced it back to an error in the code I wrote to format the tag. Thanks!
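For reference, a minimal sketch of the flow the maintainer describes (add() -> _add -> _createTag -> tagFormatter), assuming taggle.js is loaded on the page; the element id, class name, and the exact option typings are illustrative assumptions rather than the library's official typings.

```typescript
// Minimal sketch (assumes the taggle.js script is loaded globally; the option
// shapes below follow the issue discussion and may differ slightly from the real API).
declare class Taggle {
  constructor(containerId: string, options?: {
    tags?: string[];
    tagFormatter?: (li: HTMLLIElement) => HTMLLIElement;
  });
  add(tag: string | string[]): Taggle;
}

const taggle = new Taggle("tags", {
  tags: ["initial"],                // formatted once at instantiation
  tagFormatter: (li) => {
    li.classList.add("custom-tag"); // example formatting; the element must be returned
    return li;
  },
});

// Per the maintainer, add() -> _add() -> _createTag() -> tagFormatter,
// so tags added later should be formatted the same way as the initial ones.
taggle.add("added-later");
```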
gharchive/issue
2016-09-22T05:40:51
2025-04-01T06:39:50.289596
{ "authors": [ "highvoltag3", "okcoker" ], "repo": "okcoker/taggle.js", "url": "https://github.com/okcoker/taggle.js/issues/79", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
866746025
Added precision to number formatting for issue #224. @neuroactive I guess you'll take care of the charts right? @FottyM Looks good, thank you! Yes, I can take care of the charts. Please do let me know if I need to change something. There was a small issue where trailing zeroes in the numbers were getting removed, i.e. with a precision of 1, "29.97" was being formatted as "30" instead of "30.0". But I have fixed that by adding an argument to the Intl.NumberFormat function call. I've sent the changes live and the numbers look much better now. Thanks!
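A minimal sketch of the trailing-zero fix described in this thread; the assumption here is that the argument added to Intl.NumberFormat was minimumFractionDigits (the PR diff itself is not quoted above), shown in TypeScript:

```typescript
// Keep a fixed number of decimals so 29.97 renders as "30.0" rather than "30".
// minimumFractionDigits is the assumed fix; the actual PR change is not shown in this thread.
function formatWithPrecision(value: number, precision: number): string {
  const formatter = new Intl.NumberFormat("en-US", {
    minimumFractionDigits: precision, // prevents trailing zeroes from being dropped
    maximumFractionDigits: precision, // rounds to the requested precision
  });
  return formatter.format(value);
}

console.log(formatWithPrecision(29.97, 1)); // "30.0"
console.log(formatWithPrecision(29.97, 0)); // "30"
```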
gharchive/pull-request
2021-04-24T13:25:40
2025-04-01T06:39:50.313287
{ "authors": [ "FottyM", "neuroactive" ], "repo": "okestonia/koroonakaart", "url": "https://github.com/okestonia/koroonakaart/pull/228", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1577624829
🧪 Add mock logic querier 📝 Purpose Trying out the testing process on a CosmWasm smart contract, I've added a unit test of the new cw-logic-sample (#114) smart contract. 🕹️ Mock To do that, a MockLogicQuerier has been introduced allowing the logic module to be mocked. A default implementation has been set to return a successful Answer whatever program or query is set, initially with a Foo / Bar variable substitution. It is intended to just check that the cw-logic-sample contract's Ask query requests the logic module. 🚨 Linter In addition, I'd introduced some unused imports, so I've enforced the linter rules 😈 😈. I'm nice because I haven't added the missing_docs rule 😇. size-limit report 📦 Path Size target/wasm32-unknown-unknown/release/cw_template.wasm 178.08 KB (0%) :tada: This PR is included in version 1.0.0 :tada: The release is available on GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2023-02-09T10:26:26
2025-04-01T06:39:50.330502
{ "authors": [ "bdeneux", "bot-anik" ], "repo": "okp4/contracts", "url": "https://github.com/okp4/contracts/pull/115", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
346035191
:seedling: Allow default profile, configurable profile prefix, and configurable credentials suffix Problem Statement This tool does not allow you to create a working default profile. This tool gives warnings on AWS Java SDK (#157). This tool makes credentials profiles with names that difficult to use (#118). Different SDKs have conflicting prefix and suffix expectations. https://github.com/oktadeveloper/okta-aws-cli-assume-role/issues/157#issuecomment-409091620 Solution Support default as a named profile via OKTA_PROFILE=default Allow configurable profile prefix (Fix #157 with OKTA_PROFILE_PREFIX="") Allow configurable credentials suffix (Fix #118 with OKTA_CREDENTIALS_SUFFIX="") Tested on macOS in Fish shell. Tested on Windows 10 in PowerShell. @mraible ready to merge. @mrminecart it will be possible to set the default role with this fix once 1.0.3 is released. https://github.com/oktadeveloper/okta-aws-cli-assume-role/issues/122#issuecomment-391302510 Until then, you could download and build it directly.
gharchive/pull-request
2018-07-31T04:56:56
2025-04-01T06:39:50.358780
{ "authors": [ "AlainODea" ], "repo": "oktadeveloper/okta-aws-cli-assume-role", "url": "https://github.com/oktadeveloper/okta-aws-cli-assume-role/pull/161", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1929672109
Documentation for the changes in private registry credentials This PR adds documentation for the changes we are adding for private registry credentials that will go out in 1.14. The main changes are: Added an Upgrading to Okteto 1.14.x section explaining how the migration is done for instances using privateRegistry key. Removed the helm setting of privateRegistry from the settings page. We don't want people using that setting anymore from 1.14 Added more information in the registry credentials section of the admin dashboard explaining the new functionality (screenshots are still missing) and the advanced section in which we explain how to create the private registry credentials directly in the Kubernetes cluster I had doubts about creating a specific page for the private registry, but I didn't find any place where it would fit, so I decided to go with the simplest approach and include it within the sections already exist. I just set the PR ready for review as I already did the latest changes and updated a couple of screenshots
gharchive/pull-request
2023-10-06T08:21:52
2025-04-01T06:39:50.361349
{ "authors": [ "ifbyol" ], "repo": "okteto/docs", "url": "https://github.com/okteto/docs/pull/509", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
307868909
Still works? I start the project but nothing happens. @XxStR abandoned @oleghalin That's sad :/ Anyway, I'm updating, thank you very much. @XxStR check the Arcana repo @XxStR If you want, I can try to help you. You can share your code here or at t.me/KhalinOleg @oleghalin Thanks. Project @oleghalin Hi, did you get a chance to look at the project?
gharchive/issue
2018-03-23T01:23:04
2025-04-01T06:39:50.402005
{ "authors": [ "XxStR", "oleghalin" ], "repo": "oleghalin/Dota2LadderBot", "url": "https://github.com/oleghalin/Dota2LadderBot/issues/3", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
420274645
Understanding Power - Energy calculation by PZEM-004T Hello all, I am using PZEM-004T and Arduino UNO to measure electricity consumption. The module is now working well, I can see the values on the Serial screen. However, I still have some questions as below: 1. How does PZEM-004T calculate the Energy (Wh) from Power (W)? As in the following pic: P = 25W, E = 609Wh = 609/3600 Ws. With equation: E = P*t, we have t = E/P = 609/3600/25 = 0.006766 s. So is this calculation correct? If not, please tell me how the module calculates E from P. 2. In the pic, it seems the Energy is calculated instantly. How could we get the accumulated Energy and how to reset it to zero programmatically? Thanks a lot! Pic source: http://pdacontrolen.com/meter-pzem-004t-with-arduino-esp32-esp8266-python-raspberry-pi/ E is measured in Wh (Watt multiplied by Hour), so 609Wh = 609*3600 Ws. Here 609Wh means that you were consuming either 609 W during 1 hour, or 25 W during 609/25 = 24.36 hours, or even 1 W during 609 hours and so on. Your equation should be 609*3600/25 = 87696 s, and it only shows how many seconds are needed to consume 609 Wh at a power of 25 W. No, this is an accumulated Energy value and, unfortunately, it cannot be reset programmatically. Thanks a lot for your quick response! This still isn't clear enough, because what if we want to know the energy consumed after 5 min? We know the voltage and current readings but not the time. We also know the total energy consumed by now. You just have to subtract the Energy value 5 minutes ago from the current Energy value. Just to make it clear, you have to record values from the PZEM with a timestamp somewhere (external DB, file etc.) Thanks so much for your fast reply, I can't get your first comment. Energy is the accumulation of power over a period of time. Energy after 5 min is power x 5 min, then you convert to kWh. But we can't control that here. Concerning your second comment, what is the time interval for each energy calculation made by the meter? I would love to know that; I can't find it anywhere in the datasheet. It's not just about Power multiplied by Time. Energy is calculated by PZEM's internal MCU using an RMS method. It's calculated quite precisely (tens or maybe hundreds of readings per second, I guess...) Do you have a link for learning how this energy is calculated? By the way, you will still need time to calculate energy, and how do you compensate for that? Is it something like this? This is for another metering IC; if you have one for the PZEM-004T, please share. Sorry, I don't know if the PZEM uses a special MCU, or just calculates RMS programmatically. You may refer to this article https://learn.openenergymonitor.org/electricity-monitoring/ac-power-theory/arduino-maths for more information on how to calculate RMS power having only instantaneous voltage and current measurements. And the Energy will be equal to something like [RMS power] x [number of samples] / [sample rate]. Found some info on the PZEM-004T. Looks like it uses this metering IC: http://www.sdicmicro.com/DataSheet/SD3004 datasheet v0.2c.pdf Thanks so much, will look into it. Hello, I have read through the comments, but I cannot implement them. I have a constant load connected and I get 9.2 W displayed at 0.07 A and 234 V. The Wh says 0.76. Somehow all this does not fit together or makes no sense to me. Can someone break down my values and explain them to me?
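To make the arithmetic concrete: a constant 25 W load accumulates 25 Wh every hour, so an accumulated 609 Wh corresponds to roughly 609 / 25 = 24.36 hours at that power. Below is a minimal sketch of the accumulation idea mentioned in the thread ([RMS power] x [number of samples] / [sample rate]); the sample rate and window size are illustrative assumptions, not the PZEM's actual internals.

```typescript
// Sketch of energy metering from sampled instantaneous voltage/current readings.
// SAMPLE_RATE_HZ and the window size are assumptions for illustration only.
const SAMPLE_RATE_HZ = 2000;       // assumed ADC sampling rate
let energyWs = 0;                  // accumulated energy in watt-seconds

// Real (active) power over one window = mean of instantaneous p = v * i.
function realPower(volts: number[], amps: number[]): number {
  let sum = 0;
  for (let n = 0; n < volts.length; n++) {
    sum += volts[n] * amps[n];
  }
  return sum / volts.length;       // watts
}

// Called for each window of samples; integrates power over the window duration.
function accumulate(volts: number[], amps: number[]): void {
  const p = realPower(volts, amps);
  const windowSeconds = volts.length / SAMPLE_RATE_HZ;
  energyWs += p * windowSeconds;   // E += P * t
}

function energyWh(): number {
  return energyWs / 3600;          // 1 Wh = 3600 Ws
}

// Example: feeding one hour of samples from a constant 25 W load would leave
// energyWh() at about 25.
```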
gharchive/issue
2019-03-13T01:21:35
2025-04-01T06:39:50.413122
{ "authors": [ "Aliiiu", "EDSTOBI", "olehs", "vandat07" ], "repo": "olehs/PZEM004T", "url": "https://github.com/olehs/PZEM004T/issues/54", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
863198435
Failed to obtain access token I am sorry to reach here.. I have copied the Storwize platform ansible modules and created a simple playbook to obtain the volumes list. I got stuck with Failed to obtain access token. My password is correct. Any inputs? Hi. Not sure what's causing your problem. But please check that your Spectrum Virtualize is running a supported version and that the user you are using has the right permissions. Check the role here, and also create an issue if you have problems: ibm.spectrum_virtualize
gharchive/issue
2021-04-20T20:10:40
2025-04-01T06:39:50.428603
{ "authors": [ "naveen17", "olemyk" ], "repo": "olemyk/ansible-virtualize-playbooks", "url": "https://github.com/olemyk/ansible-virtualize-playbooks/issues/3", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2622198282
🛑 Hitobito PBS is down In aadff35, Hitobito PBS (https://db.scout.ch) was down: HTTP code: 404 Response time: 1174 ms Resolved: Hitobito PBS is back up in 36a9342 after 22 minutes.
gharchive/issue
2024-10-29T19:54:48
2025-04-01T06:39:50.442037
{ "authors": [ "olibrian" ], "repo": "olibrian/obrian_uptime", "url": "https://github.com/olibrian/obrian_uptime/issues/553", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2470676475
✨ Add support for user defined dap strategy Why? The default strategy configuration for dap does not work with current versions of nvim-dap or nvim-dap-ruby. It is missing the request field. What? I am proposing enabling users to provide their own strategy config implementation so they can customize it to their needs. Alternatives Change the existing get_strategy_config to include: request = "attach", error_on_failure = false, localfs = true, waiting = 1000, Todo [ ] Update documentation [x] Suppress exit code 1 warning for test failures error_on_failure = false, [x] ~Fix inconsistent population of test output~ dap-ruby appends stdout to dap.repl instead of the process output tempfile dap-ruby is configured as a type = "server" and it'll be challenging to capture the output stream as is [x] ~Fix inconsistent coloring of test output~ When dap-ruby runs the specs, it doesn't have a TTY -- so the .rspec configured --color is ignored. If --force-color is set, then the dap.repl includes the color codes Demo https://github.com/user-attachments/assets/2d86ca90-69e5-4a74-84d0-0d259cab5a3e Thank you. I'll add @elken to this who authored the original PR I can confirm it does work as I've been using it, unless you mean something changed in the week or so since? I can confirm it does work as I've been using it, unless you mean something changed in the week or so since? EDIT: Seems something has changed, okay but I don't see this PR adding it I get an error message: Config needs the request property which must be one of attach or launch https://github.com/user-attachments/assets/2c24dd53-6f5b-4e77-94e5-a910139d603f This PR makes the strategy configuration user editable. An alternative would be to include these lines in the default strategy configuration: request = "attach", options = { source_filetype = "ruby" }, error_on_failure = true, localfs = true, waiting = 1000, At least request = "attach" is needed … I put all these in my user defined implementation of strategy_config but I suspect some may not be needed. @elken I interpreted your statement: okay but I don't see this PR adding it As desiring a default configuration which worked! This PR should now work without requiring users to override the default strategy_config. It does so by adding these values to the default get_strategy_config: request = "attach", error_on_failure = false, localfs = true, diff --git a/lua/neotest-rspec/init.lua b/lua/neotest-rspec/init.lua index c449491..75aa9db 100644 --- a/lua/neotest-rspec/init.lua +++ b/lua/neotest-rspec/init.lua @@ -123,8 +123,13 @@ function NeotestAdapter.build_spec(args) local function get_strategy_config(strategy, command, cwd) local strategy_config = { dap = function() + vim.fn.setenv("RUBYOPT", "-rdebug/open") + vim.fn.setenv("RUBY_DEBUG_COMMANDS", "c ;;") return { name = "Debug RSpec Tests", + request = "attach", + localfs = true, + waiting = 1000, type = "ruby", args = { unpack(command, 2) }, command = command[1], The above seems to work consistently now. The env vars are needed for "reasons". Also probably worth noting that I'm calling bin/rspec in my config. @elken indeed, a simpler contribution! The setenv stuff could be simplified a bit. 
nvim-dap-ruby https://github.com/suketa/nvim-dap-ruby/blob/main/lua/dap-ruby.lua#L81 configures debug to open (if required) so one change to your diff could be to simply: diff --git a/lua/neotest-rspec/init.lua b/lua/neotest-rspec/init.lua index c449491..75aa9db 100644 --- a/lua/neotest-rspec/init.lua +++ b/lua/neotest-rspec/init.lua @@ -123,8 +123,13 @@ function NeotestAdapter.build_spec(args) local function get_strategy_config(strategy, command, cwd) local strategy_config = { dap = function() + vim.fn.setenv("RUBYOPT", "-rdebug") + vim.fn.setenv("RUBY_DEBUG_NONSTOP", true) return { name = "Debug RSpec Tests", + request = "attach", + localfs = true, + waiting = 1000, + error_on_failure = false, type = "ruby", args = { unpack(command, 2) }, command = command[1], Would you prefer to not make the strategy configurable by users, and instead simply update get_strategy_config ? I still believe neotest-rspec will need to include error_on_failure = false, in the strategy, so as to not notify the user with a message about an exit code of 1. My rspec_cmd is configured like so, which works well with the nvim-ruby-dap existing environment variables, and is an alterative to using RUBYOPTS rspec_cmd = function() return vim.tbl_flatten({ "bundle", "exec", "rdbg", "--nonstop", "-c", "--", "rspec", }) end, I'm not beholden to my code :smile: whichever change is least destructive and simplest for the user. RE: making it configurable, I would see what the other adapters do. I don't know if it needs to be, but it's obviously not a massive deal if it is :smile: Closing in favour of: https://github.com/olimorris/neotest-rspec/pull/72 Pending further discovery in trends of adapter's exposing dap strategy configuration.
gharchive/pull-request
2024-08-16T17:29:36
2025-04-01T06:39:50.478860
{ "authors": [ "cpb", "elken", "olimorris" ], "repo": "olimorris/neotest-rspec", "url": "https://github.com/olimorris/neotest-rspec/pull/71", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1597596408
Collect and Remove Hash Table Methods Implements void remove_kv_pair(hash_table* in_table, void* key, size_t key_size); Implements vector_kv_pair collect_table(hash_table* in_table); The tests are a little weak, will add more later and re-test for memory leaks closes https://github.com/olincollege/p2p-networking/issues/15
gharchive/pull-request
2023-02-23T22:10:17
2025-04-01T06:39:50.480878
{ "authors": [ "daniel-sudz" ], "repo": "olincollege/p2p-networking", "url": "https://github.com/olincollege/p2p-networking/pull/20", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
384909153
Multi level Aggregation using Query DSL` Please use the following questions as a guideline to help me answer your issue/question without further inquiry. Thank you. Which version of Elastic are you using? [ ] elastic.v6 (for Elasticsearch 6.x) { "query": { "match_all": {} }, "aggs": { "date": { "date_histogram": { "field": "time", "interval": "hour", "format": "yyyy-MM-dd HH" }, "aggs": { "count": { "sum": { "field": "no_of_hits" } } } } } } I tried in this way. Please correct me if I am wrong query := elastic.NewMatchAllQuery() aggs := elastic.NewDateHistogramAggregation(). Field("time"). Interval("hour"). Format("yyyy-MM-dd HH") aggs.SubAggregation("count", elastic.NewSumAggregation().Field("no_of_hits")) searchResult, err := client.Search(). Index("legitimate-bot-18.11.20"). Type("legitimate"). Query(query). Aggregation("date", aggs). Do(ctx) hour_agg, found := searchResult.Aggregations.Terms("date") for _, b := range hour_agg.Buckets { count_agg, found := b.SumBucket("count") log.Printf(", %d\n", count_agg.Value) But not getting the correct result. Any help will be appriciated I can say for sure that this works, using it every day. While I cannot run your code snippet, I need to make a guess. Can you change aggs.SubAggregatio(...) to aggs = aggs.SubAggregation(...) and try again. Other than that, please see the largest integration test you can find in this library for an example of basically every kind of aggregation you can find, including sub-aggregations. Expected query and the data but the data getting in the code is this. I am a very newbie to this golang. May be I made a very small mistake. Can you please look
gharchive/issue
2018-11-27T17:35:41
2025-04-01T06:39:50.493060
{ "authors": [ "olivere", "ravi45722" ], "repo": "olivere/elastic", "url": "https://github.com/olivere/elastic/issues/975", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
205666290
Periodic state jump in circular motions When performing a circular motion with the bicycle, we see a jump in state once per circle. This is illustrated in the following plot: This is likely due to how yaw angle is wrapped for z and C*x in the Kalman measurement update step: template <typename T> void Kalman<T>::measurement_update_state(const measurement_t& z) { m_x = m_system.normalize_state(m_x + m_K*(z - m_system.Cd()*m_x)); } https://github.com/oliverlee/bicycle/blob/master/src/kalman.hh#L119-L122 As a result, we could have the case where z = -179, C*x = 181 and z - C*x = -360 (degrees)
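A minimal sketch of the usual remedy, assuming angles in degrees: wrap the innovation z - C*x into [-180, 180) before applying the Kalman gain, so the z = -179, C*x = 181 case produces an innovation of 0 rather than -360. The scalar form below is for illustration only and is not the project's actual API.

```typescript
// Wrap an angle difference into [-180, 180) degrees.
function wrapAngle(deltaDeg: number): number {
  let d = ((deltaDeg % 360) + 360) % 360; // now in [0, 360)
  if (d >= 180) d -= 360;                 // shift to [-180, 180)
  return d;
}

// Sketch of a scalar measurement update that wraps the yaw innovation before
// applying the gain (a simplification of the vector update shown in the issue).
function yawMeasurementUpdate(x: number, gain: number, z: number, cx: number): number {
  const innovation = wrapAngle(z - cx); // z = -179, C*x = 181 -> 0 instead of -360
  return x + gain * innovation;
}

console.log(wrapAngle(-179 - 181)); // 0
```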
gharchive/issue
2017-02-06T18:16:29
2025-04-01T06:39:50.498566
{ "authors": [ "oliverlee" ], "repo": "oliverlee/phobos", "url": "https://github.com/oliverlee/phobos/issues/69", "license": "bsd-2-clause", "license_type": "permissive", "license_source": "bigquery" }
417478449
"No response" status can now be programmatically triggered No response status can be triggered via sending NO_RESPONSE as a value of some valid characteristic. #48 Did not tested this myself but code can't make any harm as I see. Merging to dev branch. @radionoise thank you very much for your contribution! It's pretty useful feature!
gharchive/pull-request
2019-03-05T20:04:40
2025-04-01T06:39:50.499849
{ "authors": [ "Shaquu", "radionoise" ], "repo": "oliverrahner/node-red-contrib-homekit-bridged", "url": "https://github.com/oliverrahner/node-red-contrib-homekit-bridged/pull/51", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2451071772
Avoid the need to download ollama manually Would it be possible to use ollama-python straight out of the box without the need to download/install ollama manually first? yes please I want this too Hey - not planning to support this for now, thanks!
gharchive/issue
2024-08-06T14:43:54
2025-04-01T06:39:50.504201
{ "authors": [ "GXcells", "IliasAarab", "ParthSareen" ], "repo": "ollama/ollama-python", "url": "https://github.com/ollama/ollama-python/issues/245", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2521544057
Inconsistent Responses from Identical Models What is the issue? I am new to Ollama and have noticed that when I ask a query using Ollama, the model's responses are quite poor. However, if I ask the same query using https://www.llama2.ai/, I receive much better responses. Can anyone explain what might be causing this difference? What could I be doing wrong? # ! pip install -q pyautogen # ! pip install -q 'litellm[proxy]' import requests import json # Define the payload payload = { "model": "llama3:8b", # Changed model to match the curl version "prompt": """How does a neural network work?""" , "format": "json", # To ensure response is in JSON format "stream": False # To disable streaming in the response } # Send the POST request to the API response = requests.post("http://127.0.0.1:11434/api/generate", json=payload) output = response.json() print(output) {'model': 'llama3:8b', 'created_at': '2024-09-12T06:26:59.001500912Z', 'response': '{ }\n\n ', 'done': True, 'done_reason': 'stop', 'context': [128006, 882, 128007, 271, 4438, 1587, 264, 30828, 4009, 990, 30, 128009, 128006, 78191, 128007, 271, 90, 557, 220], 'total_duration': 243709137, 'load_duration': 56054134, 'prompt_eval_count': 17, 'prompt_eval_duration': 39720000, 'eval_count': 4, 'eval_duration': 103353000} Ollama model response: response': '{ }\n\n URL: https://www.llama2.ai/ Model: Meta LLAMA3-8B Response: I'd be happy to explain how a neural network works! A neural network is a type of machine learning model inspired by the structure and function of the human brain. It's a complex system made up of many interconnected nodes or "neurons," which process and transmit information. Here's a simplified overview of how a neural network works: Data Input: The neural network receives input data, which can be images, sound waves, text, or any other type of data that can be digitized. Neuron Layers: The input data is fed into the first layer of neurons, called the input layer. Each neuron in this layer receives a set of input values and performs a calculation on them. Activation Functions: The output of each neuron is then passed through an activation function, which determines whether the neuron should "fire" or not. Common activation functions include sigmoid, tanh, and ReLU (Rectified Linear Unit). Hidden Layers: The output from the input layer is then passed to one or more hidden layers, which are made up of neurons that perform complex calculations on the input data. Each hidden layer can have multiple neurons, and each neuron in the layer receives input from the previous layer. Output Layer: The output from the hidden layers is then passed to the output layer, which produces the final output of the neural network. Training: During training, the neural network is presented with a large dataset of input-output pairs. The network adjusts the weights and biases of the connections between neurons to minimize the error between its predictions and the actual output. Forward Propagation: During forward propagation, the input data flows through the network, and the output is calculated at each layer. Backpropagation: During backpropagation, the error between the predicted output and the actual output is calculated, and the error is propagated backwards through the network. This process helps the network adjust the weights and biases to improve its performance. 
Optimization: The network uses an optimization algorithm, such as stochastic gradient descent (SGD), to adjust the weights and biases based on the error calculated during backpropagation. Repeat: The process of forward propagation, backpropagation, and optimization is repeated multiple times until the network converges or reaches a desired level of accuracy. Neural networks can be used for a wide range of applications, including image and speech recognition, natural language processing, and predictive modeling. They're particularly useful when dealing with complex, non-linear relationships between inputs and outputs. I hope this helps! Do you have any specific questions about neural networks or would you like me to elaborate on any of these points? OS Linux GPU Nvidia CPU Intel Ollama version No response There might be some confusion about "format":"json". In the request to ollama, it indicates the the model output should be formatted as JSON. This has nothing to do with the JSON marshalling that requests does with the payload that it sends to and receives from the ollama service. If you remove it you get an answer more like the one from llama2.ai: --- 6771.py.orig 2024-09-13 18:34:19.572952124 +1000 +++ 6771.py 2024-09-13 18:34:42.853207233 +1000 @@ -5,11 +5,10 @@ payload = { "model": "llama3:8b", # Changed model to match the curl version "prompt": """How does a neural network work?""" , - "format": "json", # To ensure response is in JSON format "stream": False # To disable streaming in the response } # Send the POST request to the API response = requests.post("http://127.0.0.1:11434/api/generate", json=payload) output = response.json() -print(output) +print(output["response"]) $ python3 6771.py A fundamental question in the world of artificial intelligence! A neural network is a type of machine learning model inspired by the structure and function of the human brain. It's a complex system composed of many interconnected nodes or "neurons," which process and transmit information to each other. Here's a simplified overview of how a neural network works: **Basic Components:** 1. **Neurons (Nodes)**: These are the basic computing units in a neural network. Each neuron receives input from other neurons, performs a computation on that input, and then sends the output to other neurons. 2. **Connections (Edges)**: Neurons are connected by edges, which represent the flow of information between them. 3. **Activation Functions**: Each neuron applies an activation function to its output before sending it to other neurons. **How Neural Networks Work:** 1. **Input Layer**: The input layer receives the input data, which is a set of features or values that describe the problem you're trying to solve. 2. **Hidden Layers**: The input layer feeds into one or more hidden layers, which are composed of multiple neurons. Each neuron in these layers applies an activation function to its output and then sends it to other neurons. 3. **Output Layer**: The hidden layers feed into the output layer, which produces the final output of the neural network. **The Training Process:** 1. **Forward Pass**: The input data flows through the network, with each neuron applying its activation function to the input from previous layers. 2. **Error Calculation**: The output of the network is compared to the desired output (target), and an error is calculated based on the difference between the two. 3. 
**Backpropagation**: The error is propagated backward through the network, adjusting the weights and biases of each neuron to minimize the error. 4. **Optimization**: An optimization algorithm, such as Stochastic Gradient Descent (SGD), is used to update the model's parameters based on the calculated error. **Key Concepts:** 1. **Weighted Sums**: Each neuron computes a weighted sum of its inputs, where the weights represent the strength of the connections between neurons. 2. **Bias Terms**: A bias term is added to each neuron's output to introduce some randomness and allow for non-linear relationships. 3. **Activation Functions**: These determine the output of each neuron based on its input. Common activation functions include sigmoid, ReLU (Rectified Linear Unit), and tanh. **Types of Neural Networks:** 1. **Feedforward Networks**: Information flows only in one direction, from input to output, without any feedback loops. 2. **Recurrent Neural Networks (RNNs)**: Feedback connections allow information to flow backwards in time, enabling the network to capture temporal dependencies. 3. **Convolutional Neural Networks (CNNs)**: Designed for image and signal processing tasks, CNNs use convolutional and pooling layers to extract features. This is a high-level overview of how neural networks work. If you're interested in diving deeper, I'd be happy to explain more about the math behind neural networks or discuss specific applications!
gharchive/issue
2024-09-12T07:11:03
2025-04-01T06:39:50.516455
{ "authors": [ "rick-github", "wahidur028" ], "repo": "ollama/ollama", "url": "https://github.com/ollama/ollama/issues/6771", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1869585811
example doesn't compile for me I'm getting a type error when trying to compile the example. $ nim r --d:release --d:threadsafe example.nim Hint: used config file '/home/itwrx/.choosenim/toolchains/nim-2.0.0/config/nim.cfg' [Conf] Hint: used config file '/home/itwrx/.choosenim/toolchains/nim-2.0.0/config/config.nims' [Conf] Hint: used config file '/var/www/betterweb-guildenstern/betterweb/nim.cfg' [Conf] ...................................................................................................................................................... /var/www/betterweb-guildenstern/betterweb/example.nim(13, 32) Error: type mismatch Expression: getOrDefault(readData(getBody()), "say") [1] readData(getBody()): StringTableRef [2] "say": string Expected one of (first mismatch at [position]): [1] func getOrDefault(headers: HttpHeaders; key: string; default = @[""].HttpHeaderValues): HttpHeaderValues In previous versions, strtabs was automatically exported to user. But because I want the server to be just a very unmagical and unopinionated building block, I removed such export (you only need StringTableRef with certain header operations). So, the fix is to explicitly import strtabs when required, and I have modified the example accordingly. Need to document this also at some point...
gharchive/issue
2023-08-28T11:37:53
2025-04-01T06:39:50.518682
{ "authors": [ "ITwrx", "olliNiinivaara" ], "repo": "olliNiinivaara/GuildenStern", "url": "https://github.com/olliNiinivaara/GuildenStern/issues/13", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
54096112
Custom Auth Example? Hi! I don't quite understand the following At this current moment in time, custom Auth drivers written for the base Auth class will not work. I'm currently looking into this particular issue but for the meantime, you can work around this by changing your closure to return an instance of Ollieread\Multiauth\Guard instead of the default. Where do I need to return the ollieread version of the guard? Thank you! try this: Auth::XXXX()->extend('name of driver', function() { $provider = App::make('namespace\of\custom\provider'); return new Ollieread\Multiauth\Guard($provider, App::make('session.store')); }); If you look at this guide to extending to a custom driver, you can see that they return new Illuminate\Auth\Guard - so I believe that is what he is referring to as the default that you have to switch out with Ollieread\Multiauth\Guard http://toddish.co.uk/blog/creating-a-custom-laravel-4-auth-driver/ Actually I forgot the name parameter. So instead it is something like this: Auth::XXXX()->extend('name of driver', function() { $provider = App::make('namespace\of\custom\provider'); return new Ollieread\Multiauth\Guard($provider, App::make('session.store'), 'XXXX'); }); You can read this thread for more explanation https://github.com/ollieread/multiauth/pull/26 Will give this a try today! Thank you
gharchive/issue
2015-01-12T19:32:12
2025-04-01T06:39:50.524994
{ "authors": [ "Siflax", "notflip" ], "repo": "ollieread/multiauth", "url": "https://github.com/ollieread/multiauth/issues/77", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1511671557
Integration suddenly not working Describe the bug A clear and concise description of what the bug is. If possible attach the JSON info file for your devices. Expected behavior If applicable, a clear and concise description of what you expected to happen. Screenshots Environment details: Environment (HASSIO, Raspbian, etc): Home Assistant version installed: Component version installed: Last know working version: LG device type and model with issue: LG devices connected (list): Output of HA logs Paste the relavant output of the HA log here. Retrying setup: ThinQ platform not ready 2022-12-27 11:32:19.099 WARNING (MainThread) [homeassistant.config_entries] Config entry 'LGE Devices' for smartthinq_sensors integration not ready yet: ThinQ platform not ready; Retrying in background Additional context Add any other context about the problem here. May be a temporary LG service down or there are new TOS to accept in mobile application. No one else who has this problem??? @thoompje, Please check you log for additional message. Based on information provided I can only suppose a problem related to LG, but if there are others errors in the log I can better investigate. @thoompje, Please check you log for additional message. Based on information provided I can only suppose a problem related to LG, but if there are others errors in the log I can better investigate. Hello, thanks for answering, i have debug log enabled but the only thing what i can see in the logs: 2022-12-27 16:15:29.100 ERROR (MainThread) [homeassistant] Error doing job: Unclosed client session 2022-12-27 16:15:29.103 ERROR (MainThread) [homeassistant] Error doing job: Unclosed connector 2022-12-27 16:15:49.099 WARNING (MainThread) [homeassistant.config_entries] Config entry 'LGE Devices' for smartthinq_sensors integration not ready yet: ThinQ platform not ready; Retrying in background 2022-12-27 16:15:49.113 ERROR (MainThread) [homeassistant] Error doing job: Unclosed client session 2022-12-27 16:15:49.117 ERROR (MainThread) [homeassistant] Error doing job: Unclosed connector I don't see any specific error, really seems a problem in LG. Did you check your mobile application? Everything is working there? I don't see any specific error, really seems a problem in LG. Did you check your mobile application? Everything is working there? Don't see any errors in the app, everything is working fine. Is there an workaround or things to try? Woulb be useful if you could enable debug and provide log. Or you could just try to reconfigure the integration. Woulb be useful if you could enable debug and provide log. Or you could just try to reconfigure the integration. Many thanks for replying! I enabled debug logging but there is no debug log. I deleted the intergration and added it again, but no luck see screenshot. How do you enable the debug log? How do you enable the debug log? I enabled the debug like this Also failing for me since last update. 
I see this log `Logger: homeassistant.components.websocket_api.http.connection Source: custom_components/smartthinq_sensors/wideq/core_async.py:345 Integration: Home Assistant WebSocket API (documentation, issues) First occurred: 23:29:23 (5 occurrences) Last logged: 23:33:41 [546951289568] Traceback (most recent call last): File "/usr/src/homeassistant/homeassistant/components/websocket_api/commands.py", line 200, in handle_call_service await hass.services.async_call( File "/usr/src/homeassistant/homeassistant/core.py", line 1745, in async_call task.result() File "/usr/src/homeassistant/homeassistant/core.py", line 1782, in _execute_service await cast(Callable[[ServiceCall], Awaitable[None]], handler.job.target)( File "/usr/src/homeassistant/homeassistant/helpers/entity_component.py", line 213, in handle_service await service.entity_service_call( File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 678, in entity_service_call future.result() # pop exception if have File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 943, in async_request_call await coro File "/usr/src/homeassistant/homeassistant/helpers/service.py", line 715, in _handle_entity_call await result File "/config/custom_components/smartthinq_sensors/climate.py", line 278, in async_set_hvac_mode await self._device.power(False) File "/config/custom_components/smartthinq_sensors/wideq/ac.py", line 704, in power await self.set(keys[0], keys[1], key=keys[2], value=op_value) File "/config/custom_components/smartthinq_sensors/wideq/ac.py", line 859, in set await super().set( File "/config/custom_components/smartthinq_sensors/wideq/device.py", line 551, in set await self._set_control( File "/config/custom_components/smartthinq_sensors/wideq/device.py", line 512, in _set_control await self._client.session.set_device_v2_controls( File "/config/custom_components/smartthinq_sensors/wideq/core_async.py", line 1163, in set_device_v2_controls res = await self.post2(cmd_path, payload) File "/config/custom_components/smartthinq_sensors/wideq/core_async.py", line 1000, in post2 return await self._auth.gateway.core.lgedm2_post( File "/config/custom_components/smartthinq_sensors/wideq/core_async.py", line 334, in lgedm2_post return self._manage_lge_result(out, is_api_v2) File "/config/custom_components/smartthinq_sensors/wideq/core_async.py", line 345, in _manage_lge_result raise API2_ERRORScode custom_components.smartthinq_sensors.wideq.core_exceptions.NotLoggedInError ` Think I solved, disabled the integration, restarted HA and re-enabled it, then it started to work, at least for now. Maybe the crash condition has not been met yet. Think I solved, disabled the integration, restarted HA and re-enabled it, then it started to work, at least for now. Maybe the crash condition has not been met yet. I disabled it and enabled it again, but no luck. Also de deletion and adding again no luck. Nothing is working at all. What can I do more to investigate the issue and solve it at the end? :) @modem The most strange thing is when i going to add the integration again it says: Success! What i also did this morning is redownload from hacs the repository and restarted home assistant and trying to add it again. Same result. Also with 1.26.1 and 1.27.1. Hopefully anyone has an solution for this, we can't control the AC's now from Home Assistant We can close this one, found it! 
In the core_async.py i searched for the OAUTH url OAUTH_REDIRECT_URI = f"https://kr.m.lgaccount.com/{OAUTH_REDIRECT_PATH}" Suddenly my Unifi dream machine blocked this URL because i am using country control, i changed nothing so its really strange that it suddenly stopped working. I added South Korea to the allow list temporary so its working again. This is interesting, also because this url should not be used anymore with last release. @thoompje, can you confirm that you are using last release? This is interesting, also because this url should not be used anymore with last release. @thoompje, can you confirm that you are using last release? Really interesting then. I allowed South Korea on the Unifi and directly after it, it started working again. I think that there are some URL returned by the API that refer to KR, not that specific URL in the constant. But this is really a good catch, I should add some comment about this in readme. Thanks,👍 I think that there are some URL returned by the API that refer to KR, not that specific URL in the constant. But this is really a good catch, I should add some comment about this in readme. Thanks,👍 Cheers, you welcome! Have a good NYE!
gharchive/issue
2022-12-27T10:34:19
2025-04-01T06:39:50.554094
{ "authors": [ "modem", "ollo69", "thoompje" ], "repo": "ollo69/ha-smartthinq-sensors", "url": "https://github.com/ollo69/ha-smartthinq-sensors/issues/451", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
719625329
[Feature Request] Support for LG Ovens? Any plans for adding oven support to this integration? I can assist if necessary. I have a Signature Dual Oven (Gas, Elec). Does wideq support this? Potentially it is possible; wideq is the protocol, so it is just a matter of managing the properties that come from the payload. The real complexity is implementing it without having an oven device. I put this in the todo list, but my priority here is to implement controls so as to be able to add a climate entity and control existing sensors. Oven is now supported, I close this request
gharchive/issue
2020-10-12T19:56:02
2025-04-01T06:39:50.556365
{ "authors": [ "KTibow", "KungFuJoe23", "ollo69" ], "repo": "ollo69/ha-smartthinq-sensors", "url": "https://github.com/ollo69/ha-smartthinq-sensors/issues/98", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1209102609
Create incident_cls.pt using GH Actions We need to shift the incident_cls.pt creation to an automatic process, maybe using GitHub Actions, and this process should use the current state of the database, rather than a manually downloaded incidents.csv file This is being developed on branch "4-reduce-size-of-incident-cls"
gharchive/issue
2022-04-20T03:05:08
2025-04-01T06:39:50.575474
{ "authors": [ "olsonadr" ], "repo": "olsonadr/nlp-lambdas", "url": "https://github.com/olsonadr/nlp-lambdas/issues/6", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
69044690
uncompleted extra_data for access_token, code, and expires in Google+ Hi, I'm trying to authenticate in my web server from a SPA application using google+. I only have one problem when the user is authenticated. I don't get the extra data in social_auth_usersocialauth table, I mean I'm getting in the extra_data field {"code": null, "access_token": "", "expires": null, "user_id": "10181231375248323"}. the way i'm doing it is the next. In my SPA i'm doing gapi.auth.signIn({ "clientid" : "872066331548-m61m12384ii2gs5b82tbb2qlipesum6ee.apps.googleusercontent.com", "cookiepolicy" : "single_host_origin", "callback" : this.statusChangeCallback, "scope": "https://www.googleapis.com/auth/plus.login https://www.googleapis.com/auth/plus.profile.emails.read" }); In my web server the endpoint receive the access_token in the request that was giving by spa and do authentication api_view(['GET']) @permission_classes((permissions.AllowAny, )) @psa('social:complete') def register_by_access_token(request, backend): token = request.GET.get('access_token') user = request.backend.do_auth(access_token=token, backend=backend) if user: login(request, user) serialize = UserSerializer(user, context={'request': request}) return Response(serialize.data) else: return Response('ERROR') and the settings are AUTHENTICATION_BACKENDS = ( 'social.backends.facebook.FacebookAppOAuth2', 'social.backends.facebook.FacebookOAuth2', 'social.backends.google.GooglePlusAuth' 'django.contrib.auth.backends.ModelBackend', ) SOCIAL_AUTH_GOOGLE_PLUS_KEY = '872066331548-###################.apps.googleusercontent.com' SOCIAL_AUTH_GOOGLE_PLUS_SECRET = '######################' SOCIAL_AUTH_GOOGLE_PLUS_AUTH_EXTRA_ARGUMENTS = { 'access_type': 'offline', 'approval_prompt': 'auto' } SOCIAL_AUTH_GOOGLE_PLUS_SCOPE = [ 'https://www.googleapis.com/auth/plus.login', 'https://www.googleapis.com/auth/plus.emails.read' ] SOCIAL_AUTH_GOOGLE_PLUS_USE_UNIQUE_USER_ID = True SOCIAL_AUTH_PIPELINE = ( 'social.pipeline.social_auth.social_details', 'social.pipeline.social_auth.social_uid', 'social.pipeline.social_auth.auth_allowed', 'social.pipeline.social_auth.social_user', 'social.pipeline.user.get_username', # 'social.pipeline.mail.mail_validation', 'social.pipeline.social_auth.associate_by_email', 'social.pipeline.user.create_user', 'social.pipeline.social_auth.associate_user', 'social.pipeline.social_auth.load_extra_data', 'social.pipeline.user.user_details', 'social.pipeline.debug.debug' ) What i'm doing wrong?? i'm using the similar code to authenticate with facebook and the extra_data is completed. Some help is welcome. Thanks What version are you using? There was a fix related to access token storage when using AJAX authentication a few days ago. I'm using 0.2.6 the last The flow i need is based on server-side-flow as i saw there the client send an "authorization code" to the server and use it to obtain credential. The code is a one-time code that the server can exchange with Google's servers for an access token. Maybe this approach isn't supported yet in psa. This approach resembles the default server-side OAuth2 implementation, the user clicks the login button, get's redirected to the provider where it accepts, get's redirected back to the site with a code parameter, that's then exchanged for an access_token. In theory (this is not tested), you can call request.backend.auth_complete() in your view implementation.
gharchive/issue
2015-04-17T01:46:12
2025-04-01T06:39:50.588809
{ "authors": [ "omab", "zetahernandez" ], "repo": "omab/python-social-auth", "url": "https://github.com/omab/python-social-auth/issues/596", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
758821005
fixes typo in variable name - https_letsencrypt_enabled I noticed that the variable name at the end of the script has an underscore, while the actual variable that is checked in the script to conditionally run doesn't have an underscore. So, uncommenting the line at the end of the script and setting it to True doesn't actually do anything. Thanks for the fix! Travis failure is unrelated.
gharchive/pull-request
2020-12-07T20:23:51
2025-04-01T06:39:50.649870
{ "authors": [ "SteveO-sudo", "manics" ], "repo": "ome/prod-playbooks", "url": "https://github.com/ome/prod-playbooks/pull/297", "license": "BSD-2-Clause", "license_type": "permissive", "license_source": "github-api" }
2036060987
🛑 OME gate (443) is down In aea3a35, OME gate (443) ($OME_GATE) was down: HTTP code: 0 Response time: 0 ms Resolved: OME gate (443) is back up in 1d1f9d4 after 9 minutes.
gharchive/issue
2023-12-11T16:30:30
2025-04-01T06:39:50.655920
{ "authors": [ "snoopycrimecop" ], "repo": "ome/upptime", "url": "https://github.com/ome/upptime/issues/1411", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1903134238
🛑 OME demo server is down In e409fe9, OME demo server (https://demo.openmicroscopy.org) was down: HTTP code: 0 Response time: 0 ms Resolved: OME demo server is back up in 1bd8983 after 20 minutes.
gharchive/issue
2023-09-19T14:33:12
2025-04-01T06:39:50.658517
{ "authors": [ "snoopycrimecop" ], "repo": "ome/upptime", "url": "https://github.com/ome/upptime/issues/991", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2747615234
Add description prop in generateOpenApiSpec const swaggerSpec = await generateOpenApiSpec( { ImageFileDTO, NewOrderImageDTO, GetOrdersResponseDTO, PaginationRequestDTO, GetOrdersRequestApiDTO, OrderDTO, ShortOrderDTO, NewOrderApiDTO, UpdatedOrderFieldsApiDTO, ValidationErrorDTO, AaaApiErrorDTO, AaaServerActionErrorDTO, }, { securitySchemes: { BearerAuth: { type: 'http', scheme: 'bearer', }, }, security: [], } ); swaggerSpec.info.description = 'test description'; export { swaggerSpec }; generateOpenApiSpec does not have description prop. To have description, I have to set like that. Can we have it as a param of generateOpenApiSpec? Sure we can Maybe it's even better to have whole InfoObject prop. I know that right now title and version are fetched from package.json, but it'd be good to override them if InfoObject was provided with corresponding props.
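Until such an option exists, a minimal sketch of the workaround shown above, factored into a helper; the OpenApiSpec type below is just the standard OpenAPI info shape, treating it as mutable mirrors the snippet in the issue, and the helper name is hypothetical.

```typescript
// Sketch of the post-generation workaround from the issue: override info fields
// on the spec object returned by generateOpenApiSpec(). Whether title/version
// (currently read from package.json) are safe to override is an assumption.
type OpenApiSpec = {
  info: { title: string; version: string; description?: string };
  [key: string]: unknown;
};

export function withInfoOverrides(
  spec: OpenApiSpec,
  description: string,
  title?: string,
  version?: string,
): OpenApiSpec {
  spec.info.description = description;      // the field the issue asks to pass as a param
  if (title) spec.info.title = title;        // would override the package.json title
  if (version) spec.info.version = version;  // would override the package.json version
  return spec;
}

// Usage, mirroring the issue:
// const swaggerSpec = withInfoOverrides(await generateOpenApiSpec({ OrderDTO }), "test description");
```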
gharchive/issue
2024-12-18T11:56:14
2025-04-01T06:39:50.682641
{ "authors": [ "GorgeousPuree", "omermecitoglu" ], "repo": "omermecitoglu/next-openapi-json-generator", "url": "https://github.com/omermecitoglu/next-openapi-json-generator/issues/30", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1580001346
Can I use my own pretrained StyleGAN2-ADA? I have trained a StyleGAN2-ADA generator on my own images and now trying different inversion methods. My first question after reading your paper is if I can use my pretrained G and implement your code to train an encoder or if should I retrain a StyleGAN2 using psp framework. I have trained my Generator using [https://github.com/NVlabs/stylegan3]. but for StyleGAN2-ADA not for version 3. As far as I understood, your work is based on psp which maps Z into W+ in contrast to the original StyleGAN2 which is on W space. So I suppose I need to train a psp first on my image and then use the trained G and follow your method. Would be great if you can clarify this for me. Thank you very much in advance for your help. May I know if you have any progress in your research? I'm facing a similar situation and hope to seek your assistance.
gharchive/issue
2023-02-10T16:33:09
2025-04-01T06:39:50.686739
{ "authors": [ "hamediut", "lzh5531773" ], "repo": "omertov/encoder4editing", "url": "https://github.com/omertov/encoder4editing/issues/95", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1051336158
Fix relayer event query Re-structure the event query logic in the Relayer to avoid situations where messages are not relayed until the service is restarted. I have also applied the same logic to the relevant section of the message-relayer-fast. Codecov Report Merging #146 (46310eb) into develop (dc7f2c1) will increase coverage by 17.79%. The diff coverage is n/a. @@ Coverage Diff @@ ## develop #146 +/- ## ============================================ + Coverage 71.73% 89.52% +17.79% ============================================ Files 68 33 -35 Lines 2257 1079 -1178 Branches 335 150 -185 ============================================ - Hits 1619 966 -653 + Misses 638 113 -525 Flag Coverage Δ batch-submitter ? contracts 89.52% <ø> (ø) core-utils ? data-transport-layer ? message-relayer ? Flags with carried forward coverage won't be shown. Click here to find out more. Impacted Files Coverage Δ ...kages/batch-submitter/src/batch-submitter/index.ts ...kages/data-transport-layer/src/utils/validation.ts packages/batch-submitter/src/index.ts ...itter/src/batch-submitter/state-batch-submitter.ts ...ices/l1-ingestion/handlers/state-batch-appended.ts packages/message-relayer/src/relay-tx.ts ...ckages/data-transport-layer/src/utils/contracts.ts packages/core-utils/src/coders/index.ts ...h-submitter/src/batch-submitter/batch-submitter.ts packages/data-transport-layer/src/utils/index.ts ... and 25 more Continue to review full report at Codecov. Legend - Click here to learn more Δ = absolute <relative> (impact), ø = not affected, ? = missing data Powered by Codecov. Last update dc7f2c1...46310eb. Read the comment docs.
gharchive/pull-request
2021-11-11T20:28:29
2025-04-01T06:39:50.711438
{ "authors": [ "codecov-commenter", "mmontour1306" ], "repo": "omgnetwork/optimism-v2", "url": "https://github.com/omgnetwork/optimism-v2/pull/146", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
423793062
Update copyright ditto test me Build Passed :ok_hand: You can safely merge 'update-copyright' into 'master'. 👍
gharchive/pull-request
2019-03-21T15:30:53
2025-04-01T06:39:50.713986
{ "authors": [ "DavidMansolino", "cyberbotics-jenkins" ], "repo": "omichel/webots", "url": "https://github.com/omichel/webots/pull/298", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
573068407
Upgrade truffle to newer version Description Seems like truffle new release 5.1.15 might solve the previous unstable migration issue: In this PR: https://github.com/trufflesuite/truffle/pull/2808 of the change log, it claims to solve: https://github.com/trufflesuite/truffle/issues/2257 Should give it a try and make our DevOps happy 😛 This would make contract deployments less expensive! Currently we have many failures due to the issue and it's costing us many ETH for re-trying the deployments. Is there any way we can try the new version?
gharchive/issue
2020-02-28T23:46:30
2025-04-01T06:39:50.722226
{ "authors": [ "arthurk", "boolafish", "jbunce" ], "repo": "omisego/plasma-contracts", "url": "https://github.com/omisego/plasma-contracts/issues/592", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
302485629
Clean and update README.md Closes #39 (Almost) complete rewrite of README.md with more complete CLI documentation. Comments, edits, reviews welcome. Tried install with this Readme. Got stucked a little bit at: leveldb solidity installation I think leveldb should be added to pre-installation dependancy as well. And it would be nice to have some solidity installation guide for mac as well (like the one in another PR). The link for Solidity 0.4.18 doesn't have mac download : ( @boolafish Thanks for the feedback, I went ahead and updated the README to include better install notes for leveldb and solidity. Whoops, you are correct! Thanks. Thanks! Fixed. @kfichter Read it over, and it looks good! Merging.
gharchive/pull-request
2018-03-05T22:14:40
2025-04-01T06:39:50.726041
{ "authors": [ "DavidKnott", "boolafish", "kfichter" ], "repo": "omisego/plasma-mvp", "url": "https://github.com/omisego/plasma-mvp/pull/45", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1684351129
[concept] AI Copilot Currently the idea of copilot in tools are gaining popularity I was wondering what mind mapping folks think about this Below is a video form superus which demonstrates the concept Watch Video Another is a todo list that does itself for you Personally when I run my-mind locally I set up a process called, aimon.sh, which monitors changes in a map that I make when I hit save (saved to nosDAV and it will run a command give myself a payment reward use AI to expand questions use NosDAV pubsub to replace firebase with an open system for realtime, collaborative, payments enabled maps I dont yet have todolists that do themselves, but I think that would be kind of cool, and head towards AGI for mind maps. I am wondering if there is an appetite for any of these features. I am considering modifying my-mind to muck around with anythingllm which offers a pretty easy to use API for this purpose. I reckon I could probably make some kind of plugin that hooks into subscribers in pubsub.ts (e.g. "item-change" events) to listen for changes to nodes. I'd probably keep it in a separate project and not pollute the lightweight code here. Thinking I might expose expose subscribers on a global scope rather than a module scope so I can access it from another script without needing to create some kind of plugin system. There already is such code in there for firebase updates. However firebase is proprietary. I prefer a pub/sub solution over websockets called nostr: https://nostr.how/en/what-is-nostr Which does pub/sub over websockets. More background on nostr here. And it might be possible to apply for an open source grant to integrate my mind and nostr here Yeah, I can see this in the firebase one: pubsub.subscribe("firebase-list", this); pubsub.subscribe("firebase-change", this); ... case "firebase-change": if (data) { pubsub.unsubscribe("item-change", this); app.currentMap.mergeWith(data); pubsub.subscribe("item-change", this); } else { /* FIXME */ console.log("remote data disappeared"); } That's a good place to start. Thanks. Yeah I like the idea of your nostr pub/sub backend. Probably go with something like that to keep it easy and use the firebase code for a bit of help. @rquast awesome! I would be happy to run a nostr relay if you want to test. But there are a few 100 here A nostr relay also be run with a single line of code if you have docker: # docker pull scsibug/nostr-rs-relay docker run -p 4445:8080 scsibug/nostr-rs-relay
gharchive/issue
2023-04-26T06:18:06
2025-04-01T06:39:50.772353
{ "authors": [ "arkin0x", "melvincarvalho", "rquast" ], "repo": "ondras/my-mind", "url": "https://github.com/ondras/my-mind/issues/174", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
87841836
Loads a blank screen in every browser I just pulled down the new changes and it doesn't load in any browser. I reverted to sha c831747 and it works in Firefox. Damn. Well, I will investigate + fix - but not before Monday, as I will be gone for the whole weekend. The problem is with toggle.js. It is included in the index.html, but not actually there. Yeah, I forgot to git add that file after splitting the app into individual components. It should be noted, though, that the latest commit (the one that does not really work) does not bring any radically new features (and therefore one has no real motivation to pull it). It is scaffolding for a reworked storage system that will allow greater flexibility with remote storage servers. Fixed in https://github.com/ondras/wwwsqldesigner/commit/dcea4c24e5efcb66cefb18eec8e7c9b596000867.
gharchive/issue
2015-06-12T20:30:54
2025-04-01T06:39:50.775006
{ "authors": [ "CSiPet2", "nfuller52", "ondras" ], "repo": "ondras/wwwsqldesigner", "url": "https://github.com/ondras/wwwsqldesigner/issues/199", "license": "bsd-3-clause", "license_type": "permissive", "license_source": "bigquery" }
177125080
Some Warning and a Fatal Error I cloned the repo into the Composer/vendor folder. I run on Command Prompt php converter.php "C:/User/..../..../ Php7CodeFolder" "C:/Users/.../..../Php5CodeFolder" It give me some "Warning" --> mkdir(): Invalid argument in C:\Users..........\Composer\vendor\php7backport\src\Bouda\Php7Backport\DirectoryBackporter.php on line 31 and a "Fatal Error" --> Uncaught Error: Class 'PhpParser\Lexer\Emulative' not found in C:\Users.........\Composer\vendor\php7backport\src\Bouda\Php7Backport\Backporter.php:21 I don't understand why and how to resolve these errors. Can you give me some help? Hi Wakasaki, I just returned from holiday, sorry for the delay. Uncaught Error: Class 'PhpParser\Lexer\Emulative' not found in Did you run composer install? mkdir(): Invalid argument in I will look into it. It may be a bug. It is possible it cannot create nested directories.
gharchive/issue
2016-09-15T09:06:01
2025-04-01T06:39:50.778889
{ "authors": [ "Wakasaki", "ondrejbouda" ], "repo": "ondrejbouda/php7backport", "url": "https://github.com/ondrejbouda/php7backport/issues/10", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1597363421
should L0 have in-order queues? L0 has had from the beginning only out-of-order queues, and layers above, like SYCL, have implemented in-order queues by using synchronization primitives like events and barriers. Although using event or barriers is functionally correct, performance may vary on different generations of the same devices depending on the method used. One way to fix this is to have layers above like SYCL to implement device-dependent in-order queues, however, other more practical option, is to let the L0 driver implementation to directly implement the in-order queue as efficient as possible. are there other scenarios where in-order queues are desired? or are there any cons on adding this to the spec? From my point of view it should. Driver knowing that it works with in order queue is beneficial and it can apply optimizations that are not possible without this knowledge. In order execution is also well established standard in the industry, so a lot of applicaitons uses it. Having efficient driver implementation for this is paramount. I concur with @MichalMrozek that in-order can be seen mostly as a defacto. I was mostly thinking to provide a case where the SYCL queue construction is in-order and if the underlying L0 queue objects adopt some form of in-order fashion same as the upstream SYCL queue: DAG construction by the scheduler, the submission of kernels, kernel-scheduling hints can default to oldest-first, dependency tracking, barriers/fences and their associated book-keeping might get simpler, might enable optimizations. In reg to optimizations: I am not certain if it is beneficial for L0 to provide the optimizations or the driver to incorporate these. More importantly helps applications which has hundreds of "low-runtime" kernel submissions in-order, math libraries and all the projects that are porting from CUDA to SYCL and adopt SYCL in-order as a the reason are some uses cases. Since these types of kernels are flagged to be in-order, any book-keeping overhead with out-of-order can be viewed as an additional overhead. we can have a flag either per queue or per device. logically, it should be per queue, since it would give most flexibility to the application. in practice, it may introduce extra complexity, so having it global per device would be easier for the implementation. If an application wants "out of order", it would need to use separate queues. In Level Zero entity that needs to ensure in-order is command list , not command queue. Hence we need command list property to enforce in order start & completion of appended commands.
gharchive/issue
2023-02-23T18:49:02
2025-04-01T06:39:50.785328
{ "authors": [ "MichalMrozek", "abagusetty", "jandres742", "pboudier09" ], "repo": "oneapi-src/level-zero-spec", "url": "https://github.com/oneapi-src/level-zero-spec/issues/81", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
981554052
Intel Extension for Scikit-learn getting started sample Adding a New Sample(s) Description New intel extension for a scikit-learn sample that does digit recognition using SVM classifier. Checklist Administrative [x] Review sample design with the appropriate Domain Expert: Rachel Oberman [x] If you have any new dependencies/binaries, inform the oneAPI Code Samples Project Manager: @JoeOster Code Development [x] Implement coding guidelines and ensure code quality. see wiki for details [x] Adhere to readme template [x] Enforce format via clang-format config file [x] Adhere to sample.json specification. https://github.com/oneapi-src/oneAPI-samples/wiki/sample-json-specification [x] Ensure/create CI test configurations for sample (ciTests field) https://github.com/oneapi-src/oneAPI-samples/wiki/sample-json-ci-test-object [x] Run jsonlint on sample.json to verify json syntax. www.jsonlint.com Security and Legal [x] N/A OSPDT Approval (see @JoeOster for assistance) [ ] Bandit Scans (Python only) [X] Virus scan Review [ ] Review readme with Tom Lenth(@tomlenth) and/or Joe Oster(@JoeOster) [X] Tested using Dev Cloud when applicable @tomlenth please review the readme @asirvaiy - Please supply the bandit report Please note conda is failing on test machines, no eta on fix, merging despite the issues as its a env issue
gharchive/pull-request
2021-08-27T19:58:26
2025-04-01T06:39:50.792630
{ "authors": [ "JoeOster", "asirvaiy" ], "repo": "oneapi-src/oneAPI-samples", "url": "https://github.com/oneapi-src/oneAPI-samples/pull/640", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2013459208
Update Listed required Python version As of this time, ansible==9.0.1 requires Python >= 3.10: `sudo apt-get install python3.10` and `sudo apt-get install python3.10-venv`
gharchive/issue
2023-11-28T02:10:53
2025-04-01T06:39:50.840681
{ "authors": [ "SinLess-Games" ], "repo": "onedr0p/flux-cluster-template", "url": "https://github.com/onedr0p/flux-cluster-template/issues/1048", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1838480097
bring setup steps under one L2 heading For easier reference and to make room for other L2 headings such as "Usage" and "FAQs". Maybe instead of Step we use a different name since there would be substeps and subsubsteps? We could use Stage, but there are no L4 headings that need to be referred to as substeps. Your call. The substeps would be the ones with letters, e.g. 2a, 2b, 2c, etc.. Stage or section is probably fine unless you come up with a better name Stage sounds good. Section sounds unnatural to me, e.g. "There are 6 sections to getting a Flux-managed cluster up and running." One note on substeps: a good way to refer to them would be "Step 2b". And subsubsteps: "Step 3c(iii)"
gharchive/pull-request
2023-08-07T00:26:32
2025-04-01T06:39:50.842941
{ "authors": [ "alex-matthews", "onedr0p" ], "repo": "onedr0p/flux-cluster-template", "url": "https://github.com/onedr0p/flux-cluster-template/pull/893", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2485300043
🛑 Identity Plus (EU) is down In f0abbe8, Identity Plus (EU) (https://eu.onetimesecret.com/) was down: HTTP code: 0 Response time: 0 ms Resolved: Identity Plus (EU) is back up in b1a2fc5 after 25 minutes.
gharchive/issue
2024-08-25T14:34:43
2025-04-01T06:39:50.853912
{ "authors": [ "delano" ], "repo": "onetimesecret/status", "url": "https://github.com/onetimesecret/status/issues/61", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
174232386
undefined is not an object(evaluating 'container.setState') Hai, I run your demo project,and check in device it working fine, But in index.ios.js, i added the some code for navigator, added some form code init.when click on signIn button i want to show the webRtc screen for that screen i just copied the main.js file code in RTcWebdemo project and added in my code, here is my code index.ios.js if (!window.navigator.userAgent) { window.navigator.userAgent = "react-native"; } 'use strict'; import React,{Component} from 'react'; var SignIn = require('./SignIn'); import { AppRegistry, StyleSheet, Navigator, AsyncStorage, } from 'react-native'; class RCTWebRTCDemo extends Component { render() { return ( <Navigator initialRoute={{ component:SignIn //component declaration }} configureScene={(route) => ({ ...Navigator.SceneConfigs.HorizontalSwipeJump, gestures: false })} renderScene={(route, navigator) =>{ return <route.component navigator={navigator} {...route.passProps} />; }}/> ); } }; signIn.js onSignin(){ this.props.navigator.push({ component: main }) } When click on signIn button of the below image calling the above function in main component i added the main.js file code but when i run project before clicking on signIn button, It shows some error like this please give me suggestions that how to resolve this make sure you set container = this in the main WebRTCDEMO Component's ComponentDidMount. my suggestion would be: try to understand demo main.js logic, and applied it in to your own logic. Hi @shaikhussian I am also facing same issue . If u find the solution of this issue then please help me to resolve this . Thanks Would someone have the time to write the demo using React.Component instead of React.createClass and make it easier to understand? The official examples from OpenWebRTC are pretty good. @davidperrenoud Have you worked on this . if yes then pls provide solution to above issue. Thanks Yes, the easiest solution (not the best one) would be to build your app like the RCTWebRTCdemo with React.createClass: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L232 As @zxcpoiu said, you can then map the container variable to this in componentDidMount: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L247 (A prettier solution would be to put the sockets and WebRTC functions in two separate files and the React.Component in another file.) Yes, i did same . " Yes, the easiest solution (not the best one) would be to build your app like the RCTWebRTCdemo with React.createClass: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L232 As @zxcpoiu https://github.com/zxcpoiu said, you can then map the container variable to this in componentDidMount: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L247 " but still getting same issue when i started app . On Thu, Sep 22, 2016 at 7:57 PM, David Perrenoud notifications@github.com wrote: Yes, the easiest solution (not the best one) would be to build your app like the RCTWebRTCdemo with React.createClass: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L232 As @zxcpoiu https://github.com/zxcpoiu said, you can then map the container variable to this in componentDidMount: https://github.com/oney/RCTWebRTCDemo/blob/master/main.js#L247 (A prettier solution would be to put the sockets and WebRTC functions in two separate files and the React.Component in another file.) — You are receiving this because you commented. 
Reply to this email directly, view it on GitHub https://github.com/oney/react-native-webrtc/issues/128#issuecomment-248919493, or mute the thread https://github.com/notifications/unsubscribe-auth/AQlFJGis5hr4NtjZhLUFnKSdFNnvgSwxks5qspBDgaJpZM4Jxb09 . It simply means you did not set it correctly. it's a JS/react-native coding syntax issue. as @davidperrenoud you may want to try export your component, maybe as a singleton, and manipulate it in another file ( your main app.js with main logic ) Hi I am also getting the same issue. Please tell how to solve this one.. Put your code "socket.on" into componantdidMount function . On Tue, Oct 18, 2016 at 4:28 PM, swaroopa94 notifications@github.com wrote: Hi I am also getting the same issue. Please tell how to solve this one.. — You are receiving this because you commented. Reply to this email directly, view it on GitHub https://github.com/oney/react-native-webrtc/issues/128#issuecomment-254474492, or mute the thread https://github.com/notifications/unsubscribe-auth/AQlFJD3gzXjFfRpE33a2ocmilyzK59BKks5q1KZdgaJpZM4Jxb09 . this is how I solved the problem: https://stackoverflow.com/questions/40109937/undefined-is-not-an-object-evaluating-container-setstate/40974201#40974201 thanks for the link @LoseTheQuit for anyone who interest in app structure, see #167 as well Nothing actionable for the plugin itself, closing.
gharchive/issue
2016-08-31T09:58:17
2025-04-01T06:39:50.871722
{ "authors": [ "LoseTheQuit", "davidperrenoud", "rohitgupta87410", "saghul", "shaikhussian", "swaroopa94", "zxcpoiu" ], "repo": "oney/react-native-webrtc", "url": "https://github.com/oney/react-native-webrtc/issues/128", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
314575522
muting/unmuting remote streams updates mediaStreamTrackSetEnabled to work for remote tracks as well as by local tracks. so setting .enabled on a remote track in js will mute / unmute the track if it is audio. applies to issues #338 and #179 @ippa yes this solution worked on 1.58.3 for muting remote audio tracks (local tracks already worked). I updated to fix the merge conflict in 1.63.0. With this change just call find the audio track in javascript (e.g. track = stream.getAudioTracks()[0]) and set track.enabled = false or track.enabled = true. It's not tested with video tracks however. I did see that in the jitsi fork there's a similar commit that can mute video tracks too. https://github.com/jitsi/react-native-webrtc/commit/49f678eed157ba95cbd7ec8bc19af1b002bb1513 Was wondering if perhaps that or the effort for M63 might have been keeping this from being merged. I'm coming across an issue with this where if I mute both the local and remote tracks then the audiosession on ios gets deactivated, it then gets activated again when you unmute but the audio never comes back. Investigating some more Sorry, this has fallen through the cracks again :-/ Can you please rebase? I'll include it for the 69 release. Fixed the conflict, should be good to go now. Actually it needs one fix first Hey @weberjc, seems like the iOS part is missing now? @saghul it looks like the ios part is still there, however i'm not in a position to test it at the moment. It gets those remoteTrack vars from the other files like WebRTCModule+RTCPeerConnection.m. I see it again now. I'll give it a test and merge it. Is there a reason why this PR is closed or some alternative solution has been added to library itself?
gharchive/pull-request
2018-04-16T09:46:57
2025-04-01T06:39:50.876741
{ "authors": [ "akashkrshukla", "danjenkins", "saghul", "weberjc" ], "repo": "oney/react-native-webrtc", "url": "https://github.com/oney/react-native-webrtc/pull/440", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
2721205856
eth_getLog() returns null instead of list if there are no transactions example: curl -XPOST 'https://testnet.evm.nodes.onflow.org' -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1318572,"method":"eth_getLogs","params":[{"blockHash":"0x4967b45f073cb74e750536d41fdb70ae3923d8a151abdbef25e4fd77d27fe253","address": ["0x0000000071727de22e5e9d8baf0edac6f37da032"],"topics": ["0x2da466a7b24304f47e87fa2e1e5a81b9831ce54fec19055ce277ca2f39ba42c4","0x49628fd1471006c1482da88028e9ce4dbb080b815c9b0344d39e5a8e6ec1419f","0xd1c19fbcd4551a5edfb66d43d2e337c04837afda3482b42bdf569a8fccdae5fb"]}]}' | jq return: { "jsonrpc": "2.0", "id": 1318572, "result": null } Problem should return empty list instead of null Completed by https://github.com/onflow/flow-evm-gateway/pull/697
gharchive/issue
2024-12-05T19:14:05
2025-04-01T06:39:50.889463
{ "authors": [ "j1010001", "m-Peter" ], "repo": "onflow/flow-evm-gateway", "url": "https://github.com/onflow/flow-evm-gateway/issues/695", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
760635161
invalid contract: Checking failed, not very helpfull error message Instructions When trying to deploy a contract that used to work https://github.com/versus-flow/auction-flow-contract/blob/upgrade-flow/contracts/Auction.cdc i now get an error that is not helpfull Problem The error message says: DEBU[0079] ️✉️ Transaction submitted txID=468e7ba9ec94691de9d59303dc135150fe83dfef1859c61ccde4d91287673f48 ERROR Execution failed: invalid contract: Checking failed WARN[0079] ❗ Transaction reverted ``` ### Steps to Reproduce 1. Check out https://github.com/versus-flow/auction-flow-contract/tree/upgrade-flow 2. Use v0.11.2 of the flow emulator 3. In one terminal run ```make emulator``` 4. In another terminal run ```make demo``` ### Acceptance Criteria There is an error message thrown that makes it easier to understand what is wrong with the contract moved to cadence repo. Thanks @bjartek!
gharchive/issue
2020-12-09T20:09:58
2025-04-01T06:39:50.891763
{ "authors": [ "bjartek", "psiemens" ], "repo": "onflow/flow-go-sdk", "url": "https://github.com/onflow/flow-go-sdk/issues/124", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
479765643
Language support for F# and Elixir? I'm about to purchase a License for onivim2 :D Looks so interesting off the bat but I have some questions such as support for 2 FP languages that I use. Count on me to chip in helping for issues, I like the ReasonML/Ocaml language ;) Currently, the only real language support we have is syntax highlights. We don't bundle any for F# or Elixir, but it is just as simple as copying the VSCode ones over in the extensions folder. Once we start to get LSP and code extension support, then it will broadly be the same. There won't be any out of the box support, but there will be the (hopefully) full set of code extensions that work over there, as well as some of the vim ones, so installing a plugin will give as much support as you'd reasonably expect from those two editors.
gharchive/issue
2019-08-12T17:36:39
2025-04-01T06:39:50.914102
{ "authors": [ "CrossR", "Kavignon" ], "repo": "onivim/oni2", "url": "https://github.com/onivim/oni2/issues/647", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1422668398
Upgrade ONNX-MLIR to ONNX v1.12.0 release. @AlexandreEichenberger @chentong319 @hamptonm1 @cjvolzka ONNX release v1.12.0 has been made available earlier this year. Detail for the updates included are described on the ONNX releases web page https://github.com/onnx/onnx/releases. The ai.onnx opset version increased to 17 with the new operators: SequenceMap, LayerNormalization, DFT, HannWindow, HammingWindow, BlackmanWindow, MelWeightMatrix, and STFT. The Scan operator was updated to remove an unused type constraint, but the version remains as opset 16. Also included in the release are shape inference enhancements, bug fixes, infrastructure improvements, documentation updates, and wheel changes. Let’s use this issue is to discuss moving to ONNX v1.12.0. We plan on working on this without initially providing support for the new operators. Note that the TreeEnsembleClassifier and TreeEnsembleRegressor operators show up as version 1 in ONNX-MLIR but were updated to version 3 in ONNX v1.11.0. The SupportedONNXOps-cpu.md document at https://github.com/onnx/onnx-mlir/blob/main/docs/SupportedONNXOps-cpu.md states that TreeEnsembleClassifier and TreeEnsembleRegressor are unsupported. Is the document correct? Should these be updated to version 3 in ONNX-MLIR when we move ONNX to v1.12.0? @mikeessen That is correct, we currently don't have code that lower these ops to CPU reference code. We typically mention a version only for the ops we support. If some of these ops are important to your projects, we encourage folks to contribute their implementations. @mikeessen That is correct, we currently don't have code that lower these ops to CPU reference code. We typically mention a version only for the ops we support. If some of these ops are important to your projects, we encourage folks to contribute their implementations. This looks good to me @mikeessen! Thanks for putting this together. Also @cjvolzka is it important to support these ops that @mikeessen mentioned or are we fine with what we are inheriting from onn-xmlir ? PR #1835 added the new Ops into onnx-mlir's ONNX dialect definition. Further implementation support is needed PR #1835 added the new Ops into onnx-mlir's ONNX dialect definition. Further implementation support is needed Thanks for the information.... I am assuming @mikeessen will still update the documentation for this? @hamptonm1 @chentong319 Yes, I will update SupportedONNXOps-cpu.md for this. @hamptonm1 @chentong319 Yes, I will update SupportedONNXOps-cpu.md for this. FYI @philass so you do not have to do duplicate work :) Closing issue as completed as onnx-mlir has moved to onnx 1.12
gharchive/issue
2022-10-25T15:34:09
2025-04-01T06:39:50.934626
{ "authors": [ "AlexandreEichenberger", "chentong319", "cjvolzka", "hamptonm1", "mikeessen" ], "repo": "onnx/onnx-mlir", "url": "https://github.com/onnx/onnx-mlir/issues/1808", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1761792238
compile error ╰─ cmake --build . -j 1 ─╯ [1/696] Running gen_proto.py on onnx/onnx.in.proto Processing /Users/unicorn/workspace/onnx-mlir/third_party/onnx/onnx/onnx.in.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.proto3 generating /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx_pb.py [3/696] Running gen_proto.py on onnx/onnx-operators.in.proto Processing /Users/unicorn/workspace/onnx-mlir/third_party/onnx/onnx/onnx-operators.in.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-operators-ml.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-operators-ml.proto3 generating /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx_operators_pb.py [5/696] Running gen_proto.py on onnx/onnx-data.in.proto Processing /Users/unicorn/workspace/onnx-mlir/third_party/onnx/onnx/onnx-data.in.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-data.proto Writing /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-data.proto3 generating /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx_data_pb.py [7/696] Building CXX object third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o FAILED: third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o /usr/bin/c++ -DONNX_API="attribute((visibility("default")))" -DONNX_ML=1 -DONNX_NAMESPACE=onnx -D_DEBUG -D_GLIBCXX_ASSERTIONS -D_LIBCPP_ENABLE_ASSERTIONS -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -I/Users/unicorn/workspace/llvm-project/llvm/include -I/Users/unicorn/workspace/llvm-project/build/include -I/Users/unicorn/workspace/llvm-project/mlir/include -I/Users/unicorn/workspace/llvm-project/build/tools/mlir/include -I/Users/unicorn/workspace/onnx-mlir/build/third_party/onnx -isystem /opt/homebrew/include -fPIC -fvisibility-inlines-hidden -Werror=date-time -Werror=unguarded-availability-new -Wall -Wextra -Wno-unused-parameter -Wwrite-strings -Wcast-qual -Wmissing-field-initializers -Wimplicit-fallthrough -Wcovered-switch-default -Wno-noexcept-type -Wnon-virtual-dtor -Wdelete-non-virtual-dtor -Wsuggest-override -Wstring-conversion -Wmisleading-indentation -Wctad-maybe-unsupported -fdiagnostics-color -w -Wnon-virtual-dtor -g -O0 -std=gnu++11 -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk -fPIC -fvisibility=hidden -fvisibility-inlines-hidden -D_DEBUG -D_GLIBCXX_ASSERTIONS -D_LIBCPP_ENABLE_ASSERTIONS -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS -MD -MT third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o -MF third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o.d -o third_party/onnx/CMakeFiles/onnx_proto.dir/onnx/onnx-ml.pb.cc.o -c /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:11: /opt/homebrew/include/google/protobuf/port_def.inc:205:1: error: static_assert failed due to requirement '201103L >= 201402L' "Protobuf only supports C++14 and newer." 
static_assert(PROTOBUF_CPLUSPLUS_MIN(201402L), "Protobuf only supports C++14 and newer."); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:130: In file included from /opt/homebrew/include/google/protobuf/stubs/common.h:44: In file included from /opt/homebrew/include/absl/strings/string_view.h:39: In file included from /opt/homebrew/include/absl/base/attributes.h:37: In file included from /opt/homebrew/include/absl/base/config.h:86: /opt/homebrew/include/absl/base/policy_checks.h:79:2: error: "C++ versions less than C++14 are not supported." #error "C++ versions less than C++14 are not supported." ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:130: In file included from /opt/homebrew/include/google/protobuf/stubs/common.h:46: In file included from /opt/homebrew/include/google/protobuf/stubs/port.h:45: /opt/homebrew/include/google/protobuf/port_def.inc:205:1: error: static_assert failed due to requirement '201103L >= 201402L' "Protobuf only supports C++14 and newer." static_assert(PROTOBUF_CPLUSPLUS_MIN(201402L), "Protobuf only supports C++14 and newer."); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:130: In file included from /opt/homebrew/include/google/protobuf/stubs/common.h:56: /opt/homebrew/include/google/protobuf/port_def.inc:205:1: error: static_assert failed due to requirement '201103L >= 201402L' "Protobuf only supports C++14 and newer." static_assert(PROTOBUF_CPLUSPLUS_MIN(201402L), "Protobuf only supports C++14 and newer."); ^ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:132: In file included from /opt/homebrew/include/absl/log/absl_check.h:38: In file included from /opt/homebrew/include/absl/log/internal/check_impl.h:19: In file included from /opt/homebrew/include/absl/log/internal/check_op.h:37: In file included from /opt/homebrew/include/absl/log/internal/strip.h:24: In file included from /opt/homebrew/include/absl/log/internal/log_message.h:41: In file included from /opt/homebrew/include/absl/log/log_entry.h:36: In file included from /opt/homebrew/include/absl/types/span.h:69: /opt/homebrew/include/absl/types/internal/span.h:119:21: error: no template named 'remove_const_t' in namespace 'std'; did you mean simply 'remove_const_t'? 
using Container = std::remove_const_t; ^~~~~ /opt/homebrew/include/absl/meta/type_traits.h:592:1: note: 'remove_const_t' declared here using remove_const_t = typename std::remove_const::type; ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:132: In file included from /opt/homebrew/include/absl/log/absl_check.h:38: In file included from /opt/homebrew/include/absl/log/internal/check_impl.h:19: In file included from /opt/homebrew/include/absl/log/internal/check_op.h:37: In file included from /opt/homebrew/include/absl/log/internal/strip.h:24: In file included from /opt/homebrew/include/absl/log/internal/log_message.h:41: In file included from /opt/homebrew/include/absl/log/log_entry.h:36: In file included from /opt/homebrew/include/absl/types/span.h:69: /opt/homebrew/include/absl/types/internal/span.h:130:24: error: no template named 'enable_if_t' in namespace 'std'; did you mean simply 'enable_if_t'? using EnableIfIsView = std::enable_if_t<IsView::value, int>; ^~~~~ /opt/homebrew/include/absl/meta/type_traits.h:656:1: note: 'enable_if_t' declared here using enable_if_t = typename std::enable_if<B, T>::type; ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:132: In file included from /opt/homebrew/include/absl/log/absl_check.h:38: In file included from /opt/homebrew/include/absl/log/internal/check_impl.h:19: In file included from /opt/homebrew/include/absl/log/internal/check_op.h:37: In file included from /opt/homebrew/include/absl/log/internal/strip.h:24: In file included from /opt/homebrew/include/absl/log/internal/log_message.h:41: In file included from /opt/homebrew/include/absl/log/log_entry.h:36: In file included from /opt/homebrew/include/absl/types/span.h:69: /opt/homebrew/include/absl/types/internal/span.h:133:27: error: no template named 'enable_if_t' in namespace 'std'; did you mean simply 'enable_if_t'? using EnableIfNotIsView = std::enable_if_t<!IsView::value, int>; ^~~~~ /opt/homebrew/include/absl/meta/type_traits.h:656:1: note: 'enable_if_t' declared here using enable_if_t = typename std::enable_if<B, T>::type; ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:132: In file included from /opt/homebrew/include/absl/log/absl_check.h:38: In file included from /opt/homebrew/include/absl/log/internal/check_impl.h:19: In file included from /opt/homebrew/include/absl/log/internal/check_op.h:37: In file included from /opt/homebrew/include/absl/log/internal/strip.h:24: In file included from /opt/homebrew/include/absl/log/internal/log_message.h:43: /opt/homebrew/include/absl/strings/internal/has_absl_stringify.h:46:8: error: no template named 'enable_if_t' in namespace 'std'; did you mean simply 'enable_if_t'? 
T, std::enable_if_t<std::is_void<decltype(AbslStringify( ^~~~~ /opt/homebrew/include/absl/meta/type_traits.h:656:1: note: 'enable_if_t' declared here using enable_if_t = typename std::enable_if<B, T>::type; ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:134: In file included from /opt/homebrew/include/absl/strings/cord.h:78: In file included from /opt/homebrew/include/absl/container/inlined_vector.h:53: In file included from /opt/homebrew/include/absl/container/internal/inlined_vector.h:30: In file included from /opt/homebrew/include/absl/container/internal/compressed_tuple.h:40: /opt/homebrew/include/absl/utility/utility.h:164:12: error: no member named 'in_place_t' in namespace 'std' using std::in_place_t; ~~~~~^ /opt/homebrew/include/absl/utility/utility.h:165:12: error: no member named 'in_place' in namespace 'std' using std::in_place; ~~~~~^ /opt/homebrew/include/absl/utility/utility.h:181:12: error: no member named 'in_place_type' in namespace 'std' using std::in_place_type; ~~~~~^ /opt/homebrew/include/absl/utility/utility.h:182:12: error: no member named 'in_place_type_t' in namespace 'std' using std::in_place_type_t; ~~~~~^ /opt/homebrew/include/absl/utility/utility.h:198:12: error: no member named 'in_place_index' in namespace 'std' using std::in_place_index; ~~~~~^ /opt/homebrew/include/absl/utility/utility.h:199:12: error: no member named 'in_place_index_t' in namespace 'std' using std::in_place_index_t; ~~~~~^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:134: In file included from /opt/homebrew/include/absl/strings/cord.h:78: In file included from /opt/homebrew/include/absl/container/inlined_vector.h:53: In file included from /opt/homebrew/include/absl/container/internal/inlined_vector.h:30: /opt/homebrew/include/absl/container/internal/compressed_tuple.h:107:36: error: no type named 'in_place_t' in namespace 'absl' explicit constexpr Storage(absl::in_place_t, V&& v) ~~~~~~^ /opt/homebrew/include/absl/container/internal/compressed_tuple.h:120:36: error: no type named 'in_place_t' in namespace 'absl' explicit constexpr Storage(absl::in_place_t, V&& v) ~~~~~~^ /opt/homebrew/include/absl/container/internal/compressed_tuple.h:143:48: error: no type named 'in_place_t' in namespace 'absl' explicit constexpr CompressedTupleImpl(absl::in_place_t, Vs&&... args) ~~^ /opt/homebrew/include/absl/container/internal/compressed_tuple.h:144:24: error: no member named 'in_place' in namespace 'absl'; did you mean 'isspace'? : Storage<Ts, I>(absl::in_place, absl::forward(args))... 
{} ^~ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.3.sdk/usr/include/_ctype.h:267:1: note: 'isspace' declared here isspace(int _c) ^ In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.cc:4: In file included from /Users/unicorn/workspace/onnx-mlir/build/third_party/onnx/onnx/onnx-ml.pb.h:24: In file included from /opt/homebrew/include/google/protobuf/io/coded_stream.h:134: In file included from /opt/homebrew/include/absl/strings/cord.h:78: In file included from /opt/homebrew/include/absl/container/inlined_vector.h:53: In file included from /opt/homebrew/include/absl/container/internal/inlined_vector.h:30: /opt/homebrew/include/absl/container/internal/compressed_tuple.h:155:48: error: no type named 'in_place_t' in namespace 'absl' explicit constexpr CompressedTupleImpl(absl::in_place_t, Vs&&... args) ~~~~~~^ fatal error: too many errors emitted, stopping now [-ferror-limit=] 20 errors generated. ninja: build stopped: subcommand failed. static_assert(PROTOBUF_CPLUSPLUS_MIN(201402L), "Protobuf only supports C++14 and newer."); It seems that your c++ version is too old. onnx-mlir has set(CMAKE_CXX_STANDARD 17) I believe the problem is that you're using too new protobuf. We compile protobuf with c++11 so we cannot compile the newest protobuf. We build with the old protobuf 3.20.3, see: https://github.com/onnx/onnx-mlir/blob/main/docker/Dockerfile.llvm-project#L66 If you can't move to a lower version of protobuf on your system you can try to build protobuf from source, like it's done in that docker file. (If you're on MacOS you need to do a few things different. And if you have a Mac with M1 or M2 chip, then you need to apply a this patch to 3.20.3.) it works thank you; I forget to reply
gharchive/issue
2023-06-17T11:51:33
2025-04-01T06:39:50.975594
{ "authors": [ "chentong319", "sorenlassen", "wm901115nwpu" ], "repo": "onnx/onnx-mlir", "url": "https://github.com/onnx/onnx-mlir/issues/2326", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1355284965
Only update the output shape if the infered shape is better - unary elementwise ops Currently our shape inference always overwrites the output shape with a new shape. Sometimes, shape inference has not been powerful enough to infer static dimensions. Meanwhile, in many onnx models, outputs of ONNX ops have static shape or partially static shape. In such cases, we should respect the existing shape and only update the output shape if the inferred shape is better, for example, the inferred shape can infer a static dimension. This patch adds a function to check whether the inferred shape is better than the existing shape or not, and it applies the function to the shape inference of unary element-wise ops as the first candidates. Resolves #1480 Signed-off-by: Tung D. Le tung@jp.ibm.com Jenkins Linux ppc64le Build #6313 [push] Only update the output s... started at 10:11 Jenkins Linux amd64 Build #7245 [push] Only update the output s... failed after 26 min Jenkins Linux s390x Build #7261 [push] Only update the output s... started at 10:10 Jenkins Linux ppc64le Build #6313 [push] Only update the output s... aborted after 48 min Jenkins Linux s390x Build #7261 [push] Only update the output s... aborted after 2 hr 4 min
gharchive/pull-request
2022-08-30T07:17:09
2025-04-01T06:39:50.981846
{ "authors": [ "jenkins-droid", "tungld" ], "repo": "onnx/onnx-mlir", "url": "https://github.com/onnx/onnx-mlir/pull/1654", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
326967197
Feature request: print subcase when an exception is thrown inside one Description If an exception is thrown in a SUBCASE, the logged error only contain the TEST_CASE name. This might be related to #125 Steps to reproduce Here's a simple program: #define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN #include "doctest/doctest.h" TEST_CASE("tc") { SUBCASE("ab") { std::cout << "ab" << std::endl; } SUBCASE("ab, c") { throw std::runtime_error("fail"); } } The output is ab =============================================================================== test.cpp:4: TEST CASE: tc test.cpp:4: ERROR: test case THREW exception: fail =============================================================================== There is no information about which SUBCASE failed. Extra information SUBCASEs seems to be implemented as ifs, I'm not sure this feature can be implemented in such a way. Maybe SUBCASEs should actually set up a lambda, but that means the user needs to add a ';' at the end of each SUBCASE. Also it wouldn't work with C++ < 11. doctest version: master cd1d747 Operating System: debian unstable Compiler+version: clang 5.0.0 This seems reasonable! And indeed it is connected to #125 It can be implemented in C++98 without any troubles. Perhaps I'll look into fixing both issues! I haven't made progress in this regard with the default console reporter but the just released version 2.3 supports xml as the output and there you will get a much better context of where the exception was thrown from. Anyway this is still a deficit of the console reporter so this issue will remain open until I fix that. fixed! releasing version 2.3.5 right now. Just a message to thank you a lot for having fixed this. I have around 400 test cases, which are mostly subcases because they are ideal to avoid duplicating function calls, so until now I've had a lot of troubles to track where exceptions came from. Now it will be a lot easier.
gharchive/issue
2018-05-28T09:50:20
2025-04-01T06:39:50.995403
{ "authors": [ "Scorbutics", "blastrock", "onqtam" ], "repo": "onqtam/doctest", "url": "https://github.com/onqtam/doctest/issues/136", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1231112702
Wrong Contact header when GRUU enabled on registrar #989 pull request was created to solve this issue. Describe the bug SIP.js UA puts GRUU contact value into REGISTER contact header. To Reproduce (if possible) Set up regId or instanceId or both parameters in registerer options. Then if GRUU enabled on a registrar and it response with GRUU contact on second re-REGISTER SIP.js will use it as contact header. Expected behavior GRUU contact can be used everywhere except REGISTER messages. Environment Information SIP.js version 0.20.0 According rfc5627 3.3. Using a GRUU Once a user agent obtains GRUUs from the registrar, it uses them in several ways. First, it uses them as the contents of the Contact header field in non-REGISTER requests and responses that it emits (for example, an INVITE request and 200 OK response). According to RFC 3261 [1], the Contact header field is supposed to contain a URI that routes to that user agent. Prior to this specification, there hasn't been a way to really meet that requirement. The user agent would use one of its temporary GRUUs for anonymous calls, and use its public GRUU otherwise. Second, the UA can use the GRUU in any other place it needs to use a URI that resolves to itself, such as a webpage. BTW I do not see recent activity at the project - is it still actively supported? @jscheid @nud @ibc @pedrokiefer @etamme #989 has been picked onto the main branch. Thank you.
gharchive/issue
2022-05-10T12:55:08
2025-04-01T06:39:51.008892
{ "authors": [ "john-e-riordan", "zusrut" ], "repo": "onsip/SIP.js", "url": "https://github.com/onsip/SIP.js/issues/991", "license": "mit", "license_type": "permissive", "license_source": "bigquery" }
1817926228
A Typical Example with HTTP.jl (OpenTelemetrySDKHTTPExt) not working Hi, trying A_Typical_Example_with_HTTP and I can't get it working - after using OpenTelemetrySDK.OpenTelemetrySDKHTTPExt: otel_http_middleware I got "Package OpenTelemetrySDK not found, but a package named OpenTelemetrySDK is available from a registry. │ Install package?"; after 'yes' the next line gave the error: UndefVarError: 'OpenTelemetrySDKHTTPExt' not defined. Looks like the OpenTelemetrySDKHTTPExt extension is not loaded, but what am I doing wrong? The HTTP package is already used; its hash is the same as in weakdeps:HTTP in src/api/Project.toml. In the sources it looks like OpenTelemetrySDKHTTPExt moved to OpenTelemetryAPI.OpenTelemetryAPIHTTPExt - but it's not working anyway. Ubuntu 23.04 Julia 1.9.1 OpenTelemetry.jl 0.4.0 That's weird, maybe restart the REPL and try again? Thanks for reporting, I'm looking into it. The doc needs to be updated, OpenTelemetrySDKHTTPExt should be OpenTelemetryAPIHTTPExt. Even though, the otel_http_layer still can't be found with julia@1.9. I guess the implementation was changed somehow in Julia@1.9.0 (I barely remember it works with a pre-release of julia@1.9). I'll make a patch release later tonight.
gharchive/issue
2023-07-24T08:38:01
2025-04-01T06:39:51.018758
{ "authors": [ "NoobsEnslaver", "ZhaoFancy", "findmyway" ], "repo": "oolong-dev/OpenTelemetry.jl", "url": "https://github.com/oolong-dev/OpenTelemetry.jl/issues/83", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2139791365
iPad Air 3rd gen 16.0b1 20A5283p has 0% success rate I have made around 5-10 attempts with each PUAF method and the exploit has always led to an immediate kernel panic without making it to the next stage. not sure, might be b1 specific, try again on 2.0.5 though No luck unfortunately. I did find that the exploits were panicking after a longer period than on 2.0.4 if that helps. Perhaps a way to manually set offsets might help for a unique beta build like this? Any updates on this? no Still no success with 2.0.8 Can you send some panic logs? panic-full-2024-02-28-213402.000.txt All the panics are very similar if not the same functionally. You sure it's failing during exploitation? Looks like some sort of broken offset to me That was my initial theory (being offset related) but since patchfinding was visually succeeded according to the progress tracker, I thought it was exploit related. Would you like me to obtain the correct offsets? iPad 11,3 16.0 20A5283p.txt Here are the offsets. Stupid GitHub won't allow me to upload a .h file, so it's a .txt instead. iPad 11,3 16.0 20A5283p.txt Here are the offsets. Stupid GitHub won't allow me to upload a .h file, so it's a .txt instead. these aren't the relevant offsets for dopamine though Oh I thought the KFD offsets were needed. Which offsets are required? Any update? Again always happy to help if you let me know what is needed Try again in 2.1. No luck unfortunately On Wed, 1 May 2024 at 3:18 AM, Lars Fröder @.***> wrote: Try again in 2.1. — Reply to this email directly, view it on GitHub https://github.com/opa334/Dopamine/issues/329#issuecomment-2086081533, or unsubscribe https://github.com/notifications/unsubscribe-auth/AHNAJA4XUWW7MHR2J23AUELY77G67AVCNFSM6AAAAABDM7CPBKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMZDAOBWGA4DCNJTGM . You are receiving this because you authored the thread.Message ID: @.***> Please test latest nightly https://github.com/opa334/Dopamine/actions/runs/8988727802 (CC: @Lunariansia) UPDATE: I tried the latest build, and it caused full panic on my device. It could sucessfuly be booted though. I will try to get the panic logs asap, they are extremely long though. Let's hope that other patchs do not bootload my device 🙏🏻 Panic Log.txt Got the panic logs. Let's just hope that we can get this bug fixed for all. well, you're getting the same panic as the OP of this issue now. Tried the nightly build and still same problem as well. Happy to assist with fixing this as well if you let me know what needs to be done. lets hope that we can get ios 16 beta working, OP any updates? I believe this case is dropped since there are no updates about the matter atm, but there havent been any updates for a while so I believe we may get an 16.0 beta update does the new branch work on 16.0 beta?
gharchive/issue
2024-02-17T03:19:29
2025-04-01T06:39:51.039682
{ "authors": [ "Lunariansia", "aydenbottos", "opa334" ], "repo": "opa334/Dopamine", "url": "https://github.com/opa334/Dopamine/issues/329", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2316129912
I fixed WeightBufs. You should just set IOSURFACE_OBJ_SIZE = 0x1 @opa334 And add a check that, if we found IOSurface_location, we exit, so it will speed up the exploit. @opa334
gharchive/issue
2024-05-24T19:29:57
2025-04-01T06:39:51.042116
{ "authors": [ "ghh-jb" ], "repo": "opa334/Dopamine", "url": "https://github.com/opa334/Dopamine/issues/581", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1227895964
Moretwins add VICReg class subclassing the general parent class Twins for siamese networks. Other losses could be easily implemented, e.g. SimCLR or the like. Possible generalization: look into student-teacher networks such as BYOL N.B: Twins signature has changed: [n, 2, d] -> [n, 2, d'] for consistency with the Module class
gharchive/pull-request
2022-05-06T13:49:51
2025-04-01T06:39:51.043942
{ "authors": [ "opeltre" ], "repo": "opeltre/revert", "url": "https://github.com/opeltre/revert/pull/23", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1107356807
Handle create=false when exact matching Overview When a facility list item is submitted with create=false and there is no match found, a FacilityListItem is created with facility_id=None. When another facility list item is submitted which exact matches that FacilityListItem, an error was being thrown. Instead, we now filter out FacilityListItems where the facility_id=None, allowing us to match to the next best option (if one exists) or pass the FacilityListItem on to the dedupe process. Connects #1580 Testing Instructions Not on this branch, browse /api/docs/#!/facilities/facilities_create authorize with an API token Submit a facility with create=false twice. The second submission should result in ERROR_MATCHING { "country": "Turkey", "name": "BAŞARI TEKSTİL", "address": "Merkezefendi" } Switch to this branch and resubmit the facility. The submission should not throw an error. Checklist [x] fixup! commits have been squashed [x] CI passes after rebase [x] CHANGELOG.md updated with summary of features or fixes, following Keep a Changelog guidelines Thanks for reviewing!
gharchive/pull-request
2022-01-18T20:59:43
2025-04-01T06:39:51.049303
{ "authors": [ "TaiWilkin" ], "repo": "open-apparel-registry/open-apparel-registry", "url": "https://github.com/open-apparel-registry/open-apparel-registry/pull/1594", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2175163219
How to distinguish which process the output stream line comes from I have an output stream looking like this; however, it doesn't work reliably because some lines simply do not have a prefix. const outputStream = { write: (data) => { if (['[SDK]', '[bundle]', '[dts]'].some((str) => data.includes(str))) { sdkStream.write(data); } else if (data.includes('[Demo App]')) { demoStream.write(data); } }, }; May I know if there's any way I can tell reliably which process a line is coming from? Many thanks! Not really. What you have is the best way right now. However, we can change the output stream to have the command object alongside the text. This would be a breaking change though. PRs welcome, should be an easy change!
gharchive/issue
2024-03-08T02:34:53
2025-04-01T06:39:51.054402
{ "authors": [ "LookRain", "gustavohenke" ], "repo": "open-cli-tools/concurrently", "url": "https://github.com/open-cli-tools/concurrently/issues/468", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2361554047
🌱 Update deps of api and library to 0.14.0 Summary Related issue(s) Fixes # /lgtm
gharchive/pull-request
2024-06-19T07:26:25
2025-04-01T06:39:51.055757
{ "authors": [ "qiujian16", "zhujian7" ], "repo": "open-cluster-management-io/ocm", "url": "https://github.com/open-cluster-management-io/ocm/pull/532", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1020469493
test nested subscriptions with subscription-admin Signed-off-by: Roke Jung roke@redhat.com /lgtm
gharchive/pull-request
2021-10-07T21:15:32
2025-04-01T06:39:51.056758
{ "authors": [ "rokej", "xiangjingli" ], "repo": "open-cluster-management/applifecycle-backend-e2e", "url": "https://github.com/open-cluster-management/applifecycle-backend-e2e/pull/140", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2405542436
update docs TODO: update the English document Miss fengzhe
gharchive/pull-request
2024-07-12T13:02:41
2025-04-01T06:39:51.066047
{ "authors": [ "Leymore", "bittersweet1999" ], "repo": "open-compass/opencompass", "url": "https://github.com/open-compass/opencompass/pull/1318", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
359114430
Force Fetch To Store https://github.com/open-contracting/kingfisher/issues/58 This starts to put some common code into ocdskingfisher/cli/commands/base.py - in the future files like ocdskingfisher/cli/commands/check_different_schema_version.py will use these to.
gharchive/pull-request
2018-09-11T16:01:54
2025-04-01T06:39:51.075609
{ "authors": [ "odscjames" ], "repo": "open-contracting/kingfisher", "url": "https://github.com/open-contracting/kingfisher/pull/222", "license": "BSD-3-Clause", "license_type": "permissive", "license_source": "github-api" }
2695784049
[BUG] Data race if the provider set few times Observed behavior I have several tests with a similar setup: the provider is created, then configured, the tests are executed, and openfeature.Shutdown() is called for cleanup. However, when these tests are run with the race detector enabled, they consistently fail. Expected Behavior No response Steps to reproduce package openfeature_test import ( "context" "sync" "testing" of "github.com/open-feature/go-sdk/openfeature" ) func TestDataRace(t *testing.T) { of.SetEvaluationContext(of.NewTargetlessEvaluationContext(map[string]any{})) err := of.SetProviderAndWait(newRaceProvider()) if err != nil { t.Fatal(err) } of.Shutdown() err = of.SetProviderAndWait(newRaceProvider()) if err != nil { t.Fatal(err) } } func newRaceProvider() of.FeatureProvider { return &raceProvider{ state: of.NotReadyState, } } var ( _ of.FeatureProvider = (*raceProvider)(nil) // ensure implements FeatureProvider _ of.StateHandler = (*raceProvider)(nil) // ensure implements StateHandler ) type raceProvider struct { state of.State mu sync.RWMutex } func (p *raceProvider) Metadata() of.Metadata { return of.Metadata{ Name: "racing", } } func (p *raceProvider) Status() of.State { p.mu.RLock() defer p.mu.RUnlock() return p.state } func (p *raceProvider) Init(evalCtx of.EvaluationContext) error { p.mu.Lock() defer p.mu.Unlock() p.state = of.ReadyState return nil } func (p *raceProvider) Shutdown() { p.mu.Lock() defer p.mu.Unlock() p.state = of.NotReadyState } func (p *raceProvider) BooleanEvaluation(ctx context.Context, flag string, defaultValue bool, evalCtx of.FlattenedContext) of.BoolResolutionDetail { return of.BoolResolutionDetail{} } func (p *raceProvider) StringEvaluation(ctx context.Context, flag string, defaultValue string, evalCtx of.FlattenedContext) of.StringResolutionDetail { return of.StringResolutionDetail{} } func (p *raceProvider) FloatEvaluation(ctx context.Context, flag string, defaultValue float64, evalCtx of.FlattenedContext) of.FloatResolutionDetail { return of.FloatResolutionDetail{} } func (p *raceProvider) IntEvaluation(ctx context.Context, flag string, defaultValue int64, evalCtx of.FlattenedContext) of.IntResolutionDetail { return of.IntResolutionDetail{} } func (p *raceProvider) ObjectEvaluation(ctx context.Context, flag string, defaultValue interface{}, evalCtx of.FlattenedContext) of.InterfaceResolutionDetail { return of.InterfaceResolutionDetail{} } func (p *raceProvider) Hooks() []of.Hook { return []of.Hook{} } Create a file with test and run go test -race ./... @beeme1mr I am trying to implement the provider and write some e2e tests for it. Using another provider doesn't help much. The issue is related to the providers which implement StateHandler interface. In the next release, stage management moves to the provider. This is a non-breaking change but may address this problem. FYI @toddbaert There have been substantial changes to the SDK recently like @beeme1mr said, which are yet unreleased. The changes are significant enough that there's a chance this bug could be resolved. I would have liked to see the released already but there's a few more issues we have to iron out with it, namely the one you pointed out here: https://github.com/open-feature/go-sdk/pull/296#pullrequestreview-2465098493 Just keep this alive, the release 1.14.0 doesn't fix the issue. go-sdk uses reflect.DeepEqual() to compare provider(s) internally and that doesn't play very well if provider has sync.Mutex probably. I think that just running v.Shutdown() should solve the problem. 
@blkt i gave it a quick shot and it seems to fix the problem. At least all tests pass, even with the race detector enabled. Cool. go-sdk uses reflect.DeepEqual() to compare provider(s) internally and that doesn't play very well if provider has sync.Mutex probably. @erka yea using reflect.DeepEqual() to compare provider implementations is a problem... Here are just some ideas: Require the feature flag implementation to include a UniqueName() or Id() method that can be used to internally identify providers. Also not an ideal solution. Serialize provider implementations into a format like JSON, which naturally excludes unexported fields and requires omitting problematic field types like sync.Mutex. However, this approach is also far from ideal. Avoid directly using the FeatureProvider implementation within the SDK. Instead, create a providerReference wrapper during registration, assigning it a self-generated identifier. Subsequently, the rest of the code should work with providerReference values, enabling easy comparison using their IDs. This approach seems like the most promising. The effort strongly depends on how easy it is to change the code to not use FeatureProvider implementations directly. @warber We could also add a kind field to providerReference and set it to reflect.TypeOf(FeatureProvider).Kind(). If the kind is ptr, we can perform a direct comparison of FeatureProvider. For struct, fallback to reflect.DeepEqual. @warber We could add a kind field to providerReference and set it to reflect.TypeOf(FeatureProvider).Kind(). If the kind is ptr, we can perform a direct comparison of FeatureProvider. For struct, fallback to reflect.DeepEqual. I'll give this a shot this week unless I see a PR from somebody else first. My findings so far: In addition to just doing v.Shutdown() as @blkt suggested here, another simple (but inadequate, IMO) solution is to RLock the api mutex during shutdown: go func(forShutdown StateHandler) { api.mu.RLock() defer api.mu.RUnlock() forShutdown.Shutdown() }(v) This is probably a bit better than just running shutdown synchronously, but it still means that we won't be able to register providers until the old provider is shut down, which is bad, we want the shutdown not to block anything - it's for side effects such as flushing telemetry, etc. It also hides the real issue, IMO... The root problem of this entire issue, as far as I can tell, is that we are using DeepEqual, which is a problem specifically because it reads unexported fields (in this case the state of @erka 's provider), bypassing the mutex in their provider and causing the race (this is confirmed by the trace in the DATA RACE warning). I think this is a more general issue and we should not do it, or we risk a lot of this sort of issue. I think the only real solution is to find a safe way to compare providers without using DeepEqual at all, so that provider's unexported fields are never read (perhaps the way @warber suggested in their last point here) I think the only real solution is to find a safe way to compare providers without using DeepEqual at all, so that provider's unexported fields are never read (perhaps the way @warber suggested in their last point here) I'm going to give this a shot.
gharchive/issue
2024-11-26T19:01:34
2025-04-01T06:39:51.099281
{ "authors": [ "beeme1mr", "erka", "toddbaert", "warber" ], "repo": "open-feature/go-sdk", "url": "https://github.com/open-feature/go-sdk/issues/307", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2354108316
Add multi-provider blog post This PR Adds a blog post introducing the new Multi-Provider in JavaScript and Node.js Related Issues This shouldn't be merged before the client-side multi-provider PR is merged, as it references the client-side provider Thanks @emmawillis ! I've added some reviewers and will review it myself in the next day or so! Preview is here: https://deploy-preview-593--openfeature.netlify.app/blog/openfeature-multi-provider-release
gharchive/pull-request
2024-06-14T21:02:55
2025-04-01T06:39:51.102311
{ "authors": [ "emmawillis", "toddbaert" ], "repo": "open-feature/openfeature.dev", "url": "https://github.com/open-feature/openfeature.dev/pull/593", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2692014092
Unable to install deploy-mgmt-hub.sh All in One script

When using the All-in-One script to install the OH Management Hub, I keep getting an error message that says: Error: http code 404 from: downloading https://github.com/naphelps/openbao-plugin-auth-openhorizon/releases/download/v0.1.0-test/openbao-plugin-auth-openhorizon_0.1.0-test_linux_x86_64.tar.gz, stdout: Not Found

I see there were commits made last week by @naphelps; I am wondering whether those affected the script.

@naphelps this is still not working. I am wondering if the issue lies in the script downloading from your fork (naphelps)? I have attached an image highlighting this. Error: http code 404 from: downloading https://github.com/naphelps/openbao-plugin-auth-openhorizon/releases/download/v1.0.0/openbao-plugin-auth-openhorizon_1.0.0_linux_x86_64.tar.gz, stdout: Not Found
gharchive/issue
2024-11-25T19:38:51
2025-04-01T06:39:51.108642
{ "authors": [ "hararegirljustcoding" ], "repo": "open-horizon/devops", "url": "https://github.com/open-horizon/devops/issues/193", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
320655490
Unable to format LUN

So, I'm not sure if this is user error, or which component of this solution is currently causing these issues, but I'm having trouble formatting the exposed objects. The below paste is a section of journalctl on the initiator which was attempting to format and mount the LUN. There is also a paste of the targetcli ls at the bottom. https://paste.ubuntu.com/p/TPpTZmgwJP/ These errors occurred using the following packages: Target: CentOS 7, Gluster 4.0, TCMU-Runner 1.3, Gluster-Block 0.3 (The packages used were all included in the centos-release-gluster40.x86_64 repo) Initiator: Fedora 25 iSCSIadm, Windows Server 2016. I've also tried this same setup using gluster 3.12 and compiling both the gluster-block and tcmu-runner packages from git. I get the same result. If there is instead a specific set of package versions I should be using, please let me know. I'm not 100% sure this is a gluster-block issue, and so I'll also be raising an issue on the tcmu-runner git. If any additional info is needed, please let me know.

Adding gluster developers @lxbsz @pkalever . Does this only fail with gluster? Did you use the gluster block tools to set things up? On the iscsi target logs are there any errors in journalctl or /var/log/tcmu-runner.log? Could you set log_level = 4 in /etc/tcmu/tcmu.conf and attach the /var/log/tcmu-runner.log?

@BlackoutWNCT Are you using the community version of glusterfs & gluster-block ? If so the tcmu-runner will crash I think, because there is a bug in the glusterfs API. If not, please attach the logs as Mike mentioned.

I've tested the TargetCLI without tcmu-runner or gluster-block and the exposed object was fine, however I've not tested with just tcmu-runner. I did create the LUN using gluster-block. I'm currently using the gluster packages contained within the CentOS repos. (I've tested on both 3.12 and 4.0 from the centos-release-gluster312.noarch and centos-release-gluster40.x86_64 repos.) gluster-block I've tried from both the centos-release-gluster40.x86_64 repo, and by cloning the current master git. I've done the same for the tcmu-runner packages. Requested logs are here: https://paste.ubuntu.com/p/tKV3tDQ4rV/

@BlackoutWNCT Actually we meant the logs when you are doing the format or read/write. Did you get any errors in /var/log/tcmu-runner.log or journalctl ?

From your logs provided I couldn't see any problem.

Right of course. Apologies, attached is a new paste dump of various logs for the format. https://paste.ubuntu.com/p/9vf8kpXd5v/
gharchive/issue
2018-05-07T02:29:23
2025-04-01T06:39:51.131166
{ "authors": [ "BlackoutWNCT", "lxbsz", "mikechristie" ], "repo": "open-iscsi/tcmu-runner", "url": "https://github.com/open-iscsi/tcmu-runner/issues/414", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1509018319
Release 1.4.3 version Please, release the 1.4.3 version :) https://plugins.gradle.org/plugin/io.jumpco.open.gradle.s3 Please :)
gharchive/issue
2022-12-23T07:50:07
2025-04-01T06:39:51.132798
{ "authors": [ "sanmibuh" ], "repo": "open-jumpco/s3-plugin", "url": "https://github.com/open-jumpco/s3-plugin/issues/18", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1162352940
Data Quality - columnValuesToBeUnique & columnValuesToBeNotNull JSON UI In https://github.com/open-metadata/OpenMetadata/pull/3255 we added a default property to both test definitions. This is because we need to be able to differentiate both JSONs from the Python side in order to assign the proper class. However, when creating a test from the UI, the testCase is not being passed with the default attributes populated: testCase: {config: {}, columnTestType: "columnValuesToBeNotNull"} cc @Sachin-chaurasiya @ShaileshParmar11
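To make the mismatch concrete, here is a purely hypothetical illustration of the two payload shapes; the actual name of the default discriminator property comes from the test definition JSON schemas and may differ:

# What the UI currently sends (from the report above):
sent_by_ui = {"config": {}, "columnTestType": "columnValuesToBeNotNull"}

# Shape the Python side expects: the default property added in PR #3255 acts as a
# discriminator so the right class can be chosen. The key name below is a placeholder.
expected_by_python = {
    "config": {},
    "columnTestType": "columnValuesToBeNotNull",
    "columnValuesToBeNotNull": {},   # hypothetical default/discriminator attribute
}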
gharchive/issue
2022-03-08T08:16:58
2025-04-01T06:39:51.145816
{ "authors": [ "pmbrull" ], "repo": "open-metadata/OpenMetadata", "url": "https://github.com/open-metadata/OpenMetadata/issues/3260", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1415333002
Fix ISSUE-3790: Remove unnecessary public modifiers Describe your changes : I worked on fixing the code smells due to unnecessary public modifiers in the tests. Closes #3790. Type of change : [ ] Bug fix [x] Improvement [ ] New feature [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] Documentation Frontend Preview (Screenshots) : For frontend related change, please link screenshots of your changes preview! Optional for backend related changes. Checklist: [x] I have read the CONTRIBUTING document. [ ] I have commented on my code, particularly in hard-to-understand areas. [ ] I have added tests that prove my fix is effective or that my feature works. [x] All new and existing tests passed. Reviewers Thanks @shivamshrey
gharchive/pull-request
2022-10-19T17:18:46
2025-04-01T06:39:51.150917
{ "authors": [ "harshach", "shivamshrey" ], "repo": "open-metadata/OpenMetadata", "url": "https://github.com/open-metadata/OpenMetadata/pull/8268", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1457742741
Why can't I test after deleting the training set?

When I delete the training set, running the test reports an error.

Original Traceback (most recent call last):
  File "/home/xidian/anaconda3/envs/pro/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/xidian/anaconda3/envs/pro/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/xidian/anaconda3/envs/pro/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "../pcdet/datasets/kitti/kitti_dataset.py", line 380, in __getitem__
    calib = self.get_calib(sample_idx)
  File "../pcdet/datasets/kitti/kitti_dataset.py", line 110, in get_calib
    assert calib_file.exists()
AssertionError

Can you help me analyze the reason? I found that the code used the training set for testing.

Try to change the pkl file path in the dataset config. You actually deleted the validation set, not the training set.
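Since the failing assertion is a missing calib file for a sample id in the evaluation split, a quick way to confirm which split is broken is to check the split file against the calib folder. The paths below assume the standard OpenPCDet KITTI layout (data/kitti/ImageSets/val.txt and data/kitti/training/calib/); adjust them if your layout differs.

from pathlib import Path

root = Path("data/kitti")                         # assumed dataset root
val_ids = (root / "ImageSets" / "val.txt").read_text().split()
calib_dir = root / "training" / "calib"

missing = [i for i in val_ids if not (calib_dir / f"{i}.txt").exists()]
print(f"{len(missing)} of {len(val_ids)} val samples have no calib file")
print(missing[:10])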
gharchive/issue
2022-11-21T11:03:52
2025-04-01T06:39:51.160437
{ "authors": [ "Frank-DA", "Townjj", "jihanyang" ], "repo": "open-mmlab/OpenPCDet", "url": "https://github.com/open-mmlab/OpenPCDet/issues/1194", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2544837767
Frames, walls, and similar artifacts appear on both sides of the outpainted image. Occlusions such as walls and frames appear very easily on both sides; how can this be solved?

hi @hjj-lmx This does indeed happen with a certain probability. If you only want to improve it at inference time, you can try providing some prompts and adjusting the ratio (but there is still randomness). If you want to improve it further, it has to be done at training time: you can increase the proportion of the outpainting task during training and the size of the masks used for outpainting training.
gharchive/issue
2024-09-24T09:18:58
2025-04-01T06:39:51.162433
{ "authors": [ "hjj-lmx", "zengyh1900" ], "repo": "open-mmlab/PowerPaint", "url": "https://github.com/open-mmlab/PowerPaint/issues/95", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2319383155
Is it possible to pass allow_unused and materialize_grad to optim_wrapper?

Hi there! I'm dealing with a custom training pipeline which only forwards part of the model from batch to batch. When I try to run a distributed training script, the following error occurs:

Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).
Parameters which did not receive grad for rank 1: pipeline.neck_segmentation.spatial_path.conv_7x7.conv.weight ... ....... ......
Parameter indices which did not receive grad for rank 1: 45 46 47 ...

A possible solution (doc) is to pass allow_unused=True and materialize_grads=True into the optim_wrapper. Is it possible to pass those args via the config, or in some other way? Looking forward to your reply. Thanks :)

Solved by find_unused_parameters = True

Could you tell me in which file I should add find_unused_parameters = True?
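One common convention, worth verifying against your MMDetection/MMEngine version, is to set the flag at the top level of the training config file; the runner then forwards it to DistributedDataParallel when wrapping the model. A minimal sketch (config file name is just an example):

# my_custom_config.py  (your training config)
_base_ = ['./my_base_config.py']   # whatever bases you already inherit from

# Picked up by the runner and passed to torch.nn.parallel.DistributedDataParallel
find_unused_parameters = True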
gharchive/issue
2024-05-27T15:19:21
2025-04-01T06:39:51.181204
{ "authors": [ "ashkanaev", "liuqs111" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/11745", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
833463709
Poor performance due to duplication of annotation ids

I'm trying to train my custom dataset on VFNet, but the loss decreases very slowly and I could not find the cause. Today I trained on the same data using Detectron2, and the code throws the assertion error: annotation ids are not unique. I checked my data again and the ids are indeed duplicated. I did not check in more detail, but when I trained VFNet again with the corrected data format, the loss became stable.

=> I think we should check whether the annotation ids are unique in order to prevent this issue.

Can you post a simple labeled example about duplication of annotation ids? We look for the cause of instability. Thank you!

{
  "info": { ... },
  "licenses": [ ... ],
  "images": [
    { "license": 0, "id": 0, "url": null, "file_name": "fold1/images/c337c0c48cb14bcfac1ba739a785c2cb.jpg", "height": 1536, "width": 1536 },
    { "license": 0, "id": 1, "url": null, "file_name": "fold2/images/c0a788fb3c78173e4a2a447b5fc70a93.jpg", "height": 1440, "width": 1152 },
    ...
  ],
  "annotations": [
    { "id": 0, "image_id": 0, "iscrowd": 0, "category_id": 0,  "bbox": [ 859.0, 425.5, 145.5, 148.0 ],  "area": 21534 },
    { "id": 1, "image_id": 0, "iscrowd": 0, "category_id": 13, "bbox": [ 1111.0, 822.0, 220.5, 96.5 ],  "area": 21278 },
    { "id": 2, "image_id": 0, "iscrowd": 0, "category_id": 13, "bbox": [ 1083.5, 839.5, 251.0, 141.0 ], "area": 35391 },
    { "id": 0, "image_id": 0, "iscrowd": 0, "category_id": 13, "bbox": [ 416.5, 811.5, 193.0, 117.5 ],  "area": 22677 },  // I restarted the counter, so the id is duplicated here
    ...
  ],
  "type": "instances",
  "categories": [ ... ]
}

It's because I restart the id counter when creating data from k folds:

folds = [0, 1, 2, 3, 4]   # Putting id = 0 here would be fine
for fold in folds:
    id = 0                # It gets restarted when looping over a new fold
    for img_name in {list of images of this fold}:
        # Add an annotation record with id = id
        id += 1

Thank you! We will add this check.
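Following up on the k-fold script above, a sketch of the fix is to keep a single counter across all folds and assert uniqueness before writing the file. The fold_annotations name below is made up for illustration; it stands for whatever per-fold annotation lists your script already builds.

next_ann_id = 0
merged_annotations = []
for fold in [0, 1, 2, 3, 4]:
    for ann in fold_annotations[fold]:      # annotations collected for this fold
        ann["id"] = next_ann_id             # never reset between folds
        next_ann_id += 1
        merged_annotations.append(ann)

# sanity check before writing the COCO json
ids = [a["id"] for a in merged_annotations]
assert len(ids) == len(set(ids)), "annotation ids are not unique"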
gharchive/issue
2021-03-17T06:56:53
2025-04-01T06:39:51.185340
{ "authors": [ "hhaAndroid", "ptran1203" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/4782", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1038182846
CenterNet can't be trained Thanks for your error report and we appreciate it a lot. Checklist I have searched related issues but cannot get the expected help. I have read the FAQ documentation but cannot get the expected help. The bug has not been fixed in the latest version. Describe the bug A clear and concise description of what the bug is: I use mmdetection to train my own dataset, CenterNet can't be trained while Faster RCNN is ok. The type of dataset is Coco, the number of labels is 6(except background). Reproduction What command or script did you run? python tools/train.py configs/centernet/centernet_resnet18_140e_coco.py Did you make any modifications on the code or config? Did you understand what you have modified? Only the classnames. What dataset did you use? My own dataset. The type of dataset is Coco, the number of labels is 6(except background). labels are chinses. The file structure of the dataset is as follows: --data ------coco --------annotations ------------instances_test2017.json ------------instances_val2017.json ------------instances_train2017.json --------test2017 --------val2017 --------train2017 Environment Please run python mmdet/utils/collect_env.py to collect necessary environment information and paste it here. sys.platform: linux Python: 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56) [GCC 7.3.0] CUDA available: True GPU 0: Tesla V100-PCIE-32GB CUDA_HOME: /usr/local/cuda NVCC: Cuda compilation tools, release 10.1, V10.1.243 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 PyTorch: 1.6.0+cu101 PyTorch compiling details: PyTorch built with: GCC 7.3 C++ Version: 201402 Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0) OpenMP 201511 (a.k.a. 
OpenMP 4.5) NNPACK is enabled CPU capability usage: AVX2 CUDA Runtime 10.1 NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 CuDNN 7.6.3 Magma 2.5.2 Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, TorchVision: 0.7.0+cu101 OpenCV: 4.5.4-dev MMCV: 1.3.15 MMCV Compiler: GCC 7.5 MMCV CUDA Compiler: 10.1 MMDetection: 2.17.0+9874180 You may add addition that may be helpful for locating the problem, such as How you installed PyTorch [e.g., pip, conda, source] Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.) Error traceback pip install pytorch 2021-10-28 15:19:56,524 - mmdet - INFO - Environment info: ------------------------------------------------------------ sys.platform: linux Python: 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 23:10:56) [GCC 7.3.0] CUDA available: True GPU 0: Tesla V100-PCIE-32GB CUDA_HOME: /usr/local/cuda NVCC: Cuda compilation tools, release 10.1, V10.1.243 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 PyTorch: 1.6.0+cu101 PyTorch compiling details: PyTorch built with: - GCC 7.3 - C++ Version: 201402 - Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications - Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0) - OpenMP 201511 (a.k.a. 
OpenMP 4.5) - NNPACK is enabled - CPU capability usage: AVX2 - CUDA Runtime 10.1 - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 - CuDNN 7.6.3 - Magma 2.5.2 - Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, TorchVision: 0.7.0+cu101 OpenCV: 4.5.4-dev MMCV: 1.3.15 MMCV Compiler: GCC 7.5 MMCV CUDA Compiler: 10.1 MMDetection: 2.17.0+9874180 ------------------------------------------------------------ 2021-10-28 15:19:56,981 - mmdet - INFO - Distributed training: False 2021-10-28 15:19:57,393 - mmdet - INFO - Config: dataset_type = 'CocoDataset' data_root = 'data/coco/' img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) train_pipeline = [ dict(type='LoadImageFromFile', to_float32=True, color_type='color'), dict(type='LoadAnnotations', with_bbox=True), dict( type='PhotoMetricDistortion', brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18), dict( type='RandomCenterCropPad', crop_size=(512, 512), ratios=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3), mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True, test_pad_mode=None), dict(type='Resize', img_scale=(512, 512), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ] test_pipeline = [ dict(type='LoadImageFromFile', to_float32=True), dict( type='MultiScaleFlipAug', scale_factor=1.0, flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict( type='RandomCenterCropPad', ratios=None, border=None, mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True, test_mode=True, test_pad_mode=['logical_or', 31], test_pad_add_pix=1), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='DefaultFormatBundle'), dict( type='Collect', meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg', 'border'), keys=['img']) ]) ] data = dict( samples_per_gpu=16, workers_per_gpu=4, train=dict( type='RepeatDataset', times=5, dataset=dict( type='CocoDataset', ann_file='data/coco/annotations/instances_train2017.json', img_prefix='data/coco/train2017/', pipeline=[ dict( type='LoadImageFromFile', to_float32=True, 
color_type='color'), dict(type='LoadAnnotations', with_bbox=True), dict( type='PhotoMetricDistortion', brightness_delta=32, contrast_range=(0.5, 1.5), saturation_range=(0.5, 1.5), hue_delta=18), dict( type='RandomCenterCropPad', crop_size=(512, 512), ratios=(0.6, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3), mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True, test_pad_mode=None), dict(type='Resize', img_scale=(512, 512), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ])), val=dict( type='CocoDataset', ann_file='data/coco/annotations/instances_val2017.json', img_prefix='data/coco/val2017/', pipeline=[ dict(type='LoadImageFromFile', to_float32=True), dict( type='MultiScaleFlipAug', scale_factor=1.0, flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict( type='RandomCenterCropPad', ratios=None, border=None, mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True, test_mode=True, test_pad_mode=['logical_or', 31], test_pad_add_pix=1), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='DefaultFormatBundle'), dict( type='Collect', meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg', 'border'), keys=['img']) ]) ]), test=dict( type='CocoDataset', ann_file='data/coco/annotations/instances_test2017.json', img_prefix='data/coco/test2017/', pipeline=[ dict(type='LoadImageFromFile', to_float32=True), dict( type='MultiScaleFlipAug', scale_factor=1.0, flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict( type='RandomCenterCropPad', ratios=None, border=None, mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True, test_mode=True, test_pad_mode=['logical_or', 31], test_pad_add_pix=1), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='DefaultFormatBundle'), dict( type='Collect', meta_keys=('filename', 'ori_shape', 'img_shape', 'pad_shape', 'scale_factor', 'flip', 'flip_direction', 'img_norm_cfg', 'border'), keys=['img']) ]) ])) evaluation = dict(interval=1, metric='bbox') optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) lr_config = dict( policy='step', warmup='linear', warmup_iters=1000, warmup_ratio=0.001, step=[18, 24]) runner = dict(type='EpochBasedRunner', max_epochs=28) checkpoint_config = dict(interval=1) log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) custom_hooks = [dict(type='NumClassCheckHook')] dist_params = dict(backend='nccl') log_level = 'INFO' load_from = None resume_from = None workflow = [('train', 1)] model = dict( type='CenterNet', backbone=dict( type='ResNet', depth=18, norm_eval=False, norm_cfg=dict(type='BN'), init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18')), neck=dict( type='CTResNetNeck', in_channel=512, num_deconv_filters=(256, 128, 64), num_deconv_kernels=(4, 4, 4), use_dcn=False), bbox_head=dict( type='CenterNetHead', num_classes=6, in_channel=64, feat_channel=64, loss_center_heatmap=dict(type='GaussianFocalLoss', loss_weight=1.0), loss_wh=dict(type='L1Loss', loss_weight=0.1), loss_offset=dict(type='L1Loss', loss_weight=1.0)), train_cfg=None, test_cfg=dict(topk=100, local_maximum_kernel=3, max_per_img=100)) work_dir = 
'./work_dirs/centernet_resnet18_140e_coco' gpu_ids = range(0, 1) 2021-10-28 15:19:57,569 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'torchvision://resnet18'} 2021-10-28 15:19:57,569 - mmcv - INFO - load model from: torchvision://resnet18 2021-10-28 15:19:57,569 - mmcv - INFO - Use load_from_torchvision loader 2021-10-28 15:19:57,723 - mmcv - WARNING - The model and loaded state dict do not match exactly unexpected key in source state_dict: fc.weight, fc.bias loading annotations into memory... Done (t=0.13s) creating index... index created! loading annotations into memory... Done (t=0.02s) creating index... index created! 2021-10-28 15:20:02,468 - mmdet - INFO - Start running, host: root@dl-1142278230-pod-jupyter-fhx5r, work_dir: /root/nmmdetection/mmdetection/work_dirs/centernet_resnet18_140e_coco 2021-10-28 15:20:02,469 - mmdet - INFO - Hooks will be executed in the following order: before_run: (VERY_HIGH ) StepLrUpdaterHook (NORMAL ) CheckpointHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook -------------------- before_train_epoch: (VERY_HIGH ) StepLrUpdaterHook (NORMAL ) NumClassCheckHook (LOW ) IterTimerHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook -------------------- before_train_iter: (VERY_HIGH ) StepLrUpdaterHook (LOW ) IterTimerHook (LOW ) EvalHook -------------------- after_train_iter: (ABOVE_NORMAL) OptimizerHook (NORMAL ) CheckpointHook (LOW ) IterTimerHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook -------------------- after_train_epoch: (NORMAL ) CheckpointHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook -------------------- before_val_epoch: (NORMAL ) NumClassCheckHook (LOW ) IterTimerHook (VERY_LOW ) TextLoggerHook -------------------- before_val_iter: (LOW ) IterTimerHook -------------------- after_val_iter: (LOW ) IterTimerHook -------------------- after_val_epoch: (VERY_LOW ) TextLoggerHook -------------------- 2021-10-28 15:20:02,469 - mmdet - INFO - workflow: [('train', 1)], max: 28 epochs stop in here Bug fix If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated! I found that when I do not change the codes. Sometimes, it stop in the beginning after_val_epoch: (VERY_LOW ) TextLoggerHook 2021-10-28 15:20:02,469 - mmdet - INFO - workflow: [('train', 1)], max: 28 epochs sometime stop in 2021-10-28 15:20:02,469 - mmdet - INFO - workflow: [('train', 1)], max: 28 epochs then stopped. 
2021-10-28 11:20:00,644 - mmdet - INFO - Epoch [1][50/156] lr: 9.990e-04, eta: 3:14:02, time: 2.696, data_time: 2.132, memory: 7312, loss_center_heatmap: 13.3996, loss_wh: 1.5383, loss_offset: 0.4650, loss: 15.4029, grad_norm: 51.5340 When I use the dataset which type is voc (openmmlab) root@dl-1142278230-pod-jupyter-fhx5r:~/lastestmmdet/mmdetection# python tools/train.py configs/centernet/centernet_resnet18_140e_coco.py 2021-11-01 09:46:06,488 - mmdet - INFO - Environment info: sys.platform: linux Python: 3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0] CUDA available: True GPU 0: Tesla V100-PCIE-32GB CUDA_HOME: /usr/local/cuda NVCC: Cuda compilation tools, release 10.1, V10.1.243 GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 PyTorch: 1.6.0 PyTorch compiling details: PyTorch built with: GCC 7.3 C++ Version: 201402 Intel(R) Math Kernel Library Version 2019.0.5 Product Build 20190808 for Intel(R) 64 architecture applications Intel(R) MKL-DNN v1.5.0 (Git Hash e2ac1fac44c5078ca927cb9b90e1b3066a0b2ed0) OpenMP 201511 (a.k.a. OpenMP 4.5) NNPACK is enabled CPU capability usage: AVX2 CUDA Runtime 10.2 NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75 CuDNN 7.6.5 Magma 2.5.2 Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF, TorchVision: 0.7.0 OpenCV: 4.5.4-dev MMCV: 1.3.16 MMCV Compiler: GCC 7.3 MMCV CUDA Compiler: 10.1 MMDetection: 2.18.0+db256a1 2021-11-01 09:46:06,787 - mmdet - INFO - Distributed training: False 2021-11-01 09:46:06,971 - mmdet - INFO - Config: dataset_type = 'VOCDataset' data_root = 'data/VOCdevkit/' img_norm_cfg = dict( mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', img_scale=(1024, 1024), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ] test_pipeline = [ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1024, 1024), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', 
size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ] data = dict( samples_per_gpu=36, workers_per_gpu=2, train=dict( type='RepeatDataset', times=3, dataset=dict( type='VOCDataset', ann_file=['data/VOCdevkit/VOC2007/ImageSets/Main/train.txt'], img_prefix=['data/VOCdevkit/VOC2007/'], pipeline=[ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='Resize', img_scale=(1024, 1024), keep_ratio=True), dict(type='RandomFlip', flip_ratio=0.5), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) ])), val=dict( type='VOCDataset', ann_file='data/VOCdevkit/VOC2007/ImageSets/Main/val.txt', img_prefix='data/VOCdevkit/VOC2007/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1024, 1024), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ]), test=dict( type='VOCDataset', ann_file='data/VOCdevkit/VOC2007/ImageSets/Main/test.txt', img_prefix='data/VOCdevkit/VOC2007/', pipeline=[ dict(type='LoadImageFromFile'), dict( type='MultiScaleFlipAug', img_scale=(1024, 1024), flip=False, transforms=[ dict(type='Resize', keep_ratio=True), dict(type='RandomFlip'), dict( type='Normalize', mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='ImageToTensor', keys=['img']), dict(type='Collect', keys=['img']) ]) ])) evaluation = dict(interval=1, metric='mAP') optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001) optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2)) lr_config = dict( policy='step', warmup='linear', warmup_iters=1000, warmup_ratio=0.001, step=[18, 24]) runner = dict(type='EpochBasedRunner', max_epochs=300) checkpoint_config = dict(interval=1) log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) custom_hooks = [dict(type='NumClassCheckHook')] dist_params = dict(backend='nccl') log_level = 'INFO' load_from = None resume_from = None workflow = [('train', 1)] model = dict( type='CenterNet', backbone=dict( type='ResNet', depth=18, norm_eval=False, norm_cfg=dict(type='BN'), init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18')), neck=dict( type='CTResNetNeck', in_channel=512, num_deconv_filters=(256, 128, 64), num_deconv_kernels=(4, 4, 4), use_dcn=False), bbox_head=dict( type='CenterNetHead', num_classes=6, in_channel=64, feat_channel=64, loss_center_heatmap=dict(type='GaussianFocalLoss', loss_weight=1.0), loss_wh=dict(type='L1Loss', loss_weight=0.1), loss_offset=dict(type='L1Loss', loss_weight=1.0)), train_cfg=None, test_cfg=dict(topk=100, local_maximum_kernel=3, max_per_img=100)) work_dir = './work_dirs/centernet_resnet18_140e_coco' gpu_ids = range(0, 1) 2021-11-01 09:46:07,148 - mmdet - INFO - initialize ResNet with init_cfg {'type': 'Pretrained', 'checkpoint': 'torchvision://resnet18'} 2021-11-01 09:46:07,149 - mmcv - INFO - load model from: torchvision://resnet18 2021-11-01 09:46:07,149 - mmcv - INFO - Use load_from_torchvision loader 2021-11-01 09:46:07,241 - mmcv - WARNING - The model and loaded state dict do not match exactly unexpected key 
in source state_dict: fc.weight, fc.bias 2021-11-01 09:46:14,535 - mmdet - INFO - Start running, host: root@dl-1142278230-pod-jupyter-fhx5r, work_dir: /root/lastestmmdet/mmdetection/work_dirs/centernet_resnet18_140e_coco 2021-11-01 09:46:14,536 - mmdet - INFO - Hooks will be executed in the following order: before_run: (VERY_HIGH ) StepLrUpdaterHook (NORMAL ) CheckpointHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook before_train_epoch: (VERY_HIGH ) StepLrUpdaterHook (NORMAL ) NumClassCheckHook (LOW ) IterTimerHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook before_train_iter: (VERY_HIGH ) StepLrUpdaterHook (LOW ) IterTimerHook (LOW ) EvalHook after_train_iter: (ABOVE_NORMAL) OptimizerHook (NORMAL ) CheckpointHook (LOW ) IterTimerHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook after_train_epoch: (NORMAL ) CheckpointHook (LOW ) EvalHook (VERY_LOW ) TextLoggerHook before_val_epoch: (NORMAL ) NumClassCheckHook (LOW ) IterTimerHook (VERY_LOW ) TextLoggerHook before_val_iter: (LOW ) IterTimerHook after_val_iter: (LOW ) IterTimerHook after_val_epoch: (VERY_LOW ) TextLoggerHook after_run: (VERY_LOW ) TextLoggerHook 2021-11-01 09:46:14,537 - mmdet - INFO - workflow: [('train', 1)], max: 300 epochs 2021-11-01 09:46:14,537 - mmdet - INFO - Checkpoints will be saved to /root/lastestmmdet/mmdetection/work_dirs/centernet_resnet18_140e_coco by HardDiskBackend. 2021-11-01 09:47:59,687 - mmdet - INFO - Epoch [1][50/214] lr: 9.990e-04, eta: 1 day, 13:28:14, time: 2.103, data_time: 0.519, memory: 21603, loss_center_heatmap: 30.6203, loss_wh: 1.4170, loss_offset: 0.4195, loss: 32.4568, grad_norm: 112.8104 2021-11-01 09:49:26,658 - mmdet - INFO - Epoch [1][100/214] lr: 1.998e-03, eta: 1 day, 10:12:23, time: 1.739, data_time: 0.142, memory: 21603, loss_center_heatmap: 3.8281, loss_wh: 1.4484, loss_offset: 0.3195, loss: 5.5960, grad_norm: 5.6225 2021-11-01 09:50:51,520 - mmdet - INFO - Epoch [1][150/214] lr: 2.997e-03, eta: 1 day, 8:51:07, time: 1.697, data_time: 0.091, memory: 21603, loss_center_heatmap: 3.5224, loss_wh: 1.4239, loss_offset: 0.2561, loss: 5.2024, grad_norm: 5.4901 2021-11-01 09:52:15,438 - mmdet - INFO - Epoch [1][200/214] lr: 3.996e-03, eta: 1 day, 8:04:44, time: 1.678, data_time: 0.087, memory: 21603, loss_center_heatmap: 3.3468, loss_wh: 1.4444, loss_offset: 0.2486, loss: 5.0398, grad_norm: 5.0586 2021-11-01 09:52:38,171 - mmdet - INFO - Saving checkpoint at 1 epochs [ ] 0/660, elapsed: 0s, ETA:Traceback (most recent call last): File "tools/train.py", line 189, in main() File "tools/train.py", line 185, in main meta=meta) File "/root/lastestmmdet/mmdetection/mmdet/apis/train.py", line 177, in train_detector runner.run(data_loaders, cfg.workflow) File "/root/.local/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run epoch_runner(data_loaders[i], **kwargs) File "/root/.local/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 54, in train self.call_hook('after_train_epoch') File "/root/.local/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 307, in call_hook getattr(hook, fn_name)(self) File "/root/.local/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 267, in after_train_epoch self._do_evaluate(runner) File "/root/lastestmmdet/mmdetection/mmdet/core/evaluation/eval_hooks.py", line 18, in _do_evaluate results = single_gpu_test(runner.model, self.dataloader, show=False) File "/root/lastestmmdet/mmdetection/mmdet/apis/test.py", line 28, in single_gpu_test result = model(return_loss=False, rescale=True, **data) File 
"/root/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/root/.local/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 42, in forward return super().forward(*inputs, **kwargs) File "/root/.local/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward return self.module(*inputs[0], **kwargs[0]) File "/root/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/root/.local/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func return old_func(*args, **kwargs) File "/root/lastestmmdet/mmdetection/mmdet/models/detectors/base.py", line 174, in forward return self.forward_test(img, img_metas, **kwargs) File "/root/lastestmmdet/mmdetection/mmdet/models/detectors/base.py", line 147, in forward_test return self.simple_test(imgs[0], img_metas[0], **kwargs) File "/root/lastestmmdet/mmdetection/mmdet/models/detectors/single_stage.py", line 103, in simple_test feat, img_metas, rescale=rescale) File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 351, in simple_test return self.simple_test_bboxes(feats, img_metas, rescale=rescale) File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/dense_test_mixins.py", line 38, in simple_test_bboxes *outs, img_metas=img_metas, rescale=rescale) File "/root/.local/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func return old_func(*args, **kwargs) File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/centernet_head.py", line 294, in get_bboxes with_nms=with_nms)) File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/centernet_head.py", line 338, in _get_bboxes_single batch_border = det_bboxes.new_tensor(img_meta['border'])[..., KeyError: 'border' (openmmlab) root@dl-1142278230-pod-jupyter-fhx5r:~/lastestmmdet/mmdetection# File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/centernet_head.py", line 338, in _get_bboxes_single batch_border = det_bboxes.new_tensor(img_meta['border'])[..., KeyError: 'border' Please follow the test_pipeline in configs/centernet/centernet_resnet18_dcnv2_140e_coco.py , and add RandomCenterCropPad to your config. Currently, this transform is necessary for CenterNet. When I use that, it stoped. I gave up, I finally used the original Centernet. Thank you very much! File "/root/lastestmmdet/mmdetection/mmdet/models/dense_heads/centernet_head.py", line 338, in _get_bboxes_single batch_border = det_bboxes.new_tensor(img_meta['border'])[..., KeyError: 'border' Please follow the test_pipeline in configs/centernet/centernet_resnet18_dcnv2_140e_coco.py , and add RandomCenterCropPad to your config. Currently, this transform is necessary for CenterNet. This is stupid
gharchive/issue
2021-10-28T07:38:35
2025-04-01T06:39:51.262209
{ "authors": [ "RangiLyu", "bennyji", "taofuyu" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/6395", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1230817514
DataLoader worker (pid(s) 659) exited unexpectedly I keep getting this runtime error when trying to train YOLCAT. I am using a batch size of 1 and I am training with 53 images. The images have a size of ‪4,000 x 2,250‬. I have successfully trained Mask Scoring R-CNN, Mask R-CNN and HTC with the same set-up. I only changed the config, weights file and 'num_class'. I used the same images... Any advice would be much appreciated. I have included the error printout: ‪ @David-Biggs The information provided is relatively small, and it is difficult for me to judge. I guess the possible reason is that there are not enough resources. @David-Biggs The information provided is relatively small, and it is difficult for me to judge. I guess the possible reason is that there are not enough resources. Thanks @hhaAndroid! I am training on Google Colab Pro, which I think generally has sufficient resources (I did initially think that this was the issue). What additional information do you think I should provide? Thanks Again. @David-Biggs It is recommended to run https://github.com/open-mmlab/mmdetection/blob/master/tools/misc/browse_dataset.py first to ensure that there is no problem with the dataloader output. Closed due to no reply for more than 15 days.
gharchive/issue
2022-05-10T08:40:50
2025-04-01T06:39:51.269112
{ "authors": [ "David-Biggs", "hhaAndroid" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/7953", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1325911151
How to calculate average recall (AR) of each class

Hi, After completing each epoch I am getting the following results, but it shows the average recall over all classes. I want to check the average recall of each class. Can anyone please guide me on how to do this? Thanks. I was using 5 classes and got the following results.

Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.320
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=300 ] = 0.320
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=1000 ] = 0.320
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.128
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.303
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.341

@ibrahim1611 python test.py xxxx --eval-options evaluation.classwise=True

@hhaAndroid It only shows the average precision (AP) of each class.

@ibrahim1611 Sorry. Currently no support.

@ibrahim1611 Sorry. Currently no support.

So how do we do it?

Option 1: Go to the mmdetection/mmdet/evaluation/metrics/coco_metrics.py file and change classwise: bool = False to classwise: bool = True in the CocoMetric initializer. This will create mAP values at different thresholds (same format as the general mAP information) for each class as follows:

If you want to obtain AR values in the same way, you can go to line 520:

if self.classwise:  # Compute per-category AP
    precisions = coco_eval.eval['precision']

Option 2: Implement classwise precision and recall functions (taken from the official GroundingDINO implementation):

import itertools

import numpy as np
from tabulate import tabulate

def per_class_AR_table(coco_eval, class_names=None, headers=["class", "AR"], colums=6):
    per_class_AR = {}
    recalls = coco_eval.eval["recall"]
    # dimension of recalls: [TxKxAxM]
    # recall has dims (iou, cls, area range, max dets)
    assert len(class_names) == recalls.shape[1]

    for idx, name in enumerate(class_names):
        recall = recalls[:, idx, 0, -1]
        recall = recall[recall > -1]
        ar = np.mean(recall) if recall.size else float("nan")
        per_class_AR[name] = float(ar * 100)

    num_cols = min(colums, len(per_class_AR) * len(headers))
    result_pair = [x for pair in per_class_AR.items() for x in pair]
    row_pair = itertools.zip_longest(*[result_pair[i::num_cols] for i in range(num_cols)])
    table_headers = headers * (num_cols // len(headers))
    table = tabulate(
        row_pair,
        tablefmt="pipe",
        floatfmt=".3f",
        headers=table_headers,
        numalign="left",
    )
    return table

def per_class_AP_table(coco_eval, class_names=None, headers=["class", "AP"], colums=6):
    per_class_AP = {}
    precisions = coco_eval.eval["precision"]
    # dimension of precisions: [TxRxKxAxM]
    # precision has dims (iou, recall, cls, area range, max dets)
    assert len(class_names) == precisions.shape[2]

    for idx, name in enumerate(class_names):
        # area range index 0: all area ranges
        # max dets index -1: typically 100 per image
        precision = precisions[:, :, idx, 0, -1]
        precision = precision[precision > -1]
        ap = np.mean(precision) if precision.size else float("nan")
        per_class_AP[name] = float(ap * 100)

    num_cols = min(colums, len(per_class_AP) * len(headers))
    result_pair = [x for pair in per_class_AP.items() for x in pair]
    row_pair = itertools.zip_longest(*[result_pair[i::num_cols] for i in range(num_cols)])
    table_headers = headers * (num_cols // len(headers))
    table = tabulate(
        row_pair,
        tablefmt="pipe",
        floatfmt=".3f",
        headers=table_headers,
        numalign="left",
    )
    return table

and call them as follows in the same lines:

if self.classwise:  # Compute per-category AP
    info = ""
    AP_table = per_class_AP_table(coco_eval, class_names=cat_list)
    info += "per class AP:\n" + AP_table + "\n"
    AR_table = per_class_AR_table(coco_eval, class_names=cat_list)
    info += "per class AR:\n" + AR_table + "\n"
    print(info)

This solution will create the following output:
gharchive/issue
2022-08-02T14:04:35
2025-04-01T06:39:51.276992
{ "authors": [ "YCAyca", "hhaAndroid", "ibrahim1611", "nanhui69" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/issues/8474", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1804130720
Provide inference of Detic Thanks for your contribution and we appreciate it a lot. The following instructions would make your pull request more healthy and more easily get feedback. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers. Motivation Provide inference of Detic. Modification Add Detic1 to projects. Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.You have signed the CLA already but the status is still pending? Let us recheck it. Hi @ryylcc, We'd like to express our appreciation for your valuable contributions to the mmdetection. Your efforts have significantly aided in enhancing the project's quality. It is our pleasure to invite you to join our community thorugh Discord_Special Interest Group (SIG) channel. This is a great place to share your experiences, discuss ideas, and connect with other like-minded people. To become a part of the SIG channel, send a message to the moderator, OpenMMLab, briefly introduce yourself and mention your open-source contributions in the #introductions channel. Our team will gladly facilitate your entry. We eagerly await your presence. Please follow this link to join us: ​https://discord.gg/UjgXkPWNqA. If you're on WeChat, we'd also love for you to join our community there. Just add our assistant using the WeChat ID: openmmlabwx. When sending the friend request, remember to include the remark "mmsig + Github ID". Thanks again for your awesome contribution, and we're excited to have you as part of our community!
gharchive/pull-request
2023-07-14T04:12:00
2025-04-01T06:39:51.283405
{ "authors": [ "CLAassistant", "OpenMMLab-Assistant-004", "ryylcc" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/pull/10639", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1159787611
[Fix] Fix num_last_epochs logic for the hooks in YOLOX training Motivation Based on #7284 , I ran a few more experiments with num_last_epochs. I could still see that the before_train_epoch hooks didn't come in at the proper epoch. They came in one epoch earlier than I expected them to, so I made a PR to fix it. On the other hand, the learning rate schedule, which is the learning rate is fixed during the num_last_epochs, works fine. This means that the number of epochs for fixed learning rate and the number of epochs for non-mixup-mosaic augmentation didn't match as well. Modification I modified two files, sync_norm_hook.py and yolox_mode_switch_hook.py that implement before_train_epoch hook using num_last_epochs. In both of the files, the if logic was modified from if (epoch + 1) == runner.max_epochs - self.num_last_epochs: to if epoch == runner.max_epochs - self.num_last_epochs: BC-breaking (Optional) Yes Use cases (Optional) If this PR introduces a new feature, it is better to list some use cases here, and update the documentation. Checklist Pre-commit or other linting tools are used to fix the potential lint issues. The modification is covered by complete unit tests. If not, please add more unit test to ensure the correctness. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls. The documentation has been modified accordingly, like docstring or example tutorials. The official YOLOX code also adds one more epoch. https://github.com/Megvii-BaseDetection/YOLOX/pull/1152 proposes a fix for the code. Since the authors have not commented so far, we cannot determine whether it is a bug or an intentional specification. If it is intentional, num_last_epochs=15 -> num_last_epochs=16 in yolox_mode_switch_hook.py, sync_norm_hook.py This PR makes a BC-breaking change. If we want to reproduce the settings of the published trained models, we need to modify yolox_s_8x8_300e_coco.py. For users that use their own configs, it is worth considering raising warnings. @shinya7y The author in the paper says We adopt the MixUp and Mosaic implementation in our model and close it for the last 15 epochs So I think it's a bug but let's wait for the answer. What a coincidence that person made a PR 1 day after I published an issue here :) @HJoonKwon Sorry for the late reply. Although the understanding is a bit wrong, in fact, this is not a bug, this is completely according to the official way of writing. If num_last_epochs=15, it means switching from 300-15 @hhaAndroid Still confused.. Switching from 300-15(in the meta data perspective) means that training closes the augmentation from 285th epoch to 300th epoch. And that is 16 epochs, not 15 epochs. There might be some confusion in the 'writing' in the paper. But, sure we may wait! :) Have you tried it? Are there any differences in the results? While the official code still haven't updated. I would suggest that the +1 be removed from num_last_epochs , and setting it to 16 in the configs with a comment that leads to these discussions. This could be extra confusing as mentioned in #8560, when people expect to tune off this thing through num_last_epochs=0.
gharchive/pull-request
2022-03-04T15:50:06
2025-04-01T06:39:51.293577
{ "authors": [ "HJoonKwon", "hhaAndroid", "nemonameless", "shinya7y", "voldemortX" ], "repo": "open-mmlab/mmdetection", "url": "https://github.com/open-mmlab/mmdetection/pull/7322", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1284832851
Release plan for MV-FCOS3D++ Hi, I saw that the codebase for MV-FCOS3D++ used in Waymo 2D challenge is planned to release on mmdetection3d. When might the code be released? Also, I was wondering what exactly the "FCOS3D++" model was compared to FCOS3D The code of MV-FCOS3D++ for camera-only 3D detection will be released in a month if everything goes well. The "FCOS3D++" model is basically similar to the released version of PGD. It only serves as a monocular baseline for pretraining the backbone for perspective view parsing in the MV-FCOS3D++. Please refer to the report for more details. The code of MV-FCOS3D++ together with our recent work DfM are preliminarily released here. We will further refine it recently and support them in the mmdet3d. Please stay tuned.
gharchive/issue
2022-06-26T07:28:49
2025-04-01T06:39:51.296423
{ "authors": [ "Divadi", "Tai-Wang" ], "repo": "open-mmlab/mmdetection3d", "url": "https://github.com/open-mmlab/mmdetection3d/issues/1586", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
923642352
Wind Onshore Installed Capacity for HB and HH @chrwm Can you please check whether the installed capacities for Wind Onshore for HB and HH are correct? It seems that there is a decimal error in the 2030/2050 values. I see it has been fixed now; I will check the new data and then close the issue once it is confirmed!
gharchive/issue
2021-06-17T08:35:37
2025-04-01T06:39:51.350434
{ "authors": [ "khainsch" ], "repo": "open-modex/models", "url": "https://github.com/open-modex/models/issues/24", "license": "CC0-1.0", "license_type": "permissive", "license_source": "github-api" }
1122645851
redesignv2: Inconsistent escaping of quotes I seem to notice an inconsistent escaping of quotes when querying for tasks: Steps to reproduce (using redesignv2 checkout) exact.repos.txt ros2 launch rmf_demos_ign office.launch.xml headless:=1 server_uri:=ws://localhost:8001 use_sim_time:=false rmf_api_server ros2 run rmf_demos_tasks dispatch_teleop -s coe curl -X 'GET' 'http://localhost:8000/tasks?task_id=compose.dispatch-0&limit=100&offset=0' -H 'accept: application/json' Gives this output [ { "booking": { "id": "compose.dispatch-0", "unix_millis_earliest_start_time": 0, "priority": null, "labels": null }, "category": "teleop", "detail": {}, "unix_millis_start_time": null, "unix_millis_finish_time": null, "original_estimate_millis": 60000, "estimate_millis": 84267, "assigned_to": { "group": "tinyRobot", "name": "tinyRobot1" }, "status": "underway", "dispatch": null, "phases": { "1": { "id": 1, "category": "Sequence", "detail": "[{\"category\":\"Go to [place:coe]\",\"detail\":\"Moving the robot from [place:tinyRobot1_charger] to [place:coe]\"},{\"category\":\"Perform action\",\"detail\":\"Performing action teleop at waypoint [[place:coe]]\"}]", "unix_millis_start_time": null, "unix_millis_finish_time": null, "original_estimate_millis": 87509, "estimate_millis": 84267, "final_event_id": 0, "events": { "0": { "id": 0, "status": "underway", "name": "Sequence", "detail": {}, "deps": [ 1, 2 ] }, "1": { "id": 1, "status": "underway", "name": "Go to [place:coe]", "detail": "Moving the robot from [place:tinyRobot1_charger] to [place:coe]", "deps": [ 4, 3, 8 ] }, "2": { "id": 2, "status": "standby", "name": "Perform action", "detail": "Performing action teleop at waypoint [[place:tinyRobot1_charger]]", "deps": [] }, "3": { "id": 3, "status": "standby", "name": "Pass through [door:coe_door]", "detail": {}, "deps": [ 5, 6, 7 ] }, "4": { "id": 4, "status": "underway", "name": "Move [tinyRobot/tinyRobot1] to ( 8.91186 -6.18135 2.76447)", "detail": {}, "deps": [] }, "5": { "id": 5, "status": "standby", "name": "Open door \"coe_door\"", "detail": {}, "deps": [] }, "6": { "id": 6, "status": "standby", "name": "Move [tinyRobot/tinyRobot1] to ( 6.51731 -5.23293 2.76447)", "detail": {}, "deps": [] }, "7": { "id": 7, "status": "standby", "name": "Close door \"coe_door\"", "detail": {}, "deps": [] }, "8": { "id": 8, "status": "standby", "name": "Move [tinyRobot/tinyRobot1] to ( 5.34648 -4.97681 2.92624)", "detail": {}, "deps": [] } }, "skip_requests": null } }, "completed": [], "active": 1, "pending": [], "interruptions": null, "cancellation": null, "killed": null } ] I'm not sure if this is any issue, especially when output on the upcoming event viewer I think this is a bug on rmf side, you can test it by turning on debug logs RMF_API_SERVER_LOG_LEVEL=DEBUG npm start | grep task_state_update DEBUG:fastapi.RmfGatewayApp:{'data': {'active': 1, 'assigned_to': {'group': 'tinyRobot', 'name': 'tinyRobot1'}, 'booking': {'id': 'compose.dispatch-0', 'unix_millis_earliest_start_time': 1644198634002}, 'category': 'teleop', 'completed': [], 'detail': '', 'estimate_millis': 84557, 'original_estimate_millis': 60000, 'pending': [], 'phases': {'1': {'category': 'Sequence', 'detail': '[{"category":"Go to [place:coe]","detail":"Moving the robot from [place:tinyRobot1_charger] to [place:coe]"},{"category":"Perform action","detail":"Performing action teleop at waypoint [[place:coe]]"}]', 'estimate_millis': 84557, 'events': {'0': {'deps': [1, 2], 'detail': '', 'id': 0, 'name': 'Sequence', 'status': 'underway'}, '1': {'deps': [4, 3, 
8], 'detail': 'Moving the robot from [place:tinyRobot1_charger] to [place:coe]', 'id': 1, 'name': 'Go to [place:coe]', 'status': 'underway'}, '2': {'deps': [], 'detail': 'Performing action teleop at waypoint [[place:tinyRobot1_charger]]', 'id': 2, 'name': 'Perform action', 'status': 'standby'}, '3': {'deps': [5, 6, 7], 'detail': '', 'id': 3, 'name': 'Pass through [door:coe_door]', 'status': 'standby'}, '4': {'deps': [], 'detail': '', 'id': 4, 'name': 'Move [tinyRobot/tinyRobot1] to ( 8.91186 -6.18135 2.76447)', 'status': 'underway'}, '5': {'deps': [], 'detail': '', 'id': 5, 'name': 'Open door "coe_door"', 'status': 'standby'}, '6': {'deps': [], 'detail': '', 'id': 6, 'name': 'Move [tinyRobot/tinyRobot1] to ( 6.51731 -5.23293 2.76447)', 'status': 'standby'}, '7': {'deps': [], 'detail': '', 'id': 7, 'name': 'Close door "coe_door"', 'status': 'standby'}, '8': {'deps': [], 'detail': '', 'id': 8, 'name': 'Move [tinyRobot/tinyRobot1] to ( 5.34648 -4.97681 2.92624)', 'status': 'standby'}}, 'final_event_id': 0, 'id': 1, 'original_estimate_millis': 87509}}, 'status': 'underway', 'unix_millis_finish_time': 1644198771922, 'unix_millis_start_time': 1644198636257}, 'type': 'task_state_update'} The above shows the message received from rmf "as is", notice that the details field is a string.
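For anyone consuming this payload, the "inconsistent escaping" is just the phase-level detail field being a JSON document serialized into a string, so a second parse recovers the structure. A minimal sketch in Python — the endpoint and payload shape are taken from the curl example above, nothing else is assumed:

```python
import json
import urllib.request

# Same query as the curl example in this issue.
url = "http://localhost:8000/tasks?task_id=compose.dispatch-0&limit=100&offset=0"
with urllib.request.urlopen(url) as resp:
    tasks = json.load(resp)

phase = tasks[0]["phases"]["1"]

# phase["detail"] arrives as a JSON array serialized into a string, which is why
# its quotes show up escaped; json.loads() turns it back into structured data.
detail = phase["detail"]
if isinstance(detail, str) and detail:
    detail = json.loads(detail)

for item in detail:
    print(f'{item["category"]}: {item["detail"]}')
```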
gharchive/issue
2022-02-03T04:35:26
2025-04-01T06:39:51.593820
{ "authors": [ "cnboonhan", "koonpeng" ], "repo": "open-rmf/rmf-web", "url": "https://github.com/open-rmf/rmf-web/issues/574", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1583027043
Favorite tasks What's new Favorite tasks are added. The user will be able to create favorite tasks for those he/she considers recurrent. The user will be able to edit his/her favorite tasks. The user will be able to delete his/her favorite tasks. chrome-capture-2023-1-13.webm chrome-capture-2023-1-13 (1).webm chrome-capture-2023-1-13 (2).webm chrome-capture-2023-1-13 (3).webm Self-checks [ ] I have prototyped this new feature (if necessary) on Figma [ ] I'm familiar with and follow this Typescript guideline [ ] I added unit-tests for new components [ ] I tried testing edge cases [ ] I tested the behavior of the components that interact with the backend, with an e2e test Discussion Per VC, we will be merging this into deploy/hammer, to be backported into main after https://github.com/open-rmf/rmf_api_msgs/pull/33 and all other API changes for this project has been merged. This will help us reduce the number of PRs for generation of API messages for the API server, and allow us to fix any incoming broken behaviors once the API is updated. I noticed that there was a bunch of commits that were not signed earlier in this PR, I don't think it is a big deal, but let's make sure each commit is properly signed with gpg next time. Thanks!
gharchive/pull-request
2023-02-13T20:36:41
2025-04-01T06:39:51.600953
{ "authors": [ "Angatupyry", "aaronchongth" ], "repo": "open-rmf/rmf-web", "url": "https://github.com/open-rmf/rmf-web/pull/676", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1091274362
python module for rmf_api_msgs schemas The objective here is to make the rmf_api_msgs schemas and data models available to users as a python package. Similar to the existing cpp header generation, the schemas here are generated as python functions. After compilation, users can obtain the schemas and data models by just importing the rmf_api_msgs python module. Description During compilation of rmf_api_msgs pkg, cmakelist will run 2 commands to generate the python rmf_api_msgs.schemas and rmf_api_msgs.models: rmf_api_msgs.schemas Execute generate_py_schemas.py. It will load all existing /schemas/*.json, and generate schemas.py according to the template: schemas_template.jinja2 rmf_api_msgs.models: This depends on an external datamodel generation pkg: data-model-codegen data-model-codegen script will generate respective data models according to the schemas. The output is located at rmf_api_msgs/rmf_api_msgs/models/* Lastly, ament_python_install_package() in the cmakelist will install the rmf_api_msgs module The implementation here is very experimental, not sure if this is the "right way" to generate py schemas and models. [NOT RELEVANT] Also, added some minor printout in rust scripts for verbosity, greatly help in debugging To Test To view the python modules: colcon build --packages-select rmf_api_msgs source install.setup python3 >>> import rmf_api_msgs.schemas as schemas >>> from rmf_api_msgs.models import task_state >>> help(schemas) >>> help(task_state) Simple validation script in py: ros2 run rmf_api_msgs check_sample.py -i src/rmf_api_msgs/rmf_api_msgs/samples/task_log/multi_dropoff_delivery.json --task_log **Maybe shouldn't depend on ament_cmake here?? Feedback is more than welcome. Thanks for testing it. The reason for using ament_cmake is to install the python scripts as a package of rmf_api_msgs via the existing cmakelist, specifically with ament_python_install_package(). Hopefully export it as ament_cmake will also make this pkg discoverable as an ament or ros2 pkg, helps in rosdep.
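To make the "simple validation script" idea concrete, here is a rough sketch of what validating a bundled sample against one of the generated schemas could look like. This is not the actual check_sample.py: the schemas.task_log() function name and the sample path are assumptions based on the description and test commands above, and jsonschema is an extra dependency:

```python
import json

from jsonschema import validate  # third-party: pip install jsonschema
from rmf_api_msgs import schemas

# Path taken from the test command above; adjust to wherever the samples live
# after installation (assumption for illustration).
with open("src/rmf_api_msgs/rmf_api_msgs/samples/task_log/multi_dropoff_delivery.json") as f:
    sample = json.load(f)

# The generated module is described above as exposing each schema as a function
# returning the JSON schema dict; the exact name task_log() is an assumption.
schema = schemas.task_log()

# Raises jsonschema.exceptions.ValidationError if the sample does not conform.
validate(instance=sample, schema=schema)
print("sample conforms to the task_log schema")
```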
gharchive/pull-request
2021-12-30T19:38:16
2025-04-01T06:39:51.607684
{ "authors": [ "youliangtan" ], "repo": "open-rmf/rmf_api_msgs", "url": "https://github.com/open-rmf/rmf_api_msgs/pull/9", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
499815462
fix: update schema utils to fix browser issue fixes #227 :tada: This PR is included in version 1.12.1 :tada: The release is available on: npm package (@latest dist-tag) GitHub release Your semantic-release bot :package::rocket:
gharchive/pull-request
2019-09-28T20:02:48
2025-04-01T06:39:51.610238
{ "authors": [ "openrpc-bastion", "shanejonas" ], "repo": "open-rpc/generator-client", "url": "https://github.com/open-rpc/generator-client/pull/286", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1590811480
feat: update onboarding steps What type of PR is this? (check all applicable) [x] 🍕 Feature [ ] 🐛 Bug Fix [ ] 📝 Documentation Update [ ] 🎨 Style [ ] 🧑‍💻 Code Refactor [ ] 🔥 Performance Improvements [ ] ✅ Test [ ] 🤖 Build [ ] 🔁 CI [ ] 📦 Chore (Release) [ ] ⏩ Revert Description Updates the onboarding steps to match the new design linked in the issue Related Tickets & Documents closes #801 Mobile & Desktop Screenshots/Recordings Step 2: Added tests? [ ] 👍 yes [x] 🙅 no, because they aren't needed [ ] 🙋 no, because I need help [optional] Are there any post-deployment tasks we need to perform? [optional] What gif best describes this PR or how it makes you feel? Need the images (with the active and normal variants) of the new steps for the left side: Are they not exportable in Figma? "Export as SVG" Are they not exportable in Figma? "Export as SVG" Tried, but they weren't exported with the circle, or maybe I don't know how to properly export them @getaheaddev perhaps you can share/upload step 2/3 active/inactive svgs first thing tomorrow. I'm also facing issues with the backend. 1- with /auth/onboarding: I keep getting server error 500 when trying to send a request to it. I tried with body of "" & and with body of {"id": []} (inspired by how it is called in frontend) and in both I get the same error. Side note: I don't know how onboarding in api was made, but given that the frontend used to send repo ids to it, that may need changing (since onboarding no longer collects repo ids). 2- with /auth/profile: When I try to send a request with name, email (sending them because they are required), and timezone, I get error 404 and message "user not found". It works if I omit the timezone. Suggestion: Would it be possible to make the endpoint not require name & email? As in my use-case I only need to update the timezone of the user. @Deadreyo is still blocked? I think @0-vortex shipped a fix. The /auth/onboarding endpoint has been updated to accept the new onboarding fields https://beta.api.opensauced.pizza/#/Authentication service/postOnboarding The /auth/onboarding endpoint has been updated to accept the new onboarding fields Cool. I get a message everyday from people confused by the onboarding. The /auth/onboarding endpoint has been updated to accept the new onboarding fields https://beta.api.opensauced.pizza/#/Authentication service/postOnboarding The issue that I've reported to you still persists: Error 400: { "message": [ "interests must be an array", "timezone must be a string" ] } on sending {"interests":["javascript","react"],"timezone":"Egypt Standard Time"} Frontend-wise, this PR is ready for review. Backend-wise, onboarding won't work due to api issues but that can be fixed async. We can save time if we want this asap by reviewing it early (excluding the api thing). Frontend-wise, this PR is ready for review. Backend-wise, onboarding won't work due to api issues but that can be fixed async. We can save time if we want this asap by reviewing it early (excluding the api thing). Can you update description with screenshots of all steps? Can you update description with screenshots of all steps? Updated The deploy preview is failing to build @Deadreyo onboarding images are still broken All the keys here should be lowercase to match the interests https://github.com/open-sauced/insights/blob/3d6fa77c9fcb18e6a2088aa148f58752422f14fc/components/atoms/LanguagePill/LanguagePill.tsx#L19
gharchive/pull-request
2023-02-19T18:56:36
2025-04-01T06:39:51.628572
{ "authors": [ "Deadreyo", "bdougie", "brandonroberts" ], "repo": "open-sauced/insights", "url": "https://github.com/open-sauced/insights/pull/880", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
284213023
Which NLP solver to use? @donboyd5, @evtedeschi3 please feel free to rename as you see fit. You two are the admins. Are you wedded to IPOPT? What about other options in R like nleqslv? I am not an expert on nonlinear solvers, but have worked with many and have developed satisficing preferences after extensive experimentation. A) We need an NLP solver that, at a miniumum: Can impose bounds on variables (most can), sometimes called box constraints. In our case, our variables (the x[i] adjustment factors that are multiplied by record weights to arrive at adjusted weights) will need to be >=0 and we may want to impose upper bounds, too. Can handle linear equality and inequality constraints. In our case, an equality constraint might be that we want the number of returns calculated with the new, adjusted weights, to equal 12 million for the state of California. An inequality constraint might be that we want total capital gains income in the $75-100k agi range in California, calculated using the new, adjusted weights, to fall between $42 billion and $44 billion. (These numbers are all made up.) Any NLP solver that can do this, AND solve our problem (no matter how large or complicated it gets), will work. B) However, some variants of our problem can get large, complicated, and hard to solve. In my experience the following attributes of an NLP solver are also valuable: It can handle sparse constraint-coefficient matrices. Suppose you have 2 constraints, where x[i] is the adjustment factor we are solving for, wt[i] is the set of weights we are starting with, and cg and pension are capital gains and pensions income on hte file, respectively: a) The number of returns with positive capital gains must equal 1,000: left hand side: sum over i: x[i] * wt[i] * (cg[i] > 0) RHS: 1000 b) The number of returns with positive pension income must equal 10,000 LHS: sum over i: x[i] * wt[i] * (pension[i] > 0) RHS: 10000 A constraint coefficient (cc) is the derivative of the LHS with respect to x[i] - how much will the constraint value on the adjusted file change if we change x[i]? In equation a, the cc is wt[i] for those records where cg[i] is > 0, and zero for all other records (probably most of them). (If we increase x[i] by 1, the value of the LHS of constraint a will change by wt[i] for returns that have capital gains. Similarly for equation b. In our case, all of our constraints will be linear functions of x[i], like those above, with no interaction between x[i] and x[something else], and so all of our constraint coefficients will be constants (which is nice for us - they don't have to be recalculated with each new guessed-at set of x[i]'s). A constraint coefficient matrix (ccm) is a matrix with a row for each i (150k rows, if we are looking at a single state in isolation) and a column for each constraint. In our case, it would be a 150k x 2 matrix. In column 1, corresponding to constraint a, the values would be wt[i] for those records with capital gains, and zero otherwise. Similarly for column 2. Perhaps only 10% of the elements of the 300k element ccm will be nonzero, but we have reserved space for all 300k. A matrix with a lot of zeros is a sparse matrix. Some nonlinear solvers take advantage of sparsity, storing only the nonzero values in a compact way (we'll get to that later) and also doing calculations with special methods, making them much more able to deal with very large problems. 
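To make the two-constraint example above concrete, here is a small Python/SciPy sketch (not from the original discussion; the weights and income values are fabricated stand-ins) showing what the sparse constraint-coefficient matrix and the variable bounds look like before they are handed to a solver such as Ipopt:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

n = 150_000                        # records, roughly the size discussed above
wt = rng.uniform(50, 500, n)       # starting weights -- fabricated for illustration
cg = rng.normal(0, 1, n)           # stand-ins for capital gains and pension income
pension = rng.normal(0, 1, n)

# Constraint coefficients: d(LHS)/d(x[i]) = wt[i] where the indicator holds, else 0.
cc_a = wt * (cg > 0)        # constraint a: returns with positive capital gains
cc_b = wt * (pension > 0)   # constraint b: returns with positive pension income

# The 150k x 2 constraint-coefficient matrix, stored sparsely (only nonzeros kept).
ccm = sp.csc_matrix(np.column_stack([cc_a, cc_b]))
print(f"nonzero coefficients: {ccm.nnz:,} of {2 * n:,} entries")

# A solver such as Ipopt would then search for x subject to the box constraints
# x[i] >= 0 (optionally with upper bounds) such that ccm.T @ x hits the targets.
targets = np.array([1_000.0, 10_000.0])
lower_bounds = np.zeros(n)
x0 = np.ones(n)                    # typical starting point: keep the original weights
print("constraint values at x0:", ccm.T @ x0, "targets:", targets)
```

Because the constraints are linear in x, these coefficients never change between iterations, so the sparse matrix is built once and reused — exactly the property that lets Ipopt scale to the larger cases described below.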
(Imagine that we got to a situation where we had 8 million records and 1,000 constraints, for a 9 billion element constraint coefficient matrix. Suppose further - and this is not unrealistic - that only 4% of the elements, or 360 million, were nonzero. It would be a good candidate for sparse matrix methods.) Anyway, Ipopt can use sparse matrix methods making it well suited for large problems. Ipopt is an interior point solver. Another common class of solver methods is based on generalized reduced gradient methods. Interior point methods seem to work very well on large convex problems like ours are likely to be. But I don't know much more about it than that. Nonlinear solvers rely on linear solvers for part of their work. When problems get large, linear solvers that store the entire problem in memory (even on my 32 gb RAM machine) can choke and swap out memory, slowing them down. And some linear solvers simply work better than others. Ipopt allows you to decide which linear solvers to use, and in particular can use the Harwell Subroutine Library solvers, including one known as MA77, which does most of its work out of memory and is well-suited to huge problems. So, Ipopt coupled with MA77 seems to be able to solve absolutely huge problems that have a structure similar to ours. I have had some NLP solver and linear solver combinations run for days and weeks, but I think this combination will generally solve our problems, robustly, in seconds or minutes. That said, maybe what we are doing now won't get that large. Other NLP solvers may be perfectly appropriate. It's not really up to me to decide what we use. I am fine with experimentation. I have experimented with probably a dozen nonlinear solvers and a half-dozen linear solvers, and settled on this combination, but it is very hard to get it all working the very first time. After that, it is easy forever after. So bottom line, whatever you find works for the problem, in acceptable time, should be fine. As for nleqslv, I don't think that is an NLP solver - it solves systems of nonlinear equations, but I don't think is an optimizer. On Fri, 22 Dec 2017, Don Boyd wrote: Are you wedded to IPOPT? What about other options in R like nleqslv? I believe SNOPT from Stanford also allows sparse matrices and might be suitable. I looked at it when I was frustrated by all the different libraries IPOPT required, but got IPOPT working before actually tryiing SNOPT out. Can I ask for a description of the object of this excercise? I thought it was to impute state of residence to PUF returns, but the example given is to modify weights given a state of residence. So the purpose is to take state of residence from the 2008 PUF and modify weights till the totals add up to published aggregates? What about the >200K returns? dan I just elaborated on the objective, at https://github.com/open-source-economics/PUF-State-Distribution/blob/master/README.md - please see if that makes sense. I don't think SNOPT is available through R. It could be accessed by other means. On Fri, 22 Dec 2017, Don Boyd wrote: I don't think SNOPT is available through R. It could be accessed by other means. Yes, if R is required. I wonder about the desirability of an out-of-memory linear solver. The total memory taken by this problem is large for a desktop, but quite reasonable for a modest size server. Even 64GB would allow the solver to stay in memory (with sparse matrix support) and I would have to believe that would be a great time-saver. 
We can certainly use the system here - we have R and IOPT, although I have never combined them. dan — You are receiving this because you commented. Reply to this email directly, view it on GitHub, or mute the thread.[AHvQVWPzhK-niAh55SYaPuDpeNVyXk_oks5tDBKGgaJpZM4RLQP7.gif] Thanks for offering use of the system you have. It is definitely good to know that is a possibility. I'm actually not sure yet how large or difficult this problem will become and I am very optimistic that it will be solvable on commodity equipment. A few comments: The memory requirements of the approach you and I discussed early/mid 2017 were much greater than we have been discussing here. There we were talking about assigning 70k uncoded records to 50 states in a way that exhausted each record exactly. That required us to solve for 3.5 million proportions (50 proportions x 70k records) and to impose a constraint on every single record that the proportions, across states, add to 1 -- every record is used completely, whether assigned all to one state, 60% to one state and 40% to another, or 2% to each of 50 states, or something else. Thus, we had an NLP with 3.5 million variables, at least 70k constraints (plus all of our substantive targets), and it was relatively large and difficult. Still, it was solvable on commodity equipment in a matter of hours. The problem we have been discussing here, so far, is much easier. We have been talking about distributing some portion of each record on a (let's say) 160k record file, to each of 50 (approx) states, without worrying about using each record's weight exactly -- we can re-use records or not, as needed. On its face, this might seem big -- we need 8 million variables (the proportion of each of 160k records to be distributed to each of 50 states) and, maybe, 5k constraints (25 substantive targets for each of, let's say, 4 income ranges, for each of 50 states). But each state is separable from the others (since we are not constraining a record across states) and each income range is separable, so that we can solve 200 separate problems: one state, one income range (50 x 4). The average problem size might entail about 40k records (the approx 40k records for a specific income range -- 160k / 4; of course some will be larger, some smaller) and the 25 constraints for that income range and state. Unless we find reasons to make the problem more complex (e.g., constraints across states or income ranges), this should be pretty easy to solve. (CAUTION: In my experience, problems almost always become more difficult than you think they are at the start.) IF the problem becomes large, that does not necessarily mean an in-memory solver on a large machine will be better than an out-of-memory solver on a small machine. It is an empirical question. I have done a LOT of tests and timings of this. For example I have solved relatively large problems requiring, say, 24gb of RAM, all in (considering all requirements) on a machine that had 32gb of RAM, both ways: in memory, and out of memory. To do this in memory, the operating system must allocate 24gb of RAM, which is a VERY expensive and time-consuming operation. As a result, the out-of-memory solver, which does not need to do this, can actually be much faster than an in-memory solver even on a problem that fits entirely in memory. But the Ipopt/Harwell HSL combination allows a range of solvers that all are excellent, some of which are in memory (e.g., MA57) and one of which is out-of-memory (MA77). 
But again, it is an empirical question, and if the problem becomes much larger than we have been discussing so far, and larger than what is practical on PCs, it is really good to know that we could do it on much larger and faster equipment. On Sat, 23 Dec 2017, Don Boyd wrote: Thanks for offering use of the system you have. It is definitely good to know that is a possibility. I'm actually not sure yet how large or difficult this problem will become and I am very optimistic that it will be solvable on commodity equipment. A few comments: The memory requirements of the approach you and I discussed early/mid 2017 were much greater than we have been discussing here. There we were talking about assigning 70k uncoded records to 50 states in a way that exhausted each record exactly. That required us to solve for 3.5 million proportions (50 proportions x 70k records) and to impose a constraint on every single record that the proportions, across states, add to 1 -- every record is used completely, whether assigned all to one state, 60% to one state and 40% to another, or 2% to each of 50 states, or something else. Thus, we had an NLP with 3.5 million variables, at least 70k constraints (plus all of our substantive targets), and it was relatively large and difficult. Still, it was solvable on commodity equipment in a matter of hours. Yes, that is right. The problem we have been discussing here, so far, is much easier. We have been talking about distributing some portion of each record on a (let's say) 160k record file, to each of 50 (approx) states, without worrying about using each record's weight exactly -- we can re-use records or not, as needed. On its face, this might seem big -- we need 8 million variables (the proportion of each of 160k records to be distributed to each of 50 states) and, maybe, 5k constraints (25 substantive targets for each of, let's say, 4 income ranges, for each of 50 states). But each state is separable from the others (since we are not constraining a record across states) and each income range is separable, so that we I don't know what you mean by "each state is separable". If the aggregates by state are targets, doesn't that prevent each state from separating from the others in the same income range? can solve 200 separate problems: one state, one income range (50 x 4). The average problem size might entail about 40k records (the approx 40k records for a specific income range -- 160k / 4; of course some I see, we give up the requirement that the probabilities sum to 1 for each state. Same number of variables but far fewer constraints. That makes some sense as a simplification. Does it change the result? Do you get negative weights? I worry that the randomization of weight sums per record might affect the accuracy of the federal calculation. Do we use the SOI weight for the federal aggregates and our random weights for the state level aggregates? I think that works but do the states add up to the federal? Don't you get some negative weights? Doesn't that bother you? will be larger, some smaller) and the 25 constraints for that income range and state. Unless we find reasons to make the problem more complex (e.g., constraints across states or income ranges), this should be pretty easy to solve. (CAUTION: In my experience, problems almost always become more difficult than you think they are at the start.) I am not doing to well at the moment. IPOPT claims no solution in the feasible region. Maybe I made a mistake somewhere, so I will work on it. 
Oddly, IPOPT doesn't seem to offer a way to look at the values for X after each step, or any way to look at G when execution terminates. So I have to jury-rig something. dan IF the problem becomes large, that does not necessarily mean an in-memory solver on a large machine will be better than an out-of-memory solver on a small machine. It is an empirical question. I have done a LOT of tests and timings of this. For example I have solved relatively large problems requiring, say, 24gb of RAM, all in (considering all requirements) on a machine that had 32gb of RAM, both ways: in memory, and out of memory. To do this in memory, the operating system must allocate 24gb of RAM, which is a VERY expensive and time-consuming operation. As a result, the out-of-memory solver, which does not need to do this, can actually be much faster than an in-memory solver even on a problem that fits entirely in memory. But the Ipopt/Harwell HSL combination allows a range of solvers that all are excellent, some of which are in memory (e.g., MA57) and one of which is out-of-memory (MA77). But again, it is an empirical question, and if the problem becomes much larger than we have been discussing so far, and larger than what is practical on PCs, it is really good to know that we could do it on much larger and faster equipment. — You are receiving this because you commented. Reply to this email directly, view it on GitHub, or mute the thread.[AHvQVYkkWUQzvqiyiYMroRd4c07HfRD0ks5tDOmRgaJpZM4RLQP7.gif] I see, we give up the requirement that the probabilities sum to 1 for each state. Same number of variables but far fewer constraints. That makes some sense as a simplification. Does it change the result? Do you get negative weights? I worry that the randomization of weight sums per record might affect the accuracy of the federal calculation. Do we use the SOI weight for the federal aggregates and our random weights for the state level aggregates? I think that works but do the states add up to the federal? Don't you get some negative weights? Doesn't that bother you? If we do every state independently, we are letting the PUF data speak - what fraction of each record's weight should we use in order to hit the targets for the state? We would constrain that fraction so that it would be >= 0. Thus, we would never have negative weights, although zero weights might occur and might be desirable - some records drop out in some states. For example, suppose the file has a record with $50k agi and a $15k SALT deduction. If we knew where the records really came from (we don't), it might have been for some poor sucker in high-tax NY who is in the PUF sample. When we get around to targeting low-tax Mississippi, the method we use might use 0% of this record's weight, but it would never use a negative % of the weight - we wouldn't allow that. The targets for each of the 50 states, by design, would be what we know (or believe) to be true for the 50 states. If we do a good job of hitting ALL of those targets, then by definition the sum of the values on our constructed file will equal the national totals, for all of the targeted variables, because the values for the states in https://www.irs.gov/statistics/soi-tax-stats-historic-table-2 add to the national total in that same source. In other words, sums of targeted variables would equal known national totals. 
But that doesn't mean sums of calculated variables, such as federal tax under a different tax law, would equal the sum of same computed from the original file, although I suspect it would be close. I think this would be an important check on our work and we would want, always, to investigate how close the totals are. IPOPT claims no solution in the feasible region. Maybe I made a mistake somewhere, so I will work on it. Oddly, IPOPT doesn't seem to offer a way to look at the values for X after each step, or any way to look at G when execution terminates. So I have to jury-rig something. I don't know if you've asked the Ipopt list; there may be an easy way to retrieve the X values at each step when you are calling Ipopt from a C++ or Fortran program, thus returning them to your program. An alternative might be to obtain them as output. Ipopt has an option, file_print_level, that controls the amount of output written to a file. It can range from 0 to 12, with a default of 5. I don't think the documentation says what you get at the different levels; the Ipopt mailing list might, or you might just experiment. I have to believe writing the X's would be included in one of the higher levels. I, too, have wanted to retrieve the X's. When calling from R, the Ipoptr interface to Ipopt does not let you do that. What I have done - and you could easily do, too, is put a statement in your objective function (often called eval_f) that writes the then-current X values to an external file so you can watch how they change from step to step. That only took about 10 minutes of programming time for me. I really hope we get to the point where the full recent (C++) version of Ipopt is working in multiple environments. As we've discussed, I can provide my Windows version to people with proper licenses, but it would be great to have it on Linux and Macs. IPOPT claims no solution in the feasible region. Maybe I made a mistake somewhere, so I will work on it. If you have not used the derivative checker, I strongly recommend using it, if you have any doubt at all about whether your derivative calculation is correct. I have found mistakes that way - I sent Ipopt looking for solutions to a feasible problem in directions where they could not be found. On Sun, 24 Dec 2017, Don Boyd wrote: I see, we give up the requirement that the probabilities sum to 1 for each state. Same number of variables but far fewer constraints. That makes some sense as a simplification. Does it change the result? Do you get negative weights? I worry that the randomization of weight sums per record might affect the accuracy of the federal calculation. Do we use the SOI weight for the federal aggregates and our random weights for the state level aggregates? I think that works but do the states add up to the federal? Don't you get some negative weights? Doesn't that bother you? If we do every state independently, we are letting the PUF data speak what fraction of each record's weight should we use in order to hit the targets for the state? We would constrain that fraction so that it would be >= 0. Thus, we would never have negative weights, although zero weights might occur and might be desirable - some records drop out in some states. For example, suppose the file has a record with $50k agi and a $15k SALT deduction. If we knew where the records really came from (we don't), it might have been for some poor sucker in high-tax NY who is in the PUF sample. 
When we get around to targeting low-tax Mississippi, the method we use might use 0% of this record's weight, but it would never use a negative % of the weight - we wouldn't allow that. The targets for each of the 50 states, by design, would be what we know (or believe) to be true for the 50 states. If we do a good job of hitting ALL of those targets, then by definition the sum of the values on our constructed file will equal the national totals, for all of the targeted variables, because the values for the states in https://www.irs.gov/statistics/soi-tax-stats-historic-table-2 add to the national total in that same source. In other words, sums of targeted variables would equal known national totals. But that doesn't mean sums of calculated variables, such as federal tax under a different tax law, would equal the sum of same computed from the original file, although I suspect it would be close. I think this would be an important check on our work and we would want, always, to investigate how close the totals are. If we have 150,000 records and 40 constraints, there are obviously an infinite number of solutions that meat the constraints. Your choice to "use the distribution of weights that is most similar to the national distribution that also meets the constraints" seems really plausible and worthwhile, but I still worry about negative weights. I have to guess that they are very rare, but even if that is true, I don't think it helps matters to constrain to zero them ex post. Won't that bias the result away from the know aggregate? Do you have any statistics on how often they occur? If they are rare, shouldn't we leave them in? If the are common, we should use another approach. I am trying to puzzle out if the MaxEnt approach tends to make the states look like the national distribution, and I think it does. But as you point out there is considerable extra arithmetic with all those additional constraints. dan — You are receiving this because you commented. Reply to this email directly, view it on GitHub, or mute the thread.[AHvQVWRf8lKN07ODCrU51E379oAyK83mks5tDjQugaJpZM4RLQP7.gif] Forcing the weights to be non-negative doesn't bother me intellectually. I am not sure what would happen if we didn't force it. I think it depends on (a) how we establish the initial weights (for example, whether they are simply scaled down national weights, or something else), and (b) the objective function, and how much it penalizes changes from those initial weights. If we use a maximum entropy objective function, you could not get negative weights because you would have to have a negative x[i] and you can't take a log of a negative number. For some of the other objective functions under consideration, a negative x[i] is possible to have in the function, but the penalty would be huge. So I don't know, 100%, but I doubt it would be an issue even if we constrained the x[i]'s and therefore the weights to non-negative numbers. In all of the runs I have done, I have always constrained the x[i]'s to be nonnegative so I don't have any statistics on what happens if you relax that requirement. With maximum entropy, the problem would simply blow up if we tried to have negative values because of the can't-take-a-log-of-a-negative-number issue. That objective function tends to spread the x[i]'s out in (0, 1), with clustering in the 0.2 to 0.3 range I think. I don't think the other objective functions would want x[i] to be negative, generally. 
As for bias caused by forcing the weights to be non-negative, I am not sure I understand the concern. The results will hit the known aggregates. If there are weights that the objective function wants to be negative, they will instead be zero - the associated, presumably weird, records will drop out. My reaction is that we should start down the road and begin examining actual results. During the week I'll propose some next steps. On Sun, 24 Dec 2017, Don Boyd wrote: As for bias caused by forcing the weights to be non-negative, I am not sure I understand the concern. The results will hit the known aggregates. If there are weights that the objective function wants to be negative, they will instead be zero - the associated, presumably weird, records will drop out. I am confused. Are you constraining the weights to be >0 during the maximization, or are you accepting negative weights during maximization, but setting them to zero when simulations are done. If the weights are constrained to be >0, that adds a constraint for each taxpayer, which I thought you wanted to avoid. You would have the same number of constraints as MaxEnt, although they would be inequalities rather than equalities. Does that make a difference? I have no idea if it is an important difference - I have never done optimizations with inequality constraints. I agree that there is no bias problem if the constraints are imposed during maximization. dan During optimization. They are simply bounds on the variables. Not a computational difficulty at all. On Dec 25, 2017 7:20 AM, "Daniel Feenberg" notifications@github.com wrote: On Sun, 24 Dec 2017, Don Boyd wrote: As for bias caused by forcing the weights to be non-negative, I am not sure I understand the concern. The results will hit the known aggregates. If there are weights that the objective function wants to be negative, they will instead be zero - the associated, presumably weird, records will drop out. I am confused. Are you constraining the weights to be >0 during the maximization, or are you accepting negative weights during maximization, but setting them to zero when simulations are done. If the weights are constrained to be >0, that adds a constraint for each taxpayer, which I thought you wanted to avoid. You would have the same number of constraints as MaxEnt, although they would be inequalities rather than equalities. Does that make a difference? I have no idea if it is an important difference - I have never done optimizations with inequality constraints. I agree that there is no bias problem if the constraints are imposed during maximization. dan — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub https://github.com/open-source-economics/PUF-State-Distribution/issues/1#issuecomment-353865771, or mute the thread https://github.com/notifications/unsubscribe-auth/AGPEmHfO3Zb9srbJzUmV8mnScrJrSHITks5tD5L-gaJpZM4RLQP7 .
gharchive/issue
2017-12-22T17:19:57
2025-04-01T06:39:51.701731
{ "authors": [ "MattHJensen", "donboyd5", "feenberg" ], "repo": "open-source-economics/PUF-State-Distribution", "url": "https://github.com/open-source-economics/PUF-State-Distribution/issues/1", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
2169714231
OnOpampConnectionSettings/Accepted callbacks need re-thinking The comments say that the Client implementation (the caller) will attempt to re-connect using new settings. The example implementation works completely differently, the callback tries the re-connection. Which is the right way? To summarize this is what the example is supposed to do: OnOpampConnectionSettings callback called when new settings are offered by the Server. The callback implementation first pre-verifies the settings (e.g. check certificates, etc). If checks in (2) pass the callback implementation logs that it is starting to reconnect, creates a reconnection goroutine and returns from the callback. Reconnection goroutine stops the Client, creates a new Client with new connection settings and wait for successful OnConnect() callback. If OnConnect() is not called within a predefined period of time reconnection goroutine assumes the new connection settings are bad, reverts to the old connection settings and re-creates the Client again. We get rid of OnOpampConnectionSettingsAccepted(), it is not needed. The example implementation of steps 4 and 5 is not complete today (e.g not waiting for OnConnect), but can be modified to match the above steps. See also comment thread here: https://github.com/open-telemetry/opentelemetry-collector-contrib/pull/30237#discussion_r1509173487 cc @andykellr @srikanthccv @evan-bradley I listened to the meeting recording, and this sounds good to me. However, making it non-blocking leaves us with one question: what would be the mechanism for the client to report back to the server whether it accepted or rejected the connection settings offer? what would be the mechanism for the client to report back to the server whether it accepted or rejected the connection settings offer? It can be done using a health message. We don't have any other way to report errors from the agent currently. Steps 1-3 and 6 done in https://github.com/open-telemetry/opamp-go/pull/266 Steps 4-5 remaining.
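Since the steps above are easy to get subtly wrong, here is a schematic of the intended control flow in Python pseudocode. Every name in it is hypothetical — it is not the opamp-go API, just an illustration of steps 2–5 (verify, hand off to a background worker, wait for OnConnect, revert on timeout):

```python
import threading

class FakeClient:
    """Stand-in for the real client: pretends OnConnect fires shortly after start."""
    def __init__(self, settings, on_connect):
        self.settings = settings
        threading.Timer(0.1, on_connect).start()

    def stop(self):
        pass

def preverify(settings):
    # Step 2: e.g. certificate checks on the offered settings.
    return settings.get("certificate_ok", True)

def on_connection_settings(client, new_settings, old_settings):
    if not preverify(new_settings):
        return                      # reject the offer, keep the current connection
    # Step 3: log, start a background worker, and return from the callback quickly.
    threading.Thread(target=reconnect, args=(client, new_settings, old_settings)).start()

def reconnect(old_client, new_settings, old_settings, timeout_s=5.0):
    connected = threading.Event()
    old_client.stop()                                           # step 4
    client = FakeClient(new_settings, on_connect=connected.set)
    if not connected.wait(timeout_s):                           # step 5: no OnConnect in time
        client.stop()                                           # assume the new settings are bad
        client = FakeClient(old_settings, on_connect=lambda: None)
    return client
```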
gharchive/issue
2024-03-05T16:55:12
2025-04-01T06:39:51.722452
{ "authors": [ "srikanthccv", "tigrannajaryan" ], "repo": "open-telemetry/opamp-go", "url": "https://github.com/open-telemetry/opamp-go/issues/261", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
541882839
Merge changes from dd-trace-java 0.40.0 https://github.com/DataDog/dd-trace-java/releases/tag/v0.40.0 @trask sorry for the delay. This is ready to review/merge now.
gharchive/pull-request
2019-12-23T19:50:45
2025-04-01T06:39:51.723890
{ "authors": [ "tylerbenson" ], "repo": "open-telemetry/opentelemetry-auto-instr-java", "url": "https://github.com/open-telemetry/opentelemetry-auto-instr-java/pull/46", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2395151465
Update otel-webserver-module for centos 9 stream, since centos7 is EOL Centos7 has reached EOL, otel-webserver-module build and compatibility needs to be provided for the latest centos9 stream. cc @DebajitDas @lalitb @aryanishan1001 is already working on this Nginx SDK 1.26.0 with Redhat 8.9 version is not supported for this OS
gharchive/issue
2024-07-08T09:40:13
2025-04-01T06:39:51.768819
{ "authors": [ "DebajitDas", "aryanishan1001", "lalitb", "nrBaskar" ], "repo": "open-telemetry/opentelemetry-cpp-contrib", "url": "https://github.com/open-telemetry/opentelemetry-cpp-contrib/issues/463", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
609713681
fix: observers should not expose bind/unbind method Which problem is this PR solving? Fixes #997 Short description of the changes @opentelemetry/api: Add new interface UnboundMetric, with bind/unbind method, which extends Metric. @opentelemetry/metrics: Removed problematic setCallback method from BoundObserver. BoundObsever now acts as merely an instrument hold values, should not be exposed as public API. Codecov Report Merging #1001 into master will decrease coverage by 0.56%. The diff coverage is 87.50%. @@ Coverage Diff @@ ## master #1001 +/- ## ========================================== - Coverage 95.00% 94.44% -0.57% ========================================== Files 212 166 -46 Lines 8805 6745 -2060 Branches 796 662 -134 ========================================== - Hits 8365 6370 -1995 + Misses 440 375 -65 Impacted Files Coverage Δ packages/opentelemetry-api/src/metrics/Metric.ts 100.00% <ø> (ø) ...ackages/opentelemetry-api/src/metrics/NoopMeter.ts 72.72% <87.50%> (ø) ...ry-tracing/test/export/SimpleSpanProcessor.test.ts packages/opentelemetry-tracing/src/version.ts ...elemetry-tracing/src/export/SimpleSpanProcessor.ts ...lemetry-tracing/src/export/InMemorySpanExporter.ts ...es/opentelemetry-tracing/src/MultiSpanProcessor.ts ...s/opentelemetry-tracing/src/BasicTracerProvider.ts ...y-tracing/test/export/InMemorySpanExporter.test.ts ...ages/opentelemetry-metrics/src/MetricObservable.ts ... and 34 more
gharchive/pull-request
2020-04-30T08:34:24
2025-04-01T06:39:51.801593
{ "authors": [ "codecov-io", "legendecas" ], "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/pull/1001", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1197494697
Merge back into a single lerna monorepo This is attempting to solve several pain points we've had which stem from the fact that our stable and experimental packages are not linked by lerna and aren't released together. Keeping this as a draft for now AFAICT, Lerna can only bump all changed packages to the same version -- this is not what we expect of stable packages and experimental packages release works. Is yarn workspace an available choice? I found that react is using yarn workspace and publishes packages with different versions (scheduler v0.21.0, and react v18.0.0) at one time. AFAICT, Lerna can only bump all changed packages to the same version -- this is not what we expect of stable packages and experimental packages release works. My current plan is to use lerna independent mode and to create automations to ensure the stable packages are versioned together and the experimental packages are versioned together. Is yarn workspace an available choice? I found that react is using yarn workspace and publishes packages with different versions (scheduler v0.21.0, and react v18.0.0) at one time. We considered yarn workspaces a long time ago and I can't remember exactly why it was rejected. I think it had to do with supporting old node versions but I could be wrong. NPM now also has workspace support but only for npm 7+. My current plan is to use lerna independent mode and to create automations to ensure the stable packages are versioned together and the experimental packages are versioned together. Oh, now I know lerna supports independent mode :O. This sounds great and may fit our use case. @open-telemetry/javascript-approvers sorry for the delay but this is ready to be reviewed now. Its a lot of files changed but should be relatively easy to review. I'm confused by the @opentelemetry/api version change. What benefit can we gain on that change? The specification requires that an SDK support old API versions. This allows users to freely update their SDK without fear that dependencies which use an older API will break. looks fine for lerna but i'm not sure to understand why we changed back to SpanAttributes instead of Attributes ? Is this to support TS user with latest SDK but API <1.1 ? Yes
gharchive/pull-request
2022-04-08T15:52:24
2025-04-01T06:39:51.808277
{ "authors": [ "dyladan", "legendecas" ], "repo": "open-telemetry/opentelemetry-js", "url": "https://github.com/open-telemetry/opentelemetry-js/pull/2892", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
2051635721
feat: Add support for mutual TLS and improve SSL error logging. Add support for specifying client certificate and key for mutual TLS when exporting. Also log more information about why an SSL error occured as this can be very difficult to diagnose. This is logged in the same way as other probably fatal issues are, e.g. HTTP 404s. @megabus-tobin Thank you for your contribution! Please consider separating the error handling changes in a separate PR. We will be able to review and release that change independent of verifying the correctness of the changes required for mTLS support. Happy Holidays! @arielvalentin I've split the small logging change off into #1557 Could you incorporate the new environment variables into the exporter tests? (here's the file for the OTLP library) Anywhere the certificate_file is mentioned may be a good place to add content for the client certificate and key. The Net:HTTP class makes this a bit harder than it should be. Unlike ca_cert, the cert and key members aren't strings, they're OpenSSL::X509::Certificate and OpenSSL::PKey::RSA objects. I'm not sure what the best approach would be. Should I add a test certificate and key to the repository or refactor OpenTelemetry::Exporter::OTLP::Exporter.initialize to optionally take the object directly and fallback to trying to load them from the environment? The latter would be very similar to what OpenTelemetry::Exporter::OTLP::Exporter.prepare_endpoint already does. It would also have the small advantage of allowing for more exotic ways of obtaining the certificate and key but the utility of that is questionable since you need a file for the CA cert anyway. The Net:HTTP class makes this a bit harder than it should be. Unlike ca_cert, the cert and key members aren't strings, they're OpenSSL::X509::Certificate and OpenSSL::PKey::RSA objects. I'm not sure what the best approach would be. Should I add a test certificate and key to the repository or refactor OpenTelemetry::Exporter::OTLP::Exporter.initialize to optionally take the object directly and fallback to trying to load them from the environment? The latter would be very similar to what OpenTelemetry::Exporter::OTLP::Exporter.prepare_endpoint already does. It would also have the small advantage of allowing for more exotic ways of obtaining the certificate and key but the utility of that is questionable since you need a file for the CA cert anyway. Sorry for my delated response. Great questions! I think adding the test certificate and key seems the least intrusive to what you've already built if that's not too much effort. If other approvers/maintainers disagree, please chime in! I like your idea about following the pattern in OpenTelemetry::Exporter::OTLP::Exporter.prepare_endpoint, but like you mentioned, I'm not sure about the utility of that approach. @kaylareopelle I have updated the tests to check that the client certificate and key are handled correctly. Some small notes: To test that explicit TLS settings are preferred to environment variables I needed a second cert/key that would be loaded in the unlikely event the test was going to fail. Initially I generated the cert/key on the fly but since a path is needed this would have made it necessary to ensure a writeable filesystem when running tests. Instead I have put the test certs/keys into the same directory as the tests and referenced that via __FILE__. The certs/keys I generated are obviously meaningless. E.g. CN = a.example.tld, O = Internet Widgits Pty Ltd. 
By the time this gets reviewed they will probably also be expired, which the tests don't care about. Rather than reading the certs/keys at the point of comparison, I've done it once near the top of the file. Apart from avoiding a little repetition this makes it much easier to spot if the files got removed, etc. rather than an individual test appearing to fail. Would be nice to update the gRPC exporter as well. creds = if mtls_config_provided GRPC::Core::ChannelCredentials.new(ca, key, cert) else :this_channel_is_insecure end @client = Opentelemetry::Proto::Collector::Trace::V1::TraceService::Stub.new( "#{uri.host}:#{uri.port}", creds ) https://github.com/open-telemetry/opentelemetry-ruby/blob/main/exporter/otlp-grpc/lib/opentelemetry/exporter/otlp/grpc/trace_exporter.rb#L34 @kaylareopelle I have updated the tests to check that the client certificate and key are handled correctly. @megabus-tobin, thank you for updating the tests! I think this is a solid approach. Are you planning to make the GRPC changes mentioned here as part of this PR? @kaylareopelle Are you planning to make the GRPC changes mentioned here as part of this PR? I have made these changes now. I've only performed minimal testing on this as it's not something I've needed to use previously and am not familiar with it. Unlike the HTTP exporter, there are almost no existing tests included in the library. We chatted about this PR in the SIG last week. A question was raised about whether other languages also have this feature. The environment variables are referenced in the protocol exporter specification. They were added in this PR: https://github.com/open-telemetry/opentelemetry-specification/pull/2370 The OTEL_EXPORTER_OTLP_CLIENT_KEY has been added to Go, PHP, and JS (search) cc @mwear The OTEL_EXPORTER_OTLP_CLIENT_KEY has been added to Go, PHP, and JS (search) It's available in the Java SDK too but is configured differently (because Java). See https://opentelemetry.io/docs/languages/java/configuration/ and in particular the otel.exporter.otlp.client.key system property.
gharchive/pull-request
2023-12-21T03:42:02
2025-04-01T06:39:51.832783
{ "authors": [ "arielvalentin", "kaylareopelle", "megabus-tobin", "szechyjs" ], "repo": "open-telemetry/opentelemetry-ruby", "url": "https://github.com/open-telemetry/opentelemetry-ruby/pull/1556", "license": "apache-2.0", "license_type": "permissive", "license_source": "bigquery" }
1673520710
Merge the sdk-logs package into the active sandbox Package name: @opentelemetry/sdk-logs Source Location: auto-merge/js/experimental/packages/sdk-logs Destination: pkgs/sdk-logs/ Completed with the linked PRs
gharchive/issue
2023-04-18T17:01:37
2025-04-01T06:39:51.834559
{ "authors": [ "MSNev" ], "repo": "open-telemetry/opentelemetry-sandbox-web-js", "url": "https://github.com/open-telemetry/opentelemetry-sandbox-web-js/issues/95", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1290205536
Update trace-receiver.md Need to assign -p 14250:14250 otherwise connect Jaeger will be failed It is necessary to rebuild when you modify components.go to supporting custom Receiver @rquedas can you look at the suggested changes. @noahdyp please make sure that you sign the CLA successfully, if you're a corporate contributor work with your internal OSS experts to get it signed, otherwise you can go with "individual contributor" Are we still working on this ?. @rahuldimri if you're interested in taking up this work, please feel free to submit another PR (that links to this one and gives credit) -- we can close this one due to inactivity in favor of yours if you do so. @rahuldimri "It is necessary to rebuild when you modify components.go to supporting custom Receiver" in the context of the tutorial, you are in development mode, so building your code to see the change is implicit, not sure if we need to call it out. @austinlparker and @svrnm, what do you think Sure thanks let me have a look into this. Closing in favor of https://github.com/open-telemetry/opentelemetry.io/pull/1598
gharchive/pull-request
2022-06-30T14:17:43
2025-04-01T06:39:51.844564
{ "authors": [ "cartermp", "noahdyp", "rahuldimri", "rquedas", "svrnm" ], "repo": "open-telemetry/opentelemetry.io", "url": "https://github.com/open-telemetry/opentelemetry.io/pull/1501", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
1405079354
Auto link in stability table Small improvement for the tables with the stability matrix: if the value is "Stable" or "Experimental" it will automatically link the definition in the spec like some of the pages are doing it already. What will the change look like @svrnm? TIA Details Sure, compare the preview with the current docs on some of the languages where that table is used: https://deploy-preview-1849--opentelemetry.netlify.app/docs/instrumentation/cpp/ vs https://opentelemetry.io/docs/instrumentation/cpp/
gharchive/pull-request
2022-10-11T18:44:17
2025-04-01T06:39:51.847006
{ "authors": [ "melbgirl", "svrnm" ], "repo": "open-telemetry/opentelemetry.io", "url": "https://github.com/open-telemetry/opentelemetry.io/pull/1849", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
2294030510
[zh] Show Chinese homepage in production Folks, I think that we're ready to launch the homepage 🚀 . It maybe look a bit strange given that we don't have any docs pages, but those should come soon :) ... and once they do, I'll implement a fallback to en pages (#4478) Fixes #2402 Preview: https://deploy-preview-4477--opentelemetry.netlify.app/zh/ /cc @open-telemetry/docs-cn-approvers @jiekun Awesome work everyone! Let's keep this rolling https://opentelemetry.io/zh/ 🎉
gharchive/pull-request
2024-05-13T23:40:51
2025-04-01T06:39:51.850013
{ "authors": [ "chalin", "svrnm" ], "repo": "open-telemetry/opentelemetry.io", "url": "https://github.com/open-telemetry/opentelemetry.io/pull/4477", "license": "CC-BY-4.0", "license_type": "permissive", "license_source": "github-api" }
548437171
fix(es-dev-server): don't look for config in root dir As discussed in https://github.com/open-wc/open-wc/commit/4085898bf9fd23ab51cd86d5fe582191c30be64e#r36748834, we introduced a breaking change by looking in the root dir for a config. In hindsight, this change was not needed for achieving what we need so we should revert it. @tmsns
gharchive/pull-request
2020-01-11T14:36:55
2025-04-01T06:39:51.859468
{ "authors": [ "LarsDenBakker" ], "repo": "open-wc/open-wc", "url": "https://github.com/open-wc/open-wc/pull/1214", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1103376795
Request to join pattern house GitHub Username: Aztech07 I want to be a part of the organization in order to get knowledge about Open Source. I would like to contribute to your organization and learn new skills and have a good experience while learning and contributing to the project. I would like to contribute patterns in C++, because I'm quite comfortable with that. Sent an invite, let me know if you are a participant of the ongoing SWOC open source program or a normal contributor
gharchive/issue
2022-01-14T09:53:47
2025-04-01T06:39:51.885037
{ "authors": [ "Aztech07", "aryashah2k" ], "repo": "openAOD/Join_PatternHouse", "url": "https://github.com/openAOD/Join_PatternHouse/issues/44", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
1212291837
Roadmap to Specification Collaboration [x] Read Introduction to Git and GitHub [x] Read Workflow for Contributing [x] Read GitFlow Workflow #002 #003 #004 #005 [x] Make yourself familiar with the Basic Syntax of Markdown [x] Read Formulating Issues [x] Read Formulating Commit and Merge Request Messages [x] Read Management of Conflicts [x] Read Avoiding Conflicts #006 High Level Design ServiceList [x] Read Structure of Services [x] Read Structure of ServiceNames [x] Read Structure of Release Numbers [x] Read Structure of UUIDs in general and for details about Servers and Clients [x] Read Concept of ServiceList [ ] #017 ProfileList [x] Read Concept of Profiles [x] Structure of UUIDs in general and for details about Profiles [x] Read Concepts of ProfileList and ProfileInstanceList [x] #018 [ ] #019 Note on ProfileList: no additional profiles needed for now -> no modification. ProfileListInstances could be extended with additional action profiles.
gharchive/issue
2022-04-22T12:18:35
2025-04-01T06:39:51.894803
{ "authors": [ "kmohr-soprasteria", "openBackhaul" ], "repo": "openBackhaul/MicroWaveDeviceInventory", "url": "https://github.com/openBackhaul/MicroWaveDeviceInventory/issues/1", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
1318529931
ptpAnnounceInterval name and default value Taking as a reference the latest shared gendoc file (v.220404.1830_AlWe_TH), the description indicates that the parameter represents the exponent of a 2^N number. IEEE 1588-2008 and 2019 specify announceInterval = 2^(logAnnounceInterval), with logAnnounceInterval being an INT8 value (e.g. 1588-2019 sec. 8.2.15.4.1). The ITU-T G.8275.1 profile also includes the logAnnounceInterval parameter as part of its specification (table A.5). So, if the intended use of the parameter is as the exponent that sets the 2^N announceInterval, it probably makes sense to rename this parameter to ptpLogAnnounceInterval to keep consistency with the specifications. The current type defined for the parameter is already INT8. As for the default value, although the default profile in the IEEE specs (e.g. section I.3.2 of 1588-2019) specifies initialization to 1 and a range of 0..4, ITU-T G.8275.1 (the profile of interest for the actual implementation) sets -3 as the value for ptpLogAnnounceInterval (initialization and range for BC/TC/OC). Consider whether to keep the current -1 value as default or to modify it, if more differentiation from the main working value is desired. I agree to change the name of this attribute to ptpLogAnnounceInterval with (-3..4) as the range of values. Proposal to the 5G-xhaul call on 7th of September 2022: Name of ptpAnnounceInterval shall be changed to ptpLogAnnounceInterval. Datatype shall be changed from INT16 to INT8. Default value shall be changed from -1 to -99. If the telecom profile to be used is known, it is safer to specify the default values defined in the telecom profile. If a device is using the telecom profile from ITU-T, then the parameters in LTP::*_PAC are set accordingly. However, when a certain parameter is not implemented, then a default value outside the "valid" range (valid: depending on the profile) - in this case -99 - should be reported by the device. Decision made at the 5G-xhaul call on 7th of September 2022: Name of ptpAnnounceInterval shall be changed to ptpLogAnnounceInterval. Datatype shall be changed from INT16 to INT8. Default value shall be changed from -1 to -99. A short worked sketch of the interval arithmetic follows this record.
gharchive/issue
2022-07-26T17:05:42
2025-04-01T06:39:51.900749
{ "authors": [ "DanielaSpreafico", "Nzein", "demx8as6", "eduardoyusta", "openBackhaul" ], "repo": "openBackhaul/synchronization", "url": "https://github.com/openBackhaul/synchronization/issues/35", "license": "Apache-2.0", "license_type": "permissive", "license_source": "github-api" }
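The formula discussed in the entry above is announceInterval = 2^logAnnounceInterval, so the G.8275.1 value of -3 corresponds to 0.125 s, i.e. 8 Announce messages per second. The following Python sketch only illustrates that arithmetic and the agreed attribute semantics (INT8, range -3..4, -99 as the "not implemented" marker); the names and the range constant are taken from this discussion and are not part of any published data model.

```python
# Illustrative sketch only: models the ptpLogAnnounceInterval semantics
# discussed above (IEEE 1588 / ITU-T G.8275.1), not a normative data model.

G8275_1_RANGE = range(-3, 5)      # -3..4, as agreed in the discussion
NOT_IMPLEMENTED = -99             # reported when the parameter is unsupported

def announce_interval_seconds(log_announce_interval: int) -> float | None:
    """Return the Announce message interval in seconds, or None if the
    device reports the parameter as not implemented."""
    if log_announce_interval == NOT_IMPLEMENTED:
        return None
    if log_announce_interval not in G8275_1_RANGE:
        raise ValueError(f"outside the G.8275.1 range: {log_announce_interval}")
    return 2.0 ** log_announce_interval   # announceInterval = 2^logAnnounceInterval

# G.8275.1 default: logAnnounceInterval = -3  ->  0.125 s (8 Announce messages/s)
print(announce_interval_seconds(-3))   # 0.125
```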
2456033344
TR Occupancy / occupancy forecasts Is occupancy data from BLS already available in OJP? --> CAPRE --> OpentransportData --> EFA Today there is no data on OJP 2.0 yet. When is it planned at SKI+? --> depends on the priority of the remaining features. Only for TR, or also for SER? The Occupancy structure in OJP is very extensive. We could support only OCC_UNKNOWN, OCC_EMPTY, OCC_MANY_SEATS, OCC_FEW_SEATS, OCC_STANDING_ONLY, OCC_CRUSHED_STANDING, OCC_FULL, OCC_NOT_ACCEPTING_PASSENGERS. Depending on the expectations, the effort is medium or large. A small sketch of such a reduced occupancy set follows this record.
gharchive/issue
2024-08-08T15:01:52
2025-04-01T06:39:51.998722
{ "authors": [ "Flurin-BLS", "TO-mdv" ], "repo": "openTdataCH/ojp-sdk", "url": "https://github.com/openTdataCH/ojp-sdk/issues/80", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
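As a rough illustration of the reduced occupancy set proposed in the entry above, here is a minimal Python sketch. The member names are exactly the ones listed in the entry; the enum type and the fallback helper are assumptions added for illustration and do not reflect the actual OJP schema or the SDK's API.

```python
# Sketch only: the reduced occupancy levels proposed in the discussion above.
# This is not the OJP schema and not the SDK's real type definitions.
from enum import Enum

class OccupancyLevel(Enum):
    OCC_UNKNOWN = "OCC_UNKNOWN"
    OCC_EMPTY = "OCC_EMPTY"
    OCC_MANY_SEATS = "OCC_MANY_SEATS"
    OCC_FEW_SEATS = "OCC_FEW_SEATS"
    OCC_STANDING_ONLY = "OCC_STANDING_ONLY"
    OCC_CRUSHED_STANDING = "OCC_CRUSHED_STANDING"
    OCC_FULL = "OCC_FULL"
    OCC_NOT_ACCEPTING_PASSENGERS = "OCC_NOT_ACCEPTING_PASSENGERS"

def parse_occupancy(raw: str) -> OccupancyLevel:
    """Map a raw occupancy string onto the reduced set, falling back to
    OCC_UNKNOWN for anything the feed does not (yet) deliver."""
    try:
        return OccupancyLevel(raw)
    except ValueError:
        return OccupancyLevel.OCC_UNKNOWN

print(parse_occupancy("OCC_FEW_SEATS"))      # OccupancyLevel.OCC_FEW_SEATS
print(parse_occupancy("not-in-the-subset"))  # OccupancyLevel.OCC_UNKNOWN
```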
1279810642
Probabilistic structure (bloom-filter-like) or a guaranteed structure? What document to add I could not find any information explaining whether this "slim" structure is a probabilistic one or not. A trie is not probabilistic and thus provides guarantees. But in the readme of "slim" there is a mention of FPR (false positive rate), so I wonder whether this is actually a trie-like structure (i.e. everything is guaranteed) or rather a bloom-like structure (i.e. no guarantees). Additional context N/A It can be both: with the Complete option there won't be false positives: https://github.com/openacid/slim/blob/d27f7e916f6b70fc0b065fb835a7cde5e4a40db9/trie/slimtrie.go#L132-L140 To use it as a filter, set InnerPrefix, LeafPrefix and Complete to false (Complete implies InnerPrefix==true and LeafPrefix==true). Then slim won't store any single-label branch in the trie it builds. With InnerPrefix==true, it does not reduce a single-label branch that leads to an inner node. With LeafPrefix==true, it does not reduce a single-label branch that leads to a leaf node. E.g.: // Complete InnerPrefix: true LeafPrefix: true ^ -a-> 1 -b-> $ `-c-> 2 -x-> 3 -y-> $ `-z-> $ // InnerPrefix: true LeafPrefix: false ^ -a-> $ `-c-> 2 -x-> 3 -y-> $ `-z-> $ // InnerPrefix: false LeafPrefix: true ^ -a-> 1 -b-> $ `-c-> 3 -y-> $ `-z-> $ // InnerPrefix: false LeafPrefix: false ^ -a-> $ `-c-> 3 -y-> $ `-z-> $ Ah, thanks for the explanation! Could you add it to the readme and specify how the benchmark chart was obtained (whether with Complete or without)? Thanks. I think readme.md is now more readable. Will you also explicitly specify if the charts (ns/Get() and bits/key) in the https://github.com/openacid/slim#performance-and-memory-overhead section were obtained with Complete or without? Thanks for reminding me. The benchmark is done in the filter mode. The data structure used in the benchmark is a list of (key,value) records, and a slim is used as an index to locate a key-value record. A small conceptual sketch of the trade-off follows this record.
gharchive/issue
2022-06-22T08:57:55
2025-04-01T06:39:52.013609
{ "authors": [ "drmingdrmer", "dumblob" ], "repo": "openacid/slim", "url": "https://github.com/openacid/slim/issues/164", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }
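To make the distinction above concrete, here is a small, self-contained Python sketch of the two behaviours: an exact ("complete") key set that never reports false positives, versus a prefix-reduced filter view that can. This is only a conceptual illustration of the trade-off described in the answer; it is not slim's Go API, and the reduction rule is heavily simplified compared to what slim actually stores.

```python
# Conceptual sketch (not slim's implementation): contrast an exact key set
# with a reduced "filter" that only keeps enough of each key to tell the
# stored keys apart from each other -- absent keys may then collide.

def minimal_distinguishing_prefixes(keys: list[str]) -> set[str]:
    """For each key, keep the shortest prefix that no other key shares."""
    out = set()
    for k in keys:
        others = [o for o in keys if o != k]
        for i in range(1, len(k) + 1):
            prefix = k[:i]
            if not any(o.startswith(prefix) for o in others):
                out.add(prefix)
                break
        else:
            out.add(k)  # key is a prefix of another key; keep it whole
    return out

keys = ["ab", "cxy", "cxz"]                         # same shape as the example trie
exact = set(keys)                                   # "Complete": no false positives
reduced = minimal_distinguishing_prefixes(keys)     # filter mode: may answer "maybe"

def filter_contains(query: str) -> bool:
    """A query matches if it extends any stored distinguishing prefix."""
    return any(query.startswith(p) for p in reduced)

print("ab" in exact, filter_contains("ab"))         # True True   (real hit)
print("aq" in exact, filter_contains("aq"))         # False True  (false positive)
print("cxy" in exact, filter_contains("cxy"))       # True True
```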
280944908
Add AJAX loading indicator I've noticed on the staging site that there's sometimes a second or so before the summary information appears because it's being requested from the server. Can you add a loading indicator, e.g. a spinner, into the body of the modal? That will avoid a user thinking that there's no information. Might be my connection, but I'm only getting a loading indicator on the staging site now, no content. Also, can you centre-align the image? Loading and looks centred here, cache? Am not seeing the loading indicator now, but am seeing summaries. Closing for now.
gharchive/issue
2017-12-11T09:19:54
2025-04-01T06:39:52.015793
{ "authors": [ "digitalWestie", "ldodds" ], "repo": "openactive/api-dashboard", "url": "https://github.com/openactive/api-dashboard/issues/44", "license": "MIT", "license_type": "permissive", "license_source": "github-api" }