id (stringlengths 4–10) | text (stringlengths 4–2.14M) | source (stringclasses, 2 values) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (stringdate, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
1785677096
|
🛑 vipgifts.net is down
In e19e393, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 888 ms
Resolved: vipgifts.net is back up in 7c01b87.
|
gharchive/issue
| 2023-07-03T08:52:27 |
2025-04-01T04:34:50.254154
|
{
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/596",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2041211561
|
🛑 vipgifts.net is down
In f1226a0, vipgifts.net (https://vipgifts.net) was down:
HTTP code: 567
Response time: 481 ms
Resolved: vipgifts.net is back up in 9448975 after 18 minutes.
|
gharchive/issue
| 2023-12-14T08:40:16 |
2025-04-01T04:34:50.257297
|
{
"authors": [
"lanen"
],
"repo": "lanen/bs-site",
"url": "https://github.com/lanen/bs-site/issues/9014",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
116570846
|
Document topics option
This resolves #27.
CR @lanetix/engineers
✊
|
gharchive/pull-request
| 2015-11-12T15:21:18 |
2025-04-01T04:34:50.258229
|
{
"authors": [
"apechimp",
"armw4"
],
"repo": "lanetix/node-lanetix-amqp-easy",
"url": "https://github.com/lanetix/node-lanetix-amqp-easy/pull/28",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2161237110
|
🛑 Grafana is down
In fd7322b, Grafana (http://langasg1.ddns.net:8082) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Grafana is back up in c0d8244 after 9 minutes.
|
gharchive/issue
| 2024-02-29T13:17:03 |
2025-04-01T04:34:50.260507
|
{
"authors": [
"langasg"
],
"repo": "langasg/upptime",
"url": "https://github.com/langasg/upptime/issues/415",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1932202481
|
Integrating Twitter search API as a tool
Feature request
We are seeking to add a tool to Langchain for Twitter API (v1.1). Integrating the API as a tool will allow agents to search for tweets and timelines using a specific search query that filters by users, locations, hashtags, etc. to respond to prompts.
Motivation
Although Langchain currently has TwitterTweetLoader, we have noticed a plethora of parameters the Twitter API provides that are not integrated into Langchain. TwitterTweetLoader currently only allows us to specify a list of users to return tweets from and the maximum number of tweets. It would be beneficial to have more options to specify different operators in search queries. A tool also allows an agent to actively use the API to respond to prompts, without the user having to manually create their own custom tool or load tweets manually.
Your contribution
We have a small team of developers who will be working on this feature request, and we will submit a pull request later in 1-2 months which implements it. We will do our best to follow the guidelines for contributions, as stated in contributing.md.
We would like to expand on our initial post by explaining our team's plan for implementing the feature, to see if the community has any suggestions. We will provide code later in our pull request, but this is intended to be a high-level summary to engage with the community.
Our tool will be using the Twitter tweet search endpoint (https://developer.twitter.com/en/docs/twitter-api/v1/tweets/search/api-reference/get-search-tweets). We are looking into using the requests package or Tweepy. We will be using BaseModel to add a new utility called TwitterAPIWrapper (added to libs/langchain/utilities) to make a request to the Twitter API. This utility will validate API keys and module installation(s), as well as make requests based on the parameters given. Currently, we are looking to include parameters for: keywords, hashtags, mentions, sender, lang, locale, date, tweet list and max number of results.
The tool will be created as TwitterSearchRun (extending BaseTool and added to libs/langchain/tools) and we will write a new BaseModel subclass TwitterSearchSchema which will describe the run parameters for TwitterSearchRun.
We'll of course be adding tests to libs/langchain/tests/integration_tests/utilities to make sure all our methods are working as intended. We will also look into adding it into docs and a notebook example in docs/extras/integrations/tools.
We welcome any suggestions or concerns about our plan!
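For concreteness, here is a minimal sketch of what the planned tool could look like. This is an assumption-laden illustration, not existing langchain code: TwitterAPIWrapper, TwitterSearchRun, and TwitterSearchSchema are the names proposed above, the credential fields are illustrative, and it assumes Tweepy's v1.1 client plus the pydantic v1 environment langchain used at the time.
from typing import Optional, Type

import tweepy
from langchain.tools import BaseTool
from pydantic import BaseModel, Field

class TwitterAPIWrapper(BaseModel):
    """Holds credentials and performs the v1.1 search call."""
    consumer_key: str
    consumer_secret: str
    access_token: str
    access_token_secret: str

    def search(self, query: str, lang: Optional[str] = None, count: int = 10) -> str:
        # OAuth 1.0a user context, as required by the v1.1 search endpoint.
        auth = tweepy.OAuth1UserHandler(
            self.consumer_key, self.consumer_secret,
            self.access_token, self.access_token_secret,
        )
        api = tweepy.API(auth)
        params = {"q": query, "count": count}
        if lang:
            params["lang"] = lang
        return "\n".join(tweet.text for tweet in api.search_tweets(**params))

class TwitterSearchSchema(BaseModel):
    """Run parameters for TwitterSearchRun."""
    query: str = Field(description="Search query, e.g. '#langchain from:someuser'")
    lang: Optional[str] = Field(default=None, description="Optional ISO language filter")
    count: int = Field(default=10, description="Maximum number of tweets to return")

class TwitterSearchRun(BaseTool):
    name: str = "twitter_search"
    description: str = "Search recent tweets matching a query."
    args_schema: Type[BaseModel] = TwitterSearchSchema
    api_wrapper: TwitterAPIWrapper

    def _run(self, query: str, lang: Optional[str] = None, count: int = 10) -> str:
        return self.api_wrapper.search(query, lang=lang, count=count)
An agent could then receive TwitterSearchRun(api_wrapper=TwitterAPIWrapper(...)) in its tools list like any other langchain tool.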
Closing the issue, since we ran into unexpected problems getting access to API with the pricing updates. We won't be working on it, so anyone else interested can pick it back up.
|
gharchive/issue
| 2023-10-09T02:28:26 |
2025-04-01T04:34:50.266606
|
{
"authors": [
"clwillhuang"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/issues/11538",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1782808752
|
llama Embeddings error
System Info
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.init
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
Who can help?
@sirrrik
Information
[ ] The official example notebooks/scripts
[ ] My own modified scripts
Related Components
[ ] LLMs/Chat Models
[ ] Embedding Models
[ ] Prompts / Prompt Templates / Prompt Selectors
[ ] Output Parsers
[ ] Document Loaders
[ ] Vector Stores / Retrievers
[ ] Memory
[ ] Agents / Agent Executors
[ ] Tools / Toolkits
[ ] Chains
[ ] Callbacks/Tracing
[ ] Async
Reproduction
pip install the latest LangChain package from PyPI on Mac
Expected behavior
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.init
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
I get this too... and the llama-cpp-python package is installed.
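A quick diagnostic sketch for this situation (an assumption about the usual cause, not a confirmed fix): the interpreter running LangChain is often not the one llama-cpp-python was installed into, so checking importability from the exact same environment narrows it down.
import sys

print(sys.executable)  # confirm which Python environment is actually active
import llama_cpp       # raises ModuleNotFoundError if the package is missing here
print(llama_cpp.__file__)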
|
gharchive/issue
| 2023-06-30T16:22:20 |
2025-04-01T04:34:50.276846
|
{
"authors": [
"rhubarb",
"sirrrik"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/issues/6980",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1898928211
|
Add feature for extracting images from pdf and recognizing text from images.
Description
It is for #10423: it would be a useful feature if we could extract images from pdf and recognize text on them. I have implemented it with PyPDFLoader, PyPDFium2Loader, PyPDFDirectoryLoader, PyMuPDFLoader, PDFMinerLoader, and PDFPlumberLoader. RapidOCR is used to recognize text on extracted images. OCR is time-consuming, so a boolean parameter extract_images is added to control whether to extract and recognize. I have tested the time usage for each parser on my own laptop (ThinkBook 14+ with an AMD R7-6800H) by unit test, and the results are:
extract_images | PyPDFParser | PDFMinerParser | PyMuPDFParser | PyPDFium2Parser | PDFPlumberParser
---|---|---|---|---|---
False | 0.27s | 0.39s | 0.06s | 0.08s | 1.01s
True | 17.01s | 20.67s | 20.32s | 19.75s | 20.55s
Issue
#10423
Dependencies
rapidocr_onnxruntime in RapidOCR
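For illustration, usage of the new flag might look like the sketch below (the file name is hypothetical, and rapidocr_onnxruntime must be installed):
from langchain.document_loaders import PyPDFLoader

# extract_images=True routes embedded images through RapidOCR, so expect
# roughly the 20x slowdown shown in the timing table above.
loader = PyPDFLoader("example.pdf", extract_images=True)
docs = loader.load()  # recognized text is included in the page contents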
@eyurtsev @baskaryan would it be possible to have a look at this? It may be a great feature to add. :)
this is awesome, thank @SuperJokerayo!!
|
gharchive/pull-request
| 2023-09-15T18:58:08 |
2025-04-01T04:34:50.282978
|
{
"authors": [
"SuperJokerayo",
"baskaryan"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/10653",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2080286493
|
Use newer MetadataVectorCassandraTable in Cassandra vector store
as VectorTable is deprecated
Tested manually with test_cassandra.py vector store integration test.
cc @hemidactylus
This is great, thank you @cbornet !
LGTM.
@efriis may I ask you to please take a quick look?
|
gharchive/pull-request
| 2024-01-13T12:57:50 |
2025-04-01T04:34:50.284646
|
{
"authors": [
"cbornet",
"hemidactylus"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/15987",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2100576362
|
langchain: Fix streaming error of TongYi model
Description:
Fix streaming error of TongYi model (Allow merge_dicts to merge int value)
Issue: the issue
When using Tongyi or ChatTongyi with streaming in the use case, the following error occurred:
TypeError: Additional kwargs key output_tokens already exists in left dict and value has unsupported type <class 'int'>
Dependencies:
libs/core/langchain_core/utils/_merge.py
ints were not made addable by default intentionally, since the integer should be placed into one of the chunks. This may be an incorrect assumption though; could you share the raw streaming response from TongYi to show what integer is being added? We want to make sure that an additive behavior makes sense for it, and we'd definitely want to unit-test that after the fact. This utility is used by all streaming code, so it's really critical.
These are the values that needed to be merged when the error occurred:
k = 'output_tokens'
left = {'input_tokens': 530, 'output_tokens': 2, 'total_tokens': 532}
merged = {'input_tokens': 530, 'output_tokens': 2, 'total_tokens': 532}
right = {'input_tokens': 530, 'output_tokens': 13, 'total_tokens': 543}
v = 13
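To make the discussion concrete, here is a minimal sketch, not the actual langchain_core/utils/_merge.py code, of what an additive merge for int values would do with the chunks above:
def merge_int_values(left: dict, right: dict) -> dict:
    # Sketch only: sum int values key by key, as the PR proposes for merge_dicts.
    merged = dict(left)
    for key, value in right.items():
        if isinstance(value, int) and isinstance(merged.get(key), int):
            merged[key] += value
        else:
            merged[key] = value
    return merged

left = {"input_tokens": 530, "output_tokens": 2, "total_tokens": 532}
right = {"input_tokens": 530, "output_tokens": 13, "total_tokens": 543}
print(merge_int_values(left, right))
# {'input_tokens': 1060, 'output_tokens': 15, 'total_tokens': 1075}
Note how summing double-counts input_tokens here; that is precisely the concern behind the maintainer's request for the raw streaming response.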
Has the version been released yet?
|
gharchive/pull-request
| 2024-01-25T14:58:59 |
2025-04-01T04:34:50.288773
|
{
"authors": [
"Ca11back",
"eyurtsev",
"jiangyd"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/16580",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1682938032
|
add load data from yuque
Completes loading docs from Yuque.
The annotation issue has been modified, please check it again.
Hi @yyz629, could you please resolve the merge issues and address the last comments (if needed)? After that, ping me and I'll push this PR for review. Thanks!
Closing because the PR wouldn't line up with the current directory structure of the library (would need to be in /libs/langchain/langchain instead of /langchain). Feel free to reopen against the current head if it's still relevant!
|
gharchive/pull-request
| 2023-04-25T10:59:45 |
2025-04-01T04:34:50.290553
|
{
"authors": [
"efriis",
"leo-gan",
"yyz629"
],
"repo": "langchain-ai/langchain",
"url": "https://github.com/langchain-ai/langchain/pull/3516",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2352426960
|
Add Postgres Checkpointer
Based on the base checkpointer and the patterns in the sqlite and async sqlite checkpointers, wanted to build an additional checkpointer for postgres to support long-term & hosted storage of checkpoints.
This is my first fork/PR on a public project, but wanted to contribute towards something that would support my use of langgraph as well as others. So let me know what I can add/do to improve!
Thank you for the PR! Excited to have you join the community.
We're planning to add some "reference implementations" of checkpointing next week (postgres and redis to start, probably) but don't plan to add them directly to the langgraph package - we will likely either keep them as reference implementations or have separate packages for core ones
Will try to follow up here when we start that
I am very interested in these features.
Will LangGraph publish releases or a changelog in the future?
Otherwise, it is difficult to know what new features are available without reading the PRs or the source code.
Yes we plan to add release notes + transition to semver
A package that we could install when employing Redis/Postgres-based checkpointers would be nice.
@WesGBrooks thank you for your contribution! we now have an official implementation of a LangGraph checkpointer for Postgres as a separate library (langgraph-checkpoint-postgres). please see more here and here
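For reference, a usage sketch of that official library (API details may vary by version; the connection string and the toy graph are placeholders):
from typing import TypedDict

from langgraph.checkpoint.postgres import PostgresSaver
from langgraph.graph import START, StateGraph

class State(TypedDict):
    value: int

def bump(state: State) -> State:
    return {"value": state["value"] + 1}

builder = StateGraph(State)
builder.add_node("bump", bump)
builder.add_edge(START, "bump")

DB_URI = "postgresql://user:pass@localhost:5432/db"  # placeholder connection string
with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first use
    graph = builder.compile(checkpointer=checkpointer)
    graph.invoke({"value": 0}, {"configurable": {"thread_id": "demo"}})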
|
gharchive/pull-request
| 2024-06-14T03:45:46 |
2025-04-01T04:34:50.294962
|
{
"authors": [
"WesGBrooks",
"gbaian10",
"hinthornw",
"thiagotps",
"vbarda"
],
"repo": "langchain-ai/langgraph",
"url": "https://github.com/langchain-ai/langgraph/pull/670",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2598657426
|
Added Persian/Farsi Translations
I added the Persian (Farsi) translation. The changes are:
Added Persian (Farsi) markdowns.
Reviewed and improved some translated texts for clarity.
P.S. Further refinements are needed for consistency and readability. Feedback and suggestions for improvement are welcome.
@AllenWriter
Hi, if any changes or improvements are needed, give me a hint 🤝
Hi there! Thank you so much for your valuable contribution. I apologize for the delayed response.
Dify Docs is planning a significant version update shortly. Given the extensive nature of your changes, we must carefully consider several factors before merging them into the main branch.
We are developing a new content management system for community contributions like yours. This will help us better handle and integrate volunteer contributions moving forward.
We truly appreciate your enthusiasm and patience. Please stay tuned for updates on our new contribution system.
I appreciate your understanding!
Thank you so much for your comment. Yeah very well, I'm looking forward to it.
|
gharchive/pull-request
| 2024-10-19T02:08:11 |
2025-04-01T04:34:50.302254
|
{
"authors": [
"AllenWriter",
"mosnfar"
],
"repo": "langgenius/dify-docs",
"url": "https://github.com/langgenius/dify-docs/pull/325",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1032198898
|
The changelog attribute is currently just a string on time-event.
Note that the changelog attribute is currently just a string.
I suggest we deal with this later. Creating an issue.
Originally posted by @stigbd in https://github.com/langrenn-sprint/race-service/issues/26#issuecomment-948373174
Resolved in 391fbb3451cd280449f6f252bcfa19e99412f7c8
|
gharchive/issue
| 2021-10-21T08:19:50 |
2025-04-01T04:34:50.326142
|
{
"authors": [
"stigbd"
],
"repo": "langrenn-sprint/race-service",
"url": "https://github.com/langrenn-sprint/race-service/issues/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1308836498
|
Respect known inputs, constrained P and T solvers
PR Summary
The get_sg_eos function did not properly take into account the knowns and unknowns for all possible input conditions, nor could it initialize mixed material regions with P-T, rho-T, or rho-P inputs. There is now a different kernel for each input condition and 2 new PTE solvers that can take into account when P or T is known but the internal energy is not.
PR Checklist
[x] Adds a test for any bugs fixed. Adds tests for new features.
[x] Format your changes by using the make format command after configuring with cmake.
[x] Document any new features, update documentation for changes made.
[x] Make sure the copyright notice on any files you modified is up to date.
[x] After creating a pull request, note it in the CHANGELOG.md file
@jhp-lanl, please take a look at the new kernels and make sure the inputs are treated as inputs and the outputs are treated as outputs.
@Yurlungur, @jdolence, @chadmeyer, @jhp-lanl I gave the solvers my best shot. For now I have just commented out initbase and replaced them with different init bases that I think should work. I think there is still an issue in initialization at how to get good initial volume fraction / component density guesses yet still conserve mass.
@jhp-lanl @jdolence I just saw that the new Tguess machinery was merged in. Now that I have a function in the base class for obtaining a Tguess, should it be used there?
Are you asking about your GetTguess() function? I don't think that's necessary. It looks like @jdolence intended ApproxTemperatureFromRhoMatU() to be used by the host code when an approximation for the PTE volume fractions and densities is already known (i.e. the host code decides when this is the case).
Your stuff is mainly for initialization where we wouldn't expect the material volumes or volume fractions to be known before the PTE solve, so your volume fraction guess is probably as good as any. I think your functions need to assume that only the bare minimum is known about a state. You still allow a temperature guess to be passed through right? I think that should be sufficient.
But I haven't thought about this code in a while so I could be forgetting key details.
@Yurlungur I think this is ready, give this another look when you get a chance. Hoping to get this merged soon.
Oh also @dholladay00 and/or @jhp-lanl if we could add documentation regarding the two new PTE solvers, that would be helpful. Jeff's pdf should be enough, just dump it into sphinx in the right file.
@Yurlungur, in terms of using the cache, I do the offset calculation to figure out the cache location within the solver and create an accessor to it outside of the solve. It's not ideal but it doesn't mess with the current implementation of caching in the solvers and it allows me to take advantage of it when doing lookups after the PTE solve.
I can do this @dholladay00 since you've been slogging through debugging and writing this code. It will come in a separate MR and I'll create an issue
:+1: I think this is good enough for now. With the issue filed, we can resolve it later.
Thanks, @jhp-lanl ! That works for me.
As soon as @jdolence confirms nothing breaks downstream in riot, I'm ready to merge.
@Yurlungur @jdolence is there a timeline for the downstream testing? I'd like to get this merged ASAP.
@Yurlungur and @jdolence I reiterate that this should be merged ASAP.
Just spoke to @jdolence; let's just merge it. Will merge as soon as tests pass.
|
gharchive/pull-request
| 2022-07-19T00:42:06 |
2025-04-01T04:34:50.372998
|
{
"authors": [
"Yurlungur",
"dholladay00",
"jhp-lanl"
],
"repo": "lanl/singularity-eos",
"url": "https://github.com/lanl/singularity-eos/pull/156",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
276735837
|
Utility Bar: Adding 9Anime Discord Link and 9Anime Twitter Links
[ ] Bug
[ ] Question
[x] Suggestion
[ ] Other (please describe):
Describe your issue:
Hi Lap00za, I recently thought of this. Wouldn't it be good if there were the 9Anime Discord link as well as its Twitter link?
Well, there is enough space and I think it'll be very convenient for us to have it. ^^
We used to have a Discord server, but it wasn't that active back then.
We will take this into consideration.
Oh you mean the one which lap00za created and then deleted? @densityx
I think @AccelerateIzaya is talking about adding links to 9anime's official Discord server and Twitter account. Correct? If yes, I don't understand why we should be linking 😄
LOL.
|
gharchive/issue
| 2017-11-25T08:37:43 |
2025-04-01T04:34:50.380503
|
{
"authors": [
"AccelerateIzaya",
"densityx",
"lap00zza"
],
"repo": "lap00zza/9anime-Companion",
"url": "https://github.com/lap00zza/9anime-Companion/issues/107",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
484169130
|
Laradock integration with VSCode Remote Development
This is just to keep track of what's needed to have the laradock workspace container properly supported in Visual Studio Code Remote Development (Containers).
Current pending issues from VSCode side (please add more, if needed):
[Dev Container] allow passing another user to "docker-compose exec" command when connecting to a container - https://github.com/microsoft/vscode-remote-release/issues/538
Open folder in container starts all compose services - https://github.com/microsoft/vscode-remote-release/issues/211
Once these issues are addressed, it would be interesting if laradock provided an example devcontainer.json configuration to make VSCode attach to the laradock workspace container.
I have a working devcontainer.json.
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or the definition README at
// https://github.com/microsoft/vscode-dev-containers/tree/master/containers/docker-existing-docker-compose
{
// See https://aka.ms/vscode-remote/devcontainer.json for format details.
"name": "Existing Docker Compose (Extend)",
// Update the 'dockerComposeFile' list if you have more compose files or use different names.
// The .devcontainer/docker-compose.yml file contains any overrides you need/want to make.
"dockerComposeFile": [
"../../laradock/docker-compose.yml"
],
// The 'service' property is the name of the service for the container that VS Code should
// use. Update this value and .devcontainer/docker-compose.yml to the real service name.
"service": "workspace",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/var/www/first-project",
// Uncomment the next line if you want to keep your containers running after VS Code shuts down.
// "shutdownAction": "none",
// Uncomment the next line if you want to add in default container specific settings.json values
// "settings": { "workbench.colorTheme": "Quiet Light" },
// Uncomment the next line to run commands after the container is created - for example installing git.
// "postCreateCommand": "apt-get update && apt-get install -y git",
// Add the IDs of any extensions you want installed in the array below.
"extensions": [
"onecentlin.laravel-blade",
"felixfbecker.php-intellisense",
"amiralizadeh9480.laravel-extra-intellisense",
"fterrag.vscode-php-cs-fixer"
]
}
Be sure to make the following path point to your Laradock docker-compose.yml.
"dockerComposeFile": [
"../../laradock/docker-compose.yml"
],
Also make sure that the following path points to the folder of the project you want to work on; this is the path inside the workspace container:
"workspaceFolder": "/var/www/first-project",
You're also free to add and remove extensions from:
"extensions": [
"onecentlin.laravel-blade",
"felixfbecker.php-intellisense",
"amiralizadeh9480.laravel-extra-intellisense",
"fterrag.vscode-php-cs-fixer"
]
note: This devcontainer.json is meant for remote development when Laradock is already up and running. I think there are multiple ways to approach this. This is the one that I've been using for the last couple of weeks.
Thanks @boumanb your snippet works like a charm. Cheers.
@boumanb Thank you very much. I have also been using this configuration for a few weeks now and it really brings development to a new level!
One thing I would like to achieve to improve my workflow even more, would be to automatically start the dev containers, if they are not running already.
I tried to add the services to runServices, but it just fails. It seems it's missing the .env file, so how can I provide it to the extension?
I think my configuration should actually work, but the problem is that the command you see in the picture fails. It fails because somehow the env variables are not present when this command is run from a different location than the laradock directory itself. So how can I provide the .env file to the devcontainer?
I know this is probably not the best place to ask, because it is rather a configuration of the vscode remote containers extension, but maybe you know how to solve this issue.
@MannikJ , have you tried the sample configuration now shipped with laradock? https://github.com/laradock/laradock/blob/master/.devcontainer/devcontainer.example.json
Just go to the folder laradock/.devcontainer, copy devcontainer.example.json as devcontainer.json, make your own adjustments, open the laradock folder in VSCode (with the Remote-Containers extension already installed), and click on the button Reopen in Container.
Hey, as I tried to explain, I have a working configuration. At least when the containers are running before I connect via remote container extension.
But it is not able to spin up the containers itself, because the laradock .env file does not reside in the project directory. I guess it tries to run docker-compose with the laravel .env file, because that's what the command sees.
I use the multi-project setup, which is also important to know.
{
"name": "Laradock",
"dockerComposeFile": "../docker-compose.yml",
"remoteUser": "laradock",
"runServices": [
"nginx",
"postgres",
"pgadmin"
],
"service": "workspace",
"workspaceFolder": "/var/www",
"shutdownAction": "stopCompose",
"postCreateCommand": "uname -a"
}
From the default configuration it looks like the docker-compose file is located in the project directory (one level up from devcontainer.json), which I don't quite understand.
@MannikJ, the trick here, assuming you have a laradock folder in your project root (example: /home/me/my-project/laradock), is opening the /home/me/my-project/laradock subfolder directly in VSCode, rather than /home/me/my-project. Note that your devcontainer.json file should go to the /home/me/my-project/laradock/.devcontainer folder.
Ok, I understand that. But the point is still that my scenario is different. My laradock folder is not in the project directory but one level up, like the laradock docs propose for a multi-project setup.
But actually I feel this isn't the best setup anyway, as I am thinking about a way to provide the whole laradock environment for each project with its .env files etc. I had something like that a while ago, but it got very tedious because it wasn't done properly. But this is another very complicated topic. For now I just would like to have a way to tell the remote extension to use the environment file from another directory (the laradock directory that is located one level up from the project directory).
Hi @MannikJ,
No problem. I found your request interesting and played with it all day, never ran into this since my containers are always up. To my surprise, it's very confusing.
My setup (tried to replicate yours based on your comments, please say so if I'm wrong):
What I've found out:
"runArgs" is not available for Docker Compose according to VS Code documentation (it isn't listed underneath Docker Compose and/or General).
VS Code takes the .env file from the project / workspace root by default*. I even managed to get the whole thing working this way 😄 .
For as far as I could discover, it is not possible to override the path VS code takes the .env file from right now. Not by settings in VS Code and not by settings in the .devcontainer.json.
So besides placing the Laradock .env file into our project root, it isn't possible to provide a different .env file location (at this moment). There are 4 possible workarounds, of which 2 are working and tested by me (numbers 2 and 3). They're ordered by my preference:
I'm not aware of what OS you're using, but if you're on Linux, you may succeed by symlinking the .env file like so: For project A ln -s /home/user/development/laradock/.env /home/user/development/projecta/.env and for project B ln -s /home/user/development/laradock/.env /home/user/development/projectb/.env this way you have a central .env file that is identical for each and every project. If you would like to differ from it for whatever reason, just copy it instead of making a symlink. This solution is, in my eyes, the best.
Copy your .env file from laradock/.env to projecta/.env and open the projecta folder in VS Code.
Copy your .env file from laradock/.env to laradock/../. Create a .devcontainer folder in laradock/../ and create a devcontainer.json file identical to the one you're using right now, but let the dockerComposeFile reference "../laradock/docker-compose.yml". Lastly, set workspaceFolder to /var/www. Now open laradock/../ in VS Code and reopen in container.
Leverage something like the initializeCommand inside of the devcontainer.json file to set all the variables in the environment (?) 🤷‍♂️
*The default .env file is picked up from the root of the project, but you can use env_file in your Docker Compose file to specify an alternate location.
copied from: VS Code documentation underneath Docker Compose
|
gharchive/issue
| 2019-08-22T19:24:23 |
2025-04-01T04:34:50.412780
|
{
"authors": [
"MannikJ",
"boumanb",
"lbssousa",
"ludo237"
],
"repo": "laradock/laradock",
"url": "https://github.com/laradock/laradock/issues/2248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
200522347
|
Make Laradock more maintainable
As laradock is getting more and more famous, we have more and more pull requests that we want to merge...
Well, if we really want to make this the official repo, I suggest we make this more Laravel-like.
We could have core stacks; with those stacks we can easily spin up a Laravel dev machine.
For example, that stack is: php, mysql, php-fpm, redis, nginx, node.
If we do intend to use extensions, for example xdebug, they would be listed in our plugin stacks.
I hope you get the idea.
If we want this to be as maintainable as possible, we should limit the core stacks we use, because not all of them would be usable in the same project.
We want to keep only the very basics and extend using plugins.
Please let me hear your thoughts, thanks.
@g0ld3lux Thanks for bringing this up.
As far as being the official repo, this is (as far as I know) something entirely out of our control. It depends on the decision of Taylor Otwell so although we can hope for it (obviously it would make our project/community grow), I don't think we should expect it nor base our long-term planning of this project on it.
That being said I do agree that we need to rethink our approach to how many frameworks/stacks we support and where we want to concentrate our effort.
Maintainability is definitely an issue. One of the problems we are facing is the lack of active support on our side for our less popular images (discussed previously in #368). Pushing in new updates can often be delayed, and I know that several of the more active contributors spend quite a bit of time on helping out newcomers instead of adding new features/fixing bugs.
That isn't to say that helping newcomers is not a good way to contribute but we do need more code contributions if we want a more stable project and it does need to be more stable if we want it to be easier to use.
Docker Compose is still a relatively new technology and I can't think of another open-source Docker Compose stack project that does what we are doing. We are in the unfortunate position to be at the receiving end of the errors/bugs that can seep through the several layers of technologies that make running this project possible so I highly doubt that we can be stable AND trendy AND diversified at the same time but we can certainly pick one of those and do it well.
As far as supporting other frameworks, I think it is not a compatible goal with improving our support of Laravel.
What does everybody else think? Obviously @Mahmoudz and @appleboy I'd love to have your opinion on this.
Maybe we can follow the Contribution Guidelines of Gitea project
To make sure every PR is checked, we have team maintainers. Every PR MUST be reviewed by at least two maintainers (or owners) before it can get merged. A maintainer should be a contributor of Gitea (or Gogs) and contributed at least 4 accepted PRs. A contributor should apply as a maintainer in the Gitter develop channel. The owners or the team maintainers may invite the contributor. A maintainer should spend some time on code reviews. If a maintainer has no time to do that, they should apply to leave the maintainers team and we will give them the honor of being a member of the advisors team. Of course, if an advisor has time to code review, we will gladly welcome them back to the maintainers team. If someone has no time to code review and forgets to leave the maintainers team, the owners may move him or her from the maintainers team to the advisors team.
@g0ld3lux I like the idea of enhancing the maintainability of the project.
By plugins, do you mean a repository containing support for a piece of software? Can you give us a more detailed example of your solution, please?
@philtrep @appleboy I do support bringing in more maintainers. Maybe @g0ld3lux would like to join us and help us implement his solution to fixing the maintainability issue. @appleboy's suggestion (Gitea) looks very nice as a way to organize the PRs; I'd like to see that happening, but I think it's not as easy as it sounds!
@Mahmoudz @appleboy one of the most important parts here is (in my opinion) the support of multiple frameworks. The project has gotten quite big since #76 and I think we should reopen a discussion on whether or not other frameworks should be supported.
As far as the Gitea guidelines, I'm fine with the minimum contributions to become a maintainer, but making 2 reviewers mandatory would be too hard to implement atm. I already don't have time to do code contributions for LaraDock 😭 and I think it's the same with you guys...
@philtrep @Mahmoudz @appleboy
I completely agree! The name laradock also leads to some "confusion". I am thinking, if this project wants to be the official docker package for Laravel, it should stay focused on Laravel.
I am willing to help the cause best I can. @philtrep knows my work (with the two recent PRs)
@philtrep @Mahmoudz @mikeerickson
Maybe we need to create another repo to handle other frameworks and make this repo the official docker package for Laravel. We could find more maintainers to handle other frameworks' issues; there are many issues on other frameworks (CodeIgniter, etc.).
@philtrep @Mahmoudz @appleboy
I am primed to help maintain / triage this repo as I only use Laravel for PHP work
Just to keep this issue updated: @mikeerickson has been added to the admin team and the issue tags have been updated for clarity.
|
gharchive/issue
| 2017-01-13T00:59:55 |
2025-04-01T04:34:50.423694
|
{
"authors": [
"Mahmoudz",
"appleboy",
"g0ld3lux",
"mikeerickson",
"philtrep"
],
"repo": "laradock/laradock",
"url": "https://github.com/laradock/laradock/issues/550",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1306844654
|
Enhance ds:check for dsq, ds-macro and commented ds
Hi Luan,
This PR enhances the php artisan ds:check and does not introduce any breaking change.
Main changes
The dsq() is now included in the functions to be checked;
Commented calls //ds() will now produce a "ds() found error";
Macro DS calls User::query()->where('id', 20)->ds()->get(); also produces a "ds() found error";
Changed the ci_check.directories to base_path('app');
Removed the spaces on ci_check.text_to_search for better user experience;
Test is now checking for functions; whenever a new shortcut is created, it must be included there.
Problem
Currently, ds:check is not detecting the commented calls and macro calls.
Even though commented ds() calls would not cause any problem, having them in the final product can be seen as a sign of "less professional" code quality.
Result
Please let me know if changes are required.
Greetings and thanks,
Dan
|
gharchive/pull-request
| 2022-07-16T16:09:18 |
2025-04-01T04:34:50.430808
|
{
"authors": [
"dansysanalyst"
],
"repo": "laradumps/laradumps",
"url": "https://github.com/laradumps/laradumps/pull/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1182571541
|
Fix error in global search when user cannot view or edit record.
@ryangjchandler Can you give feedback on this one?
Ideally data is filtered by getEloquentQuery() before, but you cannot rely on that entirely I think.
Fixes https://github.com/laravel-filament/filament/issues/2031
@ryangjchandler Can you give feedback on this one?
Thank you
|
gharchive/pull-request
| 2022-03-27T15:42:17 |
2025-04-01T04:34:50.446446
|
{
"authors": [
"danharrin",
"pxlrbt"
],
"repo": "laravel-filament/filament",
"url": "https://github.com/laravel-filament/filament/pull/2032",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
164231073
|
Default password reset email causes RFC 5322 not compliant complaint
I was testing the password reset feature shipped by Laravel 5.2 today, using Mailgun sandbox domain.
I noticed from the Mailgun logs that whenever those emails were going to be sent to my Hotmail account, Mailgun failed to deliver and returned an "RFC 5322 not compliant" complaint. I think it is from Microsoft mail servers. And whenever those emails were going to be sent to Gmail, I received all the mails but Gmail labelled them as "Unknown sender".
I dug into the Laravel source and found that in trait Illuminate\Foundation\Auth\ResetsPasswords, on line 122, when setting up the password reset mail, the function resetEmailBuilder() does not include any "from" or "Application Name" info.
I know I can override the function in the default App\Http\Controllers\Auth\PasswordController, doing something like:
use Illuminate\Mail\Message;
...
protected function resetEmailBuilder()
{
return function (Message $message) {
$message->from(env('MAIL_FROM'), env('MAIL_NAME'));
$message->subject($this->getEmailSubject());
};
}
But I wonder: why doesn't the Laravel source add the "from" and "application name" when setting up the password reset mail, and set two sample environment variables in .env.example in the first place?
what happens when you change the "from" section in your config/mail.php file?
'from' => [
'address' => 'test@test.com',
'name' => 'My website',
],
I tested this and for me this works....
@it-can Totally forgot the configuration file. Thank you.
|
gharchive/issue
| 2016-07-07T05:15:07 |
2025-04-01T04:34:50.464873
|
{
"authors": [
"CarterZhou",
"it-can"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/14243",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
203847588
|
Error when upgrading from 5.3 to 5.4
Laravel Version: 5.4.#
PHP Version: 7.0.10
Database Driver & Version:
Description:
Found this error after upgrading to 5.4. I upgraded following the suggestions in the Upgrade Guide.
I also tried removing the vendor/ folder and running composer install, but it didn't work.
Why?
Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 262144 bytes) in C:\wamp\www\2_Work\servicedesk\vendor\laravel\framework\src\Illuminate\Container\Container.php on line 549
Script php artisan optimize handling the post-update-cmd event returned with error code 255
Need more than 128MB PHP memory by the looks of it.
Try updating it to 256 or 512 in php.ini
Looks more like an infinite loop.
@bbashy I already try it.
Have you tried running it in development mode, without WAMP?
@ThunderBirdsX3 any progress?
Looks like an infinite loop.
Check if there is a var without double quotes or a lower/upper-case error
I'm getting the same issue, but I'm on PHP 7.1 and Valet
In my case I had a missing comma after a service provider, in the config/app.php providers array.
I had similar problem after upgrading to Laravel 5.4. I had a missing ::class and a comma in config/app.php
|
gharchive/issue
| 2017-01-29T03:16:50 |
2025-04-01T04:34:50.469470
|
{
"authors": [
"DanielHudson",
"GrahamCampbell",
"ThunderBirdsX3",
"bbashy",
"eberharterm",
"kariolos",
"movingSone",
"thachp",
"themsaid"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/17638",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
714364481
|
/email/verify route
Laravel Version: #.#.#
PHP Version: #.#.#
Database Driver & Version:
Description:
When you are already verified, you don't need to verify again.
After registration, we go through verification.
After verification, the /email/verify route is still active and I can verify myself again by mistake by manually navigating to /email/verify, and the email_verified_at entry in the database will be overwritten.
Is there a solution to prevent accidental navigation to the /email/verify route after verification? Or rather, if the user has already been verified, then just go to the main page.
Steps To Reproduce:
We need to add a route check to EnsureEmailsVerified.php:
elseif ($request->route()->uri === '/email/verify') {
return route('login');
}
# here is an example
public function handle($request, Closure $next, $redirectToRoute = null)
{
if (!$request->user() ||
($request->user() instanceof MustVerifyEmail &&
!$request->user()->hasVerifiedEmail())) {
return $request->expectsJson()
? abort(403, 'Your email address is not verified.')
: Redirect::route($redirectToRoute ?: 'verification.notice');
} elseif ($request->route()->uri === '/email/verify') {
return route('login');
}
// dd( $request->user()->hasVerifiedEmail());
return $next($request);
}
Or add another middleware for when we are already verified
If you navigated to /email/verify when already verified, it wouldn't update the DB, regardless of whether you have the signed URL query params.
Check the code that verifies;
public function verify(Request $request)
{
if (! hash_equals((string) $request->route('id'), (string) $request->user()->getKey())) {
throw new AuthorizationException;
}
if (! hash_equals((string) $request->route('hash'), sha1($request->user()->getEmailForVerification()))) {
throw new AuthorizationException;
}
if ($request->user()->hasVerifiedEmail()) {
return $request->wantsJson()
? new JsonResponse([], 204)
: redirect($this->redirectPath());
}
if ($request->user()->markEmailAsVerified()) {
event(new Verified($request->user()));
}
if ($response = $this->verified($request)) {
return $response;
}
return $request->wantsJson()
? new JsonResponse([], 204)
: redirect($this->redirectPath())->with('verified', true);
}
thank you
|
gharchive/issue
| 2020-10-04T17:44:10 |
2025-04-01T04:34:50.478922
|
{
"authors": [
"bbashy",
"maksimuslyandia"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/issues/34673",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
147079731
|
[5.2] Make $query->when() return instance if callback returns null
Follow-up to #12878.
When breaking the chain, we aren't using the return value of orderBy:
$query = $model->where('customer_id', 1234)
->where('created_at', '>=', '2016-01-04 20:34:56');
if (! empty($sortParams)) {
$query->orderBy($sortParams);
}
return $query->get();
Likewise, when using when this PR allows to omit the return in the callback:
return $model->where('customer_id', 1234)
->where('created_at', '>=', '2016-01-04 20:34:56')
->when(! empty($sortParams), function ($builder) use ($sortParams) {
$builder->orderBy($sortParams); // <-- now you can omit the "return" here
})
->get();
TL;DR: If the callback returns null, when returns the builder instance instead of null. This improves chainability.
Added tests for all this goodness. :information_desk_person:
Ping @tomschlick
If you're not explicitly returning the modified object and you're not passing it by reference in the function arguments how is it actually applying the changes in the closure to the $this/$builder variable?
What am I missing?
Builder objects are mutable.
Looks breaking to me?
The only change is that if the callback returns nothing, when returns the current $query instead of nothing.
Just like what would happen if $value were false.
Strictly, it's not 100% backward compatible, but:
the when method has been added recently;
the change is very specific, and only affects someone who would be expecting when to return null if $value is true AND $callback returns null.
I think this should go to 5.3 instead.
I'd still opt for 5.2. Thoughts?
The only thing we are gaining is the ability to omit return from the statement right?
Yes, so that when is chainable.
Just use return.
Refs #13726 to emphasize this method has to be duplicated into the Eloquent query builder.
|
gharchive/pull-request
| 2016-04-09T02:15:20 |
2025-04-01T04:34:50.485468
|
{
"authors": [
"GrahamCampbell",
"taylorotwell",
"tomschlick",
"vlakoff"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/13091",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
208748105
|
[5.4] Removed hidden characters from the mail template
To see the hidden characters:
cat -A src/Illuminate/Mail/resources/views/markdown/message.blade.php
Don't see anything
@taylorotwell the PR deletes a symbol before the copyright
|
gharchive/pull-request
| 2017-02-19T21:32:33 |
2025-04-01T04:34:50.487183
|
{
"authors": [
"akeinhell",
"taylorotwell"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/18005",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
245632356
|
[5.5] Highlight the change to datetime2 as a breaking change in the changelog
Related: https://github.com/laravel/framework/pull/18962
I'll argue that this is a breaking change and should be noted as such. This is in regard to migrations, which I believe are very important to keep stable over long timespans.
I have several years' worth of migrations, and this change will change the behavior/result of every already-existing migration. Executing the migrations in 5.5 will result in a different schema than when I executed them in a 5.4 installation. Thus I think this should be documented as a breaking change that needs to be highlighted for SQL Server users.
I will need to rewrite my migrations to use explicit SQL commands to keep the old behavior. This is also an indication that it is a breaking change.
TL:DR; I consider this a breaking change.
Is it possible for us to make this kind of thing configurable?
Probably, but there's way too little time to do anything about it right now. I presume the 5.5 release is very close. I think that the short-term (<1 day) solution would be to document this, and later (>1 day) expose some way to create old datetime types on SQL Server.
I think that the datetime2 implementation should have been a separate method/type on the Blueprint, just as we already do for text, mediumText and longText to expose mysql-specific types. But it's way too late to change that now.
The 5.5 release is around 4 weeks away?
Are you sure? ;) I thought it would be released during Laracon, and the After Party starts in 40 minutes (if I understand the schedule and timezones correctly), which kind of limits the window a bit. There's also the fact that 5.4 just left the usual 6-month support window. And others also think that it will be released today (with vagueness implied due to timezones).
Laravel 5.5 in July 2017 will require PHP 7+
https://twitter.com/taylorotwell/status/809767371774816256
Laravel 5.5 is scheduled to be released in July of 2017 and will be the next major release.
https://laravel-news.com/category/laravel-5.5
Here we go again! Currently scheduled for a July release date, we're on the verge of Laravel 5.5.
https://laracasts.com/series/whats-new-in-laravel-5-5
No, I'm not sure why everyone keeps thinking it is releasing at Laracon. Last year it released at Laracon EU.
I've updated the PR and added notices about dropped support for MySQL 5.5, and SQL Server 2005 and older. This was initiated by https://github.com/laravel/framework/issues/20284
I think we should modify the sql to be compatible with 5.5.
Instead of this statement:
alter table "users" add "foo" datetime(0) not null
We could create this one:
alter table "users" add "foo" datetime not null
When no precision is passed to the function, we keep the functionality and the support for 5.5.
@sisve Would something like this also solve the problem of changing the datetime column to datetime2 on sqlserver? I mean, we can create a datetime column if no precision is given and a datetime2 when the parameter is available (this way all of our drivers will keep the same functionality). Also we could add another method to always create a datetime2 field.
I agree with @fernandobandeira. @themsaid could you do this for us?
Here: https://github.com/laravel/framework/pull/20465
|
gharchive/pull-request
| 2017-07-26T07:37:57 |
2025-04-01T04:34:50.496259
|
{
"authors": [
"fernandobandeira",
"sisve",
"taylorotwell",
"themsaid"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/20262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
265668552
|
[5.5] Add user helper
I constantly write auth()->user(), so I thought a user() helper would be nice
I don't know, it feels strange to be honest, also I think you can avoid the outer if since guard() does the same thing inside.
It was proposed and declined many times before.
That's true, refs #8818, see also this answer on Stack Overflow.
Still, maybe this helper should be added, just to avoid more PRs :)
This PR will probably get closed but anyhow:
return app(AuthFactory::class)->guard($guard)->user();
That should be enough; no need to check for null.
|
gharchive/pull-request
| 2017-10-16T07:18:08 |
2025-04-01T04:34:50.499148
|
{
"authors": [
"decadence",
"jmarcher",
"ludo237",
"mateusjatenee",
"vlakoff"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/21683",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
418374232
|
[5.8] Sort auto discovered packages
Service providers of auto-discovered packages are registered in the order of their declaration in vendor/composer/installed.json, which is alphabetical order.
I am currently working on a project where it is necessary to register the service providers in the order they are declared in composer.json or, respectively, the composer.json file of metapackages.
To preserve compatibility it should be sorted optionally by an extra key in the root composer.json file.
"extra": {
"laravel": {
"dont-discover": [],
"sort-dependencies": true
}
}
Wouldn't this break if someone reorders your composer.json file? I don't think the order of composer.json entries are supposed to matter, and there's a setting for composer to automatically order them. I wouldn't expect such a thing to also break my application...
Seems like a case where you'd want to disable auto discovery to get more fine-grained control
If you set composer's sort-packages flag, it would still have the same behavior as right now: alphabetical order.
My intention was to control the order of packages in the composer.json files of metapackages. And this is, for me, the perfect place. I describe in one file which packages should be loaded and also in which order.
With Laravel's default composer.json there is no different behavior. But you can enable this functionality by setting the sort-dependencies flag in the extra:laravel section.
Seems like a case where you'd want to disable auto discovery to get more fine-grained control
In my case disabling auto discovery is not a good solution. I'm developing a web interface for several network devices. All of them have the same basic interface and i extending it with their custom packages.
With this implementation, I can clone my project and generate a specific web interface when I integrate the metapackage with composer require company/wifidevice-bundle.
This bundle requires company/core, company/autobackup etc.
Another device bundle requires company/core, company/sd-card etc.
And here is the problem: company/core always needs to be the first registered service provider.
If I disable auto discovery, I need to tell Laravel manually which service providers should be loaded.
But I don't know which packages are implemented, because the root project is the base for every device bundle.
The PackageManifest::build() method is already parsing vendor/composer/installed.json to find all declared service providers. This file contains all the dependencies of the application, and we should be able to use this to sort the providers according to dependency order instead of alphabetical order. I believe this would work with your use-case assuming all the packages have their dependencies properly declared.
This is exactly what this PR is doing.
The PackageManifest::build() method checks if sort-dependencies is set to true and, if so, it takes the already parsed packages from installed.json and sorts them by dependencies instead of alphabetical order.
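For illustration, a hedged sketch of such a dependency-order (topological) sort over packages parsed from installed.json; the function name and the assumption that $packages is keyed by package name are illustrative, not the PR's actual code:

function sortByDependencies(array $packages): array
{
    $sorted = [];

    $visit = function ($name) use (&$visit, &$sorted, $packages) {
        if (isset($sorted[$name]) || ! isset($packages[$name])) {
            return;
        }

        // Register each dependency's package before the package itself.
        foreach (array_keys($packages[$name]['require'] ?? []) as $dependency) {
            $visit($dependency);
        }

        $sorted[$name] = $packages[$name];
    };

    foreach (array_keys($packages) as $name) {
        $visit($name);
    }

    return $sorted;
}

A real implementation would also need to detect and handle dependency cycles.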
Ah, ok. Now i know what you mean.
I will change this.
I have also changed the composer key from sort-packages to sort-providers.
You should never depend on the registration order of service providers. They should be able to be registered in any order.
|
gharchive/pull-request
| 2019-03-07T15:37:17 |
2025-04-01T04:34:50.505567
|
{
"authors": [
"devcircus",
"kapalla",
"sisve",
"taylorotwell"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/27816",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1834778179
|
[10.x] Add timestamps casting and mutation support to Eloquent
This PR fixes #47769.
This PR adds a new method (freshTimestampForAttribute) to Eloquent which can be used to create a fresh timestamp for any attribute, taking either its mutation or its casting into account. It is a replacement for the freshTimestampString method, so it will no longer ignore the casting or mutation of the model's attributes.
This method can be used in other parts, like SoftDeletes and touching related models, instead of freshTimestampString.
For now I just used it to fix issue #47769.
Got the idea from @timacdonald PR (#47942).
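As a rough sketch of the idea (not the PR's exact code): assign a fresh timestamp through setAttribute so the attribute's mutator or cast runs, then read back the stored value.

public function freshTimestampForAttribute($attribute)
{
    // setAttribute applies any mutator/cast defined for the attribute.
    $this->setAttribute($attribute, $this->freshTimestamp());

    return $this->attributes[$attribute];
}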
Going to look at @timacdonald's PR. If this is different, explain the alternatives.
|
gharchive/pull-request
| 2023-08-03T10:25:36 |
2025-04-01T04:34:50.508305
|
{
"authors": [
"amir9480",
"taylorotwell"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/47945",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
49735927
|
[4.2] Log Failed Jobs Exceptions
This fixes issue #6203.
Recreated since this pull got foobar-ed. https://github.com/laravel/framework/pull/6466
Sorry for commenting this late, but I haven't seen this change in either the 5.* branches or the 4.* ones.
Basically, the feature is still missing. Exceptions are not logged in the failed jobs table. Am I correct in this assumption?? Cheers
Hi, I am not seeing the exceptions being logged either (5.1). I am experiencing an erratic behavior, sometimes a job runs as it is supposed to but after finishing it goes to the failed_jobs database table. Other times a job with the same payload fails before the handle() function.. this is killing me since I can't see what exception is being thrown...
Hahahaha, 2 years later and he still hasn't added/fixed this? For shame.
Has it not been fixed yet?
|
gharchive/pull-request
| 2014-11-21T19:05:04 |
2025-04-01T04:34:50.511000
|
{
"authors": [
"elijan",
"milesj",
"paulovitorjp",
"xiatian"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/6472",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
64470926
|
[5.0] add removeMiddleware method on abstract Controller
For disabling specific middleware on some controllers, e.g. when listening for third-party API callbacks.
class GithubCallbackController extends Controller
{
    public function __construct()
    {
        $this->removeMiddleware('csrf');
    }
}
I would really suggest implementing this at the middleware level where you check the request URI and skip the middleware for certain paths.
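A hedged sketch of that suggestion, i.e. skipping verification for certain paths inside the middleware itself; the path is illustrative:

class VerifyCsrfToken extends \Illuminate\Foundation\Http\Middleware\VerifyCsrfToken
{
    public function handle($request, \Closure $next)
    {
        // Skip CSRF verification for third-party callback endpoints.
        if ($request->is('github/callback')) {
            return $next($request);
        }

        return parent::handle($request, $next);
    }
}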
|
gharchive/pull-request
| 2015-03-26T08:39:34 |
2025-04-01T04:34:50.512267
|
{
"authors": [
"taylorotwell",
"zerozh"
],
"repo": "laravel/framework",
"url": "https://github.com/laravel/framework/pull/8166",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2540494236
|
Re-subscribe to the scaling channel when the underlying connection is lost
In scaling mode, when the subscribingClient connection is lost while the app is running, the main app continues to operate without any errors or logs. However, it loses the ability to listen for events on the scaling channel.
According to the clue/reactphp-redis docs:
When using the createLazyClient() method, the unsubscribe and punsubscribe events will be invoked automatically when the underlying connection is lost. This gives you control over re-subscribing to the channels and patterns as appropriate.
So, we need to re-subscribe to the channel when the unsubscribe event is detected.
Drafting until @joedixon can review.
Hey @ashiquzzaman33 - did you manage to check if your implementation behaves as expected? According to the createLazyClient section of the docs, you would actually need to create a whole new connection.
Additionally, if the underlying database connection drops, it will automatically send the appropriate unsubscribe and punsubscribe events for all currently active channel and pattern subscriptions. This allows you to react to these events and restore your subscriptions by creating a new underlying connection repeating the above commands again.
Hello @joedixon, thank you for your comment.
Yes, I’ve tested it, and the implementation behaves as expected. The documentation seems a bit misleading.
Re-subscribing should be sufficient unless we explicitly call $this->subscribingClient->close().
It will automatically create a new client if the current connection is lost.
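A minimal sketch of that behaviour (class names follow the clue/reactphp-redis v2-style API and the channel wiring is illustrative; adjust for the version in use):

$client = (new Clue\React\Redis\Factory())->createLazyClient('localhost:6379');

$client->on('unsubscribe', function (string $channel) use ($client) {
    // Fired automatically when the underlying connection is lost; the lazy
    // client reconnects on the next command, so re-issuing subscribe()
    // restores the subscription.
    $client->subscribe($channel);
});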
Thanks @ashiquzzaman33 do you think there is any scope to add a test for this functionality?
Hi @joedixon,
Yes, I have added a unit test for this. Please review it.
|
gharchive/pull-request
| 2024-09-21T20:13:39 |
2025-04-01T04:34:50.570724
|
{
"authors": [
"ashiquzzaman33",
"joedixon",
"taylorotwell"
],
"repo": "laravel/reverb",
"url": "https://github.com/laravel/reverb/pull/251",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
161399641
|
jasmine reporter giving error AfterAll TypeError with jasmine 2
When I am using the jasmine reporter in a script using the jasmine 2 framework and protractor 3.x:
var jasmineReporters = require('jasmine-reporters');

jasmine.getEnv().addReporter(new jasmineReporters.JUnitXmlReporter({
    consolidate: true,
    consolidateAll: true,
    savePath: '...',
    filePrefix: 'xmloutput'
}));
I am getting the below error using the Chrome browser:
I think this has been addressed with other PRs that have been merged and released in the last 6 months. Feel free to re-open with an example test case if the problem persists.
|
gharchive/issue
| 2016-06-21T10:20:00 |
2025-04-01T04:34:50.582857
|
{
"authors": [
"Saurabh06",
"bloveridge"
],
"repo": "larrymyers/jasmine-reporters",
"url": "https://github.com/larrymyers/jasmine-reporters/issues/146",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
646955008
|
Remove no_text pragmas
I think these are useless because:
abapGit does not use messages, so you have to put the pragma all over the place
all this prevents is an info message in an ATC check nobody looks at
actually, if you ever wanted to translate abapGit, it would make more sense to not have pragmas, because then you could just go over all the info messages
There are 624 instances of either ##NO_TEXT or ##EC_NOTEXT in the codebase. I propose to remove them for readability.
I think some of them in class headers (public, protected, private sections) are automatically added by SE24.
So I suggest starting with the method bodies.
I forget that because of you we still support se24 :D
You could change the CLAS/PROG serializer temporarily to remove the unwanted pragmas.
well, if we want to enforce this it needs to be part of CI
i.e. implement a rule in abaplint, and then have a quickfix for it => it can all be fixed by one command-line command
"#EC NOTEXT
I'm starting this now
ready, configuration =
"forbidden_pseduo_and_pragma": {
"ignoreGlobalClassDefinition": true,
"ignoreGlobalInterface": true,
"pragmas": ["##NO_TEXT"],
"pseudo": ["#EC NOTEXT"]
},
done, closing
|
gharchive/issue
| 2020-06-28T16:05:43 |
2025-04-01T04:34:50.586908
|
{
"authors": [
"FreHu",
"larshp",
"mbtools"
],
"repo": "larshp/abapGit",
"url": "https://github.com/larshp/abapGit/issues/3555",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
618831479
|
Gui refactoring part 4
Continuation of the story with scripts.
So now all the scripts are registered as deferred parts, the scripts method in gui_page is private and not redefined in subclasses. In addition, the get_events method was also removed and converted to the parts approach, called hidden_forms in this case.
For review: it makes sense to review the first 4 commits and the last commit for understanding. All the page* refactorings are similar: remove redefinition, rename method, call register_deferred_script.
P.S. register_deferred_script is a shortcut for zcl_abapgit_ui_factory=>get_gui_services( )->get_html_parts( )->add_part( iv_collection = c_html_parts-scripts ...
maybe merge ? I have some more changes in pipeline :)
so many changes, so little time, so many things to develop
Yeah ... how to replicate myself to do it all ? :)))))
BTW, what's wrong with the abaplint check? It has been hanging for a long time recently...
typically connection stuff, actually I'm considering using only github actions, and not having the app.abaplint.org
Tomorrow I could check this PR out and see if anything breaks during daily use, if that helps?
Nothing out of the ordinary to report, everything worked as usual!
@sbcgua check conflicts
rebased
Thanks @g-back
ref #3286
|
gharchive/pull-request
| 2020-05-15T09:30:24 |
2025-04-01T04:34:50.591843
|
{
"authors": [
"g-back",
"larshp",
"sbcgua"
],
"repo": "larshp/abapGit",
"url": "https://github.com/larshp/abapGit/pull/3362",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2422595685
|
Sunfire Fanatic step no longer requires full set
This step was changed back in April (https://secure.runescape.com/m=news/varlamore-tweaks--drop-rates?oldschool=1), but the plugin still shows that the whole set is required.
Thank you for reporting this issue! 🚀
A patch has been implemented and is currently under review by RuneLite.
You can track the current status here: https://github.com/runelite/plugin-hub/pull/6939
|
gharchive/issue
| 2024-07-22T11:07:35 |
2025-04-01T04:34:50.621404
|
{
"authors": [
"joemckie",
"larsvansoest"
],
"repo": "larsvansoest/emote-clue-items",
"url": "https://github.com/larsvansoest/emote-clue-items/issues/109",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
114648672
|
Is it possible to add an icon, like FontAwesome, to text?
I guess the title says it all.
+1
Hi guys.
Just implemented this in 1.8.2 (in fact you can now pass in any sub-view, so not just an icon). You can pass in styling props to the backgroundStyle to make sure it is nicely centered etc (see the updated Advanced example for how to have a user icon next to a "login" label).
Enjoy...
|
gharchive/issue
| 2015-11-02T17:57:32 |
2025-04-01T04:34:50.623432
|
{
"authors": [
"Coolnesss",
"larsvinter",
"nbastoWM"
],
"repo": "larsvinter/react-native-awesome-button",
"url": "https://github.com/larsvinter/react-native-awesome-button/issues/3",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
229094562
|
Update Vagrantfile to new version of Last.Backend
Update Vagrant file to provision new version of Last.Backend platform
[ ] use coreos box
[ ] install etcd v3 version
[ ] create lastbackend kit image
[ ] create lastbackend dashboard image
[ ] run lastbackend kit image
[ ] run lastbackend dashboard image
Version
0.9.0
Vagrant deployment will not be present. Use docker-machine instead.
|
gharchive/issue
| 2017-05-16T16:31:48 |
2025-04-01T04:34:50.625643
|
{
"authors": [
"undassa"
],
"repo": "lastbackend/lastbackend",
"url": "https://github.com/lastbackend/lastbackend/issues/308",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1663266265
|
Consolidate build tooling
We currently have a wide mix of these, and we should probably consolidate:
parcel, rollup, vite
tsup, tsc, esbuild
ts-node, tsx
If we're able to get rid of parcel in the network package, we should remove engines from package.json (it's only there to fix parcel build issues).
Context from discord:
Vite
Pro - 1 tool that can build everything
Pro - Doubles as a test config don't need to configure babel or anything like with jest
Pro - Should have less hacky ways than otherwise of configuring dev builds to work with hot reloading
Con - Bit of overkill
Tsup
Pro - Really fast even for prod build since it uses esbuild instead of rollup
Pro - Really simple to configure
Pro - Definitely much simpler tool than vite
Con - You are forced to use a different tool once you need to bundle an actual website or non-TypeScript code, or else struggle configuring it with tsup
Tsc
Pro - It's typescript it's kinda the most obvious tool to use
Pro - Simplest to configure (sometimes)
Con - less forgiving than using esbuild
Con - slow
Con - boring
Parcel
con - meh
Rollup
Pro - most well documented build tool in js ecosystem
Pro - Plugin for everything
Pro - one build tool can do everything
Con - vite is basically a better version of rollup
Based on this I think we should try to consolidate to use vite everywhere
vite -> tsup happened in https://github.com/latticexyz/mud/pull/660
just need to do parcel/rollup ones now!
|
gharchive/issue
| 2023-04-11T21:50:45 |
2025-04-01T04:34:50.638187
|
{
"authors": [
"alvrs",
"holic"
],
"repo": "latticexyz/mud",
"url": "https://github.com/latticexyz/mud/issues/612",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1614539285
|
refactor(cli): replace prettier with forge fmt
Hopefully this gets us more consistent formatting!
I intentionally left all the forge defaults, so you'll see some whitespace changes. I'm fine with this.
I just noticed that our prettier version for cli (2.8.4) was different than everywhere else in the monorepo (2.6.2), and I'm wondering if some of our inconsistent formatting was due to mismatched versions.
Looks like what I was able to commit here into this branch is different from what is actually generated, and I think that is the source of our issues. Looks like the tablegen prettier output disagrees with the precommit lint hook (which formats solidity, and with a diff prettier version).
Gonna just align the prettier versions for now.
See https://github.com/latticexyz/mud/pull/470
|
gharchive/pull-request
| 2023-03-08T02:27:12 |
2025-04-01T04:34:50.640894
|
{
"authors": [
"holic"
],
"repo": "latticexyz/mud",
"url": "https://github.com/latticexyz/mud/pull/469",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
181835056
|
Update to rustc 1.14.0-nightly (6e8f92f11 2016-10-07)
The current version fails on nightly: https://travis-ci.org/Manishearth/rust-clippy/builds/166067300#L420
I think it needs to be bumped to 0.2.4 - there's already a 0.2.3 published on crates.io. It's my bad, I published 0.2.3 and forgot to push it.
|
gharchive/pull-request
| 2016-10-08T16:29:06 |
2025-04-01T04:34:50.642810
|
{
"authors": [
"laumann",
"mcarton"
],
"repo": "laumann/compiletest-rs",
"url": "https://github.com/laumann/compiletest-rs/pull/50",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
943763262
|
Implement Migration Source for String/&str
Would it be possible to implement MigrationSource for String or &str?
This would allow migrations to be embedded in the binary, instead of having to keep them separately on disk.
If this is something of interest, I'd love to work on this as my first contribution.
We already support embedded migrations with sqlx::migrate!()
We already support embedded migrations with sqlx::migrate!()
If the sqlx::migrate!() macro is used in a library crate, would the migrations be included in the library without having to have the migrations available in the binary crate?
Yes.
Yes.
Would it be possible to have an example of this usage?
Can't get the sqlx::migrate!() macro to work in this manner.
When running the code the Migrator shows that there are zero migrations pending - on a fresh database.
Thank you for any advice, also out of interest, why was Path chosen and not 'static String?
If your project looks something like this:
migrations/
src/
    ...
    main.rs
Cargo.toml
Then sqlx::migrate!().run(&pool).await? should be all you need. You might need a clean build if you only add new migrations, as we don't have a good way to tell the compiler to watch that folder for new files.
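A minimal sketch of that flow, assuming a Tokio runtime and a Postgres pool (the connection string is a placeholder):

use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = PgPoolOptions::new()
        .connect("postgres://localhost/mydb")
        .await?;

    // migrate!() embeds the ./migrations directory into the binary at
    // compile time, so no migration files are needed on disk at runtime.
    sqlx::migrate!().run(&pool).await?;

    Ok(())
}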
also out of interest, why was Path chosen and not 'static String?
I don't know, @mehcode is the primary author of the migrations feature.
Closing this due to inactivity. @KyleCotton if I didn't answer your questions to your satisfaction please feel free to reopen this.
|
gharchive/issue
| 2021-07-13T19:33:38 |
2025-04-01T04:34:50.649230
|
{
"authors": [
"KyleCotton",
"abonander"
],
"repo": "launchbadge/sqlx",
"url": "https://github.com/launchbadge/sqlx/issues/1326",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1271929879
|
fix #1905 : replaced all uses of "uri" to "url"
See #1905
Sorry I'm quite new to PRs. Going to resolve conflicts
Thanks!
|
gharchive/pull-request
| 2022-06-15T09:16:28 |
2025-04-01T04:34:50.650367
|
{
"authors": [
"RomainStorai",
"abonander"
],
"repo": "launchbadge/sqlx",
"url": "https://github.com/launchbadge/sqlx/pull/1906",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
602796731
|
postgres: Create PgInterval type
This PR creates a new type binding for PostgreSQL INTERVAL type.
Fixes #197
Some notes:
The PR exposes a new type: sqlx::postgres::types::PgInterval, which is fully compatible with the PG INTERVAL type
I did not implement support for https://crates.io/crates/pg_interval since the project does not seem to be actively maintained.
I've implemented TryFrom for chrono::Duration and time::Duration; these conversions fail if there is an overflow, if using incompatible fields (PG days and month fields), or if there is loss of precision by using nanoseconds.
Can you add some tests to postgres-types ?
these conversions fail [...] if there is loss of precision by using nanoseconds.
This may actually cause a lot of errors if people are inserting Durations based on precise timers. Maybe instead of/in addition to a TryFrom<Duration> for PgInterval we need to have like a PgInterval::truncate_nanos(Duration) constructor or something.
That is a good idea, I think that replacing the TryFrom behavior by silencing this type of issue could generate even more bugs that go unnoticed. So I would think that, as you said, adding a new constructor would be better. And I should add a bit of documentation to highlight the difference between the two.
I have one question however: I'm not that seasoned in Rust and I'm not sure how to create a function PgInterval::truncate_nanos(Duration) that could take either a chrono::Duration or a time::Duration, do you have any idea how I should do that?
Thank you
I've created two functions chrono_truncate_nanos and time_truncate_nanos in bd27d0e.
I'm not sure if it is the right thing to do. Please let me know how I can improve that.
@dimtion Sorry for the delay, would you mind providing impls for std::time::Duration as well? It seems fitting.
As for truncate_nanos, there's a few options I see:
only define the constructor for std::time::Duration and expect the user to convert their chrono::Duration or time::Duration first
define a trait TruncateNanos and implement it for all of [chrono, std::time, time]::Duration, then have the constructor take impl TruncateNanos
keep the separate constructors but add the crate name as a postfix, i.e. truncate_nanos_chrono, truncate_nanos_time, truncate_nanos_std
I don't think this impl is a good idea. As the Interval type is a proper interval and not just a duration, attempting to treat a Duration as such is error-prone at best. In the future, the time crate will provide for a Period type that could be properly mapped to the Interval appropriately. There's no timeframe on this, but I thought it would be best to make my thoughts known before this potentially lands.
@jhpratt what you says makes a lot of sense, and is indeed the core of the issue I faced. An interval/period is not the same duration, and converting between the two must not be an implicit thing.
@abonander do you think creating a PR only providing the PgInterval type could be enough? This would allow to make sqlx work with PostgreSQL Interval type, something is impossible right now. All the other type compatibility are only here to be handy in the end, and could be postponed if deemed error prone.
To be clear, I don't think converting a Period to a Duration directly will ever be allowed in the time crate. I'll almost certainly implement From<Duration> for Period when the latter is created, but only because some number of seconds would be a strict subset of what is storable in a period type.
Obviously you're free to do as you wish, but that's my perspective from the time crate.
I have to agree with @jhpratt here about the Period type and the concerns around conversion. As for this PR, I think we should accept a minimal useful surface for now.
A PgInterval type that is explicitly those three fields that fully supports the protocol and has Type + Encode + Decode.
Type + Encode for time::Duration, chrono::Duration, and std::time::Duration where Encode fails if we cannot express the duration as a PgInterval.
(in a later PR) Type + Encode + Decode for time::Period when that type is available.
Ok, so I've rebased this branch with master:
The first commit is the creation of the PgInterval type that fully supports Type + Encode + Decode
The second commit adds a compatibility layer for std::time::Duration, time::Duration & chrono::Duration. It implements TryFrom<Duration> for PgInterval, and fails if there is an overflow or if there is loss of precision
I've implemented Type + Encode for time::Duration, chrono::Duration, and std::time::Duration
Following @abonander's idea, I did not remove the truncate_nanos_* methods since they might be useful in some cases.
Let me know if this implementation is better.
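For readers following along, a hedged sketch of the shape being discussed: the field layout matches the months/days/sub-day split described in this thread, but the names and error handling are illustrative rather than the merged API.

use std::convert::TryFrom;
use std::time::Duration;

#[derive(Debug, PartialEq, Eq)]
pub struct PgInterval {
    pub months: i32,
    pub days: i32,
    pub microseconds: i64,
}

impl TryFrom<Duration> for PgInterval {
    type Error = String;

    fn try_from(value: Duration) -> Result<Self, Self::Error> {
        // Postgres intervals only store microseconds, so reject any
        // duration that would silently lose sub-microsecond precision.
        if value.as_nanos() % 1_000 != 0 {
            return Err("sub-microsecond precision would be lost".into());
        }

        let microseconds = i64::try_from(value.as_micros())
            .map_err(|_| "duration overflows a Postgres interval".to_string())?;

        Ok(PgInterval { months: 0, days: 0, microseconds })
    }
}

The chrono::Duration and time::Duration conversions would follow the same pattern, additionally rejecting values that cannot be represented without the months/days fields.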
Looking at the overall diff, it looks good to me. I just checked the bits relevant to the time crate, though.
One nit: I believe it would be more efficient to check if the nanos % 1000 == 0 instead of performing more complicated arithmetic.
@dimtion Thank you ❤️ for sticking with this. I merged it and narrowed down the public API further. We can always add methods to it if it becomes a pattern people want ( the truncate approach ) but I think we're probably best off waiting for a cleaner type like time::Period.
|
gharchive/pull-request
| 2020-04-19T19:11:48 |
2025-04-01T04:34:50.664912
|
{
"authors": [
"abonander",
"dimtion",
"jhpratt",
"mehcode"
],
"repo": "launchbadge/sqlx",
"url": "https://github.com/launchbadge/sqlx/pull/271",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2510718100
|
Add support for global variables
We have variables/secrets that are shared across collections and environments. As of now, we need to duplicate these values in many places.
It would be great to have the functionality to define "global" variables that are accessible from any collection/environment - similar to Postman - to avoid duplications of secrets/variables.
@zachdev Currently, environment variables are global and can be accessed across all collections. We're also working on adding support for collection-level variables as outlined in this issue - https://github.com/launchiamenterprise/keyrunner/issues/48 . Could you clarify if your request is about needing global variables beyond what's already available? This will help us understand if it's a duplicate or a different request.
Issue is addressed with https://github.com/launchiamenterprise/keyrunner/issues/48
|
gharchive/issue
| 2024-09-06T15:41:52 |
2025-04-01T04:34:50.677116
|
{
"authors": [
"launchiamenterprise",
"zachdev"
],
"repo": "launchiamenterprise/keyrunner",
"url": "https://github.com/launchiamenterprise/keyrunner/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2461736791
|
Problematic Youtube player script (no n function match) in JMusicBot
Hello,
If I am posting in the wrong place please let me know and close this issue.
I am running JMusicBot on my Discord server to play YouTube audio in voice chat. However, when I try to play something, the bot first does nothing until I skip to a random song in the playlist.
When looking at the Github repo at: https://github.com/jagrosh/MusicBot I see in the Issues that they are using Lavaplayer for the playback so that's why I'm posting an issue here.
The error that I get when first adding a playlist to the queue is this:
[21:07:10] [ERROR] [SignatureCipherManager]: Problematic YouTube player script /s/player/1c78e434/player_ias.vflset/nl_NL/base.js detected (issue detected with script: no n function match). Dumped to /tmp/lavaplayer-yt-player-script6983809891652923440.js
Please also see the script.
lavaplayer-yt-player-script.txt
Again, if this is the wrong place to post this or if you need more details on my environment, let me know; maybe this has to do with JMusicBot's integration of lavaplayer and not lavaplayer itself.
Cheers!
It appears that a fix has already been pushed for this in JMusicBot.
Relevant PR #1655
@devoxin Alright, thank you for your response. I will post the issues in the correct repo the next time :)
|
gharchive/issue
| 2024-08-12T19:27:37 |
2025-04-01T04:34:50.797304
|
{
"authors": [
"devoxin",
"pieterhouwen"
],
"repo": "lavalink-devs/lavaplayer",
"url": "https://github.com/lavalink-devs/lavaplayer/issues/147",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1535460323
|
No Password Option
After installing, there was no option to add a password to the backups in the configuration. I set up OneDrive and there is still no option. It backed up fine but was of course not password protected. I manually added the password option in the .yaml and created a password, but the backup was again not password protected. After adding the password manually, I did a backup again, but it was still not password protected. The option showed in the configuration panel after editing the .yaml, but after deleting the password in the configuration panel and restarting the addon, the option is gone again.
Am I the only one without an option to password protect the backups?
This has been changed to an optional option now. You need to toggle the "Show unused optional configuration options" (bottom of your screenshot). It will be visible to you then.
|
gharchive/issue
| 2023-01-16T20:36:40 |
2025-04-01T04:34:50.799254
|
{
"authors": [
"Hawkeye0914",
"lavinir"
],
"repo": "lavinir/hassio-onedrive-backup",
"url": "https://github.com/lavinir/hassio-onedrive-backup/issues/34",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
588592026
|
testing missing for 15 US states
Compared to yesterday, today's JSON misses testing data for the following states:
Washington,
Louisiana,
Indiana,
Virginia,
Missisipi,
New York,
Texas,
Tennessee,
Ohio,
North Carolina,
Alabama,
Maryland,
Delaware,
Wyoming,
South Dakota,
Yes, we were pulling testing data from COVID Tracking, and we've replaced that with our direct-from-state data sources.
If you want the best and most accurate state-level data for tested, I would suggest hitting COVID Tracking's API: https://covidtracking.com/data/
HOWEVER, if, for any of the states you've listed above, you can find testing data on the sources we're currently scraping, file an issue and we'll add it into the dataset!
@zbraniecki if none of the sources for the states listed above are providing testing data, go ahead and close this issue. Thanks!
What's the reason not to use COVID Tracking as a backup for direct-from-state? This would give you the same data for states that we have direct sources for, and the baseline for those we don't, no?
We still pull it down, but we do not "combine" data between sources, we just filter out duplicates.
I don't follow how that is combining. If sourceA has data for Washington use it; if not, use data from sourceB?
Why not combine (sourceA has data for Washington till 3-11, sourceB from 3-12)?
We pull all the data for WA from their health department, then we include it in the dataset.
We pull the data for WA from COVIDTracking, then we include it in the dataset.
There is a de-dupe process that chooses one over the other. If COVIDTracking has testing, but WA health department doesn't, and if we give WA health department a higher priority, then we don't get testing data in the final dataset.
Ah, that makes sense. I see! Of course, that means that you'd have to merge.
I filed issue #410 to discuss something a bit more flexible.
Thank you for taking time to explain it to me!
Closing this issue as it appears to have been resolved. In addition, Li reports now report "multivalent data", combining data from multiple sources. See https://github.com/covidatlas/li/blob/master/docs/reports.md#combining-data-sources. If this still feels like an issue, please open up a new issue in the Li project. Cheers! jz
|
gharchive/issue
| 2020-03-26T17:24:03 |
2025-04-01T04:34:50.823180
|
{
"authors": [
"jzohrab",
"lazd",
"zbraniecki"
],
"repo": "lazd/coronadatascraper",
"url": "https://github.com/lazd/coronadatascraper/issues/386",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
29762508
|
Unhandled exception when test fails
When I have a test that fails I get the following exception:
/Users/pete/work/projects/console/node_modules/gulp-karma/index.js:56
stream.emit('error', new gutil.PluginError('gulp-karma', 'karma exited
^
TypeError: undefined is not a function
at done (/Users/pete/work/projects/console/node_modules/gulp-karma/index.js:56:30)
at ChildProcess.<anonymous> (/Users/pete/kinvey/projects/console/node_modules/gulp-karma/index.js:82:7)
at ChildProcess.EventEmitter.emit (events.js:98:17)
at Process.ChildProcess._handle.onexit (child_process.js:797:12)
Which breaks the pipe.
gulp-karma is deprecated (#43). Please use Karma directly: https://github.com/karma-runner/gulp-karma
|
gharchive/issue
| 2014-03-19T19:02:01 |
2025-04-01T04:34:50.825307
|
{
"authors": [
"lazd",
"ninjatronic"
],
"repo": "lazd/gulp-karma",
"url": "https://github.com/lazd/gulp-karma/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
394250979
|
Serious security vulnerability
Even though a "Tracery grammar" is in theory just JSON, in practice it's useful for it to be a JavaScript object instead. (Namely: keys don't need to be quoted, and the Tracery docs themselves often have examples of writing e.g. "foo bar baz".split(" ") instead of needing to write out the array).
In order to make that work here, I'm currently naïvely running eval on the user input. This is bad! Because I haven't properly set up Electron sandboxing, or thought through the security threat model, pasting in an arbitrary Tracery grammar could theoretically give a bad actor full access to all of Electron's OS APIs. Not great.
This means that, right now, users ABSOLUTELY shouldn't paste in arbitrary Tracery grammars from sources you don't trust. This works right now for my own personal use case, but fixing that is my #1 priority.
I have two options:
Fuck it, only support JSON
Think through properly sandboxing userland JS. Electron offers design patterns to do this, but I'll also need to think through whether there are vulnerabilities in the IPC API I will still need to expose for e.g. file saving/loading.
Anyone else, feel free to chime in with thoughts.
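For reference, a minimal sketch of what the first option (strict JSON-only loading) might look like; the loadGrammar entry point is hypothetical, not the app's actual API:

function loadGrammar(input: string): Record<string, string[]> {
  let parsed: unknown;
  try {
    parsed = JSON.parse(input); // never eval user input
  } catch {
    throw new Error("Grammar must be valid JSON (quoted keys, no JS expressions)");
  }
  if (typeof parsed !== "object" || parsed === null || Array.isArray(parsed)) {
    throw new Error("Grammar must be a JSON object mapping symbols to rules");
  }
  return parsed as Record<string, string[]>;
}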
Given CBDQ doesn't support JS, I think killing JS support is fine.
|
gharchive/issue
| 2018-12-26T22:21:42 |
2025-04-01T04:34:50.827925
|
{
"authors": [
"lazerwalker"
],
"repo": "lazerwalker/tracery-dot-app",
"url": "https://github.com/lazerwalker/tracery-dot-app/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
417001813
|
Prolonged Usage Issues In Android
Users who tested the app with a session of longer than 4 minutes had a consistently degrading experience.
The likelihood of a video loading slowly increased, as well as the likelihood of a failure to load.
See SincerePuffins video, and any others that were over 5 minutes long.
Probably similar to https://github.com/lbryio/lbry-android/issues/421
And could be related to the SDK not handling multiple downloads well...maxing out the user's connection (https://github.com/lbryio/lbry/issues/1971)
closing for #421
|
gharchive/issue
| 2019-03-04T21:31:27 |
2025-04-01T04:34:50.890170
|
{
"authors": [
"alyssaoc",
"robvsmith",
"tzarebczan"
],
"repo": "lbryio/lbry-android",
"url": "https://github.com/lbryio/lbry-android/issues/453",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
264689827
|
Cannot publish new content in latest RC
The Issue
Publish button is greyed out if trying to publish new content. Edits work (besides the known issues)
Steps to reproduce
Go to Publish, fill out all fields
Publish greyed out
Expected behaviour
Tell us what should happen
Actual behaviour
Tell us what happens instead
System Configuration
LBRY Daemon version:
LBRY App version:
LBRY Installation ID:
Operating system:
Anything Else
Screenshots
Just because the current form validation sucks -> https://github.com/lbryio/lbry-app/issues/564
Nah, it was an issue with multiresolve. There's a hack now, but this is not a problem.
This issue should be to fix the hackaround.
Publish button is greyed out
I mean, this should be handled in the form validation ^
|
gharchive/issue
| 2017-10-11T18:31:11 |
2025-04-01T04:34:50.894546
|
{
"authors": [
"btzr-io",
"tzarebczan"
],
"repo": "lbryio/lbry-app",
"url": "https://github.com/lbryio/lbry-app/issues/668",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
374662372
|
Update CHANGELOG.md
Corrected repetition of fixed heading under the same date
@twishajain Thanks for the PR 🙂
Not sure how it ended up in that state
@twishajain, can we send you some LBC as appreciation?
|
gharchive/pull-request
| 2018-10-27T16:22:12 |
2025-04-01T04:34:50.896413
|
{
"authors": [
"seanyesmunt",
"twishajain",
"tzarebczan"
],
"repo": "lbryio/lbry-desktop",
"url": "https://github.com/lbryio/lbry-desktop/pull/2063",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
482183925
|
Remove unused dependency jsmediatags
I was checking if I could use music-metadata-browser with the audio-viewer, but I noticed it has been removed (90bcde49e76a5c79afcbc994662f0bedc544ee98).
Therefore this dependency is no longer required: jsmediatags
Related PR: #2447
Deleting code is better than adding it :)
Hope we see more from you @Borewit!
Welcome back @Borewit !
I thought I just commented the audio player out, but I guess I removed it. It was still a ways away from working due to a number of bugs.
Would you be interested in working on a new audio player? I could bring back that old code, or set up a super simple one that could be added on to.
Would you be interested in working on a new audio player? I could bring back that old code, or set up a super simple one that could be added on to.
I am not experienced with react, but I can give it a try.
I started with reviving the old code myself, just to get a bit in sync.
I could bring back that old code, or set up a super simple one that could be added on to.
If you can set up the basic wiring, either on top of my branch or a fresh one, that would be great.
To capture the change, I opened issue #2757. If you can add some guidance to #2757, @seanyesmunt, that would be very helpful.
|
gharchive/pull-request
| 2019-08-19T08:49:53 |
2025-04-01T04:34:50.900333
|
{
"authors": [
"Borewit",
"kauffj",
"seanyesmunt"
],
"repo": "lbryio/lbry-desktop",
"url": "https://github.com/lbryio/lbry-desktop/pull/2749",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
403640426
|
[WIP] android build of lbrynet as an IPC service
[ ] get PR to point where lbrynet daemon starts and continues running - 2pt
[ ] expose API in service - 2pt
[ ] sample app or automated test which calls at least one API method - 2pt
[ ] generate an .aar instead of .apk file which can be included in lbry-android app - 2pt
closing. will reopen when we revisit this issue
|
gharchive/pull-request
| 2019-01-28T03:48:31 |
2025-04-01T04:34:50.902392
|
{
"authors": [
"eukreign",
"lyoshenka"
],
"repo": "lbryio/lbry-sdk",
"url": "https://github.com/lbryio/lbry-sdk/pull/1820",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2326701308
|
Show links to turbo frames
Purpose
Per request in Issue https://github.com/lcampanari/turbo-devtools/issues/1 this shows links to turbo frames to the developer along with the id of the frame when you hover over the link.
Approach
Use a similar CSS technique to cover the a[data-turbo-frame] elements.
Switch to darkred as the main color. Easier on my eyes in dark mode.
Screenshot/Video
This example is from https://github.com/tpaulshippy/counting (had to add some frames and links to demonstrate).
https://github.com/lcampanari/turbo-devtools/assets/3137263/1a8b9936-6c25-48e7-a0dd-db3477eacbe7
@lcampanari Are you able to take a look at these soon? I think my work on console logging is just about ready, but I wanted to get these smaller changes in first. If you'd rather not mess with this right now, I could maybe try to publish my fork to the store under a different name.
@lcampanari Are you planning to release this to the store?
@lcampanari Are you planning to release this to the store?
@tpaulshippy It has been submitted, it's under review.
|
gharchive/pull-request
| 2024-05-30T23:30:17 |
2025-04-01T04:34:50.907838
|
{
"authors": [
"lcampanari",
"tpaulshippy"
],
"repo": "lcampanari/turbo-devtools",
"url": "https://github.com/lcampanari/turbo-devtools/pull/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
302358248
|
Double check Q15 results
Cypher spec was changed in 385a4e166af49cb8ba2eb05725960ed10f91215e.
PostgreSQL bi-15 implementation seems unaffected by the change.
Great, thanks for checking.
|
gharchive/issue
| 2018-03-05T16:04:57 |
2025-04-01T04:34:50.927398
|
{
"authors": [
"jmarton",
"szarnyasg"
],
"repo": "ldbc/ldbc_snb_implementations",
"url": "https://github.com/ldbc/ldbc_snb_implementations/issues/29",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1772559254
|
🛑 Thonk Cloud is down
In db94720, Thonk Cloud (https://www.thonk.cloud) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Thonk Cloud is back up in e06b302.
|
gharchive/issue
| 2023-06-24T09:19:03 |
2025-04-01T04:34:50.998524
|
{
"authors": [
"le-server"
],
"repo": "le-server/thonk-upptime",
"url": "https://github.com/le-server/thonk-upptime/issues/260",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2582445882
|
Code Sandbox issue reproduction
Reproducible on the official site
Bug description
Add only a small amount of content to the canvas
Pan the canvas with the right mouse button; the elements in the canvas move discontinuously, with stutter visible to the naked eye
Comparing against ProcessOn (add an element there and pan the canvas with the right mouse button) makes it clear that this is a performance problem caused by too many events: the canvas does not move continuously, so the final visual result is not smooth
Expected behavior
The canvas can be panned smoothly (no obvious stutter), along with relevant quantitative performance-analysis results
Frequency
Every time
Core library version
No response
Browser version
No response
Anything else to add
No response
Please provide the id of the drawing you tested; the official solution drawings show no stutter when tested. How to view the drawing id:
The difference between stutter and smooth motion only shows up in a side-by-side comparison. I have uploaded two GIFs with different results, recorded at 60 Hz, with the canvas zoomed to maximum and an identical mouse and browser environment: one from the official canvas with a single rectangle drawn, and one from a ProcessOn canvas with a single rectangle. Please see whether you can spot the difference.
You can set the drawing's rendering frame rate via meta2d's interval property. The official site limits the render frame rate out of consideration for the performance of most users' machines; the default is 33.33 Hz.
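For example, a hedged sketch (the option name comes from the reply above; passing it via the constructor options, and interpreting it as a period in milliseconds, are assumptions to verify against the meta2d docs):

// Assumption: interval is the frame period in milliseconds,
// so 16.7 is roughly 60 fps versus the slower default mentioned above.
const meta2d = new Meta2d('meta2d', { interval: 16.7 });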
As for the rendering difference you observed versus ProcessOn: ProcessOn and meta2d are built on different principles. ProcessOn uses SVG while meta2d is based on canvas. For a small number of elements SVG may perform better; for a large number of elements canvas performance is definitely far better than SVG, because every SVG movement triggers browser DOM reflow and repaint.
On the other hand, meta2d targets different scenarios: it is mainly used for digital twins, IoT, and other settings that involve large amounts of data, many elements, and frequent user interaction, so before rendering the core library has to load and compute a lot of functionality. Some time may therefore be spent at the JS level, depending on machine performance and browser version (different versions differ in their V8 engine optimizations).
|
gharchive/issue
| 2024-10-12T03:26:17 |
2025-04-01T04:34:51.003264
|
{
"authors": [
"Grnetsky",
"Terrency",
"ananzhusen"
],
"repo": "le5le-com/meta2d.js",
"url": "https://github.com/le5le-com/meta2d.js/issues/258",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1778016022
|
🛑 Meraki API Org Show is down
In 783bfb2, Meraki API Org Show (https://api.meraki.com/api/v1/organizations/43483) was down:
HTTP code: 502
Response time: 581 ms
Resolved: Meraki API Org Show is back up in 9fdea37.
|
gharchive/issue
| 2023-06-28T01:38:53 |
2025-04-01T04:34:51.005881
|
{
"authors": [
"leafle"
],
"repo": "leafle/mki-api-upptime",
"url": "https://github.com/leafle/mki-api-upptime/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
434151625
|
Problem with oracle-fallback transport
SUMMARY
The default role transport isn't working.
ISSUE TYPE
Bug Report
COMPONENT NAME
tasks/fetch/oracle-fallback.yml
tasks/win_fetch/oracle-fallback.yml
ANSIBLE VERSION
2.7.10
CONFIGURATION
OS / ENVIRONMENT
Linux / Windows
STEPS TO REPRODUCE
- name: Install java
  hosts: all
  roles:
    - role: lean_delivery.java
      java_major_version: 8
      java_minor_version: 202
      java_arch: x64
      java_package: jdk
EXPECTED RESULTS
ACTUAL RESULTS
TASK [ansible-role-java : Show artifacts' urls found] **************************
ok: [test-aws-centos7-ansible-role-java-12-rpm] =>
artifact_url_list: []
ok: [test-aws-ubuntu18-ansible-role-java-12-otn-deb] =>
artifact_url_list: []
ok: [test-aws-Debian9-ansible-role-java-12-otn-tb] =>
artifact_url_list: []
TASK [ansible-role-java : No artifact urls found] ******************************
fatal: [test-aws-centos7-ansible-role-java-12-rpm]: FAILED! => changed=false
msg: No artifact urls found, check java_package, java_major_version, java_minor_version, java_arch variables
fatal: [test-aws-ubuntu18-ansible-role-java-12-otn-deb]: FAILED! => changed=false
msg: No artifact urls found, check java_package, java_major_version, java_minor_version, java_arch variables
fatal: [test-aws-Debian9-ansible-role-java-12-otn-tb]: FAILED! => changed=false
msg: No artifact urls found, check java_package, java_major_version, java_minor_version, java_arch variables
PLAY RECAP *********************************************************************
test-aws-Debian9-ansible-role-java-12-otn-tb : ok=10 changed=0 unreachable=0 failed=1
test-aws-centos7-ansible-role-java-12-rpm : ok=10 changed=0 unreachable=0 failed=1
test-aws-ubuntu18-ansible-role-java-12-otn-deb : ok=10 changed=0 unreachable=0 failed=1
otn transport removed
|
gharchive/issue
| 2019-04-17T08:18:12 |
2025-04-01T04:34:51.011292
|
{
"authors": [
"pavelpikta",
"tgadiev"
],
"repo": "lean-delivery/ansible-role-java",
"url": "https://github.com/lean-delivery/ansible-role-java/issues/61",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2332362464
|
feat: The triple centralizer is equal to the centralizer
This PR shows (1) that a set is included in its double centralizer, and (2) that the centralizer of a set is equal to its triple centralizer.
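A sketch of the two statements in Lean (Mathlib assumed in scope; proofs elided with sorry, and the PR's actual lemma names may differ):

example {M : Type*} [Mul M] (S : Set M) :
    S ⊆ Set.centralizer (Set.centralizer S) := sorry

example {M : Type*} [Mul M] (S : Set M) :
    Set.centralizer (Set.centralizer (Set.centralizer S)) = Set.centralizer S := sorry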
maintainer merge
|
gharchive/pull-request
| 2024-06-04T01:55:40 |
2025-04-01T04:34:51.053968
|
{
"authors": [
"dupuisf",
"tb65536"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/13492",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1538020865
|
chore: the style linter shouldn't complain about long #align lines
maybe you could make all the existing #aligns one-line?
Done
bors r+
|
gharchive/pull-request
| 2023-01-18T13:39:21 |
2025-04-01T04:34:51.056244
|
{
"authors": [
"ChrisHughes24",
"digama0",
"jcommelin"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/1643",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1554420215
|
feat: port Data.Finsupp.Defs
bors merge
|
gharchive/pull-request
| 2023-01-24T06:49:52 |
2025-04-01T04:34:51.057666
|
{
"authors": [
"ChrisHughes24",
"jcommelin"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/1807",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1749907949
|
feat: port MeasureTheory.Integral.ExpDecay
bors merge
|
gharchive/pull-request
| 2023-06-09T13:31:21 |
2025-04-01T04:34:51.059048
|
{
"authors": [
"ChrisHughes24",
"Parcly-Taxel"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/4905",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1883554649
|
feat: Kernel of a filter
Define the kernel of a filter as the intersection of its sets and show it forms a Galois coinsertion with the principal filter.
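A sketch of the definition in Lean (Mathlib assumed in scope; the PR's actual name and formulation may differ):

-- The kernel of a filter: the intersection of all its sets.
def Filter.ker' {α : Type*} (f : Filter α) : Set α :=
  ⋂ s ∈ f.sets, s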
bors merge
|
gharchive/pull-request
| 2023-09-06T08:50:23 |
2025-04-01T04:34:51.060629
|
{
"authors": [
"YaelDillies"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/6981",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1900966430
|
feat(Algebra/Module): add delaborator for composition notation
It's not clear to me exactly what the status of this is, but it seems Kyle's suggestions (e.g., pp.notation control) haven't been implemented yet.
I probably won't have time to come back to this for a little while
|
gharchive/pull-request
| 2023-09-18T13:38:26 |
2025-04-01T04:34:51.062459
|
{
"authors": [
"eric-wieser",
"j-loreaux"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/7239",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1976411461
|
feat: characterise independence in compactly-generated lattices
Also some loosely-related short lemmas
Split out from #7963
bors merge
|
gharchive/pull-request
| 2023-11-03T15:11:23 |
2025-04-01T04:34:51.064104
|
{
"authors": [
"ocfnash"
],
"repo": "leanprover-community/mathlib4",
"url": "https://github.com/leanprover-community/mathlib4/pull/8154",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2592871464
|
feat: light-weight basic quotation macro
Description:
I finally got annoyed enough at the lack of Qq that I figured we can just roll our own basic version to get 80% of the benefits. So, I implemented a very basic ql(..) macro. Crucially, we only attempt to deal with closed terms, so we avoid a lot of the complexity that Qq has to deal with for anti-quotations. Similarly, we avoid the performance problems of Qq because we don't attempt to deal with expression matching either.
To implement the macro, we had to take the ToExpr Expr instance from Mathlib, which meant pulling in the ToExpr derive handler (which isn't a bad thing, that derive handler will be generally useful). That is, DeriveToExpr.lean, ToExpr.lean and ToLevel.lean are taken from Mathlib, with only very slight modifications to eliminate any further Mathlib dependencies to do with pretty-printing.
Then, Tactics/QuoteLight.lean is actually new code. The implementation of the ql(..) macro is quite simple, only about 50 lines of meta-code. The rest of the file consists of a few basic tests.
Even with this limited scope, the macro will be useful to simplify a bunch of our meta-code, removing some of the tedium involved and improving readability. For example, we currently have mkApp (.const ``List.nil [0]) (mkConst ``StateField) somewhere in the code, with the macro we can simplify this to ql(@List.nil StateField).
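A hedged sketch of the core of such a macro (not necessarily the PR's exact code): elaborate the quoted term to a closed Expr, then reflect it back via the ToExpr Expr instance mentioned above. elabTerm and instantiateMVars are standard Lean 4 metaprogramming APIs; the exact wiring here is an assumption.

open Lean Elab Term in
elab "ql(" t:term ")" : term => do
  let e ← elabTerm t none
  let e ← instantiateMVars e
  -- toExpr uses the ToExpr Expr instance ported from Mathlib.
  return toExpr e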
Testing:
What tests have been run? Did make all succeed for your changes? Was
conformance testing successful on an Aarch64 machine? Yes
License:
By submitting this pull request, I confirm that my contribution is
made under the terms of the Apache 2.0 license.
LGTM, we paired on this, and I like the idea.
|
gharchive/pull-request
| 2024-10-16T19:36:36 |
2025-04-01T04:34:51.067976
|
{
"authors": [
"alexkeizer",
"bollu"
],
"repo": "leanprover/LNSym",
"url": "https://github.com/leanprover/LNSym/pull/241",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
124845665
|
Add Galois connections
This is a small theory to play with Lean and learn to prove in it. I want to use Galois connections to show that generated_by for topologies / sigma-algebras forms Galois connections with the set of open sets / measurable sets.
@avigad: I tried to follow the library conventions; I hope the theorem and constant names are okay and the theorems are in the right place. I'm also not absolutely sure if my usage of section/variable/parameter/include is correct. Also, I'm not sure about my definition style for structures; should I put the proofs on a new line?
complete_lattice_set is quite strange, but with a working simplifier this should work quite straightforwardly. I didn't simply unfold set and then use complete_lattice_Prop, as then Inf and Sup are no longer definitionally equivalent to sInter and sUnion.
I tried to use blast or simp, but somehow it never worked. Maybe next month :-)
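For context, the central definition can be sketched as follows; this is written in Lean 4 / Mathlib-style syntax for readability (the PR itself targets the Lean 2 standard library, where the spelling differs):

-- A Galois connection between preorders: l and u form an adjoint pair.
def galois_connection {α β : Type*} [Preorder α] [Preorder β]
    (l : α → β) (u : β → α) : Prop :=
  ∀ a b, l a ≤ b ↔ a ≤ u b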
Very nice! Welcome aboard!
Our naming conventions are described here:
https://github.com/leanprover/lean/blob/master/doc/lean/library_style.org
They are designed to make it easy to find theorems using tab completion. Roughly, you describe the goal first, just naming what you see, as far as is needed to describe the theorem. If necessary, you add hypotheses afterwards. So a theorem A → B → C is named C_of_A_of_B.
The names can look funny at times, but the scheme works really well. I rarely need to browse theories to find what I am looking for; tab completion is almost always enough. The most important thing is to make the prefix obvious.
With this scheme, your theorems A_implies_B would be renamed to B_of_A. As horrible as it sounds, is_lub_unique would be eq_of_is_lub_of_is_lub. (Though "unique" comes up a lot; maybe we should adopt the convention of using f_unique for this purpose.)
We use mem for set membership. Would it be o.k. to rename upper_bounds_image to mem_upper_bounds_image? I would name is_lub_Sup_iff to is_lub_iff_Sup_eq (or even is_lub_iff, if you think that's enough).
Do you think users will open galois_connection when they want to use galois connections? Then it might make sense to use protected lemma compose and protected lemma id to avoid overloading compose and id. (This means users will have to write galois_connection.compose and galois_connection.id even if the namespace is open.)
The use of sections, parameters, and include looks good to me!
If you want, we can merge this to the repository, and then I will do a pull request with suggested name changes.
Cool, thanks!
The proof of complete_lattice_set will create an instance of has_lt (set A), which will allow the notation < to be used for subset. The pretty printer will automatically use the last defined notation for a definition:
open nat
theorem foo (a b : ℕ) : lt a b := sorry
check foo -- a < b
infix `is_lt`:50 := lt
check foo -- a is_lt b
check λ a b : ℕ, a < b -- a is_lt b
I'm not sure in which order the notations for lt and subset are defined, so just to check: does this affect pretty-printing subsets?
(Also, I think the parameter A to complete_lattice_set should be made explicit.)
@rlewis1988: lt a b and subset a b are definitionally equal, but they are different expressions. So theorems that are stated with the subset notation will be printed with the subset notation, and theorems that are stated with < will be printed with <.
Oh, right, good point! Never mind then.
Okay, I will be incorporating your suggestions, i.e. renaming theorems, using private, and making the parameter to complete_lattice_set explicit. I assume the workflow now is to close this pull request, change the history, and submit a new one. This is also an exercise for me to learn git's history rewriting :-)
That works for me!
For the history rewriting, you can use "git rebase -i HEAD~3", for example, to revise the last three commits. The message that comes up in the editor is informative -- it will let you merge ("squash") or rename commits. Then when you push to your repo, you have to use "git push -f origin master" -- the "f" is needed to force the overwrite of your previous commit history.
I assume the workflow now is to close this pull request, change the history and submit a new one.
There shouldn't be any need for that, pull requests work quite well with force pushes to their respective branch. It doesn't, however, notify participants of new pushes, so you should leave a comment after that.
In my opinion, the best workflow consists of addressing review comments via additional commits (so they can be reviewed iteratively), then squashing them when the review is completed.
@Kha thanks for the hint, I will try this workflow.
@avigad So, I renamed most theorems. I didn't mention compose, i.e. increasing_u_l: increasing (u ∘ l) or even worse: u_l_u_eq_u : u ∘ l ∘ u = u.
@rlewis1988 I also changed the implicit parameters for the fun type class instances to explicit parameters.
If everything is fine I will incorporate the changes into the first two commits, and then force push.
Looks good to me. Leo manages the pull requests to the library; my vote is :+1:
The naming scheme sounds good. The only point is to make it easy to find the names with tab completion; we can always adjust later if necessary.
|
gharchive/pull-request
| 2016-01-04T21:42:27 |
2025-04-01T04:34:51.079949
|
{
"authors": [
"Kha",
"avigad",
"johoelzl",
"rlewis1988"
],
"repo": "leanprover/lean",
"url": "https://github.com/leanprover/lean/pull/946",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1331065100
|
Hover doc strings
In the current implementation, the elab doc string is taken in preference to the syntax doc string. For example, when users hover over the calc keyword, they get the doc string
Elaborator for the `calc` term mode variant.
instead of the (much more detailed) syntax doc string
Step-wise reasoning over transitive relations.
```
calc
a = b := pab
b = c := pbc
...
y = z := pyz
```
proves `a = z` from the given step-wise proofs. `=` can be replaced with any
relation implementing the typeclass `Trans`. Instead of repeating the right-
hand sides, subsequent left-hand sides can be replaced with `_`.
`calc` has term mode and tactic mode variants. This is the term mode variant.
See [Theorem Proving in Lean 4][tpil4] for more information.
[tpil4]: https://leanprover.github.io/theorem_proving_in_lean4/quantifiers_and_equality.html#calculational-proofs
At #1403, @kha reported that this is the intended behavior since a single syntax can have multiple elaborators, so the elaborator docstring is expected to be the more specific one we should show and jump to first.
@digama0 points out that
From a doc writer's perspective, it is a lot harder to write good elab comments than syntax comments, because when you are documenting a tactic like simp you really want all the syntactic variants to be documented there, not spread over multiple comments where the user never gets the full picture.
Thread #1403 also suggests that there are two kinds of consumers for these doc strings: users and code-readers.
I am marking this issue as an RFC to get feedback from the community.
to be the more specific one we should show and jump to first.
Just to be clear: those are two completely different issues. You can jump to either the elaborator or the syntax definition from the editor depending on which command you use. So this is no argument for which docstring to show in the hover.
My current feeling is that we should show (only) the syntax docstring. In virtually all cases right now, the elaborator/expander is just an implementation detail. There might also be multiple elaborators/expanders, multiplying the documentation effort. This might change if we add some more "extensible" syntax (similar to how we might want to show docstrings for type class instances in addition to or instead of the type class operation docstrings):
syntax "frobnicate!" term : term
-- and in another module:
/-- The MyStuff frobnication frobnicates my stuff. -/
macro_rules
| `(frobnicate! MyStuff + 2) => ...
|
gharchive/issue
| 2022-08-07T16:44:05 |
2025-04-01T04:34:51.084618
|
{
"authors": [
"gebner",
"leodemoura"
],
"repo": "leanprover/lean4",
"url": "https://github.com/leanprover/lean4/issues/1443",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
2541199519
|
structure A; abbrev B := A; structure C extends B fails
Description
The following fails with 'B' is not a structure:
structure A
abbrev B := A
structure C extends B
Expected non-failure.
Context
Lean Zulip discussion
Steps to Reproduce
class OK₁
class abbrev OK₂ := OK₁
class OK₃ extends OK₂
class FAIL₁
abbrev FAIL₂ := FAIL₁
class FAIL₃ extends FAIL₂
structure OKAY₁
class abbrev OKAY₂ := OKAY₁
structure OKAY₃ extends OKAY₂
structure FAILURE₁
abbrev FAILURE₂ := FAILURE₁
structure FAILURE₃ extends FAILURE₂
Hello, and thanks for your report. I'm not sure if this is really a bug - after all, B isn't a structure, but an abbreviation.
One could argue that the structure construction could look through the abbreviation. But it's one more feature to design, implement, document and maintain. Is it really worth it? Is this preventing you from writing the code you need to write in a serious way?
@nomeata I suggested reporting this one. My thought was that we have a number of features that unfold definitions anyway (like dot notation), so we should support this too.
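For what it's worth, a workaround sketch until this is supported is to extend the unfolded name directly:
```
structure A
abbrev B := A

-- extending the underlying structure works today;
-- `structure C extends B` is what currently gets rejected
structure C extends A
```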
|
gharchive/issue
| 2024-09-22T18:30:08 |
2025-04-01T04:34:51.087740
|
{
"authors": [
"euprunin",
"kmill",
"nomeata"
],
"repo": "leanprover/lean4",
"url": "https://github.com/leanprover/lean4/issues/5417",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
137389766
|
GetFixedFrame transforms more frames than it needs to
The current implementation of LeapProvider.GetFixedFrame calls GetTransformedFrame every time it obtains a new frame inside the inner loop. All but one of these frames is going to be discarded, so the transformation is wasteful. It should acquire un-transformed frame objects and only transform the one that is selected.
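A rough sketch of that shape; names like searchHistoryLength and the single-frame transform call are assumptions, not the actual LeapProvider code:
```
// Sketch only: acquire raw frames, pick the closest, transform once.
protected Frame GetFixedFrame(long timestamp) {
  Frame closest = null;
  long bestDelta = long.MaxValue;

  // Search the untransformed frame history.
  for (int i = 0; i < searchHistoryLength; i++) {
    Frame candidate = leap_controller_.Frame(i);
    long delta = System.Math.Abs(candidate.Timestamp - timestamp);
    if (delta < bestDelta) {
      bestDelta = delta;
      closest = candidate;
    }
  }

  // Only the selected frame pays the transformation cost.
  return closest != null ? GetTransformedFrame(closest) : null;
}
```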
This was also solved in PR #16.
By the way, regarding @protodeep 's commit above, it doesn't actually avoid multiple transformations, since the assignment to closest can happen more than once.
The leapMat assignment can also be taken out of the loop, since it's not supposed to change each iteration.
Also, isn't leap_controller_.Frame() == leap_controller_.Frame(0) ? If that's so, then shouldn't the loop begin with searchHistoryIndex = 1 ?
Tracking research is actually interested in building an extrapolation mechanism into the API which puts the interpolation responsibility on the API side. This would allow us to feed a timestamp down in to LeapC and retrieve an interpolated frame corresponding to that timestamp.
We should find a solution here which enables us to swap out the way the interpolation works at a later time, once it's supported by the underlying system.
|
gharchive/issue
| 2016-02-29T21:57:18 |
2025-04-01T04:34:51.090352
|
{
"authors": [
"Amarcolina",
"cherullo",
"codemercenary"
],
"repo": "leapmotion/UnityModules",
"url": "https://github.com/leapmotion/UnityModules/issues/22",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
348651152
|
make install-deps: missing dependencies for testing
The mock package is not installed. Currently it is the only missing dependency I found here, but it is not necessary if you don't want to run tests.
If you would like to keep the testing deps separate, a new target, e.g. install-test-deps, would still be better than nothing. But it's probably not a big deal to keep them together with the normal deps for now.
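A sketch of what that split could look like in the Makefile; the package list and pip invocation are assumptions (recipe lines must be tab-indented):
```
install-deps:
	pip install --user -r requirements.txt

install-test-deps: install-deps
	pip install --user mock pytest
```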
I may be wrong, but I am almost sure that python-mock was only used on the approach we proposed for mocking stored files. After hitting some issues (reported against leapp-to/leapp) we reverted that change and are not mocking files any more, and therefore not using python-mock. This, of course, do not invalidate the discussion about having a different rule to install test deps
@artmello Ah, I thought it was planned to mock basically everything: variables, functions, .... But that was just an assumption, maybe from spending the past few weeks with mock in other projects.
We had a short discussion with @shaded-enmity: maybe it would be better to modify the guideline and/or change the design of the actor class so that it would be possible to change "everything" in tests without needing mock. It will be discussed, I guess, in the next week(s). As I realize now, there are already deps for pytest*, so it's not only about the possible mock.
Fixed by PR #73
|
gharchive/issue
| 2018-08-08T09:43:23 |
2025-04-01T04:34:51.093188
|
{
"authors": [
"artmello",
"pirat89"
],
"repo": "leapp-to/leapp-actors",
"url": "https://github.com/leapp-to/leapp-actors/issues/70",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
115425207
|
Merged develop into master
@cpauya I merged develop into the master branch.
@amodia, I cloned from the installer repo and checked out your branch so that I can test it in a clean source. When I open it in Xcode there's no error because the `pyrun-2.7` directory is already in the `Resources` directory. When I run `setup.sh` it builds `KA-Lite.pkg` in the `output` directory without error.
Tested, thus merging.
|
gharchive/pull-request
| 2015-11-06T03:08:27 |
2025-04-01T04:34:51.138142
|
{
"authors": [
"amodia",
"cpauya",
"djallado"
],
"repo": "learningequality/installers",
"url": "https://github.com/learningequality/installers/pull/260",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1136776087
|
make run_docker fails while trying to download pip
ERROR: This script does not work on Python 3.6. The minimum supported Python version is 3.7. Please use https://bootstrap.pypa.io/pip/3.6/get-pip.py instead.
The command '/bin/sh -c curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python3 get-pip.py' returned a non-zero code: 1
I noticed you have a ticket for upgrading Python. As a short-term fix, changing the URL as suggested by the error message works, but I'm not sure if hardcoding 3.6 is desirable.
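For reference, a sketch of the interim change in the Dockerfile; the surrounding lines are assumptions based on the error message above:
```
RUN curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o get-pip.py \
    && python3 get-pip.py
```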
i think it was moved to https://bootstrap.pypa.io/pip/3.6/get-pip.py
This was fixed in https://github.com/learningequality/kolibri-installer-android/commit/53fd490ad1c35fce3a59c2e436ebad74a4fa5bc7
|
gharchive/issue
| 2022-02-14T05:06:14 |
2025-04-01T04:34:51.141245
|
{
"authors": [
"MBKayro",
"rtibbles",
"samwagg"
],
"repo": "learningequality/kolibri-installer-android",
"url": "https://github.com/learningequality/kolibri-installer-android/issues/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
150089710
|
Improve Documentation with full documentation for all commands and arguments
This would optimally be auto-generated, but I am unaware of the tooling to make that possible.
But there should be a page for each command that repeats the built-in documentation.
Also explain how to run tests
|
gharchive/issue
| 2016-04-21T14:26:21 |
2025-04-01T04:34:51.197286
|
{
"authors": [
"Fil",
"vlandham"
],
"repo": "learntextvis/textkit",
"url": "https://github.com/learntextvis/textkit/issues/35",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1586848028
|
Submit a Cloud Definition
Do NOT copy/paste a definition from somewhere else. Read about the word you want to define and come up with your own definition. Copy/Paste submissions will be closed and not added.
Fill out the JSON with your submission:
{
"word": "Virtual Private Network",
"content": "A Virtual Private Network (VPN) is an encrypted tunnel setup between two or more communicating devices. This encrypted connection helps guarantee the safe transmission of data and safeguards the connection against unauthorized access. It also aids users to conduct intrinsic remote work",
"learn_more_URL":"https://www.cisco.com/c/en/us/products/security/vpn-endpoint-security-clients/what-is-vpn.html",
"tag":"networking",
"abbreviation": "VPN",
"author_name":"Ochu Williams",
"author_link": "https://github.com/WilliamsOchu"
}
Fill out the JSON below with the following.
Word (REQUIRED)
The word you are defining. Check this URL for all words we currently have.
Content (REQUIRED)
The definition. No more than 3 sentences.
learn more URL (REQUIRED)
Website where people can visit to learn more about the word.
tag (REQUIRED and select one)
Tech category the word fits in. Options:
compute
security
service
general
analytics
developer tool
web
networking
database
storage
devops
ai/ml
identity
iot
monitoring
cost management
disaster recovery
abbreviation (OPTIONAL)
If the word is commonly abbreviated, please provide it. For example, command line interface is often abbreviated as CLI.
author name (REQUIRED)
Your name.
author link (REQUIRED)
The URL you want your name to link to.
Another great one, thank you!
|
gharchive/issue
| 2023-02-16T01:33:02 |
2025-04-01T04:34:51.204995
|
{
"authors": [
"WilliamsOchu",
"madebygps"
],
"repo": "learntocloud/cloud-dictionary",
"url": "https://github.com/learntocloud/cloud-dictionary/issues/87",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1414394194
|
Include some more headers when installing
With the plugin system, I've found that I need utils.hpp and assert.hpp, which are in src/, not src/bifrost, so they are not copied over to /usr/local (or --prefix if that is set). So I need to manually pass the path of the Bifrost source code before I can get things to compile.
Can I ask:
what the reasoning is behind some headers being placed in src/bifrost, and others not? (it looks like these have extern C set, and are .h, not .hpp).
Is there a good reason to have a cuda/ directory with only one file, stream.hpp in it? Follow-up: why isn't cuda.hpp in this directory?
I would think any part of the C++ API that we want to expose should have its header in src/bifrost, so it can be copied over to /usr/local/include/bifrost upon installation?
The follow-up question: If I were to just move all headers to src/bifrost and refactor the code, would a PR be accepted?
what the reasoning is behind some headers being placed in src/bifrost, and others not? (it looks like these have extern C set, and are .h, not .hpp).
Because this is how it's always been? 😃
I'm not really sure but, if I had to guess, I would say that the headers in src/bifrost/ are the ones that represent the C++ API and are needed to build a C++ pipeline or wrap the C++ API with Python.
Is there a good reason to have a cuda/ directory with only one file, stream.hpp in it? Follow-up: why isn't cuda.hpp in this directory?
This is something that I don't have a good answer for. It almost looks like src/cuda/ was meant to be something more but that something never materialized.
If I were to just move all headers to src/bifrost and refactor the code, would a PR be accepted?
Yes, I think this is something we need to do if we want to make a plugin system that allows people to write new classes in the same style as what ships in Bifrost. Even my plugin-wrapper branch ran into problems with not having assert.hpp and utils.hpp and I had to get around be giving a path to the Bifrost source. The question I see is not if but how to arrange the header files. It would be nice to have something that conveys what they are for, i.e., a directory structure that makes a distinction between what is part of the C++ API and what is needed to write new class/for plugin support. So maybe something like a:
src/bifrost/ - the .h files
src/bifrost/c++ - the relevant .hpp files
src/bifrost/cuda - the relevant .hu files
Something like that? @league do you have any thoughts on this?
Maybe also a src/bifrost/network directory so we can try to extend this to packet capture plugins?
I guess IMO, src/bifrost should represent the public interface, and in turn, anything needed to build solutions/plugins using Bifrost would go there. It does seem like assert and utils might be in that category… [I'm aware of a perspective that so-called utils modules can be a “code smell!” – but I don't have a problem with it in particular.]
But there is a cost to making something part of the public interface if it doesn't need to be – in terms of writing/maintaining documentation, limiting changes to the API, etc. I wouldn't want to put things in there that are implementation-only and ideally not client-facing. One thing I'm thinking of is our unholy ifdef’d filesystem code for maintaining caches.
I think using subdirectories within the includes seems a little over-engineered, perhaps? Unless it's really clear that these are distinct sub-components with crisp boundaries somehow. Maybe different plugin interfaces qualify? I don't think I'd do it for different types – extensions like .h vs .hpp vs .hu are distinction enough? But I feel the same thing about web devs who do mkdir assets/{css,images,fonts,icons}. Just 2¢.
That's a good point about the stability of the public interface. I guess my goal with the subdirectories was to try to convey a sense of that by separating out what defines the rings/status codes/classes (the .hs) from what is more suitable for plugins (the .hpps and .hus). I just took it a step farther to separate C++ from CUDA. And threw in an additional one for good measure.
What about calling it src/bifrost/plugins or src/bifrost/internal with a suitable README that says that the interfaces defined there are subject to change?
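Whichever layout is chosen, the install rule would also need to pick up the extra headers; a rough Makefile sketch, with variable names as assumptions (recipe lines tab-indented):
```
EXTRA_HEADERS = src/utils.hpp src/assert.hpp

install-headers:
	mkdir -p $(INSTALL_PREFIX)/include/bifrost
	cp src/bifrost/*.h $(EXTRA_HEADERS) $(INSTALL_PREFIX)/include/bifrost/
```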
|
gharchive/issue
| 2022-10-19T06:43:17 |
2025-04-01T04:34:51.226307
|
{
"authors": [
"jaycedowell",
"league",
"telegraphic"
],
"repo": "ledatelescope/bifrost",
"url": "https://github.com/ledatelescope/bifrost/issues/188",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1704168620
|
[WIP] Improve testability of Go code
What
Increase the testability and readability of the Go code in the current repository.
This was done by moving handler- and session-related code into separate packages.
Note
Marked as WIP, as more changes along these lines are coming here.
Ho. Closing the PR as it's old and still "wip". Feel free to open a new one, and maybe several, to make them smaller.
|
gharchive/pull-request
| 2023-05-10T15:19:37 |
2025-04-01T04:34:51.229346
|
{
"authors": [
"4others",
"AskAlexSharov"
],
"repo": "ledgerwatch/diagnostics",
"url": "https://github.com/ledgerwatch/diagnostics/pull/12",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
98511660
|
Add command for tabifying/untabifying all files in a project
See: https://discuss.atom.io/t/tabify-all-files-of-project/19465
Great! Thank you.
Would love this very much as well... as was noted in the linked conversation, replacing N spaces with tabs has many edge cases, which is why I'm using something to tabify in the first place. Now doing it across all files in a project would be amazing. :)
|
gharchive/issue
| 2015-08-01T06:01:07 |
2025-04-01T04:34:51.243148
|
{
"authors": [
"Julix91",
"harikt",
"lee-dohm"
],
"repo": "lee-dohm/tabs-to-spaces",
"url": "https://github.com/lee-dohm/tabs-to-spaces/issues/33",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
2657514834
|
Sonnet 3.5 responses don't seem to update files
Have an issue on both my Mac and Windows installations using the VS Code extension from the marketplace. If I use GPT-4o as the model, it seems to grab the diffs automatically without issue: it edits the files automatically and I can compare the diffs.
When using claude-3-5-sonnet-20241022, this rarely seems to work. Sometimes, when it sends back a lot of changes, maybe one of them will be implemented, but mostly none of them are. I'll see the reply from Claude and it appears to be formatted correctly (it has the same Send, Receive, ======= etc.), but I'll get an error saying something like 'LLM message not in correct format'. This is true no matter which option I select for diff: diff, diff-fenced, udiff or whole.
what provider do you use? openrouter?
I was using Anthropic with my own key, just switched to Openrouter and realize I think I was wrong on gpt-4o
What seems to be happening now is that I no longer get an error about the LLM responding in the wrong format, but on both OpenRouter using Sonnet and OpenAI directly for GPT-4o, it seems to only implement the last edit. So whether it's one file or multiple files, the only change that seems to apply is whichever one is at the end of the message, for whatever reason.
Curious, looking at the logs, and it thinks it has modified one file multiple times and another file once
2024-11-14 10:04:33.433 [debug] aider-chat: write D:\Git-Projects\resume-builder-chrome\config.js
2024-11-14 10:04:33.433 [debug] aider-chat: write D:\Git-Projects\resume-builder-chrome\config.js
2024-11-14 10:04:33.433 [debug] aider-chat: write D:\Git-Projects\resume-builder-chrome\config.js
2024-11-14 10:04:33.433 [debug] aider-chat: write D:\Git-Projects\resume-builder-chrome\options.js
2024-11-14 10:04:33.433 [debug] aider-chat: Applied edit to config.js
2024-11-14 10:04:33.434 [debug] aider-chat: Applied edit to options.js
2024-11-14 10:04:33.453 [info] command write file: D:\Git-Projects\resume-builder-chrome\options.js
2024-11-14 10:04:33.453 [info] command write file: D:\Git-Projects\resume-builder-chrome\config.js
but only one of those edits is shown in VS Code, so I can only accept that one as far as I can tell. Weirdly, this time it was the last config.js edit rather than the last edit in the message, i.e. the third of four edits it made. So the issue might just be in popping the diff up in VS Code for me to accept, or user error on my part?
For every edit, the extension will present a diff editor to the user; maybe this is the issue. Let me try it locally.
You can try the latest version of the extension to see whether it still has the problem.
Aha, yes, updated and this is now working - thank you so much for such a fast fix!
|
gharchive/issue
| 2024-11-14T04:18:45 |
2025-04-01T04:34:51.248922
|
{
"authors": [
"jfcostello",
"lee88688"
],
"repo": "lee88688/aider-composer",
"url": "https://github.com/lee88688/aider-composer/issues/5",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
}
|
1554950962
|
Feat: Add Redux
Add Redux 💯
In this step I:
Added Redux Toolkit.
Structured my application files to use the duck pattern.
Wrote my book's actions and reducer using the duck pattern (see the sketch after this list).
Wrote my category actions and reducer using the duck pattern.
Configured the Redux Store.
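A minimal sketch of such a duck-style slice with Redux Toolkit; the slice and action names are illustrative, not the PR's actual code:
```
import { createSlice } from '@reduxjs/toolkit';

const booksSlice = createSlice({
  name: 'books',
  initialState: [],
  reducers: {
    // Immer lets us "mutate" the draft state safely
    addBook: (state, action) => {
      state.push(action.payload);
    },
    // returning a new array is also allowed
    removeBook: (state, action) =>
      state.filter((book) => book.id !== action.payload),
  },
});

export const { addBook, removeBook } = booksSlice.actions;
export default booksSlice.reducer;
```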
Thank you for the suggestions 💪. I changed my constants to follow the duck pattern.
|
gharchive/pull-request
| 2023-01-24T13:06:11 |
2025-04-01T04:34:51.257133
|
{
"authors": [
"leehaney254"
],
"repo": "leehaney254/bookstore",
"url": "https://github.com/leehaney254/bookstore/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
64794286
|
Added Bohunt School
Fixes #503.
This seems to be a secondary school. This database is for post-secondary (higher education) institutions only. Closing.
|
gharchive/pull-request
| 2015-03-27T16:11:02 |
2025-04-01T04:34:51.290703
|
{
"authors": [
"itsmechlark",
"lyzidiamond"
],
"repo": "leereilly/swot",
"url": "https://github.com/leereilly/swot/pull/678",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1201252739
|
hoge
fuga
test
|
gharchive/issue
| 2022-04-12T07:07:53 |
2025-04-01T04:34:51.291373
|
{
"authors": [
"HafidZiti",
"piro0919"
],
"repo": "leerob/on-demand-isr",
"url": "https://github.com/leerob/on-demand-isr/issues/879",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2501600037
|
Implicit conversion vs implicit constructor
Channel
This is a "C++Weekly" episode request.
Topics
Suddenly I got puzzled about what happens if a class B with an implicit conversion to A is passed as an argument to a parameter of type A, where A has an implicit constructor from const B&. I was surprised! https://godbolt.org/z/EabPGnY4z (a minimal sketch follows the list below)
gcc compiles and calls the constructor of A(const B&) and ignores the conversion o_0
msvc gives a strange error that an "operator could not be called" 🫤
llvm detects and reports a conversion ambiguity with the two candidates listed💪💪
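For reference, a minimal sketch of the situation; the godbolt snippet itself may differ in details:
```
struct B;

struct A {
    A() = default;
    A(const B&) {}                      // implicit converting constructor
};

struct B {
    operator A() const { return A{}; } // implicit conversion to A
};

void take(A) {}

int main() {
    take(B{}); // constructor A(const B&) vs B::operator A() -- which wins?
}
```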
Length
This should be bite-sized (5-10 minutes) episode
Rules about what conversion chains are considered (or not) in C++ are indeed not always crystal clear. An educational presentation would be appreciated... (along with best-practices).
|
gharchive/issue
| 2024-09-02T21:28:21 |
2025-04-01T04:34:51.303648
|
{
"authors": [
"Dharmesh946",
"leonid-s-usov"
],
"repo": "lefticus/cpp_weekly",
"url": "https://github.com/lefticus/cpp_weekly/issues/431",
"license": "Unlicense",
"license_type": "permissive",
"license_source": "github-api"
}
|
420224748
|
Add corrupt files to bad_expid list?
There are a few corrupt files in our various imaging directories. Depending on how severe the corruption is, these files either pass or do not pass the file list construction stage. We need to either add these files to the bad_expid file and then check the bad_expid file at file list construction time, or find some other mechanism to remove these files from the file list.
We might like to remove them from the directory structure, but we don't want to redownload them in the future in that case.
We need to look more into whether the files are corrupt at source or just our copies are corrupt.
Currently the bad_expid file is not used to remove files from the file list. If we changed that, we would not have a chance to recover potentially okay (just shallow) exposures from the bad_expid list; it would be as if these files never existed.
Move to legacypipe product https://github.com/legacysurvey/legacypipe/issues/336
|
gharchive/issue
| 2019-03-12T21:59:02 |
2025-04-01T04:34:51.307285
|
{
"authors": [
"djschlegel",
"schlafly"
],
"repo": "legacysurvey/legacyzpts",
"url": "https://github.com/legacysurvey/legacyzpts/issues/48",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
}
|
281570755
|
Different Broker is how to get the same session_id?
I read the code several times, but did not understand how different brokers get the same session_id. I would very much appreciate it if someone could explain the logic and flow of this program.
Did the OP figure this out? I found that what the commenter above said isn't right: watching the network tab, different root domains log in directly without any repeated authorization.
In Server.php:
if ($this->generateSessionId($brokerId, $token) != $sid) {
return $this->fail("Checksum failed: Client IP address may have changed", 403);
}
Got it: the session is uniquely identified by the IP.
Got it... on the second login, the user (browser) carries the broker's token to the already-logged-in auth server, which binds it to the logged-in session.
Folks on GitHub are really funny: I ask in Chinese and you reply in English, then I ask in English and you answer in Chinese.
@carlclone It's exactly as you said; I just always felt this approach isn't very reliable, since it puts a lot of pressure on the SSO node.
Every page request makes the backend hit the SSO once, which puts too much load on the SSO. But if an SSO login logs the user straight into the subsystem and later visits no longer check the login state, the login states get out of sync.
What I have in mind is maintaining a separate cookie that stores the login state and UID: only query the SSO when there is a login state; if the cookie doesn't exist, don't query, and the subsystem stays logged out even if the SSO is logged in. If the UID doesn't match the subsystem's currently logged-in UID, re-sync the logged-in user from the SSO.
English! please. Please respect us!
|
gharchive/issue
| 2017-12-12T23:14:04 |
2025-04-01T04:34:51.310479
|
{
"authors": [
"carlclone",
"tangzhangming",
"tantana5"
],
"repo": "legalthings/sso",
"url": "https://github.com/legalthings/sso/issues/90",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2534705000
|
Bit Array Implementation??
I'm thinking there would probably be a way for this to be implemented as an array of bits. I would need to look into whether this is possible and how, as well as what the benefits/drawbacks of this approach would be.
This looks promising: https://learn.microsoft.com/en-us/dotnet/api/system.collections.bitarray?view=net-8.0&redirectedfrom=MSDN
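A minimal sketch of how set membership could map onto BitArray; how this would fit LeggySetLib's interfaces is an open question:
```
using System.Collections;

var bits = new BitArray(64); // a set over integers 0..63, all absent

bits[5] = true;              // add 5
bool has5 = bits[5];         // membership test
bits[5] = false;             // remove 5
```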
|
gharchive/issue
| 2024-09-18T20:51:08 |
2025-04-01T04:34:51.324618
|
{
"authors": [
"leigh966"
],
"repo": "leigh966/LeggySetLib",
"url": "https://github.com/leigh966/LeggySetLib/issues/2",
"license": "Zlib",
"license_type": "permissive",
"license_source": "github-api"
}
|
111528007
|
Adds option to prevent expanding on addMessage()
When you uncheck the Expand on request box, the view in Atom stays collapsed as you navigate the pages that the package is serving.
Thanks! Will be good to have this option.
To leave space for future options could you change the expandOnRequest setting to be a dropdown instead of a boolean? Something like:
expandOnRequest:
  title: 'Expand on request'
  description: 'Expand the server console window when new requests are received by the server'
  type: 'string'
  enum: ['none', 'all']
  default: 'all'
I'm imagining in the future this could be extended to only expand for given error levels. Just all or none is good for now though.
Perfect, thanks a bunch :+1:
I'll make one small change before releasing, just so that the setting can be modified while the server is running.
|
gharchive/pull-request
| 2015-10-15T01:43:08 |
2025-04-01T04:34:51.331151
|
{
"authors": [
"connormlewis",
"leijou"
],
"repo": "leijou/php-server",
"url": "https://github.com/leijou/php-server/pull/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
1120574064
|
Require cms/admin dependencies instead of recipe-cms
For projects that don’t use recipe-cms, installing this module will pull in a bunch of dependencies that we don’t necessarily want 😄
good point! actually i think the cms part is not even required
released :-) https://github.com/lekoala/silverstripe-cms-actions/releases/tag/1.2.8
|
gharchive/issue
| 2022-02-01T11:55:15 |
2025-04-01T04:34:51.336483
|
{
"authors": [
"kinglozzer",
"lekoala"
],
"repo": "lekoala/silverstripe-cms-actions",
"url": "https://github.com/lekoala/silverstripe-cms-actions/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
151950065
|
Add mocked StatusBar component
Why
The React Native team recently moved towards the StatusBar component (and imperative API) vs the StatusBarIOS API.
What changed?
Added a StatusBar component
Mocked some static methods on StatusBar
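A sketch of what the mocked component might look like; the exact set of mocked statics is an assumption:
```
// renders nothing under test
const StatusBar = () => null;

// static methods stubbed as no-ops
StatusBar.setHidden = () => {};
StatusBar.setBarStyle = () => {};
StatusBar.setNetworkActivityIndicatorVisible = () => {};

export default StatusBar;
```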
Looks good. Thanks!
|
gharchive/pull-request
| 2016-04-29T19:25:46 |
2025-04-01T04:34:51.338133
|
{
"authors": [
"greg5green",
"lelandrichardson"
],
"repo": "lelandrichardson/react-native-mock",
"url": "https://github.com/lelandrichardson/react-native-mock/pull/37",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
}
|
755185215
|
Update npm after installing the lunr package.
Add an npm install step before running the swizzle command.
closes #31
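In practice the order looks like this sketch; the exact swizzle invocation may differ per setup:
```
npm install
npm run swizzle docusaurus-lunr-search SearchBar -- --danger
```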
Thanks for the PR
@lelouch77 Thanks for merging my PR.
|
gharchive/pull-request
| 2020-12-02T11:20:52 |
2025-04-01T04:34:51.389768
|
{
"authors": [
"divyabhushan",
"lelouch77"
],
"repo": "lelouch77/docusaurus-lunr-search",
"url": "https://github.com/lelouch77/docusaurus-lunr-search/pull/32",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1982368555
|
Update Post “termite-inspection-who-to-call-on-the-gold-coast”
Automatically generated by Netlify CMS
👷 Deploy Preview for chic-shortbread-66a66b processing.
| Name | Link |
|------|------|
| 🔨 Latest commit | 643694d7bfb34d4bdd0fad4f136ba0f0c2989dad |
| 🔍 Latest deploy log | https://app.netlify.com/sites/chic-shortbread-66a66b/deploys/654abdb18fa80300083c0317 |
|
gharchive/pull-request
| 2023-11-07T22:43:59 |
2025-04-01T04:34:51.396545
|
{
"authors": [
"lemotdesign"
],
"repo": "lemotdesign/one-click-hugo-cms",
"url": "https://github.com/lemotdesign/one-click-hugo-cms/pull/98",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
2248502879
|
feat: Document not authenticated users management
About this PR
Closes #146
Technical highlight/advice
How to test my changes
Get the project code and run the following commands:
npm i
npm start
Then open the following links:
http://localhost:8080/guides/public-routes.html
http://localhost:8080/features/routes.html
Checklist
[x] I didn't over-scope my PR
[x] My PR title matches the commit convention
[x] I did not include breaking changes
[x] I made my own code-review before requesting one
I included unit tests that cover my changes
[ ] 👍 yes
[x] 🙅 no, because they aren't needed
[ ] 🙋 no, because I need help
I added/updated the documentation about my changes
[ ] 📜 README.md
[ ] 📕 docs/*.md
[x] 📓 docs.lenra.io
[ ] 🙅 no documentation needed
:tada: This PR is included in version 1.1.0 :tada:
The release is available on:
GitHub release
v1.1.0
Your semantic-release bot :package::rocket:
|
gharchive/pull-request
| 2024-04-17T14:55:14 |
2025-04-01T04:34:51.416609
|
{
"authors": [
"shiipou",
"taorepoara"
],
"repo": "lenra-io/docs",
"url": "https://github.com/lenra-io/docs/pull/147",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1823970183
|
🛑 forums.k8slens.dev is down
In 2ef0e0b, forums.k8slens.dev (https://forums.k8slens.dev) was down:
HTTP code: 503
Response time: 434 ms
Resolved: forums.k8slens.dev is back up in bf48b0c.
|
gharchive/issue
| 2023-07-27T09:21:35 |
2025-04-01T04:34:51.419033
|
{
"authors": [
"lens-cloud"
],
"repo": "lensapp/k8slens-status",
"url": "https://github.com/lensapp/k8slens-status/issues/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
809224300
|
No hover effects for header elements
Hover effects aren't properly applied to any of the header elements (the top line with the cluster name and the cluster menu). It seems that something is placed on top of them and covers most of their height.
https://user-images.githubusercontent.com/9607060/108056014-cd186a00-7061-11eb-8ac0-5d01e206e636.mp4
@aleksfront not sure if it's fixable since it's blocked most probably by this element:
|
gharchive/issue
| 2021-02-16T11:19:36 |
2025-04-01T04:34:51.420693
|
{
"authors": [
"aleksfront",
"ixrock"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/issues/2159",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
787120009
|
Add age column to cluster overview
Adds an "Age" column to cluster overview with sorting.
fixes #1939
On Fri, 15 Jan 2021 at 23:20, Sebastian Malton <notifications@github.com> wrote:
@Nokel81 approved this pull request.
Looks good. Though you need to "signoff" on your commit.
https://git-scm.com/docs/git-commit#Documentation/git-commit.txt---signoff
Leave my account alone already, I didn't do it on purpose; I did whatever the system asked.
I accept your offer.
Updated the last commit so it's signed-off properly
|
gharchive/pull-request
| 2021-01-15T18:52:45 |
2025-04-01T04:34:51.435426
|
{
"authors": [
"Nachasic",
"maraslibay"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/pull/1970",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
836271460
|
Improve loading animation when switching release details
Signed-off-by: Sebastian Malton sebastian@malton.name
~blocked until #1862 is merged.~
Not sure if this is related, but I see the infinite spinner when opening the details page of a release:
@jim-docker I cannot reproduce that, is it always the case?
yes, but I think I've encountered this on master too. I will try to make a fresh "install" and see if I can reproduce
|
gharchive/pull-request
| 2021-03-19T18:48:48 |
2025-04-01T04:34:51.438166
|
{
"authors": [
"Nokel81",
"jim-docker"
],
"repo": "lensapp/lens",
"url": "https://github.com/lensapp/lens/pull/2367",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1362213516
|
[Bug] unrecognized option --yes for Arch Linux
Steps To Reproduce
run leon create birth
The current behavior
On stdout:
```
✖ Installing packages
Error: Failed to install needed packages
For further information, look at the log file located at /home/me/.config/@leon-ai/cli/log-errors.txt
```
When looking at the error log file:
```
Error: Command failed with exit code 1: sudo --non-interactive /ssd/home/me/.npm-global/lib/node_modules/@leon-ai/cli/scripts/dependencies/install_pacman_packages.sh
pacman: unrecognized option '--yes'
```
The expected behavior
Correct installation.
Hello! @kamaradclimber
Thanks for your report!
Indeed it is a bug, actually --yes doesn't exist for the pacman CLI.
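For reference, pacman's non-interactive flag is --noconfirm, so the install script presumably needs something like this sketch (the script's actual contents are an assumption):
```
sudo pacman -S --noconfirm --needed "$@"
```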
In the CLI part, we already support many famous package managers for different GNU/Linux distributions, but unfortunately, on the Core side, it is not the case.
Being able to run Leon on multiple CPU architectures and distributions is definitely something we would like to support, and we would like to solve it before the v1.0.0 stable release (currently Leon is in beta).
We will try to solve this problem later, when the core of Leon is more mature, just before the v1 stable release.
You can read more here: https://github.com/leon-ai/leon/pull/302#pullrequestreview-914081908.
Meanwhile, you can use Leon without problems on Ubuntu, macOS, and Windows; you can also try Leon with a single click thanks to GitPod: https://github.com/leon-ai/leon#️-try-with-a-single-click.
|
gharchive/issue
| 2022-09-05T15:51:32 |
2025-04-01T04:34:51.506088
|
{
"authors": [
"Divlo",
"kamaradclimber"
],
"repo": "leon-ai/leon-cli",
"url": "https://github.com/leon-ai/leon-cli/issues/193",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|
1236839759
|
Double-clicking somewhere on the page with no click interaction opens the link bound to the "C" key in the Collection component
The template I'm using is the multi-component preset.
I don't think any double-click event logic was ever added. Could it be that you just pressed the C key? Or turn off the shortcut key in the component settings and see whether it still reproduces.
I didn't press the C key. Turning off the Collection's "shortcut key" indeed stops it from reproducing, but then I can't use the shortcut to open pages anymore.
It's fine in normal use; after all, nobody double-clicks there for no reason.
Thanks for sharing!
|
gharchive/issue
| 2022-05-16T08:59:55 |
2025-04-01T04:34:51.508381
|
{
"authors": [
"leon-kfd",
"nebel-dev"
],
"repo": "leon-kfd/Dashboard",
"url": "https://github.com/leon-kfd/Dashboard/issues/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
}
|