id (string, length 4 to 10) | text (string, length 4 to 2.14M) | source (string, 2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---|
2631161075 | 🛑 Official(泠泫凝的异次元空间-主页) is down
In 8489fa4, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Official(泠泫凝的异次元空间-主页) is back up in c431d72 after 6 minutes.
| gharchive/issue | 2024-11-03T11:50:24 | 2025-04-01T06:37:10.606513 | {
"authors": [
"LxnChan"
],
"repo": "LxnChan/status",
"url": "https://github.com/LxnChan/status/issues/10199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2762488625 | 🛑 Official(泠泫凝的异次元空间-主页) is down
In 693d190, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Official(泠泫凝的异次元空间-主页) is back up in e0ed2dc after 6 minutes.
| gharchive/issue | 2024-12-29T23:32:33 | 2025-04-01T06:37:10.608995 | {
"authors": [
"LxnChan"
],
"repo": "LxnChan/status",
"url": "https://github.com/LxnChan/status/issues/13182",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2474938740 | 🛑 EnderChest(末影箱-云盘) is down
In 673fc7c, EnderChest(末影箱-云盘) (https://enderchest.anavi.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: EnderChest(末影箱-云盘) is back up in 925da4d after 6 minutes.
| gharchive/issue | 2024-08-20T07:40:10 | 2025-04-01T06:37:10.611446 | {
"authors": [
"LxnChan"
],
"repo": "LxnChan/status",
"url": "https://github.com/LxnChan/status/issues/5967",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2484590883 | 🛑 Official(泠泫凝的异次元空间-主页) is down
In be5648e, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Official(泠泫凝的异次元空间-主页) is back up in d295546 after 5 minutes.
| gharchive/issue | 2024-08-24T13:22:38 | 2025-04-01T06:37:10.613872 | {
"authors": [
"LxnChan"
],
"repo": "LxnChan/status",
"url": "https://github.com/LxnChan/status/issues/6233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2497628649 | 🛑 Official(泠泫凝的异次元空间-主页) is down
In a1da662, Official(泠泫凝的异次元空间-主页) (https://lxnchan.cn) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Official(泠泫凝的异次元空间-主页) is back up in 5ccf6ad after 33 minutes.
| gharchive/issue | 2024-08-30T15:58:45 | 2025-04-01T06:37:10.616341 | {
"authors": [
"LxnChan"
],
"repo": "LxnChan/status",
"url": "https://github.com/LxnChan/status/issues/6576",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1728675571 | Create C++.yml
Improvement to run C++ code
Give your opinion; I accept any.
| gharchive/pull-request | 2023-05-27T11:56:42 | 2025-04-01T06:37:10.619214 | {
"authors": [
"Ly232e"
],
"repo": "Lychan23/Lychan23special",
"url": "https://github.com/Lychan23/Lychan23special/pull/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1050554180 | ToDo
Added components:
TodoContainer
TodoList
TodoItem
InputTodo
Header
Navbar
Main functions:
updateCheckbox(id) - passed to TodoItem component
updateTitle(id) - passed to TodoItem component
deleteItem(id) - passed to TodoItem component
addTodoItem(title) - passed to InputTodo component
Additional
Added useState to hold inputField state and function onChange, added handleSubmit function
Added styling: App.css stylesheet, Header inline styling, TodoItem css module
Imported useEffect to save to and read from localStorage
Added react icons
Added BrowserRouter, Routes and Route
Edited Navbar: use State and add function to toggle navbar open/close
Hey, I just found there are some linter errors that prevent your app from running in the browser. Could you fix them before I can give the code review?
Done, you can go ahead with the review now. Thanks
| gharchive/pull-request | 2021-11-11T04:14:17 | 2025-04-01T06:37:10.684854 | {
"authors": [
"M0rrighan"
],
"repo": "M0rrighan/to-do-list-react",
"url": "https://github.com/M0rrighan/to-do-list-react/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1838731552 | 🛑 Home is down
In 7cad18d, Home (https://home.rpmdp.com) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Home is back up in c7cd76e.
| gharchive/issue | 2023-08-07T05:52:15 | 2025-04-01T06:37:10.687348 | {
"authors": [
"M1XZG"
],
"repo": "M1XZG/uptime",
"url": "https://github.com/M1XZG/uptime/issues/88",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2545799740 | why not
adding a gettime and settime to flxvideosprite because I think it can be put to use, so why not (plus I found there is time in Video.hx, so why not).
Why not use the variables directly? video.bitmap.time
It works the same way
Figured I'd use bitmap since it's already declared as a variable using Video, but you've got a point.
The idea of seeking and getting the time just popped into my head.
So I thought why not add it; it can probably be used.
It's useless to have a function that does the same thing as the variable
I mean, you're right, but I thought it would be safer to use function logic with some small error handling.
It's up to you to make a decision. I'm just trying to add encapsulation and consistency to both of the functions.
The error handling is really unnecessary though, I'll close this...
| gharchive/pull-request | 2024-09-24T15:59:28 | 2025-04-01T06:37:10.696850 | {
"authors": [
"HeroEyad",
"MAJigsaw77"
],
"repo": "MAJigsaw77/hxvlc",
"url": "https://github.com/MAJigsaw77/hxvlc/pull/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
792574123 | Incorrect code created for global verbs
Describe the bug
When creating a simple random maths check on a global verb, the wrong code is created and it errors saying there is something
wrong with your code, when the end user didn't do anything wrong.
This was on a food item when eaten
1.16.4 version as you can see grabs the verbs and sets everything up the 1.12.2 version does not.
To Reproduce
Create this
Screenshots
Image above creates the code below
Code
public class ProcedureBeerFoodEaten extends ElementsMorefwoodsMod.ModElement {
public ProcedureBeerFoodEaten (ElementsMorefwoodsMod instance) {
super(instance, 10);
}
public static void executeProcedure(Map<String, Object> dependencies){
if(dependencies.get("entity")==null){
System.err.println("Failed to load dependency entity for procedure BeerEaten!");
return;
}
Entity entity =(Entity)dependencies.get("entity" );
if ((Math.random()<0.5)) {if ((<3)) {}}
}
}
Details
OS: Windows
MCreator version 2020.4/2020.5
Generator type: forge-1.12.2
Demo workspace
Unable to at this stage
Additional context
The code in 1.16.4
public class BeerFoodEatenProcedure extends ElementsMorefwoodsMod.ModElement {
public BeerFoodEatenProcedure (ElementsMorefwoodsModElements instance) {
super(instance, 83);
}
public static void executeProcedure(Map<String, Object> dependencies) {
if (dependencies.get("entity") == null) {
if (!dependencies.containsKey("entity"))
ElementsMorefwoodsMod.LOGGER.warn("Failed to load dependency entity for procedureBeerEaten!");
return;
}
Entity entity = (Entity) dependencies.get("entity");
if ((Math.random() < 0.5)) {
if ((((entity.getCapability(ElementsMorefwoodsModVariables.PLAYER_VARIABLES_CAPABILITY, null)
.orElse(new ElementsMorefwoodsModVariables.PlayerVariables())).wetcount) < 3)) {
{
double _setval = (double) (((entity.getCapability(ElementsMorefwoodsModVariables.PLAYER_VARIABLES_CAPABILITY, null)
.orElse(new ElementsMorefwoodsModVariables.PlayerVariables())).wetcount) + 1);
entity.getCapability(ElementsMorefwoodsModVariables.PLAYER_VARIABLES_CAPABILITY, null).ifPresent(capability -> {
capability.wetcount = _setval;
capability.syncPlayerVariables(entity);
});
}
}
}
}
}
Player variables are not supported on 1.12.2 generator.
What should I change that to then?
| gharchive/issue | 2021-01-23T14:47:43 | 2025-04-01T06:37:10.720872 | {
"authors": [
"KlemenDEV",
"LexShadow"
],
"repo": "MCreator/Generator-Forge-1.12.2",
"url": "https://github.com/MCreator/Generator-Forge-1.12.2/issues/27",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1535704841 | DaySelectedBackgroundColor - Circle not Rendering
Describe the bug
Hi there!
Thanks for all the work you put into XCalendar!
I think I might be experiencing a bug in setting the color value for the dot that indicates the currently selected date.
Looking at the Sample Code it looks to me like it should be set on the cx:DayView - where you assign it to a Color value.
I've done that like so:
<xc:DayView
...
SelectedBackgroundColor="{StaticResource DaySelectedBackgroundColor}"
...
>
</xc:DayView>
And then set the resource in App.xaml like
<Application.Resources>
...
<Color x:Key="DaySelectedBackgroundColor">Green</Color>
...
</Application.Resources>
However, the selected indicator still does not populate. Screenshot:
Are there other properties that I need to set to enable this feature?
Expected behavior
I would think there should be a green dot when selecting on a day.
Steps to reproduce OR link to code
I've uploaded my current code to this repo / branch
Xamarin Forms or .NET MAUI (If related to UI)
.NET MAUI
Additional context (Optional)
Device Info (Optional)
Device Model: samsung galaxy s20 FE
Android Version: 12, One UI version 4.1
IOS Version:
Windows Version:
When you say "Indicators", do you mean like one of the two red dots in your image? If so, there is no built-in functionality for indicators. You will need to implement the logic yourself.
If you mean an actual day, the DayView has some properties you need to set in order to get the state-based properties working in different states. The Displaying Dates page lists them in the 'CalendarView' section. Usually you would just bind them to the properties of the same name from the binding context.
Also, just to confirm, is it the 'master' branch you wanted to link to for the reproduction of the issue?
Hey @ME-MarvinE - thanks for looking into this!
I'm pretty sure I linked to the exact branch / file in the ticket - but here is the exact full link: https://github.com/johnyenter-briars/cal/blob/dev/bring_in_xcalendar/CAL/Views/CalendarPage.xaml
The UI i'm referring to is the little dot that represents which dates are currently selected:
@johnyenter-briars I was opening the link through the Mobile app which kept leading me to the 'master' branch. The links work fine on PC.
Eventually I tried using a Calendar with CalendarDay instead of EventDay and it worked.
The EventDay and Event classes in the sample app (along with other classes) use PropertyChanged.Fody to inject usage of the INotifyPropertyChanged interface into the shorthand setter ({ get; set; }) by adding the AddINotifyPropertyChangedInterface attribute to the 'BaseObservable' classes.
Neither this package nor the attribute is present in the project, and there was no replacement for its functionality. You will need to either implement the `INotifyPropertyChanged` interface manually, or install the `PropertyChanged.Fody` NuGet package and re-add the `AddINotifyPropertyChangedInterface` attribute wherever it was removed.
I implemented the interface manually into the EventDay class and everything seemed to work correctly.
@johnyenter-briars Is your issue resolved?
@ME-MarvinE Sorry, took me a while to find the time to implement your fix. It worked as expected.
Thank you so much! Closing the issue now.
| gharchive/issue | 2023-01-17T02:55:51 | 2025-04-01T06:37:11.287192 | {
"authors": [
"ME-MarvinE",
"johnyenter-briars"
],
"repo": "ME-MarvinE/XCalendar",
"url": "https://github.com/ME-MarvinE/XCalendar/issues/99",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2699247932 | Iceberg/Hidden Depths progress data not accurate
Should say 1800 ft and maybe also 0/200 ft in Hidden Depths. It also says 200 ft until Hidden Depths but I am already there; maybe "until Deep Mouse" instead.
Now at 1855 ft after 3 hunts and 55/200 ft in Hidden Depths; 145 feet until Deep Mouse.
On the left side it says 200 feet but the popup says Hidden Depths is 1800-2000ft. It would be good to note on the left side that Deep Mouse is out or engaging.
| gharchive/issue | 2024-11-27T17:11:44 | 2025-04-01T06:37:11.297907 | {
"authors": [
"KennC5"
],
"repo": "MHCommunity/mousehunt-improved",
"url": "https://github.com/MHCommunity/mousehunt-improved/issues/458",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1304416517 | Add: Footer for site mapping
It is important as before there was no footer on the website. This PR will be adding a footer and it will be acting as a site map along with the navbar.
Fix Issue: #1
Please, go through these steps before you submit a PR.
[x] My Pod Leader knows I'm working on this Pull Request
[x] I've explained what the Pull Request is adding.
[x] I've explained why this is important.
Preview for the footer
Another alternative
@karnikkanojia Please resolve this conflict so I can merge the PR!
@varsharathore16 you can merge the PR now
| gharchive/pull-request | 2022-07-14T08:05:27 | 2025-04-01T06:37:11.610640 | {
"authors": [
"karnikkanojia",
"varsharathore16"
],
"repo": "MLH-Fellowship/prep-portfolio-22.JUL.PREP.1",
"url": "https://github.com/MLH-Fellowship/prep-portfolio-22.JUL.PREP.1/pull/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
652398311 | Add Discord Status
Put minutes until next bot in its status.
Fixed https://github.com/MLH-Fellowship/session-bot/pull/9
| gharchive/issue | 2020-07-07T15:03:25 | 2025-04-01T06:37:11.611721 | {
"authors": [
"wrussell1999"
],
"repo": "MLH-Fellowship/session-bot",
"url": "https://github.com/MLH-Fellowship/session-bot/issues/8",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
128367504 | Wiki
Is it not outdated?
https://github.com/MPOS/php-mpos/wiki/Multiple-Pools-on-Single-Host
Probably needs an update. Refer to https://github.com/MPOS/php-mpos/wiki/Single-Sign-On for a SSO based setup.
| gharchive/issue | 2016-01-24T02:21:04 | 2025-04-01T06:37:11.640363 | {
"authors": [
"TheSerapher",
"erm3nda"
],
"repo": "MPOS/php-mpos",
"url": "https://github.com/MPOS/php-mpos/issues/2480",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1840968316 | 🛑 annotator web app is down
In 01aa303, annotator web app (https://annotator.epigraphdb.org) was down:
HTTP code: 0
Response time: 0 ms
Resolved: annotator web app is back up in b92d503.
| gharchive/issue | 2023-08-08T09:48:16 | 2025-04-01T06:37:11.642745 | {
"authors": [
"YiLiu6240"
],
"repo": "MRCIEU/upptime",
"url": "https://github.com/MRCIEU/upptime/issues/65",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
427435301 | Lab 2 - Indentation needs to be fixed
Indentation needs to be fixed. Nothing crazy, but the steps 1, 2, 3, 4... aren't the things that are at the extreme left; code, text, and images are...
I fixed some indentation issues on lab 2 as part of another ticket.
The results can be seen in this PR: https://github.com/MSDEVMTL/GlobalAzureBootcamp-2019/pull/58
If the changes are good, we could close this issue.
| gharchive/issue | 2019-03-31T19:39:50 | 2025-04-01T06:37:11.653621 | {
"authors": [
"FBoucher",
"arribajuan"
],
"repo": "MSDEVMTL/GlobalAzureBootcamp-2019",
"url": "https://github.com/MSDEVMTL/GlobalAzureBootcamp-2019/issues/39",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2605962534 | Ensure 'Apply simulation state' is always stored in its own snapshot, separate from other user actions
Fixes the following bug:
Run a simulation
Create an atom (or do anything not whitelisted)
Undo
-> The simulation AND user operation is reverted, instead of only the last operation.
This PR applies the simulation and the user action in two separate snapshots instead.
Applying that previous snapshot seems a bit redundant to me; why can't we just create a snapshot twice? Once just before the user performs a non-whitelisted operation and a second time directly after that operation? That way we also would not need to add a new API to History
The issue is that when calling snapshot_moment, the action was already done, so the only way to take a snapshot before the user performs an action is to go back everywhere this method is called and call apply_simulation_if_running() before doing anything.
It's possible, and removes the need for a simulation whitelist, but it's also more intrusive and error prone (I think it will be easy to forget applying the simulation in unrelated parts of the code).
The way it's done here is just trying to mimic what we had before with NanoUndoRedo, but we can change that if necessary.
| gharchive/pull-request | 2024-10-22T16:42:01 | 2025-04-01T06:37:11.656515 | {
"authors": [
"HungryProton"
],
"repo": "MSEP-one/msep.one",
"url": "https://github.com/MSEP-one/msep.one/pull/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
318234079 | Default behavior when sync result in deletion of files
By default, when deleting files, DriveSync should put them in google drive recycle bin and have an option to bypass this (--no-bin)
Currently items are put into the bin when remote deletion occurs. Adding an option to permanently delete files instead is a nice suggestion - will look into it, thanks.
| gharchive/issue | 2018-04-26T23:51:54 | 2025-04-01T06:37:11.687604 | {
"authors": [
"MStadlmeier",
"mehdi-S"
],
"repo": "MStadlmeier/drivesync",
"url": "https://github.com/MStadlmeier/drivesync/issues/8",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2265780929 | Redesign lecture edit page for lecturers
Current preview:
TODO
[x] Add deep linking according to this guide and this SO answer
[x] Avoid redirect with Cancel button
[x] Fix layout issues
[x] Fix sample DB Save & Cancel buttons
[x] Fix Comments save button not saving any changes (it even triggers an exception)
For reviewers
Sorry for the over 1,000 line insertions/deletions in this PR. This is mainly due to having migrated the lectures.coffee file to lectures.js and due to copying/pasting some old content to new locations while also modifying it, such that Git does not recognize it as replacements and instead reports deletions/insertions.
[x] Headers should be internationalized
[x] Every functionality that was in the accordion beforehand, should still work.
[x] Clicking on the "cancel" button should not redirect to the "content" tab but stay in the current tab
[x] Jumping directly to a tab via a url should work, e.g. this one
[x] Accessibility: navbar header can be navigated by using Tab, then Arrow keys (up down as well as left right)
[x] URL hashes (e.g. "#settings") should be a good naming choice as they shouldn't be changed in the future very often (to allow easy bookmarking for example)
[x] Announcements should be scrollable (reproduce by creating at least 10 new announcements)
[x] Importing media should still work.
Thanks for the review. I've addressed all your points in the last commits.
Any action on any subpage (e.g. #people) should let the user stay on the current subpage. Note that for some actions we reload the page, which is why some small visual flickering might appear: the "Content" tab is loaded first, then we jump to the correct section. This could be addressed in a future PR, but is maybe not worth the effort (one would have to implement some kind of js-data-storage mechanism, or wrap the forms in partials, create a submit/cancel route and send custom JS from the backend such that the partial is reloaded with the initial data; this way, we wouldn't have to reload the entire page).
The back/forward buttons should now work properly again.
I only stumbled upon one very strange bug involving the navigation of the navbar headers by Tab and arrow keys. If you make use of that and return to the content tab, the layout for the content tab is suddenly broken.
I've listened to the wrong event listener, see the correct ones here. I'm now listening to shown.bs.tab, i.e. only initialize the grid once the content of the tab is already loaded. Otherwise, the focus event triggered too early and the grid system was initialized when the content was not visible yet, therefore you saw the broken layout. It should now work flawlessly, even when resizing the browser.
| gharchive/pull-request | 2024-04-26T13:21:39 | 2025-04-01T06:37:11.742018 | {
"authors": [
"Splines"
],
"repo": "MaMpf-HD/mampf",
"url": "https://github.com/MaMpf-HD/mampf/pull/628",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2481973044 | Update Gemfile dependencies (bundle update)
I ran bundle update to update our dependencies according to our Gemfile (which I cleaned up recently in #680 and pinned Gems there). Might be worth checking out if there are new major versions for some gems (in a future PR).
I also updated my local bundler version by executing bundle update --bundler. You can see this in the last line in the Gemfile.lock. The newest version is 2.5.17, released on August 1, 2024.
TODO for myself & reviewers
[x] Check that everything is still working with the new dependencies.
We should probably not encounter any problems as we pinned the gems in #680, such that they only undergo automatic minor version upgrades. There could still be a bad apple somewhere...
@fosterfarrell9 Good that we have tests: the update for the js-routes gem here broke MaMpf and you couldn't access some pages in the frontend anymore since Routes. ... was not defined anymore for JavaScript files. Some cypress tests detected this, e.g. your test that the lecture page shows a button.
Instead of fixing the underlying problem during the update, I've pinned js-routes to the older version. This is because there is already a new major version and I don't see the point in investing time here in upgrading a minor version if we could just as well upgrade to the major version soon, see #690.
| gharchive/pull-request | 2024-08-22T23:27:40 | 2025-04-01T06:37:11.745594 | {
"authors": [
"Splines"
],
"repo": "MaMpf-HD/mampf",
"url": "https://github.com/MaMpf-HD/mampf/pull/688",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
793423281 | IDS architecture overview
As discussed during dev meetings; it would be nice to have an overview of our 'mosaiq' of possible available tools and project architectures so that everyone reuses existing code as much as possible.
Perhaps better to move this to the projects website
| gharchive/issue | 2021-01-25T14:21:39 | 2025-04-01T06:37:11.778388 | {
"authors": [
"thendriks"
],
"repo": "MaastrichtU-IDS/best-practices",
"url": "https://github.com/MaastrichtU-IDS/best-practices/issues/6",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
195854168 | [QUESTION] CSV to Array
So I am working on a form where I want to give the user the option of uploading a CSV that will allow the form to be automatically populated. So I thought I would build a function that reads the CSV and then throws each row into an array as an object which I can then pass back to my Laravel Blade template. My only problem is, the array I return from the function is always empty. Any idea?
private function import($path) {
$applicants = [];
Excel::load($path, function(LaravelExcelReader $excel) use ($applicants){
$excel->each(function(Collection $line) use ($applicants){
$name = new \stdClass;
$name->first = $line->get('first');
$name->middle = $line->get('middle');
$name->last = $line->get('last');
$name->birthdate = $line->get('birthdate');
$name->ssn = $line->get('ssn');
$name->email = $line->get('email');
$name->mobile_phone = $line->get('mobile_phone');
$name->home_phone = $line->get('home_phone');
$name->street = $line->get('street');
$name->city = $line->get('city');
$name->state = $line->get('state');
$name->zip = $line->get('zip');
array_push($applicants, $name);
});
});
return $applicants;
}
array_push() updates the $applicants array by reference therefore you need to pass the reference to the closures using & since you're using a primitive type:
Excel::load($path, function(LaravelExcelReader $excel) use (& $applicants) {
$excel->each(function(Collection $line) use (& $applicants) {
Even easier solution:
$excel = Excel::load($path)->get();
$excel->each(function(Collection $line) use ($applicants){
$name = new \stdClass;
$name->first = $line->get('first');
$name->middle = $line->get('middle');
$name->last = $line->get('last');
$name->birthdate = $line->get('birthdate');
$name->ssn = $line->get('ssn');
$name->email = $line->get('email');
$name->mobile_phone = $line->get('mobile_phone');
$name->home_phone = $line->get('home_phone');
$name->street = $line->get('street');
$name->city = $line->get('city');
$name->state = $line->get('state');
$name->zip = $line->get('zip');
array_push($applicants, $name);
});
| gharchive/issue | 2016-12-15T16:41:24 | 2025-04-01T06:37:11.781252 | {
"authors": [
"Neve12ende12",
"patrickbrouwers",
"stephanecoinon"
],
"repo": "Maatwebsite/Laravel-Excel",
"url": "https://github.com/Maatwebsite/Laravel-Excel/issues/1027",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
136877916 | Set cell border for merged cells
Hi. I have set border-top and border-bottom on my merged cells table in CSS, but it failed to show the border in Excel.
Here my css:
tr.border-double { border-top: 1px double #000000; border-bottom: 1px double #000000; }
and here my table:
then the result:
The merged column can't set border top and bottom.
Known bug, difficult to fix.
Use PHP-methods to merge the columns, it will work better.
| gharchive/issue | 2016-02-27T04:34:01 | 2025-04-01T06:37:11.784067 | {
"authors": [
"muhghazaliakbar",
"patrickbrouwers"
],
"repo": "Maatwebsite/Laravel-Excel",
"url": "https://github.com/Maatwebsite/Laravel-Excel/issues/706",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
758466838 | The preview location in DockMate does not account for the dock's position on the screen (bottom, left, or right)
Bug Report
Current Behavior
The DockMate plugin is only designed to work correctly when the dock is located at the bottom of the screen. If the dock is positioned on the left or right of the screen, there is some unwanted behaviour.
For example, the previews can be shown 50px from the dock. If the dock is at the bottom of the screen, this would mean that the preview will be 50px above the dock. However, when the dock is on the left side of the screen, one would expect this offset to be to the right side of the dock, but it is still an offset above the icon.
Even more annoyingly, since the preview now appears above the icon, and not to its side, if I then move my mouse upwards to move over the preview, the mouse is actually moving over another icon in the dock, so the preview then changes to this other icon. This is super annoying with the Spotify preview where I want to pause or skip a song, but as I move my mouse to the Spotify preview, the preview changes to that of the icon above Spotify.
To Reproduce
Set the Mac's dock to the left of the screen and then notice the behaviour outlined above.
Expected behavior/code
I would expect the locations of the preview to be relative to where on the screen the dock is placed. If the dock is at the bottom, then the previews should be placed above the icons as it is now. However, if the dock is at the left or right of the screen then I would expect the previews to be placed to the right or left of the icons, respectively.
Screenshots
Environment
MacForge version(s): 1.0.8
OS version: 11.0.1
Plugins: DockMate
Better screenshot for reference:
Hmm that's odd it's also definitely a bug that the Text is black there. I'll look into it.
Okay I see what the issue is, has to do with the orientation. It'll be fixed in the next version.
This is fixed in DockMate 0.8+
I think the bug is not solved....
| gharchive/issue | 2020-12-07T12:30:47 | 2025-04-01T06:37:11.790078 | {
"authors": [
"KevOBrien",
"a-ludwig1980",
"w0lfschild"
],
"repo": "MacEnhance/MacForge",
"url": "https://github.com/MacEnhance/MacForge/issues/53",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1441612542 | Feature request: Check and UI message for low power (laptops)
This has been mentioned in Slack, just adding here as a request.
Thank you
Duplicate of: https://github.com/Macjutsu/super/issues/8
| gharchive/issue | 2022-11-09T08:06:20 | 2025-04-01T06:37:11.799169 | {
"authors": [
"Macjutsu",
"mauriciope"
],
"repo": "Macjutsu/super",
"url": "https://github.com/Macjutsu/super/issues/45",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1812094241 | Banner time mismatch
I tried to import my wish history several times and every time I get this error. The server (Europe) in the setup is selected correctly. I already posted to discord three times about this bug, but you did not notice so I post here
Also, the wrong banner is showing up in the wish history, I didn't spin that banner at all
Looks like you fixed this bug so this issue no longer relevant. Thank you!
| gharchive/issue | 2023-07-19T14:33:49 | 2025-04-01T06:37:11.815951 | {
"authors": [
"nyaashaa"
],
"repo": "MadeBaruna/paimon-moe",
"url": "https://github.com/MadeBaruna/paimon-moe/issues/435",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2705100482 | Add another method for getting wish history link on PC
Will create a PR if approved, but there is an easier way (no powershell) to get a wish history link
Navigate to the GenshinImpact_Data -> Cache -> webCaches -> [most recently modified folder] -> cache_2
Go to the end of cache_2 and copy-paste everything from the last instance of 1/0/ to the end.
PC works; MAY work on Android (I don't know how Genshin stores files over there and my emulator for it broke).
yes, that's the manual method on https://paimon.moe/wish/import
| gharchive/issue | 2024-11-29T13:19:03 | 2025-04-01T06:37:11.817881 | {
"authors": [
"OttersMeep",
"jogerj"
],
"repo": "MadeBaruna/paimon-moe",
"url": "https://github.com/MadeBaruna/paimon-moe/issues/605",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1228378565 | 🛑 Epic Games Store is down
In 425c6d0, Epic Games Store (https://epicgames.com) was down:
HTTP code: 503
Response time: 974 ms
Resolved: Epic Games Store is back up in 685b3a9.
| gharchive/issue | 2022-05-06T21:00:14 | 2025-04-01T06:37:11.854772 | {
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/326",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1931199552 | 🛑 Libreddit (libreddit.spike.codes) is down
In d128701, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in 68de907 after 1 hour, 43 minutes.
| gharchive/issue | 2023-10-07T05:34:44 | 2025-04-01T06:37:11.857202 | {
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/6710",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1931506355 | 🛑 Libreddit (libreddit.spike.codes) is down
In 336d4e3, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in 4417402 after 1 hour, 23 minutes.
| gharchive/issue | 2023-10-07T21:14:33 | 2025-04-01T06:37:11.859716 | {
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/6722",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2027260231 | 🛑 Libreddit (libreddit.spike.codes) is down
In c1568b5, Libreddit (libreddit.spike.codes) (https://libreddit.spike.codes) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Libreddit (libreddit.spike.codes) is back up in 911c4c7 after 26 minutes.
| gharchive/issue | 2023-12-05T22:43:21 | 2025-04-01T06:37:11.862388 | {
"authors": [
"Magic-Services-Account"
],
"repo": "Magic-Services/upptime",
"url": "https://github.com/Magic-Services/upptime/issues/8059",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2072172604 | Comment blocks in python are not handled
Comment blocks in python like
"""
comment line here
"""
are not handled.
This is because such comments get parsed as (expression_statement (string)) instead of (comment). As that seems to be an issue with the tree-sitter parser for python, I do not recommend to fix this here, as that would necessitate a workaround especially for that language.
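For illustration, the distinction can be seen with Python's standard ast module alone (a standalone sketch, not code from metric-gardener): a triple-quoted string at statement level is parsed as an expression statement holding a string constant, while # comments never appear in the syntax tree at all.
```python
import ast

source = '''"""
comment line here
"""
x = 1  # a real comment
'''

tree = ast.parse(source)
for node in tree.body:
    print(type(node).__name__)
# Prints:
#   Expr    <- the triple-quoted "comment" is an expression statement wrapping a string
#   Assign  <- the assignment; the trailing "# a real comment" is absent from the AST
```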
This is a tricky one. Python handles multi-line comments differently than other programming languages. They are just "regular" expressions.
How to interpret them correctly is a difficult decision and I guess that won't be "fixed". As such, having special handling in our code might be required to parse those. Alternatively, print a warning when parsing Python with the concrete metric.
You might want to open an issue in the python library to ask if they would be interested in trying to differentiate the two or not.
Ok, hm. Interesting. If they are indeed handled as normal expressions, the behaviour of classifying them as "expression_statement" should be correct from a syntactic view, I guess. That would also raise the question of whether or not to count them as "comment lines" at all. If there is no strong opinion on counting them toward the comment lines metric, we might leave it as it is.
Such "afterMetricHooks" would also have to be specific for each metric. But otherwise, something similar should work, I guess. It might also happen that we need to override the general logic for some language one day because it is not sufficient to just add some other value to the result, or because the logic does not apply at all. On the other hand, if the general logic is ok and we only need to add to that result, like it is the case here, this solution could avoid redundancy. On the other hand, we could call the function with the general logic again from the overriding function to avoid that redundancy. Mh.
However, the main issue here would be that part of identifying these multi-line strings and determining that they are not passed to anything. I currently have a look at the dependency analysis code to fix an issue that occurs at identifying passed parameters, etc., when updating the grammars. There is a lot that can go wrong there when the grammar changes, also to keep up with all variants that the grammar allows for on such a position, so this is not going to be trivial and the maintenance of handling this could become an issue. But there is probably no way around it.
Ok, it might be simpler than expected, as it seems like the Python tree-sitter grammar always puts such standalone strings into a single (expression_statement), while for other kinds of operations with these strings there is another node type surrounding the string node, e.g. call
| gharchive/issue | 2024-01-09T11:21:44 | 2025-04-01T06:37:11.932893 | {
"authors": [
"BridgeAR",
"ResistantBear"
],
"repo": "MaibornWolff/metric-gardener",
"url": "https://github.com/MaibornWolff/metric-gardener/issues/31",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
99436709 | add pause and resume functions to vpaid demo
Hi guys,
I've added pause and resume functionality to the demo, so that I could personally verify that issue #11 was solved.
Feel free to merge it if you'd like this in your demo, or discard this PR if you don't think it's needed.
Cheers,
Or
We have updated the demo and it looks very good. Thanks @orcaman
| gharchive/pull-request | 2015-08-06T13:42:15 | 2025-04-01T06:37:11.937701 | {
"authors": [
"carpasse",
"orcaman"
],
"repo": "MailOnline/videojs-vast-vpaid",
"url": "https://github.com/MailOnline/videojs-vast-vpaid/pull/16",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
982899195 | Chat Window is not configurable and Globals fonts do not appear to work.
Foundry Virtual Desktop: 0.8.9
Whetstone: V1.2.0
The chat window and chat dropdown always use the default parchment background, and cannot be changed. This prevents dark themes from working as the text box uses the dark text preset which would be set to a light colour in a dark theme (tested using the default Ocean Blues theme with the Icewind Blues preset)
Fonts as defined using the Whetstone Global Variables seem to also have no effect anywhere in Foundry.
Chat windows colors have been fixed in V1.2.2. Unfortunately nothing really uses the global fonts, so they don't do much.
| gharchive/issue | 2021-08-30T15:08:26 | 2025-04-01T06:37:11.978441 | {
"authors": [
"MajorVictory",
"XtraButtery"
],
"repo": "MajorVictory/Whetstone",
"url": "https://github.com/MajorVictory/Whetstone/issues/23",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1640072449 | replace PyCall (and Delaunay) with PythonCall
PythonCall is used in PyMNE; recently we got incompatibility issues, especially trying to get both running at the same time. Most users will not understand that they need to place an ENV["PYTHOn..."] = "@PyCall" in between the two packages...
Much easier to simply remove the PyCall dependency.
This also had the issue of removing Delaunay.jl, as it is a wrapper to SciPy using ... PyCall.
Also I had to remove the wrapper to SciPy.jl - as it is using PyCall as well.
But turns out we don't really need them anyway :) Delaunay is 2 lines of code, importing SciPy I stole from PyMNE.jl (with attributions).
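For reference, the SciPy call being wrapped boils down to roughly this (a standalone Python sketch of the usual scipy.spatial API, not code from this package; the Julia side reaches the same routine through the Python interop layer):
```python
import numpy as np
from scipy.spatial import Delaunay

positions = np.random.rand(64, 2)   # 2-D sensor positions
tri = Delaunay(positions)           # Delaunay triangulation of the point set

print(tri.simplices.shape)          # (n_triangles, 3): vertex indices of each triangle
print(tri.simplices[:3])            # first few triangles
```
The simplices (triangle vertex indices) are then what gets turned into a mesh of the points.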
Some further thought:
We need SciPy (and thus PythonCall) at only two functions:
1) Delaunay: I tried replacing this with VoronoiDelaunay.jl - which worked fine to get the triangles, I even managed to generate a GeometryBasics.Mesh - but it crashed later with Makie mesh-plotting, as my Mesh is based on triangles and not points+edges (coordinates(GeometryBasics.Mesh) crashed when the mesh is based on Ngons). I simply don't know enough to fix it. Maybe @Simon can push me in the right direction. I still have the code in the function, a bit ugly, but maybe it is an easy fix.
2) CloughTocher: This is hard to replace; I did look into the algorithm at some point, it is quite complex.
most likely we can skip this PR, as CloughTocher2DInterpolation.jl is already on par with the scipy version + we can probably figure out how to fix the Voronoi in pure julia.
There is some code in here I do want to use, so I keep it open for now
is this superseded by #33? If so, can you add closes #33 to the topline comment for that PR?
| gharchive/pull-request | 2023-03-24T21:31:36 | 2025-04-01T06:37:12.010271 | {
"authors": [
"behinger",
"palday"
],
"repo": "MakieOrg/TopoPlots.jl",
"url": "https://github.com/MakieOrg/TopoPlots.jl/pull/27",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1117537873 | Validate pack install before attempting to run it
Currently, libcnb-test tries to run pack and does not:
Validate its installed
Validate the installed version satisfies the version requirement
In such cases, errors will pop up but they're not very descriptive. By running a validation step first, we can output better errors.
Current error message when pack is not installed:
running 1 test
Finished dev [unoptimized + debuginfo] target(s) in 0.09s
test basic ... FAILED
failures:
---- basic stdout ----
thread 'basic' panicked at 'Could not spawn external 'pack' process: Os { code: 2, kind: NotFound, message: "No such file or directory" }', /Users/emorley/src/libcnb.rs/libcnb-test/src/lib.rs:183:14
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
basic
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.39s
error: test failed, to rerun pass '-p example-02-ruby-sample --test integration_test'
Current error message when using a pack CLI version that doesn't support a newer buildpack API version:
(This error message seems pretty clear already - I think we can ignore this case, and just focus on cleaning up the "pack not installed" case only)
running 1 test
Finished dev [unoptimized + debuginfo] target(s) in 0.08s
test basic ... FAILED
failures:
---- basic stdout ----
thread 'basic' panicked at 'pack command failed with exit-code 1!
pack stdout:
20: Pulling from heroku/buildpacks
Digest: sha256:58fb0194d0f9fbce5d23000137f696497f7a42a2fed9a02e5a2d756c8be88420
Status: Image is up to date for heroku/buildpacks:20
20: Pulling from heroku/pack
Digest: sha256:c28cb273b47bf0fac9716f5c8c4887115434d35bfd73378b26a9a8c5e050cac6
Status: Image is up to date for heroku/pack:20
pack stderr:
ERROR: failed to build: validating buildpacks: buildpack 'libcnb-examples/ruby@0.1.0' (Buildpack API 0.9) is incompatible with lifecycle '0.13.1' (Buildpack API(s) 0.2, 0.3, 0.4, 0.5, 0.6, 0.7)
', /Users/emorley/src/libcnb.rs/libcnb-test/src/lib.rs:221:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
failures:
basic
test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 2.67s
error: test failed, to rerun pass '--test integration_test'
| gharchive/issue | 2022-01-28T15:33:39 | 2025-04-01T06:37:12.039071 | {
"authors": [
"Malax",
"edmorley"
],
"repo": "Malax/libcnb.rs",
"url": "https://github.com/Malax/libcnb.rs/issues/296",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2271661238 | pls help to advise
Traceback (most recent call last):
File "D:\anytext\main.py", line 30, in
results, rtn_code, rtn_warning = pipe(
File "D:\anytext\anytext_pipeline.py", line 306, in call
samples, intermediates = self.ddim_sampler.sample(
File "D:\anytext\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\anytext\cldm\ddim_hacked.py", line 103, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "D:\anytext\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\anytext\cldm\ddim_hacked.py", line 163, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "D:\anytext\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\anytext\cldm\ddim_hacked.py", line 190, in p_sample_ddim
model_t = self.model.apply_model(x, t, c)
File "D:\anytext\cldm\cldm.py", line 445, in apply_model
control = self.control_model(x=x_noisy, timesteps=t, context=_cond, hint=_hint, text_info=cond['text_info'])
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\anytext\cldm\cldm.py", line 339, in forward
h = module(h, emb, context)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\anytext\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\anytext\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\anytext\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\anytext\ldm\modules\diffusionmodules\util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\anytext\venv\lib\site-packages\torch\autograd\function.py", line 553, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\anytext\ldm\modules\diffusionmodules\util.py", line 129, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\anytext\ldm\modules\attention.py", line 272, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\anytext\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "D:\anytext\ldm\modules\attention.py", line 192, in forward
out = einsum('b i j, b j d -> b i d', sim, v)
File "D:\anytext\venv\lib\site-packages\torch\functional.py", line 380, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: expected scalar type Half but found Float
https://github.com/MaletteAI/anytext/blob/fa9a1fe714808cf612cd95f4cbae125d42a52343/main.py#L13
try setting use_fp16 to False
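For context, a minimal standalone PyTorch sketch of the kind of dtype mismatch behind the traceback (not code from this repository): matmul-style ops such as einsum expect their operands to share a dtype, so a float32 tensor meeting half-precision activations can fail exactly like this, and running without fp16 keeps everything in float32.
```python
import torch

sim = torch.randn(2, 3, 4, dtype=torch.float16)  # e.g. attention scores from an fp16 model
v = torch.randn(2, 4, 5, dtype=torch.float32)    # a tensor that stayed in float32

try:
    torch.einsum('bij,bjd->bid', sim, v)          # mixed Half/Float operands
except RuntimeError as err:
    print(err)                                    # dtype-mismatch error on affected builds

# Casting both operands to the same dtype (what disabling fp16 effectively does) works:
out = torch.einsum('bij,bjd->bid', sim.float(), v)
print(out.shape)                                  # torch.Size([2, 3, 5])
```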
| gharchive/issue | 2024-04-30T14:36:57 | 2025-04-01T06:37:12.054889 | {
"authors": [
"hjcxlp",
"jinchanz"
],
"repo": "MaletteAI/anytext",
"url": "https://github.com/MaletteAI/anytext/issues/2",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
254180067 | Question 20 - Issue #7
What is the logo for?
The logo belongs to our company; it is what distinguishes us.
| gharchive/issue | 2017-08-31T01:20:00 | 2025-04-01T06:37:12.056326 | {
"authors": [
"MalumaDiego",
"garabhato"
],
"repo": "MalumaDiego/rfp",
"url": "https://github.com/MalumaDiego/rfp/issues/30",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
201087194 | Minify css bundle output from yarn build
Add an additional webpack plugin to minify the output css bundle when
building for production.
In addition adjust source map settings for versions that are faster and
smaller. Tested with latest version of Chrome and the source maps worked
pretty well, with other browsers/versions your mileage may vary.
https://www.pivotaltracker.com/story/show/137196541
@miq-bot add_label euwe/no, enhancement
| gharchive/pull-request | 2017-01-16T18:10:13 | 2025-04-01T06:37:12.104737 | {
"authors": [
"jjlangholtz"
],
"repo": "ManageIQ/manageiq-ui-service",
"url": "https://github.com/ManageIQ/manageiq-ui-service/pull/432",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
204699540 | Vertical navigation support dynamic translations and badge counts
This enables the page to be refreshed without doing a window refresh.
PV https://www.pivotaltracker.com/story/show/134436111
@miq-bot add_label bug
Ultimately, the reason we had issues translating is that we changed the source title's language in our navigation provider; since the provider stayed in its current state after a state reload, the system attempted to translate the already-translated title. The solution was to create a variable holding the original untranslated string that we can then use in our navigation controller to translate.
FYI - Only our Dashboard title changes at the moment. We need these other terms to be translated in our menu. @chriskacerguis where do I file a ticket requesting translations?
@chalettu in BZ, set the "Component" to "Internationalization" (sp?).
| gharchive/pull-request | 2017-02-01T20:44:07 | 2025-04-01T06:37:12.107625 | {
"authors": [
"chalettu",
"chriskacerguis"
],
"repo": "ManageIQ/manageiq-ui-service",
"url": "https://github.com/ManageIQ/manageiq-ui-service/pull/483",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
201657366 | Decouple API from ApplicationController
Since the base controller is skipping all but one of the callbacks defined in
ApplicationController, inheriting instead from the unpolluted
ActionController::Base appears to work fine. I've just updated the base controller to reimplement set_gettext_locale. Consequently the surface
area of the API controllers can be greatly reduced.
I've also eliminated the call to (and the deliberate setting of in the specs) ApplicationController.handle_exceptions?, since I don't see how that can currently be false.
This also seems to me to be just generally a good idea, now that ApplicationController lives in another repository.
EDIT: rebasing onto https://github.com/ManageIQ/manageiq/pull/13582 will help eliminate the duplication in this PR
@abellotti is this a good time to revive this work?
@miq-bot add-label api, refactoring
@miq-bot assign @abellotti
Hurrah! With https://github.com/ManageIQ/manageiq/pull/13582 I can re-add :scissors: :scissors: :scissors: to this PR :grin:
@abellotti see explanation in https://github.com/ManageIQ/manageiq/pull/13566/commits/16a1aa48dc3c08ea2c23991eff3c3fcd83c66390 for how I dealt with the Brakeman issue
LGTM !!! 🎵
| gharchive/pull-request | 2017-01-18T18:38:57 | 2025-04-01T06:37:12.112029 | {
"authors": [
"abellotti",
"imtayadeway"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/13566",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
440659057 | Make Classification.tag2human public as the UI depends on it
Having the private keyword before defining class methods has no effect and rubocop is complaining about this. The fix that made rubocop happy made this method private and therefore inaccessible to the UI code.
Steps to reproduce:
Go to Policy -> Explorer -> Actions
Open the form for adding a new action
Select Tag as the action's type
Try to select a tag in the tree
Error caught: [NoMethodError] private method `tag2human' called for #<Class:0x00007fdd321c7570>
@miq-bot add_label bug, ui, control
@miq-bot add_reviewer @h-kataria
@miq-bot add_reviewer @martinpovolny
Wouldn't a spec for this method have caught that?
@djberg96 followup spec https://github.com/ManageIQ/manageiq/pull/18737
| gharchive/pull-request | 2019-05-06T11:43:52 | 2025-04-01T06:37:12.115847 | {
"authors": [
"agrare",
"djberg96",
"skateman"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/18736",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
484990624 | [WIP] direct_vms is a method
direct vms is not an association
Removing preload which only works with associations.
Since it is not an association, it can not be preloaded.
In previous versions, it was ignored, but it is throwing an error for future versions of virtual attributes.
Newer versions of virtual attributes complain if you preload a relation that didn't work.
The older versions just ate errors (as we complained about in a number of PRs)
wip: determining if this is indeed a problem
@NickLaMuro thanks for this comment. I'll change something in virtual attributes.
The definition of Preloader.preload ([ref](https://www.rubydoc.info/docs/rails/4.1.7/ActiveRecord/Associations/Preloader)) is pretty straightforward:
Implements the details of eager loading of Active Record associations
preload was made for associations. Service defines direct_vms, but it is not a virtual association, and it does not define a :uses. So the preload will never do anything.
via @jrafanie
lets add virtual has many:
virtual_has_many :direct_vms
| gharchive/pull-request | 2019-08-25T23:49:06 | 2025-04-01T06:37:12.119644 | {
"authors": [
"kbrock"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/19201",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
98526296 | rename 'containers providers' in the menu to 'provider' for consistency
Aligning with the rest of the providers (infra/cloud) for naming convention.
@simon3z please review
@miq-bot add_label providers/containers
@miq-bot add_label ui
BEFORE
AFTER
| gharchive/pull-request | 2015-08-01T10:07:41 | 2025-04-01T06:37:12.121730 | {
"authors": [
"abonas"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/3683",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
104563045 | Unreachable code miq smis client
Based on 3cf168e6936af771c686df34a6aafc96b947b306
@roliveri Please review
@roliveri Updated
@roliveri Any other concerns?
| gharchive/pull-request | 2015-09-02T20:19:31 | 2025-04-01T06:37:12.122965 | {
"authors": [
"brandondunne"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/4184",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
125433321 | Process name setting for better administration/troubleshooting
I saw that some workers were eating CPU but I did not know which workers. This should help with locating the greedy workers. I'm just checking it on an appliance. This trick works in ps and htop for sure; top seems to ignore these fancy names.
Not sure if you like it or not :)
Copied the files to an appliance and restarted evmserverd - It works.
Don't we already do this on master @mfalesni?
See: https://github.com/ManageIQ/manageiq/blob/4036ffde438426c164974b7ebaf157d814e9833c/app/models/miq_worker/runner.rb#L474-L483
cc @akrzos @Fryguy
Sorry, at least for the workers... we need to do this for the server process... let's come up with the best title for what we need and do it.
yeah I'm confused...I already coded this up and it's already on master.
@mfalesni Can you remove the changes for the workers, because it's already there (including truncating the names).
I'm ok with the server one, but I don't like that it's rescuing Exceptions. You should be able to use .respond_to? like is done in https://github.com/ManageIQ/manageiq/blob/4036ffde438426c164974b7ebaf157d814e9833c/app/models/miq_worker/runner.rb#L474-L483.
The other thing is where the code should live. I purposely didn't put the worker one in the worker binstub, but I'm not sure why. I guess I'm ok with the server one being there, but maybe it's better for the code to live in evm_server.rb and have the EvmServer.start method call something that sets the process title (again, like was done for workers in the link above)
@Fryguy yup, I can.
This looks good to me. @jrafanie let's coordinate around merging of #3593
I am in favor of a shorter title than ManageIQ Server process such as MiqServer.
It would be great to use the same process title as the service name via systemd. In this case I am advocating for the main server process to show up in top/ps/pidstat and other system tools as MiqServer and for us to be able to control it via systemd as systemctl start miqserver. I believe this would make it easier for system administrators to work with ManageIQ.
@mfalesni a few things
in #3593, I changed the workers prefix to be MIQ: since we have code in MiqProcess.is_worker? that relied on the command line to detect our workers.
Perhaps this PR should change the server to MIQ Server, I don't know. But either way, I'm fine with changing them to be consistent and yet unique. I want to be able to easily find the server or all workers. If we change the workers proc title, we need to change the code in #3593 after it's merged to reflect any new worker names.
the screenshot in this PR should be updated to reflect the new names, currently it shows a change to both the server and worker process but no code changes for the worker setproctitle.
@mfalesni @jrafanie @Fryguy what is the status of this PR?
Geez, I forgot about this one. I can change the name.
| gharchive/pull-request | 2016-01-07T16:47:03 | 2025-04-01T06:37:12.131793 | {
"authors": [
"Fryguy",
"akrzos",
"chessbyte",
"jrafanie",
"mfalesni"
],
"repo": "ManageIQ/manageiq",
"url": "https://github.com/ManageIQ/manageiq/pull/6086",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
201015316 | Requests don't have a timeout
Putting a timeout on API requests would allow showing an appropriate error page to users in case of network issues (like this incident).
Absolutely :-) This is already planned but we'll get on to it asap.
| gharchive/issue | 2017-01-16T13:09:28 | 2025-04-01T06:37:12.146532 | {
"authors": [
"Changaco",
"hobailey"
],
"repo": "Mangopay/mangopay2-python-sdk",
"url": "https://github.com/Mangopay/mangopay2-python-sdk/issues/82",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1040952757 | Don't iterate over boolean in BraceBetweenPoints
Currently all() tries to iterate over a boolean. It seems like the intention is something along the lines of all(x == y for x, y in zip(direction, ORIGIN)), but in this case that's not needed. A simple comparison of the arrays seems to be enough and avoids exceptions.
Reviewer Checklist
[ ] The PR title is descriptive enough for the changelog, and the PR is labeled correctly
[ ] If applicable: newly added non-private functions and classes have a docstring including a short summary and a PARAMETERS section
[ ] If applicable: newly added functions and classes are tested
These arrays are numpy arrays, and they behave a little differently with respect to == comparison. This change causes an error in our documentation build:
Exception occurred:
File "/home/docs/checkouts/readthedocs.org/user_builds/manimce/envs/2248/lib/python3.8/site-packages/manim/mobject/svg/brace.py", line 257, in __init__
if direction == ORIGIN:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Oh, that makes sense! However, it seems that in my case I get that the comparison between direction and ORIGIN returns [ True True True] if it succeeds, but only False if it fails. I'll admit I'm not an expert on numpy; is ORIGIN maybe constructed in such a way that a failed comparison returns a boolean instead of an array of booleans? e.g.
>>> np.array([1,2]) == np.ndarray([1,3])
False
>>> np.array([1,2]) == np.ndarray([1,2])
array([[ True, True]])
and if so, how should manim behave to avoid the exception being raised by False being returned?
I'm not sure about this behavior, it looks weird to me -- but this might be because you are comparing np.array to np.ndarray? If I change it to np.array on both sides, I get a boolean numpy array in both cases. And given that my first comparison raised a deprecation warning,
>>> np.array([1,2]) == np.ndarray([1,3])
<stdin>:1: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
False
I'd assume that this inconsistency will be fixed at some point.
Ok, sorry for the confusion, I think I understood what my issue was. Comparison between numpy arrays returns a boolean when they don't have the same amount of elements:
>>> np.array([1,2]) == np.array([1,2,3])
<stdin>:1: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
False
In my code I was passing to BraceBetweenPoints a 2-element array which was then checked against ORIGIN, which is a 3-element array, and this returned false and caused an exception trying to eval bool(False). If I pass Brace a 3-element array, it works.
At this point I'm wondering, given that the third element is discarded anyway, would it make sense to allow two-element arrays to be given to BraceBetweenPoints? That could be done by using np.array_equal(direction, ORIGIN) (which returns false if the two arrays are of different size) instead of all(direction == ORIGIN)?
would it make sense to allow two elements array to be given to BraceBetweenPoints
In general, yes -- but I think this is something that we should attack globally, not just for BraceBetweenPoints.
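For reference, here is a minimal, self-contained sketch of the comparison behaviours discussed above; the ORIGIN value and the helper variable names are illustrative only, not manim's actual code:
import numpy as np

ORIGIN = np.array([0.0, 0.0, 0.0])       # same 3-element convention as manim's ORIGIN
same_len = np.array([0.0, 0.0, 0.0])     # a 3-element direction
short = np.array([0.0, 0.0])             # a 2-element direction, as in the report above

# Same-length arrays: '==' is elementwise, so .all() behaves as expected.
print((same_len == ORIGIN).all())        # True
# bool(same_len == ORIGIN) would raise the "truth value is ambiguous" ValueError
# seen in the documentation build when the result is used directly in an `if`.

# Mismatched lengths: depending on the NumPy version, '==' either returns a single
# bool False (with a DeprecationWarning) or raises, so all(direction == ORIGIN)
# no longer receives an elementwise array -- the 2-element case described above.
try:
    print(short == ORIGIN)
except ValueError as exc:
    print("comparison raised:", exc)

# np.array_equal copes with mismatched shapes without ambiguity.
print(np.array_equal(same_len, ORIGIN))  # True
print(np.array_equal(short, ORIGIN))     # False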
I'm also wondering - and sorry if it's a stupid question - shouldn't there be some kind of input validation in functions to check that the values are of the correct type and length, to avoid unintuitive exceptions if those are wrong in some weird way (e.g. all taking a bool as an argument instead of a list) and to let the user more easily understand what's wrong?
Yes and no. It is on us to make sure that users understand what is wrong when input arguments do not behave correctly -- but full-blown static type checks are not particularly pythonic.
I'll close this PR for the sake of cleaning up our list of open PRs a bit, but feel free to keep commenting; you might also want to join our Discord (https://manim.community/discord) if you have not already. :-)
| gharchive/pull-request | 2021-11-01T09:19:32 | 2025-04-01T06:37:12.160718 | {
"authors": [
"behackl",
"veggero"
],
"repo": "ManimCommunity/manim",
"url": "https://github.com/ManimCommunity/manim/pull/2248",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1931262161 | #12 Fixed some Minor Issues
Fixes Issue
Closes #12
Proposed Changes
This pull request includes the following changes:
Fixed a typo in the input placeholder from "Enetr your city" to "Enter your city."
Corrected the API call in Home.jsx to ensure it is invoked properly.
Removed unused and unnecessary variables and code from Home.jsx and App.js.
Checklist (Check all applicable boxes)
[x] My code adheres to the project's coding style.
[ ] My changes require updates to the project's documentation.
[ ] I have updated the documentation accordingly.
[x] All new and existing tests have passed successfully.
[ ] This PR does not contain any plagiarized content.
[x] The title of my pull request provides a brief description of the requested changes.
[ ] I am contributing independently.
[x] I am contributing for Hacktoberfest 2023.
Potential Impact on Existing Code
These changes should have no negative impact on existing code. They primarily focus on improving user experience, fixing a typo, and ensuring the API call works correctly.
Screenshots
@priyankeshh In open source it is good practice to raise the issue and wait for it to be assigned; this helps both maintainer and contributor. Never work on something unless it is assigned to you.
Very sorry to say it this way, but the PR will not get merged unless you are assigned.
| gharchive/pull-request | 2023-10-07T08:46:29 | 2025-04-01T06:37:12.167109 | {
"authors": [
"ManishaSwain8",
"priyankeshh"
],
"repo": "ManishaSwain8/ReactToWeather",
"url": "https://github.com/ManishaSwain8/ReactToWeather/pull/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
115215676 | toJS exports non-context properties
// var data = {_id: 'string1', name: 'string2', some_property: true};
Template.test.viewmodel({
canEdit1: function(){return true;},
canEdit2: data.some_property,
changeCanEdit1: function(){this.canEdit1(false)}, // this doesnt work
changeCanEdit2: function(){this.canEdit2(false)}, // this works but...
submitData: function(){
var new_data = this.toJS(); // problem is that new_data
// is now:{_id: 'string1', name: 'string2', some_property: true, canEdit2: false}
// i dont want the canEdit2 property in my new_data
doSomething(new_data);
},
});
suppose i have something like the above, how should i export the updated data?
currently i been using data.hasProperty(h) and loop/compare it against this.toJS() to only get my keys back
canEdit1 is a tautology. It doesn't matter what you call it with, it will always return true.
toJS returns canEdit2 because it's part of the viewmodel. If you don't want it there you have to pluck the values you need.
OK, I would suggest a feature that plucks only the values that are also in my initial data context; I think that would be useful.
In my use case, I have a form that populates from existing data and allows users to edit it, and I then need to update the changed values in my collection.
| gharchive/issue | 2015-11-05T06:09:10 | 2025-04-01T06:37:12.193312 | {
"authors": [
"ManuelDeLeon",
"peonmodel"
],
"repo": "ManuelDeLeon/viewmodel",
"url": "https://github.com/ManuelDeLeon/viewmodel/issues/121",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
592339644 | say does not speak Chinese with the right pronunciation
Hello, the Chinese word '你好' should be pronounced 'ni hao'. But when I try this on Windows 10
const Say = require('say').Say;
const say = new Say('win32');
say.speak('你好');
It says something that sounds like 'huan chai' to me.
Is that a problem or should I install some more plug-in? Thanks.
Actually, Node.js encodes strings as UTF-8 by default, and so does say.js. But it seems that platforms like win32 decode it as GBK.
To fix this problem, I forked this repository and added a setEncoding method to the Say class.
You can call setEncoding('gbk') before speak.
Here is an example.
Thanks for your idea, although I failed to make it work that way, as the code doesn't recognize the 'gbk' encoding.
However, I worked this out with the encoding package 'iconv-lite' with your guidance. A new solution can be found in my repository, which can be an optional approach for others.
The fix can be as simple as setting the command = 'chcp 65001 >NUL & powershell' instead of 'powershell' in the file win32.js. See https://stackoverflow.com/questions/68988696/set-utf-8-input-and-get-utf-8-output-through-pipe-to-from-powershell-with-c-c for more info about setting stdin's encoding to be UTF-8.
| gharchive/issue | 2020-04-02T03:53:55 | 2025-04-01T06:37:12.214209 | {
"authors": [
"awei12986",
"carsonDB",
"silence19",
"trungphan"
],
"repo": "Marak/say.js",
"url": "https://github.com/Marak/say.js/issues/99",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1186792039 | CSS Link Is still generated
When using the plugin the resulting HTML file generate should NOT contain the link to the non existing css chunk. However it still seems to be generated and points at nothing.
Am I missing something?
Vite: v2.8.0 and 2.9.0 tested
Config
// vite.config.js
const { defineConfig } = require('vite')
import cssInjectedByJsPlugin from 'vite-plugin-css-injected-by-js'
module.exports = defineConfig({
plugins: [
cssInjectedByJsPlugin()
],
build: {
assetsDir: '',
rollupOptions: {
output: {
dir: undefined,
manualChunks: undefined,
file: './dist/foo.js',
},
},
}
})
Result
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Foo Example</title>
<link href="https://fonts.googleapis.com/css?family=Source+Sans+Pro:400,600" rel="stylesheet" />
<script type="module" crossorigin src="./foo.js"></script>
<link rel="stylesheet" href="./index.61bb7400.css"> <!-- SHOULD NOT BE HERE -->
</head>
<body>
<div id="test-button"></div>
<div id="example-container"></div>
</body>
</html>
Hi @Nithos, I tried your configuration but in my case, the "link" element for the CSS is not generated.
Can you give me more information about it? Do you have an open project where this issue happens?
Hi, not sure what happened, but I can no longer reproduce the issue on a new project or my original one.
My apologies but thank you for reaching out.
Don't worry, thanks a lot anyway
@CQCStan fixed with v1.5.1
Awesome, thank you both.
| gharchive/issue | 2022-03-30T17:49:19 | 2025-04-01T06:37:12.230249 | {
"authors": [
"Marco-Prontera",
"Nithos"
],
"repo": "Marco-Prontera/vite-plugin-css-injected-by-js",
"url": "https://github.com/Marco-Prontera/vite-plugin-css-injected-by-js/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1225705956 | Create a single match
The game design is not done yet, so start from there
https://scrapbox.io/marco3jp/私のためのストラテジーゲーム%2Fメモ#6272c1350b71ee000009799f
Today I roughly wrote the part from here down
| gharchive/issue | 2022-05-04T18:03:09 | 2025-04-01T06:37:12.232038 | {
"authors": [
"Marco3jp"
],
"repo": "Marco3jp/GameForMeProject",
"url": "https://github.com/Marco3jp/GameForMeProject/issues/11",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
405009232 | Upgraded to babel 7, babel loader 8, fortawesome 1.2, webpack 4
Hi Mark,
I updated the outdated packages, and updated the icons.js file to be compatible with the latest version of fortawesome. Everything is working as expected when I generate a new project using the template, and run it in debug/release.
Please let me know if you need me to fix anything!
-Josh
This looks great @jholt456 🍺 🍺 🍺 🍺
If there's anything else that could use some serious love or updating feel free, and help is always super appreciated !! I need to spend some time to really get the VueCLI integrated in here as well, so we can always stay up to date!
Thanks again
| gharchive/pull-request | 2019-01-30T23:18:00 | 2025-04-01T06:37:12.328249 | {
"authors": [
"MarkPieszak",
"jholt456"
],
"repo": "MarkPieszak/aspnetcore-Vue-starter",
"url": "https://github.com/MarkPieszak/aspnetcore-Vue-starter/pull/120",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
257803001 | Lyrics skipping
OS: Windows 7
Issue Descriptions: Lyric scrolling does not track correctly when songs change. The time progress does not always reach the end.
Steps to Reproduce:
Play Song A all the way through.
Open up the lyrics and leave them open.
Song A ends, but the progress bar has not reset (left at anywhere from 80-100% typically).
Song B starts, lyrics still left open.
The progress bar continues from that 80% point for Song B, despite being at the start of the song, and as a result most of the lyrics scroll by fast to match the progress bar.
It could likely be solved by ensuring that the progress tracking is reset to 0 at the start of each song.
This is the time fetching issue that was fixed in master
| gharchive/issue | 2017-09-14T17:48:44 | 2025-04-01T06:37:12.429158 | {
"authors": [
"MarshallOfSound",
"pyskell"
],
"repo": "MarshallOfSound/Google-Play-Music-Desktop-Player-UNOFFICIAL-",
"url": "https://github.com/MarshallOfSound/Google-Play-Music-Desktop-Player-UNOFFICIAL-/issues/2762",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
521719936 | Integrating with Python 3.8
CPython accepted a commit that basically copies subset of asynctest into the standard library.
Should we champion our cause there?
Pros:
No legacy: it's Python 3.8+ only
We can alter mock's internals for our needs
@Kentzo would be nice to state explicitly what "our cause" is :)
Full spectrum of features offered by asynctest of course :)
Hi,
I discussed with Lisa Roach who contributed this to cPython. AsyncMock is actually borrowing code from CoroutineMock, which is great but makes it hard to make asynctest compatible with Python 3.8.
I'd be super happy to push more features from asynctest directly into cPython, in particular, an AsyncTestCase would probably be a nice addition.
I have yet to look in depth at what features have been introduced in unittest by AsyncMock and where it diverges from asynctest. Ideally, more and more features would be introduced in unittest, making asynctest progressively irrelevant and deprecated.
I've been slowly drafting a doc summarizing the changes I'd like to make to support Python 3.8:
https://docs.google.com/document/d/1w5QSXafNI7Xsswwbd56KFA6EEjCXgNMgmF_BwAgVsvE/edit
I authored https://github.com/python/cpython/pull/17133 yesterday, which is essentially _AwaitedEvent for threads. If accepted, it would allow making another PR with _AwaitedEvent.
Re:
users should be able to replace all references of “unittest” with “asynctest”
I, for one, mix and match unittest.mock and asynctest. A whole bunch of unit tests run synchronously, and those that require asynchrony specifically because the code under test awaits something "external" use asynctest...
+1 on
deprecate asynctest.CoroutineMock
Re: AsyncTestCase
I’m squarely in the rosy pytest world, so no opinion here ;) Perhaps core devs may be Interested in better way to write tests like e.g. https://github.com/python/cpython/blob/master/Lib/test/test_asyncio/test_futures.py
| gharchive/issue | 2019-11-12T18:21:25 | 2025-04-01T06:37:12.444490 | {
"authors": [
"Kentzo",
"Martiusweb",
"dimaqq"
],
"repo": "Martiusweb/asynctest",
"url": "https://github.com/Martiusweb/asynctest/issues/144",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
434734210 | Peer-Review for the Japanese Translation
Reviewed as far as 2.
My apologies for taking so much time. Just completed reviewing the entire translation.
@MasakiMinamide any chance to review and merge this in?
| gharchive/pull-request | 2019-04-18T12:00:38 | 2025-04-01T06:37:12.448726 | {
"authors": [
"Jokyash",
"shawntabrizi"
],
"repo": "MasakiMinamide/substrate-collectables-workshop",
"url": "https://github.com/MasakiMinamide/substrate-collectables-workshop/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
94168141 | AWS AMI
Example: kong/downloads/#aws
https://coreos.com/docs/running-coreos/cloud-providers/ec2/
Because Kong doesn't work without Cassandra, we should also investigate in creating a CloudFormation stack that also starts a Cassandra cluster (I know @ahmadnassri was supposed to look into this, but maybe @shashiranjan84 can help).
+100
I have not made any progress on this. @shashiranjan84 feel free to go at it!
| gharchive/issue | 2015-07-09T22:38:31 | 2025-04-01T06:37:12.456800 | {
"authors": [
"ahmadnassri",
"shashiranjan84",
"sinzone",
"thefosk"
],
"repo": "Mashape/kong",
"url": "https://github.com/Mashape/kong/issues/389",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1887960106 | Null reference exception when attempting to retrieve RequestId from Localstack SNS PublishBatch call
Contact Details
No response
Version
8.x
On which operating system(s) are you experiencing the issue?
Windows
Using which broker(s) did you encounter the issue?
Amazon SQS
What are the steps required to reproduce the issue?
This is an issue when using the Amazon SQS broker when running localstack locally.
I've set up a worker that publishes a message as per the tutorial.
_bus.Publish(new GettingStarted {Value = messageText}, stoppingToken);
This line throws a null reference exception. The issue is in MassTransit.AmazonSqsTransport.AmazonWebServiceResponseExtensions line 14:
var requestId = response.ResponseMetadata.RequestId;
ResponseMetadata is not populated in the response from localstack. I've raised this bug with in the localstack repo, but this could easily be worked around in the MassTransit library. The requestId is only used in an exception message when the API call fails.
What is the expected behavior?
No null reference exception.
What actually happened?
This line throws a null reference exception. The issue is in MassTransit.AmazonSqsTransport.AmazonWebServiceResponseExtensions line 14:
var requestId = response.ResponseMetadata.RequestId;
ResponseMetadata is not populated in the response from localstack. I've raised this bug with in the localstack repo, but this could easily be worked around in the MassTransit library. The requestId is only used in an exception message when the API call fails.
Related log output, including any exceptions
The issue is with localstack but a bit of defensive coding here could get around this.
requestId = response.ResponseMetadata?.RequestId ?? "[MISSING REQUESTID]";
or similar.
Link to repository that demonstrates/reproduces the issue
No response
Added the null check to the code, always fun working around fake-SQS.
Thanks, @phatboyg. Any idea when this will be released?
| gharchive/issue | 2023-09-08T16:09:43 | 2025-04-01T06:37:12.467763 | {
"authors": [
"heynineteen",
"phatboyg"
],
"repo": "MassTransit/MassTransit",
"url": "https://github.com/MassTransit/MassTransit/issues/4634",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
65245730 | Putting up and styling for login form and its inputs
@jason, review and merge login PR
Signed off by @jason-wanjohi
| gharchive/pull-request | 2015-03-30T16:05:04 | 2025-04-01T06:37:12.473036 | {
"authors": [
"jason-wanjohi",
"owagaantony"
],
"repo": "MasterFacilityList/mfl_admin_web",
"url": "https://github.com/MasterFacilityList/mfl_admin_web/pull/2",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
171128164 | Metainfo data structure and parser
Feel free to leave feedback and edit as needed
@synlestidae Great work, as always 👍 ,however I'll investigate if #8 can help here, before merging.
Yeah. Addressing #8 would let me get rid of fn get_u32_or_error and related functions.
All right I finally got around to it. Got it parsing two metainfo files. It uses TryFrom (of which I rolled my own). There is a file in this pull request, src/tests/.mod.rs.swp, that I can't delete since git can't seem to find it >:(
This is amazing work as always, thanks a lot!
| gharchive/pull-request | 2016-08-15T08:25:43 | 2025-04-01T06:37:12.496775 | {
"authors": [
"MatejLach",
"synlestidae"
],
"repo": "MatejLach/rustorrent",
"url": "https://github.com/MatejLach/rustorrent/pull/7",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2166155914 | 🛑 Greenpeace App is down
In 767caef, Greenpeace App (https://app.greenpeace.org.ar/coupon/regular/forms/registration) was down:
HTTP code: 502
Response time: 173 ms
Resolved: Greenpeace App is back up in aca706a after 2 hours, 30 minutes.
| gharchive/issue | 2024-03-04T07:49:04 | 2025-04-01T06:37:12.583040 | {
"authors": [
"MatiasM87"
],
"repo": "MatiasM87/uptime",
"url": "https://github.com/MatiasM87/uptime/issues/1418",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
499725405 | Running on serverless platform
Can I deploy to a serverless platform like Cloud Functions or AWS Lambda?
I don't think so as the server has to stay live to be able to receive push notifications.
| gharchive/issue | 2019-09-28T04:40:04 | 2025-04-01T06:37:12.615980 | {
"authors": [
"MatthieuLemoine",
"derit"
],
"repo": "MatthieuLemoine/push-receiver",
"url": "https://github.com/MatthieuLemoine/push-receiver/issues/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1855588927 | Glitch?
There is nothing there.
This was a known issue with the MKCAD app which should now be fixed
https://www.chiefdelphi.com/t/mkcad-2023-season-updates/416846/83
| gharchive/issue | 2023-08-17T19:36:07 | 2025-04-01T06:37:12.632892 | {
"authors": [
"DRDJ123",
"Max5254"
],
"repo": "Max5254/onshape4frc.com",
"url": "https://github.com/Max5254/onshape4frc.com/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1509913627 | 🛑 Librenms is down
In af077d5, Librenms (https://librenms.atslega.network) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Librenms is back up in 3a3db72.
| gharchive/issue | 2022-12-24T02:45:00 | 2025-04-01T06:37:12.635458 | {
"authors": [
"MaxAtslega"
],
"repo": "MaxAtslega/status",
"url": "https://github.com/MaxAtslega/status/issues/199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
168369891 | Refactor assets
Fixes #697.
👍 🚢
| gharchive/pull-request | 2016-07-29T17:04:08 | 2025-04-01T06:37:12.636200 | {
"authors": [
"XhmikosR",
"jmervine"
],
"repo": "MaxCDN/bootstrap-cdn",
"url": "https://github.com/MaxCDN/bootstrap-cdn/pull/710",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
683334445 | ENTITY tags
The xml file I'm working with contains a lot of <!ENTITY...> style abbreviations inside the DOCTYPE tag that don't seem to get picked up. Is there any configuration I have to do in order to make it work?
the tags look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE JMdict [
<!ELEMENT JMdict (entry*)>
// (...)
<!ENTITY MA "martial arts term">
<!ENTITY X "rude or X-rated term (not displayed in educational software)">
<!ENTITY abbr "abbreviation">
<!ENTITY adj-i "adjective (keiyoushi)">
<!ENTITY adj-ix "adjective (keiyoushi) - yoi/ii class">
<!ENTITY adj-na "adjectival nouns or quasi-adjectives (keiyodoshi)">
<!ENTITY adj-no "nouns which may take the genitive case particle `no'">
<!ENTITY adj-pn "pre-noun adjectival (rentaishi)">
<!ENTITY adj-t "`taru' adjective">
<!ENTITY adj-f "noun or verb acting prenominally">
<!ENTITY adv "adverb (fukushi)">
<!ENTITY adv-to "adverb taking the `to' particle">
<!ENTITY arch "archaism">
<!ENTITY ateji "ateji (phonetic) reading">
<!ENTITY aux "auxiliary">
<!ENTITY aux-v "auxiliary verb">
<!ENTITY aux-adj "auxiliary adjective">
<!ENTITY Buddh "Buddhist term">
<!ENTITY chem "chemistry term">
<!ENTITY chn "children's language">
<!ENTITY col "colloquialism">
<!ENTITY comp "computer terminology">
<!ENTITY conj "conjunction">
<!ENTITY cop "copula">
<!ENTITY ctr "counter">
<!ENTITY derog "derogatory">
<!ENTITY eK "exclusively kanji">
<!ENTITY ek "exclusively kana">
<!ENTITY exp "expressions (phrases, clauses, etc.)">
<!ENTITY fam "familiar language">
<!ENTITY fem "female term or language">
<!ENTITY food "food term">
<!ENTITY geom "geometry term">
<!ENTITY gikun "gikun (meaning as reading) or jukujikun (special kanji reading)">
<!ENTITY hon "honorific or respectful (sonkeigo) language">
<!ENTITY hum "humble (kenjougo) language">
<!ENTITY iK "word containing irregular kanji usage">
<!ENTITY id "idiomatic expression">
<!ENTITY ik "word containing irregular kana usage">
<!ENTITY int "interjection (kandoushi)">
<!ENTITY io "irregular okurigana usage">
<!ENTITY iv "irregular verb">
<!ENTITY ling "linguistics terminology">
<!ENTITY m-sl "manga slang">
<!ENTITY male "male term or language">
<!ENTITY male-sl "male slang">
<!ENTITY math "mathematics">
<!ENTITY mil "military">
<!ENTITY n "noun (common) (futsuumeishi)">
<!ENTITY n-adv "adverbial noun (fukushitekimeishi)">
<!ENTITY n-suf "noun, used as a suffix">
<!ENTITY n-pref "noun, used as a prefix">
<!ENTITY n-t "noun (temporal) (jisoumeishi)">
<!ENTITY num "numeric">
<!ENTITY oK "word containing out-dated kanji">
<!ENTITY obs "obsolete term">
<!ENTITY obsc "obscure term">
<!ENTITY ok "out-dated or obsolete kana usage">
<!ENTITY oik "old or irregular kana form">
<!ENTITY on-mim "onomatopoeic or mimetic word">
<!ENTITY pn "pronoun">
<!ENTITY poet "poetical term">
<!ENTITY pol "polite (teineigo) language">
<!ENTITY pref "prefix">
<!ENTITY proverb "proverb">
<!ENTITY prt "particle">
<!ENTITY physics "physics terminology">
<!ENTITY quote "quotation">
<!ENTITY rare "rare">
<!ENTITY sens "sensitive">
<!ENTITY sl "slang">
<!ENTITY suf "suffix">
<!ENTITY uK "word usually written using kanji alone">
<!ENTITY uk "word usually written using kana alone">
<!ENTITY unc "unclassified">
<!ENTITY yoji "yojijukugo">
<!ENTITY v1 "Ichidan verb">
<!ENTITY v1-s "Ichidan verb - kureru special class">
<!ENTITY v2a-s "Nidan verb with 'u' ending (archaic)">
<!ENTITY v4h "Yodan verb with `hu/fu' ending (archaic)">
<!ENTITY v4r "Yodan verb with `ru' ending (archaic)">
<!ENTITY v5aru "Godan verb - -aru special class">
<!ENTITY v5b "Godan verb with `bu' ending">
<!ENTITY v5g "Godan verb with `gu' ending">
<!ENTITY v5k "Godan verb with `ku' ending">
<!ENTITY v5k-s "Godan verb - Iku/Yuku special class">
<!ENTITY v5m "Godan verb with `mu' ending">
<!ENTITY v5n "Godan verb with `nu' ending">
<!ENTITY v5r "Godan verb with `ru' ending">
<!ENTITY v5r-i "Godan verb with `ru' ending (irregular verb)">
<!ENTITY v5s "Godan verb with `su' ending">
<!ENTITY v5t "Godan verb with `tsu' ending">
<!ENTITY v5u "Godan verb with `u' ending">
<!ENTITY v5u-s "Godan verb with `u' ending (special class)">
<!ENTITY v5uru "Godan verb - Uru old class verb (old form of Eru)">
<!ENTITY vz "Ichidan verb - zuru verb (alternative form of -jiru verbs)">
<!ENTITY vi "intransitive verb">
<!ENTITY vk "Kuru verb - special class">
<!ENTITY vn "irregular nu verb">
<!ENTITY vr "irregular ru verb, plain form ends with -ri">
<!ENTITY vs "noun or participle which takes the aux. verb suru">
<!ENTITY vs-c "su verb - precursor to the modern suru">
<!ENTITY vs-s "suru verb - special class">
<!ENTITY vs-i "suru verb - included">
<!ENTITY kyb "Kyoto-ben">
<!ENTITY osb "Osaka-ben">
<!ENTITY ksb "Kansai-ben">
<!ENTITY ktb "Kantou-ben">
<!ENTITY tsb "Tosa-ben">
<!ENTITY thb "Touhoku-ben">
<!ENTITY tsug "Tsugaru-ben">
<!ENTITY kyu "Kyuushuu-ben">
<!ENTITY rkb "Ryuukyuu-ben">
<!ENTITY nab "Nagano-ben">
<!ENTITY hob "Hokkaido-ben">
<!ENTITY vt "transitive verb">
<!ENTITY vulg "vulgar expression or word">
<!ENTITY adj-kari "`kari' adjective (archaic)">
<!ENTITY adj-ku "`ku' adjective (archaic)">
<!ENTITY adj-shiku "`shiku' adjective (archaic)">
<!ENTITY adj-nari "archaic/formal form of na-adjective">
<!ENTITY n-pr "proper noun">
<!ENTITY v-unspec "verb unspecified">
<!ENTITY v4k "Yodan verb with `ku' ending (archaic)">
<!ENTITY v4g "Yodan verb with `gu' ending (archaic)">
<!ENTITY v4s "Yodan verb with `su' ending (archaic)">
<!ENTITY v4t "Yodan verb with `tsu' ending (archaic)">
<!ENTITY v4n "Yodan verb with `nu' ending (archaic)">
<!ENTITY v4b "Yodan verb with `bu' ending (archaic)">
<!ENTITY v4m "Yodan verb with `mu' ending (archaic)">
<!ENTITY v2k-k "Nidan verb (upper class) with `ku' ending (archaic)">
<!ENTITY v2g-k "Nidan verb (upper class) with `gu' ending (archaic)">
<!ENTITY v2t-k "Nidan verb (upper class) with `tsu' ending (archaic)">
<!ENTITY v2d-k "Nidan verb (upper class) with `dzu' ending (archaic)">
<!ENTITY v2h-k "Nidan verb (upper class) with `hu/fu' ending (archaic)">
<!ENTITY v2b-k "Nidan verb (upper class) with `bu' ending (archaic)">
<!ENTITY v2m-k "Nidan verb (upper class) with `mu' ending (archaic)">
<!ENTITY v2y-k "Nidan verb (upper class) with `yu' ending (archaic)">
<!ENTITY v2r-k "Nidan verb (upper class) with `ru' ending (archaic)">
<!ENTITY v2k-s "Nidan verb (lower class) with `ku' ending (archaic)">
<!ENTITY v2g-s "Nidan verb (lower class) with `gu' ending (archaic)">
<!ENTITY v2s-s "Nidan verb (lower class) with `su' ending (archaic)">
<!ENTITY v2z-s "Nidan verb (lower class) with `zu' ending (archaic)">
<!ENTITY v2t-s "Nidan verb (lower class) with `tsu' ending (archaic)">
<!ENTITY v2d-s "Nidan verb (lower class) with `dzu' ending (archaic)">
<!ENTITY v2n-s "Nidan verb (lower class) with `nu' ending (archaic)">
<!ENTITY v2h-s "Nidan verb (lower class) with `hu/fu' ending (archaic)">
<!ENTITY v2b-s "Nidan verb (lower class) with `bu' ending (archaic)">
<!ENTITY v2m-s "Nidan verb (lower class) with `mu' ending (archaic)">
<!ENTITY v2y-s "Nidan verb (lower class) with `yu' ending (archaic)">
<!ENTITY v2r-s "Nidan verb (lower class) with `ru' ending (archaic)">
<!ENTITY v2w-s "Nidan verb (lower class) with `u' ending and `we' conjugation (archaic)">
<!ENTITY archit "architecture term">
<!ENTITY astron "astronomy, etc. term">
<!ENTITY baseb "baseball term">
<!ENTITY biol "biology term">
<!ENTITY bot "botany term">
<!ENTITY bus "business term">
<!ENTITY econ "economics term">
<!ENTITY engr "engineering term">
<!ENTITY finc "finance term">
<!ENTITY geol "geology, etc. term">
<!ENTITY law "law, etc. term">
<!ENTITY mahj "mahjong term">
<!ENTITY med "medicine, etc. term">
<!ENTITY music "music term">
<!ENTITY Shinto "Shinto term">
<!ENTITY shogi "shogi term">
<!ENTITY sports "sports term">
<!ENTITY sumo "sumo term">
<!ENTITY zool "zoology term">
<!ENTITY joc "jocular, humorous term">
<!ENTITY anat "anatomical term">
<!ENTITY Christn "Christian term">
<!ENTITY net-sl "Internet slang">
<!ENTITY dated "dated term">
<!ENTITY hist "historical term">
<!ENTITY lit "literary or formal term">
<!ENTITY litf "literary or formal term">
<!ENTITY surname "family or surname">
<!ENTITY place "place name">
<!ENTITY unclass "unclassified name">
<!ENTITY company "company name">
<!ENTITY product "product name">
<!ENTITY work "work of art, literature, music, etc. name">
<!ENTITY person "full name of a particular person">
<!ENTITY given "given name or forename, gender not specified">
<!ENTITY station "railway station">
<!ENTITY organization "organization name">
]>
and one example:
<JMdict>
<entry>
<ent_seq>1000000</ent_seq>
<r_ele>
<reb>ヽ</reb>
</r_ele>
<sense>
<pos>&unc;</pos>
<xref>一の字点</xref>
<gloss g_type="expl">repetition mark in katakana</gloss>
</sense>
<sense>
<gloss xml:lang="dut">hitotsuten 一つ点: teken dat herhaling van het voorafgaande katakana-schriftteken aangeeft</gloss>
</sense>
</entry>
// (...)
</JMdict>
the entry->sense->pos tag should expand &unc; into unclassified because of <!ENTITY unc "unclassified">, but in the resulting struct it comes up empty.
The underlying parser is picking up on the entities, because it will throw an Error Domain=NSXMLParserErrorDomain Code=111 "(null)" if even one of the entities used in the xml files is missing from the definitions in the DOCTYPE header.
Seems like this might actually be a problem with MSXMLParser itself:
https://stackoverflow.com/questions/21784131/nsxmlparser-not-resolving-internal-entity
Some information about entity tags:
https://www.logicbig.com/tutorials/misc/xml/xml-entity.html#:~:text=Internal Entities%3A An internal entity,defined in an separate file.
maybe relevant apple documentation:
https://developer.apple.com/documentation/foundation/nsxmlparserdelegate/1412907-parser
parser:foundUnparsedEntityDeclarationWithName:publicID:systemID:notationName:
https://developer.apple.com/documentation/foundation/nsxmlparserdelegate/1414803-parser
parser:foundInternalEntityDeclarationWithName:value:
This seems like it would be necessary to decode the entity shortcuts.
relevant in case of external entity declaration (the entity declaration resides in another file)
https://developer.apple.com/documentation/foundation/nsxmlparserdelegate/1416221-parser
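As a rough illustration of the general mechanism (this sketch uses Python's standard-library expat bindings and a cut-down, made-up document, not XMLCoder or NSXMLParser code): a parser can both report the internal entity declarations and expand them in element content.
from xml.parsers.expat import ParserCreate

DOC = """<?xml version="1.0"?>
<!DOCTYPE JMdict [
  <!ENTITY unc "unclassified">
]>
<JMdict><sense><pos>&unc;</pos></sense></JMdict>"""

entities = {}
chunks = []

def entity_decl(name, is_param, value, base, system_id, public_id, notation):
    # Internal entities carry their replacement text in `value`.
    if value is not None:
        entities[name] = value

parser = ParserCreate()
parser.EntityDeclHandler = entity_decl
parser.CharacterDataHandler = chunks.append   # receives already-expanded text

parser.Parse(DOC, True)
print(entities)         # {'unc': 'unclassified'}
print("".join(chunks))  # unclassified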
| gharchive/issue | 2020-08-21T06:33:20 | 2025-04-01T06:37:12.648366 | {
"authors": [
"MartinP7r"
],
"repo": "MaxDesiatov/XMLCoder",
"url": "https://github.com/MaxDesiatov/XMLCoder/issues/199",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1487556266 | --save-sample for Object Detection creates 0.00
Hello,
I posted this as a comment on #135 but it's closed so I didn't know if it would be seen.
I'm training an object-detection model and I'm experiencing a related issue. When I add --save-sample, my mAP goes to 0.0000. Attached (ex.1 , ex.2) are some of my tests. Additionally, I tried the changed train file in PR https://github.com/MaximIntegratedAI/ai8x-training/pull/136 and I am still receiving a lower mAP than I have in previous continuations (ex.3). I also have a control example with the unedited SVHN_74_simplified with the changed train.py (ex.4).
When I use the samples from the different train.py's in the synthesis quantize and generation, they receive the same output which I have in ex.4. They are both using checkpoints from the same previous svhn training session with the original training script.
Are there any differences with object detection in this way?
Thanks for the help!
sample_examples.zip
Indeed the changes mentioned in PR 136 seem to fix the issue and I opened a fresh PR #201 on that
Would you mind using the --deterministic option for your trials with and without the --save-sample option? As the logs are cut out for the very initial epochs, they can vary independently of the --save-sample option due to random initialization.
I am attaching two logs of the two experiments I regenerated using very similar train.py parameters to yours, with only the addition of the --deterministic option. Both experiment logs output identical mAP values (the second one only has the additional --save-sample option).
python train.py --deterministic --print-freq 200 --epochs 20 --optimizer Adam --lr 0.050 --model ai85tinierssd --use-bias --dataset SVHN_74_simplified --device MAX78000 --wd 0 --momentum 0.9 --weight-decay 5e-4 --obj-detection --obj-detection-params parameters/obj_detection_params_svhn.yaml --qat-policy policies/qat_policy_svhn_short.yaml --validation-split 0 --confusion --batch-size 16 --data /data/
python train.py --deterministic --print-freq 200 --epochs 20 --optimizer Adam --lr 0.050 --model ai85tinierssd --use-bias --dataset SVHN_74_simplified --device MAX78000 --wd 0 --momentum 0.9 --weight-decay 5e-4 --obj-detection --obj-detection-params parameters/obj_detection_params_svhn.yaml --qat-policy policies/qat_policy_svhn_short.yaml --validation-split 0 --confusion --batch-size 16 --data /data/ --save-sample 10
sample_training_logs.zip
I had the same results with the above scripts. Looking closer at some of my previous tests, it looks like this has been the same across other files. It looks like I'm noticing unrelated fluctuations.
Thanks for your help!
with_sample_2022.12.13-201045.log
without_sample_2022.12.13-101819.log
| gharchive/issue | 2022-12-09T22:22:23 | 2025-04-01T06:37:12.657120 | {
"authors": [
"ryalberti",
"seldauyanik-maxim"
],
"repo": "MaximIntegratedAI/ai8x-training",
"url": "https://github.com/MaximIntegratedAI/ai8x-training/issues/199",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1282921501 | fix testValidAVLSame2
testValidAVLSimple2() creates a tree with 3 nodes, with the value 3 as root, 2 as left child and 1 as right child, and is therefore not a valid tree, because the right value is smaller than its parent. Wrongly, however, it is considered correct in the test.
-> not a valid tree because the right value must be equal to or bigger than its parent?
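Purely for illustration (the project's tests are Java; the Node and is_valid_bst names below are made up, and the bounds are inclusive so equal keys are allowed on either side), the invariant being discussed applied to the 3/2/1 tree:
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def is_valid_bst(node, low=float("-inf"), high=float("inf")):
    # Every key in a subtree must stay within the bounds inherited from its ancestors.
    if node is None:
        return True
    if not (low <= node.value <= high):
        return False
    return (is_valid_bst(node.left, low, node.value) and
            is_valid_bst(node.right, node.value, high))

# The tree from the test: 3 as root, 2 as left child, 1 as right child.
tree = Node(3, left=Node(2), right=Node(1))
print(is_valid_bst(tree))  # False -- the right child 1 is smaller than its parent 3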
Thank you! You are absolutely right. I don't know why I didn't notice that. 😆
Great work, thanks for your contribution 🐧
| gharchive/pull-request | 2022-06-23T20:43:39 | 2025-04-01T06:37:12.659185 | {
"authors": [
"MaximilianAnzinger",
"WoodyLetsCode",
"flooxo"
],
"repo": "MaximilianAnzinger/gad2022-tests",
"url": "https://github.com/MaximilianAnzinger/gad2022-tests/pull/30",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
338814088 | A question about the train.
Hello, when training, do I need to train the three parts of the network separately? I see your code has three files, train_lanenet.py, train_encoder_decoder.py and train_lanenet_instance_segmentation.py, and the three parts are not trained jointly.
@luyunxi train_lanenet.py is the joint training script; you can take a closer look at the code
| gharchive/issue | 2018-07-06T05:53:36 | 2025-04-01T06:37:12.678968 | {
"authors": [
"MaybeShewill-CV",
"luyunxi"
],
"repo": "MaybeShewill-CV/lanenet-lane-detection",
"url": "https://github.com/MaybeShewill-CV/lanenet-lane-detection/issues/16",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
868089871 | LRAPA_PA_FRM_fit.Rmd
This will be a top level RMarkdown document that summarizes what we've found in recent explorations with Amazon_Park.
I expect you may not finish before the end of the month and I will want to spend some time on a final pass, but I think you can put together a pretty good article this week.
This will not need to be completely "reproducible" -- you don't need to include all the code required to download data and create datasets. In the final blog post version, I'll probably go through and clean up/rename various files in reports/OR/LRAPA/ and have links to those files. This version is just a draft to figure out how best to tell the story.
Basic outline:
[ ] Background -- discussion of PA vs FRM monitors and how lm() can be used to "correct" PA pm25 so that it more closely matches FRM pm25.
[ ] LRAPA -- discussion what LRAPA is, how they have co-located sensor-monitor pairs.
[ ] Amazon Park -- a few plots showing that this sensor is basically good and matches the FRM data quite well
[ ] Linear fit -- Using lm(pm25_monitor ~ pm25 + humidity) and a 5-day window to created our model. Then use predict() to create corrected (aka "fitted") values. Code snippet to show what we are doing. For a one-week window in July, show FRM pm25, PA pm25 and corrected (PA values with the lm() calculated linear correction applied.)
[ ] EPA correction -- show a 4-month plot with PA pm25, FRM pm25 and EPA "corrected" pm25 (PA values with EPA linear correction applied.); This plot should show that the EPA "over corrects" during the smoky period.
[ ] Real-time linear fit -- Show the fit parameters for EPA (straight lines) and for real-time lm(). Pick ylim() that highlights the pm25 and humidity values. Don't worry about the smoky period when the intercept goes crazy.
[ ] Real-time vs EPA -- Discussion and plots showing PA pm25, FRM pm25, EPA corrected pm25 and real-time corrected pm25 for one week in July and one smoky week in October
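A rough conceptual sketch of the linear-fit step above, using synthetic numbers and plain NumPy least squares; the report itself uses R's lm() and predict(), so every name and coefficient below is made up:
import numpy as np

rng = np.random.default_rng(0)
pm25_pa = rng.uniform(5, 60, 100)          # raw PurpleAir readings (synthetic)
humidity = rng.uniform(20, 90, 100)        # co-located humidity (synthetic)
pm25_frm = 0.55 * pm25_pa - 0.08 * humidity + 6 + rng.normal(0, 1, 100)  # pretend FRM

# Design matrix with an intercept, mirroring lm(pm25_monitor ~ pm25 + humidity)
X = np.column_stack([np.ones_like(pm25_pa), pm25_pa, humidity])
coef, *_ = np.linalg.lstsq(X, pm25_frm, rcond=None)

corrected = X @ coef                       # the "fitted"/corrected PA values
print(coef)                                # intercept, pm25 slope, humidity slope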
Jon, when I try to use the LRAPA_monitors object just created, the sensorMonitorData function we created throws an error. I tried with the saved and then loaded LRAPA_monitors and that works fine. Maybe something needs to be fixed in the function?
I wrote the entire rmd for everyone, so that people do not have to have files saved on their computer. I'm pasting the code below, or see lines 101-163 in the rmd.
For the rest, the rmd runs smoothly. I left your requests in the rmd for easy checking (see "TO DO:"), but clearly they need to be removed. I also left all R blocks in eval = TRUE mode for easy checking, but some of them will need to be turned off.
Get monitors data
LRAPA_monitors <- monitor_loadLatest() %>%
monitor_subset(stateCodes = "OR") %>%
monitor_subsetBy(countyName == "Lane")
archiveDir <- "C:/Users/astri/Mirror/Mazamascience/Projects/Data/LRAPA"
LRAPA_monitors <- get(load(file.path(archiveDir, "LRAPA_monitors.rda")))
Create dataframe (df): Amazon Park PA one-week data -- July, 2020
NOTE: source functions in the "Setup" R block first!
sensorMonitorData_07 <-sensorMonitorData(pat = Amazon_Park,
ws_monitor = LRAPA_monitors,
monitorID = monitorID,
startdate = 20200701,
enddate = 20200706)
| gharchive/issue | 2021-04-26T19:43:00 | 2025-04-01T06:37:12.687469 | {
"authors": [
"AstridSanna",
"jonathancallahan"
],
"repo": "MazamaScience/reports",
"url": "https://github.com/MazamaScience/reports/issues/9",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
284630836 | Tank MK2 With Entity Data Fix
Fixes the problem of the tank Mk2 losing items in it's equipment grid.
Thanks you ^^
You're welcome. Do you know how to fix the grid size problem indicated in my issue? I haven't been able to figure out how to do that.
| gharchive/pull-request | 2017-12-27T01:30:47 | 2025-04-01T06:37:12.704162 | {
"authors": [
"McGuten",
"Ramdat"
],
"repo": "McGuten/5DimsFactorioMods",
"url": "https://github.com/McGuten/5DimsFactorioMods/pull/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
165957927 | [1.10.2] Crystal state bug
The Crystal in question was a 100/100/100 crystal. It was on a pedestal, with a generator active on it.
The crystal was broken, whilst active, and on the pedestal. This resulted in the crystal having a permanent glow. Whenever the crystal is placed again in the world, be it raw, or pedestal, it is glowing, and classifies as 'on'. However, when using the generator it will not 'fire' at the crystal. Therefore, the crystal becomes a useless paperweight, incapable of generating power.
There is currently no way I can see to reverse this permanent state other than jumping into and modifying the metadata NBT data attached to the entity.
This was using the 1.10 version of Deep Resonance. Single crystal setup, using 3-long generator whilst in a flat dimension ( for informations sake ).
EDIT: Eventual fix was using NBTExplorer to location the item and set glowing: 1 to glowing: 0 as suspected.
Fixed next release
| gharchive/issue | 2016-07-17T03:08:44 | 2025-04-01T06:37:12.706391 | {
"authors": [
"McJty",
"awilliamson"
],
"repo": "McJty/DeepResonance",
"url": "https://github.com/McJty/DeepResonance/issues/92",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
90824327 | [Request] Ability to use crafter to compress 2x2 or 3x3
It would be nice if a crafter could just take whatever is put in it and automatically craft it 2x2 or 3x3.
What exactly do you mean by this?
For example if a crafter is set to compress 2x2 then if you gave it wood
planks it would churn out crafting tables. If you set it to 3x3 it would
churn out metal blocks if you fed it ingots.
That sounds more like something for a different machine. In the GUI of the crafter it is not really easy to define this type of recipe atm.
| gharchive/issue | 2015-06-25T01:37:23 | 2025-04-01T06:37:12.709871 | {
"authors": [
"McJty",
"permdaddy"
],
"repo": "McJty/RFTools",
"url": "https://github.com/McJty/RFTools/issues/221",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
927882796 | generalize DarkSwitch to ThemeSwitch
Currently, DarkSwitch is used with Tailwind's class dark mode. All it really does is apply a class to <body>, which would play well with CSS variables:
Each theme would have a css class name to apply to <body> and what is essentially a RenderFragment: Currently SVGs and light / dark / system. Generalizing this to Themes rather than just dark/light, the C# and JS would be largely the same, but the UI would differ - perhaps a dropdown if in a menu, or radio buttons if in a settings page. The RenderFragments would be text instead of the current light/system/dark SVGs.
tl;dr: arbitrary UI; click on a RenderFragment and its associated class name is applied to <body>
On second thought, while it's a good idea in the context of useful Components, the amount of complexity it would add to this project is too much to demonstrate a simple aspect of Tailwind: ability to use var(--some-variable). Keep it simple.
| gharchive/issue | 2021-06-23T05:48:10 | 2025-04-01T06:37:12.722204 | {
"authors": [
"McNerdius"
],
"repo": "McNerdius/TailBlazor",
"url": "https://github.com/McNerdius/TailBlazor/issues/25",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
183346040 | Request static libs in build scripts
Signed-off-by: Maxime Gervais gervais.maxime@gmail.com
It's working but you should tell people in the Readme to set their LD_LIBRARY_PATH if they are using shared libraries.
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$BUILD_DIR/ZenLib/Project/GNU/Library/.libs:$BUILD_DIR/MediaInfoLib/Project/GNU/Library/.libs
Not related to your patch but...
The instruction to compile without online checker is wrong.
prepare should be used with NO_LIBCURL=1, configure without the --with-libcurl=...
It's working but you should tell people in the Readme to set their LD_LIBRARY_PATH if they are using shared library.
I guess people know how to use .so files if they have them a bit everywhere, but adding that to the readme does not hurt.
prepare should be used with NO_LIBCURL=1, configure without the --with-libcurl=...
For @g-maxime :)
| gharchive/pull-request | 2016-10-17T07:41:29 | 2025-04-01T06:37:12.739878 | {
"authors": [
"JeromeMartinez",
"g-maxime",
"tribouille"
],
"repo": "MediaArea/MediaConch_SourceCode",
"url": "https://github.com/MediaArea/MediaConch_SourceCode/pull/417",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
366652142 | Add custom Info.plist for direct distribution
Signed-off-by: Maxime Gervais gervais.maxime@gmail.com
I don't like the idea of 2 plists to maintain, but I guess it is the quickest method...
| gharchive/pull-request | 2018-10-04T07:13:15 | 2025-04-01T06:37:12.741018 | {
"authors": [
"JeromeMartinez",
"g-maxime"
],
"repo": "MediaArea/MediaConch_SourceCode",
"url": "https://github.com/MediaArea/MediaConch_SourceCode/pull/655",
"license": "BSD-2-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2241389333 | The showPostSource parameter has no effect.
I set showPostSource to 1, but there is still no link to the issue. Very strange...
@ttxs8 Try doing a full site regeneration once
I ran into the same problem https://astro-lee.github.io/
@Astro-Lee @ttxs8 Found the problem: the example about showPostSource in the quick start guide was written incorrectly. Configure it as 1, without double quotes; the following is correct.
"showPostSource":1,
I see, thanks~
| gharchive/issue | 2024-04-13T06:59:30 | 2025-04-01T06:37:12.757818 | {
"authors": [
"Astro-Lee",
"Meekdai",
"ttxs8"
],
"repo": "Meekdai/Gmeek",
"url": "https://github.com/Meekdai/Gmeek/issues/77",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1332820426 | feat(armv7): optimize armv7 assembly.
feat(armv7): optimize armv7 assembly to make statistics more precise.
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. chenxiaoqiang seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you have already a GitHub account, please add the email address used for this commit to your account. You have signed the CLA already but the status is still pending? Let us recheck it.
@ruiyoua Thanks, please sign the CLA. Otherwise, we can NOT merge your code.
| gharchive/pull-request | 2022-08-09T07:17:22 | 2025-04-01T06:37:12.761146 | {
"authors": [
"CLAassistant",
"Li-Ming-xin",
"ruiyoua"
],
"repo": "MegEngine/MegPeak",
"url": "https://github.com/MegEngine/MegPeak/pull/18",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
436578099 | Flex layout: when the flex property is used for proportional (weighted) layout, box-sizing: border-box has no effect
[Brief problem description]
Flex layout: when the flex property is used for proportional (weighted) layout, if padding or margin is set the element exceeds its proportional share, and setting box-sizing: border-box has no effect
mpvue version: 2.00
Minimal reproduction code:
<template>
<div>
<div style="width: 10rem;background-color: #2196F3">
我是一半宽度(10rem),用来校准
</div>
<div style="display: flex;flex-direction: row;">
<div style="flex: 1;box-sizing: border-box;background-color: gray">我flex1,没有padding</div>
<div style="flex: 1;box-sizing: border-box;background-color: gold;padding: 0 10px">我flex1,有padding</div>
</div>
<div style="width: 10rem;background-color: #2196F3;padding:0 10px;box-sizing: border-box">
我是一半宽度,设置了padding
</div>
</div>
</template>
Screenshots or GIFs:
This is probably not caused by the framework itself; flex layout calculation is fairly complex. You can refer to the following to understand it:
https://teamtreehouse.com/community/does-boxsizing-borderbox-work-with-flexbox
https://stackoverflow.com/questions/37353792/flex-basis-and-box-sizing
https://stackoverflow.com/questions/34753491/how-does-flex-shrink-factor-in-padding-and-border-box
| gharchive/issue | 2019-04-24T09:01:22 | 2025-04-01T06:37:12.775724 | {
"authors": [
"Dewyzee",
"phoenixsky"
],
"repo": "Meituan-Dianping/mpvue",
"url": "https://github.com/Meituan-Dianping/mpvue/issues/1568",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2298435424 | Add Tailwind to advance options.
By submitting this pull request, I confirm that my contribution is made under the terms of the MIT license.
Problem/Feature
Resolves #186
Description of Changes:
This PR adds an option to the --advanced flag that lets users add the Tailwind CSS framework to the project
This change creates package.json, tailwind.config.js, input.css and output.css files
pnpm is used to install tailwind via package.json as I couldn't test with npm
updated Makefile to run tailwind for the build recipe, which is also used in .air.toml
Checklist
[x] I have self-reviewed the changes being requested
[x] I have updated the documentation (if applicable)
@PrajvalBadiger Thanks for submitting this PR. I wanted to ask what you thought about changing the implementation to use the Tailwind Standalone CLI instead of installing the package with Node.js?
I've used the standalone CLI in my projects and it works well. We'd have to add some language to the output saying to make sure they have it installed, but we are already doing something similar with Templ so I don't think that will be an issue.
The other thing we probably want to add is some checks to make sure that the tailwind option is not used without the htmx option.
@PrajvalBadiger Thanks for submitting this PR. I wanted to ask what you thought about changing the implementation to use the Tailwind Standalone CLI instead of installing the package with Node.js?
I've used the standalone CLI in my projects and it works well. We'd have to add some language to the output saying to make sure they have it installed, but we are already doing something similar with Templ so I don't think that will be an issue.
@briancbarrow, Thanks for the review. I was not aware of the Tailwind Standalone CLI and had used Node.js for Tailwind in my projects. Using the standalone CLI would remove the heavy dependency on Node.js.
The other thing we probably want to add is some checks to make sure that the tailwind option is not used without the htmx option.
There are a few ways we can implement this:
By default, selecting the tailwind option is disabled, and the user can only select the tailwind option if the htmx option is selected
If the tailwind option is selected, htmx is automatically selected
If only the tailwind option is used without the htmx option, the program fails and prints an error. In this case the user has to redo all the steps and select the htmx option together with the tailwind option.
I would be good with either the first or second option.
I have implemented the second option: if the tailwind option is selected, htmx is automatically selected.
The Node.js dependency is dropped and the Tailwind standalone CLI can be used now.
Added a note to remind users to install the Tailwind standalone CLI.
Thanks @PrajvalBadiger! The only thing left is it looks like there are commits on here that are from previously merged PRs. Can you do a rebase and only keep the commits that are relevant to this PR?
| gharchive/pull-request | 2024-05-15T17:16:03 | 2025-04-01T06:37:12.788154 | {
"authors": [
"PrajvalBadiger",
"briancbarrow"
],
"repo": "Melkeydev/go-blueprint",
"url": "https://github.com/Melkeydev/go-blueprint/pull/233",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
93340030 | use of undeclared type "SpringImageView"
use of undeclared type "SpringImageView"
I am really new to Swift, so what shall I do? Please help.
If you have already included the Spring framework, try to write
import Spring
at the top of the Swift file where you want to use the SpringImageView :+1:
I added import Spring and am still getting the error..
I only moved the Spring files into my project and didn't add anything else.. am I missing something?
Please make sure to select Create groups and copy items if needed when dropping the Spring folder.
Thank you a lot! One more thing: my image doesn't animate. Do I have to write some code, or can I just use SpringImageView to animate it?
In storyboard make sure to enable auto start. In code, use animate()
I know I am a pain.. I'm really sorry, it's showing another error:
http://i.imgur.com/FucacJr.png
EXC_BAD_ACCESS usually means that an object is not initialised or has already been released.
Thanks for your reply, but I don't know what to do.
That's hard to tell without seeing the code. It can be nearly everything and isn't directly related to the Spring framework.
http://i.imgur.com/lmFfes9.png
Maybe your SpringImageView is not connected to the Storyboard.
Hmm, if you would like to tell me the solution, that would be amazing.
Thanks
Anyone?
Since the initial issue seems to be solved, I'll close it.
If you have any other issues related to the Spring framework, please let us know :+1:
| gharchive/issue | 2015-07-06T18:12:33 | 2025-04-01T06:37:12.822029 | {
"authors": [
"MengTo",
"oshiltawi",
"schneiderandre"
],
"repo": "MengTo/Spring",
"url": "https://github.com/MengTo/Spring/issues/80",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
93340030 | getting "Generating SSR bundle failed"
MacBook-Pro:designcodeweb richardchu$ gatsby develop
success open and validate gatsby-configs - 0.021s
success load plugins - 0.118s
success onPreInit - 0.022s
success initialize cache - 0.018s
success copy gatsby files - 0.100s
success onPreBootstrap - 0.037s
success createSchemaCustomization - 0.001s
success Checking for changed pages - 0.000s
success source and transform nodes - 0.219s
success building schema - 0.212s
info Total nodes: 150, SitePage nodes: 1 (use --verbose for breakdown)
success createPages - 0.002s
success Checking for changed pages - 0.000s
success createPagesStatefully - 0.055s
success update schema - 0.033s
success write out redirect data - 0.001s
success onPostBootstrap - 0.001s
info bootstrap finished - 4.257s
success onPreExtractQueries - 0.001s
success extract queries from components - 0.161s
success write out requires - 0.019s
success run static queries - 0.015s - 1/1 65.80/s
success run page queries - 0.015s - 5/5 323.44/s
error Generating SSR bundle failed
[BABEL] /Users/richardchu/Desktop/learning/react-for-designer/designcodeweb/.cache/develop-static-entry.js: Cannot find module '@babel/plugin-transform-property-literals'
Gatsby CLI version: 2.12.117
Gatsby version: 2.24.66
Never mind, npm install seems to fix the issue.
| gharchive/issue | 2020-11-06T14:20:47 | 2025-04-01T06:37:12.826712 | {
"authors": [
"richardchunyc"
],
"repo": "MengTo/gatsby-starter-designcode",
"url": "https://github.com/MengTo/gatsby-starter-designcode/issues/1",
"license": "0BSD",
"license_type": "permissive",
"license_source": "github-api"
} |
898013141 | Re-add things that were accidentally removed
It seems that you recently force-pushed and deleted some things. This PR adds them back (and does a few minor things).
After merging, you can run git fetch and then git reset --hard to the commit hash of the merge commit to get your local version updated with everything from here.
Thanks!
| gharchive/pull-request | 2021-05-21T12:49:24 | 2025-04-01T06:37:12.829667 | {
"authors": [
"MennoMax",
"aaronfranke"
],
"repo": "MennoMax/gift",
"url": "https://github.com/MennoMax/gift/pull/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2608372501 | chore: move more things in Changes/Local Change object
Depends-On: #506
This pull request is part of a stack:
chore: add some datastructure (#506)
chore: move more things in Changes/Local Change object (#507) 👈
chore: move retrieval of change IDs and pull inside get_remote_changes() (#508)
fix: all short sha are wrong (#509)
| gharchive/pull-request | 2024-10-23T12:12:05 | 2025-04-01T06:37:12.844560 | {
"authors": [
"sileht"
],
"repo": "Mergifyio/mergify-cli",
"url": "https://github.com/Mergifyio/mergify-cli/pull/507",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2166589743 | Mobile app redesign
Due to the redesign of the mobile app, we will need to update the pictures in the docs:
Tutorials
[ ] https://merginmaps.com/docs/tutorials/capturing-first-data/
[ ] https://merginmaps.com/docs/tutorials/opening-surveyed-data-on-your-computer/
[ ] https://merginmaps.com/docs/tutorials/creating-a-project-in-qgis/#configure-attributes-forms (form preview)
[ ] https://merginmaps.com/docs/tutorials/mobile/
[ ] https://merginmaps.com/docs/tutorials/further-project-customisation/
Install & Sign Up
[ ] https://merginmaps.com/docs/setup/install-mobile-app/
[ ] https://merginmaps.com/docs/setup/sign-up-to-mergin-maps/#from-mergin-maps-mobile-app
Manage Account & Project
[ ] https://merginmaps.com/docs/manage/workspaces/#switch-workspaces-in-mergin-maps-mobile-app
[ ] https://merginmaps.com/docs/manage/create-project/#create-a-project-in-mergin-maps-mobile-app
[ ] https://merginmaps.com/docs/manage/plugin-multi-server-use/#custom-server-configuration-in-mergin-maps-mobile-app
Setup GIS Project
[ ] https://merginmaps.com/docs/gis/features/#image (form preview)
[ ] https://merginmaps.com/docs/gis/search_data/
[ ] https://merginmaps.com/docs/gis/setup_themes/#map-themes-in-mergin-maps-mobile-app
[ ] https://merginmaps.com/docs/gis/snapping/
[ ] https://merginmaps.com/docs/gis/proj/
Configure Survey Layer
[ ] https://merginmaps.com/docs/layer/settingup_forms/
[ ] https://merginmaps.com/docs/layer/settingup_forms_settings/
[ ] https://merginmaps.com/docs/layer/form-layout/
[ ] https://merginmaps.com/docs/layer/exif_metadata/
[ ] https://merginmaps.com/docs/layer/settingup_forms_photo/#photos-in-mergin-maps-mobile-app
[ ] https://merginmaps.com/docs/layer/attach-multiple-photos-to-features/
[ ] https://merginmaps.com/docs/layer/one-to-n-relations/
[ ] https://merginmaps.com/docs/layer/external-link/
[ ] https://merginmaps.com/docs/layer/working_with_nonspatial_data/
Fieldwork Tips
[ ] https://merginmaps.com/docs/field/input_ui/
[ ] https://merginmaps.com/docs/field/external_gps/
[ ] https://merginmaps.com/docs/field/gps_accuracy/
[ ] https://merginmaps.com/docs/field/tracking/#using-position-tracking-in-mergin-maps-mobile-app
[ ] https://merginmaps.com/docs/field/autosync/
[ ] https://merginmaps.com/docs/field/layers/
[ ] https://merginmaps.com/docs/field/input_features/
[ ] https://merginmaps.com/docs/field/reuse-last-values/
[ ] https://merginmaps.com/docs/field/stake-out/
[ ] https://merginmaps.com/docs/field/broken-project/
Support & Legal
[ ] https://merginmaps.com/docs/misc/troubleshoot/#diagnostic-log-on-mergin-maps-mobile-app
Also bear in mind that the demo projects are deleted, and on a clean start you need to create the first project yourself.
| gharchive/issue | 2024-03-04T11:14:59 | 2025-04-01T06:37:12.862629 | {
"authors": [
"PeterPetrik",
"alex-cit"
],
"repo": "MerginMaps/docs",
"url": "https://github.com/MerginMaps/docs/issues/437",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2139810454 | [Bug] Does 1.18.1 not support vmess + tcp + tls? JMS's tcp vmess has no special header and needs SSL verification to be skipped. proxy-providers seems to have a bug..
Verify steps
[X] Ensure you are using the latest version of Mihomo or Mihomo Alpha from this repository.
[ ] Is this something you can debug and fix? Send a pull request! Bug fixes and documentation fixes are welcome.
[X] I have searched on the issue tracker for a related issue.
[X] I have tested using the dev branch, and the issue still exists.
[X] I have read the documentation and was unable to solve the issue.
[X] This is an issue of the Mihomo core per se, not of the derivatives of Mihomo, like OpenMihomo or KoolMihomo.
Mihomo version
Docker, latest version 1.18.1
What OS are you seeing the problem on?
openwrt + docker
Mihomo config
OpenWrt Docker TUN deployment; docker pull metacubex/mihomo:latest
Using the official example configuration directly: https://wiki.metacubex.one/example/conf/
######### anchors start #######
# proxy-group related
pr: &pr {type: select, proxies: [默认,香港,台湾,日本,新加坡,美国,其它地区,全部节点,自动选择,直连]}
# subscription update and latency test settings
p: &p {type: http, interval: 3600, health-check: {enable: true, url: https://www.gstatic.com/generate_204, interval: 300}}
######### anchors end #######
# put your own subscription in url; names must not be duplicated
proxy-providers:
provider1:
<<: *p
url: "这里是JMS官方订阅url....忽略不写..."
ipv6: true
allow-lan: true
mixed-port: 7890
unified-delay: false
tcp-concurrent: true
external-controller: 127.0.0.1:9090
external-ui: ui
external-ui-url: "https://github.com/MetaCubeX/metacubexd/archive/refs/heads/gh-pages.zip"
geodata-mode: true
geox-url:
geoip: "https://mirror.ghproxy.com/https://github.com/MetaCubeX/meta-rules-dat/releases/download/latest/geoip-lite.dat"
geosite: "https://mirror.ghproxy.com/https://github.com/MetaCubeX/meta-rules-dat/releases/download/latest/geosite.dat"
mmdb: "https://mirror.ghproxy.com/https://github.com/MetaCubeX/meta-rules-dat/releases/download/latest/country-lite.mmdb"
find-process-mode: strict
global-client-fingerprint: chrome
profile:
store-selected: true
store-fake-ip: true
sniffer:
enable: true
sniff:
HTTP:
ports: [80, 8080-8880]
override-destination: true
TLS:
ports: [443, 8443]
QUIC:
ports: [443, 8443]
skip-domain:
- "Mijia Cloud"
tun:
enable: true
stack: mixed
dns-hijack:
- "any:53"
auto-route: true
auto-detect-interface: true
dns:
enable: true
listen: :1053
ipv6: true
enhanced-mode: fake-ip
fake-ip-filter:
- "*"
- "+.lan"
- "+.local"
nameserver:
- https://doh.pub/dns-query
- https://dns.alidns.com/dns-query
proxy-server-nameserver:
- https://doh.pub/dns-query
nameserver-policy:
"geosite:cn,private":
- https://doh.pub/dns-query
- https://dns.alidns.com/dns-query
"geosite:geolocation-!cn":
- "https://dns.cloudflare.com/dns-query#dns"
- "https://dns.google/dns-query#dns"
proxies:
- name: "直连"
type: direct
udp: true
proxy-groups:
- {name: 默认, type: select, proxies: [自动选择, 直连, 香港, 台湾, 日本, 新加坡, 美国, 其它地区, 全部节点]}
- {name: dns, type: select, proxies: [自动选择, 默认, 香港, 台湾, 日本, 新加坡, 美国, 其它地区, 全部节点]}
- {name: Google, <<: *pr}
- {name: Telegram, <<: *pr}
- {name: Twitter, <<: *pr}
- {name: Pixiv, <<: *pr}
- {name: ehentai, <<: *pr}
- {name: 哔哩哔哩, <<: *pr}
- {name: 哔哩东南亚, <<: *pr}
- {name: 巴哈姆特, <<: *pr}
- {name: YouTube, <<: *pr}
- {name: NETFLIX, <<: *pr}
- {name: Spotify, <<: *pr}
- {name: Github, <<: *pr}
- {name: 国内, type: select, proxies: [直连, 默认, 香港, 台湾, 日本, 新加坡, 美国, 其它地区, 全部节点, 自动选择]}
- {name: 其他, <<: *pr}
#分隔,下面是地区分组
- {name: 香港, type: select , include-all-providers: true, filter: "(?i)港|hk|hongkong|hong kong"}
- {name: 台湾, type: select , include-all-providers: true, filter: "(?i)台|tw|taiwan"}
- {name: 日本, type: select , include-all-providers: true, filter: "(?i)日|jp|japan"}
- {name: 美国, type: select , include-all-providers: true, filter: "(?i)美|us|unitedstates|united states"}
- {name: 新加坡, type: select , include-all-providers: true, filter: "(?i)(新|sg|singapore)"}
- {name: 其它地区, type: select , include-all-providers: true, filter: "(?i)^(?!.*(?:🇭🇰|🇯🇵|🇺🇸|🇸🇬|🇨🇳|港|hk|hongkong|台|tw|taiwan|日|jp|japan|新|sg|singapore|美|us|unitedstates)).*"}
- {name: 全部节点, type: select , include-all-providers: true}
- {name: 自动选择, type: url-test, include-all-providers: true, tolerance: 10}
rules:
- GEOIP,lan,直连,no-resolve
- GEOSITE,biliintl,哔哩东南亚
- GEOSITE,ehentai,ehentai
- GEOSITE,github,Github
- GEOSITE,twitter,Twitter
- GEOSITE,youtube,YouTube
- GEOSITE,google,Google
- GEOSITE,telegram,Telegram
- GEOSITE,netflix,NETFLIX
- GEOSITE,bilibili,哔哩哔哩
- GEOSITE,bahamut,巴哈姆特
- GEOSITE,spotify,Spotify
- GEOSITE,pixiv,Pixiv
- GEOSITE,CN,国内
- GEOSITE,geolocation-!cn,其他
- GEOIP,google,Google
- GEOIP,netflix,NETFLIX
- GEOIP,telegram,Telegram
- GEOIP,twitter,Twitter
- GEOIP,CN,国内
- MATCH,其他
Mihomo log
Actually, vmess+tls produces no error at all; it's just that the latency test shows no result, the nodes can't connect, and no web page opens.. The Windows v2rayN client has no problem.. (The v2rayN client version officially recommended by JMS is V5.18... which corresponds to Xray 1.5.4 (Xray, Penetrates Everything.) Custom (go1.17.7 windows/386), and I have been using it up to today...)
stdout: time="2024-02-17T03:19:48.688862936Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.689216723Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.689504594Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.689836505Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.690146251Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.69085499Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.69158473Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.692339844Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.692712005Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.693056458Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.693321579Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.693547326Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.693759656Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.694004361Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.694237691Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.694517978Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.694790683Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.695031596Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.69528855Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.695617545Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.696779237Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.69709569Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.697573142Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.698719375Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T03:19:48.699847817Z" level=warning msg="To use xtls-rprx-vision, ensure your server is upgrade to Xray-core v1.8.0+"
stdout: time="2024-02-17T04:25:14.816407271Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:15.002071779Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:15.197407939Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:15.489233119Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:15.804417554Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:16.120199322Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:17.277260804Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:18.420052516Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43472 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:19.527322323Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:19.669251481Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:20.877100507Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:21.071899418Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:21.261579002Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:21.511552119Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:21.888827917Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:22.405146834Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:23.197550287Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
stdout: time="2024-02-17T04:25:24.321044482Z" level=warning msg="[TCP] dial Google (match RuleSet/google_domain) 192.168.2.103:43475 --> www.google-analytics.com:443 error: tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match example.com"
Description
The proxy-providers auto-subscription in the config:
proxy-providers: provider1: <<: *p url: "https://jjsubmarines.com/members...
No other problems; all the nodes are TLS... After all, JMS nodes are fairly simple, but as a big provider they are not afraid of being blocked...
Base64-decoding the official JMS subscription gives:
{ ps: 'JMS-xxxxx@XXs4.jjcruises.com:xxx', port: 'xxxx', id: 'xxxxxxx', aid: 0, net: 'tcp', type: 'none', tls: 'tls', sni: 'example.com', add: '42.xxxxx' }
JMS's TLS SNI is fixed to example.com
The ss nodes respond to the latency test; all the vmess tls ones don't work..
From the log it looks like the provider side needs to support Xray-core v1.8.0+??? So... is there nothing to be done from the user's side?
In the dashboard, the vmess tls tcp nodes are labeled vmess:: xudp, and the ss ones ss: udp
The ss: udp ones all work normally,
while the vmess:: xudp ones don't work at all. And the configuration is actually wrong: JMS's vmess only supports tcp.. net: 'tcp'
verify certificate: x509: certificate is not valid for any names, but wanted to match example.com
Manually selecting one of the JMS vmess tls nodes that gives no latency-test response (they use TLS SNI example.com), the second part of the log shows a request to Google triggering the verify certificate: x509 error.
You can add an override section to provider1 after the url and set skip-cert-verify: true.
| gharchive/issue | 2024-02-17T03:36:22 | 2025-04-01T06:37:12.936394 | {
"authors": [
"gowy222",
"wwqgtxx"
],
"repo": "MetaCubeX/mihomo",
"url": "https://github.com/MetaCubeX/mihomo/issues/1040",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2401614999 | fix: clear encryption salt and key
Explanation
Currently, when a user locks the app and then resets their password, they are not able to log in wth their new password.
This is because the encryption key was generated with the old password and not cleared.
The solve this we will remove the encryption key from state when the user locks the wallet as well as when #createNewVaultWithKeyring is called.
This allows for a new encryption key to be generated with the new password when the user logs in again and submitPassword is called.
References
Fixes https://github.com/MetaMask/metamask-extension/issues/25696
Moves path from this PR into the KeyringController
Changelog
@metamask/keyring-controller
FIXED: Reset encryption key when wallet is locked or when new vault is created to ensure that encryption key is always based on most recent password.
Checklist
[x] I've updated the test suite for new or updated code as appropriate
[x] I've updated documentation (JSDoc, Markdown, etc.) for new or updated code as appropriate
[x] I've highlighted breaking changes using the "BREAKING" category above as appropriate
@MajorLift good suggestion. I have made those changes in the latest commit.
This makes sense when the intention is to create a new vault with createNewVaultAndKeychain. But wouldn't the user be best served using the changePassword method? The purpose of this method is exactly changing only the password, as opposed to recreating the entire vault manually.
On the other hand, caching the encryption key allows us to make decryption faster as we skip derivation (the most time-consuming step), so clearing it out on lock would lose its purpose
But wouldn't the user be best served using the changePassword method? The purpose of this method is exactly changing only the password, as opposed to recreating the entire vault manually.
changePassword is for when the wallet is unlocked. In the "forgot password" flow, the wallet is locked so we cannot use that method.
| gharchive/pull-request | 2024-07-10T20:19:00 | 2025-04-01T06:37:12.949801 | {
"authors": [
"Gudahtt",
"mikesposito",
"owencraston"
],
"repo": "MetaMask/core",
"url": "https://github.com/MetaMask/core/pull/4514",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
657102998 | Where can I find typings for this?
Provide a way to get typings for TypeScript projects.
Yea
@aksdevac I apologize for the delay on this. Types will be added to this package via #11, which will close this issue. We will publish an update to this package as soon as that's merged.
It uses types from: https://github.com/MetaMask/inpage-provider
Please merge this PR! Thanks!
We shipped types via #18 in 1.2.0.
Thanks for providing types.
I just noticed that the return type of the detectEthereumProvider is Promise<unknown>
Shouldn't it be something like this?: https://github.com/MetaMask/inpage-provider/blob/main/index.d.ts#L29
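For anyone narrowing the Promise<unknown> result in the meantime, a possible pattern is sketched below. The Eip1193Provider interface here is an assumption written for illustration, not the package's official typings (those live in the inpage-provider repo linked above):

import detectEthereumProvider from '@metamask/detect-provider';

// Minimal, assumed shape of an EIP-1193 provider; the real typings are in the
// inpage-provider package linked above.
interface Eip1193Provider {
  isMetaMask?: boolean;
  request(args: { method: string; params?: unknown[] | object }): Promise<unknown>;
}

async function getProvider(): Promise<Eip1193Provider | null> {
  // detectEthereumProvider() currently resolves to Promise<unknown>,
  // so the caller narrows it with a cast.
  const provider = (await detectEthereumProvider()) as Eip1193Provider | null;
  return provider && provider.isMetaMask ? provider : null;
}

getProvider().then((p) => console.log(p ? 'MetaMask found' : 'no provider'));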
| gharchive/issue | 2020-07-15T06:40:59 | 2025-04-01T06:37:12.953256 | {
"authors": [
"Lukasz257",
"aksdevac",
"pointtoken",
"rekmarks",
"schoenwaldnils"
],
"repo": "MetaMask/detect-provider",
"url": "https://github.com/MetaMask/detect-provider/issues/6",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
1924833386 | Improve on WCAG "Robust" category - a11y audit
Go through the following criteria for WCAG's Robust section and improve their issues to meet WCAG requirements:
Level A
4.1.1 Parsing (used this html validator):
[ ] PDFs are structured a bit strangely and it affects screen reader performance. Instead of a paragraph being its own div, one word or a whole phrase in the middle of a sentence will be.
[x] The calendar pdf also reads things out of order, but it doesn’t look like we have control over that.
[x] We’ve got some img and other tags that have stray, unneeded ending tags, like the Metro logo img tag
[x] There are buttons that have href attributes. Let's try to change those to just a tags
[x] Figure out if we need autocomplete attributes on radio buttons. If we do then keep them. If not, we can remove.
[x] Many small tags are encasing divs, whereas the divs should encase the smalls
[x] Some table elements are missing a scope attribute
[x] The board report search bar has an aria-describedby that doesn’t lead to anything. This is identical to the issue for 3.3.2 Labels or Instructions
[x] Run the validator again to see what else can use some improvement.
4.1.2 Name, Role, Value:
[x] Consider replacing the title attribute on various elements with a tooltip instead. Suggestion taken from this site.
[x] Many divs may be able to be changed into more semantic tags like sections, articles, etc.
[x] The buttons above the board members listing page map are labels instead of buttons. Consider making changing that to be more semantic. These labels contain inputs, so this is semantically correct.
Level AA
4.1.3 Status Messages:
[x] Can add an aria role of alert to errors
Connects #1012
Closed by #1088
| gharchive/issue | 2023-10-03T19:47:50 | 2025-04-01T06:37:13.060290 | {
"authors": [
"xmedr"
],
"repo": "Metro-Records/la-metro-councilmatic",
"url": "https://github.com/Metro-Records/la-metro-councilmatic/issues/1024",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
239843737 | explain edge cases where a route skips stops
See, for example, route 29 in SF. At the start of peak periods it skips some stops but not others (e.g. stop_id 16557), making the stop_sequence for this stop within the am_peak period take multiple values (27 (twice) and 62 (28 times)) after this subset:
https://github.com/MetropolitanTransportationCommission/RegionalTransitDatabase/blob/master/R/r511.R#L138
started this here:
see https://github.com/MetropolitanTransportationCommission/RegionalTransitDatabase/commit/bfc80a26fdf9620f73f4be22eda8dbd675ad5271
| gharchive/issue | 2017-06-30T17:36:28 | 2025-04-01T06:37:13.067366 | {
"authors": [
"tombuckley"
],
"repo": "MetropolitanTransportationCommission/RegionalTransitDatabase",
"url": "https://github.com/MetropolitanTransportationCommission/RegionalTransitDatabase/issues/63",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
688900346 | Hey, kntl.js has a problem
kntl.js has a problem: the Chrome folder path is hard-coded as it ships, but the location differs from laptop to laptop
It can't be edited anymore :( hmm
That's bad, just switch to inside heartz instead
I'm not the one who made it, bro :( I'm just commenting along
So it can't be done? The problem is that my Chrome folder is in Program Files, not Program Files (x86)
Just reinstall Chrome, bro, that should definitely work
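Since the underlying complaint is the hard-coded Chrome path, one possible workaround is sketched below in TypeScript-flavoured Node. It assumes kntl.js launches Chrome through puppeteer, which is an assumption about this repo, and the CHROME_PATH variable name is made up for the example:

import puppeteer from 'puppeteer';

// Assumption: the bot starts a browser via puppeteer. Instead of a fixed
// "C:\\Program Files (x86)\\..." path, let each machine supply its own.
const executablePath =
  process.env.CHROME_PATH ?? // e.g. CHROME_PATH=C:\Program Files\Google\Chrome\Application\chrome.exe
  undefined;                 // undefined falls back to puppeteer's bundled Chromium

async function startBrowser() {
  return puppeteer.launch({
    headless: true,
    executablePath,
  });
}

startBrowser().then(() => console.log('browser started'));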
| gharchive/issue | 2020-08-31T05:05:48 | 2025-04-01T06:37:13.073390 | {
"authors": [
"MFarelSyahtiawan",
"ZefianAlfian",
"dzakijo1"
],
"repo": "MhankBarBar/whatsapp-bot",
"url": "https://github.com/MhankBarBar/whatsapp-bot/issues/1",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1343706570 | Nothing is changing
I installed the app and when I run it nothing changes. I restarted the computer, ran it as administrator, etc., but nothing changed. Can anybody help?
Turning off Windows' "Show Accent Color in Title Bar and Window Border" setting fixed it. Thanks anyway!
| gharchive/issue | 2022-08-18T22:17:40 | 2025-04-01T06:37:13.100191 | {
"authors": [
"lvlzyro"
],
"repo": "MicaForEveryone/MicaForEveryone",
"url": "https://github.com/MicaForEveryone/MicaForEveryone/issues/119",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1535089820 | 'Remove Rule' button not showing when sidebar is not collapsed and language set other than English
Describe the bug
The Remove Rule button only shows in the sidebar when the window is resized narrow enough for the sidebar to collapse, and this happens with every language set except English (the default). It will not show when the window is wider and the sidebar is not collapsed. Also, the Add Rule text flashes when resizing.
To Reproduce
Set a language other than English
Resize the window wider so that the sidebar isn't collapsed
The Remove Rule button is not shown
Resize the window narrower so that the sidebar is collapsed
The Remove Rule button is shown in the sidebar
Expected behavior
The Remove Rule button should be shown regardless of the set language, the size of the window, or whether the sidebar is collapsed
Screenshots
n/a, See video: https://youtu.be/VNO2wyX9_44
Windows Version
Windows 11 22H2 build 22621.1105 (Main Release)
Forgot to mention (I point it out in the video): I can resize the window, but the maximize button is disabled??
Thanks for reporting.
Maximize button is disabled on purpose.
I'll check what's wrong with the remove button. I guess it's because there's not enough space for it to show up.
It only happens in some languages (i.e. English and Chinese works fine, but not Nederlands)
Weird. The button is completely missing.
@xmine64 your hypothesis is right. When the button labels get long enough, the remove button will move into the secondary commands section. However, we have hidden the entry way to secondary commands (by disabling the overflow button), so it shows up to the user as being completely missing.
When the button labels get long enough, the remove button will move into the secondary commands section.
We need to hide labels. We will need some room to add increase/decrease priority buttons anyway.
I'm not quite sure about the flashing text though.
It happens during resize👀
How about moving the CommandBar outside the SplitPane when the pane is visible without an overflow button?
Using English here and it still doesnt show, i think it affects all languages
This is a bug that can potentially affect all languages, though some are hit harder than others.
This is a bug that can potentially affect all languages, though some are hit harder than others.
i see, just wanted to confirm this though, hope it gets fixed soon
Maybe this also has to do with dpi scaling perhaps? (Looking at the flashing, just a guess).
And also a solution could be to make the sidebar a little more wider. (would also but better cuz some process rules could be long strings)
I don't think that DPI scaling affects this, since the sidebar width is static and uses logical pixels (which is not affected by DPI) instead of physical pixels.
| gharchive/issue | 2023-01-16T15:09:14 | 2025-04-01T06:37:13.108635 | {
"authors": [
"GitExploraa",
"dongle-the-gadget",
"toineenzo",
"xmine64"
],
"repo": "MicaForEveryone/MicaForEveryone",
"url": "https://github.com/MicaForEveryone/MicaForEveryone/issues/225",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
158534197 | Added proper User-Agent string to Facebook calendar call, and various…
This should only have the fixes for the calendar module, as well as spelling fixes for the word 'exist'. Fingers crossed, and time to branch ...
Crap, I forgot to update the CHANGELOG file. I can create a new PR (after closing this one), or you'll see it with other future changes ...
Thanks! No worries, I'll update the changelog.
| gharchive/pull-request | 2016-06-05T02:34:21 | 2025-04-01T06:37:13.110124 | {
"authors": [
"KirAsh4",
"MichMich"
],
"repo": "MichMich/MagicMirror",
"url": "https://github.com/MichMich/MagicMirror/pull/352",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
299114629 | Table Field Groups
Hi,
Is it possible from AL to extend, for example, the Item table's Field Group, like adding the field "Description 2" to the list?
This is currently not possible. The work to add this support is tracked by #143 but we don't have a timeline for when it will be available yet.
Closing as duplicate
| gharchive/issue | 2018-02-21T20:21:18 | 2025-04-01T06:37:13.200743 | {
"authors": [
"DominikDitoIvosevic",
"StanislawStempin",
"svwfr"
],
"repo": "Microsoft/AL",
"url": "https://github.com/Microsoft/AL/issues/1651",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
300552125 | Caption doesn't show in Dyn365 sandbox
It appears that the translation feature doesn't work (anymore) on a D365 sandbox. I'm using the latest Docker image "microsoft/dynamics-nav:devpreview-finus"
Situation:
We enabled the translation feature
We use labels in our notification
used in:
Also we have a wizard, where we have our label for the assisted setup. But many (not all) captions and labels show up empty. The notification is not shown in this case (probably because it has an empty message), and pages are like:
and the wizard:
Just for the record:
All ApplicationAreas are in place!
We tested it on the previous release (D365 sandbox), and that did work!
We tested with all the "user experiences"
When you use the "old" "CaptionML" property, it does show, like:
does show:
Hi @waldo1001 . We are working on a new feature where if the label text differs from the translation item source (this means that the source string changed in the source code since the translation file was last updated) then we ignore that translation item. This was a requested feature to make sure you aren't using outdated source strings by accident (if you change the source string you want to update the translations as well). I'm working on adding warnings for the cases when this is happening. Could you check if this might be causing your behavior? Otherwise, could you please give us some kind of repro so that we can investigate?
Same issue here. Some observations:
Only happens with feature TranslationFile enabled in app.json.
When the translation file is generated during publish, it is OK. But when you publish a second time (with no changes at all), the labels become empty.
Labels are also empty when you first build (with Ctrl+Shift+B) and then publish.
Removing the .xlf file helps, it gets recreated and labels appear again. But only one time.
Just to make sure we are on the same page:
Are you sure you renamed the xliff file which was generated and now you are adding translations to the renamed file and not the one which was generated? The one which is generated gets emptied on each build so that one is only for reference.
No, I did not rename the xliff file at all. The most simple flow is like this:
Enable TranslationFile feature
Create a label or caption
Publish the app (xliff is generated, captions are displayed fine)
Publish the app without any change, now the captions are gone
Could you please check that you have the build 20643 or later? What is your current version?
Current version is Feb preview, build 20723
@waldo1001 @ajkauffmann I'm afraid I'm stuck right now as I cannot repro your issue. Could you please make a zip of an extension which demonstrates this issue? I tried but it simply works for me.
Also, there is a chance your issue was fixed in the meantime. Could you please try getting a more recent build? 20723 is now 2 weeks old and I made some fixes in the meantime. Could you try with a 20937 or newer?
@DominikDitoIvosevic tested again with build 20901. That works like expected, so it seems the issue has been solved along the road.
I suppose this was fixed then. I'll close it, and if you guys still experience issues, please open a new issue.
| gharchive/issue | 2018-02-27T09:27:22 | 2025-04-01T06:37:13.211380 | {
"authors": [
"DominikDitoIvosevic",
"ajkauffmann",
"waldo1001"
],
"repo": "Microsoft/AL",
"url": "https://github.com/Microsoft/AL/issues/1692",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
351776420 | Button To Edit List Pages in BC
Hello Everyone,
When I get to some list pages through search in Dynamics Business Central, I don't see an option to Edit/Manage the page:
If I refresh/reload this page the option appears:
Is it a bug of some sort or a matter of incorrect settings? Can this be fixed?
Thanks
Did you try clicking on the Home tab, which opens the actions tab, in your first screenshot?
Got it. Indeed actions are hiding under the Home tab. Thank you
| gharchive/issue | 2018-08-18T00:57:16 | 2025-04-01T06:37:13.214033 | {
"authors": [
"Anna-Gudimova",
"pmohanakrishna"
],
"repo": "Microsoft/AL",
"url": "https://github.com/Microsoft/AL/issues/3373",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
407642778 | Txt2Al not handling CAL option fields with enum assigned
Issue:
When using Txt2AL to convert a table that contains an option field that has an Enum assigned, the resulting AL file contains compile errors. Example below.
Expected Result
Would be nice if the option field was changed to an enum field in the AL file. Would also be great if the enum object file was also created. Example below.
Hi @MikeGlue are you using our latest developer preview? This bug should have been fixed there some time ago.
I tried this with the BC on prem CU1 Docker image. I can try with the latest daily build if you think it's fixed there.
Yes this does work as expected in the daily build image I'm using. My mistake for not trying that first. :(
| gharchive/issue | 2019-02-07T10:57:53 | 2025-04-01T06:37:13.217492 | {
"authors": [
"MikeGlue",
"qutreson"
],
"repo": "Microsoft/AL",
"url": "https://github.com/Microsoft/AL/issues/4574",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |