id (string, 4–10 chars) | text (string, 4 chars – 2.14M chars) | source (2 classes) | created (timestamp[s], 2001-05-16 21:05:09 to 2025-01-01 03:38:30) | added (string date, 2025-04-01 04:05:38 to 2025-04-01 07:14:06) | metadata (dict)
---|---|---|---|---|---
621084123 | Preferences for editing dataTable attributes
For fun, I tried to lay out a "perfect" dataTable attribute editor. A whiteboard or sketch would work best, but I used a google doc.
https://docs.google.com/document/d/1O4xziJ1rpYb06AgI9NV2ziHIXOd01bv1_agv803OO48/edit#
Summary goals for a pie-in-the-sky dataTable attribute editor:
User is told what fields will need to be filled in (specific to that measType). Maybe this is a help-popup.
Nominal/ordinal: enumerations (optional)
dateTime: a format string
interval/ratio: unit
User can know when they’re done from a summary page. (maybe rows turn green when minimum info is added?)
User can view and edit common info together (e.g., fields used by all attributes are on the summary page)
Margaret's link doesn't work now, but this sounds like something I just came here to add. One thing that keeps me from using ezEML on some datasets is a wide table. With 50 columns, it just isn't workable to click through each column to set definitions and units after a table import. With a gridded attribute editor as suggested here this barrier to using ezEML would be solved!
@hubbardbrook I updated the sharing on that link.
| gharchive/issue | 2020-05-19T15:41:04 | 2025-04-01T06:37:24.182215 | {
"authors": [
"mobb"
],
"repo": "PASTAplus/ezEML",
"url": "https://github.com/PASTAplus/ezEML/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
54653195 | Crash on Reload Spigot 1.8
Hello,
Every time I do /reload, the server crashes.
It's PermissionsEx.
Crash log:
http://pastebin.com/ApjgF8tr
The /reload command is a vanilla command which has been broken for more than 4 years.
Don't use it. Reload plugins individually as needed. /pex reload, /ess reload, etc.
| gharchive/issue | 2015-01-17T06:36:22 | 2025-04-01T06:37:24.280069 | {
"authors": [
"Dino-SherComp",
"Stormbow"
],
"repo": "PEXPlugins/PermissionsEx",
"url": "https://github.com/PEXPlugins/PermissionsEx/issues/1857",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1166338641 | Bytetrack documentation (how to run)
Issue Type
Documentation Feature Request
OS
Ubuntu
OS architecture
aarch64
Programming Language
Python
Framework
TensorRT
Model name and Weights/Checkpoints URL
NA
Description
I've wanted to test ByteTrack and I'm having a rough time with the original repo.
Does your version offer any documentation on how to run it, or any other help?
Relevant Log Output
No response
URL or source code for simple inference testing code
No response
I just converted the models from the repository cited here to various frameworks, so all documentation is in the cited repository. We welcome pull requests for demo code.
https://github.com/ifzhang/ByteTrack
| gharchive/issue | 2022-03-11T12:03:30 | 2025-04-01T06:37:24.336106 | {
"authors": [
"PINTO0309",
"callmesora"
],
"repo": "PINTO0309/PINTO_model_zoo",
"url": "https://github.com/PINTO0309/PINTO_model_zoo/issues/192",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
266191657 | py3 virtualenv install not working on RedHat
problems with SSL connection, even when cloning the GH repository
Wonder if this could be related to #84
Will consider this resolved when #110 is fixed.
| gharchive/issue | 2017-10-17T16:19:09 | 2025-04-01T06:37:24.358879 | {
"authors": [
"eseiver"
],
"repo": "PLOS/allofplos",
"url": "https://github.com/PLOS/allofplos/issues/27",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2091218001 | 🛑 MUSA is down
In b3ea337, MUSA (https://storemusa.com/) was down:
HTTP code: 403
Response time: 510 ms
Resolved: MUSA is back up in 29c8142 after 24 minutes.
| gharchive/issue | 2024-01-19T19:27:51 | 2025-04-01T06:37:24.374543 | {
"authors": [
"POCLANOScom"
],
"repo": "POCLANOS/status",
"url": "https://github.com/POCLANOS/status/issues/429",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
380995022 | multi-vehicle simulation sensor
Hello!
I am following the guide on multi-vehicle simulation: https://dev.px4.io/en/simulation/multi-vehicle-simulation.html.
Everything works well and I was able to insert the swarm on my world file.
Now I'd like to add a camera to one of the Iris vehicles, but I don't know how to do that. I know that for each vehicle inside the launch file I can add tags to define mavlink_udp_port, pose, vehicle ID, and so on. Do I have to specify a tag for the camera?
Does anyone have experience with this? I didn't find any documentation on this topic and I don't have experience...
Thank you in advance
The docs we have on camera in gazebo are here. I have not tried this out.
@S259420 I'm not a developer and I don't know anything about this topic. You're going to have to wait for some real help :-). The people I linked might be good ones to talk to. @TSC21 might also have some insight.
@hamishwillee Thanks a lot! :)
@lbegani - Any ideas on setting up a Gazebo camera for multiple vehicles in a multi-vehicle simulation? Even knowing this is not supported would be useful...
I am not sure how it works with multi-vehicle simulation. My understanding is that you add a camera sensor in the model's sdf file, specify the properties and you will have the camera frames published on the gazebo topic. For reference check - Firmware/Tools/sitl_gazebo/models/typhoon_h480/typhoon_h480.sdf . Try adding camera sensor in the model and see if it works.
@lbegani For a multi-vehicle simulation, a URDF file is needed instead of the SDF...
@helenol @burrimi @birchera @devbharat @andre-nguyen
Hello, could you please explain how to use "component_snippets.xacro"? I've found it is called by iris_base.xacro. It should allow setting up a camera sensor and creating a URDF for the multi-vehicle simulation. I've found you contributed to component_snippets.xacro; any suggestion would be appreciated.
Thank you in advance
| gharchive/issue | 2018-11-15T04:26:13 | 2025-04-01T06:37:24.404575 | {
"authors": [
"S259420",
"hamishwillee",
"lbegani",
"matteoscanavino"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/issues/10851",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
154905099 | Failsafe mode triggered after landing with LPE
Using LPE without GPS, failsafe is triggered after auto-landing has completed.
The reported sequence is:
55760: Landing at current position
57704: [lpe] flow timeout
57758: [lpe] xy timeout
58541: [lpe] lidar timeout
58592: [lpe] tz timeout
58697: failsafe mode on
65799: LANDING DETECTED
Obviously, failsafe is triggered because LPE reports timeouts at low altitudes above ground.
Latest master: landing is denied when GPS is not valid.
Maybe this can be relaxed; see https://github.com/PX4/Firmware/issues/4534
| gharchive/issue | 2016-05-15T11:58:08 | 2025-04-01T06:37:24.408095 | {
"authors": [
"ecmnet"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/issues/4547",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
392071616 | Feature: go-around patterns for missions
Test data / coverage
Here is one flight with one go-around. In total, I have tested this feature at least over 10 times.
https://logs.px4.io/plot_app?log=4f523186-44dc-439f-8c31-2da5a1d66e86
Describe problem solved by the proposed pull request
To my understanding (please correct me if this is nonsense), the vehicle will always loiter above the landing waypoint after an aborted landing. So it is not possible to add regular waypoints after the LAND
point that the vehicle would follow automatically after the aborted landing.
Describe your preferred solution
If the landing is aborted and a valid waypoint exists after the LAND, just continue to the next item.
Edit. Some explanation of the code that might help (this happens within Mission::on_active(); a simplified sketch follows the list below):
Mission::on_active() checks if MissionBlock::is_mission_item_reached() is true
If true, and if autocontinue is true (seems to be for LAND), then advance_mission() and set_mission_items(). This sets the mission item to the next one.
check if _mission_item.nav_cmd == NAV_CMD_LAND. It is not true, because the item changed (if the next item isn't also LAND (which would be weird)).
Because the previous clause wasn't true, don't do do_abort_landing();
FW position controller will reset the _land_abort flag.
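For orientation, here is a minimal self-contained C++ sketch of the flow described above. This is not the actual PX4 source: the stub bodies and landing_aborted() are hypothetical stand-ins, and only the control flow mirrors the list.

#include <cstdio>

enum NavCmd { NAV_CMD_WAYPOINT, NAV_CMD_LAND };

struct MissionItem {
    NavCmd nav_cmd = NAV_CMD_LAND;
    bool autocontinue = true;
};

struct Mission {
    MissionItem _mission_item;
    bool is_mission_item_reached() { return true; }             // stub
    bool landing_aborted() { return true; }                     // hypothetical stub
    void advance_mission() { _mission_item.nav_cmd = NAV_CMD_WAYPOINT; }
    void set_mission_items() { /* load the advanced item as active */ }
    void do_abort_landing() { std::puts("loiter above landing point"); }

    void on_active() {
        // 1)+2) Item reached and autocontinue set: advance past LAND to the next item.
        if (is_mission_item_reached() && _mission_item.autocontinue) {
            advance_mission();
            set_mission_items();
        }
        // 3)+4) The active item is no longer NAV_CMD_LAND, so the abort branch is
        // skipped and the mission simply continues; the FW position controller
        // can then reset its _land_abort flag.
        if (_mission_item.nav_cmd == NAV_CMD_LAND && landing_aborted()) {
            do_abort_landing();
        }
    }
};

int main() { Mission m; m.on_active(); }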
This looks right, I just want to give it a more careful review as the navigator states are becoming more intertwined.
No, the statement is for both NAV_CMD_LAND and NAV_CMD_VTOL_LAND cases.
Does this change impact the various RTL_TYPE options? Specifically RTL_TYPE 1 and 2 where parts of the mission are used during Return Mode?
I'm wondering if there is a use case where some people would want this feature disabled? - I have to think about it some more. If this is the case, maybe the correct place to effectively "disable" this feature would be Mission Feasibility checker. Thoughts?
I haven't considered if this will interfere with any return modes. A quick investigation showed that:
landing to the current position seems to be OK (the next waypoint will be set as invalid in navigator/land.cpp).
A quick look showed that if there is a valid landing point in the mission, it will just jump to NAV_CMD_DO_LAND_START. So no weird behaviour to be expected (navigator/rtl.cpp).
In case of landing at the home position without a LAND wpt, it could possibly just continue the mission from the point where the RTL was issued if an aborted landing were to happen. I didn't see that the next waypoint would have been set to invalid, so this is a problem (navigator/rtl.cpp).
Another thing comes to mind concerning the FW position controller: Maybe it would be better to disable landing aborts under some situations, such as the "low battery immediate landing". I don't know if it is already implemented though.
Maybe it would be better to disable landing aborts under some situations, such as the "low battery immediate landing".
this is an interesting idea and maybe could be discussed in a separate issue post? I might suggest tying it to BAT_EMERGEN_THR, with logic such that if the vehicle is already in a landing state, it stays in the landing state and the ability to abort is blocked?
@almaaro please rebase.
@TSC21 I'm a bit confused, something weird has apparently happened (I'm not a git master)...
Do I rebase on master and push to the original branch? Thank you.
If this feature were to move forward, I think it should need to be enabled via parameter, e.g. MIS_DO_GO-AROUND or maybe MIS_ABORT_CONTINUE or something. Actually the term "go-around" doesn't accurately reflect the proposed feature - it is more of an "abort pattern" or "continue after abort" or something.
The current default behavior of aborting to loiter over the landing is generally pretty safe, consistent, and easily understood by the general user. If a user intentionally or accidentally adds waypoints after a Land waypoint, the system should still behave safely. I think a user should be required to enable the feature via a parameter so they have some understanding that the vehicle will go somewhere else (like maybe into some trees/buildings) after an abort.
Please reopen. See https://github.com/PX4/Firmware/pull/11099#issuecomment-546703322
| gharchive/pull-request | 2018-12-18T09:43:03 | 2025-04-01T06:37:24.418300 | {
"authors": [
"Antiheavy",
"TSC21",
"almaaro",
"dagar"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/11067",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
100620174 | VTOL updates for assisted modes
Use multicopter landed detector as it will register takeoff at ground level and therefore give the fixed-wing position controller the correct home position later
Also consider transition state in posctl
Tested on a standard VTOL: http://dash.oznet.ch/view/zqm9GvcUzafR8SnaPYEHf9
Doing transitions in alt and posctl
Rebased in #2686
@tumbili rebased on master with cleanup changes
@AndreasAntener Thanks!
My argument was mostly about architecture: You can assume that the status field is zero, but making assumptions about the VTOL status for non-VTOL vehicles is a stretch, because that's a completely non-expected dependency. It looks good now.
@tumbili I didn't realize you put this check in
here: https://github.com/PX4/Firmware/pull/2652/files#diff-fc77c3ef569029d45764664c75d4b0c1R1467
and here: https://github.com/PX4/Firmware/pull/2652/files#diff-e286114cb4359c03dcd9e90c57b99214R944
We have now 2 different places where we care about the transition phase
The mc pos controller is now completely shut off during the transition, which is not what I tested
Ok, can we go 2 steps back: Please do not propose PRs for merging with non-trivial changes which are not flight tested. We've had this now a couple of times.
And please separate architectural cleanup from flight testing / development. I never change code after I come back from testing because it always fails. There should be one branch which exactly represents what has been flown and should be merged as is (and any detail change has to be re-tested).
@AndreasAntener Why should the mc_pos_controller be shut off?
@AndreasAntener @LorenzMeier In fact my changes tell the mc_pos_controller to keep publishing during a transition. And they tell the fw_att_control module not to publish attitude setpoints during a transition otherwise we would have two modules publishing. This is exactly what we flight tested with the FireFly.
I just noticed that the "in_transition_flag" also needs to go into the else statement here https://github.com/PX4/Firmware/pull/2652/files#diff-fc77c3ef569029d45764664c75d4b0c1R1467
Sorry for the confusion, you're right Roman, I misinterpreted the change in mc_pos_control. And yes, it's missing in the else ;)
We were on different branches yesterday with the offboard stuff. The detail change from me that already went in with the cleanup branch only affects offboard. For this I'll do another flight test today.
Flown: http://dash.oznet.ch/view/A7A4so48PhK3YtWCaSiggD (manual, alt- and posctl)
@tumbili ready for a flight test with the Firefly ;)
@tumbili
I gave the transitions in POSCTL some more thoughts. I see the following open issues:
a) to FW: altitude increase during transition (especially with the FunCub), drop in FW right after the transition
b) to MC: the position controller in manual flight steps on the brakes. Best case this just doesn't look so nice..
a)
The climb makes sense. The MC pos controller is actually reducing thrust when it notices the climb, but our FunCub just has a natural pull up as soon as it gets some airspeed. You were also talking about adding pitch blending, this might help.
Since we're still transitioning well below cruise airspeed, a drop after the transition is somewhat expected, but is never that extreme if we do it in manual, so I'm not quite sure what to make of that. I tried transitioning at a higher speed once, but it got away quite a bit before it actually transitioned. On the other hand it started blending in FW controls already, so I was able to fly it in FW before the transition completed (which could be the correct and most seamless way to do it).
b)
Our back transitions in offboard are actually smoother than in POSCTL. It might have something to do with the fact that in the offboard test, the MC position controller still has a setpoint ahead of it, and not at the position where the transition happened. Ideally we would put the MC controller in velocity mode after the transition, starting with the current velocity and reducing that vector until it's 0.
Thoughts?
@AndreasAntener
a) I agree that we should try blending the fw pitch controls with the ones from the multicopter. I'm pretty sure this will make a big difference, also for the FireFly.
b) This also sounds reasonable. Who would then publish this velocity?
a) I think we can try that this week on the Cub at least
b) I have the same thoughts about this as about the transition work your currently doing. Architecture-wise it fits in the vtol type code, but we don't want to duplicate position controller code there. Depending on what we do there we might need to update the interface to the position controller. E.g. we have the setpoint triplet that could carry a velocity input but this is not handled except for offboard. The same "velocity control" for manual looks at manual input.
We're seeing output pulsing during timed back transitions with the standard VTOL; it is unclear yet whether this is related to the mode flags or to the standard VTOL blending (or both).
http://dash.oznet.ch/view/bgJVKEGhAtkcwqsRd46KE7#Thrust_PLOT
@AndreasAntener @SimonWilks Found the problem, will push a fix.
Flight tested, will merge soon.
Merged.
| gharchive/pull-request | 2015-08-12T19:45:26 | 2025-04-01T06:37:24.432174 | {
"authors": [
"AndreasAntener",
"LorenzMeier",
"SimonWilks",
"tumbili"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/2686",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
157453554 | Snapdragon MPU9250 rotation support
@julianoes @LorenzMeier
This PR :
Fixes Snapdragon build after a slew of changes from this week broke it.
Renames rc_in_pwm_out to snapdragon_io (I prefer the more concise name). Also fixes broken build due to incomplete renaming in the last PR.
Works around missing dprintf in QuRT. Fixes build.
Adds sensor rotation support to MPU9250 wrapper to make supporting P1/P2 Flight boards possible. ( #4652 )
TODO :
Mag rotation support (for HMC5883 and AK mag in MPU9250)
Automagically detect P1/P2 boards and set rotations. (@mcharleb @jywilson Any suggestions?)
I haven't added rotation support to the mag driver yet. I can do this if required, but there are hacks in the driver to keep it consistent with the 3DR GPS, and I'd need to check how we can deal with transitional support. Let me know if we need rotation support in the mag driver at this stage.
I was planning to add it for the AK mag in the 9250 after #4651 is merged. Please merge this PR before #4651.
FYI @SamChenzx
You can test this now. You will need to start the MPU9250 wrapper with an -R <rotation> parameter. Check the P2 board's rotation w.r.t. P1 and set the correct value from here (https://github.com/PX4/Firmware/blob/master/src/lib/conversion/rotation.h#L50) in px4.config (https://github.com/PX4/Firmware/blob/master/posix-configs/eagle/flight/px4.config#L7).
Please stand by for test flights on Eagle. Will be done by today.
Thanks @mhkabir for working on this.
I disagree that I didn't do the rename completely. One of the changes must have just gotten lost in a merge, if you look at: https://github.com/PX4/Firmware/pull/4668/files.
I would vote for my naming because snapdragon_io can be confused with the IO board on Pixhawk. Also, it's not really tied to snapdragon, it could as well be used on other hardware.
What about mavlink_io?
I've already cherry-picked d53b560, it'll be skipped in a rebase.
@julianoes All done. Please merge.
@mhkabir is it tested?
Thanks, I'll test and merge it tomorrow together with #4651.
@mhkabir @julianoes I'm ready to test, first of all, I should switch to snappy_rotation branch?
@mhkabir I made a pull request and updated to the latest commit. I can't find the snappy_rotation branch, so I'm still on master. I set "df_mpu9250_wrapper start -R 6" in the px4.config file, then ran ./mainapp mainapp.config and attempted to recalibrate the sensors, but the calibration can't proceed; it stops at the calibration start page.
@SamChenzx - That would never work. Please check out my branch from mhkabir/Firmware.
@mhkabir I just wanted to merge this and realized that the rotation is done in sensors.cpp and not in the drivers.
@LorenzMeier, @mhkabir: I guess we need both. We need rotation on the driver level to rotate for boards such as Snappy or RPi. At the same time we have the parameter which gets applied in sensors.cpp.
You need both. Driver level for the relative rotation of the board and sensor axes, and the sensors app one for board rotation relative to the vehicle body.
@mhkabir Just now I tested the snappy_rotation branch in the field. I set the rotation parameter with "df_mpu9250_wrapper start -R 6" in the px4.config file but left the rotation parameter in QGC as "ROTATION_NONE", and pointed the 3DR GPS module toward the Y- direction of the P2 board (as Y- is the heading of the P1 board). I can take off manually, but on takeoff the vehicle rotates about 90 degrees, then gradually stabilizes. I will do another test with the rotation parameter also set in QGC.
Is your 3DR GPS arrow pointing forward towards the nose of the vehicle? It should be, for correct behaviour. Don't add any rotations other than the -R 6.
It would be much easier to check the horizon and heading indicators in QGC to see if they are consistent with the vehicle, rather than flying every time.
@mhkabir let's close this in favor of #4704.
The horizon indicator didn't work in QGC, on both the Android and Linux versions; I don't know why. The heading indicator can be checked against a GPS app on a mobile phone, and heading works. After "df_mpu9250_wrapper start -R 6" is set, the nose of the vehicle is no longer along the camera direction but along the direction of the power interface, and that's the direction where I placed the 3DR GPS module.
| gharchive/pull-request | 2016-05-30T08:15:16 | 2025-04-01T06:37:24.443844 | {
"authors": [
"SamChenzx",
"julianoes",
"mhkabir"
],
"repo": "PX4/Firmware",
"url": "https://github.com/PX4/Firmware/pull/4690",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1309932142 | Update Car.java
Year is a reserved keyword in Hibernate. If the column name is not enclosed in backticks, it generates a syntax error - expected "identifier"; SQL statement:
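For reference, a minimal entity sketch of the backtick fix; the entity shape here is illustrative, and only the @Column escaping is the point (Hibernate quotes a backtick-escaped identifier according to the database dialect):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Car {
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    // Backticks tell Hibernate to quote the identifier in generated SQL,
    // avoiding the "expected identifier" error for the reserved word YEAR.
    @Column(name = "`year`")
    private int year;
}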
Hi, how do I implement the SecurityConfig class, since the WebSecurityConfigurerAdapter used in this demo has been deprecated in Spring Boot? I am using v2.7.3. Please assist.
Hello,
I will post a solution to GitHub soon. Until then, you can take a look at the Baeldung blog about this, for example: https://www.baeldung.com/spring-deprecated-websecurityconfigureradapter
Thank you, the link you shared has helped tremendously. Looking forward to your solution soon.
The code seems to work well. Please comment on whether I did it the right way. Thanks.
package com.springdev.cardatabase.config;
import com.springdev.cardatabase.service.UserDetailsServiceImpl;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
import org.springframework.web.cors.CorsConfiguration;
import org.springframework.web.cors.CorsConfigurationSource;
import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;
import java.util.Arrays;
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Autowired
    private UserDetailsServiceImpl userDetailsService;

    @Autowired
    private AuthenticationFilter authenticationFilter;

    @Autowired
    private AuthEntryPoint exceptionHandler;

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Bean
    public AuthenticationManager authenticationManager(AuthenticationConfiguration authenticationConfiguration) throws Exception {
        return authenticationConfiguration.getAuthenticationManager();
    }

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.cors().and().csrf().disable()
            .sessionManagement()
            .sessionCreationPolicy(SessionCreationPolicy.STATELESS).and()
            .authorizeRequests()
            .antMatchers(HttpMethod.POST, "/login").permitAll()
            .anyRequest().authenticated().and()
            .exceptionHandling()
            .authenticationEntryPoint(exceptionHandler).and()
            .addFilterBefore(authenticationFilter, UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {
        UrlBasedCorsConfigurationSource source = new UrlBasedCorsConfigurationSource();
        CorsConfiguration config = new CorsConfiguration();
        config.setAllowedOrigins(Arrays.asList("*"));
        config.setAllowedMethods(Arrays.asList("*"));
        config.setAllowedHeaders(Arrays.asList("*"));
        config.setAllowCredentials(false);
        config.applyPermitDefaultValues();
        source.registerCorsConfiguration("/**", config);
        return source;
    }

    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                registry.addMapping("/")
                        .allowedMethods("*");
            }
        };
    }
}
Great, looks good.
Thanks so much looking forward to learn more from you sir.
| gharchive/pull-request | 2022-07-19T19:06:25 | 2025-04-01T06:37:24.550000 | {
"authors": [
"Fala-Bonzo",
"juhahinkula",
"rgbailey"
],
"repo": "PacktPublishing/Full-Stack-Development-with-Spring-Boot-and-React",
"url": "https://github.com/PacktPublishing/Full-Stack-Development-with-Spring-Boot-and-React/pull/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
780946791 | Chapter 4 (WebApp): Should I copy these files inside the Ubuntu distro?
In order to run the web app from Chapter 4, should I copy these files inside Ubuntu, or is it unnecessary?
Hi @jhorgint,
The .ps1 file is example PowerShell commands from the chapter and designed to be run from PowerShell in Windows. The web-app folder is intended to be run from WSL.
Hope that helps!
Stuart
Thanks Stuart. I have finally gotten it running using PowerShell and this command: "Bash run.sh"
PS C:\web-app> Bash run.sh
| gharchive/issue | 2021-01-07T00:54:28 | 2025-04-01T06:37:24.556202 | {
"authors": [
"jhorgint",
"stuartleeks"
],
"repo": "PacktPublishing/Windows-Subsystem-for-Linux-2-WSL-2-Tips-Tricks-and-Techniques",
"url": "https://github.com/PacktPublishing/Windows-Subsystem-for-Linux-2-WSL-2-Tips-Tricks-and-Techniques/issues/4",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1505776329 | Inference with onnxruntime and FastDeploy on the same ONNX model gives significantly different results
Environment
FastDeploy version: fastdeploy-gpu-python 0.5.0
Platform: Windows x64 (Windows 11)
Hardware: RTX 3060 Ti
Language: Python 3.9.7
Problem description
Using the same data with identical preprocessing as input to onnxruntime and FastDeploy, the final results differ substantially. The onnxruntime output is correct, but the FastDeploy output is essentially unusable.
This is the onnxruntime inference result:
This is the FastDeploy result:
The code, model, and data are here:
Link: https://pan.baidu.com/s/1YjpzENl_OdsfHof0qzZABw
Access code: v3lo
I'm puzzled: why are you using FastDeploy as an inference engine? The two tools are positioned differently.
In my understanding, isn't FastDeploy precisely an inference engine meant for deployment?
FastDeploy is more like an inference tool. The difference is roughly this: give onnxruntime an image and you get the model's raw inference output; give FastDeploy an image and you get the post-processed result.
I see. I've only recently started using FastDeploy and haven't dug into it deeply; I just wanted to try deploying a model with it and ran into this problem.
@xxjordan This is a known issue. I noticed you are using FastDeploy 0.5; switching to 1.0.1 resolves it. See the explanation in this issue: https://github.com/PaddlePaddle/FastDeploy/issues/685#issuecomment-1325970251
OK, thanks a lot; the problem is solved.
| gharchive/issue | 2022-12-21T06:25:23 | 2025-04-01T06:37:24.570935 | {
"authors": [
"Zheng-Bicheng",
"jiangjiajun",
"xxjordan"
],
"repo": "PaddlePaddle/FastDeploy",
"url": "https://github.com/PaddlePaddle/FastDeploy/issues/933",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2214419070 | [BUG FIX] fix memory leak for ort backend
fix memory leak for ort backend #2414
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution. 1 out of 2 committers have signed the CLA. Signed: ChaoII. Not signed: Jiang-Jia-Jun. You have signed the CLA already but the status is still pending? Let us recheck it.
@ChaoII Does the memory leak fixed here refer to the model buffer that gets read in?
| gharchive/pull-request | 2024-03-29T01:05:40 | 2025-04-01T06:37:24.574298 | {
"authors": [
"CLAassistant",
"ChaoII",
"Jiang-Jia-Jun"
],
"repo": "PaddlePaddle/FastDeploy",
"url": "https://github.com/PaddlePaddle/FastDeploy/pull/2418",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1630151888 | Paddle Inference fails to read the model file when creating a predictor
Problem: when creating a predictor with Paddle Inference, it reports that the model file cannot be read.
Error message:
Code as follows:
Before asking, I have confirmed:
① All three ways of writing the file path in Python have been tried, and the error persists.
② Tensor-analysis code for the model confirms the file is accessible.
③ The model was downloaded from the PaddlePaddle model zoo; according to the Paddle Inference documentation, Paddle Inference supports all models on PaddlePaddle.
④ The PaddleDetection scripts confirm the model runs; it only fails when creating a predictor with Paddle Inference.
Try keeping only the file names for model_file and params_file, dropping the directory path.
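For reference, a minimal sketch of creating a predictor with the Paddle Inference Python API; the file paths below are placeholders for your exported model:

from paddle.inference import Config, create_predictor

# Point Config at the exported model; the paths/names are placeholders.
config = Config("inference_model/model.pdmodel",
                "inference_model/model.pdiparams")
config.disable_gpu()  # or config.enable_use_gpu(100, 0) for GPU

predictor = create_predictor(config)
print(predictor.get_input_names())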
| gharchive/issue | 2023-03-18T04:31:24 | 2025-04-01T06:37:24.576979 | {
"authors": [
"amiscolo",
"vivienfanghuagood"
],
"repo": "PaddlePaddle/Paddle-Inference-Demo",
"url": "https://github.com/PaddlePaddle/Paddle-Inference-Demo/issues/430",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
812496742 | Could "Get started with PaddleClas in 30 minutes" be made into an AI Studio project, for easier learning and imitation?
https://github.com/PaddlePaddle/PaddleClas/blob/release/2.0/docs/zh_CN/tutorials/quick_start.md
Could "Get started with PaddleClas in 30 minutes" be made into an AI Studio project, for easier learning and imitation?
After installing and cloning as instructed, running in CMD on a local PC, the earlier steps work, but this step errors out:
Is it meant to run under Linux? Mine is Win10.
Hi, there is such a project on AI Studio; you can refer to it: https://aistudio.baidu.com/aistudio/projectdetail/428501?channelType=0&channel=0
Is that one for 2.0?
Sorry, the one above is for 1.8.
The one below is for 2.0:
Tutorial: https://www.paddlepaddle.org.cn/tutorials/projectdetail/1499123
AI Studio: https://aistudio.baidu.com/aistudio/projectdetail/1557913
--- That project is not public yet.
Hello, you can refer to this tutorial instead: https://www.paddlepaddle.org.cn/tutorials/projectdetail/1499123#anchor-7
| gharchive/issue | 2021-02-20T03:01:46 | 2025-04-01T06:37:24.791983 | {
"authors": [
"A-Pai",
"TingquanGao",
"littletomatodonkey"
],
"repo": "PaddlePaddle/PaddleClas",
"url": "https://github.com/PaddlePaddle/PaddleClas/issues/612",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1811557244 | Can a model trained with rhy be converted to ONNX format?
I've collected a batch of data with prosody annotations and plan to train a prosody model following
https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/rhy
How can the trained model be converted to ONNX format?
We haven't tried that on our side; feel free to try it yourself.
OK, I tried it. Following the official Paddle tutorial, the model converted to ONNX successfully with no problems.
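For anyone else attempting this, one possible route is Paddle's dynamic-graph ONNX export. This is only a sketch; the placeholder model and input spec below are assumptions to be replaced by the real trained network:

import paddle

# Sketch only: a stand-in for the trained rhy prosody model. Replace
# PlaceholderModel and the input spec with the real network and inputs.
class PlaceholderModel(paddle.nn.Layer):
    def __init__(self):
        super().__init__()
        self.emb = paddle.nn.Embedding(100, 16)

    def forward(self, x):
        return self.emb(x)

model = PlaceholderModel()
model.eval()
# Requires the paddle2onnx package; writes rhy_model.onnx.
paddle.onnx.export(
    model,
    "rhy_model",
    input_spec=[paddle.static.InputSpec(shape=[None, None], dtype="int64")],
    opset_version=11,
)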
| gharchive/issue | 2023-07-19T09:29:53 | 2025-04-01T06:37:24.972095 | {
"authors": [
"Tony-xubiao",
"zxcd"
],
"repo": "PaddlePaddle/PaddleSpeech",
"url": "https://github.com/PaddlePaddle/PaddleSpeech/issues/3405",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
963126746 | training SqueezeNet on the eye disease recognition dataset
Topic: train SqueezeNet on the eye disease recognition dataset.
Our group's solution: https://aistudio.baidu.com/aistudio/projectdetail/2238212?shared=1
Review comments:
The project introduction should not mention the awesome-DeepLearning homework; only include content related to the project itself.
1.1 Experiment objective
Watch the capitalization in "on AI Studio".
2.2.1 Model overview
"CNN micro-architecture" is written twice; one of them should be "CNN macro-architecture".
2.2.3 Fire module
In "the squeeze layer consists of 1x1 convolution layers,," the comma is duplicated.
The model inference section is missing.
After 10 epochs the loss keeps oscillating around 0.7 and accuracy stays around 0.5–0.6; try tuning the hyperparameters to raise accuracy, and document the tuning approach in the project.
In the training configuration, "model_path" is misspelled as "modle_path"; the same applies in the training section.
Model saving during training should use paddle.save; after training, the model turned out not to have been saved.
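On the last point, a minimal sketch of saving and reloading with paddle.save; the model and optimizer below are illustrative stand-ins for the trained SqueezeNet and its optimizer:

import paddle

# Illustrative stand-ins; in the project these are the trained objects.
model = paddle.nn.Linear(10, 2)
opt = paddle.optimizer.Adam(parameters=model.parameters())

# Save after training...
paddle.save(model.state_dict(), "squeezenet.pdparams")
paddle.save(opt.state_dict(), "squeezenet.pdopt")

# ...and reload later for evaluation or resumed training.
model.set_state_dict(paddle.load("squeezenet.pdparams"))
opt.set_state_dict(paddle.load("squeezenet.pdopt"))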
| gharchive/pull-request | 2021-08-07T01:22:32 | 2025-04-01T06:37:24.977013 | {
"authors": [
"Twelveeee",
"ZhangHandi"
],
"repo": "PaddlePaddle/awesome-DeepLearning",
"url": "https://github.com/PaddlePaddle/awesome-DeepLearning/pull/634",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
424108966 | Error loading a combined MobileNet-SSD model, tid 22479
An error occurs when loading a combined MobileNet-SSD model:
I/paddle_mobile LOG built on Jan 11 2019 17:32:19: loadCombined invoked
A/libc: Fatal signal 6 (SIGABRT), code -6 in tid 22479 (aoling.aicamera), pid 22479 (aoling.aicamera)
Does loading the separate (non-combined) model also have problems? Which version are you using; does only the latest version fail, or has it always failed? Our unit tests pass here. If you can help us pin down exactly when the problem occurs and provide the corresponding model, we can reproduce and fix it. Thanks.
@hjchen2 I haven't tried the separate model yet, because I didn't save one; testing it will take some time.
I'm using the library files built from the latest code, so it should be up to date.
Here is the model file; please take a look at what the problem is:
object_detection_model.zip
@hjchen2 I've now tested the separate model and get the same error. Can you look into it?
@hjchen2 Can you suggest a fix? It's quite urgent.
Same as #1540, so closing this issue.
| gharchive/issue | 2019-03-22T09:07:54 | 2025-04-01T06:37:24.997162 | {
"authors": [
"hjchen2",
"yeyupiaoling"
],
"repo": "PaddlePaddle/paddle-mobile",
"url": "https://github.com/PaddlePaddle/paddle-mobile/issues/1529",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
150562581 | Reload only webview of current tab
This would resolve #50.
Thanks, but I believe this was already fixed yesterday in a50ffdd8fc55b2d812421b31d75eaa7d4739d906.
Roger. Wasn't up to date!
| gharchive/pull-request | 2016-04-23T15:01:53 | 2025-04-01T06:37:25.028574 | {
"authors": [
"PalmerAL",
"b0elter"
],
"repo": "PalmerAL/min",
"url": "https://github.com/PalmerAL/min/pull/102",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
176330121 | Fixes PaloAltoNetworks/minemeld-core#48
New prototypes are added to the minemeldlocal.yml library. The library PATH can be set via the MINEMELD_LOCAL_PROTOTYPE_PATH config option or environment variable. If not set, the API will take the first directory inside PROTOTYPE_ENV containing the string '/local/', mostly for compatibility (sketched below).
Signed-off-by: Luigi Mori lmori@paloaltonetworks.com
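A rough Python sketch of the fallback described above; this is illustrative only, and the config/env handling is assumed rather than taken from the actual minemeld-core implementation:

import os

def local_prototype_library_path(config):
    # Explicit setting wins: config first, then the environment.
    path = config.get('MINEMELD_LOCAL_PROTOTYPE_PATH') \
        or os.environ.get('MINEMELD_LOCAL_PROTOTYPE_PATH')
    if path is not None:
        return path

    # Compatibility fallback: first prototype directory containing '/local/'.
    for directory in os.environ.get('PROTOTYPE_ENV', '').split(':'):
        if '/local/' in directory:
            return directory
    return None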
| gharchive/pull-request | 2016-09-12T09:21:53 | 2025-04-01T06:37:25.169086 | {
"authors": [
"jtschichold",
"review-ninja"
],
"repo": "PaloAltoNetworks/minemeld-core",
"url": "https://github.com/PaloAltoNetworks/minemeld-core/pull/49",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1208486553 | refresh_variable broken due to recent commit
Describe the bug
Commit https://github.com/PaloAltoNetworks/pan-os-python/commit/0b47a3a6afa8379cdf63d04f90f555e564e235fd changed behaviour of parse_value_from_xml_last_tag to require an additional argument
refresh_variable does not pass the new required argument to parse_value_from_xml_last_tag here
https://github.com/PaloAltoNetworks/pan-os-python/blob/0b47a3a6afa8379cdf63d04f90f555e564e235fd/panos/base.py#L959
Expected behavior
It should work.
Current behavior
File "/usr/local/lib/python3.8/dist-packages/panos/base.py", line 970, in refresh_variable
var_path.parse_value_from_xml_last_tag(obj, settings)
TypeError: parse_value_from_xml_last_tag() missing 1 required positional argument: 'attr'
Possible solution
Add the missing argument to the caller
Steps to reproduce
Use pan-os-ansible to gather facts from a host.
Context
Gathering facts from network devices using ansible
Your Environment
Version used: https://github.com/PaloAltoNetworks/pan-os-python/commit/ab4d088e9f231889ef25b926827e77d75d47d6cb
Environment name and version (e.g. Chrome 59, node.js 5.4, python 3.7.3): N/A
Operating System and version (desktop or mobile): N/A
Link to your project: N/A
I also hit this same issue; rolling back to v1.6.0 has resolved it for the time being.
| gharchive/issue | 2022-04-19T14:56:10 | 2025-04-01T06:37:25.174116 | {
"authors": [
"clienthax",
"jhlasnik"
],
"repo": "PaloAltoNetworks/pan-os-python",
"url": "https://github.com/PaloAltoNetworks/pan-os-python/issues/444",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
1715632212 | Maxwell Update 1 APIs
Description
Maxwell Update 1 APIs
Motivation and Context
New release update version
How Has This Been Tested?
Tested on a local.
Screenshots (if appropriate)
PCEE (latest):
PCEE (Minor release version):
PCCE (Major release versions):
PCCE (Minor release versions):
Types of changes
New feature (non-breaking change which adds functionality)
Checklist
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes if appropriate.
[x] All new and existing tests passed.
@sserrata, @blindaa121: Can you please approve and merge this? This is required for the Prisma Cloud Compute Edition APIs release publishing today. I don't have merge rights.
@sserrata @blindaa121 can you please approve for publishing? The release is going out.
| gharchive/pull-request | 2023-05-18T13:21:42 | 2025-04-01T06:37:25.179667 | {
"authors": [
"Pubs-MV",
"ssugandh"
],
"repo": "PaloAltoNetworks/pan.dev",
"url": "https://github.com/PaloAltoNetworks/pan.dev/pull/357",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1346852315 | Add CWP incident acknowledgement method and bulk archiver script
Description
This PR:
adds a method for acknowledging/archiving CWP runtime incidents
adds a script to bulk archive runtime incidents based on the contents of a CSV file (sketched after this list)
fixes some minor _tags.py syntax issues introduced in 53000c11f71e
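As a rough illustration of the bulk-archive flow, a fully hypothetical Python sketch; the client object, its acknowledge_incident method, and the CSV column name are stand-ins, not the PR's actual API:

import csv

def bulk_archive(client, csv_path):
    # Archive every incident ID listed in the CSV file.
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # Assumed column name "incident_id"; hypothetical client method.
            client.acknowledge_incident(row["incident_id"])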
Motivation and Context
The UI does not currently provide a mechanism for bulk archiving runtime incidents. As
customers tune their runtime rules, they would like to remove incidents that would not have
been generated under the tuned rule set.
How Has This Been Tested?
The new method and script were tested against a 22.06.197 SaaS environment.
Types of changes
Bug fix (non-breaking change which fixes an issue)
New feature (non-breaking change which adds functionality)
Checklist
[x] I have updated the documentation accordingly.
[x] I have read the CONTRIBUTING document.
[x] I have added tests to cover my changes if appropriate.
[x] All new and existing tests passed.
LGTM!
| gharchive/pull-request | 2022-08-22T19:00:29 | 2025-04-01T06:37:25.184098 | {
"authors": [
"cfarquhar",
"tkishel"
],
"repo": "PaloAltoNetworks/prismacloud-api-python",
"url": "https://github.com/PaloAltoNetworks/prismacloud-api-python/pull/75",
"license": "ISC",
"license_type": "permissive",
"license_source": "github-api"
} |
413304377 | npm install fails with an error
npm install --registry=https://registry.npm.taobao.org
npm WARN deprecated bfj-node4@5.3.1: Switch to the `bfj` package for fixes and new features!
npm WARN deprecated nomnom@1.8.1: Package no longer supported. Contact support@npmjs.com for more info.
npm ERR! Error while executing:
npm ERR! C:\Program Files\Git\mingw64\bin\git.EXE ls-remote -h -t git://github.com/adobe-webplatform/eve.git
npm ERR!
npm ERR! fatal: read error: Invalid argument
npm ERR!
npm ERR! exited with error code: 128
npm ERR! A complete log of this run can be found in:
npm ERR! C:\Users\liguanghui6\AppData\Roaming\npm-cache\_logs\2019-02-22T08_51_41_897Z-debug.log
Running npm install reports the error above. Could someone point me in the right direction?
After running the following command, the same error still occurs:
git config --global http.sslverify "false"
https://blog.csdn.net/baidu_30809315/article/details/86520093
You can give this a try.
I got the same error and found it is caused by the tui-editor package, which never installs. Delete that entry from package.json, then delete the code in MarkDownEditor that references the library, and it will run.
@liyang5945 I followed your method and the problem is solved. Many thanks.
https://github.com/PanJiaChen/vue-element-admin/issues/1630
| gharchive/issue | 2019-02-22T08:59:17 | 2025-04-01T06:37:25.194247 | {
"authors": [
"Joyeuxman",
"PanJiaChen",
"liyang5945"
],
"repo": "PanJiaChen/vue-element-admin",
"url": "https://github.com/PanJiaChen/vue-element-admin/issues/1627",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1708323296 | help
Error: no_text
at postSlackMessage (file:/slaude/app.js:428:23)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async createSlackThread (file:/slaude/app.js:439:12)
at async file:/slaude/app.js:54:24
Just pulled today.
double checked config.
This might be two weeks old, but I'd still like to at least address it. As the error message implies, Slaude tried to create the original thread message with no contents. That could have a number of reasons, but to say for sure I'd need to know exactly what your config settings were (not your Slack-specific settings, just the Slaude stuff like MAINPROMPT_LAST and PING_MESSAGE) as well as the prompt you were using at the time. My guess would be that a combination of your config and the prompt caused the first message we wanted to send to Slack to be an empty string. I'm just not sure how we would have gotten there, since all messages are put together by combining the individual OpenAI-formatted messages until we exceed the character limit. But without knowing exactly what you prompted and which settings you used, guessing is all I can do.
Of course this is two weeks old and you might not be comfortable with sharing what you were prompting so I'm not expecting you to actually do this, I'm just saying that's the only way I can pin this down and I don't really have the time to try and reproduce bugs with pure trial and error anymore.
| gharchive/issue | 2023-05-12T23:07:55 | 2025-04-01T06:37:25.201204 | {
"authors": [
"PandarusAnon",
"oEjk"
],
"repo": "PandarusAnon/slaude",
"url": "https://github.com/PandarusAnon/slaude/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
538667839 | Always return 0x transaction hash and signed order
Remove includeOrder and includeTx fields
@fabioberger I just remembered one of the reasons we had includeOrder specifically. From the spec:
Quotes indicated as includeOrder as false can be seen as traders checking if a dealer’s prices are favorable at a given time for a certain market and trade size.
For our internal risk-tracking mechanisms, we keep track of how many outstanding quotes there are that could actually be filled, and we'd like to be able to separately differentiate requests that seek only to record price data. For that reason my plan is to remove includeTx and make including the signed order and the 0x transaction hash the default, while also offering the option to request what amounts to a "price-only" quote (a hypothetical shape is sketched below).
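To make the distinction concrete, a hypothetical TypeScript shape; none of these field names come from the actual spec, they only illustrate the optional-order idea:

interface SignedOrder {
  makerAddress: string;
  takerAddress: string;
  signature: string;
  // remaining 0x order fields omitted for brevity
}

interface DealerQuote {
  quoteId: string;
  price: string;   // always present, so traders can check prices
  size: string;
  // Present by default; omitted for "price-only" quotes, which the dealer
  // can exclude from its outstanding-quote risk tracking.
  order?: SignedOrder;
  zeroExTransactionHash?: string;
}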
| gharchive/issue | 2019-12-16T21:22:31 | 2025-04-01T06:37:25.251797 | {
"authors": [
"hrharder"
],
"repo": "ParadigmFoundation/zaidan-dealer-specification",
"url": "https://github.com/ParadigmFoundation/zaidan-dealer-specification/issues/15",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1022325363 | Representation for Vietnamese slaves in Formosa
Historically, slavery in Taiwan was practiced by the Dutch, with some aboriginal people helping to recapture escaped slaves.
I'm going to try this again in a new PR and hopefully I don't break anything again.
| gharchive/pull-request | 2021-10-11T06:46:58 | 2025-04-01T06:37:25.279581 | {
"authors": [
"CarbonY26"
],
"repo": "ParadoxGameConverters/EU4toVic2",
"url": "https://github.com/ParadoxGameConverters/EU4toVic2/pull/788",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1802325763 | Unable to create Ubuntu VM
Describe the bug
I am unable to create an Ubuntu VM from the extension. I have tried creating it through the user interface and with a Vagrant box; every attempt results in the error "Error creating VM: Error generating packer file".
To Reproduce
Steps to reproduce the behavior:
Open extension
Click on the "+" sign and choose operating system Linux, distribution choose ubuntu, and version 22.04
Click on Generate vm
Expected behavior
VM is created with selected settings
Screenshots
Extension Version
'v0.0.8' preview
Same issue on x86_64 platform
| gharchive/issue | 2023-07-13T06:50:25 | 2025-04-01T06:37:25.290909 | {
"authors": [
"bnasif25",
"kmathoora"
],
"repo": "Parallels/parallels-vscode-extension",
"url": "https://github.com/Parallels/parallels-vscode-extension/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
591175460 | [DEBUG] fix compatibility with python 3.8
This should fix #14
This has been integrated into Olympe 1.2.1
| gharchive/pull-request | 2020-03-31T14:52:52 | 2025-04-01T06:37:25.296465 | {
"authors": [
"ndessart"
],
"repo": "Parrot-Developers/olympe",
"url": "https://github.com/Parrot-Developers/olympe/pull/15",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
202026514 | Unsuccessful save-operation still causes the first method to return object with changed attributes
I've put a clean code example to help reproduce the issue: https://github.com/layerssss/parse-hook-error-quirk
When the beforeSave hook yields an error, the change to the object is not applied on the server. Calling Query.find in the browser again, the parse-server API returns the unchanged attributes of the object, but Query.find returns the changed object.
Expect: Query.find should return the unchanged object.
parse-server version: 2.3.2
Parse-JS-SDK: 1.9.2
Note: We have a running system with Parse-JS-SDK 1.5 with parse.com, on which this issue doesn't exist.
Cloud code:
Parse.Cloud.beforeSave('Folder', (request, response) => {
if (!request.object.name) {
return response.error('Name is invalid.');
}
response.success();
});
(new Parse.Query('Folder')).first()
.then(folder => {
if (folder) return Promise.resolve();
folder = new Parse.Object('Folder', {
name: 'Untitled Folder'
});
return folder.save();
});
Client code:
new Parse.Query('Folder').first()
.then(folder => {
console.log('expected old name: Untitled Folder');
console.log('old name:' + folder.get('name'));
folder.set('name', ''); // try to change it to invalid
return folder.save()
.fail(error => {
console.log('name not changed: ' + error.message);
return Parse.Promise.as();
});
})
.then(() => new Parse.Query('Folder').first())
.then(folder => {
console.log('expected new name: Untitled Folder');
console.log('new name:' + folder.get('name'));
});
Please have a look, thanks!
It turns out that it's the expected behaviour when "single instance" mode is on (default in browser). On which version did this become the default behaviour? Can you point me to the relevant changelog entries?
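For anyone else hitting this: if query results should always reflect server state, the JS SDK exposes a switch for this. Usage sketch, assuming the Parse SDK is already loaded; call it before any objects are created:

// Opt out of single-instance mode so separate queries return separate
// object instances rather than one shared, locally-mutated instance.
Parse.Object.disableSingleInstance();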
| gharchive/issue | 2017-01-20T01:45:48 | 2025-04-01T06:37:25.300793 | {
"authors": [
"layerssss"
],
"repo": "ParsePlatform/Parse-SDK-JS",
"url": "https://github.com/ParsePlatform/Parse-SDK-JS/issues/399",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
153163853 | Android device still receives push from old installationId after app re-install
install the app on Android phone, signup a new user A in Parse, get installationId_1
uninstall the app and re-install the app on the same phone, login into the previously created user A, get installationId_2
note that the app on this phone is now associated with installationId_2
now from parse server cloud code, send push notification targeting installationId_1, the app still receives the push even when its installationId is not installationId_1.
Probably both installationIds have the same GCM token, as it's still the same phone. The GCM token is provided by the Google Play services library, which you are not reinstalling, so it may return the same token. So even if you target installation_1, all other installations that include the same GCM token will still receive the notification.
@xor22h The "deviceToken" values for the two installations are different. I believe the "deviceToken" is the GCM token.
I'm closing this here for now, please update parse-server to the latest version. If the issue is still here, please open on parse-server-push-adapter.
I'm experiencing the same issue and have created https://github.com/parse-server-modules/parse-server-push-adapter/issues/40
| gharchive/issue | 2016-05-05T03:57:57 | 2025-04-01T06:37:25.304508 | {
"authors": [
"flovilmart",
"jiawenzhang",
"macmoe",
"xor22h"
],
"repo": "ParsePlatform/parse-server",
"url": "https://github.com/ParsePlatform/parse-server/issues/1705",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
194579449 | Push Notification - Two apps using same Parse Serve
My two apps (a free app and a pro app) point to the same Parse database, but only one app can send push notifications successfully.
I have read everything on the Parse support/help pages and it appears that I am doing the right thing, but it still fails. I have tried the approach in https://github.com/ParsePlatform/parse-server/issues/2188, but it didn't work.
My p12 file paths are:
push: {
ios: [
{
pfx: __dirname +'/push_certs/DevPushLoveAgainPro.p12', // Dev PFX or P12
bundleId: 'com.app1',
production: false // Dev
},
{
pfx: __dirname +'/push_certs/ApplePushLoveAgainPro.p12', // Prod PFX or P12
bundleId: 'com.app1',
production: true // Prod
},
{
pfx: __dirname +'/push_certs/DevPushLoveAgainFree.p12', // Prod PFX or P12
bundleId: 'com.app2',
production: false // Prod
},
{
pfx: __dirname +'/push_certs/ApplePushLoveAgainFree.p12', // Prod PFX or P12
bundleId: 'com.app2',
production: true // Prod
}
]
}
Can you re-open the issue, filling in the issue template completely, please?
@flovilmart ,
yes, thanks so much. Created: https://github.com/ParsePlatform/parse-server/issues/3219
@flovilmart , hello?
I haven't had time to look into it, but your new issue is again missing server logs etc... that doesn't help.
Ok, thanks so much.
But can you let me know whether it's feasible for two apps sharing the same Parse DB to have push notifications working in both apps?
If feasible, can you point out the key steps for me directly?
This issue is closed; can you keep the conversation on the proper issue please?
| gharchive/issue | 2016-12-09T11:56:58 | 2025-04-01T06:37:25.312454 | {
"authors": [
"allen8300",
"flovilmart"
],
"repo": "ParsePlatform/parse-server",
"url": "https://github.com/ParsePlatform/parse-server/issues/3217",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
139728955 | Push notification doesn't get to the device
Issue
In Cloud Code I have an afterSave trigger that sends a push to a specific user, but it never reaches the device, and I don't get a GCM request/response either. Any ideas?
Prerequisites
I've migrated to AWS Elastic Beanstalk(64bit Amazon Linux 2015.09 v2.0.7, running Node.js 4.2.3) and MongoLab
I'm running the 2.1.4 of Parse Server.
I'm testing on Android SDK
I've followed everything on Push tutorial here
Log from Verbose
POST /parse/push { host: 'parseserver-xxx-env.elasticbeanstalk.com',
'x-real-ip': 'xxx',
'x-forwarded-for': 'xxx',
'content-length': '406',
accept: '*/*',
'content-type': 'text/plain',
'user-agent': 'node-XMLHttpRequest, Parse/js1.7.1 (NodeJS 4.3.0)',
'x-forwarded-port': '80',
'x-forwarded-proto': 'http' } {
"where": {
"user": {
"__type": "Pointer",
"className": "_User",
"objectId": "xxxxx"
}
},
"data": {
"alert": "Bla bla bla",
"badge": "Increment",
"uri": "com.xxxx.xxxx://xxxx?id=xxxxx",
"p": "xxxxxx"
}
}
response: {
"response": {
"result": true
}
}
Push to user: xxxxx was successful
How I initialise Parse Server in index.js:
var api = new ParseServer({
databaseURI: databaseUri || 'mongodb://localhost:27017/test',
cloud: process.env.PARSE_SERVER_CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
appId: 'xxxxxxx',
masterKey: 'xxxxxxx',
fileKey: process.env.PARSE_SERVER_FILE_KEY || 'xxxxxxx',
facebookAppIds: ['xxxxxxx'],
serverURL: 'http://parseserver-xxxxx-env.elasticbeanstalk.com/parse/',
filesAdapter: new S3Adapter(
"xxxxxxx",
"xxxxxxx",
"xxxxxxx",
{directAccess: true}
),
push: {
android: {
senderId: 'xxxxxxx',
apiKey: 'xxxxxxx'
},
ios: {
pfx: __dirname + '/development.p12',
bundleId: 'com.xxxxxxx.xxxxxxx',
production: false
},
ios: {
pfx: __dirname + '/production.p12',
bundleId: 'com.xxxxxxx.xxxxxxx',
production: true
}
}
});
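One thing worth noting about the config above: a JavaScript object literal keeps only the last of two identical keys, so the production: false ios entry is silently discarded. A sketch of the array form that keeps both certificates (the same form used in the two-apps thread earlier in this dump):

// The duplicate `ios` key above means only the production entry survives.
// parse-server also accepts an array of push configs, one per certificate:
push: {
  ios: [
    { pfx: __dirname + '/development.p12', bundleId: 'com.xxxxxxx.xxxxxxx', production: false },
    { pfx: __dirname + '/production.p12',  bundleId: 'com.xxxxxxx.xxxxxxx', production: true }
  ]
}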
Function on Cloud Code
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.equalTo('user', user);
Parse.Push.send({
where: pushQuery,
data: {
alert: "Bla bla bla",
badge: "Increment",
uri: "com.xxxx.xxxx://xxxx?id=" + offer.id,
p: offer.id
}
}, {useMasterKey: true}).then(function (result) {
console.log("Push to user: " + user.id + " was successful");
},
function(error) {
console.log("Error sending push: " + error.code + " - " + error.message);
}
);
Possible Similar Issues
#401
The where clause for pushes is supposedly against _Installation and not _User
So if I want to send to a specific user, I need the get the Installation ID from that user and query like this?
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.equalTo('objectId', installationId);
You should set the userId on the installation object, and do:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.equalTo('userId', userId);
or set the user pointer:
var pushQuery = new Parse.Query(Parse.Installation);
pushQuery.equalTo('user', user);
Ok, but that's exactly what I did. I've set the user pointer:
pushQuery.equalTo('user', user);
Here's this user _Installation data on mongodb
{
"_id": "xxxxxx",
"appName": "xxxxxx",
"appVersion": "xxxxxx",
"deviceType": "android",
"appIdentifier": "com.xxxxxx.xxxxxx",
"installationId": "xxxxxx",
"pushType": "gcm",
"timeZone": "America/Sao_Paulo",
"localeIdentifier": "pt-BR",
"parseVersion": "1.13.0",
"_p_user": "_User$xxxxxx",
"_updated_at": {
"$date": "2016-03-09T21:45:01.254Z"
},
"_created_at": {
"$date": "2016-03-09T21:45:01.254Z"
}
}
@flovilmart may I ask why you closed this issue?
I'm no android specialist, but it looks like the deviceToken is not set on your installation
I am having the same problem. I have configured only the iOS device, as my application is iOS-based, and I get this error on push. This is the log:
{ params: { id: '45J1QIaKIi' },
master: false,
user: ParseUser { _objCount: 0, className: '_User', id: '45J1QIaKIi' },
installationId: '82e1d517-7aac-d887-d02b-e407c3de5e7f' }
##### PUSH OK
Can not find sender for push type android, {"where":{"deviceType":"ios","channels":"user_45J1QIaKIi"},"data":{"alert":"Sending push notification"}}
APNS Connection 0 Socket Error
APNS Connection 0 Socket Error
APNS Connection 0 Socket Error
APNS Connection 0 Disconnected
I dont get push on my device. How to solve this problem.
@flovilmart good point, there's no deviceToken column on _Installation for users who were created after the migration. Any idea why parse-server might not be creating it? Look at this POST sent from the user's first login:
POST /parse/classes/_Installation { host: 'parseserver-xxxxx-env.elasticbeanstalk.com',
'x-real-ip': 'xxxxx',
'x-forwarded-for': 'xxxxx',
'content-length': '326',
'accept-encoding': 'gzip',
'content-type': 'application/json',
'user-agent': 'Parse Android SDK 1.13.0 (com.xxxxx.xxxxx/7) API Level 16',
'x-newrelic-id': 'xxxxx==',
'x-parse-app-build-version': '7',
'x-parse-app-display-version': '1.1.3.2',
'x-parse-application-id': 'xxxxx',
'x-parse-client-key': 'xxxxx',
'x-parse-client-version': 'a1.13.0',
'x-parse-installation-id': 'xxxxx',
'x-parse-os-version': '4.1.2',
'x-parse-session-token': 'r:xxxxx',
'x-forwarded-port': '80',
'x-forwarded-proto': 'http' } {
"appName": "xxxxx",
"appVersion": "1.1.3.2",
"deviceType": "android",
"appIdentifier": "com.xxxxx.xxxxx",
"installationId": "xxxxx",
"pushType": "gcm",
"timeZone": "America/Sao_Paulo",
"localeIdentifier": "pt-BR",
"parseVersion": "1.13.0",
"user": {
"__type": "Pointer",
"objectId": "g83vqLMR0N",
"className": "_User"
}
}
response: {
"status": 201,
"response": {
"objectId": "XRR1BEqCZa",
"createdAt": "2016-03-09T21:45:01.254Z"
},
"location": "http://parseserver-xxxxx-env.elasticbeanstalk.com/parse/classes/_Installation/XRR1BEqCZa"
}
And also, just for testing, I went into MongoDB and updated a user's _Installation with a deviceToken. Now when trying to push I get this MismatchSenderId error:
GCM request and response {"request":{"params":{"priority":"normal","data":{"time":"2016-03-10T12:17:50.894Z","push_id":"84kHd9B0fX","data":"{\"alert\":\"Bla bla bla\",\"uri\":\"com.xxxx.xxxx://offer?id=kHrtmMOeYF\",\"p\":\"kHrtmMOeYF\"}"}}},"response":{"multicast_id":xxxx,"success":0,"failure":1,"canonical_ids":0,"results":[{"error":"MismatchSenderId"}]}}
hey @mahabubakram, your error message looks similar to mine. Both point to a problem with the sender. Can you check if there's a deviceToken column in your db?
GCM: MismatchSenderId
APNS: Can not find sender for push type android
@weengo , I am trying to send to an old user who is already in my parse database. So after migrating to mongolab I am trying to send a push notification to that user, and I have a deviceToken for that user. But as you can see the push is not reaching the device. I have had this problem for quite a long time, but no one has responded to it.
Ok, got it solved. In my android manifest file I was using the API key instead of the Sender Id. And I was also using a key of type Android, but I have now changed to one of type Server. So anyone with a similar issue, remember to check that in the google console.
Now my _Installation is receiving the deviceToken normally.
And oddly I'm still getting the GCM error:
GCM request and response {"request":{"params":{"priority":"normal","data":{"time":"2016-03-10T15:07:40.144Z","push_id":"xxxxxxx","data":"{\"alert\":\"bla bla bla\",\"uri\":\"com.xxxxxxx.xxxxxxx://offer?ofid=xxxxxxx\",\"p\":\"xxxxxxx\"}"}}},"response":{"multicast_id":xxxxxxx,"success":1,"failure":2,"canonical_ids":0,"results":[{"error":"MismatchSenderId"},{"error":"MismatchSenderId"},{"message_id":"0:xxxxxxx"}]}}
@mahabubakram I would suggest you create a new user on mongodb, and try to send push to this new user.
I have the same problem. The notification doesn't reach the device. I'm running the parse server example and trying to send a notification using the REST API:
curl -X POST \
  -H "X-Parse-Application-Id: myOtherAppId" \
  -H "X-Parse-Master-Key: myMasterKey" \
  -H "Content-Type: application/json" \
  -d '{
        "userId": "8JKVn0ummZERXG53hre2GA==",
        "deviceType": "android",
        "channels": [
          "kostas"
        ],
        "data": {
          "title": "Test",
          "alert": "Test"
        }
      }' \
  http://localhost:9999/myparseapp/push
The REST response is {"result":true}. I don't see any logs on the server and I wonder what is wrong. I am using parse-server 2.2.2.
@karkaletsis can you show how you're declaring the push on your index.js file?
@weengo I declare it like this:
var api = new ParseServer({
    databaseURI: 'mongodb://localhost:27017/dev',
    cloud: __dirname + '/cloud/main.js',
    appId: 'myOtherAppId',
    masterKey: 'myMasterKey',
    serverURL: 'http://10.0.0.1:9999/myparseapp',
    push: {
        android: {
            senderId: '408817503931',
            apiKey: 'AIzaSyBtzYPBj6r6ZDmpUMOhTmNV85QZZbeKIwM'
        },
        ios: {
            pfx: '/home/ubuntu/git/Certificates_PushNotification_Prod.p12',
            bundleId: '11',
            production: false
        }
    }
Bundle id should be your CFBundleIdentifier from your iOS app
I'm trying to test only android push for now, so I placed a random key in bundleId
@karkaletsis, are you sure you are using a Server key from Google?
Does your android manifest has something like this? The XXXX being you senderId:
<meta-data android:name="com.parse.push.gcm_sender_id" android:value="id:XXXX"/>
Yes, I have this entry in my android manifest xml. Shouldn't I see something in the error logs, both in parse server and in the google developer logs?
Yes, I have the same problem. On the client side, the push is successfully sent but the device doesn't receive it. I'm thinking of just using onesignal.com, since parse's push notifications do not support "high throughput", to better invest in a more long-term solution. I'd still like to debug the issue to learn why it's not working.
@karkaletsis I believe you should see something only if the message is actually being sent. When you run with VERBOSE=1, don't you get anything?
@otymartin Could you also run with VERBOSE=1 so we can see what the message is when the push is sent?
How do I enable VERBOSE=1? On the curl that posts the message, or on the server? And how?
@weengo How do I do that? I only started learning javascript since the migration, so it's a complete gray area.
//SEND PUSH NOTIFICATIONS
Parse.Cloud.define("sendPushToContact", function(request, response) {
    // request has 2 parameters: params passed by the client and the authorized user
    var params = request.params;
    var user = request.user;
    // Our "Message" class has a "text" key with the body of the message itself
    var messageText = params.text;

    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.equalTo('user', user); // targeting incomingUser
    pushQuery.equalTo('deviceType', 'ios'); // targeting iOS devices only

    Parse.Push.send({
        where: pushQuery, // Set our Installation query
        data: {
            alert: "Message: " + messageText
        }
    }, {
        success: function() {
            console.log("#### PUSH OK");
        },
        error: function(error) {
            console.log("#### PUSH ERROR" + error.message);
        },
        useMasterKey: true
    });

    response.success('success');
});
I enabled it but can't see anything important except the response plus the request. The response is:
....
],
"data": {
"title": "The Shining",
"alert": "The Giants won against the Mets 2-3."
}
}
response: {
"response": {
"result": true
}
}
I can't see anything more related to push notifications.
Try with DEBUG=aps as well as VERBOSE=1
I put also DEBIG=aps to server, no change to output (same as verbose)
DEBUG not DEBIG; this should enable aps logs. If it doesn't, I'm not sure you set your Env variable correctly. Also this is for aps; for GCM, there is another DEBUG flag, check the node GCM doc.
Yes, I have put debug not debig (was a typo). Where can I find this gcm doc?
@flovilmart Would this log reveal any reason why my push was successful on client side but was not delivered to the target device? This is from Google App Engine log
13:59:40.784
POST
200
535 B
72 ms
AppName/1 CFNetwork/758.3.15 Darwin/15.3.0
/parse/functions/sendPushToContact
50.101.196.7 - - [28/Mar/2016:13:59:40 -0400] "POST /parse/functions/sendPushToContact HTTP/1.1" 200 535 - "AppName/1 CFNetwork/758.3.15 Darwin/15.3.0" "appName-9203.appspot.com" ms=72 cpu_ms=0 cpm_usd=5.979e-8 instance=- app_engine_release=1.9.35 trace_id=a59c4618197766a5510e12f3ea6230ca
{
metadata:
{
projectId:
"3232341754198"
serviceName:
"appengine.googleapis.com"
zone:
"us3"
labels:
{
appengine.googleapis.com/request_id:
"56f9710c00ff0bf8ab5348ddb00001737e626172732d34354334300001323031363033323774313430393434000100"
appengine.googleapis.com/module_id:
"default"
appengine.googleapis.com/version_id:
"20160327t140944"
}
timestamp:
"2016-03-28T17:59:40.784555Z"
projectNumber:
"3232022342198"
}
protoPayload:
{
@type:
"type.googleapis.com/google.appengine.logging.v1.RequestLog"
appId:
"s~appName-9203"
versionId:
"20160327t140944"
requestId:
"56f9710c00ff0bf8ab5d28ddb00001737e626172732d343034300001323031363033323774313430393434000100"
ip:
"50.101.196.7"
startTime:
"2016-03-28T17:59:40.784555Z"
endTime:
"2016-03-28T17:59:40.856557Z"
latency:
"0.072002s"
method:
"POST"
resource:
"/parse/functions/sendPushToContact"
httpVersion:
"HTTP/1.1"
status:
200
responseSize:
"535"
userAgent:
"AppName/1 CFNetwork/758.3.15 Darwin/15.3.0"
urlMapEntry:
"PLACEHOLDER"
host:
"bars-4040.appspot.com"
cost:
5.979e-8
appEngineRelease:
"1.9.35"
traceId:
"a59c4618197766a5510e12f3ea6230ca"
}
insertId:
"2016-03-28|10:59:44.017917-07|10.106.196.98|-1325307681"
log:
"appengine.googleapis.com/request_log"
httpRequest:
{
status:
200
}
operation:
{
id:
"56f9710c00ff0bf8ab5d28ddb00001737e62617273we43034323423400013345031363033323774313430393434000100"
producer:
"appengine.googleapis.com/request_id"
}
}
I am also facing the same issue: not able to send push notifications after migration. I could not find a device token in the parse dashboard for an existing user after migration.
I have used:
Android Code:
final Map<String, Object> params = new HashMap<>();
params.put("message", message);
params.put("userId", ParseUser.getCurrentUser().getObjectId());
ParseCloud.callFunctionInBackground("sendPush", params,
        new FunctionCallback<String>() {
            @Override
            public void done(String result, com.parse.ParseException e) {
                // TODO Auto-generated method stub
                if (e == null) {
                    Toast.makeText(context, "HEHE", Toast.LENGTH_SHORT)
                            .show();
                    Log.d("ANNOUNCEMENT", "SUCCESS");
                } else {
                    Toast.makeText(context, "FAilure " + e.toString(),
                            Toast.LENGTH_SHORT).show();
                    Log.d("ANNOUNCEMENT", "FAILURE" + e.toString());
                }
            }
        });
///////////////////////////////////////////////
main.js
Parse.Cloud.define("sendPush", function(request, response) {
var sendUserId = request.params.userId;
var msg = request.params.message;
var query = new Parse.Query(Parse.Installation);
query.equalTo('userId', sendUserId);
Parse.Push.send({
where: query,
data: {
alert: msg,
sound: 'default'
}
}, {
success: function() {
// Push was successful
response.success("Push sent");
},
error: function(error) {
// Handle error
response.error(error);
},
useMasterKey: true
});
});
But I'm not getting the push.
I am getting this issue too. Can we reopen the issue and get it solved? I can't find any valid solutions from the ones who closed the issue.
i have solved it. can u send me the screenshot? i can help u
Parse.Push.send({
where: query,
data: {
alert: msg,
sound: 'default'
}
I read in another issue that where should have braces eg. where: { query }, that solved it for someone else. Someone please confirm!
@Heman6886 : Here goes my code. waiting for your response
Android code:
Android Manifest
Server Code
My Server's Dashboard
Where is your cloud code
do we need to call push API only from the cloud code? Normally, I do push from the rest api which has master-key as the header.
Yes the meta-data is inside the application tag.
yes u need to call the cloud code function from the android device
Any reason behind calling the push API from the cloud code?
Also, I need to know whether my Installation object is proper, because I don't see a few columns which were there in the generic parse server dashboard.
Because the client push is insecure
@Heman6886
i tried with the cloud code, still the notification does not appear on the device.
My Cloud code
My Android Code
My Dashboard
@Heman6886 i am waiting for your response. can you please help me out with this issue
Actually u cannot send push to an installation id
ok. I tried with this cloud code as well. It's still not working.
can you verify this? i got the senderID and api key from this screen. if this is wrong, can you tell me the correct method?
Ok, so i have this same problem.
I am trying with a clean Parse Starter Project with only the exact AndroidManifest.xml additions as specified in "https://github.com/ParsePlatform/Parse-Server/wiki/Push-Configuring-Clients" and i have configured my Parse-Server exactly as specified in "https://github.com/ParsePlatform/parse-server/wiki/Push".
I am running my own Parse-Dashboard and trying to use that to send a push notification to my android device running the Parse Starter app. Also trying to use the "curl example"
Nothing. Nada. What's going on.
How do i enable logging inside the index.js file to check what's going on? I don't see anything in /logs, nor do i see any "_Push folders" being created in my mongo, as @flovilmart commented elsewhere.
Start your server with VERBOSE=1 as an environment variable and send out the logs related to push sending.
i'd love to. just can't figure out how to pass that variable directly from inside "index.js". I am running my parse-server on CentOS 7 with nvm and node v 5.10.1 with simple "node index.js" or "pm2 start index.js"
this is an environment variable so, either VERBOSE=1 node index.js, or in your pm2 configuration
Logs: From curl. Pretty much similar using Dashboard.
verbose: POST /parse/push { 'user-agent': 'curl/7.29.0',
host: 'My-Server-IP:Port',
accept: '/',
'x-parse-application-id': 'My-Key',
'x-parse-master-key': 'My-Key',
'content-type': 'application/json',
'content-length': '311' } {
"where": {
"deviceType": {
"$in": [
"ios",
"android"
]
}
},
"data": {
"title": "The Shining",
"alert": "All work and no play makes Jack a dull boy."
}
}
verbose: {
"headers": {
"X-Parse-Push-Status-Id": "JPgxVmHgoL"
},
"response": {
"result": true
}
}
verbose: sending push to 2 installations
verbose: sent push! 0 success, 0 failures
does the device have the proper deviceTokens?
The Android installation does not add any device tokens. I used the exact tutorial to add " " in a clean ParseStarterProject.
// Native: Application.java
public void onCreate() {
// ...
ParseInstallation.getCurrentInstallation().saveInBackground();
}
Though after the latest "parse-server" update to 2.2.7, i notice the schema added more fields such as "DeviceToken, Channels, GcmSender, PushType, Badge".
These did not show up in the earlier version "2.2.6".
My new installs on an android device show these fields as empty, though both the app and the parse-server side are properly configured with GCM ID/Keys.
is the GCMSender set? I'm wondering here, as there are many users with valid and functioning configurations.
Of course. In my Parse-Server i have set:
push: {
    android: {
        senderId: '11112222233333', // The Sender ID of GCM
        apiKey: 'ABC123DEF456GHI789JKF' // The Server API Key of GCM
    }
},
Try asking over on stack overflow. With VERBOSE=1 you should also have logs from the GCM adapter itself. It seems improperly configured to me.
ok. will do.
it's just that we don't really have an active community over on stackoverflow for now. most queries/replies are still related to the older parse.com and lead to a frustrating experience going round in circles.
@benitech did u use a custom receiver for receiving push notifications?
no. just added the "ParseInstallation.getCurrentInstallation().saveInBackground();" in the android client side code, as the tutorial says
have u declared the server url in the application activity, and which parse library are you using?
you mean, am i connecting to my own parse-server properly? of course. It works fine for other things, i can see my installations and sessions from the starter app right away.
Parse.initialize(new Parse.Configuration.Builder(getBaseContext()).applicationId("My-Key").server("http://My-Server-IP:Port/parse/").build());
ParseInstallation.getCurrentInstallation().saveInBackground();
ParseUser.enableAutomaticUser();
ParseACL defaultACL = new ParseACL();
And so far, with Parse-Server, only Parse 1.13.0 can be used.
dependencies {
compile 'com.parse.bolts:bolts-android:1.+'
compile 'com.parse:parse-android:1.+'
}
Please run npm start with verbose enabled:
VERBOSE=1 DEBUG=apn,node-gcm npm start
Same as i posted earlier.
verbose: POST /parse/push { 'user-agent': 'curl/7.29.0',
host: 'My-Server-IP:Port',
accept: '/',
'x-parse-application-id': 'My-Key',
'x-parse-master-key': 'My-Key',
'content-type': 'application/json',
'content-length': '288' } {
"where": {
"deviceType": {
"$in": [
"android"
]
}
},
"data": {
"title": "The Shining",
"alert": "All work and no play makes Jack a dull boy."
}
}
verbose: {
"headers": {
"X-Parse-Push-Status-Id": "JZIN15X0aM"
},
"response": {
"result": true
}
}
verbose: sending push to 3 installations
verbose: sent push! 0 success, 0 failures
no device is registered, that is the reason. please reverify the project id and app key of GCM
@Heman6886
It is working now. Thanks for the help.
Actually I updated my parse server to the latest one and it started working. Still, there needs to be proper documentation for others to understand.
There is so much time lost just to make this work. Parse developers need to contribute to these issues and make sure it works for everyone.
ok
@Heman6886
One more thing: we need not call push only from cloud code; we can also call it from the REST APIs as well.
I'll add a Working Sample for both Parse-Server and Parse-Starter project on my github later. Might help others with any number of config issues in either.
@sekharrockz yes using cloud code
O god, the Push section is a mess, yet you guys did a fantastic job on bringing parse to the public!
However, I am encountering an issue when I put a push query in an afterSave function: I get a success result but nothing else T-T. Does anyone else have this problem??
does my after send
verbose: {
"response": {
"updatedAt": "2016-07-14T17:51:34.864Z"
}
}
How did this even happen LOL, I literally got nothing to work with LOL!
Please someone give me a hint. And thank you all for doing a great job!
By the way, this is my afterSave block:
Parse.Cloud.afterSave("OBJECTCLASS", function(req, res) {
    console.log("does my after send");
    var user = req.object.relation("requestUser").query();
    user.notContainedIn("objectId", req.object.get("rejectUserList"));
    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.matchesQuery('user', user);
    Parse.Push.send({
        where: pushQuery,
        data: {
            alert: 'Test',
            //sound: 'default'
        }
    }, {
        useMasterKey: true,
        success: function(object) {
            // Push sent!
            console.log(object);
            res.success();
        },
        error: function(error) {
            console.log(error.message);
            res.error(error.message);
            // There was a problem :(
        }
    });
});
Hello!
I had the same issue. I deleted the line "res.success();" or "response.success();" and the pushes are sent.
There is no response object passed as a second argument in afterSave
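For later readers, here is a minimal corrected version of the block above, with the res.success()/res.error() calls removed since afterSave receives no response object (OBJECTCLASS is the placeholder class name from the original snippet):

```javascript
Parse.Cloud.afterSave("OBJECTCLASS", function(req) {
    var user = req.object.relation("requestUser").query();
    user.notContainedIn("objectId", req.object.get("rejectUserList"));

    var pushQuery = new Parse.Query(Parse.Installation);
    pushQuery.matchesQuery('user', user);

    Parse.Push.send({
        where: pushQuery,
        data: { alert: 'Test' }
    }, {
        useMasterKey: true,
        success: function() {
            console.log("push sent");
        },
        error: function(error) {
            console.log("push failed: " + error.message);
        }
    });
});
```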
| gharchive/issue | 2016-03-09T23:04:19 | 2025-04-01T06:37:25.379352 | {
"authors": [
"Heman6886",
"benitech",
"flovilmart",
"jorgemendiza",
"karkaletsis",
"lifeisfunny",
"mahabubakram",
"otymartin",
"sandeepkacha",
"sekharrockz",
"weengo"
],
"repo": "ParsePlatform/parse-server",
"url": "https://github.com/ParsePlatform/parse-server/issues/942",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
72676234 | move version.py
pykafka fails to import; moving version.py to pykafka/ seems to resolve this.
Python 2.7.9 (default, Mar 1 2015, 12:57:24)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pykafka
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/pykafka-1.0.0-py2.7.egg/pykafka/__init__.py", line 1, in <module>
from version import version
ImportError: cannot import name version
>>>
I am new to this project, so I may be doing something wrong.
3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt7-1 (2015-03-01) x86_64 GNU/Linux
Using pykafka.version will cause the whole package to get imported at setup time, which is a big problem if you don't already have the prerequisites installed. We should just do what we do with streamparse and grab it with a regex.
I saw that, but I really don't care for using regex for that either. I've seen it done a couple different ways, so I want to take a quick look around and see what the options are. Otherwise, yeah, regex is the best option I've seen.
@kbourgoin I've seen a few successful approaches to this in the past:
Regular expressions (like we do with streamparse).
Using execfile (we do this with SKLL).
Storing it in a plaintext file and just reading that (although then you need to add something to __init__.py that loads from the file as well).
Injecting a variable into __builtins__ in setup.py that says "I'm running setup," and have __init__.py only import the subpackages when that is false. This is what scikit-learn does, although more for the reason that their C modules won't be built at setup time.
Oh, and conda-build also switched recently to using Python Versioneer, which grabs the version from your git tags. It might be nicer.
I fixed this via the regex-based solution that @dan-blanchard mentioned. The commit is 7ce19f66d909331c4ca5e1a6b9700157022a3faf.
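For readers curious what the regex approach looks like, here is a minimal, hypothetical setup.py helper in the streamparse style (a sketch, not the exact code from that commit); it scrapes __version__ from the package source without importing it:

```python
import re

def get_version(path="pykafka/__init__.py"):
    """Extract __version__ from a module file without importing the package."""
    with open(path) as f:
        source = f.read()
    match = re.search(r"^__version__\s*=\s*['\"]([^'\"]+)['\"]", source, re.M)
    if match is None:
        raise RuntimeError("Unable to find a version string in %s" % path)
    return match.group(1)
```

Because nothing is imported, setup can run even when the package's runtime prerequisites are missing.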
@emmett9001 can I get your opinion on the version number? I decided to go with Kafka's version number followed by our own counter. Anyone else is welcome to comment as well, of course. It's just a scheme that made sense to me, no particular attachment there.
@kbourgoin I don't like tying pykafka's version number to kafka's - it seems like it could get confusing quickly. Personally I'd use a simpler versioning scheme, but I could be convinced either way.
I did it mostly because I was worried about protocol versioning, but maybe it's not really worth it. If the protocol changes so much that we have to break compatibility, we can just do a major version of pykafka.
Then again, there's currently no way to interrogate Kafka to find its version, so it's not like we have a way of managing compatibility. Let's stick with a 1.0.0 release next.
| gharchive/pull-request | 2015-05-02T17:52:05 | 2025-04-01T06:37:25.391751 | {
"authors": [
"atarzwell",
"dan-blanchard",
"emmett9001",
"kbourgoin"
],
"repo": "Parsely/pykafka",
"url": "https://github.com/Parsely/pykafka/pull/159",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
75386444 | Make sure retrying the rebalance also re-checks partition allocation.
This fixes a case where a balanced consumer is trying to acquire partitions it should have because its balancing information is out of date. To fix, we ensure that every time we retry rebalancing we re-check which partitions we should have.
Looks great to me, if it works in production.
Yeah, it's working. We'll write real integration tests for it later.
:wind_chime:
| gharchive/pull-request | 2015-05-11T22:55:11 | 2025-04-01T06:37:25.393784 | {
"authors": [
"emmett9001",
"kbourgoin"
],
"repo": "Parsely/pykafka",
"url": "https://github.com/Parsely/pykafka/pull/162",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
310947182 | parsl examples
It would be nice to have a generic master/worker application example, and a genetic algorithm example build on top of it.
the MW part would be:
a master main program that calls a number of instances of a worker program, does something with them, then calls the workers again.
Calls to workers should be scalable, as the number can change. Additionally, the number sometimes may be bigger than the number of available resources.
the GA part would be layered on top as a specific kind of MW:
we would find a GA that's written in Python, and use a Parsl function to evaluate the various "genes"; maybe we could use something from https://github.com/handcraftsman/GeneticAlgorithmsWithPython (see the sketch below for the MW shape)
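A minimal sketch of the master/worker shape in Parsl is below. It assumes the python_app decorator and the local-threads config; evaluate is a stand-in fitness function, not part of any real GA library:

```python
import parsl
from parsl import python_app
from parsl.configs.local_threads import config

parsl.load(config)

@python_app
def evaluate(gene):
    # Stand-in "worker": score one candidate solution
    return sum(gene)

def master(population):
    # Fan out one call per gene; the number of calls may exceed the
    # number of available resources, and Parsl will queue the excess.
    futures = [evaluate(g) for g in population]
    return [f.result() for f in futures]  # gather results

if __name__ == "__main__":
    print(master([[1, 2, 3], [4, 5, 6]]))
```

A GA layer would then call master() once per generation to score the population before selection and mutation.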
| gharchive/issue | 2018-04-03T18:28:40 | 2025-04-01T06:37:25.397693 | {
"authors": [
"danielskatz"
],
"repo": "Parsl/parsl",
"url": "https://github.com/Parsl/parsl/issues/191",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1628871193 | fix: launch_cmd not set when provided to FluxExecutor
Problem: Currently, the launch_cmd is not set if provided to the FluxExecutor directly. Many jobs are likely to start already with access to a flux instance, in which case the launch command should use flux submit instead of flux start.
Solution: Ensure the launch_cmd is set.
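The general shape of such a guard is sketched below; this is illustrative, not the actual FluxExecutor code, and DEFAULT_LAUNCH_CMD is a made-up placeholder:

```python
from typing import Optional

DEFAULT_LAUNCH_CMD = "flux start {flux_args}"  # placeholder default

class FluxExecutorSketch:
    def __init__(self, launch_cmd: Optional[str] = None):
        # Respect a caller-provided command (e.g. one built around
        # "flux submit" for jobs already inside a Flux instance);
        # only fall back to the default when none was given.
        self.launch_cmd = launch_cmd if launch_cmd is not None else DEFAULT_LAUNCH_CMD
```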
Description
Fixes #2633
Type of change
Bug fix (non-breaking change that fixes an issue)
Looks like an issue with pytest versioning:
ImportError: cannot import name 'Config' from 'pytest' (/opt/hostedtoolcache/Python/3.10.10/x64/lib/python3.10/site-packages/pytest/__init__.py)
make: *** [Makefile:53: local_thread_test] Error 1
Error: Process completed with exit code 2.
I'll await further instruction, since this is out of scope for my PR. Gnite!
Yeah there's a separate PR open to fix that, which should get merged in the next few days
Thanks @jameshcorbett !
| gharchive/pull-request | 2023-03-17T08:02:10 | 2025-04-01T06:37:25.400838 | {
"authors": [
"benclifford",
"vsoch"
],
"repo": "Parsl/parsl",
"url": "https://github.com/Parsl/parsl/pull/2634",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1091318801 | Custom Character models
Since, we can only make custom boards right now with the PartyPlanner. It would be nice to add support for editing character models in the game.
How feasible is this nearly two years later? Can this be done manually?
| gharchive/issue | 2021-12-30T21:52:02 | 2025-04-01T06:37:25.438003 | {
"authors": [
"Fatih120",
"Luiz12010"
],
"repo": "PartyPlanner64/PartyPlanner64",
"url": "https://github.com/PartyPlanner64/PartyPlanner64/issues/124",
"license": "unlicense",
"license_type": "permissive",
"license_source": "bigquery"
} |
902831178 | No data fixer registered for *
I get the following errors in log on startup:
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:duck
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:duck_egg
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:goose
[11:22:08] [main/ERROR]: No data fixer registered for untitledduckmod:goose_egg
Thanks! These are normal and actually occur for other mods that add entities as well. Data fixers are only for migration between major minecraft versions, which mods usually don't support.
| gharchive/issue | 2021-05-26T19:25:13 | 2025-04-01T06:37:25.440263 | {
"authors": [
"Greg-J",
"Paspartout"
],
"repo": "Paspartout/UntitledDuckMod",
"url": "https://github.com/Paspartout/UntitledDuckMod/issues/20",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
209268890 | Check to make sure char escapes are valid UTF-8
The char escape with \U has to be checked for overflow. It's easier than for number parsing since the value is obligatorily 8 chars, so only the first char has to be checked so that it's not > 7. (The max value is 0x7fffffff.) For the escaped `\X`, it'll probably use the functions used to parse regular hexadecimal numbers.
That's just the parsing. Then the number returned should be checked to make sure it's a valid Unicode code point. But that's really optional. Perhaps people should be allowed to put any char, even if it's not a Unicode char.
Whether or not the char is a valid Unicode code point will not be checked. Anyway, a valid UTF-8 char that is not a valid Unicode code point could be inserted without being escaped in a string; that wouldn't be checked either.
| gharchive/issue | 2017-02-21T20:51:37 | 2025-04-01T06:37:25.441841 | {
"authors": [
"Pat-Laugh"
],
"repo": "Pat-Laugh/WebssonProjects",
"url": "https://github.com/Pat-Laugh/WebssonProjects/issues/23",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
685552183 | Remove remnants of Tracing Strategy
Description:
Noticed a couple places referencing tracing strategy - removed them.
:shipit:
I believe that the types file is still called tracingStrategy maybe we can add it to this one or a new PR , your call
@aledustet I was thinking this too - I will take care of it in this PR.
| gharchive/pull-request | 2020-08-25T14:49:03 | 2025-04-01T06:37:25.444936 | {
"authors": [
"tparesi"
],
"repo": "Path-Check/gaen-mobile",
"url": "https://github.com/Path-Check/gaen-mobile/pull/273",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1620146566 | JavaScript
JavaScript
User Story
As a boot camp student
I want the prework notes to be structured on a webpage
So that I can easily find and read the information
Acceptance Criteria
GIVEN a Prework Study Guide website
WHEN I view the study guide
THEN I can see the four topics I learned along with a suggestion on what I should study first
Completed
| gharchive/issue | 2023-03-11T21:09:44 | 2025-04-01T06:37:25.461712 | {
"authors": [
"PatrickWLowe"
],
"repo": "PatrickWLowe/prework-study-guide",
"url": "https://github.com/PatrickWLowe/prework-study-guide/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1253796062 | Scale problem
I got another error with tkVideoPlayer's scaled parameter: when we set scaled to True it works and the video fits the label size, but I don't want stretching in videos, so I set scaled to False, and that gives lots of errors.
Can we fit the video in the label at the original ratio, with the remaining portions as black/label color?
And the set_scaled parameter is also not working.
@Akascape I have updated the library. Try tkvideoplayer 2.2: https://pypi.org/project/tkvideoplayer/2.2/. Let me know if you find any issues. I have now added the ability to keep the aspect ratio; refer to the docs here. You can then use tkvideo.config(bg="black") to get a black background.
Thanks, it's now working properly :)
| gharchive/issue | 2022-05-31T12:28:30 | 2025-04-01T06:37:25.482584 | {
"authors": [
"Akascape",
"PaulleDemon"
],
"repo": "PaulleDemon/tkVideoPlayer",
"url": "https://github.com/PaulleDemon/tkVideoPlayer/issues/7",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1308382565 | create resize logic for screen
currently the screen resolution is fixed in variables.py
Fixed
| gharchive/issue | 2022-07-18T18:55:19 | 2025-04-01T06:37:25.484880 | {
"authors": [
"Pavel-Petkov03"
],
"repo": "Pavel-Petkov03/Belot",
"url": "https://github.com/Pavel-Petkov03/Belot/issues/2",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1500611865 | [Issue #341] feat: bullmq for sync newly added addresses
Description:
Adds a job that verifies whether new addresses were added to the database and, if so, syncs their transactions.
Depends on:
[x] #339
[x] #337
Conflict.
| gharchive/pull-request | 2022-12-16T17:36:48 | 2025-04-01T06:37:25.490264 | {
"authors": [
"Klakurka",
"chedieck"
],
"repo": "PayButton/paybutton-server",
"url": "https://github.com/PayButton/paybutton-server/pull/340",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1407581238 | Updated README.md
I've updated the README.md file with the appropriate documentation, rather than simple, vague instructions on how to install this system.
For example:
Install database
Please try to combine it into a single link, not 2.
Done! I reduced the number of links from 2 or more into 1.
| gharchive/pull-request | 2022-10-13T10:48:48 | 2025-04-01T06:37:25.496540 | {
"authors": [
"CorwinDev",
"kanetjuh"
],
"repo": "Paymenter/Paymenter",
"url": "https://github.com/Paymenter/Paymenter/pull/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1644423599 | Germplasm Plant Images page
Links point to peanutbase.org, we should move them to dev.peanutbase.org
Image of Krinkle mutant plant shows no images. (Is this an especially hideous mutant plant?)
Image of Krinkle mutant plant shows no images.
Found it - it and some of the other missing images were at a different URL. This will not be a problem once I hunt down the remaining missing images, they will all go in files/brilliant_gallery_temp.
All images now live under files/. Pages generated automatically since commit 4abdbe5.
| gharchive/issue | 2023-03-28T17:56:55 | 2025-04-01T06:37:25.503830 | {
"authors": [
"svengato"
],
"repo": "PeanutBase/jekyll-peanutbase",
"url": "https://github.com/PeanutBase/jekyll-peanutbase/issues/32",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
53967677 | Guard against openssl's new strict DER checks.
This is a port of the patches 488ed32f2ada1d1dd108fc245d025c4d5f252783 and 8dccba6a45db0466370726ed462b9da2eae43bce made by theuni for bitcoin core to fix the problem with the new openssl versions being stricter on the format of DER encoded ECDSA signatures (which could split the network).
I'm going to review this PR today against the Bitcoin patch and get it merged if I don't see any problems building it. Thank you for submitting it, @glv2!
| gharchive/pull-request | 2015-01-10T18:56:39 | 2025-04-01T06:37:25.662904 | {
"authors": [
"brossi",
"glv2"
],
"repo": "Peerunity/Peerunity",
"url": "https://github.com/Peerunity/Peerunity/pull/145",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
679984226 | Skip running fork choice if there is already a previous update still in progress
PR Description
Currently we run fork choice and then may fail to actually apply the new chain head because a new change comes in while we were regenerating the state. Instead if we're still waiting for a previous result to apply, skip running fork choice entirely.
Documentation
[x] I thought about documentation and added the documentation label to this PR if updates are required.
This has not proven effective. Need to try a different approach.
| gharchive/pull-request | 2020-08-17T06:38:43 | 2025-04-01T06:37:25.664510 | {
"authors": [
"ajsutton"
],
"repo": "PegaSysEng/teku",
"url": "https://github.com/PegaSysEng/teku/pull/2592",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
236466668 | Properties of "interface" type
It seems like it is already possible to have a property of interface type. Example:
interface SubInterface {
    void doSomething();
}

interface MainInterface {
    readonly SubInterface myProperty;
}
However, I could not find out how a code generator can check whether a property is of interface type or not. I would expect that an additional "is_interface" field is added, so that we could write the following:
{% if property.type.is_model -%} // "is_model" field already existing
// the property is a model
{% elif property.type.is_interface -%} // "is_interface" field not existing yet
// the property is a sub interface
{% endif %}
Fixed by https://github.com/Pelagicore/qface/pull/49
Seems to be okay. Done. I added some tests to validate the expectation
| gharchive/issue | 2017-06-16T12:17:47 | 2025-04-01T06:37:25.667603 | {
"authors": [
"jacky309",
"jryannel"
],
"repo": "Pelagicore/qface",
"url": "https://github.com/Pelagicore/qface/issues/48",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2583493165 | temporary variables turbowarp version bug
the "delete all runtime variables" block doesn't delete any variables, and when you click the checkbox on the "active runtime variables" reporter it doesn't show anything, but when you click the block it does
getting error
TypeError: Cannot read properties of undefined (reading 'delete')
for delete runtime var block
and error
TypeError: Cannot read properties of undefined (reading 'has')
using runtime var exists?
im getting both these errors while trying to use these blocks in a custom block, pls fix ur stuff penguinmod 😭
tf
wait, do your variable names involve periods
oh well um erm its um its well its um complicated um maybe um yes
wat do u mean by loss of tiny bit of simplicity
dangerouz optimizations worked :D
yay!
wat do u mean by loss of tiny bit of simplicity
as in, instead of get (baz) in (get (bar) in (get (foo) in (get (tempvar)))), you can just do get (tempvar.foo.bar.baz) (assuming tempvar contains {"foo":{"bar":{"baz":"hellorld!"}}}). for this feature to work it is important that the tempvar contains an object/map-like and not a json string; a map-like would be something like the pm "objects" extension in the extension gallery
why is it closed when the main issue is still present
@RedMan13 REDMANNNNNN!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! REOPEN ITTTTTTTTTTTTTTTT!!!!!!!!!!
why is it closed when the main issue from the original comment is still present
which one
ehh both need fixed anyways
lol
so, tf ?
what on skidi was making the second set of errors
also fixed the bug this thread was originally for
NOOOOOOOOOOO!!!!!!!!!! WAAAAAAAAAAAIIIIIIIIITTTTTTTTT!!!!!!! THERES ONE LAST BUGGGGGGGGGGGGGG!!!!!!!!!
@RedMan13
what on shit
oh wait do monitors never try to read the compiled version
i believe that is the case, so this should now be fixed
| gharchive/issue | 2024-10-12T23:13:18 | 2025-04-01T06:37:25.679201 | {
"authors": [
"AD1340",
"JeremyGamer13",
"RedMan13"
],
"repo": "PenguinMod/PenguinMod-Vm",
"url": "https://github.com/PenguinMod/PenguinMod-Vm/issues/74",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2462903982 | To-do list
[x] Change the names of the navigation bar items, make them plural (Frontend Templates)
[ ] Show the number of toll booths, maybe per municipality? (Frontend Templates - Backend Python)
[ ] Show the name of the municipality and the highway in addition to the code (Frontend Templates - Backend Python)
[ ] When you click on the highway or municipality code in the toll booth table, take the user to the relevant table page (Frontend JS - Backend Python)
[ ] Sorting system (Frontend JS)
[ ] Change the coordinate search system (Frontend Templates - Backend Python)
[ ] Prevent the insertion of empty codes (Frontend Templates - Frontend JS)
[ ] Add an extra confirmation prompt for deletion (Frontend JS)
[ ] Add a message for when data is updated/deleted successfully (Frontend Templates - Frontend JS)
[x] Date format to be applied (Frontend Templates - Backend Python)
Problem: adding new fields to the tables risks making them too narrow. Needs testing.
| gharchive/issue | 2024-08-13T09:45:44 | 2025-04-01T06:37:25.696634 | {
"authors": [
"socket772"
],
"repo": "PerilousBooklet/progetto-pweb2",
"url": "https://github.com/PerilousBooklet/progetto-pweb2/issues/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
619571576 | Could a change to the event hook make for a more robust count?
I was poking around Grav's docs to try and find a way to make the view counter slightly more robust/accurate; e.g. I'd like the counter not to increase when someone presses F5 (soft refreshes the page).
Would it help if, instead of incrementing the counter on onPageInitialized, instead increment it in another event like onPageContentRaw?
It might. I don't have time to test anything at the moment. Feel free to give it a shot. Pull requests are always welcome.
That said, it shouldn't always increment when refreshing. If the page is cached, it shouldn't be triggering the plugin. I seem to recall checking that when I last looked at this forever ago. Again, something to look at when things slow down for me.
May be onShutdown is better?
@DeFUCC on what measure would onShutdown be better? I am not very familiar with Grav's event hooks so your explanation could shed some light on this.
onShutdown
A new and very powerful event that lets you perform actions after Grav has finished processing and the connection to the client has been closed. This is particularly useful for performing actions that don't need user interaction and potentially could impact performance. Possible uses include user tracking and jobs processing.
Grav docs
So it increments the view count when the user is actually viewing the page ) I've interchanged the Event in my version of the plugin and it works fine. ;)
@DeFUCC sure, but why is it better than onPageInit or onPageContentRaw?
It doesn't take resources on page loading time, may be? I'm new to Grav and not an expert in PHP, so it's just an idea about improving performance of all those plugins I added )
| gharchive/issue | 2020-05-16T23:03:14 | 2025-04-01T06:37:25.703895 | {
"authors": [
"DeFUCC",
"Perlkonig",
"RojerGS"
],
"repo": "Perlkonig/grav-plugin-count-views",
"url": "https://github.com/Perlkonig/grav-plugin-count-views/issues/15",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
300055400 | Experimental feature: Alias calls
Related pull request: #55
You can define a type alias as follows:
type C_DOTA_BaseNPC = CDOTA_BaseNPC;
Then an explicit cast would cause the transpiler to use the alias instead of the original class. For example
unit.GetMana() ==> CDOTA_BaseNPC.GetMana(unit)
(<C_DOTA_BaseNPC>unit).GetMana() ==> C_DOTA_BaseNPC.GetMana(unit)
From now on this experimental feature will be merged into the main branch to test. Please report any issues with this mechanic here.
This feature will remain open until this experimental feature is reverted or added permanently.
The type aliases pollute the code completion and should probably be kept in a separate file so they can be excluded.
You should be able to simply put them in your declarations file.
For Dota they are in the general declarations file and that causes them to pop up in code completion which is annoying. If they were in a separate file people could choose to exclude them from their code completion.
Removed the need for this in 17378efff93ed72bfb7ed01016c0aeb17d5092c3
| gharchive/issue | 2018-02-25T19:54:44 | 2025-04-01T06:37:25.706825 | {
"authors": [
"Perryvw",
"zapp-brannigan-dota"
],
"repo": "Perryvw/TypescriptToLua",
"url": "https://github.com/Perryvw/TypescriptToLua/issues/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
383953262 | Citeseer data set accuracy
Hello! I used the GAT network on the Citeseer dataset, but the accuracy could not reach 72.5, only 70.3. How did you set the parameters to get such a high result?
Hello,
Thanks for your issue!
Regarding the Citeseer setup, we have found that early stopping just on the accuracy (rather than loss and accuracy) yielded better results.
Here is a relevant code segment:
if val_acc_avg/vl_step >= vacc_mx:
    vacc_early_model = val_acc_avg/vl_step
    vlss_early_model = val_loss_avg/vl_step
    saver.save(sess, checkpt_file)
    vacc_mx = np.max((val_acc_avg/vl_step, vacc_mx))
    vlss_mn = np.min((val_loss_avg/vl_step, vlss_mn))
    curr_step = 0
Hope that helps! Note that the standard deviation on Citeseer is large (0.7) so it might take multiple runs to achieve a satisfactory accuracy. For example, I had five runs in a row with 73.1%, 74.2%, 71.9%, 73.1%, 70.9% under this configuration.
Thanks,
Petar
Thank you very much for the reply.
Thanks,
Xu Haiyun
| gharchive/issue | 2018-11-24T02:49:55 | 2025-04-01T06:37:25.722606 | {
"authors": [
"PetarV-",
"xuhaiyun42"
],
"repo": "PetarV-/GAT",
"url": "https://github.com/PetarV-/GAT/issues/14",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1110097000 | 🛑 Landing is down
In 5660cc2, Landing (https://thepetra.co/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Landing is back up in 4aa8094.
| gharchive/issue | 2022-01-21T06:15:20 | 2025-04-01T06:37:25.731791 | {
"authors": [
"petraafrica"
],
"repo": "PetraHQ/status",
"url": "https://github.com/PetraHQ/status/issues/46",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1092667210 | patient-ssapp v0.10.10 the PharmaLedger logo should navigate to the dashboard
The PharmaLedger logo at the top right should act as a home button.
c3e58dc does the job, but the mouse over the image does not change to "hand"...
@lehialessandro (low-pri) do you know how to solve the mouse hover change ?
I've tried ion-button around the image, but the colors/bg of the image change.
I've tried ion-anchor, but the mouse-over did not change at all.
| gharchive/issue | 2022-01-03T16:58:46 | 2025-04-01T06:37:25.748361 | {
"authors": [
"joaoluis-pdm"
],
"repo": "PharmaLedger-IMI/ctr-workspace",
"url": "https://github.com/PharmaLedger-IMI/ctr-workspace/issues/80",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1382306534 | Redesign this page using ionic cards component
It should be similar to the cards used for studies, with more details added (e.g. visit date and eventually the possibility of rescheduling).
| gharchive/issue | 2022-09-22T11:40:03 | 2025-04-01T06:37:25.750187 | {
"authors": [
"Mastaleru"
],
"repo": "PharmaLedger-IMI/eco-iot-pmed-workspace",
"url": "https://github.com/PharmaLedger-IMI/eco-iot-pmed-workspace/issues/511",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1328474329 | (1) LOW - Client_Use_Of_Iframe_Without_Sandbox
From #94
add the proper iframe sandbox configs to the created iframe for the wallet:
//iframe.setAttribute("sandbox", "allow-scripts allow-same-origin allow-forms"); // for instance
The code that produces this warning is not used in production (and is originally from Romsoft's implementation).
It is used for the web representation of the wallets.
We will address this, for reference's sake, but it's low priority
| gharchive/issue | 2022-08-04T11:23:15 | 2025-04-01T06:37:25.752412 | {
"authors": [
"TiagoV-PDMFC"
],
"repo": "PharmaLedger-IMI/fgt-workspace",
"url": "https://github.com/PharmaLedger-IMI/fgt-workspace/issues/109",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1979519252 | Regulated voltage level?
I come here from this issue and am interested in integrating this into my Redox. While looking through the repo I got confused about the regulated voltage of the most recent design.
At the bottom of the README it says "Change regulated voltage from 3.3V to 3V", but the schematic mentions a "3.3V regulator" with no further specification of VREG. So – which is it?
I'm asking because my current hypothesis to mitigate connectivity issues I'm facing is running the Redox at a slightly elevated voltage.
V1.0 has a 3.3V regulator. V1.2 and V1.3 used a 3V regulator. In theory, you get better battery life with a 3V regulator, since you can use the 4.2V to 3.0V range of the battery. I don't know if it actually makes a significant difference in practice, since the Li-po's voltage is not linear with the state of charge. So if you're worried about low voltage, using the 3.3V should be fine. The microcontroller (nRF51822) used by the Redox supports an input voltage between 1.8V and 3.6V.
| gharchive/issue | 2023-11-06T16:07:53 | 2025-04-01T06:37:25.759325 | {
"authors": [
"PhiBabin",
"neopostmodern"
],
"repo": "PhiBabin/Redox-Lipo-Adapter",
"url": "https://github.com/PhiBabin/Redox-Lipo-Adapter/issues/3",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
647378975 | log_value sets step to nothing on default
https://github.com/PhilipVinc/TensorBoardLogger.jl/blob/08f57d854ca1b32cc4cfb0a24881d57ec9f5bb1c/src/Loggers/LogValue.jl#L8
Hello, contrary to the description and probably the expected behavior, the method log_value sets the argument step to nothing by default. As suggested in the description, a more natural default value for the argument would be step(logger). I believe this was intended but somehow accidentally omitted during development.
Cheers,
Hi wattik,
While step is set to nothing instead of the current step, the behaviour is as documented, since later in the serialisation chain nothing gets converted to step(logger).
See https://github.com/PhilipVinc/TensorBoardLogger.jl/blob/08f57d854ca1b32cc4cfb0a24881d57ec9f5bb1c/src/event.jl#L10
Indeed our implementation is weird; the code should be cleaned up and the conversion should be moved up to improve the code quality.
I have very little time these days, but if you want to pick this up, I'll be fast in reviewing.
| gharchive/issue | 2020-06-29T13:31:05 | 2025-04-01T06:37:25.778402 | {
"authors": [
"PhilipVinc",
"wattik"
],
"repo": "PhilipVinc/TensorBoardLogger.jl",
"url": "https://github.com/PhilipVinc/TensorBoardLogger.jl/issues/71",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2330322516 | Best way to upgrade a Docker instance?
I was wondering if you have tips to upgrade a Mantis instance running in Docker? Without losing existing data.
Just update the git repo and restart the ./docker-setup-ubuntu.sh script to build new images?
Tx!
Hey @xme
You need to get the latest Mantis repo (clone), re-run the setup script. The setup script will detect existing Mantis setup and provides you options on how to proceed. Choose an option where MongoDB instance is not deleted so that your existing data and the MongoDB instance are preserved.
Hi Prateek,
Nice, it ran smoothly! Tx!
It does not seem to be critical, Mantis works, but I got this error at the end of the setup script:
jq: error (at <stdin>:1): Cannot index array with string "Service"
This error occurs because of changes to docker compose ps command output in the latest docker compose versions. Can you update your docker compose to the latest version and try the same? It should resolve it.
| gharchive/issue | 2024-06-03T06:54:02 | 2025-04-01T06:37:25.803890 | {
"authors": [
"0xbharath",
"Prateek-Thakare",
"xme"
],
"repo": "PhonePe/mantis",
"url": "https://github.com/PhonePe/mantis/issues/26",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2736487430 | [Lecture 11_2][12/13] CycleGAN question
What does "target_fake" mean in the process of calculating "loss_D_fake" for discriminator A? I understand "pred_fake" as the generated fake A, but I was pondering the meaning of "target_fake" and decided to ask for clarification.
In the implementation code in https://github.com/aitorzip/PyTorch-CycleGAN/blob/master/train ,
# Inputs & targets memory allocation
Tensor = torch.cuda.FloatTensor if opt.cuda else torch.Tensor
input_A = Tensor(opt.batchSize, opt.input_nc, opt.size, opt.size)
input_B = Tensor(opt.batchSize, opt.output_nc, opt.size, opt.size)
target_real = Variable(Tensor(opt.batchSize).fill_(1.0), requires_grad=False)
target_fake = Variable(Tensor(opt.batchSize).fill_(0.0), requires_grad=False)
target_real represents the label 1 (real) for real data, while target_fake represents the label 0 (fake) for fake data. In this implementation, criterion_GAN is defined as MSELoss (Mean Squared Error Loss), which trains the model to minimize the difference between the predicted value and the target label.
Specifically, the Discriminator is trained to output 1 for real data and 0 for fake data. Here, target_fake serves as the label "0" indicating fake data, and it is used in the calculation of loss_D_fake. This ensures that the Discriminator learns to correctly distinguish fake data from real data and avoids misclassifying fake data as real.
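Concretely, the discriminator-A update in that implementation looks roughly like the sketch below (paraphrased from the linked repo; netD_A, real_A, and fake_A follow its naming, and criterion_GAN is the MSELoss defined above):

```python
# Real images should be scored as 1 (target_real)
pred_real = netD_A(real_A)
loss_D_real = criterion_GAN(pred_real, target_real)

# Generated images should be scored as 0 (target_fake)
pred_fake = netD_A(fake_A.detach())
loss_D_fake = criterion_GAN(pred_fake, target_fake)

# Average the two terms and backpropagate
loss_D_A = (loss_D_real + loss_D_fake) * 0.5
loss_D_A.backward()
```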
@nassunii @jaein4722
Thank you for the question and the answer:)
Thank you for the great answer, @jaein4722 . It seems my additional comments are not needed.
| gharchive/issue | 2024-12-12T17:22:03 | 2025-04-01T06:37:25.867861 | {
"authors": [
"jaein4722",
"nassunii",
"yjyoo3312"
],
"repo": "PiLab-CAU/ImageProcessing-2402",
"url": "https://github.com/PiLab-CAU/ImageProcessing-2402/issues/60",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
110640131 | Unsupported major.minor version 52.0
buck install app
Using watchman.
Using buckd.
BUILD FAILED: com/android/dx/command/dexer/Main : Unsupported major.minor version 52.0
[-] PROCESSING BUCK FILES...FINISHED 0.5s [100%]
[-] BUILDING...FINISHED 1.3s [100%] (14/43 JOBS, 13 UPDATED, 30.2% CACHE MISS)
BUILD FAILED: com/android/dx/command/dexer/Main : Unsupported major.minor version 52.0
I think this is neither buck's nor OkBuck's problem; you seem to be using JDK 1.8 to build your Android project.
Google your error message to find more details.
| gharchive/issue | 2015-10-09T11:34:20 | 2025-04-01T06:37:25.869878 | {
"authors": [
"Piasy",
"androidmalin"
],
"repo": "Piasy/OkBuck",
"url": "https://github.com/Piasy/OkBuck/issues/18",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
1772992934 | Upgrade Swagger 2.2.12 -> 2.2.13
This PR contains the following updates:
| Package | Type | Update | Change |
| --- | --- | --- | --- |
| Swagger | compile | patch | 2.2.12 -> 2.2.13 |
Release Notes
swagger-api/swagger-core
v2.2.13: Swagger-core 2.2.13 released!
Compare Source
fix: makes populating instance variables accessible to subclasses (#4434)
OAS 3.1 - properties and ref as siblings / fix ModelConvertes usage (#4433)
support custom annotation for containers (#4429)
[ ] If you want to rebase/retry this PR, check this box
Warning
Renovate's suggested commit message is being replaced with improved initial commits to enable automerging. As a side effect, these suggested commit messages might have changed. Consider comparing the initial commit message and this message to determine the most suitable one. Please leave feedback in #sys-renovate.
Suggested commit message:
Upgrade Swagger 2.2.12 -> 2.2.13
See:
- https://github.com/swagger-api/swagger-core/releases/tag/v2.2.13
- https://github.com/swagger-api/swagger-core/compare/v2.2.12...v2.2.13
| gharchive/pull-request | 2023-06-25T01:01:11 | 2025-04-01T06:37:25.879244 | {
"authors": [
"Picnic-Bot"
],
"repo": "PicnicSupermarket/error-prone-support",
"url": "https://github.com/PicnicSupermarket/error-prone-support/pull/698",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1164921328 | Allow training of wake words for older versions via Picovoice console
Is your feature request related to a problem?
Wake words trained for Porcupine v2.0 won't work with v2.1 and vice versa. Since developers cannot always update right away and release new versions they depend on support for older versions at least for some transition period. Currently it is not possible to train v2.0 wake words anymore via the online console which effectively breaks the custom wake word feature for all apps and devices that still have to run v2.0.
Describe the solution you'd like
Let users train custom wake words for older versions (v2.0 atm) via the console or release offline tools to train older wake words.
you can keep using trained v2.0 models, no?
Yes, but users who are just getting started cannot create custom wake words at the moment :-(
got it. we don't have plans to provide this at the moment as it incurs lots of ops and almost all customers are either happy to stay with already trained (older) models or upgrade to the newest version. I will keep this in mind for the future but closing now as there is no immediate action.
Ok. I'm about to update to 2.1 soon ... any immediate plans for 2.2? ^^ :see_no_evil:
| gharchive/issue | 2022-03-10T08:41:13 | 2025-04-01T06:37:25.882777 | {
"authors": [
"fquirin",
"kenarsa"
],
"repo": "Picovoice/porcupine",
"url": "https://github.com/Picovoice/porcupine/issues/680",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
342870907 | Add pagination
Add pagination for better performance. Currently, the app is getting the 100 latest articles with a GET on http://thinkerview.com/wp-json/wp/v2/posts?categories=9&per_page=100 instead of http://thinkerview.com/wp-json/wp/v2/posts?categories=9&?page=1 and http://thinkerview.com/wp-json/wp/v2/posts?categories=9&?page=2 etc...
Done in 1.3.0
| gharchive/issue | 2018-07-19T20:07:02 | 2025-04-01T06:37:25.886128 | {
"authors": [
"PierreBresson"
],
"repo": "PierreBresson/Thinkerview",
"url": "https://github.com/PierreBresson/Thinkerview/issues/6",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
320979099 | readme: fill minimum info about the project
At least write what this project is about.
@Quasilyte, maybe close this issue?
| gharchive/issue | 2018-05-07T22:22:25 | 2025-04-01T06:37:25.887140 | {
"authors": [
"Quasilyte",
"fexolm"
],
"repo": "PieselBois/kfulint",
"url": "https://github.com/PieselBois/kfulint/issues/16",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2724588063 | Added PinMAME functions script & helper files
Allows to control ROM volume & DMD of PinMAME conveniently from PinballY
Showpindmd (0 / 1) shows or hides the DMD controlled by PinMAME. Disabling the DMD is useful for older ROM-based tables with alphanumeric displays, where PinMAME only shows basic numbers on the DMD.
Volume (0 to -32) attenuates the volume of the ROM sound from PinMAME. This allows balancing the volume coming from the backbox speakers against the volume of the playfield sound. This is for older ROM-based tables which do not have a DMD menu with volume control. 0 is the loudest (default). I have never needed values below -16, so this is the range shown in the menu. It is usually sufficient to go in steps of 2, as I have never needed finer steps.
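A hypothetical call site, assuming both functions are in scope once the script is loaded:

```javascript
Showpindmd(0);  // hide the PinMAME DMD for alphanumeric-era tables
Volume(-8);     // attenuate ROM audio; 0 is loudest, steps of 2 usually suffice
```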
The only change that needs to be done to the script is the scriptpath itself. It defaults to "C:\PinballY\Scripts" which should work out of the box for most installations.
The registry key for the ROM is retrieved from PinballY's metadata. In my experience, this works quite well for 95% of the tables that I have used it for so far.
The changes to the Windows registry are being done by a small helper application, built with AutoIt3. You can use the provided EXE or build it yourself from source.
@mjrgh
Not sure if you noticed this PR. You might want to have a look at it.
| gharchive/pull-request | 2024-12-07T13:11:49 | 2025-04-01T06:37:25.897450 | {
"authors": [
"jueank"
],
"repo": "PinballY/PinballY-Addons-and-Examples",
"url": "https://github.com/PinballY/PinballY-Addons-and-Examples/pull/22",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
108588150 | Couldn't build "Debug" configuration in VS 2015
Building in "Debug" configuration returns 1519 errors on my machine using VS 2015, and this was the default build configuration when I opened the solution. Switching to "Debug-CI" or "Release" fixes the errors. We should maybe remove the "Debug" configuration? Or merge it with "Debug-CI"?
Paste the error log somewhere (gist or pastebin). Debug configuration is pretty much identical to Release, Debug-CI is a special configuration for AppVeyor.
Hmm... couldn't repro today... will keep an eye on it :-)
| gharchive/issue | 2015-09-28T04:38:25 | 2025-04-01T06:37:26.019239 | {
"authors": [
"vosen",
"yacoder"
],
"repo": "PistonDevelopers/VisualRust",
"url": "https://github.com/PistonDevelopers/VisualRust/issues/185",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
66436032 | Region file (.mca) handling.
DO NOT MERGE YET.
First step towards having a chunk loader.
[x] .mca file reading.
[ ] .mca file writing.
[ ] Chunk loader capabilities.
File writing is utterly incomplete.
@toqueteos I'm working on #[derive] functionality for NBT file formats that should make this task massively easier.
Hmmm... Is #[derive] customizable now? That's awesome!
#[derive(NbtFmt)] has landed now.
I'm closing this PR in favor of a new updated one.
| gharchive/pull-request | 2015-04-05T12:52:10 | 2025-04-01T06:37:26.029201 | {
"authors": [
"atheriel",
"fenhl",
"toqueteos"
],
"repo": "PistonDevelopers/hematite_server",
"url": "https://github.com/PistonDevelopers/hematite_server/pull/76",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
113438428 | Return error on invalid bitmap height
Fixes #473.
Thanks!
| gharchive/pull-request | 2015-10-26T19:57:41 | 2025-04-01T06:37:26.030413 | {
"authors": [
"mbrubeck",
"nwin"
],
"repo": "PistonDevelopers/image",
"url": "https://github.com/PistonDevelopers/image/pull/476",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
588146340 | Create guide "5 Tips for Success"
5 Tips For Success
Solid Resume
Apply EARLY
Study Interview Questions
closed this by accident. I'll work on it this week
lol dis just zero to over
| gharchive/issue | 2020-03-26T04:28:45 | 2025-04-01T06:37:26.033495 | {
"authors": [
"Zmwang622",
"azharichenko"
],
"repo": "PittCSWiki/pittcswiki",
"url": "https://github.com/PittCSWiki/pittcswiki/issues/17",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
930893914 | Add autofocus to search bar
Love the project!
This is a very minor change: I just added autofocus to the search bar, so whenever you open the task menu you can automatically start typing without having to click on the search bar first, just like in real Windows.
Thanks for the PR. It looks like we're facing some issues with it.
Hey guys, my project had the same issue, but I fixed it. It happens because we focus the search before the start menu animation. Try focusing after the animation using
setTimeout(() => { startmenu.focus(); }, animationduration)
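A slightly more robust variant of the same idea, assuming the start menu opens via a CSS transition (element names are illustrative):
// Focus the search input once the start menu's open transition finishes,
// instead of guessing the animation duration.
function focusSearchAfterOpen(startMenu, searchInput) {
  startMenu.addEventListener('transitionend', () => searchInput.focus(), { once: true });
}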
Ohh Great! Thanks for sharing the solution!
In case you wanna give a PR do let me know.
No, I can't. I don't know any frameworks; I am comfortable only with plain JS.
| gharchive/pull-request | 2021-06-27T10:00:45 | 2025-04-01T06:37:26.052893 | {
"authors": [
"AnTheMaker",
"PiyushSuthar",
"Rajaniraiyn"
],
"repo": "PiyushSuthar/Windows-11-Web",
"url": "https://github.com/PiyushSuthar/Windows-11-Web/pull/3",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
976981587 | Generic MQTT: support certificate based auth
https://github.com/PlaceOS/drivers/blob/master/drivers/place/mqtt.cr currently supports only username/password authentication.
A current project requires publishing device state to an MQTT broker on the internet that mandates certificate-based authentication.
Could we add support for certificate-based auth?
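For reference, certificate-based auth usually means handing a client certificate and key to the TLS layer instead of (or alongside) credentials. A sketch with MQTT.js rather than the Crystal driver (broker URL and file paths are placeholders):
const fs = require('fs');
const mqtt = require('mqtt');
// Mutual TLS: the broker authenticates the client by its certificate.
const client = mqtt.connect('mqtts://broker.example.com:8883', {
  key: fs.readFileSync('client.key'),   // client private key
  cert: fs.readFileSync('client.crt'),  // client certificate
  ca: fs.readFileSync('ca.crt'),        // CA that signed the broker's certificate
  rejectUnauthorized: true,
});
client.on('connect', () => client.publish('placeos/state', '{"ok":true}'));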
fyi @jeremy-west I'll assign this to Steve as he created this MQTT driver and the lib which it uses (https://github.com/spider-gazelle/crystal-mqtt). Can we allocate some of his time to it over the coming weeks?
Sorry, I deliberately didn't raise this earlier, as the original documentation I was given for the remote MQTT broker stated that certificate-based auth was OPTIONAL (password auth could be used instead).
But that MQTT service has now rebranded, and their new documentation states a requirement for certificate-based auth.
It's a good thing to support anyway and I'm sure we will see it required again soon.
| gharchive/issue | 2021-08-23T12:30:31 | 2025-04-01T06:37:26.055773 | {
"authors": [
"w-le"
],
"repo": "PlaceOS/drivers",
"url": "https://github.com/PlaceOS/drivers/issues/240",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1969988808 | It's useless on mac
I deployed it on my Mac (Apple M1, 8GB, Ventura 13.5). When I use it, it always runs for a while, then:
VALL-E EOS [413 -> 727]
libc++abi: terminating due to uncaught exception of type c10::Error: Unsupported type byte size: ComplexFloat
Exception raised from getGatherScatterScalarType at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/View.mm:758 (most recent call first):
frame #0: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) + 92 (0x16a4f92b8 in libc10.dylib)
frame #1: at::native::mps::getGatherScatterScalarType(at::Tensor const&) + 304 (0x28e923150 in libtorch_cpu.dylib)
frame #2: invocation function for block in at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 128 (0x28e924ca0 in libtorch_cpu.dylib)
frame #3: dispatch_client_callout + 20 (0x19acb4400 in libdispatch.dylib)
frame #4: dispatch_lane_barrier_sync_invoke_and_complete + 56 (0x19acc397c in libdispatch.dylib)
frame #5: at::native::mps::gatherViewTensor(at::Tensor const&, at::Tensor&) + 888 (0x28e923838 in libtorch_cpu.dylib)
frame #6: at::native::mps::mps_copy(at::Tensor&, at::Tensor const&, bool) + 3096 (0x28e87ab58 in libtorch_cpu.dylib)
frame #7: at::native::copy_impl(at::Tensor&, at::Tensor const&, bool) + 1944 (0x28a5f7604 in libtorch_cpu.dylib)
frame #8: at::native::copy(at::Tensor&, at::Tensor const&, bool) + 100 (0x28a5f6dac in libtorch_cpu.dylib)
frame #9: at::ops::copy::call(at::Tensor&, at::Tensor const&, bool) + 288 (0x28b32d718 in libtorch_cpu.dylib)
frame #10: at::native::clone(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 444 (0x28a981f84 in libtorch_cpu.dylib)
frame #11: at::_ops::clone::call(at::Tensor const&, c10::optional<c10::MemoryFormat>) + 276 (0x28b03b0c4 in libtorch_cpu.dylib)
frame #12: at::_ops::contiguous::call(at::Tensor const&, c10::MemoryFormat) + 272 (0x28b45fa60 in libtorch_cpu.dylib)
frame #13: at::TensorBase::__dispatch_contiguous(c10::MemoryFormat) const + 40 (0x28a447130 in libtorch_cpu.dylib)
frame #14: at::native::mps::binaryOpTensor(at::Tensor const&, at::Tensor const&, c10::Scalar const&, at::Tensor const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, MPSGraphTensor* (at::native::mps::BinaryOpCachedGraph*, MPSGraphTensor*, MPSGraphTensor*) block_pointer) + 968 (0x28e863330 in libtorch_cpu.dylib)
frame #15: at::native::structured_mul_out_mps::impl(at::Tensor const&, at::Tensor const&, at::Tensor const&) + 128 (0x28e8673f0 in libtorch_cpu.dylib)
frame #16: at::(anonymous namespace)::wrapper_MPS_mul_Tensor(at::Tensor const&, at::Tensor const&) + 140 (0x28c003ea8 in libtorch_cpu.dylib)
frame #17: at::_ops::mul_Tensor::call(at::Tensor const&, at::Tensor const&) + 284 (0x28ae41898 in libtorch_cpu.dylib)
frame #18: torch::autograd::THPVariable_mul(_object*, _object*, _object*) + 396 (0x1781f82dc in libtorch_python.dylib)
frame #19: object* torch::autograd::TypeError_to_NotImplemented<&torch::autograd::THPVariable_mul(_object*, _object*, _object*)>(_object*, _object*, _object*) + 12 (0x178154330 in libtorch_python.dylib)
frame #20: method_vectorcall_VARARGS_KEYWORDS + 144 (0x104b77f88 in Python)
frame #21: vectorcall_maybe + 104 (0x104bd5824 in Python)
frame #22: slot_nb_multiply + 148 (0x104bd2588 in Python)
frame #23: binary_op1 + 228 (0x104b5021c in Python)
frame #24: PyNumber_Multiply + 36 (0x104b5082c in Python)
frame #25: _PyEval_EvalFrameDefault + 51104 (0x104c467d0 in Python)
frame #26: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #27: method_vectorcall + 164 (0x104b6e0c0 in Python)
frame #28: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python)
frame #29: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #30: _PyObject_FastCallDictTstate + 96 (0x104b6afe8 in Python)
frame #31: slot_tp_call + 180 (0x104bd076c in Python)
frame #32: _PyObject_MakeTpCall + 128 (0x104b6ad3c in Python)
frame #33: _PyEval_EvalFrameDefault + 40584 (0x104c43eb8 in Python)
frame #34: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #35: _PyVectorcall_Call + 152 (0x104b6b82c in Python)
frame #36: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python)
frame #37: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #38: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python)
frame #39: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #40: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python)
frame #41: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #42: _PyObject_VectorcallTstate.4608 + 88 (0x104c62a28 in Python)
frame #43: context_run + 92 (0x104c628e4 in Python)
frame #44: cfunction_vectorcall_FASTCALL_KEYWORDS + 76 (0x104bb3a00 in Python)
frame #45: _PyEval_EvalFrameDefault + 48300 (0x104c45cdc in Python)
frame #46: _PyEval_Vector + 116 (0x104c48564 in Python)
frame #47: method_vectorcall + 380 (0x104b6e198 in Python)
frame #48: thread_run + 168 (0x104cfaad4 in Python)
frame #49: pythread_wrapper + 48 (0x104c9c1cc in Python)
frame #50: _pthread_start + 148 (0x19ae63fa8 in libsystem_pthread.dylib)
frame #51: thread_start + 8 (0x19ae5eda0 in libsystem_pthread.dylib)
[1] 36424 abort python3 -X utf8 launch-ui.py
It even uses 20GB of my RAM, shown as "zipped" (I don't know what that means)!
Please optimize it.
ComplexFloat seems to be a common problem reported by several Mac users, but I personally don't have a MacBook to do debugging.
So, I apologize that currently I'm unable to fix this problem.
ok thanks
| gharchive/issue | 2023-10-31T09:17:14 | 2025-04-01T06:37:26.078073 | {
"authors": [
"Plachtaa",
"zhou20120904"
],
"repo": "Plachtaa/VALL-E-X",
"url": "https://github.com/Plachtaa/VALL-E-X/issues/129",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
152562084 | Why does your 9.2 upgrade section use an outdated version of rxTools?
There is no reason not to use the newer version; load it from http://dukesrg.github.io/?rxTools/sys/code.bin rather than reboot.ms.
Very old system versions cannot run newer rxTools versions.
| gharchive/issue | 2016-05-02T14:15:11 | 2025-04-01T06:37:26.082665 | {
"authors": [
"Ketchup901",
"Plailect"
],
"repo": "Plailect/Guide",
"url": "https://github.com/Plailect/Guide/issues/107",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
168249363 | Black screen after restoring RedNand
On my New 3DS I get a black screen on boot after finishing the A9LH installation and restoring the RedNAND; the sysNAND and the sysNAND from part 2 don't work either. A9LH has installed correctly (I tested it with the tester package, and it powers off the console just fine).
Any idea what could be wrong?
Copy Luma's files again, make sure your options are right.
| gharchive/issue | 2016-07-29T04:39:58 | 2025-04-01T06:37:26.084211 | {
"authors": [
"Plailect",
"SuperStuck"
],
"repo": "Plailect/Guide",
"url": "https://github.com/Plailect/Guide/issues/274",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
101993492 | Strlenfix
This avoids the crash on gnews.bin, but unfortunately I haven't been able to confirm it works since I ran out of memory.
Attempting to run this code now on an Amazon cluster where I'd previously gotten the error this is supposed to fix. Not sure how long it'll take but I'll update when it's done.
w2v.loadModel('../GoogleNews-vectors-negative300.bin', function(err, model){
console.log('model',model);
});
Thanks for the commit & the fix of the string length. I am thinking that maybe we can remove the slice operation when creating a new WordVec instance. Apparently, node Buffers are allocated in memory outside of the V8 heap, so if we avoid creating a shallow copy and would instead just provide a new view on the underlying data, this might help. I made some little changes to the code to facilitate this and merged it into the master branch.
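The view-versus-copy distinction in question, sketched with a typed array (names are illustrative, not the actual WordVec code):
const dims = 300;
const all = new Float32Array(1000 * dims); // one big backing store for all vectors
const i = 42; // index of some word
// .slice() copies the elements into a brand-new buffer:
const copied = all.slice(i * dims, (i + 1) * dims);
// .subarray() only creates a view onto the same memory, so nothing is copied:
const view = all.subarray(i * dims, (i + 1) * dims);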
Oh excellent. I'll use the current master branch and give it a shot now.
$ node index.js --max_old_space_size 4096 > out.txt
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Aborted (core dumped)
Even with the optimization it's still hitting 4GB of memory usage and dumping.
Thanks for trying out the updated code! You were right from the start, and it seems that we might not be able to get this working without a major rewrite of the code, which could utilize either multiple node processes or native C++ code via an add-on. I am a bit at my wit's end, but will let all of you know in case I come up with something in the future.
This might magically get fixed by the upcoming Node "4.0" release, which you can read about here:
https://medium.com/node-js-javascript/4-0-is-the-new-1-0-386597a3436d
(io.js is a fork of node with lots of improvements that is getting folded back into the trunk)
| gharchive/pull-request | 2015-08-19T20:40:40 | 2025-04-01T06:37:26.091277 | {
"authors": [
"Planeshifter",
"dariusk",
"oskarflordal"
],
"repo": "Planeshifter/node-word2vec",
"url": "https://github.com/Planeshifter/node-word2vec/pull/6",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1154623313 | MonitoringApplicationConfiguration doesn't work
Even if you specify an argument for -MonitoringApplicationConfiguration in New-PfBuild, nothing gets sent to the server
So it turns out I was wrong: if you pass in a correctly-shaped object, it works. But if the passed-in object isn't exactly correct, AutoRest just silently ignores it. Which is terrible, and I can't find any switches in AutoRest to change this behavior.
This affects all arguments, not just MonitoringApplicationConfiguration
| gharchive/issue | 2022-02-28T22:35:42 | 2025-04-01T06:37:26.115381 | {
"authors": [
"brianwp3000"
],
"repo": "PlayFab/MpsPowershell",
"url": "https://github.com/PlayFab/MpsPowershell/issues/8",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1506268098 | Add namespaces to Groups
Currently, tables that are passed to be tied to a key are tied as singular states, as opposed to creating a new namespace.
Namespaces are a very powerful feature in the centralized-state paradigm, since you sometimes have to divide multiple states into places according to their relevance. However, how can we explicitly define that a key is a namespace key rather than a normal one? After all, there is nothing that helps developers know whether a key is a namespace key or a normal one.
However, namespaces can also be created by nesting groups and treating each group as a new namespace, which can look something like this (this depends on #9):
local centerGroup = Group({
Namespace1 = Group({
key1 = "hi"
}, AlwaysTrue)
}, AlwaysTrue)
This instead looks like a far superior solution to me, as it avoids implementing a new feature that risks causing issues with code understandability. Plus, we can actually create a specific processor for each namespace!
As previously mentioned, namespaces can be created as groups, which allows setting a new processor for each namespace, and it avoids designing a new API for native namespaces.
This is rejected.
| gharchive/issue | 2022-12-21T13:04:24 | 2025-04-01T06:37:26.129356 | {
"authors": [
"sinlerdev"
],
"repo": "Plothan/Vinum",
"url": "https://github.com/Plothan/Vinum/issues/10",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1967359962 | ACE-zh
Hello, dear author.
I encountered a problem with Chinese data processing. I tried to run bash ./scripts/process_ace05ep.sh with the English content changed to Chinese, but an error occurred, as shown in the figure below.
I have successfully preprocessed the English data. Can you provide some help? Thank you very much.
Hi, thanks for your interest in our work. For this work, we do not consider Chinese data. If you are interested in this part, you can check another work and its preprocessing script: https://github.com/PlusLabNLP/X-Gear/tree/main/preprocessing
| gharchive/issue | 2023-10-30T02:44:07 | 2025-04-01T06:37:26.147859 | {
"authors": [
"Eliauk-TiAmo",
"ej0cl6"
],
"repo": "PlusLabNLP/DEGREE",
"url": "https://github.com/PlusLabNLP/DEGREE/issues/19",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
96884962 | Support Python 2.7, 3.3 and 3.4
Uses the future package to support Python 2.7, 3.3 and 3.4 from a single code base.
Also tested with LCP code base with both unit and integration tests using Python 2.7.
github says this branch has conflicts
The conflict is with #43 which is also unmerged.
Not sure of the GH way of dealing with this? Accept PR #43 and then I can sort out the conflict and push again to this PR?
Done!
The decrease in coverage is due to Python 2/3 handling such as try/except on imports to handle differences in package names.
Let me know what you'd like me to do with this PR.
Ideally we'd be able to make coveralls look at the aggregate coverage across all different python versions, but I don't think that needs to block this pull request.
| gharchive/pull-request | 2015-07-23T19:07:49 | 2025-04-01T06:37:26.283240 | {
"authors": [
"bradsokol",
"greatestape"
],
"repo": "Points/PyLCP",
"url": "https://github.com/Points/PyLCP/pull/44",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
60261898 | Update travis badge
Minor update.
Indeed. Thank you @MitMaro.
| gharchive/pull-request | 2015-03-08T15:42:06 | 2025-04-01T06:37:26.312515 | {
"authors": [
"MitMaro",
"mightyiam"
],
"repo": "PolicyStat/combokeys",
"url": "https://github.com/PolicyStat/combokeys/pull/17",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
133986351 | Simple BleClient mocking
RxBleClient can now be mocked like this:
rxBleClient = new RxBleClientMock.Builder()
.deviceMacAddress("AA:BB:CC:DD:EE:FF")
.deviceName("TestDevice")
.rssi(42)
.rxBleDeviceServices(
new RxBleClientMock.ServicesBuilder()
.addService(
UUID.fromString("00001234-0000-0000-8000-000000000000"),
new RxBleClientMock.CharacteristicsBuilder()
.addCharacteristic(
UUID.fromString("00002a29-0000-1000-8000-00805f9b34fb",
"SomeData".getBytes(),
new RxBleClientMock.DescriptorsBuilder()
.addDescriptor(UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"), "SomeDescriptor".getBytes())
.build()
).build()
).build()
).build();
Code review fixes applied, now mocking client looks like this:
rxBleClient = new RxBleClientMock.Builder()
.deviceMacAddress("AA:BB:CC:DD:EE:FF")
.deviceName("TestDevice")
.scanRecord("ScanRecord".getBytes())
.rssi(42)
.addService(
UUID.fromString("00001234-0000-0000-8000-000000000000"),
new RxBleClientMock.CharacteristicsBuilder()
.addCharacteristic(
UUID.fromString("00002a29-0000-1000-8000-00805f9b34fb"),
"CharacteristicData".getBytes(),
new RxBleClientMock.DescriptorsBuilder()
.addDescriptor(
UUID.fromString("00002902-0000-1000-8000-00805f9b34fb"),
"DescriptorData".getBytes()
).build()
).build()
).build();
Please verify again.
Looks good to me. What do you think @dariuszseweryn?
I'm not really convinced that Mocking functionality should be introduced in this project. Maybe some kind of extension would be better (like rxjava and rxandroid -> rxandroidble and rxandroidblemock)
This is added as a separate module. If someone wants to use it then he will need to add an additional dependency (exactly like MockWebServer from OkHttp).
Please configure publishing settings in gradle.properties.
Fixes have been applied.
Good points @dariuszseweryn. Thanks. Should be ok now.
I've implemented mocking characteristic notifications. API looks like this:
rxBleClient = new RxBleClientMock.Builder()
//....
.notificationSource(characteristic_UUID, observable)
.build();
You can pass a subject as a notification source; then, when you call getNotification(characteristic_UUID), you will get a notification every time you call onNext() on your subject. See updated tests.
@dariuszseweryn could you check the logic in createCharacteristicNotificationObservable, especially the cache and share operators at the end? I've mimicked the behavior of the RxBleConnectionImpl implementation, but it's quite confusing so I'm not completely sure if I did it right.
I've implemented mocking connection status. API looks like this:
rxBleClient = new RxBleClientMock.Builder()
//....
.connectionStateSource(Subject<RxBleConnection.RxBleConnectionState>)
.build();
You can then subscribe to rxBleDevice.getConnectionState() to get notifications about connection state. Subject that is being passed as a parameter allows to change current connection state - see updated test.
Such functionality has no sense. It will be changed in the future. You can ignore it.
I've added support for simulating device disconnection. Now you will get a CONNECTED status when you subscribe to RxBleConnection and you can simulate a situation when device has disconnected itself. You can do it by calling rxBleClient.disconnect() method. State will change to DISCONNECTED and BleDisconnectedException error will be emited.
LGTM, merging.
| gharchive/pull-request | 2016-02-16T13:49:35 | 2025-04-01T06:37:26.320267 | {
"authors": [
"dariuszseweryn",
"mzgreen",
"uKL"
],
"repo": "Polidea/RxAndroidBle",
"url": "https://github.com/Polidea/RxAndroidBle/pull/3",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2712416105 | python scripts/python/cli.py does not work
Describe the bug
I don't really know what the problem is, but "python scripts/python/cli.py" doesn't work.
Look: (.venv) C:\Users\ALFA\Downloads\agents> python scripts/python/cli.py
Traceback (most recent call last):
File "C:\Users\ALFA\Downloads\agents\scripts\python\cli.py", line 4, in
from agents.polymarket.polymarket import Polymarket
ModuleNotFoundError: No module named 'agents'
What am i doing wrong ?
.
Run
export PYTHONPATH="."
and you should be fine.
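If you're on Windows (the traceback shows a Windows path), export won't exist: use set PYTHONPATH=. in cmd or $env:PYTHONPATH="." in PowerShell instead. Running the script as a module from the repo root, python -m scripts.python.cli, can also work if the directories are packages.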
| gharchive/issue | 2024-12-02T15:47:26 | 2025-04-01T06:37:26.340146 | {
"authors": [
"arekgotfryd",
"max3poloski"
],
"repo": "Polymarket/agents",
"url": "https://github.com/Polymarket/agents/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
642692916 | Question: Does html take fragments as arguments?
I could not find the answer anywhere. Unfortunately, the type of the values parameter of TemplateResult is readonly unknown[], which is not really self-explanatory. Therefore I am not sure whether I can pass DOM fragments to the html function, although I know that it works. But I would like to ask whether this could somehow cause problems.
Example:
function createHtmlTemplate(html: string) {
var template = document.createElement("template")
template.innerHTML = html.trim()
return template
}
html`${createHtmlTemplate(`<p>Hello World</p>`).content}`
Is this a valid way to use html?
Thank you!
You can use DOM Nodes as values as documented in the supported data types section: https://lit-html.polymer-project.org/guide/template-reference#supported-data-types-for-text-bindings
Passing a Node will insert that node into the DOM, so make sure that the semantics of that are what you want. In the case of a <template> element, inserting the element itself will not cause anything to render because the template contents are stored as a separate document fragment. You probably want to return template.content.
You seem to be basically re-implementing unsafeHTML without the dirty-checking though. What's your use case?
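For comparison, the built-in directive (import paths as in lit-html 1.x):
import { html, render } from 'lit-html';
import { unsafeHTML } from 'lit-html/directives/unsafe-html.js';
// unsafeHTML parses the string into DOM and dirty-checks it between renders;
// only use it with trusted markup.
render(html`${unsafeHTML('<p>Hello World</p>')}`, document.body);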
@justinfagnani thank you. I have one quick follow-up question before I close this issue: Do SVGs fall under DOM Nodes here?
| gharchive/issue | 2020-06-22T01:57:43 | 2025-04-01T06:37:26.346267 | {
"authors": [
"justinfagnani",
"timonson"
],
"repo": "Polymer/lit-html",
"url": "https://github.com/Polymer/lit-html/issues/1175",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
190802631 | Unable to query slotted nodes created by dom-repeat
When slotting a dom-repeat, I am unable to query the generated items.
Possible regression of https://github.com/Polymer/polymer/issues/2276
Live Demo
http://codepen.io/oridan/pen/JbWrxv?editors=1000
Expected Results
console log should show something like:
[x-items] [slot]
Actual Results
console log shows
[] [slot]
Browsers Affected
[x] Chrome
Versions
Polymer: 2.0-preview
webcomponents: v1
Closing after discussion in Slack. Thanks to @arthurevans' suggestion of stamping elements into the light DOM as follows:
_attachDom: function (dom)
{
this.appendChild(dom);
},
| gharchive/issue | 2016-11-21T18:45:35 | 2025-04-01T06:37:26.381157 | {
"authors": [
"TomK"
],
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/issues/4167",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
134613918 | Refactorings around how computational expressions get their arguments
Do not set splices property on userland arrays. Fixes #2415, #2350
Use value sent through the notification system (#3179)
Don't mutate splices sent into userland. As reported #3239
Should fix #3179
I've added three more commits on top of that initial commit. It seemed appropriate to not open a PR for each of them. Anyhow, please review the commits individually. Overall it's a concise change, @azakus give it a P1 ;-)
This is a nice change :+1:
LGTM from @kevinpschaaf
| gharchive/pull-request | 2016-02-18T15:38:28 | 2025-04-01T06:37:26.383470 | {
"authors": [
"azakus",
"devinivy",
"kaste"
],
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/pull/3439",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
176455506 | Fixes #3938
Fixes #3938
Separate application of listeners and host attributes.
Adds _applyListeners and _ensureAttributes as override points.
Legacy Polymer impl ensures attributes are applied to the subclass before the superclass, so that a subclass gets first crack at what attribute value should exist (this matches the behavior of Polymer 1.0).
LGTM
| gharchive/pull-request | 2016-09-12T18:56:55 | 2025-04-01T06:37:26.385170 | {
"authors": [
"kevinpschaaf",
"sorvell"
],
"repo": "Polymer/polymer",
"url": "https://github.com/Polymer/polymer/pull/3944",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
231158619 | Ensure that express and done are getting returned in prepare:webserver
[X] CHANGELOG.md has been updated
@justinfagnani Can you take a look at this please? Thanks!
| gharchive/pull-request | 2017-05-24T20:16:59 | 2025-04-01T06:37:26.386668 | {
"authors": [
"maxknee"
],
"repo": "Polymer/web-component-tester",
"url": "https://github.com/Polymer/web-component-tester/pull/554",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
447289303 | Use CollectionStorageProvider for handle operations
Could be generalized to something more generic in the future.
Avoids any
Fixes #2340
Any issues with this?
| gharchive/pull-request | 2019-05-22T18:48:25 | 2025-04-01T06:37:26.400485 | {
"authors": [
"lindner"
],
"repo": "PolymerLabs/arcs",
"url": "https://github.com/PolymerLabs/arcs/pull/3060",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
2026112876 | feat: 🎸 Add procedures to clear and remove a MetadataEntry
Description
Add procedure to clear an Asset Metadata value
Add procedure to remove any Local Asset Metadata
Also, adds a method isModifiable to MetadataEntry to return whether any metadata entry can be modified
Breaking Changes
NA
JIRA Link
DA-885, DA-950, DA-951
Checklist
[ ] Updated the Readme.md (if required) ?
:tada: This PR is included in version 23.0.0-alpha.37 :tada:
The release is available on:
npm package (@alpha dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-alpha.1 :tada:
The release is available on:
npm package (@alpha dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-confidential-assets.1 :tada:
The release is available on:
npm package (@confidential-assets dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0-beta.1 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 24.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2023-12-05T12:44:46 | 2025-04-01T06:37:26.416822 | {
"authors": [
"prashantasdeveloper"
],
"repo": "PolymeshAssociation/polymesh-sdk",
"url": "https://github.com/PolymeshAssociation/polymesh-sdk/pull/1099",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2705923247 | style: 💄 remove unused isV6 internal function
Description
Breaking Changes
JIRA Link
Checklist
[ ] Updated the Readme.md (if required) ?
/fast-forward
:tada: This PR is included in version 27.0.0-beta.2 :tada:
The release is available on:
npm package (@beta dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
:tada: This PR is included in version 27.0.0 :tada:
The release is available on:
npm package (@latest dist-tag)
GitHub release
Your semantic-release bot :package::rocket:
| gharchive/pull-request | 2024-11-29T18:49:36 | 2025-04-01T06:37:26.422434 | {
"authors": [
"polymath-eric",
"prashantasdeveloper"
],
"repo": "PolymeshAssociation/polymesh-sdk",
"url": "https://github.com/PolymeshAssociation/polymesh-sdk/pull/1387",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2517797038 | feat: 🎸 ensure one vote per signer for multiSig proposals
Description
When a multiSig signer changes their vote, the vote record will now be updated instead of a second vote being inserted; the proposal's approval/rejection counts are decremented accordingly.
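A sketch of the upsert logic described (entity and field names here are assumptions, not the actual SubQuery handler code):
// Record a signer's vote, replacing any earlier vote by the same signer.
function recordVote(proposal, signer, approve) {
  const existing = proposal.votes.find((v) => v.signer === signer);
  if (existing) {
    if (existing.approve === approve) return; // unchanged vote: nothing to do
    // undo the old vote's contribution to the tallies
    if (existing.approve) proposal.approvalCount -= 1;
    else proposal.rejectionCount -= 1;
    existing.approve = approve; // update in place: one vote row per signer
  } else {
    proposal.votes.push({ signer, approve });
  }
  if (approve) proposal.approvalCount += 1;
  else proposal.rejectionCount += 1;
}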
Breaking Changes
JIRA Link
DA-1289
Checklist
[ ] Updated the Readme.md (if required) ?
Draft for now since I still need to verify the behavior when a signer approves a proposal, but is removed before the proposal is executed.
I merged 7.x into settlements-v2 (since the v2 branch had merge conflicts with alpha). As a result, when the base branch was updated, I rebased this PR onto settlements-v2 to bring it up to date @polymath-eric
/fast-forward
| gharchive/pull-request | 2024-09-10T20:57:57 | 2025-04-01T06:37:26.425793 | {
"authors": [
"polymath-eric",
"prashantasdeveloper"
],
"repo": "PolymeshAssociation/polymesh-subquery",
"url": "https://github.com/PolymeshAssociation/polymesh-subquery/pull/255",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1363263530 | Cannot read properties of undefined (reading 'creator')
Hi, guys. Got an error
And I think that is the reason why web3AccountsSubscribe not working via @polkadot/extension-dapp so I cannot handle switching between Networks
Could you please review
@sgurin Currently the testnet runtime is on v5.0.2, while mainnet is on v4.1.2. The version of the wallet extension on the chrome store is not fully compatible with v5.0.2, as seen from your error.
This issue will be addressed in the next release. There is a development release of the wallet in the chrome store that will allow you to use testnet without this wallet error. See https://chrome.google.com/webstore/detail/polymesh-wallet/ihppiagplceklfpgloomoiehdjikkacp
IF YOU DO DECIDE TO USE THE DEVELOPMENT WALLET IT SHOULD ONLY BE USED WITH TESTNET
Ensure you disable or remove the mainnet wallet when using the dev wallet to avoid conflicts between the two extensions
Still the same issue on Development Extension Version
That's because the development wallet is not compatible with the current mainnet runtime. You need to use development with Testnet and the official release with Mainnet until the runtime on mainnet is update to match testnet at which point the official wallet will also be updated.
Thanks a lot
Hi there, Robinland dev lead here. We are an unofficial partner of Polymath/Polymesh and a friend of Vincent's, as well as partner of the tech solution team that sgurin@ is a part of.
Assuming this issue will be fully addressed in the next release, may I ask for a rough ETA on the next release's timeline, just so that we can better plan our launches that integrate with the Polymesh SDK? Thank you very much!
@yzrbl The mainnet update is on hold pending Ledger approval and release of an update hardware wallet app in Ledger Live. The official release on their end is taking longer than we expected and the latest information is that it still may not be approved this month.
If you've any general questions about integration you can reach out to us on the Polymesh discord server.
@F-OBrien @yzrbl Hello guys. Could you let me know whether this issue with Ledger Live has been resolved and updated?
@KisIrene Yes. The Polymesh Ledger app was updated a couple of months ago. It can be downloaded from Ledger Live. The latest Ledger app version is v3.5.000003.0
Thank you so much!!!
| gharchive/issue | 2022-09-06T13:09:42 | 2025-04-01T06:37:26.434196 | {
"authors": [
"F-OBrien",
"KisIrene",
"sgurin",
"yzrbl"
],
"repo": "PolymeshAssociation/polymesh-wallet",
"url": "https://github.com/PolymeshAssociation/polymesh-wallet/issues/249",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2092684006 | Setup the base of the front end repo
To setup the base of the front end repo we would need to do the following things
[x] Set up the docs folder where we will be noting down each decision
[x] Set up Tailwind CSS
[x] Set up ESLint and Prettier
[ ] Finalize the wireframe/design for the application
Finalizing wireframe in a separate task
| gharchive/issue | 2024-01-21T17:31:18 | 2025-04-01T06:37:26.452802 | {
"authors": [
"PoojeshShetty"
],
"repo": "PoojeshShetty/chat-app-fe",
"url": "https://github.com/PoojeshShetty/chat-app-fe/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1824549538 | [Feature Request] Ability To Hide "No Deadline Set" On Widget
An option to simply hide the "no deadline set" text on the widget when no deadline is set would be nice. Just for a cleaner look.
Seems like a good idea, planning to work on widgets soon, will do it then
| gharchive/issue | 2023-07-27T14:51:49 | 2025-04-01T06:37:26.453776 | {
"authors": [
"seniorm0ment",
"starry-shivam"
],
"repo": "Pool-Of-Tears/GreenStash",
"url": "https://github.com/Pool-Of-Tears/GreenStash/issues/49",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1975566934 | Integrate IPFS and web3 storage uploads
Is your feature request related to a problem?
Presently, snapshotters send the entire contents of each snapshot to the payload commit service in audit protocol over RabbitMQ. This is a huge overhead considering that this can take place for thousands of projects per epoch. This often causes high resource usage when there is a burst of large snapshots that are computed.
Describe the solution you'd like
Once the snapshots are computed and built, upload them from within the snapshot and aggregation workers themselves.
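Illustratively, an in-worker upload could look like this (shown with the JS ipfs-http-client for brevity; the actual workers are Python, and the endpoint is a placeholder):
import { create } from 'ipfs-http-client';
const ipfs = create({ url: 'http://127.0.0.1:5001' }); // local IPFS node API
// Upload the finished snapshot from the worker itself, so only the CID
// (not the full payload) has to travel over RabbitMQ.
async function uploadSnapshot(snapshot) {
  const { cid } = await ipfs.add(JSON.stringify(snapshot));
  return cid.toString();
}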
Describe alternatives you've considered
NA
Additional context
NA
Implemented in #53 and #55, closing.
| gharchive/issue | 2023-11-03T07:07:29 | 2025-04-01T06:37:26.582964 | {
"authors": [
"anomit",
"xadahiya"
],
"repo": "PowerLoom/pooler",
"url": "https://github.com/PowerLoom/pooler/issues/52",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |