date (string) | nb_tokens (int64) | text_size (int64) | content (string)
---|---|---|---
2018/03/16 | 685 | 2,421 |
<issue_start>username_0: I'm trying to use ECharts to render a simple doughnut chart which I've been able to do. I've noticed by default that the legend will hide the data item on the chart if it is clicked.
I want the user to be able to select the legend to do something (fire an event) which I can do using the events available (<https://ecomfe.github.io/echarts-doc/public/en/api.html#events.legendselected>) however I want to prevent the default behaviour of hiding/showing the data item on the chart.
In the documentation there is mention of a property on the legend called selectedMode (<https://ecomfe.github.io/echarts-doc/public/en/option.html#legend.selectedMode>), which prevents the toggling of the series, but it also stops the legend from being selectable entirely.
I've also tried returning false from the handlers for the `legendselected` and `legendunselected` events, but without success.
Has anyone found a way of stopping this behaviour? I'd appreciate any help on this issue.
Here is a fiddle which contains the selectedMode set to false. Remove this flag to see the default behaviour:
```
legend: {
  orient: "vertical",
  x: "right",
  selectedMode: false,
  data: data.map(d => d.name)
}
```
<https://jsfiddle.net/h44jpmpf/12/><issue_comment>username_1: One workaround is to dispatch the `legendSelect` action in a `legendselectchanged` event handler to re-select the option that the user clicks. You may want to toggle animations off to prevent jumpy visuals from toggling the data set.
[jsfiddle](https://jsfiddle.net/0dhvsjxL/5/)
```
myChart.on('legendselectchanged', function(params) {
  suppressSelection(myChart, params);
  // Add custom functionality here
});

function suppressSelection(chart, params) {
  chart.setOption({ animation: false });
  // Re-select what the user unselected
  chart.dispatchAction({
    type: 'legendSelect',
    name: params.name
  });
  chart.setOption({ animation: true });
}
```
Upvotes: 4 <issue_comment>username_2: ```js
option = {
legend: {
selectedMode: 'onlyHover'
}
}
```
<https://github.com/apache/incubator-echarts/issues/11883#issuecomment-568783650>
Upvotes: -1 <issue_comment>username_3: As of 2022 it is also possible to add `emphasis` to the series, which will do just that (version `"echarts": "^5.3.0"`).
```js
emphasis: {
focus: 'series'
}
```
Example: <https://jsfiddle.net/Nurech/vqro4zg1/3/>
Upvotes: 0
2018/03/16 | 4,343 | 13,846 |
<issue_start>username_0: I'm following [this tutorial](https://drissamri.be/blog/java/enable-https-in-spring-boot/) to enable HTTPS in Spring Boot 2.0 using a self-signed certificate, just for testing purposes. In summary, that tutorial includes these steps:
1. Generate the keystore using `keytool`.
```
keytool -genkey -alias tomcat \
        -storetype PKCS12 -keyalg RSA -keysize 2048 \
        -keystore keystore.p12 -validity 3650
```
2. Enable HTTPS in Spring Boot by adding some properties to the `application.properties` file.
```
server.port: 8443
server.ssl.key-store: keystore.p12
server.ssl.key-store-password: <PASSWORD>
server.ssl.keyStoreType: PKCS12
server.ssl.keyAlias: tomcat
```
3. Redirect HTTP to HTTPS (optional). I ignored this part.
But when I start my application, I get this error:
```
org.apache.catalina.LifecycleException: Failed to start component [Connector[HTTP/1.1-8443]]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.catalina.core.StandardService.addConnector(StandardService.java:225) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.addPreviouslyRemovedConnectors(TomcatWebServer.java:255) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.web.embedded.tomcat.TomcatWebServer.start(TomcatWebServer.java:197) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.startWebServer(ServletWebServerApplicationContext.java:300) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:162) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:552) [spring-context-5.0.4.RELEASE.jar:5.0.4.RELEASE]
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:752) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:388) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:327) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1246) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1234) [spring-boot-2.0.0.RELEASE.jar:2.0.0.RELEASE]
at epic.gwdg.restgraph.RestgraphApplication.main(RestgraphApplication.java:10) [classes/:na]
Caused by: org.apache.catalina.LifecycleException: Protocol handler start failed
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1021) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
... 13 common frames omitted
Caused by: java.lang.IllegalArgumentException: Private key must be accompanied by certificate chain
at org.apache.tomcat.util.net.AbstractJsseEndpoint.createSSLContext(AbstractJsseEndpoint.java:116) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.tomcat.util.net.AbstractJsseEndpoint.initialiseSsl(AbstractJsseEndpoint.java:87) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:225) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:1150) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:591) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.catalina.connector.Connector.startInternal(Connector.java:1018) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
... 14 common frames omitted
Caused by: java.lang.IllegalArgumentException: Private key must be accompanied by certificate chain
at java.base/java.security.KeyStore.setKeyEntry(KeyStore.java:1170) ~[na:na]
at org.apache.tomcat.util.net.jsse.JSSEUtil.getKeyManagers(JSSEUtil.java:257) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
at org.apache.tomcat.util.net.AbstractJsseEndpoint.createSSLContext(AbstractJsseEndpoint.java:114) ~[tomcat-embed-core-8.5.28.jar:8.5.28]
... 19 common frames omitted
2018-03-16 16:42:30.917 INFO 970 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2018-03-16 16:42:30.931 INFO 970 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2018-03-16 16:42:30.933 ERROR 970 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
The Tomcat connector configured to listen on port 8443 failed to start. The port may already be in use or the connector may be misconfigured.
Action:
Verify the connector's configuration, identify and stop any process that's listening on port 8443, or configure this application to listen on another port.
2018-03-16 16:42:30.934 INFO 970 --- [ main] ConfigServletWebServerApplicationContext : Closing org.springframework.boot.web.servlet.context.AnnotationConfigServletWebServerApplicationContext@58ce9668: startup date [Fri Mar 16 16:42:26 CET 2018]; root of context hierarchy
2018-03-16 16:42:30.936 INFO 970 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
Process finished with exit code 1
```
Basically, the message is:
> Private key must be accompanied by certificate chain.
This is a self-signed certificate, so it, of course, doesn't have the trusted chain. How can I fix it?
Here is my current `application.properties` file:
```
server.port=8443
server.ssl.enabled=true
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-password=<PASSWORD>
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
```
Thank you so much for your help.<issue_comment>username_1: The problem is that your generated keystore doesn't contain a key pair, so there is no private key. That's because you're using the option `-genkey`; you need to change it to the option `-genkeypair`:
> -genkey generates a Secret Key whereas the -genkeypair generates a
> key pair (a public key and a private key).
So I think this should work:
```
keytool -genkeypair -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650
```
In your Spring Boot configuration, change ":" to "=" and add the path to your keystore. I suppose that your keystore.p12 is in your resources folder, so:
```
server.ssl.key-store = classpath:keystore.p12
server.ssl.key-store-password = <PASSWORD>
server.ssl.key-store-type = PKCS12
server.ssl.key-alias = tomcat
```
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: I was getting this horrible `Private key must be accompanied by certificate chain` error as well on my Spring Boot application with an embedded Tomcat server. It was making me **insane**.
It turns out a simple typo was my problem:
```
@Override
public void customize(ConfigurableServletWebServerFactory server) {
    Ssl ssl = new Ssl();
    ssl.setEnabled(true);
    ssl.setKeyStore(keystoreFile);
    ssl.setKeyPassword(keystorePass); // << Should be `setKeyStorePassword` !!!!
    ssl.setKeyStoreType(keystoreType);
    ssl.setKeyAlias(keystoreAlias);
    server.setSsl(ssl);
    server.setPort(sslPort);
}
```
So the error message is not helpful *at all* in this case. I hope this helps someone else. Just be sure to verify that you're putting the right passwords (key vs. keystore) in the right place. The same issue can happen in a properties-based setup — it depends on what you are working with.
Upvotes: 3 <issue_comment>username_3: I had the same problem. I made the changes from the second answer, but the problem wasn't gone.
On top of that, I had to include my **keystore.p12** certificate in pom.xml, in the *profiles* section:
```
dev
true
src/main/resources
application.properties
keystore.p12
data/**
```
Upvotes: 0 <issue_comment>username_4: 1. Use `-genkeypair`:
```
keytool -genkeypair -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650
```
2. Change "server.ssl.key-password" to "server.ssl.**key-store-password**".
Upvotes: 2 <issue_comment>username_5: You made a small mistake in the **application.properties** file. Please change
```
server.ssl.key-password=<PASSWORD>
```
to
```
server.ssl.key-store-password=<PASSWORD>
```
It will work fine then. Hope it helps! Thank you!
Upvotes: 3 <issue_comment>username_6: Spring Boot 2.2.1.RELEASE
```
keytool -genkeypair -keystore myKeystore2.p12 -storetype PKCS12 -storepass <PASSWORD> -alias ks-localhost -keyalg RSA -keysize 2048 -validity 99999 -dname "CN=My SSL Certificate, OU=My Team, O=My Company, L=My City, ST=My State, C=SA" -ext san=dns:localhost,ip:127.0.0.1
```
application.yml
```
server:
  tomcat:
    accesslog:
      enabled: true
  ssl:
    key-store-type: PKCS12
    key-store: classpath:myKeystore.p12
    key-alias: ks-localhost
    enabled: true
    protocol: TLS
    key-store-password: <PASSWORD>
```
Upvotes: 1 <issue_comment>username_7: I had a similar problem; in my case I was missing the trustAnchors in the trust store.
One solution is to use the built-in Java keytool, as explained in the other answers. But there is an even simpler approach using the [KeyStore Explorer](https://keystore-explorer.org/downloads.html) GUI, so I'll explain the complete steps with both tools.
**1.** First of all, as described in the answer, we need to enable SSL in the `application.properties` file:
```
# <======= SSL Security ===========>
# Keystore config
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=<PASSWORD>!
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-alias=alias
server.ssl.enabled=true
# Trust Store Certificates
server.ssl.trust-store=classpath:trust_store.p12
server.ssl.trust-store-type=PKCS12
server.ssl.trust-store-password=<PASSWORD>
# <=====================>
```
The Keystore is the container of the Public - Private Key pair that is used by the server to communicate securely with the clients. The client of course must have the Public Key to be able to communicate with the server.
The Trust Store is simply a container for certificates (the Public Keys).
In our case it will contain only one certificate, the one used by the server.
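To make the client side concrete, here is a minimal sketch (an editor's addition, not part of the original answer) of how a Java client could load that trust store into an `SSLContext`; the file path and password are assumptions matching the properties above.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class TrustStoreClient {
    // Builds an SSLContext whose trust decisions come from the given PKCS12 trust store.
    public static SSLContext contextFor(String path, char[] password) throws Exception {
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, password);
        }
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

An `HttpsURLConnection` (or any HTTP client that accepts an `SSLContext`) configured with this context will then trust the server's self-signed certificate.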
**2.1** Create the keystore with the java `keytool`:
```
keytool -genkeypair -alias alias -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore.p12 -validity 3650
```
**2.2** Export the certificate so we can use it to create the trust store keystore:
```
keytool -export -keystore keystore.p12 -alias alias -file certificate.cer
```
**2.3** This step will automatically create a new keystore with the imported trusted certificate. (The tool will ask you for a password for the new keystore, and when it asks "Trust this certificate?" you should of course type "yes".)
```
keytool -importcert -file certificate.cer -keystore trust_store.p12 -alias alias
```
Finally save both keystores inside the resources folder of your Spring Boot App (as shown in the alternative approach).
**Alternative approach with KeyStore Explorer**
**2.1** Create the keystore with the KeyStore Explorer, as shown in the screenshots:
[](https://i.stack.imgur.com/SE8Z5.png)
[](https://i.stack.imgur.com/2zOAc.png)
[](https://i.stack.imgur.com/6z7sd.png)
[](https://i.stack.imgur.com/ytOIN.png)
[](https://i.stack.imgur.com/rvVDx.png)
[](https://i.stack.imgur.com/m0BEj.png)
Then save the keystore inside the resources folder of your Spring Boot App:
[](https://i.stack.imgur.com/H53eP.png)
**2.2** Now we need to create the trust store that can be given to the client that needs to communicate with our server. First extract the certificate chain created by the KeyStore Explorer, then create a new KeyStore importing the certificate inside it, as shown in the screenshots:
[](https://i.stack.imgur.com/6ttex.png)
[](https://i.stack.imgur.com/mgoHM.png)
[](https://i.stack.imgur.com/TmVEm.png)
Then to create our trust store, click on "Create a new KeyStore" with the PKCS12 format as in the previous steps, then click the red "Import trusted certificate" icon, choose the certificate saved in the preceding step, and finally save the keystore inside the resources folder as we did for the first keystore.
Now your server will be able to communicate with SSL security. Remember that your clients must be configured to load the trust store you've created.
Upvotes: 2
2018/03/16 | 2,487 | 8,644 |
<issue_start>username_0: I created a simple application that counts the views of an article. In the database, the articles table has a column view\_count. I also have a defined event & listener that increments view\_count every time an article is viewed:
The question is: how do I update view\_count and store it in the database?
App\Events\ArticleViews.php:
-----------------------------
```
class ArticleViews extends Event
{
    use SerializesModels;

    public $article;

    public function __construct(Article $article)
    {
        $this->article = $article;
    }

    public function broadcastOn()
    {
        return [];
    }
}
```
App\Listeners\IncrementArticleViewsCounts.php:
-----------------------------------------------
```
class IncrementArticleViewsCounts
{
    public function __construct() { }

    public function handle(ArticleViews $event)
    {
        $event->article->increment('view_count');
    }
}
```
And in ArticleController.php@show:
----------------------------------
```
$articles = new Scholarship();
Event::fire(new ArticleViews($scholarships));
parent::$data['articles'] = $articles->getAllActiveArticlesForFrontEnd(parent::$data['language']);
```
In views don't count.<issue_comment>username_1: The problem is that in your generated keystore you dont have a key pair so there is no private key that's because your using the option -genkey you need to change it by the option -genkeypair :
>
> -genkey generates a Secret Key whereas the -genkeypair generates a
> key pair (a public key and a private key).
>
>
>
So I think this should work :
```
keytool -genkeypair -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650
```
In your spring boot configuration change ":" by "=" and add the path to your keystore I suppose that your keystore.p12 is in your resources folder so :
```
server.ssl.key-store = classpath:keystore.p12
server.ssl.key-store-password = <PASSWORD>
server.ssl.key-store-type = PKCS12
server.ssl.key-alias = tomcat
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: I was getting this horrible `Private key must be accompanied by certificate chain` error as well on my Spring Boot application with an embedded Tomcat server. It was making me **insane**.
It turns out a simple typo was my problem:
```
@Override
public void customize(ConfigurableServletWebServerFactory server) {
Ssl ssl = new Ssl();
ssl.setEnabled(true);
ssl.setKeyStore(keystoreFile);
ssl.setKeyPassword(keystorePass); // << Should be `setKeyStorePassword` !!!!
ssl.setKeyStoreType(keystoreType);
ssl.setKeyAlias(keystoreAlias);
server.setSsl(ssl);
server.setPort(sslPort);
}
```
So the error message is not helpful *at all* for this case. I hope this helps someone else. Just be sure to verify that you're putting the right passwords (key vs keystore) in the right place. The same issue can happen in a properties based setup - it depends on what you are working with.
Upvotes: 3 <issue_comment>username_3: I had the same problem. I made the changes from 2nd answer. But problem wasn't gone.
After all I've made, I just included my **keystore.p12** certificate to pom.xml in *profiles* section
```
dev
true
src/main/resources
application.properties
keystore.p12
data/\*\*
```
Upvotes: 0 <issue_comment>username_4: 1.use " -genkeypair"
```
keytool -genkeypair -alias tomcat -storetype PKCS12 -keyalg RSA -keysize 2048 -keystore keystore.p12 -validity 3650
```
2. change "server.ssl.key-password" to "server.ssl.**key-store-password**"
Upvotes: 2 <issue_comment>username_5: You made a small mistake in the **application.properties** file. Please change
```
server.ssl.key-password=<PASSWORD>
```
to
```
server.ssl.key-store-password=<PASSWORD>_password
```
It will work fine then. Hope it helps! Thank you!
Upvotes: 3 <issue_comment>username_6: Spring Boot 2.2.1.RELEASE
```
keytool -genkeypair -keystore myKeystore2.p12 -storetype PKCS12 -storepass <PASSWORD> -alias ks-localhost -keyalg RSA -keysize 2048 -validity 99999 -dname "CN=My SSL Certificate, OU=My Team, O=My Company, L=My City, ST=My State, C=SA" -ext san=dns:localhost,ip:127.0.0.1
```
application.yml
```
server:
tomcat:
accesslog:
enabled: true
ssl:
key-store-type: PKCS12
key-store: classpath:myKeystore.p12
key-alias: ks-localhost
enabled: true
protocol: TLS
key-store-password: <PASSWORD>
```
Upvotes: 1 <issue_comment>username_7: I had a similar problem, in my case i was missing the trustAnchors in the trust store.
One solution is to use the java built-in keytool, like explained in the other answers. But there is an even simplest approach using [KeyStore Explorer](https://keystore-explorer.org/downloads.html) GUI, so i'll explain the complete steps with both tools.
**1.** First of all, as described in the answer, we need to enable SSL in the `application.properties` file:
```
# <======= SSL Security ===========>
# Keystore config
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=change_it!
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-alias=alias
server.ssl.enabled=true
# Trust Store Certificates
server.ssl.trust-store=classpath:trust_store.p12
server.ssl.trust-store-type=PKCS12
server.ssl.trust-store-password=<KEY>
# <=====================>
```
The Keystore is the container of the Public - Private Key pair that is used by the server to communicate securely with the clients. The client of course must have the Public Key to be able to communicate with the server.
The Trust Store is simply a container for the certificates. (the Public Keys).
In our case it will contain only one certificate, the one used by the server.
**2.1** Create the keystore with the java `keytool`:
```
keytool -genkeypair -alias alias -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore.p12 -validity 3650
```
**2.2** Export the certificate so we can use it to create the Trust Store Keystore
```
keytool -export -keystore keystore.p12 -alias alias -file certificate.cer
```
**2.3** This step will automatically create a new keystore with the imported trusted certificate. (The tool will ask you a password for the new keystrore and when it asks "Trust this certificate?" of course you should type "yes")
```
keytool -importcert -file certificate.cer -keystore trust_store.p12 -alias alias
```
Finally save both keystores inside the resources folder of your Spring Boot App (as shown in the alternative approach).
**Alternative approach with KeyStore Explorer**
**2.1** Create the keystore with the KeyStore Explorer, as shown in the screenshots:
[](https://i.stack.imgur.com/SE8Z5.png)
[](https://i.stack.imgur.com/2zOAc.png)
[](https://i.stack.imgur.com/6z7sd.png)
[](https://i.stack.imgur.com/ytOIN.png)
[](https://i.stack.imgur.com/rvVDx.png)
[](https://i.stack.imgur.com/m0BEj.png)
Then save the keystore inside the resources folder of your Spring Boot App:
[](https://i.stack.imgur.com/H53eP.png)
**2.2** Now we need to create the trust store, that can be given to the client that needs to communicate with our server. First of all extract the certificate chain created by the KeyStore Explorer and then create a new KeyStore importing the certificate inside it, as shown in the screenshots:
[](https://i.stack.imgur.com/6ttex.png)
[](https://i.stack.imgur.com/mgoHM.png)
[](https://i.stack.imgur.com/TmVEm.png)
Then to create our trust store, click on "Create a new KeyStore" with the PKCS12 format as in the previous steps, the click the red icon "Import trusted certificate", choose the certificate saved in the preceding step, and finally save the keystore inside the resources folder like we did in the first keystore creation.
Now your server will be enabled to communicate with SSL security. Remember that your clients must be configured to load the trust store you've created .
Upvotes: 2
2018/03/16 | 2,261 | 7,722 |
<issue_start>username_0: I am trying to automate some stuff using protractor. I have a situation where I need to deal with a Windows pop-up and I have an AutoIt script compiled (.exe) for this. I could import this at runtime in Java and selenium. I am not sure how to do that in Protractor and JavaScript case. Thanks.
2018/03/16 | 479 | 1,513 |
<issue_start>username_0: I have a dynamic string:
It looks like this: `"1: Name, 2: Another Name"`. I want to split it and convert it to a `List<KeyValuePair<int, string>>` or an `IEnumerable<KeyValuePair<int, string>>`.
I tried this.
```
myString.Split(',').Select(s => new KeyValuePair<int, string>(Convert.ToInt32(s.Substring(s.LastIndexOf(':'), s.Substring(0, s.LastIndexOf(':')) + 1))))
```
That does not help much. I could use a Dictionary, a foreach, or a for loop, but I would rather do it as a key-value-pair lambda-expression one-liner.<issue_comment>username_1: You need to split twice: first by comma, then by colon. Try this code:
```
var input = "1: Name, 2: Another Name";
var list = input.Split(',')
    .Select(p =>
    {
        var kv = p.Split(':');
        return new KeyValuePair<int, string>(int.Parse(kv[0].Trim()), kv[1]);
    })
    .ToList();
```
Upvotes: 1 <issue_comment>username_2: Try this:
```
myString.Split(',').Select(s => new KeyValuePair<int, string>(
    int.Parse(s.Split(':').GetValue(0).ToString()),
    s.Split(':').GetValue(1).ToString()
));
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: One-liner:
WARNING: No exception handling
```
myString.Split(',').Select(x => new KeyValuePair<int, string>(int.Parse(x.Split(':')[0]), x.Split(':')[1]))
```
Upvotes: 1 <issue_comment>username_4: Another way to achieve that with the beauty of regex:
```
var result = new List<KeyValuePair<int, string>>();
foreach (Match match in Regex.Matches("1: Name, 2: Another Name", @"((\d+): ([\w ]+))"))
{
    result.Add(new KeyValuePair<int, string>(int.Parse(match.Groups[2].Value), match.Groups[3].Value));
}
```
Upvotes: 0
2018/03/16 | 729 | 2,573 |
<issue_start>username_0: A follow up from [this question](https://stackoverflow.com/questions/49322712/subclasses-with-functions-having-the-same-name-in-python), which was not very well formulated. The answer provided some additional insight so now I have constructed a limited working example that explains it better.
Basically we have two subclasses `A` and `B` and a class `C` which inherits from both. Classes `A` and `B` both have a function `MyFunc` but which does different things.
I would like for class `C` to be able to use both functions and have full control of which function is called since I wish to do different things with each function. The comment in the limited working example below shows what I am trying to do.
```py
class A():
def __init__(self, var1, var2):
self.var1 = var1
self.var2 = var2
def MyFunc(self):
result = self.var1 + self.var2
return result
class B():
def __init__(self, var1):
self.var1 = var1
def MyFunc(self):
result = self.var1**2
return result
class C(A,B):
def __init__(self, var1, var2, var3):
A.__init__(self, var1, var2)
B.__init__(self, var3)
def MyFunc(self):
#in this function I want to call MyFunc from A and MyFunc from B. For example to add their results together
```
How can I call `MyFunc` in `A` and `MyFunc` in `B` from `MyFunc` in `C`?<issue_comment>username_1: I tried to make the solution look as simple as possible using your example suggestion.
```
def MyFunc(self):
result = 0
result += A.MyFunc(self=self)
result += B.MyFunc(self=self)
return result
```
Upvotes: 0 <issue_comment>username_2: You can use name mangling to make attributes from a class available in a child class even if that child defines an attribute with the same name.
```
class A():
def __init__(self, var1, var2):
self.var1 = var1
self.var2 = var2
def __MyFunc(self):
result = self.var1 + self.var2
return result
MyFunc = __MyFunc
class B():
def __init__(self, var1):
self.var1 = var1
def __MyFunc(self):
result = self.var1**2
return result
MyFunc = __MyFunc
class C(A,B):
def __init__(self, var1, var2, var3):
A.__init__(self, var1, var2)
B.__init__(self, var3)
def MyFunc(self):
return self._A__MyFunc() + self._B__MyFunc()
c = C(1, 2, 3)
print(c.MyFunc())
# 14
```
Upvotes: 3 [selected_answer]
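For comparison, the direct-call approach from the first answer also works without any name mangling; here is a self-contained sketch reusing the question's classes. Note that `B.__init__` overwrites the `self.var1` that `A.__init__` stored, which is why both answers print 14 rather than 12:

```python
class A:
    def __init__(self, var1, var2):
        self.var1 = var1
        self.var2 = var2

    def MyFunc(self):
        return self.var1 + self.var2


class B:
    def __init__(self, var1):
        self.var1 = var1

    def MyFunc(self):
        return self.var1 ** 2


class C(A, B):
    def __init__(self, var1, var2, var3):
        A.__init__(self, var1, var2)
        B.__init__(self, var3)   # note: this overwrites self.var1 with var3

    def MyFunc(self):
        # Naming the class explicitly bypasses the normal MRO lookup,
        # so both base versions run even though C overrides MyFunc.
        return A.MyFunc(self) + B.MyFunc(self)


print(C(1, 2, 3).MyFunc())  # (3 + 2) + 3**2 = 14
```

If the overlapping `var1` attribute is a problem in practice, giving each base class distinctly named attributes avoids the surprise entirely.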
|
2018/03/16
| 650 | 2,559 |
<issue_start>username_0: I developed a web application that runs on my computer on localhost. Then I loaded the war file into catalina home on a remote server. Web app runs but it stops when it try to connect to database on server.
The connection is a jbdc connection on localhost, the database is mysql. When I do a connection on my computer, no problems occour.
```
String connectionString="jdbc:mysql://192.168.0.100:3306/"+request.getSession().getAttribute("dbname");
Connection con=null;
try {
try {
Class.forName("com.mysql.jdbc.Driver").newInstance();
} catch (InstantiationException | IllegalAccessException e) {
// TODO Auto-generated catch block
e.printStackTrace();
response.sendRedirect("Errore.html");
return;
};
} catch (ClassNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
try {
con=(Connection) DriverManager.getConnection(connectionString,"root","root");
} catch (SQLException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
```
The Connection con is null; DriverManager.getConnection doesn't work and I don't know why.
I also tried with postgresql connection but the problem is the same.
Must I configure something in remote server?
The server is Debian 9.2, like my computer.
|
2018/03/16
| 756 | 2,471 |
<issue_start>username_0: I have a file containing addresses, I want to check that the addresses are correct. I am comparing the addresses to a list of approved Australian states.
My address file looks something like this:
```
NEW SOUTH WALE
N S W 2232
SOUTH AUSTRALI
Victoria
SA 5158
Victoria 3136
Victoria 3029
N.S.W. 2428
```
And my file with approved Australian states is:
```
NEW SOUTH WALES
QUEENSLAND
SOUTH AUSTRALIA
TASMANIA
VICTORIA
WESTERN AUSTRALIA
```
When I run the code it will return true for 'NEW SOUTH WALE' but it will return false for 'SOUTH AUSTRALI'
Why does it not return true for 'SOUTH AUSTRALI' as it is contained in 'SOUTH AUSTRALIA' ?
Here is the code for reference:
```
static void Main(string[] args)
{
string[] lines = File.ReadAllLines(@"C:\C# Project\sampledata.dat");
var states = File.ReadAllLines(@"C:\C# Project\States.txt"); //Reads in states
int i = 1;
foreach (string line in lines) //Loops through data
{
if (states.Any(line.Contains))
{
File.AppendAllText(@"C:\C# Project\CorrectAddress.dat",
line + Environment.NewLine);
i++;
}
else
{
File.AppendAllText(@"C:\C# Project\IncorrectAddress.dat",
line + Environment.NewLine);
i++;
}
}
}
```
|
2018/03/16
| 442 | 1,552 |
<issue_start>username_0: I have a big dataset in bigquery and writing SQL queries in bigquery. It produces fast results.
However, I want to use R/Python for data preprocessing. I have approx. 200M records in my table, and R is very slow.
So, considering the amount of data, should I use BigQuery queries, or is there another way of working with R/Python that is also fast? Or does Google offer some product that can be used to create data summaries while avoiding SQL queries?
|
2018/03/16
| 1,408 | 5,155 |
<issue_start>username_0: I am building a website for my upcoming wedding and I want a sticky header, but for some reason, it "disappears" by moving up after you go a certain way down the page. My test url is this: <https://betterradiotech.com>. Here is the nav markup:
```
* [Home](/ "Home")
* [Music](/music/ "Music")
* [Gallery](/gallery/ "Gallery")
* [Feed](/feed/ "Feed")
```
Here is the nav SCSS:
```
header {
padding: 1em;
position: sticky;
top: 0;
z-index: 100;
width: 100%;
background-color: $burgandy;
}
.nav-list {
display: flex;
flex-flow: row nowrap;
li {
list-style-type: none;
margin-left: 10px;
}
a {
color: $pink;
font-weight: 600;
}
}
.active-nav {color: $navy !important;}
```
There is no JavaScript in making the nav, except for making the active nav work...for completeness sake, I will include that as well:
```
switch(location.pathname) {
case "/":
document.querySelector("a[title*='Home']").classList.add("active-nav");
break;
case "/admin/":
document.querySelector("a[title*='Admin']").classList.add("active-nav");
break;
case "/feed/":
document.querySelector("a[title*='Feed']").classList.add("active-nav");
break;
case "/gallery/":
document.querySelector("a[title*='Gallery']").classList.add("active-nav");
break;
case "/music/":
document.querySelector("a[title*='Music']").classList.add("active-nav");
break;
}
```
Why is my nav bar disappearing after a certain distance down the page? It seems to happen right before the end of the full background picture in the first section.<issue_comment>username_1: I think you'll get the desired behavior by switching from `sticky` to `fixed`. [Sticky is sort of a hybrid of fixed and relative positioning](https://developer.mozilla.org/en-US/docs/Web/CSS/position#Sticky_positioning), and changes its behavior relative to context, and is commonly used to allow items to respond to its neighbors via scroll position.
>
> Sticky positioning can be thought of as a hybrid of relative and fixed positioning. A stickily positioned element is treated as relatively positioned until it crosses a specified threshold, at which point it is treated as fixed until it reaches the boundary of its parent.
>
>
>
So you want:
```
header {
position: fixed;
}
```
PS: The reason it's disappearing for you is that your body has a computed height, but the contents of the body overflow beyond that height. The sticky element scrolls away once you scroll past the computed height of the body, which is the header's parent.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The reason for this is probably that your containing element is not as tall as you think, and you may have to set that element's `height` to `fit-content` explicitly, because `sticky` elements **cannot leave their parent**!
In most situations, the simplest solution will be to add this rule to your CSS:
```css
body {
height: fit-content;
}
```
But generally, which solution you need and which element you have to apply it to depends on your document structure. Let's say it looks something like this:
```html
```
And you probably use some CSS reset that contains a rule like this one:
```css
html, body {
height: 100%;
}
```
This allows using percentage heights on your page, but it will break sticky headers without additional work.
When you look at the size of the body with the dev tools, everything may look alright:
[](https://i.stack.imgur.com/NJgBP.png)
But once you scroll down, you see a problem:
[](https://i.stack.imgur.com/P3UtF.png)
The `body` is just as tall as your viewport. All other content you see is just overflowing out of it. But a sticky header can't do that, it will stay within the `body` and disappear with it. We now have three potential solutions:
If you don't need percentage-based heights on your page, you can use this CSS rule:
```css
body {
height: fit-content;
}
```
If there are some percentage-based heights, try replacing them with `vh` instead, and see if that works for you. Then you can apply the fix from above.
If you do need percentage-based heights, then you might want to make the body stay in place but scroll the overflowing content through it:
```css
html {
overflow: hidden;
}
body {
overflow: scroll;
}
```
Upvotes: 4 <issue_comment>username_3: The previous soultions did not work for my situation.
position: fixed made the other elements hide beneath it. And adding margin top or top to them messed the header a little bit. After almost two days of banging my head against the wall, I ended up adding this css to my modal in my styles.scss:
```
.modal-class{
display: initial;
}
```
This worked for me, hopefully helps save someone else's time.
Upvotes: 0
|
2018/03/16
| 463 | 2,064 |
<issue_start>username_0: The following situation:
* app is started from a deep link (sms) and launches the MainActivity
* user presses the start button in the app which opens SecondActivity
* app goes into background
* user presses the app icon again in the launcher => HERE I EXPECT the SecondActivity to still be active. But instead, MainActivity is relaunched.
Tried with "singleTop" activity mode - no difference.
SecondActivity is launched from MainActivity without any special intent flags, plain startActivity()
```
<category android:name="android.intent.category.BROWSABLE"/>
```<issue_comment>username_1: When starting the SecondActivity you can just call `finish()` in the FirstActivity (for example in `onPause()` or in a button-click handler in the FirstActivity). This way, your app is still running (i.e. the SecondActivity). When you launch the app, the SecondActivity should run directly (if the app has not been totally destroyed yet).
Upvotes: 0 <issue_comment>username_2: It turned out there are two reasons:
1. Starting the app from a deep link created a separate instance of the app => solved by specifying "singleTask"
2. The SecondActivity was destroyed at some point, which explains why starting the app again from the icon brings up the MainActivity screen
Thanks all commenters for helpful hints!
Upvotes: 1 <issue_comment>username_3: Do not use `android:launchMode="singleTask"` as recommended in answers/comments, but rather use `android:launchMode="singleTop"`, because with singleTask a custom tab will be opened as a separate app (the opened-apps list will contain both Chrome with the custom tab and your real app) and the user can switch between them (which can be misleading), leading to undesirable situations:
* the user can kill your real app, but Chrome with the custom tab will still be in the opened-apps list,
* the user can end up with many Chrome instances carrying your custom tab in the opened-apps list, one for each time he switches from the custom tab back to the app and opens another custom tab,
* and maybe many other combinations
Upvotes: 0
|
2018/03/16
| 704 | 1,654 |
<issue_start>username_0: I have a dataset in this format:
```
A B LABEL NEW
-0.027651 -0.068485 5 1
-0.039997 -0.071371 5 1
-0.032667 -0.078227 5 1
-0.021502 -0.083501 5 1
-0.018613 -0.082452 5 1
0.134058 -0.145095 7 2
-0.164757 0.029179 4 3
-0.124876 0.022431 4 3
-0.076959 -0.021404 4 3
-0.221781 0.163064 8 4
0.137542 -0.250567 5 5
0.048786 -0.153115 5 5
-0.001230 -0.095431 5 5
```
I want to split the dataframe by the `NEW` column; e.g. for value 1:
```
A B LABEL NEW
-0.027651 -0.068485 5 1
-0.039997 -0.071371 5 1
-0.032667 -0.078227 5 1
-0.021502 -0.083501 5 1
-0.018613 -0.082452 5 1
```
and save each group using those column values as the file name:
like (NEW-LABEL) --> "1-5.csv"
I have 7000 rows, and I need to split and save dynamically.<issue_comment>username_1: Use `groupby` on `NEW` to split
```
In [11]: for n, g in df.groupby('NEW'):
...: g.to_csv('{}.csv'.format(n))
```
Upvotes: 2 <issue_comment>username_2: Now I know what you mean
```
for x,df1 in df.groupby('NEW'):
df1.to_csv("%s.csv" % x)
```
Update
```
for x,df1 in df.groupby('NEW'):
    df1.to_csv("%s-%s.csv" % (x, df1.LABEL.iloc[0]))  # .iloc[0] is positional, so it works even though each group keeps the original index
```
Upvotes: 1
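Putting both answers together, here is a self-contained sketch with made-up sample rows (the output directory is a throwaway temp dir, just for the demo). The important detail is `.iloc[0]` rather than `[0]`: each group keeps the row labels of the original frame, so positional access is the safe way to grab the group's LABEL:

```python
import os
import tempfile

import pandas as pd

# Made-up rows mimicking the question: each NEW group carries one LABEL.
df = pd.DataFrame({
    "A": [-0.027651, -0.039997, 0.134058, -0.164757],
    "B": [-0.068485, -0.071371, -0.145095, 0.029179],
    "LABEL": [5, 5, 7, 4],
    "NEW": [1, 1, 2, 3],
})

out_dir = tempfile.mkdtemp()  # write somewhere disposable for the demo
written = []
for new_value, group in df.groupby("NEW"):
    label = group["LABEL"].iloc[0]          # positional, index-safe lookup
    path = os.path.join(out_dir, "%s-%s.csv" % (new_value, label))
    group.to_csv(path, index=False)
    written.append(path)

print([os.path.basename(p) for p in written])  # ['1-5.csv', '2-7.csv', '3-4.csv']
```

With 7000 rows this loop is still cheap; `groupby` iterates each group exactly once regardless of how many distinct `NEW` values there are.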
|
2018/03/16
| 319 | 956 |
<issue_start>username_0: I would like to display an R dataframe as a table in Power BI. Using the "R script visual" widget, here is one possible solution:
```
library(gridExtra)
library(grid)
d<-head(df[,1:3])
grid.table(d)
```
reference: <https://cran.r-project.org/web/packages/gridExtra/vignettes/tableGrob.html>
As stated in the reference - this approach only works for *small* tables.
Is there an alternative approach that will allow an R data frame to be displayed as a table in Power BI - specifically for larger tables that can be 'scrolled'?<issue_comment>username_1: Use `groupby` on `NEW` to split
```
In [11]: for n, g in df.groupby('NEW'):
...: g.to_csv('{}.csv'.format(n))
```
Upvotes: 2 <issue_comment>username_2: Now I know what you mean
```
for x,df1 in df.groupby('NEW'):
df1.to_csv("%s.csv" % x)
```
Update
```
for x,df1 in df.groupby('NEW'):
df1.to_csv("%s-%s.csv" % (x,df1.LABEL[0]))
```
Upvotes: 1
|
2018/03/16
| 1,290 | 4,340 |
<issue_start>username_0: I'm building a web app that has 3 components on the screen.
Navbar (sticky top)
Main Container
map container(75% of main)
chat window (25% of main)
What I want to be able to do is hide the chat footer then have the map container take up all of the main container, but if user wants to see the chat window, he clicks a button and the map shrinks back to 75% of the main and the chat window is visible again. I want this functionality to work across all the bootstrap breakpoints so from what I've read the responsive visibility classes are not what I want.
The javascript I'm using is
```
$('#chatToggle').click(function(e) {
console.log('in chatToggle');
var cf = document.getElementById('chatFooter');
var mc = document.getElementById('leafletMap');
if (cf.style.display === 'none') {
console.log('showing chat window');
cf.className = 'row h-15 ';
mc.className = "row h-85 map-container";
$('#chatToggle').text('Hide Chat')
} else {
console.log('hiding chat window');
cf.className = 'row h-15 d-none';
mc.className = "row h-85 map-container";
$('#chatToggle').text('Show Chat')
}
map.invalidateSize();
});
```
When the page first loads, it appears as I want it and the navbar responds to the breakpoints and acts like a nice responsive navbar. However, as soon as I turn off the chat footer with code above, the navbar disappears, the map takes up the whole viewport and map attribution morphs into some big mess on the top of the screen. Here are two screenshots:
[](https://i.stack.imgur.com/yICUa.png)
[](https://i.stack.imgur.com/KczAm.png)
Relevant HTML is:
```
[TRACKING SYSTEM](#)
* [Menu](#)
[Real-Time Tracking](#realTimeModal)
[Historical Tracking](#historicalModal)
[Display Grids](#gridsModal)
[Stop Tracking](#)
[Clear Grids](#)
[Hide Chat](#chatToggle)
[Settings](#contact)
#### Chat History
test data
```
And in my css I have:
```
html,
body {
height: 100%;
}
```
Is it possible to accomplish what I'm trying to do with Bootstrap 4?<issue_comment>username_1: Instead of all the extra JS to handle the button click, use the [Bootstrap collapse component](https://getbootstrap.com/docs/4.0/components/collapse/#example). Add handlers for the [hide/show events](https://getbootstrap.com/docs/4.0/components/collapse/#events) to resize the map, and toggle the button text:
```
$('.chat').on('hide.bs.collapse',function(){
mymap.invalidateSize();
$('#chatToggle').text("Show Chat");
}).on('show.bs.collapse',function(){
mymap.invalidateSize();
$('#chatToggle').text("Hide Chat");
});
```
The problem you're having with resizing heights is easier solved with flexbox. Just add a class to the `.map-container` so that it grows automatically in height when the chat is collapsed.
```
.map-container {
flex: 1 1 auto;
}
```
Working demo: <https://www.codeply.com/go/jCa2CsQFYY>
Upvotes: 2 [selected_answer]<issue_comment>username_2: I noticed the `fixed-top` on the nav causes part of the map to be hidden. Reworked example using OSM:
<https://www.codeply.com/p/2ly5AlChoq>
[](https://i.stack.imgur.com/vJlEx.png)
so I extended the invalidateSize trick on
<https://www.codeply.com/p/7YU91GQrEo>
There are still some quirks with the chat width
[](https://i.stack.imgur.com/V5BwF.png)
but it shows the map more accurately.
with bootstrap 3 it was just about CSS:
<https://embed.plnkr.co/plunk/wH7u64>
using
```
html, body, #container {
height: 100%;
overflow: hidden;
width: 100%;
}
#map {
box-shadow: 0 0 10px rgba(0, 0, 0, 0.5);
height: 100%;
width: auto;
}
```
There must be a better solution using Bootstrap 4, but I could only find a way to create an empty div that takes up the rest of the space left by the navbar.
Leaflet specifies that the div container should define the height, but it looks like Bootstrap lays things out based on actual size (so it's kind of a chicken-and-egg problem, and why invalidateSize does the work).
Upvotes: 0
|
2018/03/16
| 2,882 | 10,461 |
<issue_start>username_0: Pip always fails ssl even when I do `pip install dedupe` or `pip install --trusted-host pypi.python.org dedupe`
The output is always the same no matter what:
>
> Collecting dedupe
>
>
> Retrying (Retry(total=4, connect=None, read=None,
> redirect=None, status=None)) after connection broken by
> 'SSLError(SSLError(1, '[SSL: CERTIFICATE\_VERIFY\_FAILED] certificate
> verify failed (\_ssl.c:777)'),)': /simple/dedupe/
>
> Retrying...
>
>
> skipping
>
>
> Could not find a version that satisfies the requirement dedupe (from versions: ) No matching distribution found for dedupe
>
>
>
So I uninstalled anaconda and reinstalled it. Same thing.
Do you think the problem is that my \_ssl.c file (which I have no idea where it is) must be corrupt or something? Why would pip need to reference that if I'm telling it to bypass ssl verification anyway?<issue_comment>username_1: It may be related to the [2018 change](https://pyfound.blogspot.com/2018/03/warehouse-all-new-pypi-is-now-in-beta.html) of PyPI [domains](https://github.com/pypa/pip/issues/5288#issuecomment-383071739).
Please ensure your firewall/proxy allows access to/from:
* *pypi.org*
* *files.pythonhosted.org*
So you could give a try to something like:
>
> $ python -m pip `install` --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org [--proxy ...] [--user]
>
>
>
Please see `$ pip help install` for the `--user` option description (omit if in a virtualenv).
The `--trusted-host` option doesn't actually bypass SSL/TLS, but allows to mark host as trusted when (and only when) it does not have valid (or any) HTTPS. It shouldn't really matter with PiPY because pypi.org (formerly pypi.python.org) *does* use HTTPS and there is CDN in front of it which always enforces TLSv1.2 handshake requirement regardless of the connecting pip client options.. But if you had your own local mirrors of pypi.org with HTTP-only access, then `--trusted-host` could be handy. Oh, and if you are behind a proxy, please also make sure to also specify: `--proxy [user:passwd@]proxyserver:port`
Some corporate proxies may even go as far as to [replace](https://stackoverflow.com/questions/5846652/can-proxy-change-ssl-certificate) the certificates of HTTPS connections on the fly. And if your system clock is out of sync, it could break SSL verification process as well.
If firewall / proxy / clock isn't a problem, then check SSL certificates being used in pip's SSL handshake. In fact, you could just get a current [cacert.pem](https://curl.haxx.se/ca/cacert.pem) (Mozilla's CA bundle from [curl](https://curl.haxx.se/docs/caextract.html)) and try it using the pip option `--cert`:
>
> `$ pip --cert ~/cacert.pem install --user`
>
> where `--cert` argument is system path to your alternate CA bundle in PEM format. (regarding the --user option, please see below).
>
> Or, it's possible to create a custom config ~/.pip/pip.conf and point the option at a valid system cert (or your cacert.pem) as a workaround, for example:
>
> *[global]*
>
> *cert = /etc/pki/tls/external-roots/ca\_bundle.pem*
>
> (or another pem file)
>
>
>
It's even possible to manually replace the original cacert.pem found in pip with your trusty CA bundle (if your pip is very old for example). Older pip versions knew to fallback between pip/\_vendor/requests/cacert.pem and system stores like `/etc/ssl/certs/ca-certificates.crt` or `/etc/pki/tls/certs/ca-bundle.crt` in case of cert issues, but in recent pip it's no longer the case, as it seems to rely solely on pip/\_vendor/certifi/cacert.pem
Basically, pip package uses `requests` which uses `urllib3` which, among other things, verifies SSL certificates; and all of them are shipped (vendored) within pip, along with the `certifi` package (also included, since pip 9.0.2) that provides current CA bundle (cacert.pem file) required for TLS verification. Requests itself uses urllib3 and certifi internally, and before 9.0.2, pip used cacert.pem from requests or the system. What it all means is that actually updating pip may help fix the CERTIFICATE\_VERIFY\_FAILED error, particularly if the OS and pip were deployed long ago:
* The OP used anaconda, so they could try:
`$ conda update pip` - because [issues can arise](https://www.anaconda.com/using-pip-in-a-conda-environment/) if conda and `pip` are both used together in the same environment. If there's no pip version update available, they could try:
`$ conda config --add channels conda-forge; conda update pip`
Alternatively, it's possible to use [conda](https://conda.io/en/latest/) alone to directly install / manage python packages: it is a tool completely separate from pip, but provides similar features in terms of package and venv management. Its packages come not from PyPI, but from [anaconda's own repositories](https://docs.anaconda.com/anaconda/).
The problem is, if you mix both and run conda after `pip`, the former can overwrite and break packages (and their dependencies) installed via pip, and render it all unusable. So it's recommended to *only use one or the other*, or, if you have to, use *only pip after* conda (and no conda after pip), and only in isolated conda environments.
* On normal Linux Python installations without conda:
If you are using a version of pip supplied by your OS distribution, then use vendor-supplied upgrades for a system-wide pip update:
`$ sudo apt-get install python-pip` or: `$ sudo yum install python27-pip`
Some updates may not be readily available because distros usually lag behind PyPI. In this case, it's possible to upgrade pip at your **user level** (right in your *$HOME* dir), or inside a virtualenv, like:
`$ python -m pip install --user --trusted-host files.pythonhosted.org --trusted-host pypi.org --trusted-host pypi.python.org --upgrade pip`
(omit `--user` if in a virtualenv)
The `--user` switch will upgrade pip only for the current user (in your home ~/.local/lib/) rather than for the whole OS, which is a good practice to avoid interfering with the system python packages. It's enabled by default in a pip distributed in recent Ubuntu/Fedora versions. Be aware of how to solve [ImportError](https://stackoverflow.com/questions/49964093/file-usr-bin-pip-line-9-in-module-from-pip-import-main-importerror-canno/) if you don't use this option and happen to overwrite the OS-level system pip.
Alternatively (also at a user level) you could try:
`$ curl -LO https://bootstrap.pypa.io/get-pip.py && python get-pip.py --user`
The PyPA [script](https://pip.pypa.io/en/latest/installing/) contains a wrapper that extracts the .pem SSL bundle from pip.\_vendor.certifi.
Otherwise, if still no-go, try running pip with `-vvv` option to add verbosity to the output and check if there is now another `SSLError` caused by [tlsv1 alert protocol version](https://stackoverflow.com/questions/51646558).
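Independently of pip (which, since 9.0.2, carries its own certifi bundle), you can ask the interpreter's standard-library `ssl` module which default CA locations it would use. This is a quick sanity check when debugging CERTIFICATE_VERIFY_FAILED on a given machine:

```python
import ssl

# Where would this interpreter's own ssl module look for CA certificates?
# If neither cafile nor capath points at an existing bundle (and the
# listed environment variables are unset), certificate verification has
# nothing to verify against.
paths = ssl.get_default_verify_paths()
print("cafile :", paths.cafile)
print("capath :", paths.capath)
print("env var:", paths.openssl_cafile_env, "/", paths.openssl_capath_env)
```

Setting the printed environment variable (typically `SSL_CERT_FILE`) to a valid PEM bundle is another way to point Python-level SSL code at a working CA store, though note that pip itself prefers its bundled certifi file.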
Upvotes: 7 [selected_answer]<issue_comment>username_2: The error above, or one like it, was caused by the virtual machine (VM) not being time synchronized; my guest Ubuntu VM was several days in the past.
I ran this command to get the VM to pick up the correct network time:
```
sudo timedatectl set-ntp on
```
This makes the Ubuntu guest OS get the network time. (You may have to provide a network time source... I used this article: [Digital Ocean - How to set time on Ubuntu](https://www.digitalocean.com/community/tutorials/how-to-set-up-time-synchronization-on-ubuntu-16-04))
Check the time is correct:
```
timedatectl
```
Re-run the failing pip command.
Upvotes: 3 <issue_comment>username_3: My way is a simplification of @Alex C's answer:
```
python -m pip install --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --trusted-host pypi.org --upgrade pip
```
Upvotes: 4 <issue_comment>username_4: This worked for me, try this:
`pip install --trusted-host=pypi.org --trusted-host=files.pythonhosted.org --user {name of whatever I'm installing}`
Upvotes: 6 <issue_comment>username_5: I experienced the same issue because I have **Zscaler** (a cloud security software) installed and was causing:
* URL host for python packages being blocked
* invalid SSL certificate warnings popping up
* SSL inspection certificate not trusted
As mentioned by others, the below will fix individual package installations. `pypi.python.org` is not required since it has been replaced by `pypi.org`.
```
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org
```
I permanently fixed the issue by creating `pip.ini` file (`pip.conf` in Unix) and adding the below:
```
[global]
trusted-host = pypi.python.org
pypi.org
files.pythonhosted.org
```
See [pip configuration files](https://pip.pypa.io/en/stable/topics/configuration/) for how to locate your `pip.ini`, or where to put it if you need to create one.
Upvotes: 5 <issue_comment>username_6: **For those using macOS:**
The issue is that Python 3.6+ is no longer using macOS's OpenSSL, but rather its own bundled OpenSSL which doesn't have access to macOS's root certificates. Here’s the permanent fix.
First run these commands:
```bash
# Update pip
pip3 --trusted-host files.pythonhosted.org --trusted-host pypi.org install --upgrade pip
# Install certifi
pip3 --trusted-host files.pythonhosted.org --trusted-host pypi.org install certifi
```
Add this to your `~/.bash_profile` (or `~/.zprofile` or whatever startup file your shell uses), so that your environment knows where to find the certifi CA bundle:
```bash
if [ -x "$(command -v python3)" ]; then
# Side note: 'python3 -m certifi' is also equivalent to
# 'python3 -c "import certifi; print(certifi.where())"'
export SSL_CERT_FILE="$(python3 -m certifi)"
export REQUESTS_CA_BUNDLE="${SSL_CERT_FILE}"
fi
```
Restart your shell, or re-source your shell startup file:
```bash
. ~/.bash_profile
```
pip3 (or any Python script making HTTPS requests) should now work without using a long pip3 command or dangerously disabling certificate verification.
Side note: You may notice that I didn’t include pypi.python.org as a trusted host. This is because it has been deprecated since 2018. I haven’t included it for over a year now, and I’ve never had an issue. As far as I know, it’s now unused.
Upvotes: 0
|
2018/03/16
| 408 | 1,333 |
<issue_start>username_0: I want to ask how to call hidden Base class operator function in derived class overriding function, below is my code, the comment line is the question.
```
class Base{
public:
virtual bool operator^(Base &b){
cout << "hehe" << endl;
return true;
}
virtual void fn() = 0;
};
class Dev: public Base{
public:
virtual bool operator^(Base &b){
// how to call operator ^ in the Base class here??
cout << "haha" << endl;
return false;
}
virtual void fn(){}
};
```<issue_comment>username_1: Use a qualified name of the operator.
For example
```
#include <iostream>
using namespace std;
class Base{
public:
virtual bool operator^(Base &b){
cout << "hehe" << endl;
return true;
}
virtual void fn() = 0;
};
class Dev: public Base{
public:
virtual bool operator^(Base &b){
Base::operator^( b );
cout << "haha" << endl;
return false;
}
virtual void fn(){}
};
int main()
{
Dev v;
Base &b = v;
v ^ b;
return 0;
}
```
The program output is
```
hehe
haha
```
Upvotes: 2 <issue_comment>username_2: You can also do an explicit upcast of `*this`:
```
class Dev: public Base{
public:
virtual bool operator^(Base &b)
{
    static_cast<Base&>(*this) ^ b;  // note: operator^ is virtual, so this still dispatches dynamically
cout << "haha" << endl;
return false;
}
virtual void fn(){}
};
```
Upvotes: 0
|
2018/03/16
| 499 | 1,439 |
<issue_start>username_0: I want to deploy an Azure ARM Template.
In the parameter section I defined a IP Range for the Subnet.
```
"SubnetIP": {
"defaultValue": "10.0.0.0",
"type": "string"
},
"SubnetMask": {
"type": "int",
"defaultValue": 16,
"allowedValues": [
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27
]
}
```
When creating the private IP, I used
```
"privateIPAddress": "[concat(parameters('SubnetIP'),copyindex(20))]",
```
This does not give me the expected output, because SubnetIP is 10.0.0.0 and not 10.0.0. Is there a way to edit the parameter in that function?
Regards Stefan
|
2018/03/16
| 447 | 1,438 |
<issue_start>username_0: ```
width = input("Please enter grid width:")
height = input("Please enter grid height:")
grid = [["o" for x in range(width)] for y in range(height)]
```
Gives this error on the grid line: TypeError: 'str' object cannot be interpreted as an integer
So maybe make the input an integer?
```
width = int(input("Please enter grid width:"))
```
The width line gives me: ValueError: invalid literal for int() with base 10: '{"command":"eval","data":"15","error":""}'
Both times when it asks for the input I typed 15 which is a number. So basically the input wants to be a string and when I try to convert it to an integer it won't.
|
2018/03/16
| 746 | 2,652 |
<issue_start>username_0: I am new to PL/SQL and trying to use Oracle SQL Developer. I try to run a simple procedure with a dbms_output line and I get the following error:
>
> ora-00904
>
>
>
The code is:
```
create or replace PROCEDURE proc_101 IS
v_string_tx VARCHAR2(256) := 'Hello World';
BEGIN
dbms_output.put_line(v_string_tx);
END;
```
Whether I click Run (green) or Debug (red), I get the same error.
As you can see from the above code, the procedure doesn't access any objects, but I still get the same error.<issue_comment>username_1: I'd say that there's some other code in the worksheet which raises that error, not just the `CREATE PROCEDURE` you posted. For example, something like this SQL\*Plus example (just to show what's going on - you'd get the same result in SQL Developer):
```
SQL> select pixie from dual;
select pixie from dual
*
ERROR at line 1:
ORA-00904: "PIXIE": invalid identifier
SQL>
SQL> create or replace PROCEDURE proc_101 IS
2 v_string_tx VARCHAR2(256) := 'Hello World';
3 BEGIN
4 dbms_output.put_line(v_string_tx);
5 END;
6 /
Procedure created.
SQL>
```
See? The first part raised ORA-00904 as there's no PIXIE column in DUAL, while the procedure is created correctly.
So - remove code which fails and everything should be OK.
Upvotes: 0 <issue_comment>username_2: Check with your DBA to make sure the dbms\_output package has been installed on your database, and that you have permissions on it.
Upvotes: 0 <issue_comment>username_3: Your procedure is fine. You may not have permissions to be able to Create a Procedure. If this is the case test your procedure/code without actually Creating it in the Database first. For example, when I'm testing code in my Production database my oracle user cannot Create Procedures, Packages, Tables etc... And so I test my Procedures within my Own PL/SQL Blocks. When the code is good to go I can get a database administrator to Create the Procedures and/or Packages for me.
### The below screenshot is code that simply tests the Procedure:
[](https://i.stack.imgur.com/hI94v.png)
### The below screenshot is code that does much more and tests the Procedure from within a PL/SQL Block
>
> For more advanced situations this allows you to do so much more as you can create all sorts of Procedures/Functions and/or Cursors and test them immediately without needing to CREATE these objects in your Oracle Database.
> [](https://i.stack.imgur.com/BZYN6.png)
>
>
>
Upvotes: 1
|
2018/03/16
| 218 | 722 |
<issue_start>username_0: I'm using this code to show a number of locations:
```
{{item}}
---
```
which results in all the locations, separated by a horizontal rule. How can I edit this to get the exact same result, but without the last horizontal rule?<issue_comment>username_1: Use `last` provided by the `*ngFor` along with `*ngIf` on your `---`:
```
{{item}}
---
```
Here is a [**StackBlitz Demo**](https://stackblitz.com/edit/angular-my6ani?file=app%2Fapp.component.html).
Upvotes: 6 [selected_answer]<issue_comment>username_2: In the current angular version (11) the syntax changed to the following:
```
{{ item }}
---
```
See <https://angular.io/api/common/NgForOf#local-variables>
Upvotes: 2
|
2018/03/16
| 432 | 1,431 |
<issue_start>username_0: I am using jQuery to serialize a form, but I would like to exclude all inputs with the class .has-dynamic-prices.
I am using the following code below but it doesn't seem to be working. Any help would be really appreciated! Thanks!!
```
var serialized_form = $("#base-menu-form:input:not(.has-dynamic-prices)").serialize()
```<issue_comment>username_1: Considering - `#base-menu-form` is the form ID
Try:
```
var serialized_form = $('#base-menu-form input:not(.has-dynamic-prices)').serialize();
```
or
```
var serialized_form = $('#base-menu-form input').not('.has-dynamic-prices').serialize();
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: [The other answer from weBBer works well](https://stackoverflow.com/a/49324983/295783)
```
$("form input").not('.has-dynamic-prices').serialize()
```
Alternatively disable the fields before serialising
```js
$("form .has-dynamic-prices").prop("disabled",true); // works
console.log($("form").serialize())
```
```css
.hidden { display:none }
```
```html
Also ignore me although I am not an
```
Upvotes: 1 <issue_comment>username_3: If you want to get all elements for the form, you can use the plain Javascript [elements](https://www.w3schools.com/jsref/coll_form_elements.asp) property of the form, and then filter inputs based on that.
```
$($("#myForm")[0].elements).not(".has-dynamic-prices").serialize()
```
Upvotes: 0
|
2018/03/16
| 1,616 | 6,078 |
<issue_start>username_0: Before you say this is a duplicate let me explain.
I know how to use the Camera in React Native, but in Android (Java) last month I managed the device camera in a simple way; I didn't even have to make a new View to use it. I just did something like this:
```
Uri mOutputFileUri;
Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
intent.putExtra(MediaStore.EXTRA_OUTPUT, mOutputFileUri); // URI of the file where pic will be stored
startActivityForResult(intent, TAKE_PICTURE_FROM_CAMERA);
```
Basically this automatically opens the camera, and when the "take photo" button is pressed the method onActivityResult() is called, getting the image data to manage.
So, my doubt is: is there a way to do this in React Native? I mean, calling a method that automatically opens the camera and returns the data?
Thanks a lot.<issue_comment>username_1: The simplest way to do this would be to use the ImagePicker library <https://github.com/react-community/react-native-image-picker>
This allows you to open the native camera and get the provides a callback with the data from the photo.
```
ImagePicker.launchCamera(options, (response) => {
// Response data
});
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: You can also use expo-camera and expo-av (for videos). Here's a quick prototype that also manages the device permissions.
```
import React, { useState, useRef, useEffect } from "react";
import {
StyleSheet,
Dimensions,
View,
Text,
TouchableOpacity,
SafeAreaView,
} from "react-native";
import { Camera } from "expo-camera";
import { Video } from "expo-av";
const WINDOW_HEIGHT = Dimensions.get("window").height;
const closeButtonSize = Math.floor(WINDOW_HEIGHT * 0.032);
const captureSize = Math.floor(WINDOW_HEIGHT * 0.09);
export default function App() {
const [hasPermission, setHasPermission] = useState(null);
const [cameraType, setCameraType] = useState(Camera.Constants.Type.back);
const [isPreview, setIsPreview] = useState(false);
const [isCameraReady, setIsCameraReady] = useState(false);
const [isVideoRecording, setIsVideoRecording] = useState(false);
const [videoSource, setVideoSource] = useState(null);
const cameraRef = useRef();
useEffect(() => {
(async () => {
const { status } = await Camera.requestPermissionsAsync();
setHasPermission(status === "granted");
})();
}, []);
const onCameraReady = () => {
setIsCameraReady(true);
};
const takePicture = async () => {
if (cameraRef.current) {
const options = { quality: 0.5, base64: true, skipProcessing: true };
const data = await cameraRef.current.takePictureAsync(options);
const source = data.uri;
if (source) {
await cameraRef.current.pausePreview();
setIsPreview(true);
console.log("picture source", source);
}
}
};
const recordVideo = async () => {
if (cameraRef.current) {
try {
const videoRecordPromise = cameraRef.current.recordAsync();
if (videoRecordPromise) {
setIsVideoRecording(true);
const data = await videoRecordPromise;
const source = data.uri;
if (source) {
setIsPreview(true);
console.log("video source", source);
setVideoSource(source);
}
}
} catch (error) {
console.warn(error);
}
}
};
const stopVideoRecording = () => {
if (cameraRef.current) {
setIsPreview(false);
setIsVideoRecording(false);
cameraRef.current.stopRecording();
}
};
const switchCamera = () => {
if (isPreview) {
return;
}
setCameraType((prevCameraType) =>
prevCameraType === Camera.Constants.Type.back
? Camera.Constants.Type.front
: Camera.Constants.Type.back
);
};
const cancelPreview = async () => {
await cameraRef.current.resumePreview();
setIsPreview(false);
setVideoSource(null);
};
const renderCancelPreviewButton = () => (
);
const renderVideoPlayer = () => (
);
const renderVideoRecordIndicator = () => (
{"Recording..."}
);
const renderCaptureControl = () => (
{"Flip"}
);
if (hasPermission === null) {
return ;
}
if (hasPermission === false) {
return No access to camera;
}
return (
{
console.log("cammera error", error);
}}
/>
{isVideoRecording && renderVideoRecordIndicator()}
{videoSource && renderVideoPlayer()}
{isPreview && renderCancelPreviewButton()}
{!videoSource && !isPreview && renderCaptureControl()}
);
}
const styles = StyleSheet.create({
container: {
...StyleSheet.absoluteFillObject,
},
closeButton: {
position: "absolute",
top: 35,
left: 15,
height: closeButtonSize,
width: closeButtonSize,
borderRadius: Math.floor(closeButtonSize / 2),
justifyContent: "center",
alignItems: "center",
backgroundColor: "#c4c5c4",
opacity: 0.7,
zIndex: 2,
},
media: {
...StyleSheet.absoluteFillObject,
},
closeCross: {
width: "68%",
height: 1,
backgroundColor: "black",
},
control: {
position: "absolute",
flexDirection: "row",
bottom: 38,
width: "100%",
alignItems: "center",
justifyContent: "center",
},
capture: {
backgroundColor: "#f5f6f5",
borderRadius: 5,
height: captureSize,
width: captureSize,
borderRadius: Math.floor(captureSize / 2),
marginHorizontal: 31,
},
recordIndicatorContainer: {
flexDirection: "row",
position: "absolute",
top: 25,
alignSelf: "center",
justifyContent: "center",
alignItems: "center",
backgroundColor: "transparent",
opacity: 0.7,
},
recordTitle: {
fontSize: 14,
color: "#ffffff",
textAlign: "center",
},
recordDot: {
borderRadius: 3,
height: 6,
width: 6,
backgroundColor: "#ff0000",
marginHorizontal: 5,
},
text: {
color: "#fff",
},
});
```
Source - [Instamobile](https://www.instamobile.io/react-native-tutorials/capturing-photos-and-videos-with-the-camera-in-react-native)
Upvotes: 2
|
2018/03/16
| 786 | 3,166 |
<issue_start>username_0: I have a custom button in NetSuite, and I am wondering if there is any way for me to set fields on the form using the button. Basically, I am going to have an approval button that will lock down editing, and I am wondering if I can do that through the button and javascript. As far as I can tell, I can't access the context of the script, since I start in the user event script and then go to the client. I might also just be missing something really dumb, but I am fairly new to NetSuite. Here is what I have in my user event script
```
function beforeLoad(context) {
var form = context.form;
form.clientScriptModulePath = './_kk_fc_cs_sd.js';
form.addButton({
id : 'custpage_china_approve_btn',
label : 'Approve - China',
functionName : 'chinaApproveFinalQuote'
});
form.addButton({
id : 'custpage_dallas_approve_btn',
label : 'Approve - Dallas',
functionName : 'dallasApproveFinalQuote'
})
}
```
And then for my client script I have:
```
function chinaApproveFinalQuote()
{
alert(record.id);
var firstCost = context.currentRecord;
firstCost.setValue('custrecord_kk_sd_fc_master_carton_cb', true);
}
```
It pretty much has no idea what context is. I tried passing it in the user event script, and also tried to use record.id, but neither of those worked. Any idea, or do I have to go about this a different way?<issue_comment>username_1: Although you can do it this way (look at the relationship between the client and user scripts in [SS2.0 Display Message on Record](https://stackoverflow.com/questions/40245943/ss2-0-display-message-on-record/40267529#40267529)), this is an area where workflows tend to excel.
Workflows have methods for locking records except for particular users; showing approve/reject buttons for particular users; setting fields etc on approval.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The issue you are having is because you need to use the currentRecord in the client script in order to set/get values. One other thing to note here is that if you are only referencing the client script via its NetSuite folder location (not actually deployed to the record), you still need to include a pageInit entry point function (it can be empty though). Just another SuiteScript 2.0 caveat...
```
/**
*@NApiVersion 2.x
*@NScriptType ClientScript
*/
define(
[
'N/currentRecord'
],
function (
nsCurRec
) {
function chinaApproveFinalQuote() {
var btn;
var rec = nsCurRec.get();
alert(rec.id);
btn = rec.getField('custpage_china_approve_btn');
btn.isDisabled = true;
rec.setValue({
fieldId: 'custrecord_kk_sd_fc_master_carton_cb',
value: true
});
}
function pageInit(context) {
}
return {
pageInit: pageInit,
chinaApproveFinalQuote: chinaApproveFinalQuote
};
}
);
```
Upvotes: 2
|
2018/03/16
| 595 | 2,275 |
<issue_start>username_0: Why is `n` in
```
const int n = -0123;
```
an octal literal? I thought that all octal literals had to start with a 0, and this one doesn't since it starts with a negative.
It's a small point, I know, but it's causing me great confusion!<issue_comment>username_1: >
> How is it possible that an octal literal can be negative?
>
>
>
There are no negative integer literals, only positive. The literal here is 0123 which does start with a 0 and is thus octal. `-` in that expression is the unary minus operator.
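The rule is easy to check in any language with octal literals; a quick Python sketch (Python writes octal with a `0o` prefix, but the parsing idea is the same: the literal comes first, and unary minus is applied afterwards):

```python
# the literal itself is the non-negative octal value 83;
# '-' is a separate unary-minus operator applied to it afterwards
literal = 0o123
negated = -0o123
```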
Upvotes: 5 [selected_answer]<issue_comment>username_2: This is absolutely true. However, the same applies to all integer literals - decimal, hexadecimal, octal, and binary. In fact, there are no negative integer literals. Expressions such as `-1` apply the unary minus operator to the value represented by the literal, which may involve implicit type conversions ([reference](http://en.cppreference.com/w/cpp/language/integer_literal)).
Upvotes: 3 <issue_comment>username_3: *You seem to be mildly confused about literals, so:*
In short, a literal is a way to describe a value that can be seen *literally* in the code. When you write, for example, `4` in your code, you can *literally* see that it's the number four, and numbers are interpreted as `int`s by default unless they have a dot (`.`) after them, or any of the other valid suffixes.
However, you can tell the compiler to interpret it as an unsigned by writing `4u`. Now you can literally see that it's the number 4 interpreted as an unsigned. Likewise, the value `"Hello World"` is a string literal because you can tell in the code that it is the string `"Hello World"`. On the other hand, a user-defined class `Person` has no literal way of being seen in the code. Since C++11 you can, however, make user-defined literals, but for a `Person` that would still arguably make no sense. The standard library uses this in the `<chrono>` header, defining literals such as `s`, `ms` and `ns` for seconds, milliseconds and nanoseconds respectively.
When writing the octal `-0123`, the leading `0` tells the compiler to interpret the digits as an octal value, in the same way `0xFF` is literally interpreted as hex. The `-` sign is simply a way to apply the unary negation operator to the number `0123`.
Upvotes: 1
|
2018/03/16
| 959 | 3,707 |
<issue_start>username_0: I am trying to predict a number based on an image and a numerical value. To put it in practical terms, let's say I am trying to add an image to the standard house-price predictor. So among the other features (price, sqm, no. of rooms, etc.) there will be an image, and ultimately the price to be predicted will also be based on the image supplied. Has that been implemented before? Also, how can I add an image along with numbers as a feature? Is there an existing project I can use?
Concatenate image features with other features (price, sqm, no of rooms, etc.) and use this as input to a mlp to predict the house prices.
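A minimal sketch of that concatenation step in plain Python (the 4-element `image_features` list is a made-up stand-in for the several-thousand-dimensional feature vector a pretrained net would produce):

```python
def build_mlp_input(image_features, price, sqm, n_rooms):
    """Flatten CNN image features and numeric house features into one input vector."""
    return list(image_features) + [price, sqm, n_rooms]

# hypothetical tiny feature vector, just for illustration
vec = build_mlp_input([0.1, 0.9, 0.3, 0.7], price=250000.0, sqm=120.0, n_rooms=3)
```

An MLP regressor (e.g. scikit-learn's MLPRegressor) can then be fit on rows of such vectors.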
Upvotes: 0 <issue_comment>username_2: The way I understand your question: how can I include information taken from a photo in the prediction of a house price?
* I assume the picture can be classified into something like e.g. [mansion|villa|town house|beach house|shack]
This leaves you with:
* how would I make an ImageNet competitor (like VGG16) produce a string that gives me one of the 5 values?
The general idea is to use the pretrained ImageNet (here: VGG16) and remove some layers at the end, especially the one that normalizes the output into 1000 classes (BatchNormalization) and replace this layer with e.g. 5 outputs (see example classes above).
You need probably several thousand annotated images which you can use in batches to retrain your existing network (the pruned/altered VGG16).
I don't know which Deep Learning environment you are using, <https://keras.io> and <https://caffe2.ai> would be two of the currently most popular ones. Both have all the networks you mentioned in their Model Zoo or repository with pre-trained weights for the ImageNet data set.
Once you re-trained the DNN (and get high enough accuracy on you test image set), you can run it to get the best fitting class and use it as either one input for your next ML model either as a string/enumeration (by just using the best) or as 5 numbers with the certainty of the DNN for each of the 5 classes.
You can then use a NN again or basically any ML model as listed on the Data Mining Group's page for the PMML standard (<http://dmg.org/pmml/v4-3/GeneralStructure.html>).
SciKit learn (<https://en.wikipedia.org/wiki/Scikit-learn>) has a bunch of them - if you are into Python - Random Forest or Support Vector Machine should produce reasonably good results if your data has a good quality.
Hope it helps!
Upvotes: 2 <issue_comment>username_3: You have two tasks.
1. Get features from the images as one of your inputs for your final prediction task, or get predictions from the images as your inputs for your final prediction task.
2. The extracted features or image predictions should then be used as another input along with the other numerical inputs, say 'square feet' and 'number of bedrooms'.
```
from os import listdir, path

from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Model
from keras.preprocessing.image import img_to_array, load_img

def extract_features(directory_of_images):
    # drop the final classification layer and use the last remaining layer as the feature extractor
    model = VGG16()
    model.layers.pop()
    model = Model(inputs=model.inputs, outputs=model.layers[-1].output)
    features = {}
    for image_name in listdir(directory_of_images):
        imagefile = path.join(directory_of_images, image_name)
        image = load_img(imagefile, target_size=(224, 224))
        image = img_to_array(image)
        image = image.reshape((1, image.shape[0], image.shape[1], image.shape[2]))
        image = preprocess_input(image)
        feature = model.predict(image, verbose=0)
        image_id = image_name.split('.')[0]
        features[image_id] = feature
    return features
```
Use these extracted features as additional input columns along with the 'numberOfBedroom' and 'square_feet_length' columns of values to predict price.
Upvotes: 2
|
2018/03/16
| 1,148 | 4,347 |
<issue_start>username_0: I'm reading a setting from a json file with an expression, but the expression doesn't work.
The setting is outputPath -> @activity('GetSet').output.value[0].subs.outputPath
The file has the expression:
"outputPath": "/subs/@{formatDateTime(utcnow(), 'yyyy')}/subs.json"
The result should be /subs/2018/subs.json, but it appears exactly as written in the file. If I put the expression in the Settings directly, it works.
JSON of my pipeline:
```
{
"name": "subscription experience",
"type": "ExecutePipeline",
"dependsOn": [
{
"activity": "GetSet",
"dependencyConditions": [
"Succeeded"
]
}
],
"typeProperties": {
"pipeline": {
"referenceName": "Client",
"type": "PipelineReference"
},
"waitOnCompletion": true,
"parameters": {
"outputPath": "@activity('GetSet').output.value[0].subs.outputPath",
}
}
}
```
my file is:
```
{
  "test": "my teste",
  "subs": {
    "outputPath": "/subs/@{formatDateTime(utcnow(), 'yyyy')}/subscriptions.json"
  }
}
```
|
2018/03/16
| 1,332 | 3,928 |
<issue_start>username_0: I am having difficulty moving my Neo4j database from version 2.2.0 Community to 3.3.3 Enterprise. The error on starting Neo4j 3.3.3 is "Not possible to upgrade a store with version 'v0.A.5' to current store version 'v0.A.8' (Neo4j 3.3.3)."
The process I am following is to upgrade the database from 2.2.0 to 2.3.8 and then upgrade the 2.3.8 database to 3.3.3.
**For upgrading from 2.2.0 to 2.3.8 I followed the documentation here:**
<https://neo4j.com/docs/2.3.8/deployment-upgrading.html>
Basically, this consisted of:
* shutting down 2.2.0
* copying the graph.db directory to the server where 2.3.8 was installed
* placing that graph.db directory in the data directory, adjusting the permissions, and
* starting neo4j
This process completed without error.
**For upgrading from 2.3.8 to 3.3.3 I followed the documentation here:**
<https://neo4j.com/docs/operations-manual/current/upgrade/deployment-upgrading/>
I applied this process on the 3.3.3 server to the graph.db directory transferred from 2.3.8:
```
sudo service neo4j stop
rm -rf /var/lib/neo4j/data/databases/graph.db
neo4j-admin import --mode=database --database=graph.db --from=/path/to/2.3/version/of/graph.db
sudo chown -R neo4j:neo4j /var/lib/neo4j/data/databases/graph.db
sudo service neo4j start
```
On the start of Neo4j 3.3.3, the syslog showed a tremendous amount of error output, the salient part of which is:
```
Mar 15 22:35:38 ip-XXX-XXX-XXX-XXX neo4j[61967]: 2018-03-15 22:35:38.644+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@a202ccb' was successfully initialized, but failed to start. Please see the attached cause exception "Not possible to upgrade a store with version 'v0.A.5' to current store version `v0.A.8` (Neo4j 3.3.3).". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@a202ccb' was successfully initialized, but failed to start. Please see the attached cause exception "Not possible to upgrade a store with version 'v0.A.5' to current store version `v0.A.8` (Neo4j 3.3.3).".
Mar 15 22:35:38 ip-XXX-XXX-XXX-XXX neo4j[61967]: org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase@a202ccb' was successfully initialized, but failed to start. Please see the attached cause exception "Not possible to upgrade a store with version 'v0.A.5' to current store version `v0.A.8` (Neo4j 3.3.3).".
```
Inspection of the graph.db from 2.3.8 on the 3.3.3 server showed:
```
neo4j-admin store-info --store=/path/to/2.3/version/of/graph.db
Store format version: v0.A.5
unexpected error: Unknown store version 'v0.A.5'
```
My server information is:
* 2.2.0 community: ubuntu 14.04 installed, i believe, from tarball (i did not build this server)
* 2.3.8 community: ubuntu 16.04 installed via apt-get running as service
* 3.3.3 enterprise: ubuntu 16.04 installed via apt-get running as service
**My question is:**
What is the correct process for successfully upgrading an existing 2.2.0 db to 3.3.3?<issue_comment>username_1: Maybe the docs are not fully accurate here and we need to take an additional step.
Since your upgrade to 2.3.8 seemed to work nicely I'd take this as baseline for the following procedure.
A long time ago I wrote a hackish upgrade script; see <https://gist.github.com/sarmbruster/3011606>. In line 46 change the version to 3.0.12 and check whether the 2.3.8 -> 3.0.12 upgrade worked. If yes, proceed with 3.3.4.
How large is your graph.db folder?
Upvotes: 2 [selected_answer]<issue_comment>username_2: The neo4j operations manual, since version 3.1, documents how to upgrade a database from version 2.x to 3.1+.
For instance, [here is the section of the docs for upgrading from 2.x to 3.3.4](https://neo4j.com/docs/operations-manual/3.3/upgrade/deployment-upgrading/#upgrade-instructions-2x).
Upvotes: 0
|
2018/03/16
| 1,228 | 3,549 |
<issue_start>username_0: Trying to write a simple thread pool. Only the first `thread_t` is initialized and it sort of hangs. I can't proceed; I need help.
```
class thread_t
{
public:
thread_t(int id, bool& running)
:id_(id)
, running_(running)
{
idle_ = true;
thread_ = new thread([=]() { run(); });
}
~thread_t()
{
thread_->join();
}
private:
void run()
{
cout << id_ << " starting \n";
while (running_)
{
this_thread::sleep_for(chrono::milliseconds(10ms));
}
}
private:
thread* thread_;
bool idle_;
int id_;
bool& running_;
};
class pool
{
public:
pool(int n, bool& running)
:nthreads_(n)
,running_(running)
{
if (n > std::thread::hardware_concurrency()) nthreads_ = n = std::thread::hardware_concurrency();
for (int i = 0; i < n; i++)
{
threads_.push_back(thread_t(i, running_));
}
}
private:
vector<thread_t> threads_;
int nthreads_;
bool& running_;
};
//queue < function> tasks;
int main()
{
bool running = true;
pool pool1(5, running);
this_thread::sleep_for(chrono::seconds(5s));
running = false;
return 0;
}
```<issue_comment>username_1: Your code is trying to join the first thread you created right away. From `gdb`:
```
#0 0x00007ffff729eb6d in pthread_join () from /lib64/libpthread.so.0
#1 0x00007ffff7ab6223 in __gthread_join (__value_ptr=0x0,
__threadid=)
at /var/tmp/paludis/sys-devel-gcc-7.3.0/work/build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:668
#2 std::thread::join (this=0x55555576bc20)
at /var/tmp/paludis/sys-devel-gcc-7.3.0/work/gcc-7.3.0/libstdc++-v3/src/c++11/thread.cc:136
#3 0x00005555555553ff in thread_t::~thread_t (this=0x7fffffffd6f0,
__in_chrg=) at thread.cpp:21
#4 0x000055555555559b in pool::pool (this=0x7fffffffd740, n=5,
running=@0x7fffffffd737: true) at thread.cpp:50
#5 0x0000555555555151 in main () at thread.cpp:66
```
Upvotes: 1 <issue_comment>username_2: <NAME> rightly pointed out that vector::push_back was indeed calling the destructor (+1 to him). Now I use unique_ptr, which has solved my issue for now.
Edited: Just completed this project here's if anybody interested [link](https://codereview.stackexchange.com/questions/189798/simple-versaitile-thread-pool)
```
class thread_t
{
public:
thread_t(int id, bool& running)
:id_(id)
, running_(running)
{
idle_ = true;
thread_ = new thread([=]() { run(); });
}
~thread_t()
{
cout << id_ << " killing \n";
thread_->join();
}
private:
void run()
{
cout << id_ << " starting \n";
while (running_)
{
this_thread::sleep_for(chrono::milliseconds(10ms));
}
}
private:
thread* thread_;
bool idle_;
int id_;
bool& running_;
};
class pool
{
public:
pool(int n, bool& running)
:nthreads_(n)
,running_(running)
{
if (n > std::thread::hardware_concurrency()) nthreads_ = n = std::thread::hardware_concurrency();
for (int i = 0; i < n; i++)
{
threads_.push_back(make_unique<thread_t>(i, running_));
}
}
private:
vector<unique_ptr<thread_t>> threads_;
int nthreads_;
bool& running_;
};
//queue<function<void()>> tasks;
int main()
{
bool running = true;
pool pool1(5, running);
this_thread::sleep_for(chrono::seconds(5s));
running = false;
return 0;
}
```
Upvotes: 0
|
2018/03/16
| 366 | 1,066 |
<issue_start>username_0: I have a input file with this format:
```
Head
20 20 20 !Random text
random lines
```
I want to read the values as 3 different variables 'a', 'b', 'c' and multiply a*b.
I have the following code:
```
import sys
import numpy as np
import fileinput
filename = 'file.txt'
#filename = sys.argv[-1]
data = np.genfromtxt(filename, skip_header=8, dtype=[('a',int),('b',int),
('c',int)])
result = a*b
```
but it is not working.<issue_comment>username_1: Do you have to use numpy?
Otherwise, this should work, provided the file always looks like this.
```
with open("file.txt") as file_handle:
    file_handle.readline()  # skip the header
    values = file_handle.readline().split()
a = int(values[0])
b = int(values[1])
result = a * b
```
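If numpy is required, note that `genfromtxt` with a structured dtype exposes the columns by field name, so no manual unpacking is needed. A sketch using an in-memory file with the question's layout — the parameter choices `comments='!'` and `max_rows=1` are mine, to cope with the trailing `!Random text` and the extra lines:

```python
import io
import numpy as np

text = "Head\n20 20 20 !Random text\nrandom lines\n"
data = np.genfromtxt(io.StringIO(text), skip_header=1,
                     dtype=[('a', int), ('b', int), ('c', int)],
                     comments='!',   # strip the trailing "!Random text"
                     max_rows=1)     # stop before the non-numeric lines
result = data['a'] * data['b']       # structured fields are accessed by name
print(int(result))                   # 400
```

With the real file, you would pass the filename instead of the `StringIO` object.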
Upvotes: 1 <issue_comment>username_2: Here is a slightly different approach in one line
```
import re
from functools import reduce
print(reduce(lambda x,y:x*y,[list(map(int,re.findall(r'\d+',i)))[:2] for i in open('file.txt') if re.findall(r'\d+',i)][0]))
```
output:
```
400
```
Upvotes: 0
|
2018/03/16
| 504 | 1,406 |
<issue_start>username_0: My tables are
Parent WO
```
WO PRICE
1 1790
1 9
```
Child WO
```
WO PRICE
1 200
1 400
1 600
1 100
```
I am trying to do this
```
Select sum(p.price), SUM(c.price) from Parent_WO p
left outer join Child_WO c
on p.WO= c.WO
group by p.WO
```
I am getting the wrong value for p.price: I am getting 7196 when it should be 1796. It is multiplying the total of the parent WO by the 4 child WO rows.<issue_comment>username_1: Try using 2 [CTE's](https://technet.microsoft.com/en-us/library/ms190766(v=sql.105).aspx); calculate `Parent` price in one CTE and `Child` price in another CTE and join the results:
```
with parent_sum as (
select
wo
,SUM(price) as ParentSUM
from parent_wo
group by wo
)
,child_sum as (
select
wo
,SUM(price) as ChildSUM
from child_wo
group by wo
)
select
p.wo
,ParentSUM
,ChildSUM
from parent_sum p
left join child_sum c
on p.wo = c.wo
```
Upvotes: 1 <issue_comment>username_2: Do the aggregation *before* doing the `sum()`:
```
Select coalesce(p.wo, c.wo) as wo, sum(p.price), SUM(c.price)
from (select p.wo, sum(p.price) as parent_price
from Parent_WO p
group by p.wo
) p full outer join
(select c.wo, sum(c.price) as child_price
from Child_WO c
group by c.wo
) c
on p.WO = c.WO ;
```
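The fan-out behind the inflated total can be reproduced in a few lines with Python's built-in sqlite3 (same data as the question): every parent row pairs with every matching child row before grouping, so both sums get multiplied by the other table's row count.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE parent_wo (wo INTEGER, price INTEGER);
    CREATE TABLE child_wo  (wo INTEGER, price INTEGER);
    INSERT INTO parent_wo VALUES (1, 1790), (1, 9);
    INSERT INTO child_wo  VALUES (1, 200), (1, 400), (1, 600), (1, 100);
""")

# Naive join: every parent row pairs with all 4 matching child rows.
naive = con.execute("""
    SELECT SUM(p.price), SUM(c.price)
    FROM parent_wo p LEFT JOIN child_wo c ON p.wo = c.wo
    GROUP BY p.wo
""").fetchone()
print(naive)    # (7196, 2600): each sum counted once per row on the other side

# Aggregate first, then join: each sum is computed on its own table.
fixed = con.execute("""
    SELECT p.wo, p.parent_sum, c.child_sum
    FROM (SELECT wo, SUM(price) AS parent_sum FROM parent_wo GROUP BY wo) p
    LEFT JOIN (SELECT wo, SUM(price) AS child_sum FROM child_wo GROUP BY wo) c
    ON p.wo = c.wo
""").fetchone()
print(fixed)    # (1, 1799, 1300)
```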
Upvotes: 1 [selected_answer]
|
2018/03/16
| 597 | 2,041 |
<issue_start>username_0: I am trying to save a file using VBA, but the file name needs to reference a cell within the workbook from which I am running the macro.
I need to copy data from a tab within a macro-enabled workbook into a new comma-separated text file. The name of the text file (output file) should contain a reference to a cell in the workbook from which the macro is being run.
Another issue is that I would like to change the delimiter in the output file to a pipe-delimited format, not comma-separated. Is this possible?
This is what I have tried
```
Sheets("Cash Sheet").Select
Rows("1:100000").Select
Selection.Copy
Workbooks.Add
Selection.PasteSpecial Paste:=xlPasteValues, Operation:=xlNone,
SkipBlanks _
:=False, Transpose:=False
Application.CutCopyMode = False
Application.DisplayAlerts = False
ActiveWorkbook.SaveAs fileName:= _
"C:\Desktop\CashBook_\" & Range("E3") & ".txt", FileFormat:=xlCSV,
CreateBackup:=False
ActiveWindow.Close
Sheets("Control Sheet").Select
Range("A1").Select
```
|
2018/03/16
| 1,771 | 6,488 |
<issue_start>username_0: I am using `AWSSDK.S3` version `3.3.17.2` and `AWSSDK.Core` version `3.3.21.16` to upload a file and then download the same file. The code below is not able to download the file **if the file name has spaces (or `#`)**
```
public class AmazonS3
{
public async Task<string> UploadFileAsync(string sourceFile, string s3BucketUrl)
{
AmazonS3Uri s3Uri = new AmazonS3Uri(s3BucketUrl);
using (var s3 = new AmazonS3Client(s3Uri.Region))
{
using (TransferUtility utility = new TransferUtility(s3))
{
TransferUtilityUploadRequest request = new TransferUtilityUploadRequest
{
BucketName = s3Uri.Bucket,
ContentType = "application/pdf",
FilePath = sourceFile,
Key = s3Uri.Key + Path.GetFileName(sourceFile),
};
await utility.UploadAsync(request).ConfigureAwait(false);
}
}
return Path.Combine(s3BucketUrl, Path.GetFileName(sourceFile));
}
public async Task DownloadFileAsync(string destinationFilePath, string s3Url)
{
var s3Uri = new AmazonS3Uri(s3Url);
var s3Client = new AmazonS3Client(s3Uri.Region);
GetObjectRequest getObjectRequest = new GetObjectRequest
{
BucketName = s3Uri.Bucket,
Key = s3Uri.Key
};
// dispose the underlying stream when writing to the local file system is done
using (var getObjectResponse = await s3Client.GetObjectAsync(getObjectRequest).ConfigureAwait(false))
{
await getObjectResponse.WriteResponseStreamToFileAsync(destinationFilePath, false, default(System.Threading.CancellationToken)).ConfigureAwait(false);
}
}
}
```
Then, for testing purposes, I upload the file and download the same file again
```
AmazonS3 s3 = new AmazonS3();
var uploadedFileS3Link = await s3.UploadFileAsync("C:\\temp\\my test file.pdf", @"https://mybucket.s3-us-west-2.amazonaws.com/development/test/");
// get exception at line below
await s3.DownloadFileAsync("C:\\temp\\downloaded file.pdf",uploadedFileS3Link );
```
I am getting exception
>
> **Amazon.S3.AmazonS3Exception: The specified key does not exist**. --->
> Amazon.Runtime.Internal.HttpErrorResponseException: The remote server
> returned an error: **(404) Not Found**. ---> System.Net.WebException: The
> remote server returned an error: (404) Not Found. at
> System.Net.HttpWebRequest.EndGetResponse(IAsyncResult asyncResult)
>
> at
> System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult
> iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean
> requiresSynchronization) --- End of stack trace from previous
> location where exception was thrown --- at
> System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task
> task) at ......
>
>
>
*removed remaining exception for brevity*
The file does exist inside the bucket. In fact, I can copy and paste the s3url (i.e. the value of the `uploadedFileS3Link` variable) and download the file via a browser.
(Note that in reality I am trying to download 1000+ files that were already uploaded with spaces in their names, so removing the spaces while uploading is not an option.)
**Update 1**
I noticed that S3 Browser URL-encodes the file name
[](https://i.stack.imgur.com/Fpvtx.png)
I tried downloading the file using the encoded file path `https://mybucket.s3-us-west-2.amazonaws.com/development/test/my%20test%20file.pdf`
but it still did not work<issue_comment>username_1: so finally i found what was the issue. I am using [AmazonS3Uri](https://github.com/aws/aws-sdk-net/edit/master/sdk/src/Services/S3/Custom/Util/AmazonS3Uri.cs) class to parse the given S3 url and get the key, bucket and region. The `AmazonS3Uri` returns my key as `development/test/my%20test%20file.pdf`
Because internally `AmazonS3Uri` is using `System.Uri` to build Uri and then returns `AbsolutePath` which returns Encoded path as Key (*Should it return Local Path as Key?*)
I don't know why but `AmazonS3Client` does not like it, and it throws exception if you pass Encoded key.
So to fix the issue i decode the key using `System.Net.WebUtility.UrlDecode(s3Uri.Key)`. So new download method looks like
```
public async Task DownloadFileAsync(string destinationFilePath, string s3Url)
{
var s3Uri = new S3UrlParser(s3Url);
var s3Client = new AmazonS3Client(s3Uri.Region);
GetObjectRequest getObjectRequest = new GetObjectRequest
{
BucketName = s3Uri.Bucket,
Key = System.Net.WebUtility.UrlDecode(s3Uri.Key)
};
// dispose the underlying stream when writing to the local file system is done
using (var getObjectResponse = await s3Client.GetObjectAsync(getObjectRequest).ConfigureAwait(false))
{
await getObjectResponse.WriteResponseStreamToFileAsync(destinationFilePath, false, default(System.Threading.CancellationToken)).ConfigureAwait(false);
}
}
```
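The encode/decode round trip at the heart of this fix can be illustrated with Python's urllib — used here only as a stand-in for `System.Net.WebUtility`; the mechanics are the same:

```python
from urllib.parse import quote, unquote

raw_key = "development/test/my test file.pdf"
encoded = quote(raw_key)            # what AbsolutePath-style parsing yields
print(encoded)                      # development/test/my%20test%20file.pdf
print(unquote(encoded) == raw_key)  # True -- decode before fetching the object
```

The key handed to `GetObjectRequest` must be the raw (decoded) form; the percent-encoded form names a different, nonexistent object.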
Upvotes: 4 [selected_answer]<issue_comment>username_2: I had this same issue parsing filenames with spaces and forward slashes in the name. The problem with unescaped forward slashes is that most tools treat each forward slash as a directory delimiter within S3.
So a simple name like myfilename = "file/name.pdf" appended to a virtual path would turn into; "directory/subdirectory/myfile/name.pdf" (here it will now have a directory called myfile where it was not intended)
It can be mitigated by escaping the filename "directory/subdirectory/myfile%2fname.pdf" at the point of upload and then decoding as suggested in the answer of this post.
After experiencing errors from erroneous slashes and blank folder names in AWS Explorer caused by extra double slashes, e.g. "//" (described further here: <https://webapps.stackexchange.com/a/120135>),
I've come to the conclusion that it's better to strip forward slashes from the filename portion (not the virtual directory) altogether, rather than relying on escaping on upload and unescaping on download — treating the forward slash as a reserved character, as others have similarly experienced: <https://stackoverflow.com/a/19515741/1165173>
In the case of an escaped whitespace symbol "+" it is usually safe to unescape the character and it will work as intended.
The filename portion can be encoded using

```
System.Web.HttpUtility.UrlEncode(input);
```

The list of non-acceptable filename characters can be checked at the point of file creation using `Path.GetInvalidFileNameChars()`.
Documentation: <https://learn.microsoft.com/en-us/dotnet/api/system.io.path.getinvalidfilenamechars>
Upvotes: 0
|
2018/03/16
| 368 | 1,305 |
<issue_start>username_0: I'm new to git, so this may seem like a simple question.
I'm working on a Laravel 5.2 project with many git commits. Now I've migrated the Laravel version to 5.3 with its new folder structure, copying files from the old project, and made `git init` and 3 git commits.
How to push these new commits of the new Laravel 5.3 project to the **same** old remote repo which I used with Laravel 5.2 ?<issue_comment>username_1: You need to force push to the old repository; DANGER! This will OVERWRITE the old repo, replacing the history of the old repo with only the 3 commits you have in the new one (after the `git init`). You write this as (assuming your main branch is `master`):
```
git push --force [remote_URL] master
```
If you're not sure what `[remote_URL]` is, you can run the following in the old (5.2) repo:
```
git remote get-url [repo_name]
```
(`[repo_name]` is likely `origin`.)
To simplify future pushes, you can also add the old repo as a remote:
```
git remote add [repo_name] [remote_URL]
```
Then force push to that:
```
git push --force [remote_name] master
```
Upvotes: 0 <issue_comment>username_2: I solved this problem by copying the `.git` folder along with `.gitattributes` and `.gitignore` from the old repo to the new project.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 709 | 1,982 |
<issue_start>username_0: Currently, I have a list of lists containing:
```
lst = [['abc','def'],['efg','hjk']]
```
and I want to extract each element in the list into its own individual list, together with an added index at the front, to produce an output of:
```
lst = [[1,'abc'],[1,'def'],[2,'efg'],[2,'hjk']]
```
I tried a way where:
```
for ind, val in enumerate(lst):
    print(ind + 1, val)
```
but I'm getting:
```
1 ['abc','def']
2 ['efg','hjk']
```
Any help is much appreciated.<issue_comment>username_1: Try this:
```
lst = [['abc','def'],['efg','hjk']]
new_list = []
length_list = len(lst)
counter = 0
while counter < length_list:
    for elem in lst[counter]:
        new_list.append([counter + 1, elem])  # counter starts from 0
    counter += 1
```
Upvotes: 1 <issue_comment>username_2: You can do it in a list comprehension like this:
```
new_lst = [[idx, val] for idx, i in enumerate(lst,1) for val in i]
# [[1, 'abc'], [1, 'def'], [2, 'efg'], [2, 'hjk']]
```
Or, if you prefer the syntax of the nested loop:
```
new_list = []
for idx, i in enumerate(lst, 1):
    for val in i:
        new_list.append([idx, val])
```
Upvotes: 2 <issue_comment>username_3: Only a slight modification is needed. Do I understand correctly that you want a list afterwards?
```
lst2 = []
for ind, val in enumerate(lst):
    for element in val:
        lst2.append([ind + 1, element])
```
Upvotes: 1 <issue_comment>username_4: Simple and straightforward.
```
first = [['abc','def'],['efg','hjk']]
new_first = []
for i, v in enumerate(first):
    second = v
    for j, w in enumerate(second):
        new_first.append(((i + 1), w))
print(new_first)
```
Produces:
```
[(1, 'abc'), (1, 'def'), (2, 'efg'), (2, 'hjk')]
```
Upvotes: 0 <issue_comment>username_5: You can try with range :
```
lst = [['abc','def'],['efg','hjk']]
print([[i+1,sub_item] for i in range(len(lst)) for sub_item in lst[i]])
```
output:
```
[[1, 'abc'], [1, 'def'], [2, 'efg'], [2, 'hjk']]
```
Upvotes: 0
|
2018/03/16
| 796 | 2,664 |
<issue_start>username_0: Recently I implemented `RecyclerView` and `CardView` in my Gradle file, but the versions of both of them don't match the `compileSdkVersion`. This is my build.gradle file:
```
android {
compileSdkVersion 26
defaultConfig {
applicationId "com.example.mostafa.tostrategies"
minSdkVersion 24
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
dependencies {
implementation fileTree(include: ['*.jar'], dir: 'libs')
implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support.constraint:constraint-layout:1.0.2'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
implementation 'com.android.support:cardview-v7:28.0.0-alpha1'
implementation 'com.android.support:recyclerview-v7:28.0.0-alpha1'
}
```<issue_comment>username_1: The major version of support libraries and `recyclerview` and `cardview` must be equal to `compileSdkVersion`
For example, if you use `recyclerview-v7:xx.yy.zz`, your `compileSdkVersion` must be xx.
Upvotes: 0 <issue_comment>username_2: Replace these
```
implementation 'com.android.support:cardview-v7:28.0.0-alpha1'
implementation 'com.android.support:recyclerview-v7:28.0.0-alpha1'
```
With
```
implementation 'com.android.support:cardview-v7:26.1.0'
implementation 'com.android.support:recyclerview-v7:26.1.0'
implementation 'com.android.support:design:26.1.0'
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: As you can check in the [official doc](https://developer.android.com/topic/libraries/support-library/revisions.html#28-0-0-alpha1):
>
> Note: 28.0.0-alpha1 is a pre-release version to support the Android P developer preview.
>
>
>
To fully test your app's compatibility with Android P and begin using new APIs you have to use:
```
android {
compileSdkVersion 'android-P'
defaultConfig {
targetSdkVersion 'P'
}
...
}
```
Also pay attention to use the **same version** of support libraries.
Instead if you want to use the version v26 just use:
```
implementation 'com.android.support:cardview-v7:26.1.0'
implementation 'com.android.support:recyclerview-v7:26.1.0'
implementation 'com.android.support:appcompat-v7:26.1.0'
```
Upvotes: 1
|
2018/03/16
| 607 | 2,038 |
<issue_start>username_0: I am pretty new to R and Spark. I want to read a parquet file with the following code. Does anyone know how to specify the schema there?
```
library(sparklyr)
sc <- spark_connect(master = "yarn",
appname = "test")
df <- spark_read_parquet(sc,
"name",
"path/to/the/file",
repartition = 0,
schema = "?")
```
I looked at the link <https://spark.rstudio.com/reference/spark_read_parquet/>; there isn't any detail or example regarding how to set the schema in the function to optimize it.
|
2018/03/16
| 2,733 | 8,467 |
<issue_start>username_0: I am trying to combine these two loops per my instructions, but I can't figure out how to do something like this, since my counter `i` in the for loop can't go inside the while loop condition. My teacher wants the program to use one loop only. Thanks in advance. The code basically runs through a txt file; it is supposed to first assign values to an array, and then in the for loop I assign them to a pointer.
This is the part I have a problem with:
```
void fillArr(vector<student> &arr, vector<student*> &ptrels){
    student temp;
    ifstream inFile ("index.txt");
    while (!inFile.eof()) {
        inFile >> temp.fName >> temp.Lname >> temp.gpa >> temp.id >> temp.email;
        arr.push_back(temp);
    }
    for (unsigned i = 0; i < arr.size(); i++ ){
        ptrels.push_back(&arr[i]);
    }
    // combine the two above loops
}
```
Here is the whole code:
```
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
#include <cstdlib>
#include <algorithm>
using namespace std;

struct student {
    string fName, Lname, email, id;
    double gpa;
};

void input();
void fillArr(vector<student> &arr, vector<student*> &ptrels);
void printFile(vector<student*> pointarray);
void sortFile(vector<student> &arr, vector<student*> &pointarray);

int main() {
    vector<student> elements;
    vector<student*> ptrels;

    ifstream inFile;
    inFile.open("index.txt");

    int answer;
    fillArr(elements, ptrels);
    cout << "Please select:" << endl
         << "1 = Select All" << endl
         << "2 = Order A-Z by Name" << endl
         << "3 = To exit" << endl;
    cin >> answer;
    if (answer == 1) {
        printFile(ptrels);
        main();
    }
    if (answer == 2) {
        sortFile(elements, ptrels);
        printFile(ptrels);
        main();
    }
    if (answer == 3) {
        inFile.close();
        exit(0);
    }
    else {
        cout << "Invalid Try Again:" << endl;
        main();
    }
    return 0;
}

void fillArr(vector<student> &arr, vector<student*> &ptrels) {
    student temp;

    ifstream inFile("index.txt");
    while (!inFile.eof()) {
        inFile >> temp.fName >> temp.Lname >> temp.gpa >> temp.id >> temp.email;
        arr.push_back(temp);
    }
    for (unsigned i = 0; i < arr.size(); i++) {
        ptrels.push_back(&arr[i]);
    }
    // combine the two above loops
}

void printFile(vector<student*> pointarray) {
    for (unsigned j = 0; j < pointarray.size(); j++) {
        cout << pointarray[j]->fName << " ";
        cout << pointarray[j]->Lname << " ";
        cout << pointarray[j]->gpa << " ";
        cout << pointarray[j]->id << " ";
        cout << pointarray[j]->email << " ";
        cout << endl;
    }
}

// swap the elements by pointer. you are swapping the record, not the pointers.
// only sorting by firstname, sort by all 5
void sortFile(vector<student> &arr, vector<student*> &pointarray) {
    for (unsigned i = 0; i < arr.size(); ++i) {
        for (unsigned j = i + 1; j < arr.size(); ++j) {
            if (arr[i].fName > pointarray[j]->fName) {
                swap(arr[i].fName, pointarray[j]->fName);
                swap(arr[i].Lname, pointarray[j]->Lname);
                swap(arr[i].gpa, pointarray[j]->gpa);
                swap(arr[i].id, pointarray[j]->id);
                swap(arr[i].email, pointarray[j]->email);
            }
        }
    }
}
```
Also, I know I am supposed to ask this in a different question, but she also wants me to make the sorting — the last function, sortFile — sort by all 5 values, not only firstName. Plus she dislikes the swap function, so I am looking for alternatives. Any hints would be appreciated.
```
void fillArr(vector<student> &arr, vector<student*> &ptrels){
    student temp;
    ifstream inFile ("index.txt");
    while (!inFile.eof()) {
        inFile >> temp.fName >> temp.Lname >> temp.gpa >> temp.id >> temp.email;
        arr.push_back(temp);
        ptrels.push_back(&arr.back());
    }
    // combine the two above loops
}
```
Assuming that `arr.empty()` is a precondition of this function (if it isn't, the semantics are broken anyway), this almost does the same thing.
Almost.
Trouble is, each `arr.push_back(temp)` may result in an expansion of `arr`, invalidating all of the pointers you've put in `ptrels` so far. The original solution used a second loop so that you know `arr` is done and dusted before you start messing around with pointers.
We could still use the one-loop solution, as long as we prevented reallocations by `reserve`ing space in the vector. To do that, you would need to know in advance how many elements you will need. To do that, you will need to consume `"index.txt"` twice, and then you're back to two loops.
So, basically, no you can't do this.
The real solution is to reject the requirement. Hopefully, that's what your teacher wants you to report. I can't think of any real-world use cases for an "indexing" vector in a simple case like this anyway.
Also, [watch your I/O methodology](https://stackoverflow.com/q/5605125/10077).
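The invalidation described above can be made visible by watching `data()` across `push_back` calls — a small sketch (the helper name is mine, not part of the answer):

```cpp
#include <vector>
#include <cstddef>

// Counts how many times the vector's buffer moves while n elements are
// pushed back. Every move invalidates all previously taken pointers.
std::size_t buffer_moves(std::size_t n, bool reserve_first) {
    std::vector<int> v;
    if (reserve_first) v.reserve(n);   // capacity fixed up front
    std::size_t moves = 0;
    const int* last = v.data();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.data() != last) { ++moves; last = v.data(); }
    }
    return moves;
}
```

With an up-front `reserve`, the buffer never moves during the pushes — which is exactly the precondition the one-loop version would need, and exactly what you can't know without reading the file twice.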
Upvotes: 2 <issue_comment>username_2: So with your posting of the whole code; the solution is absolutely to remove the need for the vector of pointers entirely.
You make life much harder in your sort function by trying to deal with 2 vectors the way you do; and your `printFile()` is no simpler working with pointers than it would be with a reference to the vector.
I can't see why you have this vector of pointers at all in the first place, other than that sometimes they're used to sort a group without changing the order of the original group. Since you do change the order, it makes it a vector of pain.
Get rid of it; make life simple.
Aside:
Your `sortFile()` can use std::sort and become not only faster but easier to read.
Upvotes: 1 <issue_comment>username_3: C++17 code (I left the `sort` function body empty so as to leave something to do for the homework, and I think that part was meant to be the Main Thing™):
```
// Source encoding: utf-8 with BOM ∩
#include <string>       // std::(string)
#include <iostream>     // std::(cout, cerr)
#include <optional>     // std::(optional)
#include <utility>      // std::(move)
#include <vector>       // std::(vector)
#include <fstream>      // std::(ifstream)
#include <sstream>      // std::(istringstream)
#include <stdexcept>    // std::(exception, runtime_error)
using namespace std;

auto hopefully( bool const e ) -> bool { return e; }
[[noreturn]] auto fail( string const& s ) -> bool { throw runtime_error( s ); }

auto optional_line_from( istream& stream )
    -> optional<string>
{
    string result;
    return getline( stream, result )? result : optional<string>{};
}

struct Student_info
{
    string first_name;
    string last_name;
    double gpa;
    string id;
    string email;
};

// Originally
// void fillArr(vector<student> &arr, vector<student*> &ptrels):
auto data_from( string const& filename )
    -> vector<Student_info>
{
    ifstream in_file{ "index.txt" };
    hopefully( not in_file.fail() )
        or fail( "Failed to open “index.txt” for reading." );

    vector<Student_info> result;
    while( optional<string> line = optional_line_from( in_file ) )
    {
        Student_info temp;
        istringstream items{ *line };
        items >> temp.first_name >> temp.last_name >> temp.gpa >> temp.id >> temp.email;
        hopefully( not items.fail() )
            or fail( "Failed to parse input line: “" + *line + "”." );
        result.push_back( move( temp ) );
    }
    return result;
}

auto operator<<( ostream& stream, Student_info const& item )
    -> ostream&
{
    stream
        << item.first_name << " "
        << item.last_name << " "
        << item.gpa << " "
        << item.id << " "
        << item.email;
    return stream;
}

// Originally
// void printFile(vector<student*> pointarray):
void print( vector<Student_info const*> const& pointers )
{
    for( auto const p : pointers ) { cout << *p << endl; }
}

// Originally
// void sortFile(vector<student> &arr, vector<student*> &pointarray):
void sort( vector<Student_info const*>& pointarray )
{
    // TODO:
    // Using std::sort is a good idea, if you're allowed to do that.
    // std::tuple instances can be lexicographically compared with `<`, and
    // `std::tie` produces a tuple. That's great for the current job.
}

// Originally
// int main():
void cpp_main()
{
    vector<Student_info> const data = data_from( "index.txt" );
    vector<Student_info const*> pointers;
    for( Student_info const& item : data ) { pointers.push_back( &item ); }

    for( ;; )
    {
        cout << "Please select\n"
            << "1 = Select All\n"
            << "2 = Order A-Z by Name\n"
            << "3 = To exit"
            << endl;
        try
        {
            switch( stoi( *optional_line_from( cin ) ) )
            {
                case 1:
                {
                    print( pointers );
                    break;
                }
                case 2:
                {
                    sort( pointers );
                    print( pointers );
                    break;
                }
                case 3:
                {
                    return;
                }
                default:
                {
                    fail( "" );
                }
            }
        }
        catch( ... )
        {
            // Avoid infinite loop on input failure:
            if( cin.fail() )
            {
                fail( "Failed reading standard input." );
            }
            cout << "Invalid Try Again:" << endl;
        }
    }
}

auto main()
    -> int
{
    try
    {
        cpp_main();
        return EXIT_SUCCESS;
    }
    catch( exception const& x )
    {
        cerr << "!" << x.what() << endl;
    }
    return EXIT_FAILURE;
}
```
Upvotes: 0
|
2018/03/16
| 2,257 | 6,843 |
<issue_start>username_0: Here is my current setup.
1. 2 radio button to select either year or month
2. 2 sliders, one for year and another one for month
3. Textbox which will display value of the slider
using onclick radio function, I am displaying only one slider depends upon the year\month selection.
when I change option between year and month, I would like to update the slider and textbox value accordingly. For example, if the year slider is on year 15 and when I change radio button to month, the slider should move to 180 (15\*12) and textbox should update as 180.
As of now, I am able to display\hide the slider depends on the radio selection and update the textbox with slider value. But this value is not getting converted between year and month.
How can I achieve this ?
Fiddle : <https://jsfiddle.net/anoopcr/vvemxcL3/>
Below is my current code:
HTML:
```
<NAME>
Yr
Mo
0
5
10
15
20
25
30
0
60
120
180
240
300
360
```
Jquery:
```
$( "#tentext" ).val( "20");
$("#tenslidery").slider({
orientation: "horizontal",
range: false,
min: 0,
max: 30 ,
value: 20,
step: .1,
animate: true,
range:'min',
slide: function( event, ui ) {
$( "#tentext" ).val( ui.value );
}
});
$("#tentext").on("keyup",function(e){
$("#tenslidery").slider("value",this.value);
});
$("#tensliderm").slider({
orientation: "horizontal",
range: false,
min: 0,
max: 360,
value: 240,
step: 1,
animate: true,
range:'min',
slide: function( event, ui ) {
$( "#tentext" ).val( ui.value );
}
});
$("#tentext").on("keyup",function(e){
$("#tensliderm").slider("value",this.value);
});
function yesnoCheck() {
if (document.getElementById('switch_left').checked) {
document.getElementById('MarkWrap1').style.display = 'flex';
document.getElementById('MarkWrap2').style.display = 'none';
}
else if (document.getElementById('switch_right').checked) {
document.getElementById('MarkWrap2').style.display = 'flex';
document.getElementById('MarkWrap1').style.display = 'none';
}
}
```
CSS:
```
.tenslidery {
height:8px;
flex-basis:100%;
margin:0 calc((100% / 7) / 2);
}
.T {
font-size: 11px;
font-family:verdana;
margin-top:15px;
flex:1;
text-align:center;
position:relative;
}
.T:before {
content:"";
position:absolute;
height:15px;
bottom:100%;
width:1px;
left:calc(50% - 1px);
background:#c5c5c5;
}
.MarkWrap1 {
width:83%; /*Adjust this to adjust the width*/
margin: auto;
display:flex;
flex-wrap:wrap;
}
.Tm {
font-size: 11px;
font-family:verdana;
margin-top:15px;
flex:1;
text-align:center;
position:relative;
}
.Tm:before {
content:"";
position:absolute;
height:15px;
bottom:100%;
width:1px;
left:calc(50% - 1px);
background:#c5c5c5;
}
.MarkWrap2 {
width:83%; /*Adjust this to adjust the width*/
margin: auto;
display:none;
flex-wrap:wrap;
}
.tensliderm {
height:8px;
flex-basis:100%;
margin:0 calc((100% / 7) / 2);
}
.switch-field {
font-family: "Lucida Grande", Tahoma, Verdana, sans-serif;
overflow: hidden;
width:auto;
}
.switch-field input {
position: absolute !important;
clip: rect(0, 0, 0, 0);
height: 1px;
width: 1px;
border: 0;
overflow: hidden;
}
.switch-field label {
float: left;
}
label[for=switch_right]
{
border:1px solid #ccc;
border-radius:4px;
border-top-left-radius:0px;
border-bottom-left-radius:0px;
}
label[for=switch_left]
{
border-top:1px solid #ccc;
border-bottom:1px solid #ccc;
}
.switch-field label {
display: inline-block;
width: 35px;
background-color: #eee;
color: black;
font-size: 18px;
font-weight: normal;
text-align: center;
text-shadow: none;
height:25.4px;
line-height:1.4;
padding:2px;
cursor: pointer;
}
.switch-field label switch-right ( background:red;)
.switch-field label:hover {
cursor: pointer;
}
.switch-field input:checked + label {
background-color: deeppink;
-webkit-box-shadow: none;
box-shadow: none;
}
```<issue_comment>username_1: Just as a tidy up and to add readability...
This will keep both sliders and the input in sync at all times, so if you need to get any current value elsewhere, it will be correct.
```
var yearVal;
var monthVal;
var yearSelected;
configure();
function updateVal(val) {
yearVal = yearSelected ? val : val / 12;
monthVal = yearSelected ? val * 12 : val;
}
function switchSliders() {
document.getElementById('MarkWrap1').style.display = yearSelected ? 'flex' : 'none';
document.getElementById('MarkWrap2').style.display = yearSelected ? 'none' : 'flex';
}
function updateSliders() {
$("#tenslidery").slider("value", yearVal);
$("#tensliderm").slider("value", monthVal);
}
function updateTenure() {
$("#tentext").val(yearSelected ? yearVal : monthVal);
}
function yesnoCheck() {
yearSelected = document.getElementById('switch_left').checked;
updateTenure();
switchSliders();
updateSliders();
}
function configure() {
$("#tenslidery").slider({
orientation: "horizontal",
range: false,
min: 0,
max: 30,
value: 0,
step: .1,
animate: true,
range: 'min',
slide: function (event, ui) {
updateVal(ui.value);
$("#tentext").val(ui.value);
}
});
$("#tensliderm").slider({
orientation: "horizontal",
range: false,
min: 0,
max: 360,
value: 0,
step: 1,
animate: true,
range: 'min',
slide: function (event, ui) {
updateVal(ui.value);
$("#tentext").val(ui.value);
}
});
$("#tentext").on("keyup", function (e) {
updateVal(Number(this.value));
updateSliders();
});
yearVal = 20;
monthVal = yearVal * 12;
yearSelected = true;
$("#tentext").val(yearVal);
updateSliders();
}
```
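The year/month arithmetic that both `slide` handlers rely on can be pulled out as a pure function and checked without jQuery or the DOM (a sketch; `toBoth` is a hypothetical helper, not part of the code above):

```javascript
// Same arithmetic as updateVal() above, but returning both units at once
function toBoth(value, yearSelected) {
  return yearSelected
    ? { years: value, months: value * 12 }
    : { years: value / 12, months: value };
}

console.log(toBoth(20, true));   // { years: 20, months: 240 }
console.log(toBoth(240, false)); // { years: 20, months: 240 }
```

If the two conversions ever disagree, the slider and the textbox will drift apart, so a tiny check like this is cheap insurance.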
Upvotes: 0 <issue_comment>username_2: I simply calculate the values explicitly and submit them to both the slider and the input box, which should solve your problem
```
function yesnoCheck() {
var markWrap1 = $('#MarkWrap1');
var markWrap2 = $('#MarkWrap2');
var text = $('#tentext');
var value;
if ($('#switch_left').is(':checked')) {
markWrap1.css('display', 'flex');
markWrap2.css('display', 'none');
value = +$('#tensliderm').slider("option", "value") / 12;
text.val(String(value));
$('#tenslidery').slider('value', value);
} else if ($('#switch_right').is(':checked')) {
markWrap2.css('display', 'flex');
markWrap1.css('display', 'none');
value = +$('#tenslidery').slider("option", "value") * 12;
text.val(String(value));
$('#tensliderm').slider('value', value);
}
}
```
see <https://jsfiddle.net/kevinkassimo/ayhenhL5/8/>
Upvotes: 2 [selected_answer]
|
2018/03/16
| 588 | 2,108 |
<issue_start>username_0: I have an Excel calendar in which certain cells have a shape on them. I wish to be able to see which cells have a shape and then be able to extract some data.
I've searched a bit and found that the best option was to use TopLeftCell.Row, but it seems there's an error in my code. I've copied some code and tried to adapt it; here it is:
```
Sub ActiveShapeMacro()
Dim ActiveShape As Shape
Dim UserSelection As Variant
'Pull-in what is selected on screen
Set UserSelection = ActiveWindow.Selection
'Determine if selection is a shape
On Error GoTo NoShapeSelected
Set ActiveShape = ActiveSheet.Shapes(UserSelection.Name)
On Error Resume Next
'Do Something with your Shape variable
Cells(Sheet1.Shapes(ActiveShape).TopLeftCell.Row, Sheet1.Shapes(ActiveShape).TopLeftCell.Column).Address
MsgBox (ActiveShape.Address)
Exit Sub
'Error Handler
NoShapeSelected:
MsgBox "You do not have a shape selected!"
End Sub
```
Thank you for your help! :)<issue_comment>username_1: the error is in:
```
Sheet1.Shapes(ActiveShape)
```
where `Shapes` is waiting for a string (the shape name) while you're providing an `Object` (the shape itself)
so use:
```
'Do Something with your Shape variable
MsgBox Cells(ActiveShape.TopLeftCell.Row, ActiveShape.TopLeftCell.Column).Address
```
that can be simplified to:
```
MsgBox ActiveShape.TopLeftCell.Address
```
Moreover change:
```
On Error Resume Next
```
to:
```
On Error GoTo 0
```
and keep watching what's happening in there...
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is an easy way to determine if a range or Shape has been selected and if it is a Shape, where it is:
```
Sub skjdkffdg()
Dim s As Shape, typ As String
typ = TypeName(Selection)
If typ = "Range" Then
MsgBox " you have a range selected: " & Selection.Address
Else
Set s = ActiveSheet.Shapes(Selection.Name)
MsgBox "you have a Shape selected: " & s.TopLeftCell.Address
End If
End Sub
```
*This assumes that the only things on the worksheet are Shapes and Ranges.*
Upvotes: 0
|
2018/03/16
| 2,109 | 7,081 |
<issue_start>username_0: I was trying out the Time class in Java and the following outputs December even though the system time shows March:
```
Calendar c = Calendar.getInstance();
SimpleDateFormat MonthName = new SimpleDateFormat("MMMM");
System.out.println(MonthName.format(c.get(Calendar.MONTH)));
```
But using this returns March:
```
System.out.println(MonthName.format(c.getTime()));
```
I am aware that the counting of months in Java begins from 0 and not 1, so displaying February would be appropriate, but why March?<issue_comment>username_1: The SimpleDateFormat expects a date, not a month number
```
Calendar c = Calendar.getInstance();
Date d = c.getTime();
SimpleDateFormat MonthName = new SimpleDateFormat("MMMM");
System.out.println(MonthName.format(d));
```
Upvotes: 0 <issue_comment>username_2: Because `c.get(Calendar.MONTH)` returns a number, and calling `format(number)` is the same as calling `format(new Date(number))` (check [here](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/text/DateFormat.java#294)).
In this case, `c.get(Calendar.MONTH)` returns `2`, because - as you said - this API uses 0-based months, so March is 2.
When you call `format(2)`, it's equivalent to calling `format(new Date(2))`, which means a date that corresponds to "2 milliseconds after [unix epoch](https://en.wikipedia.org/wiki/Unix_time)", which is `1970-01-01T00:00:00.002` (basically, 2 milliseconds after January 1st 1970 at midnight **in UTC**).
Then, this date (Jan 1st 1970 **UTC**) will be formatted by your `SimpleDateFormat`, which uses the JVM default timezone. So, when that date (that corresponds to Jan 1st in UTC) is converted to your JVM default timezone, gives you "December". Just print the value of `new Date(2)` and see what you get (spoiler: it'll be a date in December 31st 1969).
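This is easy to reproduce with a few lines (a sketch; the zone is pinned to one west of UTC so the output is deterministic, while on a zone at or east of UTC `new Date(2)` would still fall on January 1st 1970):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class EpochMonthDemo {
    public static void main(String[] args) {
        // Pin a zone west of UTC so the output is reproducible
        TimeZone.setDefault(TimeZone.getTimeZone("America/New_York"));

        Date d = new Date(2); // 2 ms after the epoch: 1970-01-01T00:00:00.002 UTC
        System.out.println(new SimpleDateFormat("yyyy-MM-dd HH:mm", Locale.ENGLISH).format(d));
        // 1969-12-31 19:00  (UTC-5 at that instant)
        System.out.println(new SimpleDateFormat("MMMM", Locale.ENGLISH).format(d));
        // December
    }
}
```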
---
Your second attempt works because `c.getTime()` returns a `java.util.Date`, which in this case will correspond to March.
Upvotes: 2 <issue_comment>username_3: tl;dr
=====
```
LocalDate.now() // Better to specify time zone explicitly: LocalDate.now( ZoneId.of( "Pacific/Auckland" ) )
.getMonth() // Get `Month` enum object appropriate to that date in that zone.
.getDisplayName( // Generate a `String`, the localized name of the month.
TextStyle.FULL , // Control how long or how abbreviated the text.
Locale.CANADA_FRENCH // Specify the human language and cultural norms to be applied in localizing.
)
```
java.time
=========
The [Answer by username_2](https://stackoverflow.com/a/49325322/642706) is correct.
The modern approach uses the *java.time* classes that supplanted the troublesome old legacy date-time classes (`Date`, `Calendar`, `SimpleDateFormat`).
[`ZonedDateTime`](https://docs.oracle.com/javase/9/docs/api/java/time/ZonedDateTime.html) replaces `Calendar`, representing a moment on the timeline with the wall-clock time used by people of a certain region (a time zone), with a resolution of nanoseconds.
```
ZonedDateTime zdt = ZonedDateTime.now() ; // Would be better to pass explicitly the desired/expected time zone rather than implicitly rely on the JVM’s current default.
```
Retrieve a [`Month`](https://docs.oracle.com/javase/9/docs/api/java/time/Month.html) enum object for the month of this `ZonedDateTime` object’s date.
```
Month month = zdt.getMonth() ;
```
Generate a string for the name of the month, automatically localized. To localize, specify:
* [`TextStyle`](https://docs.oracle.com/javase/9/docs/api/java/time/format/TextStyle.html) to determine how long or abbreviated should the string be.
* [`Locale`](http://docs.oracle.com/javase/9/docs/api/java/util/Locale.html) to determine (a) the human language for translation of name of day, name of month, and such, and (b) the cultural norms deciding issues of abbreviation, capitalization, punctuation, separators, and such.
Ask for localized name of month.
```
Locale locale = Locale.US ; // Or Locale.CANADA_FRENCH etc.
String monthName = month.getDisplayName( TextStyle.FULL , locale ) ;
```
---
About *java.time*
=================
The [*java.time*](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) framework is built into Java 8 and later. These classes supplant the troublesome old [legacy](https://en.wikipedia.org/wiki/Legacy_system) date-time classes such as [`java.util.Date`](https://docs.oracle.com/javase/9/docs/api/java/util/Date.html), [`Calendar`](https://docs.oracle.com/javase/9/docs/api/java/util/Calendar.html), & [`SimpleDateFormat`](http://docs.oracle.com/javase/9/docs/api/java/text/SimpleDateFormat.html).
The [*Joda-Time*](http://www.joda.org/joda-time/) project, now in [maintenance mode](https://en.wikipedia.org/wiki/Maintenance_mode), advises migration to the [java.time](http://docs.oracle.com/javase/9/docs/api/java/time/package-summary.html) classes.
To learn more, see the [*Oracle Tutorial*](http://docs.oracle.com/javase/tutorial/datetime/TOC.html). And search Stack Overflow for many examples and explanations. Specification is [JSR 310](https://jcp.org/en/jsr/detail?id=310).
You may exchange *java.time* objects directly with your database. Use a [JDBC driver](https://en.wikipedia.org/wiki/JDBC_driver) compliant with [JDBC 4.2](http://openjdk.java.net/jeps/170) or later. No need for strings, no need for `java.sql.*` classes.
Where to obtain the java.time classes?
* [**Java SE 8**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_8), [**Java SE 9**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_9), and later
+ Built-in.
+ Part of the standard Java API with a bundled implementation.
+ Java 9 adds some minor features and fixes.
* [**Java SE 6**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_6) and [**Java SE 7**](https://en.wikipedia.org/wiki/Java_version_history#Java_SE_7)
+ Much of the java.time functionality is back-ported to Java 6 & 7 in [***ThreeTen-Backport***](http://www.threeten.org/threetenbp/).
* [**Android**](https://en.wikipedia.org/wiki/Android_(operating_system))
+ Later versions of Android bundle implementations of the java.time classes.
+ For earlier Android (<26), the [***ThreeTenABP***](https://github.com/JakeWharton/ThreeTenABP) project adapts *ThreeTen-Backport* (mentioned above). See [*How to use ThreeTenABP…*](http://stackoverflow.com/q/38922754/642706).
The [**ThreeTen-Extra**](http://www.threeten.org/threeten-extra/) project extends java.time with additional classes. This project is a proving ground for possible future additions to java.time. You may find some useful classes here such as [`Interval`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/Interval.html), [`YearWeek`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearWeek.html), [`YearQuarter`](http://www.threeten.org/threeten-extra/apidocs/org/threeten/extra/YearQuarter.html), and [more](http://www.threeten.org/threeten-extra/apidocs/index.html).
Upvotes: 0
|
2018/03/16
| 469 | 1,521 |
<issue_start>username_0: I create a function and need to paste " " around the string, the final desired code is `'table_df'` in the following code
```
if (exists('table_df') && is.data.frame(get('table_df'))&nrow(table_df)>0) {
tracking_sheet$var1[tracking_sheet$var2=="table_name"]<-'Completed'
} else {tracking_sheet$var1[tracking_sheet$var2=="table_name"]<-'Check'}
```
this is my function, but it doesn't work, mainly because of the quotes around the string part, `paste('", table_df, "',sep="")`. So my question is how to use paste or another function to achieve the final result `'table_df'`
```
check<-defmacro(tracking_sheet,table_df,table_name,
expr={if (exists(paste('", table_df, "',sep="")) && is.data.frame(get(paste('", table_df, "',sep="")))&nrow(table_df)>0) {
tracking_sheet$var1[tracking_sheet$var2==table_name]<-'Completed'
} else {tracking_sheet$var1[tracking_sheet$var2==table_name]<-'Check'}
})
check(tracking_sheet,app_df_pivot,"T_Applications")
```
the code above is trying to create a summary sheet to report which dataframes exist in the environment and whether the df contains data. I welcome all advice, and thank you!<issue_comment>username_1: I think you mean to use
```
paste('"', table_df, '"',sep="")
```
Without the closing single-quotes,
```
paste('", table_df, "',sep="")
```
evaluates to ", table\_df, "
Upvotes: 1 <issue_comment>username_2: You only have to escape a quote with "\" when it matches the delimiter of the string literal; a single quote inside a double-quoted string needs no escaping:
```
paste0("\"", "example", "\"")  # escaping needed: double quote inside double quotes
paste0("'", "example", "'")    # no escaping needed
```
Upvotes: 0
|
2018/03/16
| 1,770 | 4,618 |
<issue_start>username_0: I need to capture some values from dataframe A (colname : "price") and put them in dataframe B ("PRECIO\_LISTA"). Same with column "sale\_price" from dataframe A, put the value in column "PRECIO\_INDEXADO" in dataframe B.
**dataframe A**
```
df_a <- structure(list(TIENDA = c("CURACAO", "CURACAO", "CURACAO", "CURACAO",
"CURACAO", "CURACAO", "CURACAO", "CURACAO", "CURACAO"), SKU = c("4896 PG",
"4896 PG", "4896 PG", "4896 PG", "4896 PG", "4896 PG", "4896 PG",
"4896 PG", "4896 PG"), NOMBRE = c("Ventilador 3en1 Air Monster 4896 40cm",
"Ventilador 3en1 Air Monster 4896 40cm", "Ventilador 3en1 Air Monster 4896 40cm",
"Ventilador 3en1 Air Monster 4896 40cm", "Ventilador 3en1 Air Monster 4896 40cm",
"Ventilador 3en1 Air Monster 4896 40cm", "Ventilador 3en1 Air Monster 4896 40cm",
"Ventilador 3en1 Air Monster 4896 40cm", "Ventilador 3en1 Air Monster 4896 40cm"
), PRECIO_OFERTA = c(29, 29, 29, 29, 29, 29, 29, 29, 29), PRECIO_LISTA = c(80,
80, 80, 80, 80, 80, 80, 80, 80), PRECIO_INDEXADO = c(29, 29,
29, 29, 29, 29, 29, 29, 29)), .Names = c("TIENDA", "SKU", "NOMBRE",
"PRECIO_OFERTA", "PRECIO_LISTA", "PRECIO_INDEXADO"), row.names = c(NA,
-9L), class = c("tbl_df", "tbl", "data.frame"))
```
**data frame B**
```
df_b <- structure(list(id = "4896 PG", title = "Ventilador 3en1 Air Monster 4896 40cm",
description = "Tu mejor aliado contra los días de intenso calor... este ventilador 3 en 1 Air Monster.",
google_product_category = NA, link = "https://www.lacuracao.pe/curacao/ventilador-3en1-air-monster-4896-40cm--4896-pg",
image_link = "http://www.lacuracao.pe/wcsstore/efe_cat_as/646x1000/4896 PG_1.jpg",
additional_image_link = NA, availability = 1, price = 80,
sale_price = 49), .Names = c("id", "title", "description",
"google_product_category", "link", "image_link", "additional_image_link",
"availability", "price", "sale_price"), row.names = c(NA, -1L
), class = c("tbl_df", "tbl", "data.frame"))
```
**Desired ouput:**
Data Frame B (df\_b) with the values from data Frame A. The data provided is just for 1 product with SKU (id): "4896 PG".
Right now, df\_b has values: "price" = 80, "sale\_price" = 49. But it should be: "price": 80, "sale\_price": 29.
I tried this without success:
```
desired_result <- inner_join(df_b,
df_a,
by = c("id" ="SKU", "price" = "PRECIO_LISTA",
"sale_price" = "PRECIO_INDEXADO"))
```<issue_comment>username_1: If I understand correctly, you want to replace `price` and `sale_price` in `df_b` with `PRECIO_LISTA` and `PRECIO_INDEXADO` in `df_a`, joining only by `id = SKU`.
You can first discard the current `price` and `sale_price` from `df_b` using `select()` and then join with `df_a` keeping only the variables you are interested into. Moreover you can use `distinct()` to avoid duplicates.
Using the pipes:
```
library(dplyr)
desired_result =
df_b %>% select(-price,-sale_price) %>%
left_join(
df_a %>%
transmute(id = SKU, price = PRECIO_LISTA, sale_price = PRECIO_INDEXADO) %>%
distinct()
)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Borrowed from this [question](https://stackoverflow.com/a/44930332/786542), you can use `data.table` to do it
```r
library(data.table)
setDT(df_a) # convert to a data.table without copy
setDT(df_b)
# join and update "df" by reference, i.e. without copy
df_b[df_a, on = c("id" = "SKU", "price" = "PRECIO_LISTA"), sale_price := PRECIO_INDEXADO]
df_b
#> id title
#> 1: 4896 PG Ventilador 3en1 Air Monster 4896 40cm
#> description
#> 1: Tu mejor aliado contra los días de intenso calor... este ventilador 3 en 1 Air Monster.
#> google_product_category
#> 1: NA
#> link
#> 1: https://www.lacuracao.pe/curacao/ventilador-3en1-air-monster-4896-40cm--4896-pg
#> image_link
#> 1: http://www.lacuracao.pe/wcsstore/efe_cat_as/646x1000/4896 PG_1.jpg
#> additional_image_link availability price sale_price
#> 1: NA 1 80 29
```
Edit: to update both `price` and `sale_price`
```r
df_b[df_a, on = .(id = SKU), c("price", "sale_price") := list(PRECIO_LISTA, PRECIO_INDEXADO)]
df_b
```
Created on 2018-03-16 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 0
|
2018/03/16
| 477 | 1,734 |
<issue_start>username_0: I have a strange behavior with the setState callback, hopefully somebody can help. The callback just isn't fired.
Here is what I do:
```js
this.setState(
(prevState, props) => {
return { first: obj, questions: [] }
},
this.changeStateCb
);
```
For some reason the changeStateCb function is never being called. Same problem when I change it to:
```js
this.setState(
(prevState, props) => {
return { first: obj, questions: [] }
},
() => console.log(this.state)
);
```
I just updated from v15.x to 16.2.0<issue_comment>username_1: Do you have a demo? I just put up a simple [working example](https://codesandbox.io/s/4xx27l61q7), and it seems to run just fine under React `16.2.0`. Do note though that the optional callback should be used [sparingly](https://reactjs.org/docs/react-component.html#setstate). As the docs mention, it will be executed once `setState` is completed and the component is re-rendered. So, a better place to do the logic in `this.changeStateCb` would be inside `componentDidUpdate`.
Upvotes: 1 <issue_comment>username_2: I just had the same issue happen to me. A simple `console.log('test')` in the callback wouldn't even run. It turns out I had to delete my /dist folder in the build directory. I am using TypeScript and webpack and upgraded from a .NET Core template.
Upvotes: 1 <issue_comment>username_3: Have the same issue. The callback is not invoked. I traced it to React min code and the function to enqueueSetState does not accept the callback even though it is passed in from internal React code...so code explains why it is not called. But this used to work just fine in React 15.x - what's going on...I am trying React 16.8.4
Upvotes: 0
|
2018/03/16
| 468 | 1,848 |
<issue_start>username_0: I want to build a task scheduling service on the Google Cloud Platform. The tasks can be as simple as triggering a URL. Tasks can be recurring (once an hour, twice a day, every thursday, ...) and can be created and removed dynamically.
Which services/APIs on the Google Cloud Platform can I use for this?
I have looked into Google App Engine cron jobs but there seems to be no way to programmatically modify them. If possible I would like to avoid running a cron job every minute just to check if there is some task to run.
My framework of choice is ASP.NET Core but if there is a better solution available, e.g. in Java, I'm willing to try it out.
|
2018/03/16
| 246 | 939 |
<issue_start>username_0: I'm following the steps [here](https://software.sandia.gov/downloads/pub/pyomo/PyomoOnlineDocs.html) in order to use all my computing power (10-core Intel i9 CPU) and solve a one-instance abstract Pyomo model. However, it seems that the solver is just using one CPU core and it takes more than 2 days to return a solution for a 50-node input (with 10 nodes it just takes seconds.) any help about making Pyomo model run in all available CPU cores?
Thanks<issue_comment>username_1: Thanks to @<NAME> for pointing out GLPK's serial nature, after doing some experiments I migrated to Gurobi and now all my 10 CPU cores are being used by the Pyomo model.
Upvotes: 1 [selected_answer]<issue_comment>username_2: This may just be a syntax error on your end. Try using Arduino for C++. I9 processors do not support other ones like the Five.9 servers. Beware, overrides may occur to burn out your i9.
Upvotes: -1
|
2018/03/16
| 639 | 2,244 |
<issue_start>username_0: I'm trying to figure out if I should map or not in my reducer.
Here's the context.
I have a Redux state composed of...
```
friends [{}, {}]
pets [{}, {}]
users [{}, {}]
```
3 of them are array of objects based on GET responses (local API)
Here's an example of one object of the friends array.
```
{
id: '982347646324',
name: 'Peter',
lastName: 'Doe',
age: 40
}
```
So... Now let's say that I want to edit this entry.
My current reducer case would be...
```
...
case EDIT_FRIEND:
return state.map(el => {
if (el.id === action.payload.data.id) {
return {
...el,
name: action.payload.data.name,
lastName: action.payload.data.lastName,
age: action.payload.data.age
}
}
return el
});
...
default:
return state;
```
Since I'm new and in the learning process, I should ask:
Is this the right way to do it...?
It works, yes, but I want to learn and do it properly.
Should I do it in other way...?
Should I map or not...? Any help will be really helpful.
Thanks a lot<issue_comment>username_1: Things you should not do in a reducer:
1. make it impure
2. mutate the state
Since you are using the map method, it returns a new array every time and doesn't mutate your earlier array. So, it is totally fine to use map.
Upvotes: 0 <issue_comment>username_2: There are a few ways to do it, the map method which you use is one such and it is perfectly fine to use it. The other solution is using `spread syntax` and updating the desired object like
```
...
case EDIT_FRIEND: {
const index = state.findIndex(obj => obj.id === action.payload.data.id);
return [
...state.slice(0, index),
{
...state[index],
name: action.payload.data.name,
lastName: action.payload.data.lastName,
age: action.payload.data.age
},
...state.slice(index + 1)
]
}
...
default:
return state;
```
You can opt for whichever of the two solutions you find comfortable. What you must keep in mind is that you are not directly mutating the store state.
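Either variant can be verified in isolation with plain Node, no Redux required (a sketch with made-up data; here the payload's changed fields are spread over the old object, which is equivalent to the explicit field-by-field copy above when the payload carries those fields):

```javascript
const EDIT_FRIEND = "EDIT_FRIEND";

// Same shape as the reducer case above, reduced to a standalone function
function friendsReducer(state, action) {
  switch (action.type) {
    case EDIT_FRIEND:
      return state.map(el =>
        el.id === action.payload.data.id ? { ...el, ...action.payload.data } : el
      );
    default:
      return state;
  }
}

const before = [
  { id: "982347646324", name: "Peter", lastName: "Doe", age: 40 },
  { id: "111111111111", name: "Jane", lastName: "Roe", age: 30 }
];
const after = friendsReducer(before, {
  type: EDIT_FRIEND,
  payload: { data: { id: "982347646324", age: 41 } }
});

console.log(after[0].age);           // 41
console.log(before[0].age);          // 40 -> previous state was not mutated
console.log(after[1] === before[1]); // true -> untouched entries keep their identity
```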
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,007 | 3,076 |
<issue_start>username_0: I'd like to get the lowest price based on the maximum timestamp. I only have tableid to pull the data out of the table.
My query should be something like:
```
SELECT min(price) as price, timestamp, resellerid, tableid
FROM `sql_date_table`
WHERE tableid=%d
AND max(timestamp)
```
Any help will be appreciated. Thank you.
Edit: Example data
```
tableid | resellerid | price | timestamp
-----------------------------------------
4 1 1549900 1516405599
4 1 2097042 1518618827
4 1 2107168 1519739181
4 2 1649900 1515352455
4 2 1649900 1518618508
4 2 1649900 1519739180
4 3 1700000 1520962427
4 3 1649900 1519828070
6 2 299400 1519738727
6 3 188800 1520962413
8 1 249900 1518618488
8 2 249500 1518618509
```
**The idea is to get "latest crawled" lowest price**<issue_comment>username_1: I suspect that you might want:
```
select t.*
from t
where tableid = ?
order by timestamp desc, price asc
limit 1;
```
Upvotes: 1 <issue_comment>username_2: What do you mean? Will the max timestamp have multiple occurrences? Using GROUP BY with MIN() will resolve your issue if that is the problem.
Upvotes: 0 <issue_comment>username_3: Try this
```
SELECT * FROM `sql_date_table` WHERE `timestamp`
IN (SELECT min(`timestamp`) FROM `sql_date_table`
WHERE `tableid` = ? ORDER BY `price` DESC) LIMIT 1;
```
Upvotes: 1 <issue_comment>username_4: To find the lowest price among the resellers based on their latest price:
```
select resellerid, price, timestamp
from sql_date_table
where
tableid = ?
and
timestamp = (select max(timestamp)
from sql_date_table t2
where t2.tableid = sql_date_table.tableid
and t2.resellerid = sql_date_table.resellerid)
order by price limit 1;
```
OR
```
select resellerid, price, timestamp
from sql_date_table
where
tableid = 4
and
timestamp = (select timestamp
from sql_date_table t2
where t2.tableid = sql_date_table.tableid
and t2.resellerid = sql_date_table.resellerid
order by timestamp desc limit 1)
order by price limit 1
```
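The correlated-subquery approach is easy to check against the sample data from the question; a sketch with SQLite standing in for MySQL (the query text is unchanged):

```python
import sqlite3

rows = [
    (4, 1, 1549900, 1516405599), (4, 1, 2097042, 1518618827), (4, 1, 2107168, 1519739181),
    (4, 2, 1649900, 1515352455), (4, 2, 1649900, 1518618508), (4, 2, 1649900, 1519739180),
    (4, 3, 1700000, 1520962427), (4, 3, 1649900, 1519828070),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sql_date_table (tableid, resellerid, price, timestamp)")
con.executemany("INSERT INTO sql_date_table VALUES (?, ?, ?, ?)", rows)

cheapest = con.execute("""
    SELECT resellerid, price, timestamp
    FROM sql_date_table
    WHERE tableid = 4
      AND timestamp = (SELECT MAX(timestamp)
                       FROM sql_date_table t2
                       WHERE t2.tableid = sql_date_table.tableid
                         AND t2.resellerid = sql_date_table.resellerid)
    ORDER BY price
    LIMIT 1
""").fetchone()

print(cheapest)  # (2, 1649900, 1519739180) -> reseller 2's latest quote is the cheapest
```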
Upvotes: 0 <issue_comment>username_5: This returns the lowest price from the *latest price quotes per reseller*:
```
SELECT Min(price)
FROM tab AS t1
JOIN
(
SELECT resellerid, Max(timestamp) AS max_ts
FROM tab
GROUP BY resellerid
) AS t2
ON t1.timestamp = t2.max_ts
```
As your question is unclear you might want/need to add `tableid` to the GROUP BY.
If you want additional colums:
```
SELECT *
FROM tab AS t1
JOIN
(
SELECT resellerid, Max(timestamp) AS max_ts
FROM tab
GROUP BY resellerid
) AS t2
ON t1.timestamp = t2.max_ts
ORDER BY price
LIMIT 1
```
When there are multiple resellers with the same price this will return one of them.
Upvotes: 1
|
2018/03/16
| 757 | 2,551 |
<issue_start>username_0: I have the following:
```
```
is a grid component. I want to set the props on it differently based on the value of a variable. If it was a self enclosing tag I would simply store it in a variable like so and call that in the render method:
```
let someVar =
if(someCondition) {
someVar =
}
{someVar} // Called like so...
```
But it's not a self enclosing tag and doing this causes an error:
```
let someVar =
if(someCondition) {
someVar =
}
{someVar}
```
How to solve this issue?
|
2018/03/16
| 635 | 1,971 |
<issue_start>username_0: I am a complete noob here. I am trying to extract Marine Corp. (without quotes) from $data. I've searched and searched for how to deal with double quotes but am coming up short. Can someone please offer some guidance? Thank you.
```
$data = '"title":{"rendered":"Marine Corp."}';
preg_match('/title":{"rendered":"(.*)"}/U',$data,$matches);
echo $matches[0]; //=> target
```
<issue_comment>username_1: The regex you're after is `/\"title\"\:\{\"rendered\"\:\"(.*)\"\}/`
```
$data = '"title":{"rendered":"Marine Corp."}';
preg_match('/\"title\"\:\{\"rendered\"\:\"(.*)\"\}/',$data,$matches);
$tmp = $matches[1]; // index 0 is the full match; index 1 is the capture group
```
`$tmp` will hold "`Marine Corp.`"
On another note, it seems like you have a JSON object there. If that's the case you could do this, which would be a lot simpler:
```
$data = '{"title":{"rendered":"Marine Corp."}}';
$tmp = json_decode($data, true);
var_dump($tmp['title']['rendered']);
```
Upvotes: 0 <issue_comment>username_2: if you want to remove "Marine Corp." from the $data string, use the str\_replace() function:
```
echo str_replace('Marine Corp.', '', $data);
```
result: "title":{"rendered":""}
Upvotes: -1 <issue_comment>username_3: Try like this,
```
<?php
//If it is valid json then try with json_decode()
$json_string = '{"title":{"rendered":"Marine Corp."}}';
$json_array = json_decode($json_string,1);
echo "For Json Object:\n" . $json_array['title']['rendered']."\n";
//If it is just string pattern then with preg_match()
$re = '/"title":{"rendered":"(.*?)\.?"}/';
$str = '"title":{"rendered":"Marine Corp."}
"title":{"rendered":"Marine Police"}';
preg_match_all($re, $str, $matches, PREG_SET_ORDER, 0);
echo "\nFor String: \n";
foreach($matches as $match){
echo $match[1]."\n";
}
?>
```
**Output:**
```
For Json Object:
Marine Corp.
For String:
Marine Corp
Marine Police
```
See PHP demo : <https://eval.in/973297>
See REGEX demo : <https://regex101.com/r/GyGN9U/1>
Upvotes: 1 [selected_answer]
|
2018/03/16
| 453 | 1,566 |
<issue_start>username_0: Team,
I am looking for a date function that, when I use it in procedures, automatically gets the desired range from a parameter.
ex: query should show me results from today's date to 90 days in history.
like
```
select * from Table
where dataFunction(currDate, 90)
```
where currDate should consider today's date and pull all the rows for the past 90 days.
I am using SSMS v17.5<issue_comment>username_1: You seek the DATEADD() function.
Upvotes: 0 <issue_comment>username_2: Don't write functions for this. They don't play well with indexes, which cuts to the core of database performance. You need to use the built-in items in the WHERE clause... something more like this:
```
select *
from Table
where Table.DateColumn BETWEEN DATEADD(d, -90, current_timestamp) AND current_timestamp
```
If you really want to use just the date, instead of the full DateTime value, then cast as date:
```
select *
from Table
where Table.DateColumn BETWEEN DATEADD(d, -90, cast(current_timestamp as date)) AND cast(current_timestamp as date)
```
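The same rolling-window idea can be sanity-checked with a small script; here SQLite stands in for SQL Server, the cutoff is computed client-side and passed as a parameter, and the table, column, and dates are made up:

```python
import sqlite3
from datetime import date, timedelta

today = date(2018, 3, 16)  # fixed "today" so the example is reproducible
rows = [(str(today - timedelta(days=n)),) for n in (0, 30, 89, 90, 91, 200)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (d TEXT)")
con.executemany("INSERT INTO t VALUES (?)", rows)

cutoff = str(today - timedelta(days=90))  # equivalent of DATEADD(d, -90, ...)
kept = con.execute("SELECT d FROM t WHERE d >= ? ORDER BY d", (cutoff,)).fetchall()

print(len(kept))  # 4 -> the rows 0, 30, 89 and 90 days back fall inside the window
```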
Upvotes: 2 [selected_answer]<issue_comment>username_3: The [`DATEADD()`](https://learn.microsoft.com/en-us/sql/t-sql/functions/dateadd-transact-sql) function gives you what you need.
Replace `<date_column>` with the field you're comparing:
```
SELECT *
FROM Table
WHERE <date_column> >= DATEADD(DAY, -90, CAST(GETDATE() AS DATE));
```
Upvotes: 0 <issue_comment>username_4: This is the query you are searching for:
```
Select *
from Table t
where t.date >= dateadd(day, -90, getdate())
```
Upvotes: 0
|
2018/03/16
| 510 | 1,142 |
<issue_start>username_0: If I have a lot of time series data, is it possible to remove any rows in the dataframe if they're not on a 0, 15, 30, or 45 minute timestamp?
For example I can make up some data on 5 minute intervals...
```
import pandas as pd
import numpy as np
time = pd.date_range('6/28/2013', periods=2000, freq='5min')
data = pd.Series(np.random.randint(100, size=2000), index=time)
print(data)
```
But how would I filter to remove any row that doesn't fall on a 0, 15, 30, or 45 minute time interval? Basically the end result would need to be 15-minute data only...<issue_comment>username_1: You can use `isin`
```
data[data.index.minute.isin([0,15,30,45])]
```
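A quick end-to-end check of the `isin` filter on a smaller version of the question's synthetic data:

```python
import numpy as np
import pandas as pd

time = pd.date_range("6/28/2013", periods=12, freq="5min")  # one hour of 5-min stamps
data = pd.Series(np.random.randint(100, size=12), index=time)

filtered = data[data.index.minute.isin([0, 15, 30, 45])]

print(filtered.index.minute.tolist())  # [0, 15, 30, 45]
```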
Upvotes: 3 [selected_answer]<issue_comment>username_2: Since the minutes are multiples of 15, like `[0,15,30,45]`, you can also use
```
data[(data.index.minute % 15) == 0]
```
Upvotes: 1 <issue_comment>username_3: Use `pandas.Series.asfreq`
```
data.asfreq('15T')
2013-06-28 00:00:00 21
2013-06-28 00:15:00 20
2013-06-28 00:30:00 92
2013-06-28 00:45:00 70
2013-06-28 01:00:00 86
2013-06-28 01:15:00 82
2013-06-28 01:30:00 13
...
```
Upvotes: 1
|
2018/03/16
| 532 | 1,335 |
<issue_start>username_0: I can't display the data; it says **undefined variable: avg**. This is my script:
my controller :
```
public function rate_config() {
$rt = $_POST['hasil_rating'];
$content["hasil_rating"] = $rt;
$this->db->insert("tb_rating", $content);
redirect("rate/index?status=tambah_success","refresh");
$data['avg'] = $this->db->select_avg("hasil_rating")
->get("tb_rating");
$this->load->view('rating', $data);
}
```
and the view :
```
RATE
=$avg['hasil\_rating']?
---
```
the **$avg['hasil\_rating']** is I used to display the data from **$data['avg']**
should I use model? I don't use model because its gonna save a lot of time. thank you before.
|
2018/03/16
| 393 | 1,804 |
<issue_start>username_0: ```
**Student Table**
---------------------------------
id name address placeofbirth
---------------------------------
1 Kim 1 2
2 ahmed 3 4
3 john 1 3
**City Table**
---------------------------------
id name
---------------------------------
1 New York
2 Boston
3 Denver
4 Washington
**Result**
-----------------------------------------
Student address Placeofbirth
-----------------------------------------
Kim New York Boston
ahmed Denver Washington
John New York Denver
```
I don't have a bridging table to connect them. Please help me with how I can achieve the desired result; I am using MySQL and C#.<issue_comment>username_1: Both `placeofbirth` and `address` match a city id, so each column is its own relation to the City table.
So you can simply join the City table twice, once for each column:
```
select s.student, a.name address, b.name Placeofbirth
from Student s
inner join City a on a.id = s.address
inner join City b on b.id = s.Placeofbirth
```
Two foreign keys, two joins.
Upvotes: 2 <issue_comment>username_2: ```
SELECT s.name as Student, a.name AS address, p.name AS Placeofbirth
FROM Student s
INNER JOIN City a ON s.address = a.id
INNER JOIN City p ON s.placeofbirth = p.id;
```
Read more on JOINs [here](https://dev.mysql.com/doc/refman/5.7/en/join.html).
Upvotes: 2 [selected_answer]
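Both answers can be verified end-to-end against the sample tables. Here is a Python sqlite3 sketch (an editorial addition, not from the thread; the SQL is the same since `INNER JOIN` and `JOIN` are equivalent here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Student (id INTEGER, name TEXT, address INTEGER, placeofbirth INTEGER);
CREATE TABLE City    (id INTEGER, name TEXT);
INSERT INTO Student VALUES (1,'Kim',1,2),(2,'ahmed',3,4),(3,'john',1,3);
INSERT INTO City    VALUES (1,'New York'),(2,'Boston'),(3,'Denver'),(4,'Washington');
""")

# City is joined twice: alias a resolves address, alias p resolves placeofbirth
rows = conn.execute("""
    SELECT s.name, a.name, p.name
    FROM Student s
    JOIN City a ON s.address = a.id
    JOIN City p ON s.placeofbirth = p.id
    ORDER BY s.id
""").fetchall()
for r in rows:
    print(r)
```

The output matches the desired result table: `('Kim', 'New York', 'Boston')`, `('ahmed', 'Denver', 'Washington')`, `('john', 'New York', 'Denver')`.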
|
2018/03/16
| 578 | 2,023 |
<issue_start>username_0: I have created a zip code lookup for users. When they enter their zip code, if their zip code is in the list, then it will report back that they are in one of our service areas. If their zip code is not in the list, then it will report back that service is not available in their area.
The example code I am using works well, but my challenge is that I am only using 3 zip codes but I need my list to include 100s of zip codes. Ideally I am hoping to refer to an external file that has a list of these zip codes but if not I can list them all in the script. I am open to suggestions. I am currently using switch. What is the best way to do this?
Thank you in advance.
Here is my code:
```html
function myFunction() {
var zipcode = document.getElementById("myZipCode").value;
var message;
switch(zipcode) {
case "85142":
case "99999":
case "88888":
message = "Service is available for your location.";
break;
default:
message = "Sorry your location is not available for service.<br />Click here to be notified once your location becomes available.";
}
document.getElementById("response").innerHTML = message;
}
Enter your 5-digit Zip Code:
Lookup
```
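The maintainability question here (hundreds of codes, ideally loaded from an external file) is really a set-membership lookup. A hedged, language-agnostic sketch of that shape in Python follows (the same structure applies in the page's JavaScript with a fetched JSON file and a `Set`; the file name and contents are illustrative assumptions):

```python
import json
import io

# Stand-in for reading an external file such as zipcodes.json
file_contents = '["85142", "99999", "88888"]'
service_zips = set(json.load(io.StringIO(file_contents)))

def lookup(zipcode):
    # O(1) membership test instead of a switch with hundreds of cases
    if zipcode in service_zips:
        return "Service is available for your location."
    return "Sorry your location is not available for service."

print(lookup("85142"))
print(lookup("12345"))
```

Keeping the codes in one external list also means updates never require touching the lookup logic.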
My code is also available at: <https://jsfiddle.net/aplanet/7ttb19h5/11/>
|
2018/03/16
| 4,971 | 15,093 |
<issue_start>username_0: I was trying to add a Swift file to an existing Obj-C project, and this project has some frameworks linked to it. When I add a Swift file, it compiles just fine, but at runtime it complains that some Swift standard classes are implemented in both a linked framework and the current project.
These duplicate class definitions are not originally from the linked frameworks but from swift standard libraries for example **libswiftFoundation.dylib**. Anyone know how to fix this?.
I'm using xcode8.3 running on sierra.
```
objc[74652]: Class _TtC8Dispatch16DispatchWorkItem is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftDispatch.dylib (0x101328e58) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100415608). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation20_SwiftNSCharacterSet is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x101456ab0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413708). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation12_DataStorage is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x10145a048) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100419258). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation12_NSSwiftData is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x101456b50) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004137a8). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation25_NSErrorRecoveryAttempter is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x10145a680) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100419888). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation25NSFastEnumerationIterator is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x10145a720) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100419928). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation15NSSimpleCString is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x10145a7d8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004199e0). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtC10Foundation16NSConstantString is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x10145a868) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100419a70). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs18_EmptyArrayStorage is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100e990) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413870). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs20_RawNativeSetStorage is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ea40) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413920). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs17_CocoaSetIterator is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x101023a98) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x10042ed90). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs27_RawNativeDictionaryStorage is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100eb08) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004139e8). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs24_CocoaDictionaryIterator is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x101024a98) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x10042fd90). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs17NonObjectiveCBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x101026538) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100431830). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs17_stdlib_AtomicInt is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x1010289a8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100433ca0). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs19_SwiftNativeNSArray is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ebe0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413ac0). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs24_SwiftNativeNSDictionary is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ec48) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413b28). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs17_SwiftNativeNSSet is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ecb0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413b90). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs24_SwiftNativeNSEnumerator is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ed18) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413bf8). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs18_SwiftNativeNSData is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ed80) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413c60). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs26_SwiftNativeNSCharacterSet is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ede8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413cc8). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs31_stdlib_ReturnAutoreleasedDummy is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ee50) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413d30). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs20_SwiftNativeNSString is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100eec8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413da8). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs19_NSContiguousString is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100ef30) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413e10). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs40_SwiftNativeNSArrayWithContiguousStorage is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100efb0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413e90). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs21_SwiftDeferredNSArray is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f020) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413f00). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs27_ContiguousArrayStorageBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f0b0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100413f90). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs14_VaListBuilder is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10102dec8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004391c0). One of the two will be used. Which one is undefined.
objc[74652]: Class _TtCs13VaListBuilder is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10102dfb8) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004392b0). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftTypePreservingNSNumber is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftFoundation.dylib (0x101456c08) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414028). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSError is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f148) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414078). One of the two will be used. Which one is undefined.
objc[74652]: Class SwiftObject is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f198) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004140c8). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftValue is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f210) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414140). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNull is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f260) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414190). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSArrayBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f2b0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004141e0). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSDictionaryBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f300) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414230). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSSetBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f350) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414280). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSStringBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f3a0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004142d0). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSEnumeratorBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f3f0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414320). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSDataBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f440) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414370). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSCharacterSetBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f490) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x1004143c0). One of the two will be used. Which one is undefined.
objc[74652]: Class _SwiftNativeNSIndexSetBase is implemented in both /Users/username/Dev/Source/build/UninstalledProducts/macosx/BKMySQL.framework/Versions/A/Frameworks/libswiftCore.dylib (0x10100f4e0) and /Users/username/Dev/Source/build/Debug/BKServerFramework (0x100414410). One of the two will be used. Which one is undefined.
```
`Note: BKServerFramework is a command line tool, ignore the name it's not a framework.`<issue_comment>username_1: It looks like `BKServerFramework` was statically linked with `libswiftCore.a`. It shouldn't have been.
Upvotes: 1 <issue_comment>username_2: Add `SWIFT_FORCE_DYNAMIC_LINK_STDLIB: YES` and `SWIFT_FORCE_STATIC_LINK_STDLIB: NO` as `User-Defined Settings` in your commandline tool project's settings to prevent `BKServerFramework` from statically linking to Swift\* libraries
Upvotes: 0
|
2018/03/16
| 389 | 1,307 |
<issue_start>username_0: I came across a strange behavior in my hive queries. While inserting values based on an input parameter I set an attribute to NULL. When I execute count
```
select count(id) from hive_table where val1 is NULL;
```
I get 0 as the result, but I know there are around ~7.5K records which have been set to NULL. Whereas the following query returns an accurate result:
```
select count(id) from hive_table where trim(val1) is NULL;
```
I am using Apache Hive version 1.2 and Hortonworks Hadoop 2.7. Any ideas why this is happening.<issue_comment>username_1: `count` ignores `null` values which is why you get `0` as output. You should be counting rows instead.
```
select count(*) from hive_table where val1 is NULL;
```
Edit: I suppose you interpret empty strings as `null`s. Try this to check.
```
select sum(cast(trim(val1)='' as int)) as empty_str_count
,sum(cast(val1 is null as int)) as null_count
from hive_table
```
Upvotes: 2 <issue_comment>username_2: I have no idea why the second works. But if you want to count `NULL` values, use:
```
select count(*)
from hive_table
where val1 is NULL;
```
Upvotes: 0 <issue_comment>username_3: You can try another way to get the NULL record count from Hive:
```
select count(id) from hive_table where length(val1)=0;
```
Upvotes: 1
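The empty-string-versus-NULL distinction suggested in the accepted answer's edit is easy to reproduce. Below is a Python sqlite3 sketch (an editorial addition; sqlite stands in for Hive here, and `IS NULL` / `TRIM` behave the same way for this case):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hive_table (id INTEGER, val1 TEXT)")
conn.executemany("INSERT INTO hive_table VALUES (?, ?)",
                 [(1, "x"), (2, None), (3, ""), (4, "  ")])

null_count = conn.execute(
    "SELECT COUNT(*) FROM hive_table WHERE val1 IS NULL").fetchone()[0]
empty_count = conn.execute(
    "SELECT COUNT(*) FROM hive_table WHERE TRIM(val1) = ''").fetchone()[0]

# '' and '  ' are empty strings, not NULL, so IS NULL does not match them
print(null_count, empty_count)  # → 1 2
```

If a Hive table was loaded from text files, "missing" values often arrive as empty strings rather than real NULLs, which produces exactly the mismatch described in the question.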
|
2018/03/16
| 334 | 1,154 |
<issue_start>username_0: Coming from Swift now want to get into Kotlin UI Apps so I am trying out TornadoFX in IntelliJ atm.
When creating an hbox in a View Class, invoking severeal elemants like buttons, textfields, etc., how can I change properties like their names, alignment or other?
Unfortunately tornadoFX/Kotlin is not as well documented as Swift.
Thank you.<issue_comment>username_1: Found the solution, for everyone whos looking..:
<https://github.com/edvin/tornadofx-guide/blob/master/part1/4.%20Basic%20Controls.md>
Upvotes: 1 <issue_comment>username_2: I would also try <https://github.com/edvin/tornadofx-guide> It is being updated more actively.
<https://github.com/edvin/tornadofx-guide/blob/master/part1/4.%20Basic%20Controls.md> should be what you are looking for.
Upvotes: 2 <issue_comment>username_3: This is an old post but I thought I'd drop the link for what Edvin intends to be the definitive guide for TornadoFX:
<https://edvin.gitbooks.io/tornadofx-guide/>
The other answers are still valid but this is his intended official guide so this might last longer in case he eventually he deletes his github version.
Upvotes: 0
|
2018/03/16
| 1,103 | 4,708 |
<issue_start>username_0: My question is mostly about the code safety case in C#.
Here is the code:
```
void foo() {
theFirst();
//some code which needs to call theFirst() before executing goes here
definitelyShouldBeCalledAfterTheFirst();
}
```
Now I want to ensure that in every scope (basically any method) where the function `theFirst()` is called the function `definitelyShouldBeCalledAfterTheFirst()` is called afterwards somewhere in the same scope. Between `theFirst()` and `definitelyShouldBeCalledAfterTheFirst()` calls I need to call other functions most of the time and I can not call the functions after `definitelyShouldBeCalledAfterTheFirst()` or before `theFirst()`.<issue_comment>username_1: Maybe something like this with delegates can help:
```
void ExecuteBetweenFirstAndSecond(Action action)
{
bool success = false;
try
{
First();
success = true;
action?.Invoke();
}
finally
{
//Uncomment if you want to execute Second only if First wast executed successfully
//if (success)
Second();
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: There are various ways to achieve this, depending on context.
@username_1 gives a [great option using delegates](https://stackoverflow.com/a/49325556/361842).
One alternative is where you have a shared base class (though this option is only applicable to a small subset of scenarios).
```
abstract class MyBaseClass
{
void doFirst() =>
Debug.WriteLine("Done First");
void definitelyShouldBeCalledAfterTheFirst() =>
Debug.WriteLine("Done After First");
protected virtual void doSomething() =>
Debug.WriteLine("between first and after first");
public void DoStuff()
{
doFirst();
doSomething();
definitelyShouldBeCalledAfterTheFirst();
}
}
class MyClass: MyBaseClass {}
class MyOtherClass: MyBaseClass
{
protected override void doSomething() =>
Debug.WriteLine("something else happens now");
}
void Main()
{
var a = new MyClass();
a.DoStuff();
/* outputs:
Done First
between first and after first
Done After First
*/
var b = new MyOtherClass();
b.DoStuff();
/* outputs:
Done First
something else happens now
Done After First
*/
}
```
Upvotes: 1 <issue_comment>username_3: This is what try-finally is for. Very simple. If the call to theFirst succeeds, then the call to definitelyShouldBeCalledAfterTheFirst is guaranteed to be made even if "some code" fails.
```
void foo()
{
theFirst();
try
{
// some code
}
finally
{
definitelyShouldBeCalledAfterTheFirst();
}
}
```
Upvotes: 0 <issue_comment>username_4: If I understood your question correctly, you have the following requirements:
1. Have a function that is always executed first,
2. Have a function that is always executed last,
3. Some other functions that can be executed in any order but not first or last.
Which function always executes, and executes first? A constructor...
So if you create a class and put a call to your `First()` function in constructor the requirement #1 is fulfilled.
Ensuring that the execution order meets requirement #3 is easy: no code will be executed before the constructor. And if you set a flag once `Last()` has executed, you can check that flag at the beginning of each function and throw / return if it is set.
The hardest part is ensuring execution of the `Last()` function. The closest guarantee is to implement IDisposable, but you cannot guarantee that your class will be used within `using`.
So we end up with something along those lines:
```
public sealed class Foo: IDisposable
{
private bool isLastCalled = false;
public Foo()
{
First();
}
public void First()
{
if (isLastCalled) return;
// do something...
}
public void Other()
{
if (isLastCalled) return;
// do something...
}
public void Last()
{
if (isLastCalled) return;
isLastCalled = true;
}
public void Dispose()
{
Last();
}
~Foo()
{
Last();
}
}
```
Additional notes:
* `First()` will be called automatically from constructor but you can call it again as any other before `Last()` is called. if this is not your intention - make the function `private` and remove check for the flag
* You can call `Last()` explicitly or it will be called automatically when the instance is out of the scope. If you do not need to call it explicitly make it `private` too
Upvotes: 1
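The try/finally wrapper in the answers above is the same guarantee that some languages package up as a scoped construct. Purely for comparison (an editorial addition, not C#), here is a Python context manager giving the identical First / caller's-code / Last ordering:

```python
from contextlib import contextmanager

calls = []

def the_first():
    calls.append("first")

def definitely_after():
    calls.append("after")

@contextmanager
def first_then_after():
    the_first()
    try:
        yield                  # the caller's code runs here, in any order it likes
    finally:
        definitely_after()     # guaranteed to run even if the body raises

with first_then_after():
    calls.append("work")

print(calls)  # → ['first', 'work', 'after']
```

This is the same shape as the `ExecuteBetweenFirstAndSecond(Action action)` answer: the ordering is enforced in one place, and the caller only supplies the middle part.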
|
2018/03/16
| 1,455 | 4,999 |
<issue_start>username_0: I want to point to something like `add(2, 1)` and then call it without needing to specify parameters. Is there some way to do this with function pointers?<issue_comment>username_1: ```
int add(int a, int b) { return a + b; }
```
No, you cannot have a pointer to `add` with parameter values pre-assigned for it. However you can do this:
```
int add_2_1() { return add(2, 1); }
```
however, I don't see how helpful this is...
Upvotes: 3 <issue_comment>username_2: While the short answer is "no", there are ways to accomplish what you want to do.
The first thing to remember is that `add` needs two parameters. No matter what you do, you have to provide it the two parameters.
The second thing to remember is that if you want to call a function that ultimately calls `add(2, 1)`, you have to have stored or hard coded the values `2` and `1` somewhere so that they can be used.
There are couple of ways to do that that I can think of.
1. Create a wrapper function that calls `add(2, 1)` and let your code call the wrapper function.
```
int add2And1()
{
return add(2, 1);
}
```
and then use `add2And1` in your call.
2. Create a wrapper function that relies on global data. Set the global data first before using the wrapper function.
```
int param1 = 0;
int param2 = 0;
int addParams()
{
return add(param1, param2);
}
```
and then use:
```
param1 = 2;
param2 = 1;
```
in your code before calling `addParams`.
Upvotes: 3 <issue_comment>username_3: A GCC extension will let you do this:
```
int func(int addend, int (*next)(int (*f2)(int)))
{
int add(int addend2) { return addend + addend2; }
return next(add);
}
```
However if you try to do:
```
int (*func(int addend))(int)
{
int add(int addend2) { return addend + addend2; }
return add;
}
```
the function cannot be used because `func(3)(3)` executes a trampoline\* on the stack that has already been freed. This is pretty much the most undefined behavior there is.
\*A trampoline is a small piece of code that immediately jumps to another piece of code. In this case the trampoline looks something like
```
mov rax, 0xfuncstackframeaddress
mov r10, rax
lea rax, [func.add]
jmp rax
```
Of course there's lots of ways of doing this and it doesn't matter which one GCC uses. GCC is solving the problem of how to pass a parameter with a function pointer by writing dynamic code. No matter what code was written there, the next time function call depth gets deep enough the stack frame will be overwritten.
Upvotes: 1 <issue_comment>username_4: As a comment to your question, [<NAME>](https://stackoverflow.com/users/841108/basile-starynkevitch) suggested reading about [closures](https://en.wikipedia.org/wiki/Closure_(computer_programming)). At its core, a closure combines a function to be called with some context, referred to in the literature as the function’s *environment*.
Object-oriented languages have a similar concept known as [delegates](https://en.wikipedia.org/wiki/Delegation_(object-oriented_programming)) where a particular instance can become the environment for a later call where the calling site has no direct knowledge of the underlying object. Languages that support closures natively automatically capture or “close over” the bindings provided in the environment. It may seem an odd language feature, but it can be expressive and useful as the motivation behind your question suggests.
Below is a simple example of the concept of a closure at work in a C program, where it is up to the programmer to direct the captures explicitly.
The function you want to eventually call plus some front matter is
```
#include <stdio.h>
int
add_two_numbers(int a, int b)
{
return a+b;
}
```
A closure is a function to be called combined with its environment. In C, the function to be called is a *pointer to function*. Note that the parameters of `f` align with those of `add_two_numbers`.
```
typedef struct {
struct { /* environment */
int a;
int b;
} env;
int (*f)(int, int); /* function to be called */
} closure;
```
We’d like to create a closure, *i.e.*, set up the association of parameter values to pass with the function to be called when we are ready to do so. In this simple example, `make_adder` leaves the problem of allocating space for the closure to its caller.
```
void
make_adder(int a, int b, closure *c)
{
c->env.a = a;
c->env.b = b;
c->f = add_two_numbers;
}
```
Now that you know how to create one of our simple closures, you call or *invoke* it as in
```
int invoke_closure(closure *c)
{
return c->f(c->env.a, c->env.b);
}
```
Finally, usage will look like
```
int main(void)
{
closure c;
make_adder(2, 1, &c);
printf("The answer is %d.\n", invoke_closure(&c));
return 0;
}
```
with output of
```
The answer is 3.
```
### Further Reading
* [Currying](https://en.wikipedia.org/wiki/Currying)
* [Partial Application](https://en.wikipedia.org/wiki/Partial_application)
Upvotes: 5 [selected_answer]
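For contrast with the hand-rolled environment struct above, languages with first-class closures capture the environment automatically. A Python sketch of the same `make_adder` idea follows (an editorial addition; it also shows the `functools.partial` spelling from the partial-application link above):

```python
from functools import partial

def add(a, b):
    return a + b

def make_adder(a, b):
    # a and b are captured in the closure's environment automatically;
    # no struct and no explicit invoke_closure plumbing needed
    def invoke():
        return add(a, b)
    return invoke

c = make_adder(2, 1)
print(c())              # → 3

p = partial(add, 2, 1)  # same idea via partial application
print(p())              # → 3
```

Both objects bundle the function with its arguments, which is exactly what the C `closure` struct emulates by hand.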
|
2018/03/16
| 618 | 2,259 |
<issue_start>username_0: I am currently working on a project in ASP.NET MVC. I have a page for displaying data where users can use checkboxes to mark different items and then makes changes to those items. On some pages, there are enough checkboxes that a "Select all Checkboxes" button is required. I typed up the code for in javascript and I cannot seem to get it to work no matter what I try. Here is my HTML:
I first loop through my viewmodel data and display a checkbox with an id of my Model's id for each checkbox.
```
@foreach(var item in Model.Data)
{
    <input type="checkbox" id="@item.id" />
}
```
Each checkbox displayed on the page now has an ID of each database ID.
I then create this button
```
Select All
```
That button corresponds to this javascript:
```
function selectAll()
{
var stepIDs = @Html.Raw(JsonConvert.SerializeObject(this.Model.Data));
for(let i = 0; i < stepIDs.length; i++)
{
var curr = stepIDs[i].id;
document.getElementById(curr).checked = true;
}
}
```
I get the IDs again using the SerializeObject() call and then set each corresponding checkbox to true (depending on how many are in the database).
At runtime, **the value of the checkbox successfully changes to true, but it does not display a tick in the checkbox**. I have also tried using jQuery to select it but that does not work either.<issue_comment>username_1: try
```
yourCheckBoxVariable.setAttribute("checked","checked");
```
or
```
yourCheckBoxVariable.click();
```
Hopefully that helps. What you're doing is a bit outside the scope of my knowledge, but I have dealt with checkboxes not ticking before and both of the solutions above worked for me.
Upvotes: 1 <issue_comment>username_2: Try
```
var curr = stepIDs[i].id;
```
instead of
```
var curr = stepIDs[i];
```
inside the for-loop
Upvotes: 1 <issue_comment>username_3: You shouldn't use `id`s for this kind of functionality.
Instead, give the collection of controls you want to manipulate a `class` name.
Then, iterate over the elements with that `class` name and perform the actions required.
Here's an example:
```
<input type="checkbox" class="myChecks"> One
<input type="checkbox" class="myChecks"> Two
<input type="checkbox" class="myChecks"> Three
var chk = document.getElementsByClassName("myChecks");
for(var i = 0; i < chk.length; i++) {
chk[i].checked = true;
}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 774 | 3,106 |
<issue_start>username_0: The method below has 3 possible paths:
1. True,
2. False,
3. Exception.
To test it, I need to mock the private `getWootsWithAvailableProducts` for the `True` and `Exception` paths. However, the collective wisdom seems to be saying that you should not mock private methods. How else can I test those paths if I don't mock the private method and spy on it for verification? If that's all true, why is it so hard to mock private methods? If it's not true, what am I missing?
Under test:
```
public List findAllWoots(final boolean isBuy) throws Exception {
final List allWoots = wootService.findAllWoots();
return isBuy ? getWootsWithAvailableProducts(allWoots) : allWoots;
}
```
Futher details:
`getWootsWithAvailableProducts` calls a public service that makes a network request. So, I can mock the actual service class and prevent any net requests from occurring.
```
private List getWootsWithAvailableProducts(List allWoots)
throws ServiceException {
final String stringOfWootIds = buildStringOfCommaSeparatedIDs(allWoots);
final List categoryIDs = wootSearchService
.getWootIDsOfAvailableProducts(stringOfWootIds);
return filterOnlyWootsWithAvailProducts(allWoots, categoryIDs); // also private.
}
```<issue_comment>username_1: You can write tests for the accessible methods from which this private method is being called and then test all possible outcomes
Also, please ensure you are mocking other class functions and all variables which otherwise may cause issue while executing private function while testing.
So basically, you will be testing possible scenarios like this:
Public function when private function returns
1. true
2. false
3. exception
Hope that helps.
Upvotes: 0 <issue_comment>username_2: From comments it was indicated that `getWootsWithAvailableProducts`
>
> calls a service that makes a network request
>
>
>
That is an external dependency that should be abstracted out into its own concern as it is tightly coupling your code and making it difficult to test in isolation.
```
public interface WootProductsService {
List getWootsWithAvailableProducts(List woots);
}
```
The implementation of said abstractions will encapsulate the network calls, decoupling the original code so that the abstraction can be mocked when testing the method in isolation.
```
WootProductsService wootProductService; //Injected
public List findAllWoots(final boolean isBuy) throws Exception {
final List allWoots = wootService.findAllWoots();
return isBuy ? wootProductService.getWootsWithAvailableProducts(allWoots) : allWoots;
}
```
You now have control of all the dependencies and can manipulate them as desired when testing for all scenarios.
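To make that concrete, here is a self-contained sketch with no mocking framework, where the collaborators are stubbed as lambdas. The `WootFinder` class name and the `String`-typed lists are assumptions made purely for illustration:

```
import java.util.*;

public class WootFinderSketch {
    // Hypothetical collaborators mirroring the snippets above
    interface WootService { List<String> findAllWoots() throws Exception; }
    interface WootProductsService { List<String> getWootsWithAvailableProducts(List<String> woots); }

    static class WootFinder {
        private final WootService wootService;
        private final WootProductsService productService;

        WootFinder(WootService wootService, WootProductsService productService) {
            this.wootService = wootService;
            this.productService = productService;
        }

        List<String> findAllWoots(boolean isBuy) throws Exception {
            List<String> allWoots = wootService.findAllWoots();
            return isBuy ? productService.getWootsWithAvailableProducts(allWoots) : allWoots;
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> all = Arrays.asList("w1", "w2");
        WootFinder finder = new WootFinder(() -> all,
                                           woots -> Collections.singletonList("w1"));

        System.out.println(finder.findAllWoots(false)); // false path: [w1, w2]
        System.out.println(finder.findAllWoots(true));  // true path:  [w1]

        // Exception path: the stubbed service throws, findAllWoots propagates.
        WootFinder failing = new WootFinder(() -> { throw new Exception("network down"); },
                                            woots -> woots);
        try {
            failing.findAllWoots(true);
        } catch (Exception e) {
            System.out.println("caught: " + e.getMessage()); // caught: network down
        }
    }
}
```

In a real test suite you would express the same three scenarios as JUnit tests, replacing the hand-rolled lambdas with Mockito mocks if preferred.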
Pay attention to the concerns of your methods and by extension, classes, as they will indicate if your class is doing too much.
When encountering problems testing your code, take that as a sign that something is not right with its design. Review the cause of the problem and consider what refactors need to be applied to make the code more SOLID.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 844 | 3,094 |
<issue_start>username_0: I am creating a `Modal` component and on a high level, it looks like this:
```
class Modal extends Component {
hideModal() {
/* logic to hide modal */
}
render() {
    return (
      <div>
        {this.props.title}
        {this.props.body}
        {this.props.footer}
      </div>
    );
}
}
```
I want to provide a way for consumers of this component to specify a button that closes the modal. I was thinking of putting this button in its own component like so:
```
class ModalCloseButton extends Component {
render() {
    return (
      <button>{this.props.text}</button>
    );
}
}
```
The `Modal` component and the `ModalCloseButton` component would be used this way:
```
<Modal
  footer={
    <ModalCloseButton text="Close" />
  }
/>
```
How can I link the `Modal` and `ModalCloseButton` components so that the `onClick` handler in the button component triggers the `hideModal` function in the `Modal` component?<issue_comment>username_1: Take a look at this:
<https://codepen.io/JanickFischr/pen/JLRovb>
You can give the child a prop with a function.
```
onClick={this.props.function}
```
Upvotes: 1 <issue_comment>username_2: You can pass a callback function to the modal component and then pass that callback to the buttonClose component. When onClick you execute that callback, and receive the event from the parent
Upvotes: 0 <issue_comment>username_3: Add a constructor function inside the parent:
```
constructor(props) {
super(props)
this.hideModal = this.hideModal.bind(this)
}
```
Then pass it into your child like so:
```
<ModalCloseButton hideModal={this.hideModal} />
```
Then in your child you can call it:
```
<button onClick={() => this.props.hideModal()}>click me</button>
```
Upvotes: 1 <issue_comment>username_4: In Modal component you can extend passed Button element with additional prop holding reference to `hideModal` method using [React.cloneElement](https://reactjs.org/docs/react-api.html#cloneelement):
```
class Modal extends Component {
hideModal() {
/* logic to hide modal */
}
render() {
    const ButtonWithHide = React.cloneElement(this.props.footer, { hide: this.hideModal.bind(this) });
    return (
      <div>
        {this.props.title}
        {this.props.body}
        {ButtonWithHide}
      </div>
    );
}
}
```
and in the Button component set the passed method as the onClick handler:
```
class ModalCloseButton extends Component {
render() {
    return (
      <button onClick={this.props.hide}>{this.props.text}</button>
    );
}
}
```
Upvotes: 0 <issue_comment>username_5: Add a state varible in modal(hide=true) and set visiblity of that modal on the basis of that modal. create a method hideModal() that toggle to the state variable in your Modal component, after that pass this method to the ModalCloseButton component as a props and call that method onClick of button in ModalCloseButton component.
class Modal extends Component {
constructor(props){
super(props)
this.state={
hide:true,
}
hideModal() {
this.setState({
hide:false,
})
}
render() {
return (
{this.props.title}
{this.props.body}
{this.props.footer}
{this.hideModal()}}/>
);
}
}
class ModalCloseButton extends Component {
render() {
return (
{this.props.text}
);
}
}
Upvotes: -1
|
2018/03/16
| 740 | 2,669 |
<issue_start>username_0: I have this Firebase database:

and I need to get all uids (children) of alii-b9d94. But the problem is: when I get it as an object, I can't access the object to get the values from it.
This is what I am able to get, but I can't access it:

ts code:
```
import { Component } from '@angular/core';
import { IonicPage, NavController, NavParams } from 'ionic-angular';
import firebase from 'firebase';
import { AuthService } from '../../services/auth';
import { InfoService } from '../../services/info';
@IonicPage()
@Component({
selector: 'page-dash',
templateUrl: 'dash.html',
})
export class DashPage {
constructor(public navCtrl: NavController,
public navParams: NavParams,
private infoService: InfoService,
private authService: AuthService) {
firebase.database().ref().on('value', (snap) => {
let rootVals = snap.val();
let uids : string[] = [];
      /* I am trying to access it with this code but it's not working :(
         I know the problem is with .this, but is there any other way
         to retrieve every child in a single variable? */
console.log(rootVals.this.uids);
console.log("rootVals");
console.log(rootVals);
} );
}
}
```
How can I get and store each child in a single variable?<issue_comment>username_1: To get only the keys (uids), try this:
```
firebase.database().ref().on('value', (snap) => {
snap.forEach(child => {
let key = child.key;
});
});
```
more info here:
<https://firebase.google.com/docs/reference/js/firebase.database.Reference#key>
Upvotes: 0 <issue_comment>username_2: ```
let uids = [];
let rootVals = [];
firebase.database().ref().on('value', (snap) => {
let result = snap.val();
for(let k in result){
uids.push(k);
rootVals.push(result[k]);
}
});
```
"snap.value()" has both key and value object together , so using for loop like this, you can get object key and value separately.
when each time for loop runs , "k" gives object key. so you can get all keys of each object. Also "result[k]" gives the value object.
if you need to get key and value together, you can make your own json object and push into array
```
let keys_values = []
firebase.database().ref().on('value', (snap) => {
let result = snap.val();
for(let k in result){
keys_values.push({
key : k,
values : result[k]
})
}
});
```
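The key/value extraction above is plain JavaScript. With a made-up literal object standing in for the snapshot's value (the keys and emails below are invented for illustration), it behaves like this:

```
// Plain-JS illustration of the for...in pattern used above.
// `result` stands in for what the snapshot's value would return.
const result = {
  "-L8BTLnIFuDW7O2aWvIm": { email: "user1@example.com" },
  "-L8BTq2SKKTDT2BzXUCz": { email: "user2@example.com" }
};

const uids = [];
const rootVals = [];
for (const k in result) {
  uids.push(k);             // the child key (uid)
  rootVals.push(result[k]); // the child value object
}

console.log(uids);
// → ["-L8BTLnIFuDW7O2aWvIm", "-L8BTq2SKKTDT2BzXUCz"]
console.log(rootVals[0].email);
// → user1@example.com
```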
Upvotes: 2 [selected_answer]
|
2018/03/16
| 888 | 3,760 |
<issue_start>username_0: I am creating a Lambda function that makes a call to some functionality on my server.
I would like to test the Lambda function against a local instance of my server, so that I don't have to deploy to AWS in order to test run the whole flow.
Is there any way to call my local machine's development server from inside Lambda without having to keep deploying to AWS (or some other remote server)?<issue_comment>username_1: This may be possible if you set up a VPN or Direct Connect between your VPC and your local network.
But honestly, this will probably be more expensive and more complex than simply deploying to an EC2 instance. Automating deployment should be simple and straightforward.
Upvotes: 1 <issue_comment>username_2: Too little info. It depends on how your lambda is configured; if it's in a VPC, then on how that VPC is configured; and on whether your laptop has a static IP or is sitting behind firewalls.
In general what you need is:
* your service exposed to Internet:
+ a public IP for your local machine
	+ ports (on which your service is exposed) opened up if blocked by a firewall etc.
* your lambda has Internet access (route, security-group/acl/gateways):
+ if lambda is in public subnet, see AWS doc on how to setup IGW.
+ if it's in private subnet, see AWS doc on how to setup NAT GW.
Otherwise, if your laptop is inside some special secure network, use a VPN as username_1 suggested, which makes sure your service is NOT exposed to the Internet.
Some links that might help:
* Setup [Internet access for your lambdas](https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/) in VPC.
* Setup [Internet access for your VPC](https://docs.aws.amazon.com/lambda/latest/dg/vpc.html#vpc-internet).
* Setup [VPC for VPN](http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_VPN.html) access.
Upvotes: 1 <issue_comment>username_3: I will write specific to the Java language:
AWS Lambda functions are plain old Java objects, so you can write unit tests to trigger them locally. Even if you are using the RequestHandler interface, which requires the entry point of the function to provide the `public O handleRequest(I input, Context context)` method, and you are using the context variable, you can easily mock that, for example:
```
@Test
public void testAddAddressHandlerDevStage() throws Exception {
Context context = mock(Context.class);
when(context.getInvokedFunctionArn()).thenReturn("arn:aws:lambda:eu-central-1:xxxxxxxxxxx:function:updateAddress:TEST");
Address newAddress = new Address();
new AddAddress().handleRequest(newAddress, context);
assertNotNull(newAddress.getId());
}
```
If you write unit tests like this you can also trigger them via a CI tool like Jenkins on a test server.
So the above test works with TEST env variables, but you can write a configuration for a DEV environment too. The Java language provides richer tools for such generalizations and abstractions compared to most interpreted languages.
Upvotes: 0 <issue_comment>username_4: If your local machine is publicly reachable and your server is callable on an open port then I don't see why it wouldn't just work. Maybe DynDNS hack would work if it is not publicly accessible.
Are you mainly concerned with having your favorite IDE for debugging? If not, then it is cheap to set up an EC2 server that you only spin up for testing. No charge when not in use. Or maybe check out [AWS workspaces](https://aws.amazon.com/workspaces/) if you need a richer environment for testing and debugging this.
Also remember your EC2 instances can have a Route 53 private hosted zone mapping an arbitrary URL to your instance, which makes things a bit cleaner and closer to production.
Upvotes: 0
|
2018/03/16
| 453 | 1,419 |
<issue_start>username_0: I have a quite complicated BDH function with many overrides. Could you recommend any books or websites where I can find information about these overrides?
```
=BDH($B$9,"TRADE",$C$1,$C$2,"Dir=V","Dts=S","Sort=A","IntrRw=True","CondCodes=H","QRM=H","ExchCode=S","BrkrCodes=S","RPSCodes=H","RPTParty=H","RPTContra=H","RPTRemuneration=H","RPTAtsIndicator=H","BICMICCodes=S","Type=S","Price=S","Size=S","TradeTime=S","Yield=H","ActionCodes=H","IndicatorCodes=H","UpfrontPrice=H","Spread=H","UseDPDF=Y","cols=10;rows=757")
```<issue_comment>username_1: It's all explained in the help for the function.
In Excel, select the cell with the function, click on the arrow next to the "AutoSum" button and select "More Functions...".
[](https://i.stack.imgur.com/cuECl.png)
Then click on the "Help on this function" link at the bottom of the window that pops up.
Upvotes: 0 <issue_comment>username_2: Details of override options are available in the Bloomberg terminal, use `FLDS` for details.
Also, the API Developer Guide has a section on converting Bloomberg Excel formulas to their API equivalent:
[Bloomberg Core API Developer Guide](https://data.bloomberglp.com/professional/sites/10/2017/03/BLPAPI-Core-Developer-Guide.pdf)
Page 95 describes intra-day tick requests (i.e. the BDH "TRADE" function in your example).
Upvotes: 1
|
2018/03/16
| 1,344 | 5,719 |
<issue_start>username_0: How does one properly and cleanly terminate a python program if needed? `sys.exit()` does not reliably perform this function as it merely terminates the thread it is called from, `exit()` and `quit()` are not supposed to be used except in terminal windows, `raise SystemExit` has the same issues as `sys.exit()` and is bad practice, and `os._exit()` immediately kills everything and does not clean up, which can cause issues with residuals.
Is there a way to ALWAYS kill the program and all threads, regardless of where it is called from, while still cleaning up?<issue_comment>username_1: While your app is running Bad Things can happen, which we'd like to recover from. One example is Power Fail.
There is no computing technique for arranging for instructions to execute on a device that is powered off. So we may need to reset some state upon restart. Your app already has this requirement; I'm just making it explicit.
It is hard to reliably gain control just after each of the various Bad Things that might happen, as you found when you carefully considered several standard techniques. You weren't specific about the sort of items needing cleanup that you envision, but we could consider these cases:
1. transient - TCP connections, flock, etc.
2. permanent - disk files, side effects on distant hosts
Rather than invoking your app directly, arrange for it to be run by a Nanny process which forks the app as a child. At some point the app will exit, the Nanny will regain control with all transient items having been tidied up by the OS, and then the Nanny can do any necessary cleanup on permanent items prior to an app restart. This is identical to the cleanup the Nanny will need to do on initial startup, for example after power fail events. The advantage of running your app under a parent process is that the parent can perform immediate cleanups after simple app failures such as SEGV.
Cleaning up permanent items likely involves timeouts on timestamped resources. If your system is able to reboot within say 2 seconds of a brief power outage, you may find it necessary to deliberately stay Down (sleep) for long enough to ensure that distant hosts have reliably detected your transition to Down, prior to announcing a transition to Up. Techniques like Virtual Synchrony and Paxos can help you drive toward rapid convergence.
summary
-------
Sometimes an app will unexpectedly die before running cleanup code. Take a belt-and-suspenders approach: put essential cleanup code in the (simpler, more reliable) parent process.
Upvotes: 0 <issue_comment>username_2: >
> Is there a way to ALWAYS kill the program and all threads, regardless of where it is called from, while still cleaning up?
>
>
>
No - "regardless where it is called from" and "cleaning up" do not mix.
---
It is simply not *meaningful* to both reliably and safely kill a thread. Killing a thread (or process) means interrupting what it is doing - that includes clean up. Not interrupting any clean up means, well, not actually killing a thread. You cannot have both at the same time.
If you want to **kill** all threads, then `os._exit()` is precisely what you are asking for. If you want to clean up a thread, no generic feature can fulfil that.
The only reliable way to shut down threads is to implement your own, safe interrupt. To some extent, this must be customised to your use case - after all, you are the only one knowing when it is safe to shut down.
Killing a thread with Exceptions
--------------------------------
The underlying CPython API allows you to raise an exception in another thread. See for example [this answer](https://stackoverflow.com/a/325528/5349916).
This is not portable and *not safe*. You could be killing that thread at any arbitrary point. If your code expects an exception or your resources clean up after themselves (via `__del__`), you can limit harm, but not exclude it. Still, it is very close to what most people think of as a "clean kill".
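For completeness, the CPython call in question is `PyThreadState_SetAsyncExc`, reachable from Python via `ctypes`. A sketch of the recipe (again: CPython-only, not portable, not safe; the exception is only delivered when the target thread next executes Python bytecode, so a thread blocked inside a C call will not die immediately):

```
import ctypes
import threading
import time

def raise_in_thread(thread, exc_type):
    """Ask CPython to raise exc_type inside `thread` (unsafe, CPython-only)."""
    res = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(exc_type))
    if res == 0:
        raise ValueError("invalid thread id")
    if res > 1:
        # More than one thread state was affected: undo and bail out.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(
            ctypes.c_long(thread.ident), None)
        raise SystemError("PyThreadState_SetAsyncExc failed")

def worker():
    try:
        while True:
            time.sleep(0.05)  # the async exception lands once this returns
    except KeyboardInterrupt:
        print("interrupted")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)
raise_in_thread(t, KeyboardInterrupt)
t.join()
```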
Self-Cleaning daemon threads with `atexit`
------------------------------------------
Threads running with [Thread.daemon](https://docs.python.org/3/library/threading.html#threading.Thread.daemon) are abruptly terminated if no other threads remain. Usually, this is half of what you want: *gracefully* terminate if all proper threads exit.
Now, the key is that a `daemon` thread does not prevent shutdown. This also means it does not prevent `atexit` from running! Thus, a daemon can use `atexit` to automatically shutdown itself and cleanup on termination.
```
import threading
import atexit
import time
class CleanThread(threading.Thread):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.daemon = True
# use a signal to inform running code about shutdown
self.shutdown = threading.Event()
def atexit_abort(self):
# signal to the thread to shutdown with the interpreter
self.shutdown.set()
# Thread.join in atexit causes interpreter shutdown
# to be delayed until cleanup is done
self.join() # or just release resources and exit
def run(self):
atexit.register(self.atexit_abort)
while not self.shutdown.wait(0.1):
print('who wants to live forever?')
print('not me!')
atexit.unregister(self.atexit_abort)
thread = CleanThread()
thread.start()
time.sleep(0.3)
# program exits here
```
Note that this still requires your code to listen for a cleanup signal! Depending on what the Thread does, there are other mechanisms to achieve this. For example, the `concurrent.future` module empties the task queue of all worker threads on shutdown.
Upvotes: 2
|
2018/03/16
| 1,348 | 5,664 |
<issue_start>username_0: I am using Angular and I have a datasource for my table. I want to delete an item from the datasource. Please help me.
```
empDataSource: EmpDataSource;
selection = new SelectionModel(true, []);
ngOnInit() {
this.empDataSource= new EmpDataSource(this.empList);
}
deleteEmp(){
this.empDataSource= this.empDataSource.data.filter(row => row!=this.selection.select(row));
}
```
Here I am not able to find the "filter" option in datasource, Can some one please help any other way to delete item from datasource<issue_comment>username_1: While your app is running Bad Things can happen, which we'd like to recover from. One example is Power Fail.
There is no computing technique for arranging for instructions to execute on a device that is powered off. So we may need to reset some state upon restart. Your app already has this requirement; I'm just making it explicit.
It is hard to reliably gain control just after each of the various Bad Things that might happen, as you found when you carefully considered several standard techniques. You weren't specific about the sort of items needing cleanup that you envision, but we could consider these cases:
1. transient - TCP connections, flock, etc.
2. permanent - disk files, side effects on distant hosts
Rather than invoking your app directly, arrange for it to be run by a Nanny process which forks the app as a child. At some point the app will exit, the Nanny will regain control with all transient items having been tidied up by the OS, and then the Nanny can do any necessary cleanup on permanent items prior to an app restart. This is identical to the cleanup the Nanny will need to do on initial startup, for example after power fail events. The advantage of running your app under a parent process is that the parent can perform immediate cleanups after simple app failures such as SEGV.
Cleaning up permanent items likely involves timeouts on timestamped resources. If your system is able to reboot within say 2 seconds of a brief power outage, you may find it necessary to deliberately stay Down (sleep) for long enough to ensure that distant hosts have reliably detected your transition to Down, prior to announcing a transition to Up. Techniques like Virtual Synchrony and Paxos can help you drive toward rapid convergence.
summary
-------
Sometimes an app will unexpectedly die before running cleanup code. Take a belt-and-suspenders approach: put essential cleanup code in the (simpler, more reliable) parent process.
Upvotes: 0 <issue_comment>username_2: >
> Is there a way to ALWAYS kill the program and all threads, regardless of where it is called from, while still cleaning up?
>
>
>
No - "regardless where it is called from" and "cleaning up" do not mix.
---
It is simply not *meaningful* to both reliably and safely kill a thread. Killing a thread (or process) means interrupting what it is doing - that includes clean up. Not interrupting any clean up means, well, not actually killing a thread. You cannot have both at the same time.
If you want to **kill** all threads, then `os._exit()` is precisely what you are asking for. If you want to clean up a thread, no generic feature can fulfil that.
The only reliable way to shut down threads is to implement your own, safe interrupt. To some extent, this must be customised to your use case - after all, you are the only one knowing when it is safe to shut down.
Killing a thread with Exceptions
--------------------------------
The underlying CPython API allows you to raise an exception in another thread. See for example [this answer](https://stackoverflow.com/a/325528/5349916).
This is not portable and *not safe*. You could be killing that thread at any arbitrary point. If your code expects an exception or your resources clean up after themselves (via `__del__`), you can limit harm, but not exclude it. Still, it is very close to what most people think of as a "clean kill".
Self-Cleaning daemon threads with `atexit`
------------------------------------------
Threads running with [Thread.daemon](https://docs.python.org/3/library/threading.html#threading.Thread.daemon) are abruptly terminated if no other threads remain. Usually, this is half of what you want: *gracefully* terminate if all proper threads exit.
Now, the key is that a `daemon` thread does not prevent shutdown. This also means it does not prevent `atexit` from running! Thus, a daemon can use `atexit` to automatically shutdown itself and cleanup on termination.
```
import threading
import atexit
import time
class CleanThread(threading.Thread):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.daemon = True
# use a signal to inform running code about shutdown
self.shutdown = threading.Event()
def atexit_abort(self):
# signal to the thread to shutdown with the interpreter
self.shutdown.set()
# Thread.join in atexit causes interpreter shutdown
# to be delayed until cleanup is done
self.join() # or just release resources and exit
def run(self):
atexit.register(self.atexit_abort)
while not self.shutdown.wait(0.1):
print('who wants to live forever?')
print('not me!')
atexit.unregister(self.atexit_abort)
thread = CleanThread()
thread.start()
time.sleep(0.3)
# program exits here
```
Note that this still requires your code to listen for a cleanup signal! Depending on what the Thread does, there are other mechanisms to achieve this. For example, the `concurrent.future` module empties the task queue of all worker threads on shutdown.
Upvotes: 2
|
2018/03/16
| 985 | 3,586 |
<issue_start>username_0: I am new to the use of columnstore indexes. The different structure of columnstore data raised a question: how do we know which data from one column, col1 (page1), are connected to data in another column, col2 (page2)?
For example if we have the following representation of a table using traditional rowstore:
```
row1 1 2 3 -- page1
row2 4 5 6 -- page2
```
And for columnstore index:
```
col1 col2 col3
1 2 3
4 5 6
```
How we know using columnstore index which data are connected to who?<issue_comment>username_1: You are not totally getting rid of the relationship between the columns and their rows. The simplified difference is the way the table is stored. Traditional storage is physically stored in a row-wise manner while a columnstore is stored column-wise. The doc link provided here has much more info that I would prefer to not copy and paste.
From the [docs](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview "docs"):
>
> Key terms and concepts These are key terms and concepts are associated
> with columnstore indexes.
>
>
> columnstore A columnstore is data that is logically organized as a
> table with rows and columns, and physically stored in a column-wise
> data format.
>
>
> rowstore A rowstore is data that is logically organized as a table
> with rows and columns, and then physically stored in a row-wise data
> format. This has been the traditional way to store relational table
> data. In SQL Server, rowstore refers to table where the underlying
> data storage format is a heap, a clustered index, or a
> memory-optimized table.
>
>
>
So here is an example of how the relationship is retained in TSQL. Run this against a table that has a CS index (Disclaimer: I am not a pro at CS indexing):
```
SELECT o.name AS table_,
i.name AS index_,
i.type_desc AS index_type,
p.partition_number,
rg.row_group_id,
cs.column_id,
c.name AS column_
FROM sys.objects o
INNER JOIN sys.indexes i
ON i.object_id = o.object_id
INNER JOIN sys.partitions p
ON p.object_id = o.object_id
AND i.index_id = p.index_id
INNER JOIN sys.column_store_row_groups rg
ON rg.object_id = o.object_id
AND i.index_id = rg.index_id
INNER JOIN sys.column_store_segments cs
ON cs.partition_id = p.partition_id
INNER JOIN sys.columns c
ON c.object_id = o.object_id
AND c.column_id = cs.column_id
WHERE o.object_id = OBJECT_ID(your_table_name)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: There is no explicit connection, just as there is no explicit connection between column values in a row-based table. Even so, we can always go from one to the other by simply enumerating.
Imagine reading off the column groups in a row-based manner (first value of `col1`, first value of `col2`, first value of `col3`) and there's your row. When identical column values are compressed into ranges, imagine they carry numbers telling you how many times they occur -- you can still read off rows this way by simple counting, even though the process is inefficient. Asking for any particular row (`SELECT * FROM T WHERE Column = uniquevalue`) entails searching for that value in the columnstore, which is very fast, and then using its position to find all the other values in all the other column groups to get back a row, which generally is not, since we need to read through all the values in all the ranges in the worst case. (Of course, a traditional B-tree index can help with this, which is why you'd use those for row lookups.)
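A language-neutral sketch of that enumeration, using the question's example table (Python purely for readability; this illustrates the idea, not what SQL Server literally does):

```
# Column-wise storage of the two rows from the question:
# row1 = (1, 2, 3), row2 = (4, 5, 6)
col1 = [1, 4]
col2 = [2, 5]
col3 = [3, 6]

# Rows are recovered purely by position: the i-th value of every
# column group belongs to the i-th row.
rows = list(zip(col1, col2, col3))
print(rows)  # [(1, 2, 3), (4, 5, 6)]

# A point lookup works the same way: find the value's position in one
# column, then read the other columns at that same position.
pos = col1.index(4)
print((col1[pos], col2[pos], col3[pos]))  # (4, 5, 6)
```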
Upvotes: 2
|
2018/03/16
| 1,677 | 5,592 |
<issue_start>username_0: I am a newbie in text mining and R. I am doing term clustering using kmeans on a set of documents. To group the terms I used the cosine formula. There are 57 terms from 839 documents that I want to cluster. But somehow, my supervisor said the process is not correct because there's overlap in the plot.
Here's the TFIDF matrix

Here's the code I used
```
dokumen <- read.csv("dokumen.csv", stringsAsFactors = FALSE, header=TRUE)
corp <- Corpus(DataframeSource(dokumen))
corp <- tm_map(corp, content_transformer(tolower))
corp <- tm_map(corp, removeNumbers)
removeURL <- function(x) gsub("http[[:alnum:][:punct:]]*", "", x)
removeURL2 <- function(x) gsub("www[[:alnum:][:punct:]]*", "", x)
removeEmail <- function(x) gsub("[[:alnum:][:punct:]]*@[[:alnum:][:punct:]]*", "", x)
delPunct <- content_transformer(function(x) {return (gsub("[[:punct:]]", " ", x))})
corp <- tm_map(corp, content_transformer(removeURL))
corp <- tm_map(corp, content_transformer(removeURL2))
corp <- tm_map(corp, content_transformer(removeEmail))
corp <- tm_map(corp, delPunct)
corp <- tm_map(corp, removeWords, stopwords("smart"))
corp <- tm_map(corp, stemDocument, language = "english")
corp <- tm_map(corp, removeWords, c("australia", "australian", "indonesia", "indonesian", "embassi", "january", "february", "march", "april", "may", "june", "july", "august", "september", "october", "november", "december", "past", "yesterday", "today", "present", "tomorrow", "day", "week", "month", "year", "time", "hundred", "thousand", "million", "billion", "countri", "world", "nation", "one", "two", "three", "four", "six", "seven", "eight", "nine", "ten", "eleven", "twelve"))
corp <- tm_map(corp, stripWhitespace)
tdm <- DocumentTermMatrix(corp)
m <- as.matrix(tdm)
dtmi <- DocumentTermMatrix(corp, control = list(weighting = weightTfIdf))
m1 <- as.matrix(dtmi)
dtms <- removeSparseTerms(dtmi, 0.79)
m2 <- as.matrix(dtms)
m3 <- 1 - crossprod_simple_triplet_matrix(dtms)/(sqrt(col_sums(dtms^2) %*% t(col_sums(dtms^2))))
km.res <- eclust(m3, "kmeans", k = 3, nstart = 100, graph = FALSE)
```
But when I visualised it into 4 clusters there's overlap, which my supervisor said is an incorrect result: k-means results shouldn't overlap. Here's the image

Can anyone tell me whether I used the correct code? Or is there any action I should take on the dtm before clustering? Thank you very much!
<NAME><issue_comment>username_1: You are not totally getting rid of the relationship between the columns and their rows. The simplified difference is the way the table is stored. Traditional storage is physically stored in a row-wise manner while a columnstore is stored column-wise. The doc link provided here has much more info that I would prefer to not copy and paste.
From the [docs](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview "docs"):
>
> Key terms and concepts These are key terms and concepts associated
> with columnstore indexes.
>
>
> columnstore A columnstore is data that is logically organized as a
> table with rows and columns, and physically stored in a column-wise
> data format.
>
>
> rowstore A rowstore is data that is logically organized as a table
> with rows and columns, and then physically stored in a row-wise data
> format. This has been the traditional way to store relational table
> data. In SQL Server, rowstore refers to table where the underlying
> data storage format is a heap, a clustered index, or a
> memory-optimized table.
>
>
>
So here is an example of how the relationship is retained in TSQL. Run this against a table that has a CS index (Disclaimer: I am not a pro at CS indexing):
```
SELECT o.name AS table_,
i.name AS index_,
i.type_desc AS index_type,
p.partition_number,
rg.row_group_id,
cs.column_id,
c.name AS column_
FROM sys.objects o
INNER JOIN sys.indexes i
ON i.object_id = o.object_id
INNER JOIN sys.partitions p
ON p.object_id = o.object_id
AND i.index_id = p.index_id
INNER JOIN sys.column_store_row_groups rg
ON rg.object_id = o.object_id
AND i.index_id = rg.index_id
INNER JOIN sys.column_store_segments cs
ON cs.partition_id = p.partition_id
INNER JOIN sys.columns c
ON c.object_id = o.object_id
AND c.column_id = cs.column_id
WHERE o.object_id = OBJECT_ID('your_table_name')
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: There is no explicit connection, just as there is no explicit connection between column values in a row-based table. Even so, we can always go from one to the other by simply enumerating.
Imagine reading off the column groups in a row-based manner (first value of `col1`, first value of `col2`, first value of `col3`) and there's your row. When identical column values are compressed into ranges, imagine they carry numbers telling you how many times they occur -- you can still read off rows this way by simple counting, even though the process is inefficient. Asking for any particular row (`SELECT * FROM T WHERE Column = uniquevalue`) entails searching for that value in the columnstore, which is very fast, and then using its position to find all the other values in all the other column groups to get back a row, which generally is not, since we need to read through all the values in all the ranges in the worst case. (Of course, a traditional B-tree index can help with this, which is why you'd use those for row lookups.)
Upvotes: 2
|
2018/03/16
| 472 | 1,747 |
<issue_start>username_0: How do you find the index of an entry within a list. My code below.
`listFrom` and `listTo` contain entries of strings.
I am getting the error:
>
> Severity Code Description Project File Line Suppression State
> Error CS1503 Argument 1: cannot convert from 'System.Collections.Generic.List<string>' to 'string'
>
>
>
```
List<string> listFrom = new List<string>(); //Contains a list of strings
List<string> listTo = new List<string>(); //Contains a list of strings
List<string> lineStart = listFrom.Except(listTo).ToList(); //Will always return a single value
List<string> lineEnd = listTo.Except(listFrom).ToList(); //Will always return a single value
int startLineIndex = listFrom.IndexOf(lineStart); //Error on this line
Console.WriteLine("Index of Start: " + startLineIndex);
```<issue_comment>username_1: It's because when you call `listFrom.IndexOf(lineStart);` it's expecting a String. lineStart in your code is defined as a List of Strings so you would either need to call on a specific value inside the list, or change lineStart to a string.
Upvotes: 0 <issue_comment>username_2: As the error states, it cannot convert from `'System.Collections.Generic.List<string>'` to `'string'`. However, I never knew the function `SingleOrDefault()` existed.
`All credit to @maccettura even though he didn't know what I was trying to do! xD`
Code change below for the answer:
```
List<string> listFrom = new List<string>(); //Contains a list of strings
List<string> listTo = new List<string>(); //Contains a list of strings
string lineStart = listFrom.Except(listTo).SingleOrDefault();
string lineEnd = listTo.Except(listFrom).SingleOrDefault(); //Note: listTo.Except(listFrom), matching lineEnd above
int startLineIndex = listFrom.IndexOf(lineStart); //No longer errors: lineStart is now a string
Console.WriteLine("Index of Start: " + startLineIndex);
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,183 | 2,856 |
<issue_start>username_0: I need to perform a basic `group_by` / `mutate` operation using an auxiliary grouping variable. For instance:
```
df <- data.frame(
u = c(0, 0, 1, 0, 1),
v = c(8, 4, 2, 3, 5)
)
df %>%
group_by(tmp = cumsum(u)) %>%
mutate(w = cumprod(v)) %>%
ungroup %>%
select(-tmp)
```
My problem is that if `df` happens to already contain a column named `tmp` I will lose it.
Of course I could choose a very exotic name instead of `tmp` to reduce the likeliness of a collision (or I could even choose something like `strrep("z", max(nchar(names(df))) + 1)` to be sure) but I'd prefer to have a cleaner solution.
In other words, I'm looking for the `dplyr` equivalent of this `data.table` line:
```
setDT(df)[, w := cumprod(v), by = cumsum(u)]
```<issue_comment>username_1: We could create a function to take care of this. Assuming that the temporary grouping variable to be created is 'tmp', by concatenating with the column names of the dataset and calling `make.unique`, if there is already a 'tmp' column in the dataset, the duplicate one will be renamed as 'tmp.1'. Using the `!!`, naming the column with 'tmp.1' (from `nm1`) will not affect the 'tmp' already present in the dataset. In case, if there is no 'tmp', column, the grouping column will be named as 'tmp' and later removed with `select`
```
f1 <- function(dat, grpCol, Col) {
grpCol <- enquo(grpCol)
Col <- enquo(Col)
changeCol <- "tmp"
nm1 <- tail(make.unique(c(names(dat), changeCol)), 1)
dat %>%
group_by(!! (nm1) := cumsum(!! grpCol)) %>%
mutate(w = cumprod(!!Col)) %>%
ungroup %>%
select(-one_of(nm1))
}
```
-run the function
```
f1(df, u, v)
# A tibble: 5 x 3
# u v w
#   <dbl> <dbl> <dbl>
#1 0 8.00 8.00
#2 0 4.00 32.0
#3 1.00 2.00 2.00
#4 0 3.00 6.00
#5 1.00 5.00 5.00
f1(df %>% mutate(tmp = 1), u, v) #create a 'tmp' column in dataset
# A tibble: 5 x 4
# u v tmp w
#   <dbl> <dbl> <dbl> <dbl>
#1 0 8.00 1.00 8.00
#2 0 4.00 1.00 32.0
#3 1.00 2.00 1.00 2.00
#4 0 3.00 1.00 6.00
#5 1.00 5.00 1.00 5.00
```
---
As a followup (comments from @Frank) about passing expressions
```
expr <- quos(tmp = cumsum(u), w = cumprod(v))
#additional checks outside the function
names(expr)[1] <- if(names(expr)[1] %in% names(df))
strrep(names(expr)[1], 2) else names(expr)[1]
f2 <- function(dat, exprs ){
dat %>%
group_by(!!! exprs[1]) %>%
mutate(!!! exprs[2])
}
f2(df, expr)
# A tibble: 5 x 4
# Groups: tmp [3]
# u v tmp w
#   <dbl> <dbl> <dbl> <dbl>
#1 0 8.00 0 8.00
#2 0 4.00 0 32.0
#3 1.00 2.00 1.00 2.00
#4 0 3.00 1.00 6.00
#5 1.00 5.00 2.00 5.00
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You could use `ave` instead:
```
df %>% mutate(w = ave(v, cumsum(u), FUN = cumprod))
```
`by` would also work:
```
df %>%
by(cumsum(.$u), mutate, w = cumprod(v)) %>%
unclass %>%
bind_rows
```
Upvotes: 2
|
2018/03/16
| 3,703 | 13,649 |
<issue_start>username_0: I have an IAP setup in an app, along with a few Cocoapods:
```
- Firebase/AdMob (4.8.0):
- Firebase/Core
- Google-Mobile-Ads-SDK (= 7.27.0)
- Firebase/Core (4.8.0):
- FirebaseAnalytics (= 4.0.5)
- FirebaseCore (= 4.0.13)
- Firebase/Crash (4.8.0):
- Firebase/Core
- FirebaseCrash (= 2.0.2)
- FirebaseAnalytics (4.0.5):
- FirebaseCore (~> 4.0)
- FirebaseInstanceID (~> 2.0)
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- nanopb (~> 0.3)
- FirebaseCore (4.0.13):
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- FirebaseCrash (2.0.2):
- FirebaseAnalytics (~> 4.0)
- FirebaseInstanceID (~> 2.0)
- GoogleToolboxForMac/Logger (~> 2.1)
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- Protobuf (~> 3.1)
```
IAP and all of the above frameworks are working perfect! No problems at all.
**Once I do a pod update, things start to go south.**
After a pod update, here are the updated versions:
PODS:
```
- Firebase/AdMob (4.10.1):
- Firebase/Core
- Google-Mobile-Ads-SDK (= 7.29.0)
- Firebase/Core (4.10.1):
- FirebaseAnalytics (= 4.1.0)
- FirebaseCore (= 4.0.17)
- Firebase/Crash (4.10.1):
- Firebase/Core
- FirebaseCrash (= 2.0.2)
- FirebaseAnalytics (4.1.0):
- FirebaseCore (~> 4.0)
- FirebaseInstanceID (~> 2.0)
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- nanopb (~> 0.3)
- FirebaseCore (4.0.17):
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- FirebaseCrash (2.0.2):
- FirebaseAnalytics (~> 4.0)
- FirebaseInstanceID (~> 2.0)
- GoogleToolboxForMac/Logger (~> 2.1)
- GoogleToolboxForMac/NSData+zlib (~> 2.1)
- Protobuf (~> 3.1)
```
After this pod update - my IAP crashes on a successful purchase 100% of the time. Absolutely nothing changed in code. Just a pod update to the newest frameworks listed above.
I am getting the following crash once the IAP completes (and the "You're all set!" success alert pops up):
```
libc++abi.dylib: terminating with uncaught exception of type NSException
(lldb)
libsystem_kernel.dylib`__pthread_kill:
0x1859bc2e0 <+0>: mov x16, #0x148
0x1859bc2e4 <+4>: svc #0x80
-> 0x1859bc2e8 <+8>: b.lo 0x1859bc300 ; <+32>
0x1859bc2ec <+12>: stp x29, x30, [sp, #-0x10]!
0x1859bc2f0 <+16>: mov x29, sp
0x1859bc2f4 <+20>: bl 0x18599cbdc ; cerror_nocancel
0x1859bc2f8 <+24>: mov sp, x29
0x1859bc2fc <+28>: ldp x29, x30, [sp], #0x10
0x1859bc300 <+32>: ret
```
Here is a screenshot of the debug panel:
[](https://i.stack.imgur.com/exmsO.png)
Here is what Firebase crash reporting is logging:
```
-[__NSCFBoolean timeIntervalSince1970]: unrecognized selector sent to instance 0x1b6f8a878
```
***Some items to note:***
* Nothing at all changed in code.
* Cocoapods were updated in terminal. Steps: 1. CD to dir, 2. $ pod update
* I tested the IAP before updating Cocoapods - everything worked flawless; the app did not crash.
* I did a project clean before testing the IAP after Cocoapod update.
* Crashing on multiple devices - (iOS 11.2.6 and 11.2.1).
**What is causing this crash since I only updated pods?**
**Bounty update:**
I have added a Bounty to this question because I am now experiencing it on other projects. I had an old project that i wanted to update the Pods (Firebase / Firebase Crash / Google Ads). Here are the exact steps I took:
1. CD to project directory.
2. Pod update. Cocoapods did not give me any errors at all.
Within Xcode, I run the project whose Podfile I updated... I go through purchasing an IAP and it crashes as soon as it's complete. Again, this does not happen before the pod file was updated! The IAP works fine until I run pod update.
With the newly offended broken project, I removed Podfile, Podfile.lock, and Pods directory. I dragged the same files and directory in from an older project. Works perfect without any crashing.
This problem is persisting ONLY after pod update. I'm lost..
IAP Helper file
```
import StoreKit
import Firebase
public typealias MYProductIdentifier = String
public typealias MYProductRequestCompletionHandler = (_ success: Bool, _ products: [SKProduct]?) -> ()
// MARK: - Class
public class IAPHelper: NSObject {
// Define properties!
fileprivate let myProductIdentifiers: Set<MYProductIdentifier>
fileprivate var myPurchasedProductIdentifiers = Set<MYProductIdentifier>()
// Optional properties
fileprivate var myProductsRequest: SKProductsRequest?
fileprivate var myProductsRequestCompletionHandler: MYProductRequestCompletionHandler?
// NOTIFICATION
static let IAPTransactionInProgress = "IAPTransactionInProgress"
static let IAPTransactionFailed = "IAPTransactionFailed"
static let myIAPHelperPurchaseNotification = "IAPHelperPurchaseNotification" // Whenever a purchase takes place!
static let myRestorePurchaseNotification = "myRestorePurchaseNotification" // Whenever a restore takes place!
static let myPurchaseMadeThankYou = "myPurchaseMadeThankYou" // Whenever a first purchase takes place!
// init!
public init(productIDs: Set<MYProductIdentifier>) {
myProductIdentifiers = productIDs
// CHECK IF USER ALREADY BOUGHT! (to set the correct Defaults)
for productIdentifier in productIDs {
let purchased = MYConstants.nsDefaults.bool(forKey: productIdentifier)
if purchased {
myPurchasedProductIdentifiers.insert(productIdentifier)
print("Already purchased! \(productIdentifier)")
}
else {
print("Not yet purchased! \(productIdentifier)")
}
}
super.init()
SKPaymentQueue.default().add(self)
}
public func requestProducts(completionHandler: @escaping MYProductRequestCompletionHandler) {
myProductsRequest?.cancel()
myProductsRequestCompletionHandler = completionHandler
myProductsRequest = SKProductsRequest(productIdentifiers: myProductIdentifiers)
myProductsRequest?.delegate = self
myProductsRequest?.start()
}
public func buyProduct(product: SKProduct) {
let payment = SKPayment(product: product)
SKPaymentQueue.default().add(payment)
}
public func isProductPurchased(productIdentifier: MYProductIdentifier) -> Bool {
return myPurchasedProductIdentifiers.contains(productIdentifier)
}
public class func canMakePayment() -> Bool {
return SKPaymentQueue.canMakePayments()
}
public func restorePurchases() {
SKPaymentQueue.default().restoreCompletedTransactions()
}
}
// MARK: - SKProductRequestsDelegate
extension IAPHelper: SKProductsRequestDelegate {
public func productsRequest(_ request: SKProductsRequest, didReceive response: SKProductsResponse) {
let products = response.products
myProductsRequestCompletionHandler?(true, products)
reset()
}
public func request(_ request: SKRequest, didFailWithError error: Error) {
// Called wheneever there is an ERROR or NO PRODUCTS!
myProductsRequestCompletionHandler?(false, nil)
reset()
print("ERROR \(error.localizedDescription)")
}
private func reset() {
myProductsRequest = nil
myProductsRequestCompletionHandler = nil
}
}
// MARK: - SKPaymentTransactionObserver
extension IAPHelper: SKPaymentTransactionObserver {
// Tells us if the payment from the user was successful. Then react accordingly!
public func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
// Check outstanding transactions and react to them.
for transaction in transactions {
// check what kind of transaction is happening!
switch transaction.transactionState {
case .purchased :
completeTransaction(transaction: transaction)
case .failed :
failedTransaction(transaction: transaction)
case .restored :
restoreTransaction(transaction: transaction)
case .deferred :
showTransactionAsInProgress(deferred: true)
case .purchasing :
showTransactionAsInProgress(deferred: false)
}
}
}
//MARK: Payment transaction related methods
private func showTransactionAsInProgress(deferred: Bool) {
NotificationCenter.default.post(name: Notification.Name(IAPHelper.IAPTransactionInProgress), object: deferred)
}
private func completeTransaction(transaction: SKPaymentTransaction) {
postPurchaseNotificationForIdentifier(identifier: transaction.payment.productIdentifier)
NotificationCenter.default.post(name: NSNotification.Name(IAPHelper.myPurchaseMadeThankYou), object: nil)
SKPaymentQueue.default().finishTransaction(transaction)
}
private func failedTransaction(transaction: SKPaymentTransaction) {
// User aborts payment!!
if transaction.error!._code != SKError.Code.paymentCancelled.rawValue {
print("Error: \(transaction.error!.localizedDescription)")
}
NotificationCenter.default.post(name: Notification.Name(IAPHelper.IAPTransactionFailed), object: transaction.error)
SKPaymentQueue.default().finishTransaction(transaction)
}
private func restoreTransaction(transaction: SKPaymentTransaction) {
guard let productIdentifier = transaction.original?.payment.productIdentifier else {
return
}
postRestoreNotificationForIdentifier(identifier: productIdentifier)
SKPaymentQueue.default().finishTransaction(transaction)
}
private func postPurchaseNotificationForIdentifier(identifier: String?) {
// TELL VC THAT PURCHASE WAS OR WAS NOT success.
guard let identifier = identifier else {
return
}
Analytics.logEvent("IAP_Purchase_Made", parameters: nil)
// I believe it crashes right here.
// NEW ==================================
myPurchasedProductIdentifiers.insert(identifier)
MYConstants.nsDefaults.set(true, forKey: identifier)
MYConstants.unlockLogic(restoring: false)
NotificationCenter.default.post(name: Notification.Name(IAPHelper.myIAPHelperPurchaseNotification), object: identifier)
// END NEW ==============================
}
private func postRestoreNotificationForIdentifier(identifier: String?) {
// TELL VC THAT PURCHASE WAS OR WAS NOT success.
guard let identifier = identifier else {
return
}
Analytics.logEvent("IAP_Restore_Made", parameters: nil)
// NEW ==================================
myPurchasedProductIdentifiers.insert(identifier)
MYConstants.nsDefaults.set(true, forKey: identifier)
print("NEW RESTORE Identifier: \(identifier)")
MYConstants.unlockLogic(restoring: true)
NotificationCenter.default.post(name: NSNotification.Name(IAPHelper.myRestorePurchaseNotification), object: nil)
// END NEW ==============================
}
}
```<issue_comment>username_1: To find where this call to `timeIntervalSince1970` is being made, I would add a public extension for Bool so you can add a breakpoint to it and modify accordingly.
Something like this:
```
public extension Bool {
public var timeIntervalSince1970:TimeInterval {
get {
// Add breakpoint here
return 0
}
}
}
```
**Update:**
I failed to recognise that your crash mentions NSCFBoolean and not Bool.
As I understand it, NSCFBoolean is a private class bridging CFBoolean, so you can't extend it, but it may work to extend CFBoolean instead. More on NSCFBoolean: <https://nshipster.com/bool/>
Please, try adding this extension and breakpoint:
```
public extension CFBoolean {
var timeIntervalSince1970: TimeInterval {
get {
// Add breakpoint here
return 0
}
}
}
```
Upvotes: 2 <issue_comment>username_2: I've ***temporarily*** fixed this issue by going back to the pod versions before I ran the update.
**Here are the exact steps I took:**
1. In the Podfile, I commented out the 3 pods, saved & closed.
2. I then ran `pod install` to delete the 3 pods.
3. I then ran `pod deintegrate`.
4. I then manually deleted `Podfile.lock` and `Workspace` from the project directory.
5. I re-opened the Podfile, uncommented out the 3 pods, and then explicitly specified the dependency versions that I used before the pod update that was causing the crash. See below:
pod 'Firebase/Core', '4.8.0'
pod 'Google-Mobile-Ads-SDK', '7.27.0'
pod 'Firebase/Crash', '4.8.0'
The project works as intended now. The App completes the IAP with no exceptions.
**Why is this occurring?** I am led to believe it's NOT a code issue, because this issue only occurs when I update my pods.
Upvotes: 1 <issue_comment>username_3: I found out that this crash occurs when `registerDefaults:` is used to register a user preference key that is the same as the IAP product identifier.
This exception always occurs when the NSUserDefaults key that is the same as the IAP product ID key has a default prefs value registered like so:
```
#define kTipPurchasedIAPProductIdKey @"tipAdditional99Cents"
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
if (YES) {
//Firebase initialization
[FIRApp configure];
[[FIRConfiguration sharedInstance] setLoggerLevel:FIRLoggerLevelMin];
}
//Register user defaults
[[NSUserDefaults standardUserDefaults] registerDefaults:@{
kTipPurchasedIAPProductIdKey: @NO
}];
return YES;
}
```
In this case the IAP product ID key (that you would use to validate whether the iAP exists, etc) is tipAdditional99Cents. The crash occurs when the transaction returns as purchased/restored.
The current workaround is to register a different key in the user defaults instead of the actual IAP product ID.
I reported it to Google Firebase on their Github repository [here](https://github.com/firebase/firebase-ios-sdk/issues/4176) and they will fix the bug in the next release (6.13.0, fixed 14NOV2019).
Upvotes: 2 [selected_answer]
|
2018/03/16
| 351 | 955 |
<issue_start>username_0: `121426` <- Here, 1 is an alternating repetitive digit.
`523563` <- Here, NO digit is an alternating repetitive digit.
`552523` <- Here, both 2 and 5 are alternating repetitive digits.
`333567` <- Here, 3 is an alternating repetitive digit.
I found `re.findall(r'(?=(\d)\d\1)',P)` as the solution in the editorial but am not able to understand it.
Edit - Not allowed to use `if` conditions.<issue_comment>username_1: You may use this regex using lookaheads:
```
(\d)(?=\d\1)
```
[RegEx Demo](https://regex101.com/r/JpdXdl/2)
**Explanation:**
* `(\d)`: Match and capture a digit in group #1
* `(?=`: Start lookahead
+ `\d`: Match any digit
+ `\1`: Back-reference to captured group #1
* `)`: End lookahead
Upvotes: 6 [selected_answer]<issue_comment>username_2: You could do this without a regex using `zip()` in a list comprehension:
```
>>> s = '552523'
>>> [a for a, b in zip(s, s[2:]) if a == b]
['5', '2']
```
Upvotes: 4
|
2018/03/16
| 450 | 1,502 |
<issue_start>username_0: I thought a callable is just a function from the tf-library that I *call*. This:
```
tensor = tf.while_loop(tf.less(tf.rank(tensor), ndims), # cond
tf.append(tensor, axis=axis), # body
loop_vars = [tensor]) # loop_vars
```
errors to `TypeError: cond must be callable.`
What is a callable condition if not `tf.less()`?<issue_comment>username_1: A callable is anything that can be called. [See here](https://stackoverflow.com/questions/111234/what-is-a-callable-in-python).
The cond should be a function. You can use `lambda` ([See here](https://stackoverflow.com/questions/890128/why-are-python-lambdas-useful)) to make your condition `callable`.
[Here](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/while_loop) there is a minimal example of how to use `tf.while_loop`:
```
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: tf.add(i, 1)
r = tf.while_loop(c, b, [i])
```
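The same callable-vs-value distinction can be sketched in plain Python, with no TensorFlow involved (a hypothetical analogue, not the TF API): `tf.less(i, 10)` evaluates immediately to a value, while a `lambda` stays invokable so the loop can re-check it on every iteration.

```python
# cond/body must be callables: the loop invokes them once per iteration.
cond = lambda i: i < 10          # callable; evaluated lazily each step
body = lambda i: i + 1

already_evaluated = (0 < 10)     # just the value True; not callable

print(callable(cond))               # True
print(callable(already_evaluated))  # False

# Hand-rolled analogue of the while_loop contract:
i = 0
while cond(i):
    i = body(i)
print(i)  # 10
```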
And in the end, not a bad idea to post a minimal code that actually runs and generates your error.
Upvotes: 3 [selected_answer]<issue_comment>username_2: `tf.less` is an `Operation` object. To make it callable, just use a `lambda`:
```
tensor = tf.while_loop(lambda tensor: tf.less(tf.rank(tensor), ndims), # cond
lambda tensor: tf.append(tensor, axis=axis), # body
loop_vars = [tensor]) # loop_vars
```
Upvotes: 1
|
2018/03/16
| 172 | 573 |
<issue_start>username_0: ```
MIX_ENV=test mix compile
```
...is not working on Windows `cmd`. It shows:
>
> `'MIX_ENV'` is not recognized as an internal or external command, operable program or batch file
>
>
>
How to solve this problem?<issue_comment>username_1: Like @Dogbert said, in Windows you have to use the `set` command:
```
set MIX_ENV=test && mix compile
```
Upvotes: 1 <issue_comment>username_2: Try using other terminals in Windows.
Using commands like `MIX_ENV=dev iex -S mix phx.server` within Git Bash works fine on Windows machines.
Upvotes: 0
|
2018/03/16
| 331 | 1,327 |
<issue_start>username_0: I am using intent to transfer data.
When I try the following code,
Intent intent=getIntent() or Intent intent=getIntent(DrawerActivity.this, null)
Exception will occur.
How can I use an intent to put data but not jump to another activity?<issue_comment>username_1: If you want to put data somewhere in the app and not use it instantly, then you can store some values in SharedPreferences, especially if it's not a huge amount of data
Upvotes: 1 <issue_comment>username_2: Intent is a mechanism to pass data from one activity to another.
If you have to store some data in one place and later send it to another activity, create a bundle and store the data in it. When you have to go to the next activity, create an intent, set the class of the next activity, and pass the bundle via putExtras; the data will be sent to that activity.
Upvotes: 0 <issue_comment>username_3: It is depends of situation, you can use [PendingIntent](https://developer.android.com/reference/android/app/PendingIntent.html) or you can create [Bundle](https://developer.android.com/reference/android/os/Bundle.html) and put in it what values you want and when you want to start the activity then pass the Bundle as desctibed [here](https://stackoverflow.com/questions/768969/passing-a-bundle-on-startactivity).
Upvotes: 0
|
2018/03/16
| 730 | 2,529 |
<issue_start>username_0: Below is sample angular project which implements a sample Users Listing with datagrid row expand feature.
Now as a standalone webapp, it is throwing error while navigating to next page as shown below:
[](https://i.stack.imgur.com/bKnhP.png)
If I comment the "" in parent component. the error will disappear.
**Steps to run project:**
**prerequisite:**
1. node.js version 6.9 or higher
2. npm install
3. npm install -g json-server
Run:
1. start json-server to get mock data and load local message
bundles(cmd terminal #1)
**npm run json-server**
2. run the plugin in dev mode and watches your files for live-reload (cmd terminal #2)
**ng serve**
3. Problematic Code Files path "testplugin-ui\src\app\users\*"
Here is the project seed with code
<https://drive.google.com/open?id=1Meeo_SXZEJfYyboihimJGr2DtJHeMP8k>
or
<https://nofile.io/f/q4FerHzV0Z0>
As I need json-server, I'm uploading sample project instead of plunker/stackblitz.
let me know what is wrong with this code.<issue_comment>username_1: From [Datagrid expandable rows additional features](https://github.com/vmware/clarity/issues/1295#issuecomment-322194872), it would seem you have the `*clrIfExpanded` directive in the wrong place.
**user-row-detail.component.ts**
```html
{{child.username}}
{{child.fullName}}
{{child.role}}
```
instead of
```html
{{child.username}}
{{child.fullName}}
{{child.role}}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: @shravan,
There are quite a lot of problems with the code, including at least 35 Circular dependency warnings… Anyway, you might want to refactor your code, but for the time being, try changing your UserRowDetailComponent to this:
```
@Component({
selector: 'user-row-detail',
templateUrl: './user-row-detail.component.html',
encapsulation: ViewEncapsulation.Emulated
})
export class UserRowDetailComponent implements OnInit, OnDestroy {
@Input() children: Array;
constructor(private cd: ChangeDetectorRef) {
console.log("calling UserRowDetailComponent Comp constructor!");
}
ngOnInit() {
this.cd.detectChanges(); //Force change detection here
console.log("calling detectChanges user-row-detail from OnInit!");
}
ngOnDestroy() {
console.log("calling UserRowDetailComponent onDestroy!");
}
}
```
As you can see, I have added a forced change detection. You might want to review the blog post suggested by @Vikas to understand what is going on here.
Upvotes: 1
|
2018/03/16
| 1,314 | 5,470 |
<issue_start>username_0: I am creating a card game using .NET Standard and Monogame. I have created and tested the core of the game in a dll (PlayingCards.Common.Dll).
I now need to add a GUI layer and am having problems with the design.
For example, my card class lives in PlayingCards.Common.Dll. The GUI needs to use this player card but add a texture, position and possibly an update.
I am aware that I can use the decorator pattern and create a GuiCard that holds a Card and the extra functionality needed for the GUI however when my dealer class deals cards to users (As this is done in the core) they will only receive a Card and not a GuiCard.
It is also the **Player** class that holds a **hand** which contains x amount of **cards** so when drawing in the Gui I need to somehow draw all cards for all players....
Classes of interest:
```
public class Card
{
...
public Card(Suit suit, Value value)
{
Suit = suit;
Value = value;
...
}
}
public class Dealer
{
private readonly Deck _deck = new Deck();
...
private Card DealCard() => _deck.Draw();
}
//This is in GUI.DLL
public class CardEntity
{
...
private readonly Position _position;
private readonly Card _card = new Card(Suit.Spades, Value.King);
public CardEntity(GraphicsDevice graphicsDevice, Card card)
{
_position = new Position();
...
}
public void Draw(SpriteBatch spriteBatch)
{
var topLeftOfSprite = new Vector2(_position.X, _position.Y);
var sourceRectangle = new Rectangle
{
X = XPosOnSpriteSheet,
Y = YPosOnSpriteSheet,
Height = CardTextureHeight,
Width = CardTextureWidth - Offset
};
spriteBatch.Draw(_cardsSheetTexture, topLeftOfSprite, sourceRectangle, XnaColor.White);
}
}
```
Thank you.<issue_comment>username_1: There are many possible answers to design problems. It might be worth [refining your question](https://stackoverflow.com/help/how-to-ask) to be more specific if you want a good answer.
However, I did spot one thing that can be answered clearly.
>
> I am aware that I can use the decorator pattern and create a GuiCard that holds a Card and the extra functionality needed for the GUI however when my dealer class deals cards to users (As this is done in the core) they will only receive a Card and not a GuiCard.
>
>
>
The thing you're missing from the decorator pattern is deriving from a **base class**. In most examples they use either an **abstract base class** or an **interface**.
So in your particular case the classes should look more like this:
```
public abstract class Card
{
// A protected parameterless ctor lets a decorator that delegates to a
// wrapped Card (like CardEntity below) skip passing the values up.
protected Card() { }
public Card(Suit suit, Value value)
{
Suit = suit;
Value = value;
}
public virtual Suit Suit { get; }
public virtual Value Value { get; }
}
public class CardEntity : Card
{
private readonly Position _position;
private readonly Card _card;
public CardEntity(GraphicsDevice graphicsDevice, Card card)
{
_position = new Position();
_card = card;
}
public override Suit Suit => _card.Suit;
public override Value Value => _card.Value;
public void Draw(SpriteBatch spriteBatch)
{
var topLeftOfSprite = new Vector2(_position.X, _position.Y);
var sourceRectangle = new Rectangle
{
X = XPosOnSpriteSheet,
Y = YPosOnSpriteSheet,
Height = CardTextureHeight,
Width = CardTextureWidth - Offset
};
spriteBatch.Draw(_cardsSheetTexture, topLeftOfSprite, sourceRectangle, XnaColor.White);
}
}
```
Keep in mind, this will solve the problem by using the decorator pattern but it still might not be the right design choice. The decorator pattern comes with an overhead that ultimately forces you to implement considerably more code.
If you're looking to explore more design patterns that might be useful I read a great book a while back called [Game Programming Patterns](http://www.gameprogrammingpatterns.com/). The author even provides a [web version](http://www.gameprogrammingpatterns.com/contents.html) for free :)
Upvotes: 1 <issue_comment>username_2: After reading about and testing multiple design patterns this weekend I have got this working using the Visitor Pattern.
My **Card** and **Hand** implement **IVisitable** and an Accept Method.
My visitor then visits all the cards and holds all the logic to draw them. This means I can completely separate the concerns of any object from its drawable component, which is nicely SOLID. This will also separate the update logic in the future, as I will simply create a new UpdateVisitor.
The visitor code below, if anyone is interested:
```
using System.Collections.Generic;
...
namespace ...
{
public class CardDrawingVisitor : Visitor
{
private readonly SpriteBatch _spriteBatch;
private readonly PngHandler _cardPng;
private readonly Stack<Card> _result = new Stack<Card>();
public CardDrawingVisitor(GraphicsDevice graphicsDevice, SpriteBatch spriteBatch)
{
_spriteBatch = spriteBatch;
_cardPng = new PngHandler(graphicsDevice, "cards");
}
public override void Visit(Card card)
{
_result.Push(card);
}
public void Draw()
{
var cardNumber = 0;
foreach (var card in _result)
{
_cardPng.Draw(_spriteBatch, card, cardNumber++, 0);
}
}
}
}
```
Upvotes: 0
|
2018/03/16
| 1,246 | 4,193 |
<issue_start>username_0: I have to display a number in currency format using the country code with comma and period separators based on the country.
Example if the number is 4294967295000 then
1. USA = USD 4,294,967,295,000.00
2. INDIA = INR 42,94,96,72,95,000.00
I got it working for India, but for the USA the currency code and the number run together; I need a space between them:
```js
var number = 4294967295000;
console.log(number.toLocaleString('en-IN', {
style: 'currency', currency: 'INR', currencyDisplay: 'code'
})); // INR 42,94,96,72,95,000.00
console.log(number.toLocaleString('en-US', {
style: 'currency', currency: 'USD', currencyDisplay: 'code'
})); // USD4,294,967,295,000.00
```
How do I achieve spacing between "USD" and the number? I did not see anything in the options parameter regarding spacing. I can write custom code to add the space, but I am trying to see if there is a better option for doing the same.
```
currencyStr = currencyStr.replace(/^([A-Z]{3})(\d)/, '$1 $2');
```
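As a self-contained sketch of that approach (the helper name `addCurrencySpace` is purely illustrative):

```javascript
// Illustrative helper: inserts a space between a leading three-letter
// currency code and the first digit, leaving already-spaced strings alone.
function addCurrencySpace(currencyStr) {
  return currencyStr.replace(/^([A-Z]{3})(\d)/, '$1 $2');
}

console.log(addCurrencySpace('USD4,294,967,295,000.00'));   // USD 4,294,967,295,000.00
console.log(addCurrencySpace('INR 42,94,96,72,95,000.00')); // unchanged
```

Since the pattern requires a digit right after the code, locales that already emit a space (or a non-breaking space) are left untouched.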
Upvotes: 1 <issue_comment>username_2: >
> I did not see anything in the options parameter regarding spacing.
>
>
>
So I set off down the rabbit hole.
When you pass options in to `toLocaleString`, it follows a number of steps. Firstly, it converts your passed in options to a `NumberFormat` object. It [goes through a series of steps to do so](https://www.ecma-international.org/ecma-402/2.0/#sec-InitializeNumberFormat), one of which is:
>
> 23. If s is "currency", then
>
> a. Let c be converting c to upper case as specified in 6.1.
>
> b. Set numberFormat.[[currency]] to c.
>
>
>
That means that whatever you've passed in as the `currency` option, so long as it's a proper currency code, is converted to uppercase and stored in the internal `currency` property on the `NumberFormat` object.
We then see that there are some other [internal properties](https://www.ecma-international.org/ecma-402/2.0/#sec-Intl.NumberFormat-internal-slots) used on a `NumberFormat` - in this case, specifically the `positivePattern` internal slot. The spec notes:
>
> The value of these properties must be string values that contain a substring "{number}"; the values within the currency property must also contain a substring "{currency}". The pattern strings must not contain any characters in the General Category “Number, decimal digit" as specified by the Unicode Standard.
>
>
>
I.e. note that at this point, for a given culture, we've created an object that effectively has a formatting string along the lines of `{currency} {number}`. In Chrome's case (at least) for USD, however, it is `{currency}{number}`. Note that in IE and Edge you do get the space after USD, so they have decided on a formatting string of `{currency} {number}`.
Next up, we get to the [actual implementation of formatting the number](https://www.ecma-international.org/ecma-402/2.0/#sec-number-format-functions). Step 7 says:
>
> 7. If the value of the numberFormat.[[style]] is "currency", then
>
> **a. Let currency be the value of numberFormat.[[currency]].
>
> b. If numberFormat.[[currencyDisplay]] is "code", then
>
> i. Let cd be currency.**
> c. Else, if numberFormat.[[currencyDisplay]] is "symbol", then
>
> i. Let cd be an ILD string representing currency in short form. If the implementation does not have such a representation of currency, then use currency itself.
>
> d. Else, if numberFormat.[[currencyDisplay]] is "name", then
>
> i. Let cd be an ILD string representing currency in long form. If the implementation does not have such a representation of currency, then use currency itself.
>
> e. **Replace the substring "{currency}" within result with cd**.
>
>
>
emphasis mine, showing the steps taken in this case.
---
**TL;DR** - this behaviour appears to be browser dependent, and you'll need to parse and fix the resulting string yourself if you consistently want the space; there's no built-in way to do so.
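One way to do that parsing without regexes, assuming your target environments support `Intl.NumberFormat.prototype.formatToParts` (widely available in modern browsers and Node, but check before relying on it), is to rebuild the string from its typed parts and insert a separator only when the locale's pattern lacks one. A sketch:

```javascript
// Sketch: rebuild the formatted string from its typed parts, forcing a
// separator after the currency code when the locale pattern lacks one.
// Assumes Intl.NumberFormat.prototype.formatToParts is available.
function formatWithSpace(value, locale, currency) {
  const parts = new Intl.NumberFormat(locale, {
    style: 'currency',
    currency: currency,
    currencyDisplay: 'code'
  }).formatToParts(value);

  return parts
    .map((part, i) => {
      const next = parts[i + 1];
      // Only insert a space when the code is immediately followed by a
      // non-literal part, i.e. the pattern itself has no separator.
      if (part.type === 'currency' && next && next.type !== 'literal') {
        return part.value + ' ';
      }
      return part.value;
    })
    .join('');
}

console.log(formatWithSpace(4294967295000, 'en-US', 'USD'));
console.log(formatWithSpace(4294967295000, 'en-IN', 'INR'));
```

Whether the separator ends up being a regular space or a non-breaking one depends on the ICU data shipped with the browser/runtime, which is exactly the browser dependence described above.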
Upvotes: 2
|
2018/03/16
| 404 | 1,764 |
<issue_start>username_0: Uses of a private constructor:
it cannot be used to create an instance,
the class cannot be inherited,
it typically contains only static data members
Even without a private constructor I can access the class through its static declarations and static data members, as in the example below:
```
class Test
{
public static int x = 12;
public static int method()
{
return 13;
}
}
class Program
{
int resut1 = Test.x;
int resut2 = Test.method();
static void Main(string[] args)
{
}
}
```
so I have the following doubts:
why should one use a private constructor?
what is the use of a private constructor's body?
can we do anything inside a private constructor's body?
when does it execute? please explain clearly
thanks in advance<issue_comment>username_1: A private constructor prevents the compiler-generated default constructor from being used.
The default constructor allows a class to be instantiated from anywhere, while a private constructor prevents instantiation from outside the class.
According to this [microsoft doc](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/private-constructors), they are generally used to prevent people from instantiating classes that only have static members/functions.
Upvotes: -1 <issue_comment>username_2: Private constructors are used to prevent creating instances of a class when there are no instance fields or methods (such as the `Math` class), or when a method must be called to obtain an instance of the class. If all the methods in the class are static, consider making the whole class static. For more information see *Static Classes and Static Class Members*.
Follow this <https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/private-constructors>
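As an aside, the same intent can be sketched in JavaScript, which has no private constructors: make the constructor throw so the class can only be used through its static members (`MathUtils` below is a made-up example, not a real API):

```javascript
// A static-only "class": the throwing constructor plays the role that a
// private constructor plays in C#, i.e. it blocks `new MathUtils()`.
class MathUtils {
  constructor() {
    throw new Error('MathUtils cannot be instantiated');
  }
  static square(x) {
    return x * x;
  }
}

console.log(MathUtils.square(12)); // 144

try {
  new MathUtils();
} catch (e) {
  console.log(e.message); // "MathUtils cannot be instantiated"
}
```

The static members remain usable without ever constructing an instance, which is exactly the scenario the Microsoft doc describes.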
Upvotes: -1
|
2018/03/16
| 800 | 3,036 |
<issue_start>username_0: It’s not about using .gitignore to ignore certain files, it’s about completely excluding that particular repository from file watcher of VS Code.
It complains that more than 5000 files are open and then it just hangs my laptop. This repository has an insane amount of files; it's located in my home folder because my configs are stored in git. In .gitignore I've ignored all files, and when I need to stage files I use `git add -f`.
As you can see in the picture, the repository "holms" is always open, and it's causing me issues. This is the parent repo in my home folder. How can I ignore it completely? If I close that repo, it reappears after about 3 minutes of working in the editor.
[](https://i.stack.imgur.com/TmsZ1.png)<issue_comment>username_1: Had the same problem, try adding option:
`"git.autoRepositoryDetection": false`
and close repo from your parent (home) folder. It shouldn't appear again.
Update:
There's now an option to watch only repositories for opened files:
`"git.autoRepositoryDetection": "openEditors"`
This way, if you open any file in a specific repository, vscode will only watch it.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Another solution is to use the `Ignored Repositories` setting; for example, to ignore the repository in the home directory you can add:
```
"git.ignoredRepositories": [
"/home/holms"
],
```
Or from the settings editor (my username is obi)
[](https://i.stack.imgur.com/Sr2Ku.png)
Upvotes: 5 <issue_comment>username_3: You could also select which repositories to hide manually:
[](https://i.stack.imgur.com/IROft.png)
Upvotes: 2 <issue_comment>username_4: Open `source control repositories view [](https://i.stack.imgur.com/N4BOq.png)
And select the main repository you want. It won't be reset after restarting.
Upvotes: 2 <issue_comment>username_5: In VS Code 1.80, all you should need to do is close the repository. From [the release notes](https://code.visualstudio.com/updates/v1_80#_close-repository-improvements):
>
> In the past, users could close a repository either using the **Git: Close Repository** command or the **Close Repository** action in the Source Control view but there were certain actions (for example, opening a file from the closed repository) that would reopen the closed repository. This milestone we have made some improvements so the fact that a repository is closed is now persisted per workspace. Users can reopen closed repositories using the **Git: Reopen Closed Repositories...** command.
>
>
>
You can close a repository using the "Close Repository" item in the context menu of the repository in the SCM View, or the `Git: Close Repository` command in the command palette.
Upvotes: 0
|
2018/03/16
| 213 | 659 |
<issue_start>username_0: I am using Primeng Tabview in my angular application. I have the code as shown below to set the active tab using code.
```
First
Second
Third
```
But as soon as I manually switch to a tab and then click the button, it does not go to the right tab. Not sure what is wrong or how to make it work?<issue_comment>username_1: Update your `index` variable when changing tabs manually:
**HTML**
```
...
```
**TS**
```
handleChange(e) {
this.index = e.index;
}
```
See [Plunker](https://plnkr.co/edit/cIeuepKmQxN84wL1hfX4?p=preview)
Upvotes: 3 <issue_comment>username_2: Alternatively, use two-way binding:
```
<p-tabView [(activeIndex)]="index">
...
</p-tabView>
```
Upvotes: 2
|
2018/03/16
| 703 | 2,243 |
<issue_start>username_0: ```
def digits(n):
    total=0
    for i in range(0,n):
        if n/(10**(i))<1 and n/(10**(i-1))>=1:
            total+=i
        else:
            total+=0
    return total
```
I want to find the number of digits in 13, so I do the below:
```
print digits(13)
```
it gives me 0 for every number I input into the function.
there's nothing wrong with what I've written as far as I can see:
if a number has, say, 4 digits, e.g. 1234, then dividing by 10^4 makes it less than 1 (0.1234), while dividing by 10^3 makes it 1.234, which is greater than 1.
When i satisfies BOTH conditions you know you have the correct number of digits.
what's failing here? Please can you advise me on the specific method I've tried
and not a different one?
Remember for every n there can only be one i which satisfies that condition.
so when you add i to the total, only that single i gets added, and returning total will give you i.<issue_comment>username_1: Your loop makes no sense at all: it goes from 0 up to the number itself, not what you want.
It looks like python, so grab a solution that uses string:
```
def digits(n):
    return len(str(int(n)))  # make sure that it's an integer, then convert to a string; the number of characters == the number of digits
```
EDIT:
If you REALLY want to use a loop to count the number of digits, you can do it this way:
```
def digits(n):
    i = 0
    while n > 0:
        n = n // 10  # integer division drops the last digit each pass
        i += 1       # Python has no ++ operator
    return i
```
EDIT2:
since you really want to make your solution work, here is your problem. Provided you call your function like `digits(5)`, the argument is an integer, so your division is integer division. That means that `6/100 = 0`, not 0.06.
```
def digits(n):
    for i in range(0, n):
        if n/float(10**(i)) < 1 and n/float(10**(i-1)) >= 1:
            return i  # we don't need to check anything else, this is the solution
    return None  # no answer found; this should not happen, but it is nice to have (raising an exception would be even better)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I fixed it. Thanks for your input though :)
```
def digits(n):
    for i in range(0, n):
        if n/(10**(i)) < 1 and n/(10**(i-1)) >= 1:
            return i
```
Upvotes: 0
|
2018/03/16
| 444 | 1,766 |
<issue_start>username_0: I am looking to develop a Windows 10 Universal app based in HTML, CSS, and JavaScript for use on Microsoft Surface tablets primarily. I have found information that says it can be done on [this page](https://phonegap.com/blog/2016/04/25/windows-10-and-phonegap-cli-6_1-now-on-build/), but at the very bottom of the page it says it is in beta. However, I can't seem to find anything more than that post. No updates, or documentation or anything. Does anyone know if you can do this from the PhoneGap app on Mac OS?
If not, I guess the next best option is to do it on a Windows machine using Visual Studio 2017 and Cordova?
Any information anyone has on this subject would be helpful.
Thanks!<issue_comment>username_1: No, I am pretty sure you can't build UWP apps on macOS. The reason is quite similar as the reason you cannot build iOS apps on Windows - you could build the JS portion of your code, but the problem is the fact that UWP build tools and SDKs are **OS specific** and unless Microsoft specifically ports them to macOS, they cannot work there.
You could use Parallels and run Windows as second OS, or use a build server to build the app in the cloud. [Visual Studio App Center](https://appcenter.ms/) is a very good solution for you purpose as it allows you to configure a full build pipeline including UWP, iOS and Android builds.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can actually develop a UWP app using PhoneGap on a Mac if you are building it with web technologies (HTML, CSS, JavaScript), but you cannot build/publish the app on the Mac itself. If you have a Creative Cloud account, you can use [PhoneGap Build](https://build.phonegap.com/) to upload it to the service and have it built there.
Upvotes: 0
|
2018/03/16
| 1,008 | 3,728 |
<issue_start>username_0: The official docs seem to be missing documentation on using these output bindings; <https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-java> does not give any examples of using them.
Has anyone had any luck using these? I am looking to achieve something similar to:
```
@FunctionName("consumeNodeInfo")
fun consumeNodeInfoFromQueue(@QueueTrigger(queueName = "nodeinfos", connection = "AzureWebJobsStorage", name = "nodeinfos", dataType = "binary") addedNodeInfo: ByteArray,
@TableOutput(name = "networkmap", tableName = "networkmap") table: OutputBinding) {
table.value = SignedNodeInfoRow(addedNodeInfo)
}
open class SignedNodeInfoRow(val signedNodeInfo: ByteArray) {
val rowKey = signedNodeInfo.deserialize().raw.hash
}
```<issue_comment>username_1: **@BlobOutput:**
Please refer to my sample code:
```
@FunctionName("blob")
public String functionHandler(
@QueueTrigger(name = "myQueueItem", queueName = "walkthrough", connection = "AzureWebJobsStorage") String queue,
@BlobOutput(name = "blob", connection = "AzureWebJobsStorage" , path = "samples-java/2.txt") OutputBinding blob) {
blob.setValue(queue);
return queue;
}
```
`AzureWebJobsStorage` is configured in the `local.settings.json`
```
{
"IsEncrypted": false,
"Values": {
"AzureWebJobsStorage": "",
"AzureWebJobsDashboard": ""
}
}
```
***function.json:***
```
{
"scriptFile" : "..\\fabrikam-functions-1.0-SNAPSHOT.jar",
"entryPoint" : "com.fabrikam.functions.Function.functionHandler",
"bindings" : [ {
"type" : "queueTrigger",
"name" : "myQueueItem",
"direction" : "in",
"connection" : "AzureWebJobsStorage",
"queueName" : "walkthrough"
}, {
"type" : "blob",
"name" : "blob",
"direction" : "out",
"connection" : "AzureWebJobsStorage",
"path" : "samples-java/2.txt"
} ],
"disabled" : false
}
```
---
***@TableOutput:***
Just for summary:
We could check the properties in the `function.json` from this [doc](https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-table#output---configuration).
Never forget the property named `RowKey`.
***sample code:***
```
@FunctionName("consumeNodeInfo")
fun consumeNodeInfoFromQueue(@QueueTrigger(queueName = "nodeinfos", connection = "AzureWebJobsStorage", name = "nodeinfos", dataType = "binary") addedNodeInfo: ByteArray,
@TableOutput(name = "networkmap", tableName = "networkmap", connection = "AzureWebJobsStorage", partitionKey = "nodeInfos") table: OutputBinding) {
val nodeInfo = addedNodeInfo.deserialize()
table.value = SignedNodeInfoRow(nodeInfo.raw.hash.toString(), addedNodeInfo.toBase58())
}
data class SignedNodeInfoRow(val RowKey: String, val arrayAsBase58String: String)
```
Hope it helps you.
Upvotes: 3 [selected_answer]<issue_comment>username_2: So after much investigation, it turns out you have to have a property named "RowKey"
**The capital R is *very* important**
Below is an example of a working implementation.
```
@FunctionName("consumeNodeInfo")
fun consumeNodeInfoFromQueue(@QueueTrigger(queueName = "nodeinfos", connection = "AzureWebJobsStorage", name = "nodeinfos", dataType = "binary") addedNodeInfo: ByteArray,
@TableOutput(name = "networkmap", tableName = "networkmap", connection = "AzureWebJobsStorage", partitionKey = "nodeInfos") table: OutputBinding) {
val nodeInfo = addedNodeInfo.deserialize()
table.value = SignedNodeInfoRow(nodeInfo.raw.hash.toString(), addedNodeInfo.toBase58())
}
data class SignedNodeInfoRow(val RowKey: String, val arrayAsBase58String: String)
```
Upvotes: 1
|
2018/03/16
| 485 | 1,575 |
<issue_start>username_0: I'm trying to use SQL to get information from a Postgres database using Rails.
This is what I've tried:
```
Select starts_at, ends_at, hours, employee.maxname, workorder.wonum from events where starts_at>'2018-03-14'
inner join employees on events.employee_id = employees.id
inner join workorders on events.workorder_id = workorders.id;
```
I get the following error:
>
> ERROR: syntax error at or near "inner"
> LINE 2: inner join employees on events.employee\_id = employees.id
>
>
><issue_comment>username_1: Your request should look like this:
```
select starts_at, ends_at, hours, employee.maxname, workorder.wonum
from events
inner join employees on events.employee_id = employees.id
inner join workorders on events.workorder_id = workorders.id
where starts_at>'2018-03-14';
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: Sami's comment is correct, but since this question is tagged with `ruby-on-rails` you can try to use ActiveRecord's API to do the same:
Make sure that your models relations are defined
```
class Event < ActiveRecord::Base
belongs_to :employee
belongs_to :workorder
end
```
And then you can do something like:
```
Event
.where('starts_at > ?', '2018-03-14')
.joins(:employee, :workorder)
```
or
```
Event
.joins(:employee, :workorder)
.where('starts_at > ?', '2018-03-14')
```
And you don't need to worry which one goes first.
In general, it's suboptimal to hand-write SQL queries in Rails unless you absolutely need to, because they're harder to maintain.
Upvotes: 1
|
2018/03/16
| 3,776 | 17,125 |
<issue_start>username_0: **How to implement JWT based authentication and authorization in Spring Security**
I am trying to implement JWT-based authentication and authorization in my Spring Boot app. I followed the tutorial written [here](https://auth0.com/blog/implementing-jwt-authentication-on-spring-boot/), but it does not do anything in my app: it does not return a JWT token; instead I am authenticated and my request is fulfilled anyway. I am new to Spring Security. Here is my code.
I want my app to return a JWT token, and the requests must then be authorized using that token.
Here is my code.
**JWTAuthenticationFilter.java**
```
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
public class JWTAuthenticationFilter extends UsernamePasswordAuthenticationFilter {
private AuthenticationManager authenticationManager;
@Autowired
CustomUserDetailsService userService;
public JWTAuthenticationFilter(AuthenticationManager authenticationManager) {
this.authenticationManager = authenticationManager;
}
@Override
public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response)
throws AuthenticationException {
try {
CustomUserDetails user = new ObjectMapper().readValue(request.getInputStream(), CustomUserDetails.class);
return authenticationManager.authenticate(
new UsernamePasswordAuthenticationToken(user.getUsername(), user.getPassword(), new ArrayList<>()));
} catch (Exception e) {
}
return super.attemptAuthentication(request, response);
}
@Override
protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain,
Authentication auth) {
String loggedInUser = ((CustomUserDetails) auth.getPrincipal()).getUsername();
Claims claims = Jwts.claims().setSubject(loggedInUser);
if (loggedInUser != null) {
CustomUserDetails user = (CustomUserDetails) userService.loadUserByUsername(loggedInUser);
String roles[] = {};
for (Role role : user.getUser().getUserRoles()) {
roles[roles.length + 1] = role.getRole();
}
claims.put("roles", roles);
claims.setExpiration(new Date(System.currentTimeMillis() + EXPIRATION_TIME));
}
String token = Jwts.builder().setClaims(claims)
.setExpiration(new Date(System.currentTimeMillis() + EXPIRATION_TIME))
.signWith(SignatureAlgorithm.HS512, SECRET.getBytes()).compact();
response.addHeader(HEADER_STRING, TOKEN_PREFIX + token);
}
}
```
**JWTAuthorizationFilter.java**
```
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
public class JWTAuthorizationFilter extends BasicAuthenticationFilter {
public JWTAuthorizationFilter(AuthenticationManager authenticationManager) {
super(authenticationManager);
}
@Override
protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain chain)
throws IOException, ServletException {
String header = request.getHeader(HEADER_STRING);
if (header == null || !header.startsWith(TOKEN_PREFIX)) {
chain.doFilter(request, response);
return;
}
UsernamePasswordAuthenticationToken authentication = getToken(request);
SecurityContextHolder.getContext().setAuthentication(authentication);
chain.doFilter(request, response);
}
@SuppressWarnings("unchecked")
private UsernamePasswordAuthenticationToken getToken(HttpServletRequest request) {
String token = request.getHeader(HEADER_STRING);
System.out.println("-----------------------------------------------------");
System.out.println("Token: " + token);
System.out.println("-----------------------------------------------------");
if (token != null) {
Claims claims = Jwts.parser().setSigningKey(SECRET.getBytes())
.parseClaimsJws(token.replace(TOKEN_PREFIX, "")).getBody();
String user = claims.getSubject();
ArrayList<String> roles = (ArrayList<String>) claims.get("roles");
ArrayList<GrantedAuthority> rolesList = new ArrayList<>();
if (roles != null) {
for (String role : roles) {
rolesList.add(new MyGrantedAuthority(role));
}
}
if (user != null) {
return new UsernamePasswordAuthenticationToken(user, null, null);
}
return null;
}
return null;
}
}
```
**SecurityConfig.java**
```
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Qualifier("userDetailsService")
@Autowired
CustomUserDetailsService userDetailsService;
@Autowired
PasswordEncoder passwordEncoder;
@Autowired
AuthenticationManager authenticationManager;
@Autowired
JWTAuthenticationEntryPoint jwtAuthenticationEntryPoint;
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) {
try {
auth.userDetailsService(this.userDetailsService).passwordEncoder(passwordEncoder);
} catch (Exception e) {
}
}
/*
* @Autowired public void configureGlobal(AuthenticationManagerBuilder auth)
* throws Exception {
* auth.inMemoryAuthentication().withUser("student").password("<PASSWORD>").roles(
* "student").and().withUser("admin") .password("<PASSWORD>").roles("admin"); }
*/
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable();
// http.authorizeRequests().anyRequest().permitAll();
// http.authorizeRequests().antMatchers("/api/**").permitAll();
http.addFilter(new JWTAuthenticationFilter(authenticationManager));
http.addFilter(new JWTAuthorizationFilter(authenticationManager));
http.authorizeRequests().antMatchers("/api/student/**").hasAnyRole("STUDENT", "ADMIN");
http.authorizeRequests().antMatchers("/api/admin/**").hasRole("ADMIN");
http.authorizeRequests().antMatchers("/api/libararian/**").hasAnyRole("LIBRARIAN", "ADMIN");
http.authorizeRequests().antMatchers("/api/staff/**").hasAnyRole("STAFF", "ADMIN");
http.authorizeRequests().antMatchers("/api/teacher/**").hasAnyRole("TEACHER", "ADMIN");
http.authorizeRequests().antMatchers("/api/parent/**").hasAnyRole("PARENT", "ADMIN");
http.httpBasic().authenticationEntryPoint(jwtAuthenticationEntryPoint);
http.sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
// http.formLogin().and().logout().logoutSuccessUrl("/login?logout").permitAll();
}
}
```
**MyGrantedAuthority.java**
```
public class MyGrantedAuthority implements GrantedAuthority {
String authority;
MyGrantedAuthority(String authority) {
this.authority = authority;
}
@Override
public String getAuthority() {
// TODO Auto-generated method stub
return authority;
}
}
```
**JWTAuthenticationEntryPoint.java**
```
@Component
public class JWTAuthenticationEntryPoint implements AuthenticationEntryPoint {
@Override
public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException exception)
throws IOException, ServletException {
response.setStatus(403);
response.setContentType(MediaType.APPLICATION_JSON_VALUE);
String message;
if (exception.getCause() != null) {
message = exception.getCause().getMessage();
} else {
message = exception.getMessage();
}
byte[] body = new ObjectMapper().writeValueAsBytes(Collections.singletonMap("error", message));
response.getOutputStream().write(body);
}
}
```<issue_comment>username_1: I am also using jwt authentication on my project and I could see that you are missing an entry point which should be used on the project. I will tell you how I implemented it and see if it can help you =).
You need to implement an `AuthenticationEntryPoint` to tell Spring Security how failed authentication will be handled. It can be added after the filters, on the `http.authorizeRequests()` chain, with the command:
```
.authenticationEntryPoint(jwtAuthEndPoint);
```
where jwtAuthEndPoint is the following component:
```
@Component
public class JwtAuthenticationEntryPoint implements AuthenticationEntryPoint {
@Override
public void commence(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse,
AuthenticationException e) throws IOException, ServletException {
httpServletResponse.setStatus(SC_FORBIDDEN);
httpServletResponse.setContentType(MediaType.APPLICATION_JSON_VALUE);
String message;
if (e.getCause() != null) {
message = e.getCause().getMessage();
} else {
message = e.getMessage();
}
byte[] body = new ObjectMapper().writeValueAsBytes(Collections.singletonMap("error", message));
httpServletResponse.getOutputStream().write(body);
}
}
```
I would also suggest you to take a look on this tutorial, which helped me A LOT in this case: <https://sdqali.in/blog/2016/07/07/jwt-authentication-with-spring-web---part-4/>
Upvotes: 2 <issue_comment>username_2: I got it.
I followed another tutorial which made my job easy.
Here is the complete, working rewrite:
**TokenProvider.java**
```
package com.cloudsofts.cloudschool.security;
import static com.cloudsofts.cloudschool.security.SecurityConstants.EXPIRATION_TIME;
import static com.cloudsofts.cloudschool.security.SecurityConstants.SECRET;
import java.util.ArrayList;
import java.util.Date;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.stereotype.Component;
import com.cloudsofts.cloudschool.people.users.pojos.CustomUserDetails;
import com.cloudsofts.cloudschool.people.users.pojos.Role;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
@Component
public class TokenProvider {
@Autowired
CustomUserDetailsService userService;
public String createToken(String username) {
CustomUserDetails user = (CustomUserDetails) userService.loadUserByUsername(username);
Claims claims = Jwts.claims().setSubject(username);
ArrayList<String> rolesList = new ArrayList<>();
for (Role role : user.getUser().getUserRoles()) {
rolesList.add(role.getRole());
}
claims.put("roles", rolesList);
String token = Jwts.builder().setClaims(claims)
.setExpiration(new Date(System.currentTimeMillis() + EXPIRATION_TIME)).setIssuedAt(new Date())
.signWith(SignatureAlgorithm.HS512, SECRET).compact();
return token;
}
public Authentication getAuthentication(String token) {
String username = Jwts.parser().setSigningKey(SECRET).parseClaimsJws(token).getBody().getSubject();
UserDetails userDetails = this.userService.loadUserByUsername(username);
return new UsernamePasswordAuthenticationToken(userDetails, "", userDetails.getAuthorities());
}
}
```
**JWTFilter.java**
```
package com.cloudsofts.cloudschool.security;
import static com.cloudsofts.cloudschool.security.SecurityConstants.HEADER_STRING;
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.util.StringUtils;
import org.springframework.web.filter.GenericFilterBean;
import io.jsonwebtoken.ExpiredJwtException;
import io.jsonwebtoken.MalformedJwtException;
import io.jsonwebtoken.SignatureException;
import io.jsonwebtoken.UnsupportedJwtException;
public class JWTFilter extends GenericFilterBean {
public final static String AUTHORIZATION_HEADER = "Authorization";
private final TokenProvider tokenProvider;
public JWTFilter(TokenProvider tokenProvider) {
this.tokenProvider = tokenProvider;
}
@Override
public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain chain)
throws IOException, ServletException {
try {
HttpServletRequest httpRequest = (HttpServletRequest) servletRequest;
String jwt = resolveToken(httpRequest);
if (jwt != null) {
Authentication authentication = this.tokenProvider.getAuthentication(jwt);
if (authentication != null) {
SecurityContextHolder.getContext().setAuthentication(authentication);
}
}
chain.doFilter(servletRequest, servletResponse);
} catch (ExpiredJwtException | UnsupportedJwtException | MalformedJwtException | SignatureException
| UsernameNotFoundException e) {
// Application.logger.info("Security exception {}", e.getMessage());
((HttpServletResponse) servletResponse).setStatus(HttpServletResponse.SC_UNAUTHORIZED);
}
}
private String resolveToken(HttpServletRequest request) {
String bearerToken = request.getHeader(HEADER_STRING);
if (StringUtils.hasText(bearerToken) && bearerToken.startsWith("Bearer ")) {
return bearerToken.substring(7, bearerToken.length());
}
return null;
}
}
```
**JWTConfigurer.java**
```
import org.springframework.security.config.annotation.SecurityConfigurerAdapter;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.DefaultSecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;
public class JWTConfigurer extends SecurityConfigurerAdapter<DefaultSecurityFilterChain, HttpSecurity> {
private final TokenProvider tokenProvider;
public JWTConfigurer(TokenProvider tokenProvider) {
this.tokenProvider = tokenProvider;
}
@Override
public void configure(HttpSecurity http) throws Exception {
JWTFilter customFilter = new JWTFilter(tokenProvider);
http.addFilterBefore(customFilter, UsernamePasswordAuthenticationFilter.class);
}
}
```
**LoginController.java**
```
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import com.cloudsofts.cloudschool.people.users.pojos.User;
@RestController
public class LoginController {
private AuthenticationManager authenticationManager;
private TokenProvider tokenProvider;
private CustomUserDetailsService userService;
LoginController(AuthenticationManager auth, CustomUserDetailsService service, TokenProvider tokenProvider) {
this.authenticationManager = auth;
this.userService = service;
this.tokenProvider = tokenProvider;
}
@PostMapping("/login")
public String getToken(@RequestBody User user, HttpServletResponse response) {
UsernamePasswordAuthenticationToken authToken = new UsernamePasswordAuthenticationToken(user.getUsername(),
user.getPassword());
authenticationManager.authenticate(authToken);
return tokenProvider.createToken(user.getUsername());
}
}
```
[**Link to the Github project that helped me.**](https://github.com/ralscha/blog/blob/master/jwt/server/src/main/java/ch/rasc/jwt/security/jwt/TokenProvider.java)
Upvotes: 3
|
2018/03/16
| 266 | 930 |
<issue_start>username_0: I have a material mat table and the cell has a conditional. What I need to do is color the cell if the condition is true.
```
<ng-container matColumnDef="Value">
<mat-header-cell *matHeaderCellDef> Value </mat-header-cell>
<mat-cell *matCellDef="let record"> {{record.Value == -1 ? 'N/A' : record.Value }} </mat-cell>
</ng-container>
```
I only need the N/A colored red, otherwise no color. Any help would be appreciated. I have tried wrapping the binding with a `[style.background-color]="'red'"` condition without success.<issue_comment>username_1: Can you try:
```
<mat-header-cell *matHeaderCellDef> Value </mat-header-cell>
<mat-cell *matCellDef="let record" [class.red]="record.Value == -1"> {{record.Value == -1 ? 'N/A' : record.Value }} </mat-cell>
```
and
```
mat-cell.red {
background-color: red;
align-self: stretch; // so the cell takes the full height
line-height: 48px; // to vertically align the content if you use the default cell height
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can create a conditional style with `[ngStyle]`:
```
<mat-header-cell *matHeaderCellDef> Value </mat-header-cell>
<mat-cell *matCellDef="let record" [ngStyle]="{'background-color': record.Value == -1 ? 'red' : ''}"> {{record.Value == -1 ? 'N/A' : record.Value }} </mat-cell>
```
Upvotes: 2
|
2018/03/16
| 4,719 | 15,209 |
<issue_start>username_0: In my Producer/Consumer exercise, I want to build two threads, one for the producer and the other for the consumer.
They have access to a shared buffer for reading/writing messages, and I want to use a mutex to manage this buffer.
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <time.h>
#include <windows.h>
#define BUFFER_LENGHT 10
#define BUFFER_ERROR (msg_t *) NULL
#define TRUE 1
#define FALSE 0
pthread_mutex_t printf_mutex;
typedef struct msg_t{
void* content;
struct msg_t* (*msg_init) (void*);
void (*msg_destroy) (struct msg_t*);
struct msg_t* (*msg_copy)(struct msg_t*);
}msg_t;
void msg_destroy(msg_t* msg){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: Message deleted \n");
pthread_mutex_unlock(&printf_mutex);
free(msg);
}
struct msg_t* msg_copy(struct msg_t* message){
struct msg_t* new_message;
new_message = malloc(sizeof(msg_t));
new_message->content = message->content;
new_message->msg_copy = message->msg_copy;
new_message->msg_destroy = message->msg_destroy;
new_message->msg_init = message->msg_init;
return new_message;
}
struct msg_t* msg_init(void* content){
if(content != ""){
struct msg_t* messaggio;
messaggio = malloc(sizeof(msg_t));
messaggio->content = content;
messaggio->msg_init = msg_init;
messaggio->msg_destroy = msg_destroy;
messaggio->msg_copy = msg_copy;
return messaggio;
} else {
return NULL;
}
}
typedef struct buffer_t {
int whereToWrite;
int totMessagesToRead;
int maxMsgs;
pthread_mutex_t mutex;
struct buffer_t* (*buffer_init) (unsigned int maxsize);
struct msg_t* (*put_bloccante)(struct buffer_t*, struct msg_t*);
struct msg_t* messaggio[];
}buffer_t;
_Bool bufferIsFull(buffer_t *buffer){
int size = buffer->maxMsgs;
int nToRead = buffer->totMessagesToRead;
if(nToRead == size)
return TRUE;
else
return FALSE;
}
struct msg_t* put_bloccante(struct buffer_t* buffer, struct msg_t* messaggio){
struct msg_t* new_message;
new_message = messaggio->msg_copy(messaggio);
if(!bufferIsFull(buffer)){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Mutex enabled\n");
pthread_mutex_unlock(&printf_mutex);
pthread_mutex_lock(&(buffer->mutex));
buffer->whereToWrite++;
if(new_message == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on creating the message \n");
pthread_mutex_unlock(&printf_mutex);
} else{
buffer->messaggio[buffer->whereToWrite] = new_message;
buffer->totMessagesToRead++;
}
pthread_mutex_unlock(&(buffer->mutex));
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Mutex disabled\n");
pthread_mutex_unlock(&printf_mutex);
} else{
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Buffer full... \n");
pthread_mutex_unlock(&printf_mutex);
}
return new_message;
}
struct buffer_t* buffer_init (unsigned int maxsize){
if(maxsize > 0){
struct buffer_t* buffer;
buffer = malloc(sizeof(struct buffer_t) + maxsize*sizeof(struct msg_t));
buffer->whereToWrite = 0;
buffer->maxMsgs = maxsize;
buffer->totMessagesToRead = 0;
buffer->put_bloccante = put_bloccante;
pthread_mutex_init(&(buffer->mutex), NULL);
return buffer;
} else {
return NULL;
}
}
void sleep_random(char* whoCall){
srand(time(NULL));
int mills = rand() % 100+1;
pthread_mutex_lock(&printf_mutex);
printf("%s: Waiting for %d milliseconds \n", whoCall, mills);
pthread_mutex_unlock(&printf_mutex);
Sleep(mills);
pthread_mutex_lock(&printf_mutex);
printf("%s: Waiting finished \n", whoCall);
pthread_mutex_unlock(&printf_mutex);
}
void *consumatore(buffer_t *buffer){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: Thread created\n");
pthread_mutex_unlock(&printf_mutex);
int life = 500;
while(life > 0){
life--;
if(buffer->totMessagesToRead > 0){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: mutex enabled\n");
pthread_mutex_unlock(&printf_mutex);
int whereToRead = (buffer->maxMsgs - (buffer->totMessagesToRead - buffer->whereToWrite)) % buffer->maxMsgs;
pthread_mutex_lock(&printf_mutex);
printf("CONSUMEr: Messagge recived: %s \n", (char *) buffer->messaggio[whereToRead]);
printf("CONSUMER: The message is on the buffer position n. %d \n", whereToRead);
pthread_mutex_unlock(&printf_mutex);
pthread_mutex_lock(&(buffer->mutex));
buffer->messaggio[whereToRead]->msg_destroy(buffer->messaggio[whereToRead]);
buffer->totMessagesToRead--;
pthread_mutex_unlock(&(buffer->mutex));
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: mutex disabled\n");
pthread_mutex_unlock(&printf_mutex);
sleep_random("CONSUMER");
}
}
}
void *produttore(buffer_t *buffer){
void* messaggio;
int life = 500;
//Here I got the issue
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Thread created\n");
pthread_mutex_unlock(&printf_mutex);
while(life > 0){
messaggio = (rand() % 1000);
struct msg_t *msg_temp;
msg_temp = malloc(sizeof( msg_t));
msg_temp->msg_init = msg_init;
msg_temp = msg_temp->msg_init(messaggio);
msg_temp = put_bloccante(buffer, msg_temp);
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: There are %d messages to read \n", buffer->totMessagesToRead);
pthread_mutex_unlock(&printf_mutex);
sleep_random("PRODUCER");
life--;
}
}
int main(int argc, char **argv) {
struct buffer_t *buffer, *buffer_temp;
pthread_t pid_c, pid_p;
int result_consumatore, result_produttore;
pthread_mutex_init(&printf_mutex, NULL);
buffer_temp = malloc(sizeof(buffer_t));
buffer_temp->buffer_init = buffer_init;
buffer_temp = buffer_temp->buffer_init(BUFFER_LENGHT);
if(buffer_temp == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on buffer creation\n");
pthread_mutex_unlock(&printf_mutex);
return 0;
}
buffer = buffer_temp;
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Buffer size %d \n", buffer_temp->maxMsgs);
printf("MAIN: Current buffer position %d \n", buffer_temp->whereToWrite);
pthread_mutex_unlock(&printf_mutex);
result_produttore = pthread_create(&pid_p, NULL, (void *) produttore, (void *) buffer);
if(result_produttore != 0){
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Error on producer thread creation, error: %d \n", result_produttore);
pthread_mutex_unlock(&printf_mutex);
return 0;
}
result_consumatore = pthread_create(&pid_c, NULL, (void *) consumatore, (void *)buffer);
if(result_consumatore != 0){
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Error on consumer thread creation, error: %d \n", result_consumatore);
pthread_mutex_unlock(&printf_mutex);
return 0;
}
pthread_join(pid_c, NULL);
}
```
If I run that code, I get a segmentation fault on the pthread\_mutex\_lock(&printf\_mutex) call in the consumer thread, but my mutex is already initialized.<issue_comment>username_1: <NAME>!
The main problem (amongst others) of your code was that the thread
>
> void \*consumatore
>
>
>
doesn't have any pthread\_exit() call (produttore also).
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>
#define BUFFER_LENGHT 10
#define BUFFER_ERROR (msg_t *) NULL
#define TRUE 1
#define FALSE 0
pthread_mutex_t printf_mutex;
typedef struct msg_t{
void* content;
struct msg_t* (*msg_init) (void*);
void (*msg_destroy) (struct msg_t*);
struct msg_t* (*msg_copy)(struct msg_t*);
}msg_t;
void msg_destroy(msg_t* msg){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: Message deleted \n");
pthread_mutex_unlock(&printf_mutex);
free(msg);
}
struct msg_t* msg_copy(struct msg_t* message){
struct msg_t* new_message;
new_message = malloc(sizeof(msg_t));
new_message->content = message->content;
new_message->msg_copy = message->msg_copy;
new_message->msg_destroy = message->msg_destroy;
new_message->msg_init = message->msg_init;
return new_message;
}
struct msg_t* msg_init(void* content){
if(content != NULL){
struct msg_t* messaggio;
messaggio = malloc(sizeof(msg_t));
messaggio->content = content;
messaggio->msg_init = msg_init;
messaggio->msg_destroy = msg_destroy;
messaggio->msg_copy = msg_copy;
return messaggio;
} else {
return NULL;
}
}
typedef struct buffer_t {
int whereToWrite;
int totMessagesToRead;
int maxMsgs;
pthread_mutex_t mutex;
struct buffer_t* (*buffer_init) (unsigned int maxsize);
struct msg_t* (*put_bloccante)(struct buffer_t*, struct msg_t*);
struct msg_t* messaggio[];
}buffer_t;
_Bool bufferIsFull(buffer_t *buffer){
int size = buffer->maxMsgs;
int nToRead = buffer->totMessagesToRead;
if(nToRead == size)
return TRUE;
else
return FALSE;
}
struct msg_t* put_bloccante(struct buffer_t* buffer, struct msg_t* messaggio){
struct msg_t* new_message;
new_message = messaggio->msg_copy(messaggio);
if(!bufferIsFull(buffer)){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Mutex enabled\n");
pthread_mutex_unlock(&printf_mutex);
pthread_mutex_lock(&(buffer->mutex));
buffer->whereToWrite++;
if(new_message == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on creating the message \n");
pthread_mutex_unlock(&printf_mutex);
} else{
buffer->messaggio[buffer->whereToWrite] = new_message;
buffer->totMessagesToRead++;
}
pthread_mutex_unlock(&(buffer->mutex));
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Mutex disabled\n");
pthread_mutex_unlock(&printf_mutex);
} else{
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Buffer full... \n");
pthread_mutex_unlock(&printf_mutex);
}
return new_message;
}
struct buffer_t* buffer_init (unsigned int maxsize){
if(maxsize > 0){
struct buffer_t* buffer;
buffer = malloc(sizeof(struct buffer_t) + maxsize*sizeof(struct msg_t));
buffer->whereToWrite = 0;
buffer->maxMsgs = maxsize;
buffer->totMessagesToRead = 0;
buffer->put_bloccante = put_bloccante;
pthread_mutex_init(&(buffer->mutex), NULL);
return buffer;
} else {
return NULL;
}
}
void sleep_random(char* whoCall){
srand(time(NULL));
int mills = rand() % 100+1;
pthread_mutex_lock(&printf_mutex);
printf("%s: Waiting for %d milliseconds \n", whoCall, mills);
pthread_mutex_unlock(&printf_mutex);
sleep(mills);
pthread_mutex_lock(&printf_mutex);
printf("%s: Waiting finished \n", whoCall);
pthread_mutex_unlock(&printf_mutex);
}
void *consumatore(buffer_t *buffer){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: Thread created\n");
pthread_mutex_unlock(&printf_mutex);
int life = 500;
while(life > 0){
life--;
if(buffer->totMessagesToRead > 0){
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: mutex enabled\n");
pthread_mutex_unlock(&printf_mutex);
int whereToRead = (buffer->maxMsgs - (buffer->totMessagesToRead - buffer->whereToWrite)) % buffer->maxMsgs;
pthread_mutex_lock(&printf_mutex);
printf("CONSUMEr: Messagge recived: %s \n", (char *) buffer->messaggio[whereToRead]);
printf("CONSUMER: The message is on the buffer position n. %d \n", whereToRead);
pthread_mutex_unlock(&printf_mutex);
pthread_mutex_lock(&(buffer->mutex));
buffer->messaggio[whereToRead]->msg_destroy(buffer->messaggio[whereToRead]);
buffer->totMessagesToRead--;
pthread_mutex_unlock(&(buffer->mutex));
pthread_mutex_lock(&printf_mutex);
printf("CONSUMER: mutex disabled\n");
pthread_mutex_unlock(&printf_mutex);
sleep_random("CONSUMER");
}
}
pthread_exit(0);
}
void *produttore(buffer_t *buffer){
int* messaggio;
int life = 500;
//Here I got the issue
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Thread created\n");
pthread_mutex_unlock(&printf_mutex);
while(life > 0){
messaggio =(rand() % 1000);
struct msg_t *msg_temp;
msg_temp = malloc(sizeof( msg_t));
msg_temp->msg_init = msg_init;
msg_temp = msg_temp->msg_init(messaggio);
msg_temp = put_bloccante(buffer, msg_temp);
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: There are %d messages to read \n", buffer->totMessagesToRead);
pthread_mutex_unlock(&printf_mutex);
sleep_random("PRODUCER");
life--;
}
pthread_exit(0);
}
int main(int argc, char **argv) {
struct buffer_t *buffer, *buffer_temp;
pthread_t pid_c, pid_p;
int result_consumatore, result_produttore;
pthread_mutex_init(&printf_mutex, NULL);
buffer_temp = malloc(sizeof(buffer_t));
buffer_temp->buffer_init = buffer_init;
buffer_temp = buffer_temp->buffer_init(BUFFER_LENGHT);
if(buffer_temp == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on buffer creation\n");
pthread_mutex_unlock(&printf_mutex);
return 0;
}
buffer = buffer_temp;
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Buffer size %d \n", buffer_temp->maxMsgs);
printf("MAIN: Current buffer position %d \n", buffer_temp->whereToWrite);
pthread_mutex_unlock(&printf_mutex);
result_produttore = pthread_create(&pid_p, NULL, (void *) produttore, (void *) buffer);
if(result_produttore != 0){
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Error on producer thread creation, error: %d \n", result_produttore);
pthread_mutex_unlock(&printf_mutex);
return 0;
}
result_consumatore = pthread_create(&pid_c, NULL, (void *) consumatore, (void *)buffer);
if(result_consumatore != 0){
pthread_mutex_lock(&printf_mutex);
printf("MAIN: Error on consumer thread creation, error: %d \n", result_consumatore);
pthread_mutex_unlock(&printf_mutex);
return 0;
}
pthread_join(pid_c, NULL);
return 0;
}
```
Upvotes: 0 <issue_comment>username_2: Thanks for the help, but I solved my issue.
My problem was in the variable whereToWrite: it must point to the next empty element of the array, but in my code it pointed to the last element added, so when I tried to read with this code:
```
int whereToRead = (buffer->maxMsgs - (buffer->totMessagesToRead - buffer->whereToWrite)) % buffer->maxMsgs;
```
whereToRead pointed to an empty location, so I changed these lines in put\_bloccante:
```
pthread_mutex_lock(&(buffer->mutex));
buffer->whereToWrite++;
if(new_message == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on creating the message \n");
pthread_mutex_unlock(&printf_mutex);
} else{
buffer->messaggio[buffer->whereToWrite] = new_message;
buffer->totMessagesToRead++;
}
pthread_mutex_unlock(&(buffer->mutex));
```
with this code:
```
pthread_mutex_lock(&(buffer->mutex));
if(new_message == NULL){
pthread_mutex_lock(&printf_mutex);
printf("PRODUCER: Error on creating the message \n");
pthread_mutex_unlock(&printf_mutex);
} else{
buffer->messaggio[buffer->whereToWrite] = new_message;
buffer->totMessagesToRead++;
buffer->whereToWrite++;
}
pthread_mutex_unlock(&(buffer->mutex));
```
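As a sanity check of the corrected index arithmetic, here is a small single-threaded Python sketch (names are mine) using the same read-index formula. With `whereToWrite` now meaning "next empty slot", the formula always lands on the oldest unread message (the sketch also wraps the write index with `%`, which the C code would still need for longer runs):

```python
MAX_MSGS = 10

class RingBuffer:
    def __init__(self):
        self.slots = [None] * MAX_MSGS
        self.where_to_write = 0  # next empty slot (the post-fix meaning)
        self.to_read = 0         # count of unread messages

    def put(self, msg):
        assert self.to_read < MAX_MSGS, "buffer full"
        self.slots[self.where_to_write % MAX_MSGS] = msg
        self.where_to_write += 1
        self.to_read += 1

    def get(self):
        assert self.to_read > 0, "buffer empty"
        # Same formula as the C code: index of the oldest unread slot.
        where_to_read = (MAX_MSGS - (self.to_read - self.where_to_write)) % MAX_MSGS
        self.to_read -= 1
        return self.slots[where_to_read]

buf = RingBuffer()
for i in range(7):
    buf.put(i)
assert [buf.get() for _ in range(7)] == [0, 1, 2, 3, 4, 5, 6]  # FIFO order
```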
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,554 | 4,968 |
<issue_start>username_0: I have a list of OrderedDicts "d" that looks like this:
```
[OrderedDict([
('id', '1'),
('date', '20170101'),
('quantity', '10')]),
OrderedDict([
('id', '2'),
('date', '20170102'),
('quantity', '3')]),
OrderedDict([
('id', '3'),
('date', '20170102'),
('quantity', '1')])]
```
I'm trying to group by 'date', calculate the sum of quantity, and display the two columns 'date' and 'sum\_quantity'. How can I do that without using the pandas groupby options?
Thanks!<issue_comment>username_1: Here is a pure Python approach. This is just an example to give you a hint.
```
from collections import OrderedDict
import itertools
data=[OrderedDict([
('id', '1'),
('date', '20170101'),
('quantity', '10')]),
OrderedDict([
('id', '2'),
('date', '20170102'),
('quantity', '3')]),
OrderedDict([
('id', '3'),
('date', '20170102'),
('quantity', '1')])]
def get_quantity(ord_dict):
new_ = []
for g in [list(i) for j, i in itertools.groupby(ord_dict, lambda x: x['date'])]:
if len(g) > 1:
sub_dict={}
temp = []
date = []
for i in g:
temp.append(int(i['quantity']))
date.append(i['date'])
sub_dict['date'] = date[0]
sub_dict['sum_quantity'] = sum(temp)
new_.append(sub_dict)
else:
for i in g:
sub_dict={}
sub_dict['date']=i['date']
sub_dict['sum_quantity']=i['quantity']
new_.append(sub_dict)
return new_
print(get_quantity(data))
```
output:
```
[{'date': '20170101', 'sum_quantity': '10'}, {'date': '20170102', 'sum_quantity': 4}]
```
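Note that `itertools.groupby` only merges *adjacent* items, so the code above relies on the input already being ordered by date. A more compact variant of the same idea (sorting first, so it also works for unsorted input — function name is mine) could look like:

```python
from itertools import groupby

def sum_by_date(records):
    key = lambda r: r["date"]
    return [
        {"date": d, "sum_quantity": sum(int(r["quantity"]) for r in group)}
        for d, group in groupby(sorted(records, key=key), key=key)
    ]

records = [
    {"id": "1", "date": "20170101", "quantity": "10"},
    {"id": "2", "date": "20170102", "quantity": "3"},
    {"id": "3", "date": "20170102", "quantity": "1"},
]
assert sum_by_date(records) == [
    {"date": "20170101", "sum_quantity": 10},
    {"date": "20170102", "sum_quantity": 4},
]
```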
Upvotes: 1 [selected_answer]<issue_comment>username_2: >
> I'm trying to do the group by 'date' and calculating the sum of quantity and display these two columns 'date' and 'sum\_quantity'
>
>
>
This code puts the dates as keys and then the value is the sum of quantity. The output is a bit of a guess until you show an example of your desired output.
```
In[2]: from collections import OrderedDict, defaultdict
...:
...:
...: def solution(data):
...: result = defaultdict(int)
...: for od in data:
...: result[od['date']] += int(od['quantity'])
...: return result
...:
In[3]: data = [
...: OrderedDict([
...: ('id', '1'),
...: ('date', '20170101'),
...: ('quantity', '10')]),
...: OrderedDict([
...: ('id', '2'),
...: ('date', '20170102'),
...: ('quantity', '3')]),
...: OrderedDict([
...: ('id', '3'),
...: ('date', '20170102'),
...: ('quantity', '1')])
...: ]
In[4]: grouped = solution(data)
In[5]: grouped
Out[5]: defaultdict(int, {'20170101': 10, '20170102': 4})
In[6]: print('{:>8}\tSum Quantity'.format('Date'))
...: for k, v in grouped.items():
...: print('{}\t{:>12}'.format(k, v))
...:
Date Sum Quantity
20170101 10
20170102 4
```
Upvotes: 2 <issue_comment>username_3: **Given**
```
from collections import OrderedDict, defaultdict
lst = [
OrderedDict([
("id", "1"),
("date", "20170101"),
("quantity", "10")]),
OrderedDict([
("id", "2"),
("date", "20170102"),
("quantity", "3")]),
OrderedDict([
("id", "3"),
("date", "20170102"),
("quantity", "1")])
]
```
Borrowing from [`more_itertools.map_reduce`](https://more-itertools.readthedocs.io/en/latest/api.html#more_itertools.map_reduce) recipe:
```
def map_reduce(iterable, keyfunc, valuefunc=None, reducefunc=None):
valuefunc = (lambda x: x) if (valuefunc is None) else valuefunc
ret = defaultdict(list)
for item in iterable:
key = keyfunc(item)
value = valuefunc(item)
ret[key].append(value)
if reducefunc is not None:
for key, value_list in ret.items():
ret[key] = reducefunc(value_list)
ret.default_factory = None
return ret
```
**Code**
`map_reduce` builds a `defaultdict` with customizable keys and values. The reducing function is applied to the final list of values.
```
kfunc = lambda d: d["date"]
vfunc = lambda d: int(d["quantity"])
rfunc = lambda lst_: sum(lst_)
agg = map_reduce(lst, keyfunc=kfunc, valuefunc=vfunc, reducefunc=rfunc)
agg
# defaultdict(None, {'20170101': 10, '20170102': 4})
```
We use a list comprehension for the final result.
```
[{"date": k, "sum_quantity": v} for k, v in agg.items()]
# [{'date': '20170101', 'sum_quantity': 10}, {'date': '20170102', 'sum_quantity': 4}]
```
Upvotes: 0
|
2018/03/16
| 295 | 1,193 |
<issue_start>username_0: How to share code between multiple [NativeScript](https://www.nativescript.org/) apps and multiple Angular apps using [@nrwl/nx](https://nrwl.io/nx) (Nx Workspace)?<issue_comment>username_1: Have a look at <https://nstudio.io/xplat/>. It seems these guys are trying to make it easier, but it still isn't an easy thing to do.
Upvotes: 2 <issue_comment>username_2: I tried out [xplat](https://nstudios.io/xplat/). It did look promising but I was trying to integrate into an existing codebase and it felt disjointed trying to figure out what belonged in their core and how I was supposed to deal with all the existing modules each with their own service that used HttpClientModule and needed to be converted to using the NativeScript version.
Instead, I ended up going with the Code Sharing approach that NativeScript recommends. [Code Sharing](https://docs.nativescript.org/angular/code-sharing/intro)
This method complicates the folder structure as you duplicate any file in-place that needs to be different for NativeScript. Basically, you add `.tns` just before the file extension. Then it uses some build tools that use those files when they are there.
Upvotes: 1
|
2018/03/16
| 316 | 1,313 |
<issue_start>username_0: In our JMeter result JTL file I need the sampler type, as we have different SLAs based on the sampler (Java request, HTTP request, JDBC request).
Is there a way we can print the sampler type in the JTL file?
Currently, even after making all the changes in jmeter.properties, I am still not getting the sampler type.
|
2018/03/16
| 793 | 2,665 |
<issue_start>username_0: I would like the `find()` function to return `true` when it finds `'john'` and stop iterating through the array.
Or return `false` if looking for a name, let's say `maria`, that is not in any of our objects.
What am I not understanding that I can't achieve what I need in this code?
Thanks.
```
var array = [
{name:'paul',age:20},
{name:'john',age:30},
{name:'albert',age:40}
];
var find = function(arr){
arr.forEach(function(i){
if(i.name === 'john'){
console.log('found him');
return true;
} else {
console.log('not there');
return false;
}
});
};
find(array);
```
I have seen some similar questions here but I could not find or understand an answer to my question. Explicitly, I need the function to be able to return the `name` value and at the same time return `true` or `false`.<issue_comment>username_1: You could use [`Array#some`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/some), which stops iterating if a [truthy](https://developer.mozilla.org/en-US/docs/Glossary/Truthy) value is returned inside of the callback.
```js
var array = [{ name: 'paul', age:20 }, { name: 'john', age:30 }, { name: 'albert', age:40 }],
find = function(array, name) {
return array.some(function(object) {
return object.name === name;
});
};
console.log(find(array, 'paul')); // true
console.log(find(array, 'maria')); // false
```
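(For comparison, the same short-circuit pattern in Python is `any`, which likewise stops at the first truthy result:)

```python
people = [
    {"name": "paul", "age": 20},
    {"name": "john", "age": 30},
    {"name": "albert", "age": 40},
]

def find(people, name):
    # any() stops at the first match, like Array.prototype.some
    return any(p["name"] == name for p in people)

assert find(people, "john") is True
assert find(people, "maria") is False
```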
Upvotes: 4 [selected_answer]<issue_comment>username_2: You are returning inside `forEach(function(i) {})`, which only returns from the inner function `function(i) {}`; that does not return from the outer function `find()`. Also, your logic with `return false;` is problematic. Simply using normal `for` loops would be fine.
```js
var array = [
{name:'paul',age:20},
{name:'john',age:30},
{name:'albert',age:40}
];
var find = function(arr, name) {
for (let i of arr) {
if(i.name === name){
console.log('found ' + name);
return true;
}
}
console.log(name + ' not there');
return false;
};
find(array, 'paul');
find(array, 'maria');
```
Upvotes: 1 <issue_comment>username_3: Stop using the `forEach` method and try this instead:
```js
var array = [
{name:'paul',age:20},
{name:'john',age:30},
{name:'albert',age:40}
];
var find = function(arr){
var returnValue = false;
 for (var i = 0; i < arr.length; i++) {
 if(arr[i].name === 'john') {
 returnValue = true;
break;
}
}
return returnValue;
};
find(array);
```
Upvotes: 0
|
2018/03/16
| 1,172 | 4,131 |
<issue_start>username_0: I'm trying to interact with a dockerized PostgreSQL server using SQLAlchemy. Something like:
```
engine = create_engine('postgresql://user:user_password@localhost:5432/database')
df.to_sql('table', engine)
```
Which gives me this error:
>
> OperationalError: (psycopg2.OperationalError) could not connect to server: Connection refused
> Is the server running on host "localhost" (::1) and accepting
> TCP/IP connections on port 5432?
> could not connect to server: Connection refused
> Is the server running on host "localhost" (127.0.0.1) and accepting
> TCP/IP connections on port 5432?
>
>
>
Which suggests the Docker postgresql (which is running) isn't available at that port. I've tried adding `-p 5432:5432` to my `docker-compose exec` without success. Any tips?<issue_comment>username_1: As your Flask app and Postgres images are not running in the same Docker container, you cannot access the database via localhost!
In your database URL, replace *localhost* with the name of the Postgres service in docker-compose:
```
engine = create_engine('postgresql://user:user_password@{}:5432/database'.format('service_name_of_postgres'))
```
[kudos to this answer.](https://stackoverflow.com/a/48422901/4683950)
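To make the host swap explicit, a tiny helper can build the URL — here `db_postgres` is a placeholder for whatever service name your docker-compose file declares:

```python
def pg_url(user, password, host, db, port=5432):
    """Build a PostgreSQL connection URL for SQLAlchemy's create_engine()."""
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"

# From the host machine you would use localhost; from another container
# in the same compose network, use the service name instead.
assert pg_url("user", "user_password", "db_postgres", "database") == \
    "postgresql://user:user_password@db_postgres:5432/database"
```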
Upvotes: 3 <issue_comment>username_2: Connecting SQLAlchemy with PostgreSQL on Docker
===============================================
Hello, guys! Let me try to help you to connect a Flask Web App to a PostgreSQL database, both running on Docker.
### Please read everything before trying to copy and paste code
First of all you should have installed these python modules:
* SQLAlchemy
* psycopg2
You should install them using **pip**.
psycopg2 is one of the PostgreSQL drivers needed to connect SQLAlchemy to your PostgreSQL database.
---
I'm showing you my *docker-compose.yml* file.
```
version: '3'
services:
web:
#YourWebServiceConfiguration
#YourWebServiceConfiguration
#YourWebServiceConfiguration
db_postgres:
image: postgres:latest
container_name: postgres_db_container
ports:
- "5432:5432"
volumes:
- postgres_db_vol:/var/lib/postgresql/data
environment:
POSTGRES_USER: yourUserDBName
POSTGRES_PASSWORD: <PASSWORD>
POSTGRES_DB: yourDBName
```
---
Now let's take a look at my python code! This is a *test.py* file.
```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
conn_url = 'postgresql+psycopg2://yourUserDBName:yourUserDBPassword@yourDBDockerContainerName/yourDBName'
engine = create_engine(conn_url)
db = scoped_session(sessionmaker(bind=engine))
query_rows = db.execute("SELECT * FROM anyTableName").fetchall()
for register in query_rows:
print(f"{register.col_1_name}, {register.col_2_name}, ..., {register.col_n_name}")
# Note: f-strings like this require Python 3.6 or later!
```
---
If you want to learn more you can take a look at the following links.
* [SQLAlchemy docs](https://docs.sqlalchemy.org/en/13/core/engines.html#postgresql)
General database connection url:
dialect+driver://username:password@host:port/database
* [Simple App with Flask, SQLAlchemy and Docker - <NAME>](https://medium.com/@hmajid2301/implementing-sqlalchemy-with-docker-cb223a8296de)
If using Docker you need to replace the host value for the docker container name.
Upvotes: 3 <issue_comment>username_3: I had the same problem. After a few tries, it turned out that the problem is in the **connection URL** string given to the *sqlalchemy.create\_engine()*
Firstly I had similarly as you:
```
db_URL = f"postgresql://{db_login}:{db_password}@localhost:5432/postgres"
```
Fixed one:
```
db_URL = f"postgresql+psycopg2://{db_login}:{db_password}@db:5432/postgres"
```
A bit dumb approach from my side doing two alterations at once, but I've checked it and it looks like the problem was **localhost**. Changing the server address to the *container\_name* stated in the *docker\_compose.yml* should fix the problem with the connection.
```
db:
image: postgres:12.7-alpine
container_name: db
```
Upvotes: 2
|
2018/03/16
| 1,054 | 3,746 |
<issue_start>username_0: I am trying to play with the `__getattribute__` method in Python.
```
class Foo:
def __init__(self):
self.x = 3
def __getattribute__(self, name):
print("getting attribute %s" %name)
return super().__getattribute__(self, name)
f = Foo()
f.x
```
I get the "getting attribute" printed out, but also a TypeError here:expected 1 arguments, got 2.
So, what's wrong with this snippet?<issue_comment>username_1: As your flask app and Postgres images are not in the same docker container you cannot access the database via localhost !!
in your database URL replace *localhost* the name of Postgres Service in docker-compose/
```
engine = create_engine('postgresql://user:user_password@{}:5432/database'.format('service_name_of_postgres'))
```
[kudos to this answer.](https://stackoverflow.com/a/48422901/4683950)
Upvotes: 3 <issue_comment>username_2: Connecting SQLAlchemy with PostgreSQL on Docker
===============================================
Hello, guys! Let me try to help you to connect a Flask Web App to a PostgreSQL database, both running on Docker.
### Please read everything before trying to copy and paste code
First of all you should have installed these python modules:
* SQLAlchemy
* psycopg2
You should install them using **pip**.
psycopg2 is one of the PostgreSQL drivers needed to connect SQLAlchemy to your PostgreSQL database.
---
I'm showing you my *docker-compose.yml* file.
```
version: '3'
services:
  web:
    #YourWebServiceConfiguration
    #YourWebServiceConfiguration
    #YourWebServiceConfiguration
  db_postgres:
    image: postgres:latest
    container_name: postgres_db_container
    ports:
      - "5432:5432"
    volumes:
      - postgres_db_vol:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: yourUserDBName
      POSTGRES_PASSWORD: <PASSWORD>
      POSTGRES_DB: yourDBName

# the named volume must also be declared at the top level
volumes:
  postgres_db_vol:
```
---
Now let's take a look at my python code! This is a *test.py* file.
```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
conn_url = 'postgresql+psycopg2://yourUserDBName:yourUserDBPassword@yourDBDockerContainerName/yourDBName'
engine = create_engine(conn_url)
db = scoped_session(sessionmaker(bind=engine))
query_rows = db.execute("SELECT * FROM anyTableName").fetchall()
for register in query_rows:
    print(f"{register.col_1_name}, {register.col_2_name}, ..., {register.col_n_name}")
    # Note that f-strings require Python 3.6 or later!
```
---
If you want to learn more you can take a look at the following links.
* [SQLAlchemy docs](https://docs.sqlalchemy.org/en/13/core/engines.html#postgresql)
General database connection url:
dialect+driver://username:password@host:port/database
* [Simple App with Flask, SQLAlchemy and Docker - <NAME>](https://medium.com/@hmajid2301/implementing-sqlalchemy-with-docker-cb223a8296de)
If using Docker, you need to replace the host value with the Docker container name.
Upvotes: 3
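For what it's worth, the `TypeError` in the original snippet comes from passing `self` explicitly: `super().__getattribute__` is already a bound method, so it expects only the attribute name. A minimal corrected sketch:

```python
class Foo:
    def __init__(self):
        self.x = 3

    def __getattribute__(self, name):
        print("getting attribute %s" % name)
        # super().__getattribute__ is already bound to this instance,
        # so pass only the attribute name
        return super().__getattribute__(name)

f = Foo()
f.x  # prints "getting attribute x" and evaluates to 3
```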
|
2018/03/16
| 514 | 1,958 |
<issue_start>username_0: [](https://i.stack.imgur.com/kL7o7.png)
I uploaded a build and sent it to an external team via TestFlight a day ago. I fixed a few bugs QA found and sent another build today, but when I went to send it to the external team I got this error message and was blocked from deploying it to TestFlight. I am on Xcode 9.2 and macOS 10.12.6, and I successfully sent a build over TestFlight 24-48 hours ago.<issue_comment>username_1: I got the same problem today, and it didn't work after several retries. I managed to finally upload and deploy by using an older Xcode: Version 8.3.2 (8E2002).
Edit: In response to the comments on the post, this question is not a duplicate because Xcode 9.2 is the current release. <https://developer.apple.com/download/>
It seems to be a bug on Apple's side, because they don't usually stop accepting builds from release versions out of the blue. For instance, I managed to upload and deploy on version 8.3.2.
Upvotes: -1 <issue_comment>username_2: TL;DR: Bump build #, rebuild, upload, and try submitting app for review again
I ran into this issue as well. I built the app using Xcode 9.2 (9C40b), which is a full release and not beta. I attempted to submit the app for review and external testing multiple times at various points throughout the day, no luck, all failed. I was able to submit for external testing yesterday, but today things stopped working.
After submitting a bug report to Apple about this, I received a response with the suggestion to bump the build number, rebuild the app, and upload it again to iTunes Connect/TestFlight. After doing so, I was successful in submitting the app to our external testers group.
[](https://i.stack.imgur.com/LJRAo.png)
Upvotes: 0
|
2018/03/16
| 836 | 3,075 |
<issue_start>username_0: I have tried everything and am running out of ideas. Please help.
We have Elastic Beanstalks in AWS deployed with 64bit Amazon Linux 2017.09 v2.7.1 running Ruby 2.4 (Puma). Running on them are Nginx 1.12.1 and Rails 5.
In a controller I'm downloading a PDF from an API and then attempting to send it along.
```
data = API::StatementPDF.new(id: params[:id]).result
send_data data.force_encoding('BINARY'),
          :filename => "statement.pdf",
          :type => "application/pdf",
          :disposition => "attachment",
          stream: 'true',
          buffer_size: '4096',
          :x_sendfile => true
```
I've tried with and without `force_encoding`, `buffer_size`, and `x_sendfile`. I have tried increasing the buffer size to huge numbers, and tried disabling gzip in nginx in `.ebextensions/nginx/conf.d/nginx-extensions.conf`:
```
# Configure GZIP compression
gzip off;
gzip_min_length 1100;
gzip_types application/pdf;
gzip_vary on;
```
But no matter what I do, the PDF comes through corrupted, and if I open the file in a text editor, many of the characters aren't encoded correctly.
[](https://i.stack.imgur.com/cGjvu.png)
On the left is the working PDF, on the right the PDF sent by the Beanstalk / Rails / Nginx server.
The PDFs come through fine when running the Rails server locally. Adding a static PDF into the app and serving it up also causes the file to be corrupted.
```
send_file "#{Rails.root}/app/assets/statement.pdf", type: "application/pdf", x_sendfile: true
```
...so I'm convinced it's a problem with Nginx, Puma or the Elastic Beanstalk. Please help.
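One detail worth noting about the corruption itself (an illustrative sketch of the failure mode, not a fix from this thread): the kind of character substitution visible in the screenshot is what you get when a binary stream is decoded and re-encoded as text somewhere between Rails and the client, for example by a layer treating the body as UTF-8:

```python
# A few bytes that are legal inside a PDF but invalid as UTF-8
data = b"%PDF-1.4\n\x93\xfa\x28 stream bytes"

# A binary-safe hop leaves the payload untouched
untouched = bytes(data)
assert untouched == data

# A text-oriented hop (decode + re-encode) silently substitutes
# replacement characters for every invalid byte sequence
mangled = data.decode("utf-8", errors="replace").encode("utf-8")
assert mangled != data
```

If a direct request to the Puma port returns intact bytes while a request through nginx does not, the proxy layer is the one doing the transcoding.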
|
2018/03/16
| 670 | 2,554 |
<issue_start>username_0: I wanted to check whether two or more threads accessing a shared resource without mutual exclusion would show interleaving problems. I noticed that if I increase the number in the loop condition from 50 to 1000, the threads no longer run one after the other but overlap. Why does this happen, and how can I solve it?
This is the code
```
import java.util.*;
import java.lang.*;
import java.io.*;

class Mutex {
    private int val;

    public Mutex(int val) {
        this.val = val;
    }

    public int getVal() {
        return val;
    }

    public void printVal() {
        int i = 0;
        while (true) {
            System.out.println(Thread.currentThread().getName() + " " + this.getVal());
            if (i == 50)
                break;
            else
                i++;
        }
    }

    public static void main(String args[]) {
        final Mutex m1 = new Mutex(1);
        final Mutex m2 = new Mutex(2);

        new Thread(new Runnable() {
            @Override
            public void run() {
                m1.printVal();
            }
        }, "THREAD-1").start();

        new Thread(new Runnable() {
            @Override
            public void run() {
                m2.printVal();
            }
        }, "THREAD-2").start();
    }
}
```
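A side note on the code above: the two threads do not actually share any state — each one has its own `Mutex` object — so at 50 iterations THREAD-1 simply finishes before THREAD-2 is scheduled, and at 1000 iterations their lifetimes overlap. To keep the outputs from interleaving, both threads must acquire one shared lock and hold it for the whole loop; the idea is sketched here in Python for brevity (in Java the equivalent is a `synchronized` block on one shared object):

```python
import threading

lock = threading.Lock()   # one lock shared by both threads
output = []

def print_val(val, n):
    # hold the lock across the whole loop, so the other thread
    # cannot interleave its output with ours
    with lock:
        for _ in range(n):
            output.append(val)

t1 = threading.Thread(target=print_val, args=(1, 1000), name="THREAD-1")
t2 = threading.Thread(target=print_val, args=(2, 1000), name="THREAD-2")
t1.start(); t2.start()
t1.join(); t2.join()

# whichever thread ran first wrote all 1000 of its values contiguously
assert output[:1000].count(output[0]) == 1000
assert output[1000:].count(output[1000]) == 1000
```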
|
2018/03/16
| 1,351 | 2,559 |
<issue_start>username_0: I have this list:
```
n = ['FAKE0.0.1.8', '10.2.2.22', '10.2.182.10', '10.2.20.5', '10.2.94.135', '10.2.110.1', '10.2.94.73', '10.2.20.1', '10.2.94.38', '10.2.94.37', '10.2.7.121']
```
And this dictionary:
```
i = {'10.2.94.38': {'area': '0.0.1.8'}}
```
As you can see, there is only one item inside the list, which is a valid key for the dictionary: `10.2.94.38`.
If I do the following, I can get the inner dictionary `{'area':'0.0.1.8'}`:
```
>>> [i.get(x,'NA') for x in n]
['NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', {'area': '0.0.1.8'}, 'NA', 'NA']
```
If I do the following, I can get the value `0.0.1.8`, as usual:
```
>>> i[n[8]]['area']
'0.0.1.8'
```
The problem I'm facing is that I cannot reach the ultimate value `0.0.1.8`. I've tried the following without success:
```
>>> [i.get(x['area'],'NA') for x in n]
Traceback (most recent call last):
File "", line 1, in
File "", line 1, in
TypeError: string indices must be integers
```
How can I do it? The final result that I want to achieve is:
```
['NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', '0.0.1.8', 'NA', 'NA']
```
Thanks!
Lucas<issue_comment>username_1: One way is to use `try` / `except`:
```
n = ['FAKE0.0.1.8', '10.2.2.22', '10.2.182.10', '10.2.20.5', '10.2.94.135',
'10.2.110.1', '10.2.94.73', '10.2.20.1', '10.2.94.38', '10.2.94.37', '10.2.7.121']
i = {'10.2.94.38': {'area': '0.0.1.8'}}
def try_get_all(i, n):
    for j in n:
        try:
            yield i[j]['area']
        except KeyError:
            yield 'NA'
res = list(try_get_all(i, n))
# ['NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', 'NA', '0.0.1.8', 'NA', 'NA']
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is very simple, don't make it complicated:
```
n = ['FAKE0.0.1.8', '10.2.2.22', '10.2.182.10', '10.2.20.5', '10.2.94.135', '10.2.110.1', '10.2.94.73', '10.2.20.1', '10.2.94.38', '10.2.94.37', '10.2.7.121']
i = {'10.2.94.38': {'area': '0.0.1.8'}}
print([i[x]['area'] if x in i else 'NaN' for x in n])
```
output:
```
['NaN', 'NaN', 'NaN', 'NaN', 'NaN', 'NaN', 'NaN', 'NaN', '0.0.1.8', 'NaN', 'NaN']
```
If you only want values then you can also filter the result:
```
n = ['FAKE0.0.1.8', '10.2.2.22', '10.2.182.10', '10.2.20.5', '10.2.94.135', '10.2.110.1', '10.2.94.73', '10.2.20.1', '10.2.94.38', '10.2.94.37', '10.2.7.121']
i = {'10.2.94.38': {'area': '0.0.1.8'},'10.2.2.22':{'area': '0.0.1.9'}}
print(list(map(lambda x:i[x]['area'],filter(lambda x:x in i,n))))
```
output:
```
['0.0.1.9', '0.0.1.8']
```
Upvotes: 0
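For completeness, the same result can be had by chaining `dict.get` with an empty dict as the default, avoiding both the `try`/`except` and the explicit membership test:

```python
n = ['FAKE0.0.1.8', '10.2.2.22', '10.2.182.10', '10.2.20.5', '10.2.94.135',
     '10.2.110.1', '10.2.94.73', '10.2.20.1', '10.2.94.38', '10.2.94.37', '10.2.7.121']
i = {'10.2.94.38': {'area': '0.0.1.8'}}

# a missing key falls through to {} and then to 'NA'
res = [i.get(x, {}).get('area', 'NA') for x in n]
assert res == ['NA'] * 8 + ['0.0.1.8'] + ['NA'] * 2
```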
|
2018/03/16
| 1,177 | 4,300 |
<issue_start>username_0: Hibernate 5 does not support the PostgreSQL `jsonb` data type by default.
Is there any way to implement `jsonb` support for Hibernate + Spring JPA?
If there is a way, what are the pros and cons of using `jsonb` with Hibernate?<issue_comment>username_1: Thanks to [<NAME>](https://vladmihalcea.com), we have such an opportunity!
He created the [hibernate-types](https://github.com/vladmihalcea/hibernate-types) lib:
```
<dependency>
    <groupId>com.vladmihalcea</groupId>
    <artifactId>hibernate-types-52</artifactId>
    <version>2.1.1</version>
</dependency>
```
which adds support for `json`, `jsonb` and other types to Hibernate:
```
@Data
@NoArgsConstructor
@Entity
@Table(name = "parents")
@TypeDefs({
        @TypeDef(name = "string-array", typeClass = StringArrayType.class),
        @TypeDef(name = "int-array", typeClass = IntArrayType.class),
        @TypeDef(name = "json", typeClass = JsonStringType.class),
        @TypeDef(name = "jsonb", typeClass = JsonBinaryType.class)
})
public class Parent implements Serializable {

    @Id
    @GeneratedValue(strategy = SEQUENCE)
    private Integer id;

    @Column(length = 32, nullable = false)
    private String name;

    @Type(type = "jsonb")
    @Column(columnDefinition = "jsonb")
    private List<Child> children;

    @Type(type = "string-array")
    @Column(columnDefinition = "text[]")
    private String[] phones;

    public Parent(String name, List<Child> children, String... phones) {
        this.name = name;
        this.children = children;
        this.phones = phones;
    }
}

@Data
@NoArgsConstructor
@AllArgsConstructor
public class Child implements Serializable {
    private String name;
}
```
More info: [1](https://vladmihalcea.com/the-hibernate-types-open-source-project-is-born/), [2](https://vladmihalcea.com/how-to-map-json-collections-using-jpa-and-hibernate/)
Upvotes: 7 [selected_answer]<issue_comment>username_2: [@username_1](https://stackoverflow.com/users/5380322/cepr0)-s answer is correct but although I got some issues with it. I was getting exception when trying to use it with PostgreSQL `org.hibernate.MappingException: No Dialect mapping for JDBC type: 1111`. Way to solve this, in my case, was adding custom hibernate dialect. [This](https://vladmihalcea.com/hibernate-no-dialect-mapping-for-jdbc-type/) resource might be helpful.
```
// CustomPostgreSQLDialect.java
public class CustomPostgreSQLDialect extends PostgreSQL10Dialect {

    public CustomPostgreSQLDialect() {
        super();
        registerHibernateType(Types.OTHER, StringArrayType.class.getName());
        registerHibernateType(Types.OTHER, IntArrayType.class.getName());
        registerHibernateType(Types.OTHER, JsonStringType.class.getName());
        registerHibernateType(Types.OTHER, JsonBinaryType.class.getName());
        registerHibernateType(Types.OTHER, JsonNodeBinaryType.class.getName());
        registerHibernateType(Types.OTHER, JsonNodeStringType.class.getName());
    }
}
```
---
```
# application.yml
spring:
  jpa:
    properties:
      hibernate:
        dialect: "com.test.CustomPostgreSQLDialect"
```
Upvotes: 3 <issue_comment>username_3: Update for Hibernate 6+ (2022+)
-------------------------------
With the advent of Hibernate 6, mapping to PostgreSQL JSON(B) [has become possible out of the box](https://docs.jboss.org/hibernate/orm/6.0/userguide/html_single/Hibernate_User_Guide.html#basic-mapping-json).
One only needs to annotate the field with `@JdbcTypeCode(SqlTypes.JSON)`.
**With non-typed JSON:**
```java
@Entity
public class Entity {
    ...
    @JdbcTypeCode(SqlTypes.JSON)
    private Map<String, Object> payload;
    ...
}
```
(Although not every JSON will serialize into a Map).
**With JSON which is serialized as a custom Java type:**
```java
public class Foo implements Serializable {

    private String strValue;
    private Long longValue;

    public String getStrValue() {
        return strValue;
    }

    public void setStrValue(String strValue) {
        this.strValue = strValue;
    }

    public Long getLongValue() {
        return longValue;
    }

    public void setLongValue(Long longValue) {
        this.longValue = longValue;
    }
}

@Entity
public class Entity {
    ...
    @JdbcTypeCode(SqlTypes.JSON)
    private Foo fooJson;
    ...
}
```
Please see this article for more examples:
<https://thorben-janssen.com/persist-postgresqls-jsonb-data-type-hibernate/>
Upvotes: 2
|