date (stringlengths 10..10) | nb_tokens (int64 60..629k) | text_size (int64 234..1.02M) | content (stringlengths 234..1.02M)
---|---|---|---
2018/03/14 | 416 | 1,190 |
<issue_start>username_0: Suppose x is a real number between 0 and 1
If n = 2 there are two fraction ranges: 0 to 0.5 and 0.5 to 1.
If n = 3 there are three fraction ranges: 0 to 0.33, 0.33 to 0.66 and 0.66 to 1.
I want to know the most efficient algorithm that tells which fraction x belongs to.
If x = 0.2 and n = 3, x belongs to the first fraction, so the index is 0.
If x = 0.4 and n = 3, x belongs to the second fraction, so the index is 1.
Here's the Python 3 code, which has O(N) complexity.
```
def index(x, n):
    for i in range(0, n):
        if i/n <= x and x <= (i + 1)/n:
            return i
```
I want to know if there is a better algorithm, maybe with constant time?
Edit: I did not explicitly say this before, but both 0 and 1 are legitimate values for x, and the result should be n - 1 when x = 1<issue_comment>username_1: Just multiply both numbers and cut off the decimal places:
```
def index(x, n):
    return int(x*n)
```
Complexity is O(1)
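Note that the question's edit requires `index(1, n)` to return `n - 1`, while `int(x*n)` returns `n` at `x = 1`. One way to cover that edge case while staying O(1) (a minimal sketch):
```
def index(x, n):
    # int() truncates toward zero, keeping the lookup O(1); min() clamps
    # x == 1, which would otherwise land in a non-existent bucket n.
    return min(int(x * n), n - 1)
```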
Upvotes: 4 [selected_answer]<issue_comment>username_2: I would simply do the following:
```
def index(x,n):
    return (10*x // n)
```
Upvotes: 1 <issue_comment>username_3: Simply do this:
```
import math

def index(x, n):
    return math.ceil(x*n)
```
Upvotes: 0
2018/03/14 | 1,055 | 3,034 |
<issue_start>username_0: So, my data is travel data.
I want to create a column `df['user_type']` that determines whether the `df['user_id']` occurs more than once. If it does occur more than once, I'll list them as a frequent user.
Here is my code below, but it takes way too long:
```
#Column that determines user type
def determine_user_type(val):
    df_freq = df[df['user_id'].duplicated()]
    user_type = ""
    if(val in df_freq['user_id'].values):
        user_type = "Frequent"
    else:
        user_type = "Single"
    return user_type

df['user_type'] = df['user_id'].apply(lambda x: determine_user_type(x))
```<issue_comment>username_1: Use [`numpy.where`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html) with [`duplicated`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html), and to return all dupes add the parameter `keep=False`:
```
df = pd.DataFrame({'user_id':list('aaacbbt')})
df['user_type'] = np.where(df['user_id'].duplicated(keep=False), 'Frequent','Single')
```
Alternative:
```
d = {True:'Frequent',False:'Single'}
df['user_type'] = df['user_id'].duplicated(keep=False).map(d)
print (df)
user_id user_type
0 a Frequent
1 a Frequent
2 a Frequent
3 c Single
4 b Frequent
5 b Frequent
6 t Single
```
EDIT:
```
df = pd.DataFrame({'user_id':list('aaacbbt')})
print (df)
user_id
0 a
1 a
2 a
3 c
4 b
5 b
6 t
```
Here [`drop_duplicates`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html) remove all duplicates row by column `user_id` and return only first row (default parameter is `keep='first'`):
```
df_single = df.drop_duplicates('user_id')
print (df_single)
user_id
0 a
3 c
4 b
6 t
```
But [`Series.duplicated`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.duplicated.html) return `True`s for all dupes without first:
```
print (df['user_id'].duplicated())
0 False
1 True
2 True
3 False
4 False
5 True
6 False
Name: user_id, dtype: bool
df_freq = df[df['user_id'].duplicated()]
print (df_freq)
user_id
1 a
2 a
5 b
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using username_1's data
```
df = pd.DataFrame({'user_id':list('aaacbbt')})
```
You can use array slicing
```
df.assign(
user_type=
np.array(['Single', 'Frequent'])[
df['user_id'].duplicated(keep=False).astype(int)
]
)
user_id user_type
0 a Frequent
1 a Frequent
2 a Frequent
3 c Single
4 b Frequent
5 b Frequent
6 t Single
```
Upvotes: 2 <issue_comment>username_3: Data from Jez; the method involves `value_counts`:
```
df.user_id.map(df.user_id.value_counts().gt(1).replace({True:'Frequent',False:'Single'}))
Out[52]:
0 Frequent
1 Frequent
2 Frequent
3 Single
4 Frequent
5 Frequent
6 Single
Name: user_id, dtype: object
```
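For comparison, the same labelling can also be written with a `groupby`/`transform` size check. A minimal sketch on the same data (one more vectorized variant, names taken from the examples above):
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'user_id': list('aaacbbt')})
# transform('size') broadcasts each group's row count back onto its rows,
# so users appearing more than once are marked 'Frequent'.
df['user_type'] = np.where(df.groupby('user_id')['user_id'].transform('size') > 1,
                           'Frequent', 'Single')
```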
Upvotes: 2
2018/03/14 | 1,379 | 3,299 |
<issue_start>username_0: I have a DataFrame like this
```
dict_ = {'Date':['2018-01-01','2018-01-02','2018-01-03','2018-01-04','2018-01-05'],'Col1':[1,2,3,4,5],'Col2':[1.1,1.2,1.3,1.4,1.5],'Col3':[0.33,0.98,1.54,0.01,0.99]}
df = pd.DataFrame(dict_, columns=dict_.keys())
```
I then calculate the pearson correlation between the columns and filter out columns that are correlated above my threshold of 0.95
```
def trimm_correlated(df_in, threshold):
    df_corr = df_in.corr(method='pearson', min_periods=1)
    df_not_correlated = ~(df_corr.mask(np.eye(len(df_corr), dtype=bool)).abs() > threshold).any()
    un_corr_idx = df_not_correlated.loc[df_not_correlated[df_not_correlated.index] == True].index
    df_out = df_in[un_corr_idx]
    return df_out
```
which yields
```
uncorrelated_factors = trimm_correlated(df, 0.95)
print uncorrelated_factors
Col3
0 0.33
1 0.98
2 1.54
3 0.01
4 0.99
```
So far I am happy with the result, but I would like to keep one column from each correlated pair, so in the above example I would like to include Col1 or Col2, to get something like this:
```
Col1 Col3
0 1 0.33
1 2 0.98
2 3 1.54
3 4 0.01
4 5 0.99
```
Also on a side note, is there any further evaluation I can do to determine which of the correlated columns to keep?
thanks<issue_comment>username_1: You can use [`np.tril()`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tril.html#numpy.tril) instead of `np.eye()` for the mask. Masking the diagonal and the lower triangle means each pair is checked only once, so only the later column of a correlated pair is flagged and the earlier one is kept:
```
def trimm_correlated(df_in, threshold):
    df_corr = df_in.corr(method='pearson', min_periods=1)
    df_not_correlated = ~(df_corr.mask(np.tril(np.ones([len(df_corr)]*2, dtype=bool))).abs() > threshold).any()
    un_corr_idx = df_not_correlated.loc[df_not_correlated[df_not_correlated.index] == True].index
    df_out = df_in[un_corr_idx]
    return df_out
```
Output:
```
Col1 Col3
0 1 0.33
1 2 0.98
2 3 1.54
3 4 0.01
4 5 0.99
```
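To see why the upper-triangle mask keeps the first column of each correlated pair, you can inspect the intermediate mask. A minimal sketch using the numeric columns from the question (only the resulting flags are shown, not the raw correlation values):
```
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col1': [1, 2, 3, 4, 5],
                   'Col2': [1.1, 1.2, 1.3, 1.4, 1.5],
                   'Col3': [0.33, 0.98, 1.54, 0.01, 0.99]})

df_corr = df.corr(method='pearson')
# Mask the diagonal and the lower triangle, so each pair is checked once.
masked = df_corr.mask(np.tril(np.ones(df_corr.shape, dtype=bool)))
print((masked.abs() > 0.95).any())
# Col1's column is fully masked, so Col1 is never flagged; Col2 is flagged
# through the Col1/Col2 pair (their correlation is exactly 1.0, since Col2
# is a linear function of Col1), so Col2 is dropped and Col1 is kept.
```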
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use this directly on the dataframe to sort out the top correlation values.
```
import pandas as pd
import numpy as np

def correl(X_train):
    cor = X_train.corr()
    corrm = np.corrcoef(X_train.transpose())
    corr = corrm - np.diagflat(corrm.diagonal())
    print("max corr:", corr.max(), ", min corr: ", corr.min())
    c1 = cor.stack().sort_values(ascending=False).drop_duplicates()
    high_cor = c1[c1.values != 1]
    ## change this value to get more correlation results
    thresh = 0.9
    display(high_cor[high_cor > thresh])  # display() assumes an IPython/Jupyter session

correl(X)
output:
max corr: 0.9821068918331252 , min corr: -0.2993837739125243
object at 0x0000017712D504E0>
count_rech_2g_8 sachet_2g_8 0.982107
count_rech_2g_7 sachet_2g_7 0.979492
count_rech_2g_6 sachet_2g_6 0.975892
arpu_8 total_rech_amt_8 0.946617
arpu_3g_8 arpu_2g_8 0.942428
isd_og_mou_8 isd_og_mou_7 0.938388
arpu_2g_6 arpu_3g_6 0.933158
isd_og_mou_6 isd_og_mou_8 0.931683
arpu_3g_7 arpu_2g_7 0.930460
total_rech_amt_6 arpu_6 0.930103
isd_og_mou_7 isd_og_mou_6 0.926571
arpu_7 total_rech_amt_7 0.926111
dtype: float64
```
Upvotes: 0
2018/03/14 | 548 | 2,246 |
<issue_start>username_0: I was wondering if anybody knows what happens if you have a Cron setting on an Azure Function to run every 5 minutes if its task takes more than 5 minutes to execute. Does it back up? Or should I implement a locking feature that would prevent something, say in a loop, from handling data already being processed by a prior call?<issue_comment>username_1: An Azure Function with a timer trigger will only run one job at a time. If a job takes longer, the next one is delayed.
Quoting from [Wiki](https://github.com/Azure/azure-webjobs-sdk-extensions/wiki/TimerTrigger#scheduling)
>
> If your function execution takes longer than the timer interval,
> another execution won't be triggered until after the current
> invocation completes. The next execution is scheduled after the
> current execution completes.
>
>
>
That is true even if you scale out.
<https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer#scale-out>
You may want to ensure that your function does not time out on you. See <https://buildazure.com/2017/08/17/azure-functions-extend-execution-timeout-past-5-minutes/> on how to configure function timeout.
Upvotes: 5 [selected_answer]<issue_comment>username_2: If a timer trigger occurs again before the previous run has completed, it will start a second parallel execution.
Ref: <https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference#parallel-execution>
>
> When multiple triggering events occur faster than a single-threaded function runtime can process them, the runtime may invoke the function multiple times in parallel.
>
>
>
Something else to note is that even using the new `WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT` settings, you cannot prevent multiple executions on the same instance.
>
> Each instance of the function app, whether the app runs on the Consumption hosting plan or a regular App Service hosting plan, might process concurrent function invocations in parallel using multiple threads.
>
>
>
The best advice is to either reduce the duration it takes to execute, through optimisation, or change the approach to the problem. Perhaps splitting the task down and triggering it in a different way may help.
Upvotes: 1
2018/03/14 | 454 | 1,487 |
<issue_start>username_0: I have two commands to execute in release phase and as per this tutorial (<https://devcenter.heroku.com/articles/release-phase>), I have included them in a shell script named *release.sh* (located in the root of my Django project).
```
#!/bin/bash
python manage.py migrate
python manage.py compress
```
In my *Procfile*, I added the script thus, as described in the article.
```
release: ./release.sh
web: gunicorn myapp.wsgi --log-file -
```
But during release I get the following error.
```
/bin/sh: 1: ./release.sh: not found
```
Then the release fails.
I don't know if the problem is with the path in *Procfile* (I also tried `bash $PWD/releash.sh`) or the file not being available at the time it is called. Any help would be appreciated.
**EDIT:**
My *release.sh* was in a subfolder and that's why it wasn't found, but now I'm getting permission denied.
```
/bin/sh: 1: ./release.sh: Permission denied
```<issue_comment>username_1: For this to work, `release.sh` must be executable
Locally, you could run `chmod a+x release.sh`. But you would not want to do that on Heroku, so instead you can change the Procfile to have this:
```
release: bash release.sh
web: gunicorn myapp.wsgi --log-file -
```
Upvotes: 4 <issue_comment>username_2: This worked
```
chmod u+x release.sh && ./release.sh
```
So *Procfile* becomes
```
release: chmod u+x release.sh && ./release.sh
web: gunicorn myapp.wsgi --log-file -
```
Upvotes: 6 [selected_answer]
2018/03/14 | 1,215 | 3,251 |
<issue_start>username_0: We're working on panel data, and there is a command in Stata, `xtsum`, that gives you within and between variance for the variables in the data set.
Is there a similar command for R, that produces clean output?<issue_comment>username_1: I have used a little function to do it.
The function XTSUM takes three inputs:

* data -- the dataset
* varname -- the variable to xtsum
* unit -- the identifier for the within dimension
```
library(rlang)
library(dplyr)
XTSUM <- function(data, varname, unit) {
  varname <- enquo(varname)
  loc.unit <- enquo(unit)
  ores <- data %>% summarise(ovr.mean=mean(!! varname, na.rm=TRUE), ovr.sd=sd(!! varname, na.rm=TRUE), ovr.min = min(!! varname, na.rm=TRUE), ovr.max=max(!! varname, na.rm=TRUE), ovr.N=sum(as.numeric((!is.na(!! varname)))))
  bmeans <- data %>% group_by(!! loc.unit) %>% summarise(meanx=mean(!! varname, na.rm=T), t.count=sum(as.numeric(!is.na(!! varname))))
  bres <- bmeans %>% ungroup() %>% summarise(between.sd = sd(meanx, na.rm=TRUE), between.min = min(meanx, na.rm=TRUE), between.max=max(meanx, na.rm=TRUE), Units=sum(as.numeric(!is.na(t.count))), t.bar=mean(t.count, na.rm=TRUE))
  wdat <- data %>% group_by(!! loc.unit) %>% mutate(W.x = scale(!! varname, scale=FALSE))
  wres <- wdat %>% ungroup() %>% summarise(within.sd=sd(W.x, na.rm=TRUE), within.min=min(W.x, na.rm=TRUE), within.max=max(W.x, na.rm=TRUE))
  return(list(ores=ores, bres=bres, wres=wres))
}
library(haven)
nlswork <- read_stata("http://www.stata-press.com/data/r13/nlswork.dta")
XTSUM(nlswork, varname=hours, unit=idcode)
```
Upvotes: 2 <issue_comment>username_2: This code is adapted from Rob and username_1's code (see above), and outputs a table which is configured in the manner in which Stata *xtsum* outputs are presented.
```
XTSUM <- function(data, varname, unit) {
  # Xtsum
  varname <- enquo(varname)
  loc.unit <- enquo(unit)
  ores <- data %>% summarise(Mean=mean(!! varname, na.rm=TRUE), sd=sd(!! varname, na.rm=TRUE), min = min(!! varname, na.rm=TRUE), max=max(!! varname, na.rm=TRUE), N=sum(as.numeric((!is.na(!! varname)))))
  bmeans <- data %>% group_by(!! loc.unit) %>% summarise(meanx=mean(!! varname, na.rm=T), t.count=sum(as.numeric(!is.na(!! varname))))
  bres <- bmeans %>% ungroup() %>% summarise(sd = sd(meanx, na.rm=TRUE), min = min(meanx, na.rm=TRUE), max=max(meanx, na.rm=TRUE), n=sum(as.numeric(!is.na(t.count))), `T-bar`=mean(t.count, na.rm=TRUE))
  wdat <- data %>% group_by(!! loc.unit) %>% mutate(W.x = scale(!! varname, center=TRUE, scale=FALSE))
  wres <- wdat %>% ungroup() %>% summarise(sd=sd(W.x, na.rm=TRUE), min=min(W.x, na.rm=TRUE), max=max(W.x, na.rm=TRUE))
  # Loop to adjust the scales within group outputs against the overall mean
  for (i in 2:3) {
    wres[i] <- sum(ores[1], wres[i])
  }
  # Table Output
  Variable <- matrix(c(varname, "", ""), ncol=1)
  Comparison <- matrix(c("Overall", "Between", "Within"), ncol=1)
  Mean <- matrix(c(ores[1], "", ""), ncol=1)
  Observations <- matrix(c(paste0("N = ", ores[5]), paste0("n = ", bres[4]), paste0("T-bar = ", round(bres[5], 4))), ncol=1)
  Tab <- rbind(ores[2:4], bres[1:3], wres[1:3])
  Tab <- cbind(Tab, Observations)
  Tab <- cbind(Mean, Tab)
  Tab <- cbind(Comparison, Tab)
  Tab <- cbind(Variable, Tab)
  # Output
  return(Tab)
}
```
Upvotes: 0
2018/03/14 | 754 | 2,593 |
<issue_start>username_0: I have a method that generates an array based on the min and max provided to it. I am getting an error when pushing values in the method. The error states "Cannot read property genArray". Not sure what the problem is.
```
public genArray: number[];

ngOnInit() {
    generateArray(1000,20000);
});

function generateArray(min: any, max: any)
{
    let count = min;
    for(count=min; count<=max; count= count + 5000)
    {
        console.log(count);
        this.genArray.push(count);
    };
};
```
I am listing my complete code below
```
import { Component, OnInit, Input } from '@angular/core';
import { StressTestAnalysis } from '../../../../api/dtos';

@Component({
    selector: 'app-stress-test-analysis',
    templateUrl: './stress-test-analysis.component.html',
})
export class StressTestAnalysisComponent implements OnInit {
    isExpanded = false;
    showTable = true;
    @Input() results: Array = [];
    chartSeries : Array = [];
    minYAxis : number;
    maxYAxis : number;
    public genArray: number[] = [];

    constructor() { }

    ngOnInit() {
        this.results.map(r => {
            this.chartSeries.push(r);
            let myArray = this.generateArray(1000,20000);
        });
    }

    // Function that generates the array based on the min and max derived from the previous method
    private generateArray(min: any, max: any)
    {
        let count = min;
        for(count=min; count<=max; count= count + 5000)
        {
            console.log(count);
            this.genArray.push(count);
        }
    }
}
```<issue_comment>username_1: You need to declare and initialize the array before using it and properly declare your function `generateArray()` in the component.
```
public genArray: number[] = [];

ngOnInit() {
    this.generateArray(1000,20000);
}

private generateArray(min: any, max: any)
{
    let count = min;
    for(count=min; count<=max; count= count + 5000)
    {
        console.log(count);
        this.genArray.push(count);
    };
};
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Your `genArray` has to be initialized like this:
```
public genArray: number[] = [];
```
---
Maybe you should do this with a function-like solution, and return the array, so:
```
public genArray: number[] = [];

ngOnInit() {
    this.genArray = generateArray(1000, 20000);
}
```
And your function:
```
generateArray(min: any, max: any): number[] {
    let arr: number[] = [];
    for (let count = min; count <= max; count = count + 5000) {
        arr.push(count);
    }
    return arr;
}
```
Upvotes: 0
2018/03/14 | 502 | 1,873 |
<issue_start>username_0: First I enqueue a script using:
```
wp_enqueue_script( "script", plugins_url( "/test/js/script.js", PATH ), array("jquery"), VERSION, true );
```
Then I'm inserting an inline script after "script".
```
wp_add_inline_script( "script", "console.log('hello world');" );
```
Now I need to add **defer or async attribute** to my inline script, is there a way to do this to a script embedded by **wp\_add\_inline\_script()** ?<issue_comment>username_1: `wp_enqueue_script( "script", plugins_url( "/test/js/script.js", PATH ), array("jquery"), VERSION, true );`
`wp_script_add_data( 'script', 'defer', true );` (use `'async'` instead of `'defer'` for the async attribute)
[see more](https://developer.wordpress.org/reference/functions/wp_script_add_data/)
Upvotes: 4 [selected_answer]<issue_comment>username_2: I know this isn't what you're looking for, but `defer` doesn't work unless there's a `src` attribute. I.e. it doesn't work on inline scripts.
From the [MDN docs](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script):
>
> `defer`
> This Boolean attribute is set to indicate to a browser that the script is meant to be executed after the document has been parsed, but before firing DOMContentLoaded.
> Scripts with the defer attribute will prevent the DOMContentLoaded event from firing until the script has loaded and finished evaluating.
> **Warning: This attribute must not be used if the src attribute is absent (i.e. for inline scripts)**, in this case it would have no effect.
> The defer attribute has no effect on module scripts — they defer by default.
> Scripts with the defer attribute will execute in the order in which they appear in the document.
> This attribute allows the elimination of parser-blocking JavaScript where the browser would have to load and evaluate scripts before continuing to parse. async has a similar effect in this case.
>
>
>
Upvotes: 0
2018/03/14 | 540 | 1,841 |
<issue_start>username_0: I have created a portfolio page for myself.
<https://alonoparag.github.io/index.html#home>
My problem is that when I check the page using Google Developer tools or with my Android (Samsung Galaxy S4) device, the navbar's items are always behind the content of #home.
I tried tweaking the z-index of the navbar items versus the home content, to no avail. When checking the elements in the developer tools I saw that the z-index of both the navbar elements and the div had changed, but it didn't affect the way the elements are stacked.
I would appreciate help with this.
Cheers
Here's my code:
```
.topnav.responsive a {
    float: none;
    display: block;
    text-align: left;
    z-index: 10000;
}

div.content {
    align-content: center;
    width: 85%;
    margin: auto;
    padding: 16px;
    z-index: 1;
}
```<issue_comment>username_1: You have to specify a position value (other than the default static), for example:
```
position: relative;
```
or
```
position: fixed;
```
"z-index only effects elements that have a position value other than static (the default)." - <https://css-tricks.com/almanac/properties/z/z-index/>
Upvotes: 2 <issue_comment>username_2: You have to give a position for z-index to work. So if you add `position:relative; z-index:10;` to your header, it should work fine.
Upvotes: 3 [selected_answer]<issue_comment>username_3: The one issue I found is that the parent `#home` container itself seems to be the one giving you grief. Here are some tweaks I made in the developer console on my end. I tested it on a full desktop view as well as shrunk it down in the mobile preview and it worked properly.
**CSS:**
```
#home {
    z-index: 1;
}

#myTopnav {
    z-index: 999;
}
```
**In addition, make sure to establish a position value for your elements.**
That should do the trick for you!
Upvotes: 2
2018/03/14 | 1,125 | 4,587 |
<issue_start>username_0: I have a database, where I store some fixed values like product categories. When I create a new product and I want to assign a category to it, I do it this way:
```
$categories = new ProductCategoryRepository();
$category = $categories->find(ProductCategory::EXAMPLE);
$product = new Product();
$product->setCategory($category);
```
However, I'm not sure why I have to look up the database all the time to get static entities my app is already aware of.
It should be enough to assign the category statically. Maybe something like this:
```
$category = ProductCategory::EXAMPLE;
```
Now Doctrine should persist the relation with the correct ID (described by the ProductCategory class, which could be an entity?) and I no longer have to look up the database for static properties.
I don't know how to do this, yet. I could create new entities all the time, but this doesn't seem to be correct, because the values are already stored in the DB and they are always the same and **not** new entities.
```
$category = new ProductCategory::EXAMPLE;
```
Fetching the relation from the product however should return the property as an entity:
```
$category = $product->getCategory();
return $category instanceof ProductCategory; // true
```
Is there a way to achieve this behaviour?
It is more an architecture question than a performance tweak. I don't want to describe information multiple times (db entries, php constants, entity relations etc.).<issue_comment>username_1: If you don't want it to hit the database every time you could just store it in the Cache:
```
public function getCategory(){
    return Cache::rememberForever('category-'.$this->category_id, function() {
        return $categories->find($this->category_id);
    });
}
```
This will pull the info from the database if it has never been pulled, but will just grab it from the cache if it has been. You would have to use `Cache::forget('category-2')` to remove it, or `php artisan cache:clear`. Your static values would just be integer IDs and your products would have a `category_id` but the categories themselves would be cached.
Upvotes: -1 <issue_comment>username_2: There is something called "second level cache" in Doctrine, but the feature is considered experimental and you should maybe read the documentation carefully before using it.
A quote from [the official documentation](https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/reference/second-level-cache.html) of this feature:
>
> The Second Level Cache
> ----------------------
>
>
> The second level cache functionality is marked as experimental for now. It is a very complex feature and we cannot guarantee yet that it works stable in all cases.
>
>
>
Entity cache definition is done like this: ([documentation](https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/reference/second-level-cache.html#entity-cache-definition))
```
/**
* @Entity
* @Cache(usage="READ_ONLY", region="my_entity_region")
*/
```
---
To improve performance for such entities like you are talking about in your question you should also consider to mark them as "read only", which will lead to performance increase from Doctrine 2.1, as can be found in [the Doctrine documentation on improving performance](https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/reference/improving-performance.html#read-only-entities):
>
> Read-Only Entities
> ------------------
>
>
> Starting with Doctrine 2.1 you can mark entities as read only (See metadata mapping references for details). This means that the entity marked as read only is never considered for updates, which means when you call flush on the EntityManager these entities are skipped even if properties changed. Read-Only allows to persist new entities of a kind and remove existing ones, they are just not considered for updates.
>
>
>
The entity should be configured like this: ([documentation](https://www.doctrine-project.org/projects/doctrine-orm/en/2.6/reference/annotations-reference.html#entity))
```
/** @Entity(readOnly=true) */
```
---
Second level cache and read only for your ProductCategory:
----------------------------------------------------------
So after setting up second level read only caching with for example a region named `read_only_entity_region` your configuration for your `ProductCategory` would look something like this:
```
/**
* @Entity(readOnly=true)
* @Cache(usage="READ_ONLY", region="read_only_entity_region")
*/
class ProductCategory
{
//...your entity definition...
}
```
Upvotes: 0
2018/03/14 | 837 | 3,297 |
<issue_start>username_0: Given string "<NAME>., <NAME>.: Introduction to Java"
I need to extract only "<NAME>." and "<NAME>." from the given string using regex. How can I do it?
2018/03/14 | 641 | 2,223 |
<issue_start>username_0: How can I get the page URL from `window.location.href` without decoding in JavaScript?
For example, we cannot get exactly this URL: `http://example.com/report#title=example_report&url=project_chart%2F%3Fproject_id%3D77`.
When we use `window.location.href` in JavaScript, we will get this URL:
`http://example.com/report#title=example_report&url=project_chart/?project_id=77`.
But I want to get exactly the same real URL.
Any solution?
Edited
------
As @Eugenio said, `$(document)[0].URL` works fine, but is it safe?!<issue_comment>username_1: Try to use `encodeURI`.
For example:
```
var url = window.location.href;
var originalUrl = encodeURI(url);
```
This function (`encodeURI`) encodes special characters,
except: `, / ? : @ & = + $ #`
You can use `encodeURIComponent()` to encode these characters.
Upvotes: 2 <issue_comment>username_2: You can use `encodeURIComponent`, but you have to get the part of a string you want to encode.
```
encodeURIComponent(window.location.href.split('&url=')[1])
```
Or you can use RegExp to be more precise.
Upvotes: 2 <issue_comment>username_3: as @username_4 told,
i use below code and it works fine:
```
var url = $(document)[0].URL;
```
Upvotes: 1 <issue_comment>username_4: To make a clear and concise answer, I will sum up all the comments.
For your problem the best solution is to use `document[x].url`, where x is the index of the URL part that you want to use.
The main difference for your problem between `window.location.href` and `document.url` is that the latter gives you the URL in string format, while the other returns the URL already parsed.
Using either one is completely normal and safe and is [widely adopted in all modern browsers](https://developer.mozilla.org/en-US/docs/Web/API/Document/URL).
```js
var url1 = document.URL;
var url2 = window.location.href;
document.getElementById("documentUrl").append (url1);
document.getElementById("windowLocationUrl").append (url2);
```
```html
document.url:
window.location.href:
```
There is no difference in this particular snippet example because there are no parameters attached to the URL. Anyway, hope this helped. Cheers!
Upvotes: 2 [selected_answer]
2018/03/14 | 603 | 2,095 |
<issue_start>username_0: I am new to Angular and a little confused.
When I am calling the REST API, I am getting the JSON response,
but I need to store that JSON response into a `Map`.
Could you please help to fix this?
**Edit 1:**
On calling `api/user/id` I am getting the JSON response below:
```
{
"firstname":"test",
"lastname":"testlastname",
"email":"<EMAIL>",
"username":"ufk",
"password":"<PASSWORD>",
"phone":"123456789"
}
```
2018/03/14 | 342 | 1,465 |
<issue_start>username_0: I have developed my microservice ecosystem and I managed to deploy and run it locally using Docker containers and minikube. For each service I have specified two files: deployment.yml (pod specification) and service.yml (service specification). When I deploy each service to the minikube cluster I simply run:
>
> kubectl create -f deployment.yml
>
>
>
and after that
>
> kubectl create -f service.yml
>
>
>
Now I want to deploy the microservice ecosystem to IBM Cloud Services. I spent some time researching the deployment procedures and I did not find any that use the deployment.yml and service.yml when deploying services.
My question is, can I just somehow deploy my services using existing deployment.yml and service.yml files?
Thank you for the answers.<issue_comment>username_1: As long as it's Kubernetes under the hood and the Kubernetes API is accessible (kubectl works), you can do exactly the same. Whether it is sustainable in the long term depends on your case, but it likely is not, and I would suggest looking into tools like Helm.
Upvotes: 1 <issue_comment>username_2: So I was confused about the deployment steps.
I just needed to go to the IBM Cloud Services dashboard, find my cluster, click on the cluster link and follow the steps in the Access section of the page.
After finishing the steps described in that section, we can deploy our services as if we were using minikube and kubectl locally.
Upvotes: 1 [selected_answer]
2018/03/14 | 998 | 3,431 |
<issue_start>username_0: I am having an issue with my scoring in my blackjack game. It works in finding the right score, but when the user draws a new card it will incorrectly add the score.
For example:
Original hand is: 4 and 5 (so score 9).
User draws a 10.
Instead of the score being 19 it will instead be 19 + 9, or 28.
Here is my code:
Scoring method:
```
public int getHandValue() {
    boolean ace = false;
    for (int i = 0; i < this.hand.size(); i++) {
        if (this.hand.get(i).getRank().value > 10) {
            points += 10;
        } else if (this.hand.get(i).getRank().value == 1) {
            ace = true;
        } else {
            points += this.hand.get(i).getRank().value;
        }
        if (ace == true && points + 11 <= 21) {
            points += 11;
        }
    }
    return points;
}
```
Play method:
```
public void play(Deck deck) {
    boolean isDone = false;
    if (this.getHandValue() > 21){
        System.out.println("You have busted!");
        isDone = true;
        this.lose();
    }
    takeCard(deck.drawCard());
    takeCard(deck.drawCard());
    System.out.println("Here are your cards and your score:");
    System.out.println(this.hand.toString());
    System.out.println("Score: " + getHandValue());
    ListItemInput hitOrPass = new ListItemInput();
    hitOrPass.add("h", "hit");
    hitOrPass.add("p", "pass");
    while (!isDone){
        System.out.println("Hit or pass?");
        hitOrPass.run();
        if (hitOrPass.getKey().equalsIgnoreCase("h")) {
            String result = "";
            this.takeCard(deck.drawCard());
            result += "Your hand is now " + this.hand.toString() + "\n";
            result += "Your score is now " + this.getHandValue();
            System.out.println(result);
        } else {
            System.out.println("You have chosen to pass.");
            isDone = true;
        }
    }
}
```<issue_comment>username_1: You loop over the hand each time you call your method, so your points should be reset before doing so. Otherwise the points increase by 2x + the extra card in the hand. Reset the value before you loop over the hand:
```
public int getHandValue() {
    boolean ace = false;
    points = 0; //<--- reset the point total
    for (int i = 0; i < this.hand.size(); i++) {
        if (this.hand.get(i).getRank().value > 10) {
            points += 10;
        } else if (this.hand.get(i).getRank().value == 1) {
            ace = true;
        } else {
            points += this.hand.get(i).getRank().value;
        }
        if (ace == true && points + 11 <= 21) {
            points += 11;
        }
    }
    return points;
}
```
Upvotes: 1 <issue_comment>username_2: I assume `points` is being declared outside of this method.
Since you are returning points, it is best not to use a class-wide variable for this. You'll end up with unexpected results like this. Instead, use a variable within the method scope, like this:
```
public int getHandValue() {
    boolean ace = false;
    int value = 0;
    for (int i = 0; i < this.hand.size(); i++) {
        if (this.hand.get(i).getRank().value > 10) {
            value += 10;
        } else if (this.hand.get(i).getRank().value == 1) {
            ace = true;
        } else {
            value += this.hand.get(i).getRank().value;
        }
        if (ace == true && value + 11 <= 21) {
            value += 11;
        }
    }
    return value;
}
```
Upvotes: 0
2018/03/14 | 933 | 3,284 |
<issue_start>username_0: This code works:
```
@ mutable.Seq(1, 2).asInstanceOf[Seq[Int]]
res1: Seq[Int] = ArrayBuffer(1, 2)
```
But this doesn't:
```
@ mutable.Map(1 -> 2).asInstanceOf[Map[Int, Int]]
java.lang.ClassCastException: scala.collection.mutable.HashMap cannot be cast
to scala.collection.immutable.Map
ammonite.$sess.cmd1$.(cmd1.sc:1)
ammonite.$sess.cmd1$.(cmd1.sc)
```
Why can `mutable.Seq` be viewed as immutable, but not `mutable.Map`? I understand that casting a mutable `Seq` to an immutable one is "lying" about the mutability of the underlying collection, but in some situations the programmer knows better—e.g. when returning a collection from a function which uses a mutable list to build up a result, but returns an immutable value.<issue_comment>username_1: The default `Map` is defined in [`Predef`](https://www.scala-lang.org/api/current/scala/Predef$.html) as
```
type Map[A, +B] = collection.immutable.Map[A, B]
```
so it is explicitly `immutable`, and `mutable.Map` is not a subclass of it.
In contrast to that, the default `Seq` is defined directly in [`scala`](https://www.scala-lang.org/api/current/scala/index.html) as
```
type Seq[+A] = scala.collection.Seq[A]
```
so it is a supertype of both `mutable.Seq` and `immutable.Seq`. This is why your first `asInstanceOf` does not fail: every `mutable.Seq` is also a `collection.Seq`.
As explained [here](https://groups.google.com/forum/#!topic/scala-internals/g_-gIWgB8Os), the decision to not specify whether `Seq` has to be mutable or immutable had something to do with support for arrays and varargs.
[In 2.13, the default `Seq` will become `immutable`, and a new type `ImmutableArray` will be introduced to deal with `varargs`.](http://www.scala-lang.org/blog/2017/02/28/collections-rework.html#language-integration) (Thanks @SethTisue for pointing it out)
Upvotes: 4 [selected_answer]<issue_comment>username_2: The primary problem is this:
If `scala.collection.mutable.Map` were a subclass of `scala.collection.immutable.Map`, then the former *is-a* the latter too. That is, a *mutable* `Map` would also be *immutable*. Does that make sense?
To illustrate this, you could pass an instance of a *mutable* `Map` to a function or constructor expecting an *immutable* `Map`. Alas, the two types have different semantics: if you, say, add an element to the *immutable* version, you'll get a new *immutable* `Map` instance returned; yet if you add an element to the *mutable* version, it changes that instance's contents—thus, it will have a *side-effect*.
As a consequence, if you wanted to write a *pure, referentially transparent* (*RT*) function (i.e. one that has no *side-effects*) that takes an *immutable* `Map` argument, you couldn't achieve your goal—anyone could screw that up by passing it a *mutable* `Map` instance instead. This would then change the meaning of your code and potentially cause all manner of problems.
In *functional programming*, *immutability* is big deal, as is *RT*. By ensuring that the two cannot be confused, programs that need *immutable* `Map`s can guarantee that they will get them.
(Of course, if you explicitly want to write code that will accept either, you could request an instance of their common `scala.collection.Map` trait instead.)
Upvotes: 2
2018/03/14 | 323 | 1,257 |
<issue_start>username_0: I have the following `Java` method that searches `MongoDB` for a specific entry:
```
public List<Document> search(String collection, String entry){
    List<Document> documentList = new ArrayList<>();
    createIndexforCollection(collection);
    getCollection(collection).find(Filters.text(entry)).forEach((Block<? super Document>) documentList::add);
    return documentList;
}
```
The above makes use of `Java 8` features, e.g. `documentList::add`; however, in the project I am working on I am only able to use `Java 7`.
How can I re-write the above to have the same logic using `Java 7`?<issue_comment>username_1: Use the code below.
```
List<Document> documentList = new ArrayList<>();
MongoCursor<Document> cursor = getCollection(collection).find(Filters.text(entry)).iterator();
while(cursor.hasNext()) {
    Document document = cursor.next();
    documentList.add(document);
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this.
```
try(final DBCursor dbCursor = mongoTemplate.getCollection("YourCollectionName").find(Filters.text(entry))) {
    while (dbCursor.hasNext()) {
        Document document = dbCursor.next(); // You need to cast to the type you need
        documentList.add(document);
    }
}
```
Upvotes: 0
2018/03/14 | 551 | 1,873 |
<issue_start>username_0: I am making a search function. I want to search where several selected columns have a given value, but the results must be restricted to where `user_id='x'`. What I have gives all results, not just those where `user_id='x'`, which is what I want:
Below is my query, which returns everything and not just search data for `user_id='x'`:
```
SELECT * FROM request
WHERE product_name LIKE 'NDQB12%' OR request_id LIKE 'NDQB12%'
OR batch_no LIKE 'NDQB12%' AND user_id='16';
```
Suggestions<issue_comment>username_1: ```
Select * from request where user_id = '16' AND
(select * from request where product_name LIKE 'NDQB12%' OR request_id LIKE 'NDQB12%' OR batch_no LIKE 'NDQB12%')
```
Can you try if this works?
Upvotes: 0 <issue_comment>username_2: You need to use parentheses to separate conditions:
```
SELECT *
FROM request
WHERE
(
(product_name LIKE 'NDQB12%')
OR (request_id LIKE 'NDQB12%')
OR (batch_no LIKE 'NDQB12%')
)
AND user_id='16';
```
This query will show data only for the user `16` and if one of the previous conditions is True.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Try something like this. I think you are not wrapping your condition right.
```
SELECT * FROM request
WHERE user_id='16'
AND (product_name LIKE 'NDQB12%'
OR request_id LIKE 'NDQB12%'
OR batch_no LIKE 'NDQB12%' )
```
Upvotes: 1 <issue_comment>username_4: There's a problem of precedence between your logical operations. In SQL and most programming languages, when mixing AND and OR (and possibly other) logical operators, it's always recommended to separate them with parentheses. For your case:
```
SELECT
*
FROM
request
WHERE
user_id='16' AND
(
product_name LIKE 'NDQB12%' OR
request_id LIKE 'NDQB12%' OR
batch_no LIKE 'NDQB12%'
)
```
Upvotes: 1
2018/03/14 | 873 | 2,671 |
<issue_start>username_0: Why is the `datetime` type lost after inserting it into a SQLite database?
```
import sqlite3, datetime
dbconn = sqlite3.connect(':memory:')
c = dbconn.cursor()
c.execute('create table mytable(title text, t timestamp)')
c.execute('insert into mytable (title, t) values (?, ?)', ("hello2", datetime.datetime(2018,3,10,12,12,00)))
c.execute("select * from mytable")
for a in c.fetchall():
    print a[0] # hello2
    print type(a[0]) #
    print a[1] # 2018-03-10 12:12:00
    print type(a[1]) #
```
**Shouldn't the datetime type remain after an insertion and a query?**
PS: I lost nearly one hour because of this problem, so I'll post the answer now with the "Answer your own question – share your knowledge, Q&A-style" SO feature.
Note: this is not a duplicate of [this neighbour question](https://stackoverflow.com/questions/1933720/how-do-i-insert-datetime-value-into-a-sqlite-database) because it doesn't deal with how datetimes are stored/retrieved.<issue_comment>username_1: According to [this documentation](https://docs.python.org/2/library/sqlite3.html#default-adapters-and-converters), the solution is to use the `detect_types` parameter:
```
dbconn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
```
Then the output of the previous code will be:
```
hello2
2018-03-10 12:12:00
```
Also [this is an important note](https://www.sqlite.org/datatype3.html#date_and_time_datatype) about datetime in Sqlite:
>
> SQLite does not have a storage class set aside for storing dates and/or times. Instead, the built-in Date And Time Functions of SQLite are capable of storing dates and times as TEXT, REAL, or INTEGER values:
>
>
> * TEXT as ISO8601 strings ("YYYY-MM-DD HH:MM:SS.SSS").
> * REAL as Julian day numbers, the number of days since noon in Greenwich on November 24, 4714 B.C. according to the proleptic Gregorian calendar.
> * INTEGER as Unix Time, the number of seconds since 1970-01-01 00:00:00 UTC.
>
>
> Applications can chose to store dates and times in any of these formats and freely convert between formats using the built-in date and time functions.
>
>
>
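Putting both parts of this answer together, a minimal end-to-end sketch based on the code from the question (the `timestamp` column declaration is what triggers the default converter once `detect_types` is set):
```
import sqlite3, datetime

dbconn = sqlite3.connect(':memory:', detect_types=sqlite3.PARSE_DECLTYPES)
c = dbconn.cursor()
c.execute('create table mytable(title text, t timestamp)')
c.execute('insert into mytable (title, t) values (?, ?)',
          ("hello2", datetime.datetime(2018, 3, 10, 12, 12, 0)))
c.execute("select * from mytable")
row = c.fetchone()
# With PARSE_DECLTYPES, the declared type 'timestamp' maps the value back
# to datetime.datetime instead of a plain string.
print(type(row[1]))
```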
Upvotes: 2 <issue_comment>username_2: Ask the user to enter a datetime (e.g. 'Jun 1 2005 1:33PM'):
```
date_entry = input('Enter a date in YYYY-MM-DD h:m am/pm format')
```
Convert it to a datetime object:
```
date1 = datetime.strptime(date_entry, '%b %d %Y %I:%M%p')
```
To insert into the SQL table:
```
insert table1 (dateField)
values (convert(datetime,date1,5));
```
And if you need to convert again after reading from SQL:
```
datetime_object = datetime.strptime(dateField, '%b %d %Y %I:%M%p')
```
Upvotes: 0
2018/03/14 | 804 | 3,363 |
<issue_start>username_0: We use this function to convert an SKProduct into a localised price per item for our consumable bundles:
```
static func pricePerUnit(_ product: SKProduct, quantity: NSDecimalNumber) -> String? {
    let numberFormatter = NumberFormatter()
    numberFormatter.numberStyle = .currency
    numberFormatter.locale = product.priceLocale
    let pricePerItem = product.price.dividing(by: quantity)
    guard let formattedPricePerItem = numberFormatter.string(from: pricePerItem) else {
        return nil
    }
    return "\(formattedPricePerItem) each"
}
```
For example, a bundle of 10 items for £9.99 becomes £0.99 for a UK user or $0.99 for a US user.
Ideally, if the amount is less than a single unit of the currency (aka one dollar), we'd like it to display with the minor currency unit (such as cents, pence, etc.).
I can't find a NumberFormatter style for this or any answer elsewhere. Can this be done using NumberFormatter?<issue_comment>username_1: Unfortunately, this cannot be done using `NumberFormatter`. `NumberFormatter` uses `Locale` to get currency format, including the number of decimal digits, positive patterns, negative patterns, currency symbols etc.
(see [Number Format Patterns in Unicode](http://www.unicode.org/reports/tr35/tr35-numbers.html#Number_Format_Patterns))
However, the format for minor currency units is not standardized in Unicode (although some necessary data, e.g. multipliers, are) and it's not present in iOS `Locale` data.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Formatting as cents and pence may not be directly supported by `NumberFormatter`, but `NumberFormatter` is flexible enough that you can configure it to your wishes. For example, one can create a helper struct that holds two `NumberFormatter`s: one to format in dollars and pounds, and one to format in cents and pence.
```
struct CurrencyFormatter {
    private static let defaultFormatter: NumberFormatter = {
        let formatter = NumberFormatter()
        formatter.numberStyle = .currency
        return formatter
    }()

    private static let alternativeFormatter: NumberFormatter = {
        let formatter = NumberFormatter()
        formatter.multiplier = 100
        return formatter
    }()

    static var alternativeCurrencySymbols = [
        "USD": "c",
        "GBP": "p"
    ]

    static func string(from number: NSNumber, locale: Locale) -> String? {
        var formatter = defaultFormatter
        if number.isLessThan(1),
            let currencyCode = locale.currencyCode,
            let alternativeCurrencySymbol = alternativeCurrencySymbols[currencyCode]
        {
            formatter = alternativeFormatter
            formatter.positiveSuffix = alternativeCurrencySymbol
            formatter.negativeSuffix = alternativeCurrencySymbol
        }
        formatter.locale = locale
        return formatter.string(from: number)
    }
}

let number = NSNumber(value: 0.7)
let locale = Locale(identifier: "en_GB")
if let str = CurrencyFormatter.string(from: number, locale: locale) {
    print(str) // result: 70p
}
```
The solution has a built-in fail-safe mechanism. If you haven't defined the alternative symbol for a currency, it will fall back to the default currency format. You can test it by changing the locale to `fr_FR`: the result becomes `0,70 €`.
Upvotes: 1
2018/03/14 | 1,922 | 6,979 |
<issue_start>username_0: It looks like there are several similar questions, but after several days I have not found the proper answer. My question is how to know if the server has closed the websocket and how to try to reconnect. I have seen several examples, but none of them worked properly when I wanted to implement the functionality of closing the websocket from the client when I change the view.
Then I found this [example](https://medium.com/@lwojciechowski/websockets-with-angular2-and-rxjs-8b6c5be02fac), which is the best one I have seen so far, and with small modifications I was able to add a close function which works quite well.
Closing the websocket from the client is not a problem anymore; however, I am not able to know when the websocket is **closed by the server** and how to **reconnect** again.
My code is very similar to [this](https://stackoverflow.com/questions/39183793/angular-2-share-websocket-service-across-components/49281248#49281248) question, but I am asking for a different thing. Also, I had problems re-using the websocket until I saw the share function they talk about in the link above, so in the class I have posted, the websocket service and the service which uses it are merged into one.
My websocket service
```
import { Injectable } from '@angular/core';
import { Observable, Observer, Subscription } from 'rxjs';
import { Subject } from 'rxjs/Subject';
import { ConfigService } from "../config.service";

@Injectable()
export class WSEtatMachineService {
    public messages: Subject = new Subject();
    private url: string = '';
    static readonly ID = 'machine';

    private _subject: Subject;
    private _subjectData: Subject;
    private _ws: any;

    constructor(private configService: ConfigService) {
        console.log('constructor ws machine service')
        this.setUrl(WSEtatMachineService.ID)
    }

    public setUrl(id: string) {
        const host = this.configService.getConfigReseau().ipServer;
        const port = this.configService.getConfigReseau().portServer;
        this.url = `ws://${host}:${port}/` + id
    }

    public connect() {
        console.log('connect ws machine service ', this.url)
        this.messages = this._connect(this.url)
            .map((response: any): any => {
                console.log('ws etat machine service: ', response)
                return JSON.parse(response.data);
            })
    }

    public close() {
        console.log('on closing WS');
        this._ws.close()
        this._subject = null
    }

    public send(msg: any) {
        this.messages.next(JSON.stringify(msg));
    }

    // Private methods to create the websocket
    public _connect(url: string): Subject {
        if (!this._subject) {
            this._subject = this._create(url);
        }
        return this._subject;
    }

    private _create(url: string): Subject {
        this._ws = new WebSocket(url);
        let observable = Observable.create(
            (obs: Observer) => {
                this._ws.onmessage = obs.next.bind(obs);
                this._ws.onerror = obs.error.bind(obs);
                this._ws.onclose = obs.complete.bind(obs);
                return this._ws.close.bind(this._ws);
            }).share();
        let observer = {
            next: (data: Object) => {
                if (this._ws.readyState === WebSocket.OPEN) {
                    this._ws.send(JSON.stringify(data));
                }
            }
        };
        return Subject.create(observer, observable);
    }
} // end class
```
Then in the component I do:
```
constructor( private wsMachineService: WSMachineService) { ... }

ngOnInit() {
    ...
    this.wsMachineService.connect();
    // connecting the web socket
    this.wsMachineService.messages.subscribe(
        machine => {
            console.log(" wsMachineService open and alive", machine);
        },
        err => {
            // This code is never executed
            console.log(" wsMachineService closed by server!!", err);
        }
    );
}

ngOnDestroy() {
    //let tmp = this.synoptiqueSocketSubscription.unsubscribe();
    this.wsMachineService.messages.unsubscribe();
    this.wsMachineService.close()
}
```
I guess I'm missing something in the `_create` function, because I try to do a catch on the subject of the connect function and it does not work.
Any ideas of how I can know if it is being closed and try to reconnect again?
Thank you
---
Edit:
I think my problem is related to the subject / observables, as I do not control them fully. I had an old approach where I could know when the server was dead and it would try to reconnect every X seconds, but unfortunately I wasn't able to close the connection from the client as I didn't have access to the websocket object. I add the code as well:
```
public messages: Observable;
private ws: Subject;
private url: string;
public onclose = new Subject();

public connect(urlApiWebSocket: string): Observable {
    if (this.messages && this.url === urlApiWebSocket) {
        return this.messages;
    }
    this.url = urlApiWebSocket;
    this.ws = Observable.webSocket({
        url: urlApiWebSocket,
        closeObserver: this.onclose
    });
    return this.messages = this.ws.retryWhen(errors => errors.delay(10000)).map(msg => msg).share();
}

send(msg: any) {
    this.ws.next(JSON.stringify(msg));
}
```
Let's see if we have any way to combine both solutions.<issue_comment>username_1: You don't have any way to know when the server closes the connection with the client.
As you have noticed, `this._ws.onclose = obs.complete.bind(obs);` will fire only when the 'close' action is done by the client.
Common ways to work around this:
* On close of your server, send a special message to all your clients to notify them.
* Create a ping mechanism to ask the server if it is still alive.
Upvotes: 0 <issue_comment>username_2: Well, I found a way. I'm using the old approach with Observable.websocket
```
@Injectable()
export class WSMyService {
    private url: string = '';
    static readonly ID = 'mytestwebsocket';
    readonly reload_time = 3000;

    public messages: Observable;
    private ws: Subject;
    public onclose = new Subject();

    constructor(private configService: ConfigService) {
        console.log('constructor ws synop service')
        this.setUrl(WSMyService.ID)
    }

    public setUrl(id: string) {
        ...
        this.url = `ws://${host}:${port}/` + id
    }

    public connect(): Observable {
        this.ws = Observable.webSocket({
            url: this.url,
            closeObserver: this.onclose
        });
        return this.messages = this.ws.retryWhen(errors => errors.delay(this.reload_time)).map(msg => msg).share();
    }

    send(msg: any) {
        this.ws.next(JSON.stringify(msg));
    }

    public close() {
        console.log('on closing WS');
        this.ws.complete();
    }
}
```
and when I use it:
```
constructor(
private wsMyService: WSMyService,
ngOnInit():
...
this.mySocketSubscription = this.wsMyService.connect().subscribe(message => {
... }
this.wsMyService.onclose.subscribe(evt => {
// the server has closed the connection
})
ngOnDestroy() {
this.wsMyService.close();
this.mySocketSubscription.unsubscribe();
...
}
```
Looks like all I had to do was to call the function complete() which tells the server that the client has finished.
I'm sure there is a better way, but this is the only one I found that works for me.
Thank you for your help.
Upvotes: 3 [selected_answer]
2018/03/14 | 803 | 2,975 |
<issue_start>username_0: I am implementing a factory pattern to create two concrete classes from an abstract class which has the factory methods overloaded (see below):
```
public abstract class User {
    ...
    public static User make(int id, String name) {
        return new Admin(id, name);
    }

    public static User make(int id, int student_id, String name) {
        return new Student(id, student_id, name);
    }
}
```
Here are the factory calls:
```
ArrayList<User> users = new ArrayList<>(
    Arrays.asList(
        User.make(1000, "Andy"), // makes new Admin
        User.make(1001, 101001, "Bob") // makes new Student
    )
);
```
Here is the Admin class:
```
public class Admin extends User {
    ...
    // constructor
    protected Admin(int id, String name) {
        super(id, name);
    }
    ...
}
```
Here is the Student class:
```
public class Student extends User {
    ...
    // constructor
    protected Student(int id, int student_id, String name) {
        super(id, name);
        this.student_id = student_id;
    }
    ...
}
```
Each of these concrete instances is placed into a User ArrayList. I have a function (below) which loops through the list and uses runtime type checks to call particular methods unique to each concrete class; however, I am getting a ClassCastException error in my IDE stating Admin cannot be cast to Student.
Full exception message is:
**Exception in thread "main" java.lang.ClassCastException: presentation\_layer.Admin cannot be cast to presentation\_layer.Student**
```
public class App {
...
public static void main(String[] args) {
ArrayList users = new ArrayList<>(
Arrays.asList(
User.make(1000, "Andy"), // makes new Admin
User.make(1001, 101001, "Bob") // makes new Student
)
);
users.forEach((u) -> {
if (u instanceof Admin) {
System.out.println("hello admin");
((Admin)u).anAdminFunc();
} else if (u instanceof Student) {
System.out.println("hello student");
((Student)u).aStudentFunc();
}
});
}
...
}
```
When I comment out the concrete method calls, the respective print statements output as expected with no errors; however, when trying to use these unique method calls between each loop iteration I get the casting error. Can you please advise how this can be addressed and what I am doing wrong, with either my approach at inference or my approach at a factory pattern?<issue_comment>username_1: Use `instanceof` instead.
Also, you might want to rethink your use of inheritance if you find yourself doing lots of casting - see the sketch below.
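For example, a minimal sketch of pushing the behaviour into the hierarchy so the loop never needs a cast (the `greet` method is invented for illustration):
```
abstract class User {
    // each subclass supplies its own behaviour, so callers never cast
    abstract void greet();
}

class Admin extends User {
    @Override
    void greet() { System.out.println("hello admin"); }
}

class Student extends User {
    @Override
    void greet() { System.out.println("hello student"); }
}

// usage: users.forEach(User::greet);
```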
Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this:
```
public static User make(int id, String name) {
User user = new Admin(id,name);
return user;
}
public static User make(int id, int student_id, String name) {
User user = new Student(id, student_id, name);
return user;
}
```
reference: [parent-child type conversion](https://stackoverflow.com/questions/25193029/typecasting-an-object-from-parent-class-to-child)
Upvotes: 0
|
2018/03/14
| 3,936 | 9,466 |
<issue_start>username_0: Our AD Team is going to disable RC4-HMAC so I have to change our JBoss-applications to AES.
I added the AES types to krb5.conf and created new keytabs, but that does not seem to work. Tests outside the application with kinit show the same results.
There was a [similar issue](https://stackoverflow.com/questions/43674331/kerberos-aes-256-encryption-not-working), but its solution was already enabled for us. There is another person (<NAME>) with my problem and no answer.
Server: SLES12
AD: Windows Server 2016
**krb5.conf**
```
[libdefaults]
debug = false
default_realm = MY.DOMAIN
ticket_lifetime = 24000
default_keytab_name = /app/myapp/sso/myapp_eu.keytab_AES
dns_lookup_realm = false
dns_lookup_kdc = false
default_tkt_enctypes = aes256-cts aes128-cts rc4-hmac
default_tgs_enctypes = aes256-cts aes128-cts rc4-hmac
permitted_enctypes = aes256-cts aes128-cts rc4-hmac
[realms]
MY.DOMAIN = {
kdc = my.domain
default_domain = my.domain
}
[domain_realm]
.my.domain = MY.DOMAIN
my.domain = MY.DOMAIN
[appdefaults]
forwardable = true
```
**Keytabs**
keytab old RC4:
```
klist -ket myapp_eu.keytab_RC4
Keytab name: FILE:myapp_eu.keytab_RC4
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
0 02/19/2018 14:41:39 MyappEU@MY.DOMAIN (arcfour-hmac)
```
keytab new AES256:
```
klist -ket myapp_eu.keytab_AES
Keytab name: FILE:myapp_eu.keytab_AES
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
0 03/14/2018 15:03:31 MyappEU@MY.DOMAIN (aes256-cts-hmac-sha1-96)
```
**kinit tests (krb5 Version 1.12.5)**
authentication with password (success):
```
kinit -fV MyappEU@MY.DOMAIN
klist -ef
Valid starting Expires Service principal
03/14/18 14:37:12 03/15/18 00:37:12 krbtgt/MY.DOMAIN@MY.DOMAIN
renew until 03/15/18 14:37:06, Flags: FRIA
Etype (skey, tkt): aes256-cts-hmac-sha1-96, aes256-cts-hmac-sha1-96
```
authentication with old keytab RC4 (success):
```
kinit -fV -k -t /app/myapp/sso/myapp_eu.keytab_RC4 MyappEU@MY.DOMAIN
klist -ef
Valid starting Expires Service principal
03/14/18 14:36:52 03/15/18 00:36:52 krbtgt/MY.DOMAIN@MY.DOMAIN
renew until 03/15/18 14:36:51, Flags: FRIA
Etype (skey, tkt): arcfour-hmac, aes256-cts-hmac-sha1-96
```
authentication with new keytab AES256 (failure):
```
kinit -fV -k -t /app/myapp/sso/myapp_eu.keytab_AES MyappEU@MY.DOMAIN
Using principal: MyappEU@MY.DOMAIN
Using keytab: /app/myapp/sso/myapp_eu.keytab_AES
kinit: Preauthentication failed while getting initial credentials
```
A look at the etypes shows that AES seems to work, but I can't figure out why I get a preauthentication error with the AES keytabs.
The old and new keytabs were created by the following ktpass command:
```
ktpass -princ <EMAIL> -crypto RC4-HMAC-NT -ptype KRB5_NT_PRINCIPAL -pass <PASSWORD> -kvno 0 -out myapp_eu.keytab_RC4
ktpass -princ <EMAIL> -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -pass <PASSWORD> -kvno 0 -out myapp_eu.keytab_AES
```
I already tried it with the correct kvno instead of 0 with the same result.
Thanks for you help or ideas.
P.S. anonymized MY.DOMAIN and myapp
**Test with freshly compiled krb5 1.16**
I combined the tips from <NAME> and T.Heron and now I see a difference between the SALT I get from ktpass at keytab creation and the one in the trace output of kinit. But I don't know where it comes from or how to change it. The salt contains one of the SPNs in this case.
**ktpass**
```
PS X:\> ktpass -out x:\MyappEUv3.keytab -mapOp set +DumpSalt -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -pass <PASSWORD> -princ MyappEU@MY.DOMAIN
Building salt with principalname MyappEU and domain MY.DOMAIN (encryption type 18)...
Hashing password with salt "MY.DOMAINMyappEU".
Key created.
Output keytab to x:\MyappEUv3.keytab:
Keytab version: 0x502
keysize 71 MyappEU@MY.DOMAIN ptype 1 (KRB5_NT_PRINCIPAL) vno 1 etype 0x12 (AES256-SHA1) keylength 32 (<KEY>1cf721d7d044f7eb72eaa92a20125055a3b25)
```
**kinit trace**
```
env KRB5_TRACE=/dev/stdout /home/akirsch/krb5-1.16_made/bin/kinit -fV -k -t /home/akirsch/MyappEUv3.keytab <EMAIL>
Using default cache: /tmp/krb5cc_0
Using principal: <EMAIL>
Using keytab: /home/akirsch/MyappEUv3.keytab
[32175] 1521108914.135563: Getting initial credentials for MyappEU@MY.DOMAIN
[32175] 1521108914.135564: Looked up etypes in keytab: aes256-cts
[32175] 1521108914.135566: Sending unauthenticated request
[32175] 1521108914.135567: Sending request (153 bytes) to MY.DOMAIN
[32175] 1521108914.135568: Resolving hostname MY.DOMAIN
[32175] 1521108914.135569: Sending initial UDP request to dgram 172.18.32.134:88
[32175] 1521108914.135570: Received answer (214 bytes) from dgram 172.18.32.134:88
[32175] 1521108914.135571: Response was not from master KDC
[32175] 1521108914.135572: Received error from KDC: -1765328359/Additional pre-authentication required
[32175] 1521108914.135575: Preauthenticating using KDC method data
[32175] 1521108914.135576: Processing preauth types: 16, 15, 19, 2
[32175] 1521108914.135577: Selected etype info: etype aes256-cts, salt "MY.DOMAINHTTPmyapp-entw.intranet-test.my.domain", params ""
[32175] 1521108914.135578: Retrieving MyappEU@MY.DOMAIN from FILE:/home/akirsch/MyappEUv3.keytab (vno 0, enctype aes256-cts) with result: 0/Success
[32175] 1521108914.135579: AS key obtained for encrypted timestamp: aes256-cts/ECF3
[32175] 1521108914.135581: Encrypted timestamp (for 1521108914.396292): plain 301AA011180F32303138303331353130313531345AA1050203060C04, encrypted F92E4F783F834FF6500EA86CAF8CA3088517CB02F75BD2C962E5B454DC02C6F3BBCAF59EEB6F52D58AA873FF5EDFCA1496F59D2A587701A1
[32175] 1521108914.135582: Preauth module encrypted_timestamp (2) (real) returned: 0/Success
[32175] 1521108914.135583: Produced preauth for next request: 2
[32175] 1521108914.135584: Sending request (231 bytes) to MY.DOMAIN
[32175] 1521108914.135585: Resolving hostname MY.DOMAIN
[32175] 1521108914.135586: Sending initial UDP request to dgram 10.174.50.13:88
[32175] 1521108914.135587: Received answer (181 bytes) from dgram 10.174.50.13:88
[32175] 1521108914.135588: Response was not from master KDC
[32175] 1521108914.135589: Received error from KDC: -1765328360/Preauthentication failed
[32175] 1521108914.135592: Preauthenticating using KDC method data
[32175] 1521108914.135593: Processing preauth types: 19
[32175] 1521108914.135594: Selected etype info: etype aes256-cts, salt "MY.DOMAINHTTPmyapp-entw.intranet-test.my.domain", params ""
[32175] 1521108914.135595: Getting initial credentials for MyappEU@MY.DOMAIN
[32175] 1521108914.135596: Looked up etypes in keytab: des-cbc-crc, des, des-cbc-crc, rc4-hmac, aes256-cts, aes128-cts
[32175] 1521108914.135598: Sending unauthenticated request
[32175] 1521108914.135599: Sending request (153 bytes) to MY.DOMAIN (master)
kinit: Preauthentication failed while getting initial credentials
```<issue_comment>username_1: Please ensure you clear the SPN(s) from the Active Directory account related to the keytab before generating a new keytab. This is a little known issue. In your case, I would run the following six step process and it should work:
1. `setspn -D HTTP/myapp.my.domain MyappEU`
2. Then generate the keytab:
`ktpass -princ HTTP/myapp.my.domain -mapUser MyappEU@MY.DOMAIN -pass x<PASSWORD> -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -kvno 0 -out myapp_eu.keytab_AES`
3. Verify the SPN you need is on the Active Directory account:
`setspn -L MyappEU`
4. Ensure the new SPN is reflected in the "User logon name" field in the Account tab of the Active Directory account and the checkbox "This account supports Kerberos AES 256 bit encryption" beneath that is checked:
[](https://i.stack.imgur.com/svEsb.png)
5. In the standalone.xml file on your JBOSS server, don't forget to update the keytab file name there and then restart the JBOSS engine for the change to take effect.
6. Finally, you'll need the [unlimited encryption strength Java JAR files](http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html) in your *Java\_Home\lib\security* directory on the JBOSS server or else your keytab won't be able to de-crypt AES256-SHA1 Kerberos tickets. If you are convinced the problem isn't in steps 1-5, then maybe this one is it.
Upvotes: 1 <issue_comment>username_2: Thanks to T.Heron and Samson for the tips.
At the end there were only 2 steps to do.
1. Activate AES for the Account as described in [T.Herons article](https://social.technet.microsoft.com/wiki/contents/articles/36470.kerberos-keytabs-explained.aspx)
2. Use ktpass with mapuser to set the salt to the principal that is used as login (an error will be shown, but the salt will still be set).
The second part was hard to find out. MapUser will set the SALT and the UPN to the SPN which is mapped! There can only be one SALT.
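For illustration, such a call would have the same shape as the ktpass commands above; something like this (the exact principal and account names here are assumptions for this environment):
```
ktpass -princ MyappEU@MY.DOMAIN -mapUser MyappEU@MY.DOMAIN -mapOp set -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -pass <PASSWORD> -out myapp_eu.keytab_AES
```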
You can see the current salt on linux using:
```
env KRB5_TRACE=/dev/stdout env KRB5_CONFIG=krb5.conf kinit -fV ADUSER@MYDOMAIN.COM
```
ExampleOutputLine (wrong salt in this case)
```
[10757] 1523617677.379889: Selected etype info: etype aes256-cts, salt "MYDOMAIN.COMHTTPvm41568226", params ""
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 504 | 1,808 |
<issue_start>username_0: I have a set of `.tf` files that reflect an `AWS` infra.
The files in my `terraform` folder are more or less:
```
eip.tf
instance.tf
key.tf
provider.tf
rds.tf
route53.tf
securitygroup.tf
terraform.tfstate
terraform.tfstate.1520442018.backup
terraform.tfstate.backup
terraform.tfvars
terraform.tfvars.dist
vars.tf
vpc.tf
```
I created the infra and I want to destroy it.
I see that the internet gateway destruction takes forever:
```
aws_internet_gateway.my-gw: Still destroying... (ID: igw-d53fa0b2, 14m50s elapsed)
```
By browsing in my `aws` console I see that this is because my `ec2` instance is still up and running.
Why is terraform trying to destroy the internet gateway without making sure the ec2 instance is down?
How can I prevent this from hapenning again?
The same scripts have executed (`apply`/`destroy`) many times before without any issues.<issue_comment>username_1: Turns out this was due to my instance having been created with this:
```
# enable termination protection
disable_api_termination = true
```
This will apparently prevent normal termination behavior from terraform.
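If you hit this, one way out (a sketch: flip the flag off, apply, then destroy) is:
```
# allow Terraform to terminate the instance again
disable_api_termination = false
```
Then run `terraform apply` followed by `terraform destroy`.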
Upvotes: 3 [selected_answer]<issue_comment>username_2: Had the same issue but mine was because I had deletion protection turned on for a Load Balancer
```
enable_deletion_protection = true
```
I just logged into AWS and manually disabled by editing the Load Balancer.
[](https://i.stack.imgur.com/i29k0.png)
I suppose you could `terraform apply` the change and then `terraform destroy` everything but if you're seeing this error you've most likely destroyed most of your infrastructure already and you don't want to re-create it all just to allow you to destroy it all again.
Upvotes: 2
|
2018/03/14
| 1,885 | 7,849 |
<issue_start>username_0: I am making an app which has a login page. I am using the default Django username and password fields. I want to add some CSS to it. How can we modify the default style and make it look good?
My HTML is of this form
```
{% load staticfiles %}
Purple Admin
### Login
Username:
{{ form.username }}
Password:
{{ form.password }}
{% if form.errors %}
Your username and/or password didn't match. Please try again.
{% endif %}
[Forgot password](#)
Login
Don't have an Account? [Sign Up](#)
```
Variables `{{ form.username }}` and `{{ form.password }}` have default styling applied to them. I just want to add a CSS class to them. How can I do that in this template itself?<issue_comment>username_1: You could set the class attribute in the form widget of each field (see the [django docs](https://docs.djangoproject.com/en/1.11/ref/forms/widgets/#django.forms.Widget.attrs)).
I ended up using [django-bootstrap4](https://github.com/zostera/django-bootstrap4) and there are other similar packages you can find through google.
Other options were discussed in [this post](https://stackoverflow.com/questions/401025/define-css-class-in-django-forms) for example.
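For example, a minimal sketch of that approach for the login form (the `form-control` class name is just an assumption for your CSS):
```
from django import forms
from django.contrib.auth.forms import AuthenticationForm

class StyledAuthenticationForm(AuthenticationForm):
    # override the default fields to attach a CSS class via widget attrs
    username = forms.CharField(
        widget=forms.TextInput(attrs={'class': 'form-control'}))
    password = forms.CharField(
        widget=forms.PasswordInput(attrs={'class': 'form-control'}))
```
Then point your login view at it (e.g. `LoginView.as_view(authentication_form=StyledAuthenticationForm)` in recent Django versions).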
Upvotes: 0 <issue_comment>username_2: Let's say that you want to add a class or placeholder like:
```
<input type="text" class="SomeClass" placeholder="TypeSomething">
```
in your forms.py import your model and then:
```
class YourModelForm(forms.ModelForm):
    class Meta:
        model = YourModel
        # if you want to use all fields then set fields = '__all__' (without the [])
        fields = ['Username']

    username = forms.CharField(
        widget=forms.TextInput(attrs={
            'class': 'SomeClass',
            'autofocus': True,
            'placeholder': 'TypeSomething'
        })
    )
```
and finally in your views.py, import your form and add it to form_class:
```
class YourLoginView(FormView):
    form_class = YourModelForm
```
I have this snipped if you want for the login/logout views (you need to import your own form and put it in form\_class):
```
from django.utils.http import is_safe_url
from django.contrib.auth import REDIRECT_FIELD_NAME, login as auth_login
from django.contrib.auth.views import LogoutView
from django.utils.decorators import method_decorator
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_protect
from django.views.decorators.debug import sensitive_post_parameters
from django.views.generic import FormView
class LoginView(FormView):
"""
Provides the ability to login as a user with a username and password
"""
template_name = 'login.html'
success_url = '/'
form_class = AuthenticationForm
redirect_field_name = '/'
@method_decorator(sensitive_post_parameters('password'))
@method_decorator(csrf_protect)
@method_decorator(never_cache)
def dispatch(self, request, *args, **kwargs):
# Sets a test cookie to make sure the user has cookies enabled
request.session.set_test_cookie()
return super(LoginView, self).dispatch(request, *args, **kwargs)
def form_valid(self, form):
auth_login(self.request, form.get_user())
# If the test cookie worked, go ahead and
# delete it since its no longer needed
if self.request.session.test_cookie_worked():
self.request.session.delete_test_cookie()
return super(LoginView, self).form_valid(form)
def get_success_url(self):
redirect_to = self.request.GET.get(self.redirect_field_name)
if not is_safe_url(url=redirect_to, host=self.request.get_host()):
redirect_to = self.success_url
return redirect_to
class Logout(LogoutView):
"""
Provides users the ability to logout
"""
template_name = 'logout.html'
```
And for the AuthenticationForm:
```
import unicodedata
from django import forms
from django.contrib.auth import (
authenticate, get_user_model, password_validation,
)
from django.utils.translation import gettext, gettext_lazy as _
from django.utils.text import capfirst
UserModel = get_user_model()
class UsernameField(forms.CharField):
def to_python(self, value):
return unicodedata.normalize('NFKC', super().to_python(value))
class AuthenticationForm(forms.Form):
"""
(This is a modified version of the original:
django.contrib.auth.forms.AuthenticationForm)
Base class for authenticating users. Extend this to get a form that accepts
username/password logins.
"""
username = UsernameField(
label=_("Username"),
max_length=32,
widget=forms.TextInput(attrs={
'autofocus': True,
'placeholder': _('Type your username')
}
),
)
password = forms.CharField(
label=_("Password"),
strip=False,
widget=forms.PasswordInput(attrs={
'class': 'form-control',
'placeholder': _('Password')
}
),
)
error_messages = {
'invalid_login': _(
"Please enter a correct %(username)s and password. Note that both "
"fields may be case-sensitive."
),
'inactive': _("This account is inactive."),
}
def __init__(self, request=None, *args, **kwargs):
"""
The 'request' parameter is set for custom auth use by subclasses.
The form data comes in via the standard 'data' kwarg.
"""
self.request = request
self.user_cache = None
super().__init__(*args, **kwargs)
# Set the label for the "username" field.
self.username_field = UserModel._meta.get_field(
UserModel.USERNAME_FIELD)
if self.fields['username'].label is None:
self.fields['username'].label = capfirst(
self.username_field.verbose_name)
def clean(self):
username = self.cleaned_data.get('username')
password = self.cleaned_data.get('password')
if username is not None and password:
self.user_cache = authenticate(
self.request, username=username, password=password)
if self.user_cache is None:
# An authentication backend may reject inactive users. Check
# if the user exists and is inactive, and raise the 'inactive'
# error if so.
try:
self.user_cache = UserModel._default_manager.get_by_natural_key(
username)
except UserModel.DoesNotExist:
pass
else:
self.confirm_login_allowed(self.user_cache)
raise forms.ValidationError(
self.error_messages['invalid_login'],
code='invalid_login',
params={'username': self.username_field.verbose_name},
)
else:
self.confirm_login_allowed(self.user_cache)
return self.cleaned_data
def confirm_login_allowed(self, user):
"""
Controls whether the given User may log in. This is a policy setting,
independent of end-user authentication. This default behavior is to
allow login by active users, and reject login by inactive users.
If the given user cannot log in, this method should raise a
``forms.ValidationError``.
If the given user may log in, this method should return None.
"""
if not user.is_active:
raise forms.ValidationError(
self.error_messages['inactive'],
code='inactive',
)
def get_user_id(self):
if self.user_cache:
return self.user_cache.id
return None
def get_user(self):
return self.user_cache
```
Upvotes: 1
|
2018/03/14
| 1,014 | 3,898 |
<issue_start>username_0: Anyone else experiencing issues with the status bar always being grey since using sdk 26 or 27? I recently updated Android Studio and using 27 as my target sdk and now every Activity that is launched has a grey status bar. I have not changed the styles or themes, yet they are always grey.
Another interesting development is my MainActivity has a navigation drawer there and it has no issues rendering a translucent status over my primary colour. But every Activity I launch from there always has the grey status bar. I am completely lost as to why but I am assuming it's the SDK version since I had no issues before.
Every Activity references this style, which used to work for both the MainActivity and all other Activities:
```
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
<item name="android:windowTranslucentStatus">true</item>
```
I can fix the grey problem by removing the translucent flag and setting the status bar colour manually in xml. However this now breaks the MainActivity's navigation drawer opening under the translucent status bar (it's just the solid colour). In a perfect world I would want the translucent status bar over my primary colour in all activities.
Any help would be appreciated!
**UPDATE:**
I can confirm the issue manifests itself if I use SDK 27 and update the support dependencies with it. If I go back to SDK 26 versions, the problem seems to go away. Is there something specifically related to 27 that we need to be aware of?<issue_comment>username_1: Add this line in your styles.xml(v21):
```
<item name="android:statusBarColor">@color/PrimaryDark</item>
```
It may solve your problem.
Upvotes: 3 <issue_comment>username_2: As I stated in my comment, I was having a similar problem that only started to manifest itself in API27. The status bar isn't specifically grey - it's just transparent.
My particular solution was that the `styles.xml` in my `values-v21` folder was as follows :-
```
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
<item name="android:statusBarColor">@android:color/transparent</item>
```
Removing the `android:statusBarColor` row fixed the issue for me.
The reason I'm posting this as an answer is that in earlier API versions this iffy line had not affected the status bar - which is probably a separate issue in itself. It seems that that issue has been fixed in API27 which is why the status bar is now appearing transparent.
Upvotes: 2 <issue_comment>username_3: Same problem with API 27.
In styles.xml(v21) I have:
```
<item name="windowActionBar">false</item>
<item name="windowNoTitle">true</item>
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
<item name="android:windowTranslucentStatus">true</item>
<item name="android:windowActivityTransitions">true</item>
```
Then I added this, in root layout of my application:
```
android:background="@color/colorPrimary"
android:fitsSystemWindows="true"
```
That solved my problem.
**I'm using a second `CoordinatorLayout` that wrap the AppBar as suggested in [this answer](https://stackoverflow.com/a/35889189/8380417) for not showing the AppBar's shadow over the Status Bar**
As I can see, the issue is that the root `CoordinatorLayout` is drawn behind the StatusBar with white as its background color, but the AppBar is not.
**Edit:** After some more tests I can say that using this in styles.xml(v21):
```
<item name="android:statusBarColor">@android:color/transparent</item>
<item name="android:windowDrawsSystemBarBackgrounds">true</item>
```
and using in my root layout:
```
android:background="@color/colorPrimaryDark"
```
results in a better color feeling.
Upvotes: 1 <issue_comment>username_4: It still doesn't work in SDK 28, but setting the background is a good fix for this problem, e.g.:
```
android:background="@color/colorPrimaryDark"
```
Upvotes: 1
|
2018/03/14
| 323 | 1,279 |
<issue_start>username_0: I cannot for the life of me figure out the table name that would enable me to see the approval queue.
There is time pending approval in the approval process "TLReportedTime" and I'm looking for a table to build a report so that I can see **who** the approval is pending with...<issue_comment>username_1: (PS\_)EOAW\_USERINST is the table that stores the information with who a step's approval is pending with.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To elaborate on my comment, in the more generic case, rather than @Robin's correct answer, you want to check the HTML `input name` via Inspect Element in Browser.
Taking a screen I am currently looking at and using right-click `Inspect` in browser, you get my screenshot.
The field and table names are concatenated and separated with a `_` which is perhaps not the best choice as underscores are widely used in names.
`TL_RUN_CTRL_GRP` and `EMPLID` are table and field names, respectively.
[](https://i.stack.imgur.com/9laOw.png)
Take note: depending on specific PeopleSoft page, table names can sometimes be those of in-memory-only work tables, which is going to be of limited use for writing queries.
Upvotes: 1
|
2018/03/14
| 1,720 | 6,863 |
<issue_start>username_0: I want to parallelise my application. I am using Tomcat 8 to deploy my web application, with the default Tomcat settings (HTTP Connector thread count 200 and default JVM settings). I want to use CompletableFuture in Java to complete the tasks in parallel. For example, if I have 3 tasks TASK1, TASK2, TASK3, then instead of executing them in sequence I want to execute each task in a separate thread using CompletableFuture and combine the results.
My question is: at any point in time Tomcat receives 200 requests; how many threads are safe to create in the Executor?
Is 600 a good number for Executors.newFixedThreadPool(600)? At any point in time I get 200 requests and have three parallel tasks to complete, so theoretically I need a minimum of 600 threads. I feel creating more threads may hurt performance.<issue_comment>username_1: How many threads you can create depends on many factors, mainly on the specifications of the machine and the OS.
This [answer](https://stackoverflow.com/questions/763579/how-many-threads-can-a-java-vm-support) says:
>
> This depends on the CPU you're using, on the OS, on what other processes are doing, on what Java release you're using, and other factors. I've seen a Windows server have > 6500 Threads before bringing the machine down.
>
>
>
I personally have used almost 1000 threads, and the performance of my machine was still good.
About using `Executors.newFixedThreadPool(600)`: you have to analyze whether that is the executor type that best fits your application's characteristics and needs.
Here you can see a comparison between `FixedThreadPool` and `CachedThreadPool`:
[FixedThreadPool vs CachedThreadPool: the lesser of two evils](https://stackoverflow.com/questions/17957382/fixedthreadpool-vs-cachedthreadpool-the-lesser-of-two-evils)
If the fixed thread pool (of 600) would have most of its threads idle most of the time, you can use the cached thread pool, which creates as many threads as necessary and then keeps them for a certain time, as long as they continue to be used. You would probably benefit from the fixed thread pool if you have the 200 requests executing the 3 tasks constantly.
You can also get `CachedThreadPool` behaviour with a maximum number of threads by constructing the `ThreadPoolExecutor` yourself.
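A sketch of that (this mirrors how `newCachedThreadPool` is built internally, just with the cap changed):
```
import java.util.concurrent.*;

// cached-style pool capped at 600 threads, with a 60s idle keep-alive;
// note: with a SynchronousQueue, submissions are rejected once all
// 600 threads are busy
ExecutorService bounded = new ThreadPoolExecutor(
        0, 600, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());
```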
On the other hand, if most of the tasks are short, you can use `Executors.newWorkStealingPool()`. This ensures that your available CPU cores are always working, setting the level of parallelism with `Runtime.getRuntime().availableProcessors()`; if some thread finishes its work, it can steal tasks from another thread's queue.
You can see more about `ForkJoinPool` and `Executors.newWorkStealingPool()` (note: `newWorkStealingPool` uses `ForkJoinPool` internally):
[Detailed difference between Java8 ForkJoinPool and Executors.newWorkStealingPool?](https://stackoverflow.com/questions/41337451/detailed-difference-between-java8-forkjoinpool-and-executors-newworkstealingpool)
Upvotes: 3 [selected_answer]<issue_comment>username_2: [The Answer](https://stackoverflow.com/a/49286375/642706) by <NAME> is correct and smart. A bit more explanation here.
There is no hard-and-fast rule to follow. As others said, it depends on many factors such as the nature of the particular tasks, their duration, how CPU-intensive the tasks versus how often the tasks may be waiting on resources, how the thread scheduler works in your host OS and your JVM, the nature of your CPU, how many cores in your CPU, and more.
Keep in mind that [scheduling](https://en.wikipedia.org/wiki/Thread_(computing)#Scheduling) of [threads](https://en.wikipedia.org/wiki/Thread_(computing)) is not free-of-cost. There is overhead in scheduling the the threads for their moments of execution. There is expense in changing between threads, the [context switch](https://en.wikipedia.org/wiki/Context_switch). [Hyper-threading](https://en.wikipedia.org/wiki/Hyper-threading) is a hardware feature to lessen that cost of a context switch, but even then, switching between more than two threads for one core is back to a full context-switch.
So, it is naïve to think “the more the merrier” with threads. Quite often, you may find that *fewer* threads is more performant than too-many threads with too many context-switches.
Generally speaking, hundreds of active threads may be counter-productive, unless those threads spend most of their time doing nothing. And remember, your app (Tomcat + web apps) is not the only source of active threads on the host machine. The OS and other apps are likely running dozens of somewhat-active threads, and a few busier threads (local I/O, networking, and such).
For example, if you have a 4 core CPU with hyper-threading enabled, that means 8 logical cores, so you might expect to be using 5 or so for your dedicated Tomcat machine. If your threads were busy (CPU intensive) about a third of the time, then you might want to start with a thread pool of about 12-20. If the threads are busy only less than 5-10% of the time, then maybe pool of 100. Then monitor the real-world performance and see how it goes. If all the cores are blasting away at 100% utilization for minutes at a time, then you may be over-subscribing and might want to decrease the size of your thread-pool.
Regarding duration, if the threads are short-lived, but you may have a great many of them during peak times of usage on your server, you may need to keep the pool smaller to avoid too many threads clamoring for the CPU simultaneously.
If you have many threads and each is very busy with the *CPU* such as encryption or codec, then you would want a thread pool size limited to less than the number of *physical* cores. For our example above, limit the pool to two or three of the four physical cores (8 logical hyper-threaded cores), leaving physical core(s) open for processes of the OS or other apps. Indeed, if you really have such very CPU-intensive tasks, you might consider disabling hyper-threading on your deployment computers. Hyper-threaded pairs of logical cores per physical core do *not* execute simultaneously, they trade-off back-and-forth with less cost for the context switch but not *zero* cost. If your tasks are extremely CPU-intensive (quite rare in common business apps), with no waiting-on-resource rest breaks, then there may be no benefit to hyper-threading.
Of course, you cannot know exactly the numbers discussed above as demanded by your particular web app(s) with your particular deployment. Trial-and-error is the only way to go.
Tip: Rather than hard-code your thread-pool sizes, you may wish to externalize the size of your thread pools to allow you to make changes at deployment-time. Perhaps set values to be retrieved through [JNDI](https://en.wikipedia.org/wiki/Java_Naming_and_Directory_Interface) or some other external source.
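As a concrete sketch of that tip combined with the `CompletableFuture` fan-out from the question (the `app.poolSize` property name and the task bodies are invented for illustration):
```
import java.util.concurrent.*;

public class TaskFanOut {
    // pool size read from a system property so it can be tuned at
    // deployment time, e.g. java -Dapp.poolSize=600 ...
    private static final int POOL_SIZE = Integer.getInteger("app.poolSize", 600);
    private static final ExecutorService POOL = Executors.newFixedThreadPool(POOL_SIZE);

    static String handleRequest() throws Exception {
        CompletableFuture<String> t1 = CompletableFuture.supplyAsync(TaskFanOut::task1, POOL);
        CompletableFuture<String> t2 = CompletableFuture.supplyAsync(TaskFanOut::task2, POOL);
        CompletableFuture<String> t3 = CompletableFuture.supplyAsync(TaskFanOut::task3, POOL);
        // combine the three results once all of them are done
        return CompletableFuture.allOf(t1, t2, t3)
                .thenApply(v -> t1.join() + " " + t2.join() + " " + t3.join())
                .get(10, TimeUnit.SECONDS); // bound the wait
    }

    static String task1() { return "r1"; }
    static String task2() { return "r2"; }
    static String task3() { return "r3"; }
}
```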
Upvotes: 2
|
2018/03/14
| 1,252 | 3,784 |
<issue_start>username_0: Was anybody wondering how to make these weird time strings in BigQuery look like the ones in Google Analytics?
They consist of `[any amount of hours]:[minutes]:[seconds]`, which is why normal time functions don't really work.
Examples:
* 85667:34:02
* 260:59:34
* 02:01:01<issue_comment>username_1: I'm using a temporary function (which expects seconds) to concatenate 3 values:
* hours: cut off minutes and seconds by dividing by 3600
* minutes: cut off seconds with division and floor; cut off hours with modulo 60
* seconds: cut off minutes and hours with modulo 60

All resulting integers are formatted to two digits.
Example code:
-------------
```sql
CREATE TEMP FUNCTION time_str(seconds INT64) AS (
FORMAT("%02d:%02d:%02d",
DIV(seconds,3600),
MOD(DIV(seconds,60) , 60),
MOD(seconds, 60)
)
);
SELECT
device.browser,
time_str(SUM(totals.timeOnSite)) timeOnSite
FROM
`google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY 1
ORDER BY SUM(totals.timeOnSite) DESC
```
Hope that helps!
Would be cool if anyone finds a solution with fewer functions involved and posts it here.
Thanks [Elliott](https://stackoverflow.com/users/6253347/elliott-brossard) and [Mikhail](https://stackoverflow.com/users/5221944/mikhail-berlyant) for the hints!
Upvotes: 2 <issue_comment>username_2: >
> Would be cool if anyone finds a solution with fewer functions involved and posts it here
>
>
>
The only thing that (always) makes me a little dizzy is the use of those `CAST`s, `FLOOR`s and `MOD`s.
So, below is a slightly more aesthetic (from my perspective) version of that transformation to the GA format.
A simple example first - I think it is self-explanatory:
```sql
#standardSQL
WITH `table` AS (
SELECT 1925 AS seconds UNION ALL
SELECT 3600 UNION ALL
SELECT 86430 UNION ALL
SELECT 1111111925
)
SELECT seconds,
FORMAT('%02d:%s', hours, FORMAT_TIMESTAMP('%M:%S', ts)) as ga_style
FROM `table`,
UNNEST([STRUCT(
TIMESTAMP_SECONDS(seconds) AS ts,
TIMESTAMP_DIFF(TIMESTAMP_SECONDS(seconds), TIMESTAMP_SECONDS(0), HOUR) AS hours
)])
```
with result as
```sql
Row seconds ga_style
1 1,925 00:32:05
2 3,600 01:00:00
3 86,430 24:00:30
4 1,111,111,925 308642:12:05
```
Finally, below is above logic wrapped with SQL UDF and applied to the same script as in OP's answer
```sql
CREATE TEMP FUNCTION time_str(seconds INT64) AS ((
SELECT FORMAT('%02d:%s', hours, FORMAT_TIMESTAMP('%M:%S', ts))
FROM UNNEST([STRUCT(
TIMESTAMP_SECONDS(seconds) AS ts,
TIMESTAMP_DIFF(TIMESTAMP_SECONDS(seconds), TIMESTAMP_SECONDS(0), HOUR) AS hours
)])
));
SELECT
device.browser,
time_str(SUM(totals.timeOnSite)) timeOnSite
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY 1
ORDER BY SUM(totals.timeOnSite) DESC
```
or to simplify even more:
```sql
CREATE TEMP FUNCTION time_str(seconds INT64) AS (
FORMAT('%02d:%s',
TIMESTAMP_DIFF(TIMESTAMP_SECONDS(seconds), TIMESTAMP_SECONDS(0), HOUR),
FORMAT_TIMESTAMP('%M:%S', TIMESTAMP_SECONDS(seconds))
)
);
SELECT
device.browser,
time_str(SUM(totals.timeOnSite)) timeOnSite
FROM `google.com:analytics-bigquery.LondonCycleHelmet.ga_sessions_20130910`
GROUP BY 1
ORDER BY SUM(totals.timeOnSite) DESC
```
with result as
```sql
Row browser timeOnSite
1 Chrome 00:32:05
2 Firefox 00:12:40
3 Android Browser 00:05:04
4 Safari 00:03:28
5 Internet Explorer 00:00:26
```
Note: even though my focus was mostly on the aesthetic aspect - meanwhile this version has 6 functions vs. 8 in the original post - if it really matters :o)
Upvotes: 2
|
2018/03/14
| 429 | 1,542 |
<issue_start>username_0: Right now I put `console.log(something)` where I want to see what's happening in those places where I put it. When I see it's fine, I remove it.
How do I better implement logging functionality which I can easily turn on/off, e.g. with a start command line parameter `node app.js debug=true`?<issue_comment>username_1: you can do it like this:
somewhere in your code:
```
const debug = true
```
then:
```
if(debug){
console.log(yourvar)
}
```
or like someone suggested:
```
function debugMe(debug, yourvar){
if(debug){
console.log(yourvar)
}
}
```
wrap it in a function
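To drive the flag from the command line as asked (`node app.js debug=true`), a minimal sketch:
```
// read the flag from the command line or from the environment
const debug = process.argv.includes('debug=true') || process.env.DEBUG === 'true';

function log(...args) {
  if (debug) {
    console.log(...args);
  }
}

log('only printed when debug is enabled');
```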
Upvotes: 1 <issue_comment>username_2: Your problem can be solved by using different logging levels. Ideally the log statements that you delete after development can be assigned to DEBUG level. So they will output only when the app is running in debug mode.
Most 3rd party loggers support this. You can check out [Winston](https://github.com/winstonjs/winston)
Upvotes: 0 <issue_comment>username_3: The easiest thing you could do is use [Winston](https://github.com/winstonjs) or any similar Node package.
Add this code as a logger-service and wrap it in a function
```
const logger = winston.createLogger({
transports: [
new winston.transports.Console()
]
});
```
Require this logger service's function and log using any of the below formats.
```
logger.log({
level: 'info',
message: 'Hello distributed log files!'
});
logger.info('Hello again distributed logs');
```
Hope this helps.
Upvotes: 0
|
2018/03/14
| 597 | 2,198 |
<issue_start>username_0: I have this code where I have defined two classes using generics.
1. Section which can have a generic type of data.
2. Config, which uses a kind of builder pattern and stores a list of such sections.
On running this code I get a compilation error and I am nowhere near understanding why; I have specified the type.
Error: incompatible types: java.util.List<Section<String>> cannot be converted to java.util.List<Section<Object>>
```
public class Main {
    public static void main(String[] args) {
        Section<String> section = new Section<>("wow");
        List<Section<String>> sections = new ArrayList<>();
        sections.add(section);
        Config config = new Config<>().setSections(sections);
    }

    public static class Section<T> {
        private T data;

        public Section(T data) {
            this.data = data;
        }

        public T getData() {
            return data;
        }
    }

    public static class Config<T> {
        private List<Section<T>> sections;

        public Config() {
        }

        public Config<T> setSections(List<Section<T>> sections) {
            this.sections = sections;
            return this;
        }
    }
}
}
```<issue_comment>username_1: you can do it like this:
somewhere in your code:
```
const debug = true
```
then:
```
if(debug){
console.log(yourvar)
}
```
or like someone suggested:
```
function debugMe(debug, yourvar){
if(debug){
console.log(yourvar)
}
}
```
wrap it in a function
Upvotes: 1 <issue_comment>username_2: Your problem can be solved by using different logging levels. Ideally the log statements that you delete after development can be assigned to DEBUG level. So they will output only when the app is running in debug mode.
Most 3rd party loggers support this. You can check out [Winston](https://github.com/winstonjs/winston)
Upvotes: 0 <issue_comment>username_3: Easiest you could do is use [Winston](https://github.com/winstonjs) or any similar node packages
Add this code as a logger-service and wrap it in a function
```
const logger = winston.createLogger({
transports: [
new winston.transports.Console()
]
});
```
Require this logger service's function and log using any of the below formats.
```
logger.log({
level: 'info',
message: 'Hello distributed log files!'
});
logger.info('Hello again distributed logs');
```
Hope this helps.
Upvotes: 0
|
2018/03/14
| 486 | 1,754 |
<issue_start>username_0: I am learning A\* algorithm and dijkstra algorithm. And found out the only difference is the Heuristic value it used by A\* algorithm. But how can I get these Heuristic value in my graph?. I found a example graph for A\* Algorithm(From A to J). Can you guys help me how these Heuristic value are calculated.[](https://i.stack.imgur.com/QgEI5.png)
The RED numbers denote the heuristic values.
My current problem is in creating maze escape.
|
2018/03/14
| 921 | 3,609 |
<issue_start>username_0: For a project I am working on I created a trigger to run my makeCopy() function at the beginning of every year. The goal is to make a copy of a spreadsheet and save it in a folder already created in my drive. In this function I also create a new folder for the new year.
```
function makeCopy(){
var sheet = SpreadsheetApp.getActiveSpreadsheet();
var d = new Date();
var year = d.getYear();
var FolderName = "Time Off Request - " + year;
DriveApp.createFolder(FolderName);
year = year - 1;
var FileName = "Copy of Time Off Request - " + year;
Logger.log(FolderName);
FolderName = "Time Off Request - " + year;
var FolderID = DriveApp.getFoldersByName(FolderName).next().getId();
Logger.log(FolderID);
var destFolder = DriveApp.getFolderById(FolderID);
Logger.log(destFolder);
DriveApp.getFileById(sheet.getId()).makeCopy(FileName, destFolder);
}
```
This code actually ran and worked the first time. However, every time I try to test it again I get the error, "Cannot retrieve the next object: iterator has reached the end. " On the line where I try to get my folder ID. How can I fix this error and get the folder ID of the folder in my drive so I can make a copy of the spreadsheet to that folder.<issue_comment>username_1: You get that specific error, `Cannot retrieve the next object: iterator has reached the end.`, because the search for that folder name yielded an empty result set.
If you alter your program structure to only use `next()` if the iterator's `hasNext()` method returns true, you will not get that error anymore. However, such a change will not make your search term suddenly find results. It does, however, allow for making repeated attempts with different search queries.
I suggest renaming your variables to be expressive of their intent:
```
// Copy the current file into last year's folder, and create this year's folder.
function makeCopy() {
var now = new Date();
var thisYear = now.getYear();
var lastYear = thisYear - 1;
var query = "Time Off Request - " + lastYear;
var folderIterator = DriveApp.getFoldersByName(query);
// Instead of throwing an error, you could modify this to be a while loop
// that requests a search term from the user - see Browser/UI and inputBox
// and updates folderIterator. If you do that, you'll want to add guards
// that ensure the correct folder will be the first result.
if (!folderIterator.hasNext()) {
throw new Error("Bad search term - no folders with name '" + query + "' exist!");
}
var lastYearFolder = folderIterator.next();
console.log({message: "Backup of Time Off Requests",
query: query,
destFolder: lastYearFolder.getName()}
); // View->Stackdriver Logging
var sheet = SpreadsheetApp.getActiveSpreadsheet();
var backupName = "Copy of Time Off Request - " + lastYear;
DriveApp.getFileById(sheet.getId()).makeCopy(backupName, lastYearFolder);
// Before exiting, now that the backup is done, create this year's folder.
DriveApp.createFolder("Time Off Request - " + thisYear);
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: [createFolder](https://developers.google.com/apps-script/reference/drive/drive-app#createfoldername) returns a Folder object, but the code didn't take advantage of this. Instead of
```
DriveApp.createFolder(FolderName);
```
Use
```
var myFolder = DriveApp.createFolder(FolderName);
```
Then to get the folder id use [getId](https://developers.google.com/apps-script/reference/drive/folder#getid)
```
var FolderID = myFolder.getId();
```
Upvotes: 0
|
2018/03/14
| 1,734 | 5,800 |
<issue_start>username_0: I have an array of objects
```
var arrayOfObjects: [Object]?
```
And they all have a property called `depth`. I want to find the **very next** object in that array that has the same depth as a specific object whose index I know:
```
[
...objects_before...,
object_I_know: {depth:3},
...objects_after...
]
```
Is there a more efficient way other than using a for loop starting from the `object_I_know` index, and traversing down until it finds one?<issue_comment>username_1: ```
class Object {
var name: String
var depth: Float
init(name: String, depth: Float) {
self.name = name
self.depth = depth
}
}
let o1 = Object(name: "object1", depth: 10)
let o2 = Object(name: "object2", depth: 12)
let o3 = Object(name: "object3", depth: 4)
let o4 = Object(name: "object4", depth: 12)
let o5 = Object(name: "object5", depth: 14)
let array = [o1, o2, o3, o4, o5]
let knownIndex = 1
let knownDepth = array[knownIndex].depth
var searchResults = [Object]()
// iterate through the second half of the array after the known
// index and break the loop when a match is found
for i in knownIndex + 1..<array.count {
    if array[i].depth == knownDepth {
        searchResults.append(array[i])
        break
    }
}

if searchResults.count > 0 {
    print("match found: \(searchResults[0].name)")
} else {
    print("no match found")
}
```
`index(where:)` uses a loop also, unbeknownst to the commenter, except that the compiler does it for you behind the scenes. `index(where:)` also loops through the entire array which is not very efficient if you already know the starting index (which OP does).
Upvotes: 0 <issue_comment>username_2: ```
let nextIndex: Int? = (givenIndex + 1 ..< array.endIndex).first { index in
    return array[index].depth == array[givenIndex].depth
}
```
The item with the object with the same depth would be at that `nextIndex` if there is one
```
let nextObject: Object? = (nextIndex == nil) ? nil : array[nextIndex!]
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Here's a sample model I came up with for testing:
```
struct S {
let id: Int
let depth: Int
}
var id = 0
let getID: () -> Int = { defer { id += 1 }; return id }
let objects = [
S(id: getID(), depth: 1),
S(id: getID(), depth: 3),
S(id: getID(), depth: 2),
S(id: getID(), depth: 3),
S(id: getID(), depth: 4),
]
```
Here's a solution that account for the situations in which there are no elements which match the predicate, or only 1 such element:
```
let isDepth3: (S) -> Bool = { $0.depth == 3 }
// Get the index of the first item (can be nil)
let indexOfFirstDepth3 = objects.index(where: isDepth3)
// Get the index after that (can be nil), so that we can exclude everything before it
let firstIndexOfRemainingItems = indexOfFirstDepth3.flatMap { objects.index($0, offsetBy: +1, limitedBy: objects.endIndex) }
let indexOfSecondDepth3 = firstIndexOfRemainingItems.flatMap {
// Slice the `objects` array, to omit all the items before up to and including the first depth 3 item.
// Then find the index of the next next 3 item thereafter.
return objects[$0...].index(where: isDepth3)
}
// Print results
func stringifyOptional<T>(_ item: T?) -> String {
return item.map{ String(describing: $0) } ?? "nil"
}
print("First item with depth 3 is \(stringifyOptional(indexOfFirstDepth3.map{ objects[$0] })) at index \(stringifyOptional(indexOfFirstDepth3))")
print("Second item with depth 3 is \(stringifyOptional(indexOfSecondDepth3.map{ objects[$0] })) at index \(stringifyOptional(indexOfFirstDepth3))")
```
If you're ***sure*** that you'll have 2 such elements, and you're sure that force unwrapping will be safe, then this can be simplified dramatically:
```
let isDepth3: (S) -> Bool = { $0.depth == 3 }
let indexOfFirstDepth3 = objects.index(where: isDepth3)!
let indexOfSecondDepth3 = objects[indexOfFirstDepth3...].index(where: isDepth3)!
// Just printing the result
print("First item with depth 3 is \(objects[indexOfFirstDepth3]) at index \(indexOfFirstDepth3)")
print("Second item with depth 3 is \(objects[indexOfFirstDepth3])) at index \(indexOfFirstDepth3)")
```
Upvotes: 0 <issue_comment>username_4: **Context**
```
struct DepthObject { let depth: Int }
let objs = [a, b, c, d ,e]
let index = 1 //predetermined index
let depthToFind = objs[index].depth
let startIndex = index + 1
let remainingArray = objs[startIndex...] //The slice we want to work with
```
**One way**
```
let aMessage: String? = remainingArray
.first { $0.depth == depthToFind }
.flatMap { "The world is yours \($0)" }
```
**Decide based on it**
```
if let nextDepthObject = remainingArray.first(where: { $0.depth == depthToFind }) {
//Found the next one!
} else {
//Didn't find it!
}
```
**Loop it**
```
var nextDepthObject: DepthObject? = nil
for sliceDepthObject in remainingArray {
if sliceDepthObject.depth == depthToFind {
nextDepthObject = sliceDepthObject
break
}
}
```
**Implementing a particular approach**
```
func nextDepthObject(within array: [DepthObject], startingAt index: Int) -> DepthObject? {
guard index + 1 < array.count && index < array.count else {
return nil
}
let depthToFind = array[index].depth
let suffixArray = array[(index + 1)...]
return suffixArray.first { $0.depth == depthToFind }
}
let theNextOne: DepthObject? = nextDepthObject(within: objs, startingAt: index)
```
Upvotes: 0 <issue_comment>username_5: You can add an extension over `Collection` (which `Array` conforms to):
```
extension Collection {
    func next(startingWith next: Self.Index, where match: (Element) -> Bool) -> Element? {
        guard next < endIndex else { return nil }
        return self[next...].first(where: match)
    }
}
```
You'd use it like this:
```
let nextMatch = arrayOfObjects.next(startingWith: foundIndex+1) { $0.depth == searchedDepth }
```
Upvotes: 0
|
2018/03/14
| 3,432 | 10,604 |
<issue_start>username_0: So I'm creating an app that uses the Google Places API to gather a list of restaurants. Using the API provides me with a JSON file full of details about each location like lat, long, rating, priceLevel, openNow, photos, etc. Below is the code used to gather said JSON from my given parameters:
```
func performGoogleSearch(radius: Double, type: String, price: Int ) {
let location = locationManager.location?.coordinate
let url: URL = URL(string: "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=\(location?.latitude ?? 0),\(location?.longitude ?? 0)&radius=\(radius * 1609.34)&type=restaurant&maxprice=\(price)&key=<KEY>")!
let task = URLSession.shared.dataTask(with: url) {(data, response, error) in
print(NSString(data: data!, encoding: String.Encoding.utf8.rawValue))
}
task.resume()
}
```
The data returned is a JSON like the following:
```
{
"html_attributions" : [],
"results" : [
{
"geometry" : {
"location" : {
"lat" : 37.7867167,
"lng" : -122.4111737
},
"viewport" : {
"northeast" : {
"lat" : 37.7881962302915,
"lng" : -122.4098846697085
},
"southwest" : {
"lat" : 37.7854982697085,
"lng" : -122.4125826302915
}
}
},
"icon" : "https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png",
"id" : "3344890deedcb97b1c2d64814f92a02510ba39c8",
"name" : "<NAME>",
"opening_hours" : {
"open_now" : false,
"weekday_text" : []
},
"photos" : [
{
"height" : 900,
"html_attributions" : [
"\u003ca href=\"https://maps.google.com/maps/contrib/114937580614387417622/photos\"\u003eClift San Francisco\u003c/a\u003e"
],
"photo_reference" : "<KEY>",
"width" : 1155
}
],
"place_id" : "ChIJFUBxSY6AhYARwOaLV7TsLjw",
"price_level" : 4,
"rating" : 4.1,
"reference" : "<KEY>",
"scope" : "GOOGLE",
"types" : [
"night_club",
"bar",
"lodging",
"restaurant",
"food",
"point_of_interest",
"establishment"
],
"vicinity" : "495 Geary Street, San Francisco"
},
{
"geometry" : {
"location" : {
"lat" : 37.78988329999999,
"lng" : -122.4091511
},
"viewport" : {
"northeast" : {
"lat" : 37.79135163029149,
"lng" : -122.4078268197085
},
"southwest" : {
"lat" : 37.78865366970849,
"lng" : -122.4105247802915
}
}
},
"icon" : "https://maps.gstatic.com/mapfiles/place_api/icons/lodging-71.png",
"id" : "547ceb15210b70b8734500183410bb10c644c395",
"name" : "<NAME> France",
"opening_hours" : {
"open_now" : true,
"weekday_text" : []
},
"photos" : [
{
"height" : 315,
"html_attributions" : [
"\u003ca href=\"https://maps.google.com/maps/contrib/114711934191765864568/photos\"\u003eCornell Hotel De France\u003c/a\u003e"
],
"photo_reference" : "CmRaAAAAJ3kTtFbeGT-8NWKbf9TPlN6gL6daO5zKq9DNZnzShZ-CcPUJnxMaVZybHZ0sGefM7<KEY>",
"width" : 851
}
],
"place_id" : "ChIJs6F3JYyAhYARDiVdBrmivCs",
"price_level" : 4,
"rating" : 4.2,
"reference" : "<KEY>",
"scope" : "GOOGLE",
"types" : [
"clothing_store",
"store",
"lodging",
"restaurant",
"food",
"point_of_interest",
"establishment"
],
"vicinity" : "715 Bush Street, San Francisco"
}
],
"status" : "OK"
}
```
I have an object called "Location" that hopes to take the JSON as a parameter to fill in its respective values. I would like to get to the point where I can take this JSON and turn it into an array of my "Location" struct populated with one "Location" for each restaurant returned in the Google Places API JSON.
Here is the "Location" struct:
```
import Foundation
struct Location: Codable {
var lat: Double
var long: Double
var icon: String?
var id: String?
var name: String
var openNow: Bool?
var photos: [String : Any]?
var placeID: String?
var priceLevel: Int?
var rating: Double?
var types: [String]?
init?(json: [String: Any]) {
guard let lat = json["lat"] as? Double,
let long = json["lng"] as? Double,
let icon = json["icon"] as? String,
let id = json["id"] as? String,
let name = json["name"] as? String,
let openNow = json["open_now"] as? Bool,
let photos = json["photos"] as? [String : Any],
let placeID = json ["place_id"] as? String,
let priceLevel = json["price_level"] as? Int,
let rating = json["rating"] as? Double,
let types = json["types"] as? [String]? else {
return nil
}
self.lat = lat
self.long = long
self.icon = icon
self.id = id
self.name = name
self.openNow = openNow
self.photos = photos
self.placeID = placeID
self.priceLevel = priceLevel
self.rating = rating
self.types = types
}
}
```
This struct is a start but is clearly lacking as I do not know how to go about taking the data from the JSON to make an array of this "Location" struct. Any guidance would be appreciated.<issue_comment>username_1: You can use [JSONDecoder](https://developer.apple.com/documentation/foundation/jsondecoder)
```
let decoder = JSONDecoder()
do {
let locations = try decoder.decode([Location].self, from: yourJson)
print(locations)
} catch {
print(error.localizedDescription)
}
```
Also you can nest structs to represent your data which you're probably going to have to do. Have a look at "More Complex Nested Response" on [this guide to JSON parsing](https://benscheirman.com/2017/06/swift-json/)
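For instance, a minimal sketch of the nested `Codable` types matching the JSON above (only a few of the fields; snake_case keys such as `price_level` would additionally need `CodingKeys`):
```
struct PlacesResponse: Codable {
    let results: [Place]
}

struct Place: Codable {
    let name: String
    let rating: Double?
    let geometry: Geometry

    struct Geometry: Codable {
        let location: Coordinate
        struct Coordinate: Codable {
            let lat: Double
            let lng: Double
        }
    }
}

// decoding the whole response in one go (data from the URLSession callback):
// let places = try JSONDecoder().decode(PlacesResponse.self, from: data).results
```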
Upvotes: 2 <issue_comment>username_2: I eventually changed my "Location" struct to
1. Not be codable
2. Take each property as a parameter as opposed to taking the whole JSON dictionary as the parameter
In the "performGoogleSearch()" function, I essentially had to keep converting values of the dictionary into dictionaries themselves in order to reach deeper into the file. I then used each value I could get out of the JSON file and used them to create an object of my class. I then appended each object to my array of results.
```
func performGoogleSearch(radius: Double, type: String, price: Int ) {
let location = locationManager.location?.coordinate
let url: URL = URL(string: "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=\(location?.latitude ?? 0),\(location?.longitude ?? 0)&radius=\(radius * 1609.34)&type=restaurant&maxprice=\(price)&key=<KEY>")!
let task = URLSession.shared.dataTask(with: url) {(data, response, error) in
let jsonObject = try? JSONSerialization.jsonObject(with: data! as Data, options: [])
if let jsonArray = jsonObject as? [String: Any] {
if let results = jsonArray["results"] as! [Any]?{
for result in results {
if let locationDictionary = result as? [String : Any] {
let geometry = locationDictionary["geometry"]! as! [String : Any]
let location = geometry["location"]! as! [String : Any]
let lat = location["lat"]
let long = location["lng"]
let openingHours = locationDictionary["opening_hours"] as? [String : Any]
let openNow = openingHours?["open_now"]
let photos = locationDictionary["photos"] as? [[String : Any]]
let newLocation = Location(lat: lat as! Double, long: long as! Double, id: locationDictionary["id"] as? String, name: locationDictionary["name"] as! String, openNow: openNow as? Int, photos: photos, placeID: locationDictionary["place_id"] as? String, priceLevel: locationDictionary["price_level"] as? Int, rating: locationDictionary["rating"] as? Double, types: locationDictionary["types"] as? [String])
googleResults.append(newLocation!)
}
}
print("Results: \(googleResults.count)")
let randomInt = Int(arc4random_uniform(UInt32(googleResults.count)))
print("RandomInt: \(randomInt)")
if googleResults.count != 0 {
self.googleResult = googleResults[randomInt]
self.annotation = MapBoxAnnotation(coordinate: CLLocationCoordinate2D(latitude: (self.self.googleResult?.lat)!, longitude: (self.self.googleResult?.long)!), title: self.googleResult?.name, subtitle: nil)
}
}
}
DispatchQueue.main.async {
self.animateResultView()
}
}
task.resume()
}
```
Location struct:
```
struct Location {
var lat: Double
var long: Double
var id: String?
var name: String
var openNow: Int?
var photos: [[String : Any]]?
var placeID: String?
var priceLevel: Int?
var rating: Double?
var types: [String]?
init?(lat: Double, long: Double, id: String?, name: String, openNow: Int?, photos: [[String : Any]]?, placeID: String?, priceLevel: Int?, rating: Double?, types: [String]?) {
self.lat = lat
self.long = long
self.id = id
self.name = name
self.openNow = openNow
self.photos = photos
self.placeID = placeID
self.priceLevel = priceLevel
self.rating = rating
self.types = types
}
}
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 1,156 | 4,596 |
<issue_start>username_0: I'm pretty new to TFS and need to create a hub extension.
The thing is I can't find a way to specify a collection project for the extension so it will exist only when this project is selected on the TFS and not for all projects.
Maybe there is a way to specify this in the extension-manifest.json file?
|
2018/03/14
| 468 | 1,722 |
<issue_start>username_0: I have the following class:
```
@Builder @NoArgsConstructor
public class ConsultationPointOfContact {
private String fullName;
private String phoneNumber;
private String userLogin;
}
```
When the `@Builder` annotation is present, it causes problems with `@NoArgsConstructor`.
I am getting the error:
```
Error:(11, 1) java: constructor ConsultationPointOfContact in class models.ConsultationPointOfContact cannot be applied to given types;
required: no arguments
found: java.lang.String,java.lang.String,java.lang.String
reason: actual and formal argument lists differ in length
```<issue_comment>username_1: Add `@AllArgsConstructor` as well and this should fix the issue
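For example, a minimal sketch of the fixed class:
```java
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ConsultationPointOfContact {
    private String fullName;
    private String phoneNumber;
    private String userLogin;
}
```
The reason this works: `@Builder` needs a constructor taking all fields, and once an explicit constructor annotation like `@NoArgsConstructor` is present, Lombok no longer generates that all-args constructor implicitly, so `@AllArgsConstructor` adds it back.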
Upvotes: 8 [selected_answer]<issue_comment>username_2: For me, this works:
- upgrade or install the Lombok plugin in IntelliJ IDEA and enable the annotation processing checkbox for your module
[](https://i.stack.imgur.com/OhCAB.png)
Upvotes: 3 <issue_comment>username_3: I met the same issue with Lombok and Gradle. The same code works with Maven, but throws this error when built with Gradle.
So the issue turned out to be with Lombok and Gradle; searching for the keywords `lombok gradle` turned up this article: <https://projectlombok.org/setup/gradle>.
Following the article's tip, I added the following to build.gradle, and it works like a charm!
```
...
dependencies {
...
compileOnly 'org.projectlombok:lombok:1.18.22'
annotationProcessor 'org.projectlombok:lombok:1.18.22'
testCompileOnly 'org.projectlombok:lombok:1.18.22'
testAnnotationProcessor 'org.projectlombok:lombok:1.18.22'
...
}
...
```
Upvotes: 4
|
2018/03/14
| 461 | 1,649 |
<issue_start>username_0: I'm using country picker from Bootstrap form helper:
[Country Picker](http://bootstrapformhelpers.com/country/)
The problem is that I need to initialize it dynamically with a country. I'm using data-country="US" as the default, but I need to change it in the document ready function.
```
<select id="country-select" class="form-control bfh-countries" data-country="US"></select>
```<issue_comment>username_1: From the [Example 3](http://bootstrapformhelpers.com/country/), you can use
```
$(document).ready(function(){
$('#country-select').bfhcountries({country: 'TN'});
});
```
```js
$('#LoadCountry').click(function(){
$('#countries1').bfhcountries({country: 'TN'});
});
```
```html
<button id="LoadCountry" class="btn btn-default">Load Countries</button>
<select id="countries1" class="form-control"></select>
```
Upvotes: 2 <issue_comment>username_2: Use the HTML below to select a default country on initialization. Also add `bootstrap-formhelpers.js` and `bootstrap-formhelpers.css` to your HTML or your select **will not** be populated with data.
```
<select id="country-select" class="form-control bfh-countries" data-country="US"></select>
```
and to select a country after initialization,
```
$('#country-select').val('TN');
```
see the [documentation](http://bootstrapformhelpers.com/country/#example1) for more options on initialization.
Here's a [working fiddle](https://jsfiddle.net/1xyL9u6e/17/).
**Per OP's comment:** The OP is using a theme that uses [select2](https://select2.org/programmatic-control/add-select-clear-items); quoting from there:
>
> Select2 will listen for the change event on the element that it is attached to. When you make any external changes that need to be reflected in Select2 (such as changing the value), you should trigger this event.
>
>
>
and therefore to reflect changes;
```
$('#country-select').val('TN').trigger('change');
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 706 | 2,379 |
<issue_start>username_0: First things first I should say I spend my time to read all topics in this case but no success, recently I faced with a problem with client/browser cache on my website, It occurred without any change. Server side cache is good but client side working so bad, I should press `CTRL+F5` everytime, I don't want this because it is bad for users, I know I can disable cache when devTool is open, but I'm talking about user not just myself. This happen on desktop and mobile device too. In mobile device I should go to setting/privacy/clear cache.
Here are my website's settings related to caching:
**htaccess: mod_expires**
```
ExpiresByType text/css "access plus 1 month"
```
I removed `css` from gzip, with no success; I also tried changing 1 month to 1 second.
```
mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
```
**PHP header:**
```
header("Cache-Control: no-store, no-cache, must-revalidate, max-age=0");
header("Cache-Control: post-check=0, pre-check=0", false);
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
header("Pragma: no-cache");
header("Vary: Accept-Encoding");
header("Expires: Sat, 26 Jul 1997 05:00:00 GMT");
```
**HTML meta:**
```
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
```
As you can see, I tried every possible way to fight the annoying cache, but with no success. I know I can add a `version` at the end of `css` or `js` URLs, but everyone knows this is considered a bad habit for clearing the cache:
>
> **Remove query strings from static resources**
>
>
> Resources with a "?" in the URL are not cached by some proxy caching
> servers. Remove the query string and encode the parameters into the
> URL for the following resources:
>
>
>
[source](https://gtmetrix.com/remove-query-strings-from-static-resources.html)
So, what is the best way to invalidate the heavy `css` and `js` cache in the right direction?<issue_comment>username_1: Try using [HTTP ETags](https://en.wikipedia.org/wiki/HTTP_ETag). This will allow the client to deduce whether the resource needs to be reloaded from the server.
Upvotes: 0 <issue_comment>username_2: This works every time: add `?` plus random stuff at the end of your files. For instance:
In PHP, I believe this should be done this way (I haven't written any PHP in years, but here goes :)
`<link rel="stylesheet" href="style.css?<?php echo time(); ?>">`
This will force the browser to download a fresh copy of the files every time.
Upvotes: 0 <issue_comment>username_3: ```
<?php $filename = "path/to/file.css"; ?>
<link rel="stylesheet" href="<?php echo $filename . '?' . filemtime($filename); ?>">
```
Upvotes: 1
|
2018/03/14
| 3,788 | 12,177 |
<issue_start>username_0: I am very new to Selenium and trying to set up Selenium on my laptop to begin. I am unable to invoke IE from my simple code. The details are given below. Can you please help me understand where the issue is?
IE version: IE 11,
IE Driver: 64-bit,
OS: Windows 10 64 bit,
Selenium version: 3.11
Error:
```
Started InternetExplorerDriver server (64-bit)
3.9.0.0
Listening on port 29313
Only local connections are allowed
Exception in thread "main" org.openqa.selenium.WebDriverException: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:29313
Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:33:08.638Z'
System info: host: 'DESKTOP-B1D1PSJ', ip: '192.168.79.96', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_161'
Driver info: driver.version: InternetExplorerDriver
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:92)
at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:545)
at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:209)
at org.openqa.selenium.ie.InternetExplorerDriver.run(InternetExplorerDriver.java:223)
at org.openqa.selenium.ie.InternetExplorerDriver.<init>(InternetExplorerDriver.java:215)
at org.openqa.selenium.ie.InternetExplorerDriver.<init>(InternetExplorerDriver.java:152)
at testing.Tryselenium.main(Tryselenium.java:31)
Caused by: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:29313
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:240)
at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:158)
at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:256)
at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:134)
at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:113)
at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147)
at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121)
at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200)
at okhttp3.RealCall.execute(RealCall.java:77)
at org.openqa.selenium.remote.internal.OkHttpClient.execute(OkHttpClient.java:101)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:101)
at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:73)
at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:136)
at org.openqa.selenium.remote.service.DriverCommandExecutor.execute(DriverCommandExecutor.java:83)
... 6 more
Caused by: java.net.ConnectException: Connection refused: connect
at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
at java.net.DualStackPlainSocketImpl.socketConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.doConnect(Unknown Source)
at java.net.AbstractPlainSocketImpl.connectToAddress(Unknown Source)
at java.net.AbstractPlainSocketImpl.connect(Unknown Source)
at java.net.PlainSocketImpl.connect(Unknown Source)
at java.net.SocksSocketImpl.connect(Unknown Source)
at java.net.Socket.connect(Unknown Source)
at okhttp3.internal.platform.Platform.connectSocket(Platform.java:125)
at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:238)
```
Code:
```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
public class Tryselenium {
public static void main(String[] args) {
// TODO Auto-generated method stub
WebDriver web;
/*System.setProperty("webdriver.chrome.driver","C:\\Users\\Divakar\\Documents\\General\\Learnings\\Selenium\\Others\\Set Up files\\Browser Drivers\\" + "chromedriver_x32.exe");
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.setExperimentalOption("excludeSwitches",Arrays.asList("test-type"));
web = new ChromeDriver(chromeOptions);
web.manage().window().maximize();
web.manage().timeouts().implicitlyWait(50,TimeUnit.SECONDS);*/
System.setProperty("webdriver.ie.driver","E:\\General\\Learnings\\Selenium\\Others\\Set Up files\\Browser Drivers\\" + "IEDriverServer.exe");
DesiredCapabilities capab = DesiredCapabilities.internetExplorer();
capab.setCapability(InternetExplorerDriver.INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true);
web = new InternetExplorerDriver();
web.manage().window().maximize();
web.manage().timeouts().implicitlyWait(50,TimeUnit.SECONDS);
/*FirefoxDriver web = new FirefoxDriver();
web = new FirefoxDriver();
web.manage().window().maximize();
web.manage().timeouts().implicitlyWait(50,TimeUnit.SECONDS);*/
web.get("http://www.icicibank.com/");
web.findElement(By.xpath("/html/body/div[1]/div[1]/div[1]/div/a[1]")).click();
web.findElement(By.xpath("/html/body/div[1]/div[1]/div[1]/div/div[3]/div/ul[1]/li[1]/a")).click();
web.findElement(By.xpath("/html/body/div[1]/div[2]/div[2]/div/div[1]/div/div[3]/a")).click();
web.findElement(By.name("AuthenticationFG.USER_PRINCIPAL")).sendKeys("507133118");
web.findElement(By.name("AuthenticationFG.ACCESS_CODE")).sendKeys("Soft2008");
web.findElement(By.name("Action.VALIDATE_CREDENTIALS")).click();
web.findElement(By.id("ContentLink1")).click();
WebElement accttable = web.findElement(By.xpath("/html/body/form/div/div[3]/div[1]/div[4]/div[2]/div/div/div[5]/div/div/div/div/div/div/div"));
List<WebElement> all_rows = accttable.findElements(By.tagName("tr"));
int j = 0;
for (int i = 0; i < all_rows.size(); i++) {
List<WebElement> rowAllCells = all_rows.get(i).findElements(By.tagName("td"));
if(rowAllCells.size()>1)
{
String acctno=rowAllCells.get(2).getText();
if(rowAllCells.get(2).getText().trim().equalsIgnoreCase("602605049934"))
{
web.findElement(By.id("AccountSummaryFG.SELECTED\_INDEX")).click(); ///html/body/form/div/div[3]/div[1]/div[4]/div[2]/div/div/div[5]/div/div/div/div/div/div/div/table/tbody/tr[4]/td[1]/input
web.findElement(By.id("HREF\_actNicNameOutput[" + j + "]")).click();
web.findElement(By.id("backBtn")).click();
web.findElement(By.id("VIEW\_MINI\_STATEMENT")).click();
web.findElement(By.id("HREF\_Logout")).click();
web.close();
}
else j = j + 1;
}
}
}
}
```<issue_comment>username_1: The error says it all :
```
3.9.0.0
Exception in thread "main" org.openqa.selenium.WebDriverException: java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:29313
Build info: version: '3.11.0', revision: 'e59cfb3', time: '2018-03-11T20:33:08.638Z'
System info: host: 'DESKTOP-B1D1PSJ', ip: '192.168.79.96', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_161'
Driver info: driver.version: InternetExplorerDriver
```
It is clear from your error stack trace that the new *session* is not getting initiated and the **driver.version** is also not getting recognized.
Your main issue is the **version compatibility** between the binaries you are using as follows :
* You are using *Selenium Client v3.11.0*
* You are using *IEDriverServer v3.9.0.0*
* You are using *InternetExplorer v11.0* (as per your question)
So there is a clear mismatch between *Selenium Client v3.11.0* and *IEDriverServer v3.9.0.0*.
Solution
--------
* Upgrade *IEDriverServer* to [**v3.11.1**](http://selenium-release.storage.googleapis.com/index.html?path=3.11/).
* *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only.
* Use [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *test Suite*.
* If your base *Web Browser* base version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of *Web Browser*.
* Execute your `@Test`.
Additional Considerations
-------------------------
* You are using the flag **`INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS, true`** which is against the [best practice](http://jimevansmusic.blogspot.in/2012/08/youre-doing-it-wrong-protected-mode-and.html). Don't do it.
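Once the versions match, here is a minimal sketch of the setup; the driver path is a placeholder, and `InternetExplorerOptions` is the Selenium 3.x replacement for the `DesiredCapabilities` approach:
```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import org.openqa.selenium.ie.InternetExplorerOptions;

public class IeSmokeTest {
    public static void main(String[] args) {
        // Placeholder path; point it at your local IEDriverServer.exe
        System.setProperty("webdriver.ie.driver", "C:\\drivers\\IEDriverServer.exe");

        // No INTRODUCE_FLAKINESS_BY_IGNORING_SECURITY_DOMAINS here; configure IE's
        // Protected Mode identically across all security zones instead.
        InternetExplorerOptions options = new InternetExplorerOptions();
        WebDriver driver = new InternetExplorerDriver(options);
        driver.get("http://www.icicibank.com/");
    }
}
```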
Upvotes: 3 [selected_answer]<issue_comment>username_2: This answer is updated for the Edge browser. If you are using Edge and facing the same issue, then follow these steps:
1. Download the latest IEDriverServer from this location : <https://selenium.dev/downloads/>
2. Update your code as follows :
```
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.edge.EdgeDriver;
public class SL001_OpenWebpage_Edge
{
public static void main(String[] args)
{
System.setProperty("webdriver.ie.driver", "\\location\\IEDriverServer.exe");
WebDriver driver=new EdgeDriver();
driver.get("https://www.hul.co.in/");
driver.findElement(By.id("tcm:1255-50553-4")).click();
//driver.manage().window().maximize();
}
}
```
Upvotes: 0 <issue_comment>username_3: The issue in our case was **not specific to Jenkins** but the **Chrome Driver crashing** under the VMs where our automation tests run.
If you log into the VM where your workspace has been copied, and from the terminal run the same Maven command you have on Jenkins, you should be able to watch the tests run; this may be a way for you to identify and fix the issue.
[](https://i.stack.imgur.com/ADaeD.png)
[](https://i.stack.imgur.com/lFaj0.png)
[](https://i.stack.imgur.com/QfG3s.png)
Upvotes: 0 <issue_comment>username_4: I had this occur even with the same Driver and client versions.
```
Started InternetExplorerDriver server (32-bit)
3.141.59
...
java.net.ConnectException: Failed to connect to localhost/0:0:0:0:0:0:0:1:XXXX
Build info: version: '3.141.59', revision: 'e82be7d358', time: '2018-11-14T08:17:03'
System info: host: 'HOST_NAME', ip: 'XX.XXX.XX.XX', os.name: 'Windows 10', os.arch: 'amd64', os.version: '10.0', java.version: '1.8.0_251'
Driver info: driver.version: RemoteWebDriver
```
I ended up finding a perfect match here
<http://selenium-release.storage.googleapis.com/index.html>
but the Security Settings in IE were different and not correct on a new computer. Making each zone have the same **Enable Protected Mode** box state (every one selected or deselected) in
```
Internet Options -> Security
```
Fixed it 100%. The driver was launching and navigating to a page, but the prior two steps fixed the errors.
Upvotes: 0
|
2018/03/14
| 1,040 | 3,951 |
<issue_start>username_0: I have some software with an event-based networking protocol for control which is using an `IObservable` for handling inbound messages.
In many cases a sent message will expect a specific response (or sequence, such as to report progress). In order not to miss the response, a task is set up in advance with `FirstAsync` and `ToTask`; however, these appear to leak if the task never completes.
It is also not possible to simply put `evtTask` in a `using` block, as disposing an incomplete task is not allowed.
```
var jobUuid = Guid.NewGuid();
var evtTask = Events.FirstAsync((x) => x.Action == Action.JobComplete && x.JobUuid == jobUuid).ToTask();
// e.g. if this throws without ever sending the message
await SendMessage($"job {jobUuid} download {url}");
var evt = await evtTask;
if (evt.Success)
{
...
}
```
Does the library provide a simple means for this use-case that will unsubscribe on leaving the scope?
```
var jobUuid = Guid.NewGuid();
using(var evtTask = Events.FirstAsync((x) => x.Action == Action.JobComplete && x.JobUuid == jobUuid)
.ToDisposableTask())) // Some method like this
{
// e.g. if this throws without ever sending the message
await SendMessage($"job {jobUuid} download {url}");
var evt = await evtTask;
if (evt.Success)
{
...
}
} // Get rid of the FirstAsync task if leave here before it completes for any reason
```<issue_comment>username_1: You can either use TPL Timeout (as referenced by @Fabjan), or the Rx/System.Reactive version of Timeout.
`using` sounds nice, but doesn't make sense. Using is the equivalent of calling `.Dispose` on something at the end of the using block. The problem here, I'm assuming, is that your code never gets past `await evtTask`. Throwing all of that in a hypothetical `using` wouldn't change anything: Your code is still waiting forever.
At a higher level, your code is more imperative than reactive, you may want to refactor it to something like this:
```
var subscription = Events
.Where(x => x.Action == Action.JobComplete)
.Subscribe(x =>
{
if(x.Success)
{
//...
}
else
{
//...
}
});
```
Upvotes: 1 <issue_comment>username_2: Disposing `Task` will not help, since it does nothing useful (in most situations, including this one). What will help, though, is cancelling the task. Cancelling disposes the underlying subscription created by `ToTask` and so resolves this "leak".
So it can go like this:
```
Task<Event> evtTask;
using (var cts = new CancellationTokenSource()) {
evtTask = Events.FirstAsync((x) => x.Action == Action.JobComplete && x.JobUuid == jobUuid)
.ToTask(cts.Token);
// e.g. if this throws without ever sending the message
try {
await SendMessage($"job {jobUuid} download {url}");
}
catch {
cts.Cancel(); // disposes subscription
throw;
}
}
var evt = await evtTask;
if (evt.Success)
{
...
}
```
Of course you can wrap that in some more convenient form (like an extension method). For example:
```
public static class ObservableExtensions {
    public static CancellableTaskWrapper<T> ToCancellableTask<T>(this IObservable<T> source) {
        return new CancellableTaskWrapper<T>(source);
    }

    public class CancellableTaskWrapper<T> : IDisposable
    {
        private readonly CancellationTokenSource _cts;

        public CancellableTaskWrapper(IObservable<T> source)
        {
            _cts = new CancellationTokenSource();
            Task = source.ToTask(_cts.Token);
        }

        public Task<T> Task { get; }

        public void Dispose()
        {
            _cts.Cancel();
            _cts.Dispose();
        }
    }
}
```
Then it becomes close to what you want:
```
var jobUuid = Guid.NewGuid();
using (var evtTask = Events.FirstAsync((x) => x.Action == Action.JobComplete && x.JobUuid == jobUuid).ToCancellableTask()) {
await SendMessage($"job {jobUuid} download {url}");
var evt = await evtTask.Task;
if (evt.Success) {
...
}
}
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 1,176 | 3,920 |
<issue_start>username_0: While working on a project, I ran into the following issue I could not explain to myself.
I have the following [is\_in\_set(..) function](https://stackoverflow.com/questions/1701067/how-to-check-that-an-element-is-in-a-stdset), which simply checks if a cstring is in an unordered\_set of cstrings:
```
bool is_in_set(const char * str, std::unordered_set<const char *> the_set)
{
    if ( the_set.find( str ) != the_set.end() )
        return true;
    else
        return false;
}
```
And then I created the following sample main method to demonstrate my problem:
```
int main()
{
    std::unordered_set<const char *> the_set({"one",
        "two", "three", "four", "five"});

    std::string str = "three";
    const char * cstr = "three";

    std::cout << "str in set? "
              << is_in_set( str.c_str() , the_set ) << std::endl
              << "cstr in set? "
              << is_in_set( cstr, the_set ) << std::endl;

    const char * str_conv = str.c_str();

    std::cout << "str_conv in set? "
              << is_in_set( str_conv , the_set ) << std::endl
              << "strcmp(str_conv, cstr) = " << strcmp( str_conv , cstr )
              << std::endl;

    return 0;
}
```
I expected the above code to find the std::string cast to const char \*, as well as the cstring, in the set.
Instead of that, it generates the following output (Visual Studio Community 2017):
```
str in set? 0
cstr in set? 1
str_conv in set? 0
strcmp(str_conv, cstr) = 0
```
I also ran a for-loop over both variables, outputting byte by byte (in hexadecimal representation) for each, which results in the following:
```
74 68 72 65 65 00 = c_str
74 68 72 65 65 00 = str_conv
```
Why is the std::string cast to const char \* not found in the set?
Shouldn't strcmp return a value different from 0 in this case?<issue_comment>username_1: For `const char *`, there is no overload of the `==` operator that compares strings by value, so I believe the `unordered_set` container will always compare pointers, not the values of the pointed-to strings.
The compiler can, as an optimization, make multiple string literals with the same characters use the same memory location (and thus have identical pointers), which is why you're able to find the string when you use another string literal. But any string you construct by some other mechanism, even if it contains the same characters, won't be at the same memory location, and thus the pointers will not be equal.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Use `std::unordered_set<std::string>`, or provide a custom hasher and equality if you are sure that your strings will not leave scope while you are using the hash table, e.g. string literals, static variables, or strings allocated with new/malloc.
Something like:
```
struct str_eq {
    bool operator()(const char* lhs, const char* rhs) const noexcept
    {
        return lhs == rhs || 0 == std::strcmp(lhs, rhs);
    }
};

struct str_hash {
    std::size_t operator()(const char* str) const noexcept
    {
        // some mur-mur2, google cityhash, hash_bytes etc. instead of this
        return std::hash<std::string>()( std::string(str) );
    }
};

typedef std::unordered_set<const char*, str_hash, str_eq> my_string_hashset;
```
Upvotes: 1 <issue_comment>username_3: As @username_1 pointed out, you are doing address comparisons. To fix this you will need to either have the `unordered_set` store `std::string` objects, or create a custom comparison for the `unordered_set` to use.
Based on the answer to a [related question](https://stackoverflow.com/questions/36058387/how-to-use-unordered-set-with-compare-function), something like this:
```
struct StringEqual
{
bool operator()(const char* a, const char* b) { return 0 == strcmp(a,b); }
};
std::unordered_set, StringEqual> the\_set(
{"one", "two", "three", "four", "five"});
```
should do the trick. This gives the `unordered_set` a better operator to use for testing the strings.
For more information about the `Pred` template parameter, see the [documentation](http://en.cppreference.com/w/cpp/container/unordered_set).
Upvotes: 1
|
2018/03/14
| 1,572 | 7,049 |
<issue_start>username_0: I have successfully upload a Users profile image to the Firebase storage area under their UserID.
From there I have put the file path to that image in the Firebase database under the usersID
[Firebase Database Structure, Showing he profileImage uri that is linked to the firebase storage area](https://i.stack.imgur.com/IJBWc.png)
When I am displaying the image on the page the image is rotated the wrong way around. The photo on my phone is portrait, but it is saving in the Firebase storage area as landscape.
[Image Stored In Landscape orientation in Firebase Storage, but was taken in portrait orientation](https://i.stack.imgur.com/munTW.png)
[The image rendered on my phone in the wrong orientation](https://i.stack.imgur.com/ItWGL.png)
What I want to be able to do is let the user select an image from the gallery, then display the image on a page.Then I want to be able to let the user rotate the image themselves, left or right using two buttons.
When I press the rotate buttons. The image successfully rotates once.
Then when I press the Save Profile Image button it sends the original image from the gallery to the Firebase Storage area. It is storing and sending the wrong image to the Storage. Essentially, it is saving the original, unrotated image to the storage.
Is there a way that I can fix this?
Here is my code:
```
private FirebaseAuth auth;
private DatabaseReference myRef;
private FirebaseDatabase database;
private StorageReference storageReference;
private FirebaseUser user;
private ImageView profileImage;
private Uri imageUri;
private static final int GALLERY_INTENT = 2;
private ProgressDialog progressDialog;
/*
Skip irrelevant code
*/
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState)
{
View view = inflater.inflate(R.layout.fragment_upload_image, container, false);
// Creating firebase links etc
database = FirebaseDatabase.getInstance();
myRef = FirebaseDatabase.getInstance().getReference();
auth = FirebaseAuth.getInstance();
storageReference = FirebaseStorage.getInstance().getReference();
user = auth.getCurrentUser();
// Setting buttons
profileImage = (ImageView) view.findViewById(R.id.imageViewProfileImage);
ImageView rotateLeft = (ImageView) view.findViewById(R.id.imageRotateLeft);
ImageView rotateRight = (ImageView) view.findViewById(R.id.imageRotateRight);
Button uploadProfileImage = (Button) view.findViewById(R.id.buttonUploadProfileImage);
Button saveProfileImage = (Button) view.findViewById(R.id.buttonSaveProfileImage);
// Rotate Left is a button
rotateLeft.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
if(profileImage != null)
{
// Rotate image 90
Picasso.get().load(imageUri).rotate(90).into(profileImage);
}
}
});
// Rotate Right is a button
rotateRight.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
if(profileImage != null)
{
// Rotate image -90
Picasso.get().load(imageUri).rotate(-90).into(profileImage);
}
}
});
uploadProfileImage.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
// Send user to gallery
Intent intent = new Intent(Intent.ACTION_PICK);
intent.setType("image/*");
startActivityForResult(intent, GALLERY_INTENT);
}
});
// Save image to storage area
saveProfileImage.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
progressDialog.setMessage("Uploading Image Please Wait...");
progressDialog.show();
final StorageReference filePath = storageReference.child("Images").child(user.getUid()).child(imageUri.getLastPathSegment());
filePath.putFile(imageUri).addOnSuccessListener(new OnSuccessListener<UploadTask.TaskSnapshot>()
{
@Override
public void onSuccess(UploadTask.TaskSnapshot taskSnapshot)
{
Toast.makeText(getActivity(), "Uploaded Successfully!", Toast.LENGTH\_SHORT).show();
Uri downloadUri = taskSnapshot.getDownloadUrl();
// Save image uri in the Firebase database under the usersID
myRef.child("Users").child(user.getUid()).child("profileImage").setValue(downloadUri.toString());
progressDialog.dismiss();
}
}).addOnFailureListener(new OnFailureListener()
{
@Override
public void onFailure(@NonNull Exception e)
{
progressDialog.dismiss();
Toast.makeText(getActivity(), "Failed To Upload!", Toast.LENGTH\_SHORT).show();
}
});
}
});
return view;
}
// Get image data and display on page
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data)
{
super.onActivityResult(requestCode, resultCode, data);
if(requestCode == GALLERY_INTENT && resultCode == RESULT_OK)
{
progressDialog = new ProgressDialog(getActivity());
progressDialog.setMessage("Displaying Image...");
progressDialog.show();
imageUri = data.getData();
Picasso.get().load(imageUri).into(profileImage);
progressDialog.dismiss();
}
}
```<issue_comment>username_1: You can try downloading the image as a `Bitmap`, rotating it, and then saving it. Here is the code you can use to do that with `Picasso`:
```
int rotationAngle; // You set this angle before calling the method
Target target = new Target() {
@Override
public void onBitmapLoaded(Bitmap bitmap, Picasso.LoadedFrom from) {
Matrix matrix = new Matrix();
matrix.postRotate(rotationAngle);
Bitmap rotatedBitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(),
matrix, true);
// Save the rotatedBitmap elsewhere
}
@Override
public void onBitmapFailed(Drawable errorDrawable) {}
@Override
public void onPrepareLoad(Drawable placeHolderDrawable) {}
};
void downloadImageRotateAndSave() {
Picasso.with(getContext()).load(imageUri).into(target);
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: First, let's see what the problem is. Picasso opens the image file and retrieves a bitmap from it. This bitmap is what is displayed in the imageView. When you rotate the image, what you are doing is rotating the bitmap, but the file with the original photo never changes. So if you want to upload the rotated photo, you have to retrieve the new bitmap from the image view, create a new file with it, and upload the new file.
That must do the trick.
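A rough sketch of that idea, assuming the rotated bitmap is what `profileImage` currently displays and reusing the question's `filePath` storage reference (uploading via `putBytes` instead of a temporary file is my assumption):
```java
import android.graphics.Bitmap;
import android.graphics.drawable.BitmapDrawable;
import java.io.ByteArrayOutputStream;

// ...inside the save click listener, after the user has rotated the image:
// grab the bitmap currently shown in the ImageView (i.e. the rotated one)
Bitmap rotated = ((BitmapDrawable) profileImage.getDrawable()).getBitmap();

// compress it to bytes in memory
ByteArrayOutputStream baos = new ByteArrayOutputStream();
rotated.compress(Bitmap.CompressFormat.JPEG, 90, baos);

// upload the bytes instead of the original gallery file
filePath.putBytes(baos.toByteArray()).addOnSuccessListener(taskSnapshot -> {
    // store taskSnapshot.getDownloadUrl() in the database as before
});
```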
Good luck
Upvotes: 0
|
2018/03/14
| 663 | 2,418 |
<issue_start>username_0: I have a YML file, which I parse to a Map using the yamlBeans library.
I don't know how deep the nested map goes.
for example:
>
> * key1:
> + key2: value1
> + key3:
> - key4: value2
> - key5: value3
>
>
>
I need to find a specific value in this map, update it, and write the map back to YML file (which I know how to do).
This is my code for updating the value, and it's working.
**However, this is only iterating twice through the nested map, and I need it to iterate it for as long as needed**:
```
static void updateYmlContent(Map<String, Object> ymlMap, String value, String... keys) {
    boolean found = false;
    for (Map.Entry<String, Object> entry : ymlMap.entrySet()) {
        if (entry.getKey().equals(keys[0])) {
            found = true;
            for (Map.Entry<String, Object> subEntry : ((Map<String, Object>) entry.getValue()).entrySet()) {
                if (subEntry.getKey().equals(keys[1])) {
                    subEntry.setValue(value);
                    break;
                } else {
                    throwKeyNotFoundException(keys[1]);
                }
            }
            break;
        }
    }
    if (!found) {
        throwKeyNotFoundException(keys[0]);
    }
}
```
|
2018/03/14
| 1,435 | 4,644 |
<issue_start>username_0: I'm trying to recreate an input box using divs so that I can create fractions, surds etc inside. I want a user to be able to click on the div and press a key and the correct character appear inside the div, but nothing happens when I click on the div and press a key.
JS
```
$(document).ready(function() {
renderInputBox('#answerSpace');
$('#input1').on('click', function() {
inputSelected = !inputSelected
console.log(inputSelected);
})
let inputSelected = false
$(document).keypress(function(e) {
if (inputSelected) {
$('#input1').text(e.which)
}
    })
})

function renderInputBox(area) {
$(area).html('')
}
```
CSS
```
.input {
min-width: 200px;
min-height: 24px;
border: 1px solid rgb(153, 153, 153);
display: flex;
flex-direction: row;
padding: 1px;
}
.input:hover {
border: 1px solid rgb(49, 156, 255);
box-shadow: 0 0 1px rgb(49, 155, 255);
cursor: text;
}
```<issue_comment>username_1: I made a slight change to your code: I changed what I said in the comments above and also used another way to bind the event: `.on('keypress', ...)`.
I also added a way to write the pressed key to the div right away; take a look.
```js
$(document).ready(function() {
renderInputBox('#answerSpace');
$('#input1').on('click', function() {
inputSelected = !inputSelected
console.log(inputSelected);
})
var inputSelected = false
$(document).on('keypress', function(e) {
if (e.which == '13' && inputSelected) {
console.log('test');
}else if (inputSelected){
console.log(e.key);
var currentText = $(".input").text();
$(".input").text(currentText + e.key);
}
});
});
function renderInputBox(area) {
$(area).html('')
}
```
```css
.input {
min-width: 200px;
min-height: 24px;
border: 1px solid rgb(153, 153, 153);
display: flex;
flex-direction: row;
padding: 1px;
}
.input:hover {
border: 1px solid rgb(49, 156, 255);
box-shadow: 0 0 1px rgb(49, 155, 255);
cursor: text;
}
```
```html
```
Upvotes: 0 <issue_comment>username_2: While I'm using your approach below you will see the issues you are going to have and I do mention the resources below what you are probably looking for.
There is no problem doing what you want to do, writing pressed keys to a `div`, however, you will have to implement every single basic text editor behavior yourself and that is a very big task.
As an example below, I have implemented `backspace` manually, so when it is pressed, the last character is removed.
While that is easy enough, **what if** the user highlights some letters in the middle of the string? In our case we would still only remove the last character but that is not what you would want.
**What if**, the user wants to click the mouse between 2 characters, how are you going to do that? How will you show the blinking caret or when a new key is pressed add the new text where the caret is and not just to the end?
**What if**, the user presses `Ctrl+Home` or `Shift+Home`?
You see how quickly you are going to end up re-writing basic editor features.
While the below example shows you what you want to do in it's basic essence, including some changes to make selection better, I think what you do want is a pre-build Text Editor for Math Equations.
This SO post has some suggestions for what you are looking for, talking about several Math Editors:
[**online-visual-maths-equation-editor-which-can-be-implemented-in-website**](https://stackoverflow.com/questions/12127456/online-visual-maths-equation-editor-which-can-be-implemented-in-website)
```js
$(document).ready(function() {
let inputSelected = false
renderInputBox('#answerSpace');
$(document).on('click', function(e) {
inputSelected = $(e.target).is($('#input1'));
})
$(document).on('keyup', function(e) {
//console.log(e.which);
if (inputSelected) {
switch (e.which) {
case 8: // Backspace
$('#input1').text($('#input1').text().substring(0, $('#input1').text().length - 1));
break;
default:
$('#input1').text($('#input1').text() + e.key);
}
}
/* Append div with character to #input1 */
})
})
function renderInputBox(area) {
$(area).html('')
}
```
```css
.input {
min-width: 200px;
min-height: 24px;
border: 1px solid rgb(153, 153, 153);
display: flex;
flex-direction: row;
padding: 1px;
}
.input:hover {
border: 1px solid rgb(49, 156, 255);
box-shadow: 0 0 1px rgb(49, 155, 255);
cursor: text;
}
```
```html
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 3,275 | 10,750 |
<issue_start>username_0: I have a server that provides access to data via a cryptographic API. I need to write a client in C# that can create requests to the server and read responses from it.
To do this I need to create public and private RSA keys and convert them to byte arrays. I have a working example in Java:
```
java.security.KeyPair keypair = keyGen.genKeyPair();
byte[] pubKeyBytes = keypair.getPublic().getEncoded();
byte[] privKeyBytes = keypair.getPrivate().getEncoded();
```
I tried to do the same with C# in .NET:
```
RSACryptoServiceProvider keyPair = new RSACryptoServiceProvider(2048);
var publicKey = keyPair.ExportParameters(false);
var privateKey = keyPair.ExportParameters(true);
```
And I don't know how to do it. I have D, DP, DQ, InverseQ, Modulus, and Exponent as properties of publicKey and privateKey, but in the Java sample those keys look like single united keys. Which of D, DP, DQ, InverseQ, Modulus, Exponent should I use for my task? What is the way to do the same as in the Java example, but in C#?<issue_comment>username_1: You need to use the [`ExportCspBlob`](https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider.exportcspblob(v=vs.110).aspx) method:
```
RSACryptoServiceProvider keyPair = new RSACryptoServiceProvider(2048);
var publicKey = keyPair.ExportCspBlob(false);
var privateKey = keyPair.ExportCspBlob(true);
```
`ExportParameters` exports the specific parameters from which the keys themselves can be calculated. For more information about those parameters, see the [wiki article](https://en.wikipedia.org/wiki/RSA_(cryptosystem)).
Upvotes: 0 <issue_comment>username_2: According to <https://docs.oracle.com/javase/7/docs/api/java/security/Key.html#getFormat()> the default for a public key encoding is X.509 SubjectPublicKeyInfo and for a private key is PKCS#8 PrivateKeyInfo.
There are a number of questions (like [Correctly create RSACryptoServiceProvider from public key](https://stackoverflow.com/q/41808094/6535399)) on creating RSAParameters from a SubjectPublicKeyInfo, but not as many for the reverse.
If you're creating your key via RSACryptoServiceProvider then the new key will always have an exponent value of 0x010001, which means the only variable sized piece of data that you have to contend with is the modulus value. The reason that this is important is that a SubjectPublicKeyInfo is (almost always) encoded in DER (defined by [ITU-T X.690](http://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.690)), which uses length-prefixed values. The ASN.1 ([ITU-T X.680](http://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.680)) is defined in [RFC 5280](https://www.rfc-editor.org/rfc/rfc5280#section-4.1) as
```
SubjectPublicKeyInfo ::= SEQUENCE {
algorithm AlgorithmIdentifier,
subjectPublicKey BIT STRING }
```
The encoded value for the AlgorithmIdentifier for RSA is
```
30 0D 06 09 2A 86 48 86 F7 0D 01 01 01 05 00
```
(aka SEQUENCE([OID("1.2.840.113549.1.1.1")](http://www.oid-info.com/get/1.2.840.113549.1.1.1), NULL))
The value for `subjectPublicKey` depends on the algorithm. For RSA it's `RSAPublicKey`, defined in [RFC 3447](https://www.rfc-editor.org/rfc/rfc3447#appendix-A.1.1) as
```
RSAPublicKey ::= SEQUENCE {
modulus INTEGER, -- n
publicExponent INTEGER -- e }
```
The encoding for an INTEGER is 02 (then the length) then the signed big-endian value. So, assuming that your Exponent value is `01 00 01` the encoded value is `02 03 01 00 01`. The modulus length depends on the size of your key.
```
int modulusBytes = parameters.Modulus.Length;
if (parameters.Modulus[0] >= 0x80)
modulusBytes++;
```
RSACryptoServiceProvider should always create keys that need the extra byte, but technically keys could exist which don't. The reason we need it is that parameters.Modulus is an UNsigned big-endian encoding, and if the high bit is set then we would be encoding a negative number into the RSAPublicKey. We fix that by inserting an 00 byte to keep the sign bit clear.
The length bytes for the modulus are slightly tricky. If the modulus is representable in 127 bytes or fewer (RSA-1015 or smaller) then you just use one byte for that value. Otherwise you need the smallest number of bytes to report the number, plus one. That extra byte (the first one, actually) says how many bytes the length is. So 128-255 is one byte, `81`. 256-65535 is two, so `82`.
We then need to wrap that into a BIT STRING value, which is easy (if we ignore the hard parts, since they're not relevant here). And then wrap everything else up in a SEQUENCE, which is easy.
Quick and dirty, only works on a 2048-bit key with exponent=0x010001:
```
private static byte[] s_prefix =
{
0x30, 0x82, 0x01, 0x22,
0x30, 0x0D,
0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01,
0x05, 0x00,
0x03, 0x82, 0x01, 0x0F,
0x00,
0x30, 0x82, 0x01, 0x0A,
0x02, 0x82, 0x01, 0x01, 0x00
};
private static byte[] s_suffix = { 0x02, 0x03, 0x01, 0x00, 0x01 };
private static byte[] MakeSubjectPublicInfoEasy2048(RSA rsa)
{
if (rsa.KeySize != 2048)
throw new ArgumentException(nameof(rsa));
RSAParameters rsaParameters = rsa.ExportParameters(false);
if (Convert.ToBase64String(rsaParameters.Exponent) != "AQAB")
{
throw new ArgumentException(nameof(rsa));
}
return s_prefix.Concat(rsaParameters.Modulus).Concat(s_suffix).ToArray();
}
```
Or, for a general-purpose response (that creates a lot of temporary byte[]s):
```
private static byte[] MakeTagLengthValue(byte tag, byte[] value, int index = 0, int length = -1)
{
if (length == -1)
{
length = value.Length - index;
}
byte[] data;
if (length < 0x80)
{
data = new byte[length + 2];
data[1] = (byte)length;
}
else if (length <= 0xFF)
{
data = new byte[length + 3];
data[1] = 0x81;
data[2] = (byte)length;
}
else if (length <= 0xFFFF)
{
data = new byte[length + 4];
data[1] = 0x82;
data[2] = (byte)(length >> 8);
data[3] = unchecked((byte)length);
}
else
{
throw new InvalidOperationException("Continue the pattern");
}
data[0] = tag;
int dataOffset = data.Length - length;
Buffer.BlockCopy(value, index, data, dataOffset, length);
return data;
}
private static byte[] MakeInteger(byte[] unsignedBigEndianValue)
{
if (unsignedBigEndianValue[0] >= 0x80)
{
byte[] tmp = new byte[unsignedBigEndianValue.Length + 1];
Buffer.BlockCopy(unsignedBigEndianValue, 0, tmp, 1, unsignedBigEndianValue.Length);
return MakeTagLengthValue(0x02, tmp);
}
for (int i = 0; i < unsignedBigEndianValue.Length; i++)
{
if (unsignedBigEndianValue[i] != 0)
{
if (unsignedBigEndianValue[i] >= 0x80)
{
i--;
}
return MakeTagLengthValue(0x02, unsignedBigEndianValue, i);
}
}
// All bytes were 0, encode 0.
return MakeTagLengthValue(0x02, unsignedBigEndianValue, 0, 1);
}
private static byte[] MakeSequence(params byte[][] data)
{
return MakeTagLengthValue(0x30, data.SelectMany(a => a).ToArray());
}
private static byte[] MakeBitString(byte[] data)
{
byte[] tmp = new byte[data.Length + 1];
// Insert a 0x00 byte for the unused bit count value
Buffer.BlockCopy(data, 0, tmp, 1, data.Length);
return MakeTagLengthValue(0x03, tmp);
}
private static byte[] s_rsaAlgorithmId = new byte[] { 0x30, 0x0D, 0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01, 0x05, 0x00 };
private static byte[] ExportSubjectPublicKeyInfo(RSA rsa)
{
RSAParameters parameters = rsa.ExportParameters(false);
return MakeSequence(
s_rsaAlgorithmId,
MakeBitString(
MakeSequence(
MakeInteger(parameters.Modulus),
MakeInteger(parameters.Exponent))));
}
```
You shouldn't really need the encoded private key. But, if you really do, you need the general-purpose approach because there's a lot of room for variability in the private key data.
`PrivateKeyInfo` is defined in [RFC 5208](https://www.rfc-editor.org/rfc/rfc5208#section-5) as
```
PrivateKeyInfo ::= SEQUENCE {
version Version,
privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
privateKey PrivateKey,
attributes [0] IMPLICIT Attributes OPTIONAL }
Version ::= INTEGER
PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
PrivateKey ::= OCTET STRING
Attributes ::= SET OF Attribute
```
It also says the current version number is 0.
The octet string of the private key is defined by the algorithm. For RSA we see in RFC 3447, along with `RSAPublicKey`:
```
RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER, -- (inverse of q) mod p
otherPrimeInfos OtherPrimeInfos OPTIONAL }
```
Ignore `otherPrimeInfos`. It doesn't, and shouldn't ever, apply. Therefore the version number to use is 0.
Taking utility methods already defined, we get the rest by
```
private static byte[] MakeOctetString(byte[] data)
{
return MakeTagLengthValue(0x04, data);
}
private static byte[] s_integerZero = new byte[] { 0x02, 0x01, 0x00 };
private static byte[] ExportPrivateKeyInfo(RSA rsa)
{
RSAParameters parameters = rsa.ExportParameters(true);
return MakeSequence(
s_integerZero,
s_rsaAlgorithmId,
MakeOctetString(
MakeSequence(
s_integerZero,
MakeInteger(parameters.Modulus),
MakeInteger(parameters.Exponent),
MakeInteger(parameters.D),
MakeInteger(parameters.P),
MakeInteger(parameters.Q),
MakeInteger(parameters.DP),
MakeInteger(parameters.DQ),
MakeInteger(parameters.InverseQ))));
}
```
Making all of this easier is on the feature roadmap for .NET Core (<https://github.com/dotnet/corefx/issues/20414> - doesn't say export, but where there's an import there's usually an export :))
Save your output to a file and you can check it with `openssl rsa -inform der -pubin -text -in pub.key` and `openssl rsa -inform der -text -in priv.key`
Upvotes: 3 [selected_answer]
|
2018/03/14
| 3,259 | 10,781 |
<issue_start>username_0: I am trying to do a simple thing:
check if a parameter is on the URL and then set a state for my component. This state will determine if some html code should be displayed or not.
Basically this dummy sample may give you an idea of what I have:
```
class Lalala extends Component {
constructor(props) {
super(props);
this.state = {showStatement : false}
}
parseURLParams(urlParams) {
if (urlParams.indexOf('blabla')> -1) {
this.setState({
showStatement: true
})
}
}
render() {
const { location } = this.props
this.parseURLParams(location.search);
}
}
```
So, as you can see, every time it renders, it calls the parseURLParams function which tries to set the state, and, of course, when the setState is called, the render function is being called again, causing a infinite loop which ends up returning this error in the console.
Could you guys tell me a better way to set this state? once this is something that doesn't depend on an event, I am a bit confused.
Thanks<issue_comment>username_1: You need to use the [`ExportCspBlob`](https://msdn.microsoft.com/en-us/library/system.security.cryptography.rsacryptoserviceprovider.exportcspblob(v=vs.110).aspx) method:
```
RSACryptoServiceProvider keyPair = new RSACryptoServiceProvider(2048);
var publicKey = keyPair.ExportCspBlob(false);
var privateKey = keyPair.ExportCspBlob(true);
```
`ExportParameters` exports the specific parameters from which the keys themselves can be calculated. For more information about those parameters, see the [wiki article](https://en.wikipedia.org/wiki/RSA_(cryptosystem)).
Upvotes: 0 <issue_comment>username_2: According to <https://docs.oracle.com/javase/7/docs/api/java/security/Key.html#getFormat()> the default for a public key encoding is X.509 SubjectPublicKeyInfo and for a private key is PKCS#8 PrivateKeyInfo.
There are a number of questions (like [Correctly create RSACryptoServiceProvider from public key](https://stackoverflow.com/q/41808094/6535399)) on creating RSAParameters from a SubjectPublicKeyInfo, but not as many for the reverse.
If you're creating your key via RSACryptoServiceProvider then the new key will always have an exponent value of 0x010001, which means the only variable sized piece of data that you have to contend with is the modulus value. The reason that this is important is that a SubjectPublicKeyInfo is (almost always) encoded in DER (defined by [ITU-T X.690](http://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.690)), which uses length-prefixed values. The ASN.1 ([ITU-T X.680](http://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.680)) is defined in [RFC 5280](https://www.rfc-editor.org/rfc/rfc5280#section-4.1) as
```
SubjectPublicKeyInfo ::= SEQUENCE {
algorithm AlgorithmIdentifier,
subjectPublicKey BIT STRING }
```
The encoded value for the AlgorithmIdentifier for RSA is
```
30 0D 06 09 2A 86 48 86 F7 0D 01 01 01 05 00
```
(aka SEQUENCE([OID("1.2.840.113549.1.1.1")](http://www.oid-info.com/get/1.2.840.113549.1.1.1), NULL))
The value for `subjectPublicKey` depends on the algorithm. For RSA it's `RSAPublicKey`, defined in [RFC 3447](https://www.rfc-editor.org/rfc/rfc3447#appendix-A.1.1) as
```
RSAPublicKey ::= SEQUENCE {
modulus INTEGER, -- n
publicExponent INTEGER -- e }
```
The encoding for an INTEGER is 02 (then the length) then the signed big-endian value. So, assuming that your Exponent value is `01 00 01` the encoded value is `02 03 01 00 01`. The modulus length depends on the size of your key.
```
int modulusBytes = parameters.Modulus.Length;
if (parameters.Modulus[0] >= 0x80)
modulusBytes++;
```
RSACryptoServiceProvider should always create keys that need the extra byte, but technically keys could exist which don't. The reason we need it is that parameters.Modulus is an UNsigned big-endian encoding, and if the high bit is set then we would be encoding a negative number into the RSAPublicKey. We fix that by inserting an 00 byte to keep the sign bit clear.
The length bytes for the modulus are slightly tricky. If the modulus is representable in 127 bytes or fewer (RSA-1015 or smaller) then you just use one byte for that value. Otherwise you need the smallest number of bytes to report the number, plus one. That extra byte (the first one, actually) says how many bytes the length is. So 128-255 is one byte, `81`. 256-65535 is two, so `82`.
We then need to wrap that into a BIT STRING value, which is easy (if we ignore the hard parts, since they're not relevant here). And then wrap everything else up in a SEQUENCE, which is easy.
Quick and dirty, only works on a 2048-bit key with exponent=0x010001:
```
private static byte[] s_prefix =
{
0x30, 0x82, 0x01, 0x22,
0x30, 0x0D,
0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01,
0x05, 0x00,
0x03, 0x82, 0x01, 0x0F,
0x00,
0x30, 0x82, 0x01, 0x0A,
0x02, 0x82, 0x01, 0x01, 0x00
};
private static byte[] s_suffix = { 0x02, 0x03, 0x01, 0x00, 0x01 };
private static byte[] MakeSubjectPublicInfoEasy2048(RSA rsa)
{
if (rsa.KeySize != 2048)
throw new ArgumentException(nameof(rsa));
RSAParameters rsaParameters = rsa.ExportParameters(false);
if (Convert.ToBase64String(rsaParameters.Exponent) != "AQAB")
{
throw new ArgumentException(nameof(rsa));
}
return s_prefix.Concat(rsaParameters.Modulus).Concat(s_suffix).ToArray();
}
```
Or, for a general-purpose response (that creates a lot of temporary byte[]s):
```
private static byte[] MakeTagLengthValue(byte tag, byte[] value, int index = 0, int length = -1)
{
if (length == -1)
{
length = value.Length - index;
}
byte[] data;
if (length < 0x80)
{
data = new byte[length + 2];
data[1] = (byte)length;
}
else if (length <= 0xFF)
{
data = new byte[length + 3];
data[1] = 0x81;
data[2] = (byte)length;
}
else if (length <= 0xFFFF)
{
data = new byte[length + 4];
data[1] = 0x82;
data[2] = (byte)(length >> 8);
data[3] = unchecked((byte)length);
}
else
{
throw new InvalidOperationException("Continue the pattern");
}
data[0] = tag;
int dataOffset = data.Length - length;
Buffer.BlockCopy(value, index, data, dataOffset, length);
return data;
}
private static byte[] MakeInteger(byte[] unsignedBigEndianValue)
{
if (unsignedBigEndianValue[0] >= 0x80)
{
byte[] tmp = new byte[unsignedBigEndianValue.Length + 1];
Buffer.BlockCopy(unsignedBigEndianValue, 0, tmp, 1, unsignedBigEndianValue.Length);
return MakeTagLengthValue(0x02, tmp);
}
for (int i = 0; i < unsignedBigEndianValue.Length; i++)
{
if (unsignedBigEndianValue[i] != 0)
{
if (unsignedBigEndianValue[i] >= 0x80)
{
i--;
}
return MakeTagLengthValue(0x02, unsignedBigEndianValue, i);
}
}
// All bytes were 0, encode 0.
return MakeTagLengthValue(0x02, unsignedBigEndianValue, 0, 1);
}
private static byte[] MakeSequence(params byte[][] data)
{
return MakeTagLengthValue(0x30, data.SelectMany(a => a).ToArray());
}
private static byte[] MakeBitString(byte[] data)
{
byte[] tmp = new byte[data.Length + 1];
// Insert a 0x00 byte for the unused bit count value
Buffer.BlockCopy(data, 0, tmp, 1, data.Length);
return MakeTagLengthValue(0x03, tmp);
}
private static byte[] s_rsaAlgorithmId = new byte[] { 0x30, 0x0D, 0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01, 0x05, 0x00 };
private static byte[] ExportSubjectPublicKeyInfo(RSA rsa)
{
RSAParameters parameters = rsa.ExportParameters(false);
return MakeSequence(
s_rsaAlgorithmId,
MakeBitString(
MakeSequence(
MakeInteger(parameters.Modulus),
MakeInteger(parameters.Exponent))));
}
```
You shouldn't really need the encoded private key. But, if you really do, you need the general-purpose approach because there's a lot of room for variability in the private key data.
`PrivateKeyInfo` is defined in [RFC 5208](https://www.rfc-editor.org/rfc/rfc5208#section-5) as
```
PrivateKeyInfo ::= SEQUENCE {
version Version,
privateKeyAlgorithm PrivateKeyAlgorithmIdentifier,
privateKey PrivateKey,
attributes [0] IMPLICIT Attributes OPTIONAL }
Version ::= INTEGER
PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier
PrivateKey ::= OCTET STRING
Attributes ::= SET OF Attribute
```
It also says the current version number is 0.
The octet string of the private key is defined by the algorithm. For RSA we see in RFC 3447, along with `RSAPublicKey`:
```
RSAPrivateKey ::= SEQUENCE {
version Version,
modulus INTEGER, -- n
publicExponent INTEGER, -- e
privateExponent INTEGER, -- d
prime1 INTEGER, -- p
prime2 INTEGER, -- q
exponent1 INTEGER, -- d mod (p-1)
exponent2 INTEGER, -- d mod (q-1)
coefficient INTEGER, -- (inverse of q) mod p
otherPrimeInfos OtherPrimeInfos OPTIONAL }
```
Ignore `otherPrimeInfos`. It doesn't, and shouldn't ever, apply. Therefore the version number to use is 0.
Taking utility methods already defined, we get the rest by
```
private static byte[] MakeOctetString(byte[] data)
{
return MakeTagLengthValue(0x04, data);
}
private static byte[] s_integerZero = new byte[] { 0x02, 0x01, 0x00 };
private static byte[] ExportPrivateKeyInfo(RSA rsa)
{
RSAParameters parameters = rsa.ExportParameters(true);
return MakeSequence(
s_integerZero,
s_rsaAlgorithmId,
MakeOctetString(
MakeSequence(
s_integerZero,
MakeInteger(parameters.Modulus),
MakeInteger(parameters.Exponent),
MakeInteger(parameters.D),
MakeInteger(parameters.P),
MakeInteger(parameters.Q),
MakeInteger(parameters.DP),
MakeInteger(parameters.DQ),
MakeInteger(parameters.InverseQ))));
}
```
Making all of this easier is on the feature roadmap for .NET Core (<https://github.com/dotnet/corefx/issues/20414> - doesn't say export, but where there's an import there's usually an export :))
Save your output to a file and you can check it with `openssl rsa -inform der -pubin -text -in pub.key` and `openssl rsa -inform der -text -in priv.key`
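For completeness, a minimal usage sketch (assuming the methods above are in scope, plus `using System.IO;` and `using System.Security.Cryptography;`) that produces the two files those commands expect:
```
using (var rsa = new RSACryptoServiceProvider(2048))
{
    File.WriteAllBytes("pub.key", ExportSubjectPublicKeyInfo(rsa));
    File.WriteAllBytes("priv.key", ExportPrivateKeyInfo(rsa));
}
```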
Upvotes: 3 [selected_answer]
|
2018/03/14
| 513 | 1,737 |
<issue_start>username_0: so i'm new to powershell and i'm training myself by doing my first script with a menu, and from time to time i'm adding stuff to understand and learn new things
so now i have a question: when the user is asked to input a choice, let's say number 1, he will then get asked to input a number of pc's that will be pinged
i'm trying to get something like: if the pc is alive, the user will get the answer "the pc is alive"
here is the code:
```
$mypc = read-host -prompt "what is the name of the pc?"
Test-Connection -ComputerName $mypc -Count 1
if ($mypc = $true) {
write-host "the $mypc is alive"
}
```
and i'm not getting the right answer
this is what i'm getting after successful "test-connection"
```
the True is alive
```
ps: please direct me to the right code, i want to figure it by myself
thanks alot !
Update:
the funniest thing is that everything was working as i wanted, the thing is that i wasn't familiar with the Test-Connection "no ping" message and i thought i was doing something wrong loool<issue_comment>username_1: In PowerShell, `=` is the assignment operator, not a comparison operator - you're assigning the value `$true` to `$mypc`.
Use `-eq` instead:
```
if ($mypc -eq $true) {
write-host "the $mypc is alive"
}
```
Note, though, that `$mypc` actually holds the computer name from `Read-Host`; the result of `Test-Connection` is never captured in your script. If the variable held a boolean, you could skip the equality check entirely:
```
if ($mypc) {
write-host "the $mypc is alive"
}
```
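For completeness, a minimal sketch that captures the ping result directly via `Test-Connection`'s `-Quiet` switch, which returns a boolean:
```
$mypc = Read-Host -Prompt "what is the name of the pc?"
if (Test-Connection -ComputerName $mypc -Count 1 -Quiet) {
    Write-Host "the $mypc is alive"
}
```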
Upvotes: 3 [selected_answer]<issue_comment>username_2: You're pretty close for your first script. Here is your issue:
`if ($mypc = $true)` needs to be `if ($mypc -eq $true)`
`-eq` is a comparison, when you used the `=` sign you were actually setting the variable `$mypc` to `$true`
Upvotes: 0
|
2018/03/14
| 1,077 | 3,810 |
<issue_start>username_0: I am encapsulating a std::list to make it safely iterable when iterating can potentially mark contents as 'invalid', and 'invalid' contents are skipped in the iteration. Specifically, during the iteration the current object could schedule itself or other objects for removal from the list and mark those objects as invalid. The list is then periodically cleaned of invalid objects.
**How do I define an increment operator to make range-based for loops work correctly?** Here is my class:
```
template <typename T> class DeferredCleanupList
{
public:
    DeferredCleanupList() {
        (void)static_cast<Valid*>((T)0);
    }
    virtual ~DeferredCleanupList() {}

    typedef typename std::list<T>::iterator iterator;

    iterator begin() {
        iterator it = container.begin();
        if ((*it)->valid())
            return it;
        return next(it);
    }

    iterator next(iterator it) {
        do {
            ++it;
        }
        while (it != end() && !(*it)->valid());
        return it;
    }

    iterator end() { return container.end(); }

    // to be implemented:
    // typedef typename std::list<T>::const_iterator const_iterator ;
    // const_iterator cbegin() const { return container.cbegin(); }
    // const_iterator cend() const { return container.cend(); }
    // const_iterator cnext() const { ??? }

    size_t size() const { return container.size(); }

    void add(T ptr) { container.push_front(ptr); }
    void remove(T ptr) { ptr->invalidate(); }

    // called occasionally
    void delete_invalid() {
        for (auto it = container.begin(); it != container.end(); ) {
            auto ptr = *it;
            if (ptr->valid())
                ++it;
            else {
                delete ptr;
                it = container.erase(it);
            }
        }
    }

private:
    DeferredCleanupList(const DeferredCleanupList&);
    DeferredCleanupList& operator=(const DeferredCleanupList&);

    std::list<T> container;
};
```
My current test case is something like:
```
int main() {
    class D : public Valid {};
    DeferredCleanupList<D*> list;

    for (auto it = list.begin(); it != list.end(); it = list.next(it)); // works
    for (auto ptr : list); // iterates, but doesn't call list.next(it)
}
```
EDIT:
After some trial and error, I wrote this iterator wrapper based on the suggestions in the comments:
```
template <typename T> class DeferredCleanupList
{
public:
    class iterator {
    public:
        iterator(typename std::list<T>::iterator it, DeferredCleanupList& ls) : it(it), list(ls) {}
        iterator& operator=(const iterator& rhs) { it = rhs.it; return *this; }

        iterator& operator++() {
            do {
                ++it;
            }
            while (it != list.end().it && !(*it)->valid());
            return *this;
        }

        friend bool operator==(const iterator& lhs, const iterator& rhs) { return lhs.it == rhs.it; }
        friend bool operator!=(const iterator& lhs, const iterator& rhs) { return !(lhs == rhs); }

        T& operator*() { return *it; }

    private:
        typename std::list<T>::iterator it;
        DeferredCleanupList& list;
    };

    iterator begin() {
        iterator it = iterator(container.begin(), *this);
        if (it == end() || (*it)->valid())
            return it;
        return ++it;
    }

    iterator end() { return iterator(container.end(), *this); }
}
```
It appears to work perfectly in all the test cases I throw at it. Am I missing anything obvious with this approach? Is there a more elegant solution?
|
2018/03/14
| 873 | 3,097 |
<issue_start>username_0: I am using laravel eloquent. I have fetched data from two tables using eloquent.
I have a `post` table and a `chat` table. For the `post` table I have the model `Post.php` and for the `chat` table I have the model `Chat.php`. Here is the eloquent relation I have created to fetch the chat for an individual post for a user.
in `Post.php`
```
public function TeamMessage()
{
return $this->hasMany('App\Chat','post_id');
}
```
And in `Chat.php`
```
public function ChatRelation()
{
return $this->belongsTo('App\Post');
}
```
It is working perfectly. But this relation fetches all messages for a specific post. I want to fetch all unread messages from the `chat` table. I have a column named `unread` in the `chat` table.
Now my question is how I can fetch only `unread` messages for a specific post.<issue_comment>username_1: For querying relationships, you have to call them as functions instead of properties, like this:
```
$unreadPosts = $post->TeamMessage()->where('unread', true)->get();
```
For more information on this you can take a look at the [docs](https://laravel.com/docs/master/eloquent-relationships#querying-relations).
Upvotes: 0 <issue_comment>username_2: You need to create a local scope on your model, information on local scopes can be found here: <https://laravel.com/docs/5.6/eloquent#local-scopes>
```
public function scopeUnread($query)
{
return $query->where('unread', 1);
}
```
Then in your controller/view
```
$unread = $yourmodel->unread()
```
Upvotes: 0 <issue_comment>username_3: First I would change your relation names to the name of the entity in lower case:
in Post.php
```
public function chats()
{
return $this->hasMany('App\Chat','post_id');
}
```
And in Chat.php
```
public function post()
{
return $this->belongsTo('App\Post');
}
public function scopeUnread($query)
{
return $query->where('unread', 1);
}
```
Then you can use
```
$post->chats()->unread()->get();
```
Upvotes: 0 <issue_comment>username_4: While the other answers all work, they either depend on scopes (which are very useful in many circumstances) or on you having already instantiated an instance of `$post`, which doesn't let you eager load multiple posts with their messages.
The dynamic solution is this, which will let you fetch either 1 or more posts and eager load their messages with subquery:
```
$posts = Post::with(['TeamMessage' => function ($query) {
$query->where('unread', true); // This part applies to the TeamMessage query
}])->get();
```
[See in documentation](https://laravel.com/docs/5.6/eloquent-relationships#constraining-eager-loads)
Edit:
If you, however, want to filter the posts, to only show those that have unread messages, you need to use `whereHas` instead of `with`:
```
$posts = Post::whereHas('TeamMessage', function ($query) {
    $query->where('unread', true); // This part applies to the TeamMessage query
})->get();
```
[More in the documentation](https://laravel.com/docs/5.6/eloquent-relationships#querying-relationship-existence).
You can also chain `whereHas(...)` with `with(...)`.
Upvotes: 1
|
2018/03/14
| 758 | 2,865 |
<issue_start>username_0: I wish I could do the following:
```
type Logger = {
warn : System.Type -> string -> unit
// also methods for error, info etc
}
// during application startup
let warn (serviceProvider : IServiceProvider) (t : Type) s =
let loggerType = typeof<ILogger<_>>.MakeGenericType(t)
let logger = serviceProvider.GetService(loggerType) :?> ILogger
logger.LogWarning(s)
let log = { warn = warn; ... } // with similar setups for info, error etc
```
Then, I'd be able to partially apply with `log`, and deep inside the app I could
```
log.warn typeof<Foo> "a warning message"
```
which would log a message using the type `Foo` to determine the [log category](https://learn.microsoft.com/en-us/aspnet/core/fundamentals/logging/?tabs=aspnetcore2x#log-category).
The troublesome expression is
```
typeof<ILogger<_>>.MakeGenericType(t)
```
where F# seems to infer the type of `'a` in the generic definition to be `obj`, and so the call to `MakeGenericType` fails with an exception saying that the type definition I'm calling it on is not generic.
The equivalent C# would be
```
typeof(ILogger<>).MakeGenericType(t)
```
but `typeof<ILogger<>>` doesn't compile in F#.
How do I instantiate a generic type like this, where the type argument is only known at runtime, in F#?<issue_comment>username_1: The F# equivalent of `typeof(ILogger<>)` in C# is to use the `typedefof` function:
```
typedefof<ILogger<_>>.MakeGenericType(t)
```
This is still a normal function that takes a fully instantiated type - the `_` placeholder will be automatically filled with `obj`, but the `typedefof` function does not return the type, but its generic type definition. You could also do the same by calling `GetGenericTypeDefinition` on the result of `typeof`, but `typedefof` is a nicer shortcut!
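For illustration, the `GetGenericTypeDefinition` route looks like this (a sketch, reusing the question's `ILogger<_>` and `t`):
```
let loggerType = typeof<ILogger<_>>.GetGenericTypeDefinition().MakeGenericType(t)
```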
Upvotes: 2 <issue_comment>username_2: The syntax you're looking for is `typedefof<ILogger<_>>` - that will give you the non-instantiated type `ILogger<_>`, on which you can then call `MakeGenericType`.
However, I would rather recommend you rethink your architecture. Why instantiate types at runtime? That's slower and less safe. In this case, I see no reason to do it.
Better pass your types as generic parameters. Of course you can't have record members be generic functions, so that's a bummer. However, F# does offer another facility for this sort of thing - interfaces. Interface methods can be fully generic:
```
type Logger =
  abstract member warn<'logger> : string -> unit

let mkLog (serviceProvider : IServiceProvider) =
  { new Logger with
      member __.warn<'logger> s =
        let logger = serviceProvider.GetService(typeof<ILogger<'logger>>) :?> ILogger
        logger.LogWarning(s)
  }

let log = mkLog serviceProvider

// usage:
log.warn<Foo> "a warning message"
```
Granted, the syntax for creating an instance of such type is a bit clunky, but you only have to do it once.
Upvotes: 3 [selected_answer]
|
2018/03/14
| 1,150 | 4,197 |
<issue_start>username_0: I have an array containing strings.
I have a text file.
I want to loop through the text file line by line.
And check whether each element of my array is present or not.
(they must be whole words and not substrings)
I am stuck because my script only checks for the presence of the first array element.
However, I would like it to return results with each array element and a note as to whether this array element is present in the entire file or not.
```
#!/usr/bin/python

with open("/home/all_genera.txt") as file:
    generaA = []
    for line in file:
        line = line.strip('\n')
        generaA.append(line)

with open("/home/config/config2.cnf") as config_file:
    counter = 0
    for line in config_file:
        line = line.strip('\n')
        for part in line.split():
            if generaA[counter] in part:
                print(generaA[counter], "is -----> PRESENT")
            else:
                continue
    counter += 1
```<issue_comment>username_1: ```
from collections import Counter
import re

# first normalize the text (lowercase everything and remove punctuation (anything not alphanumeric))
normalized_text = re.sub("[^a-z0-9 ]", "", open("some.txt").read().lower())
# note that this normalization is subject to the rules of the language/alphabet/dialect you are using, and english ascii may not cover it

# Counter will collect all the words into a dictionary of [word]:count
words = Counter(normalized_text.split())

# create a new set of all the words in both the text and our word_list_array
set(my_word_list_array).intersection(words.keys())
```
Upvotes: 1 <issue_comment>username_2: the counter is not increasing because it's outside the `for` loops.
```
with open("/home/all_genera.txt") as myfile: # don't use 'file' as variable, is a reserved word! use myfile instead
generaA=[]
for line in myfile: # use .readlines() if you want a list of lines!
generaA.append(line)
# if you just need to know if string are present in your file, you can use .read():
with open("/home/config/config2.cnf") as config_file:
mytext = config_file.read()
for mystring in generaA:
if mystring in mytext:
print mystring, "is -----> PRESENT"
# if you want to check if your string in line N is present in your file in the same line, you can go with:
with open("/home/config/config2.cnf") as config_file:
for N, line in enumerate(config):
if generaA[N] in line:
print "{0} is -----> PRESENT in line {1}".format(generaA[N], N)
```
I hope that everything is clear.
This code could be improved in many ways, but i tried to have it as similar as yours so it will be easier to understand
Upvotes: -1 <issue_comment>username_3: If I understand correctly, you want a sequence of words that are in both files. If yes, `set` is your friend:
```
def parse(f):
return set(word for line in f for word in line.strip().split())
with open("path/to/genera/file") as f:
source = parse(f)
with open("path/to/conf/file" as f:
conf = parse(f)
# elements that are common to both sets
common = conf & source
print(common)
# elements that are in `source` but not in `conf`
print(source - conf)
# elements that are in `conf` but not in `source`
print(conf - source)
```
So to answer "I would like it to return results with each array element and a note as to whether this array element is present in the entire file or not", you can use either common elements or the `source - conf` difference to annotate your `source` list:
```
# using common elements
common = conf & source
result = [(word, word in common) for word in source]
print(result)
# using difference
diff = source - conf
result = [(word, word not in diff) for word in source]
```
Both will yeld the same result and since set lookup is O(1) perfs should be similar too, so I suggest the first solution (positive assertions are easier to the brain than negative ones).
You can of course apply further cleaning / normalisation when building the sets, ie if you want case insensitive search:
```
def parse(f):
return set(word.lower() for line in f for word in line.strip().split())
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 822 | 3,488 |
<issue_start>username_0: I developed an app a few months back for iOS devices that generates real-time harmonic rich drones. It works fine on newer devices, but it's running into buffer underruns on slower devices. I need to optimize this thing and need some mental help. Here's a super basic overview of what I'm currently doing:
* Create an "Oscillator Bank" that consists of X number of harmonics (simply calculated from a given fundamental frequency. Nothing fancy here.)
* Inside my DAC function that spits out samples to an iOS audio buffer, I call a "GetNextSample()" function that goes through the bank of sine oscillators, calculates the sample for each one and adds them up. Some simple additive synthesis.
* Enjoy the beauty of the drone.
Again, it works great, until it doesn't. I'd like to optimize this thing so I'm not using brute additive synthesis of real-time calculated sine waves. If I limit the number of harmonics ("banks") to 2, it'll work on the older devices. Not cool. On the newer devices, it underruns around 50 harmonics. Not too bad. But if I want to play multiple drones at once to create some chords, that's too much processing power.... so...
* Should I generate waveform tables to just loop through instead of constant calculation? (I assume yes...)
* Should I convert my usage of double-precision floating point to integer based calculations? (I assume yes...)
And my big algorithmic question (being pretty non-mathematical):
* If I use a waveform table, how do I accurately determine how long the wave/table should be?? In my experience developing this app, if I just go to the end of a period (2\*PI) and start over again, resetting the phase back to 0, I get a sound artifact, since I'm force offsetting the phase. In other words, I can't guarantee that one period will give me the right results...
Maybe I'm over complicating things... What's the standard way of doing quick, processor friendly real-time synth of multiple added sines?
I'll keep poking around in the meantime.
Thanks!<issue_comment>username_1: I have written such a beast in golang on server side ... for starters yes use single precision floating point
To address table population, I would assure your implementation is solid by having it synthesize a square wave. Visualize the output for each run as you give it each additional frequency (with its corresponding parms of amplitude and phase shift) ... by definition a single cycle is enough as long as you are correctly using enough cycles to cover the time period of a sample
Its important to leverage the knowledge that generating an output curve from an input set of sine waves ( each with freq, amplitude, phase shift) lends itself to doing the reverse ... namely perform a FFT on that output curve to have the api give you its version of the underlying sine waves (again each with a freq, amplitude and phase) ... this will confirm your system is accurate
The name of the process you are implementing is : ***inverse Fourier transform*** and there are libraries for this however I too prefer rolling my own
Upvotes: 0 <issue_comment>username_2: Have you (or can you, not an iOS person) increase the buffer size? Might give you enough overhead that you do not need this. Otherwise yes [wave-table synthesis](https://ccrma.stanford.edu/~bilbao/booktop/node9.html) is a viable approach. You could calculate a new wavetable from the sum of all the harmonics only when a parameter changes.
Upvotes: 2 [selected_answer]
|
2018/03/14
| 1,325 | 4,930 |
<issue_start>username_0: I have just started using c++ and saw that their is a null value for pointers. I am curious as to what this is used for. It seems like it would be pointless to add a pointer to point to nothing.<issue_comment>username_1: Basically, pointers are just integers. The null pointer is a pointer with a value of 0. It doesn't strictly point to nothing, it points to absolute address 0, which generally isn't accessible to your program; dereferencing it causes a fault.
It's generally used as a flag value, so that you can, for example, use it to end a loop.
**Update**:
There seem to be a lot of people confused by this answer, which is, strictly, completely correct. See C11(ISO/IEC 9899:201x) §6.3.2.3 Pointers Section 3:
>
> An *integer constant expression with the value 0*, or such an expression cast to type void \*, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.
>
>
>
So, what's an address? It's a number *n* where 0 ≤ *n* ≤ `max_address`. And how do we represent such a number? Why, it's an *integer*, just like the standard says.
The C11 standard makes it clear that there's never anything to reference at address 0, because in some old pathologically non-portable code in BSD 4.2, you often saw code like this:
```
/* DON'T TRY THIS AT HOME */
int
main(){
char target[100] ;
char * tp = ⌖ ;
char * src = "This won't do what you think." ;
void exit(int);
while((*tp++ = *src++))
;
exit(0);
}
```
This is still valid C:
```
$ gcc -o dumb dumb.c
dumb.c:6:12: warning: incompatible pointer types initializing 'char *' with an
expression of type 'char (*)[100]' [-Wincompatible-pointer-types]
char * tp = &target ;
^ ~~~~~~~
1 warning generated.
$
```
In 4.2BSD on a VAX, you could get away with that nonsense, because address `0` reliably contained the *value* 0, so the assignment evaluated to `0`, which is of course FALSE.
Now, to demonstrate:
```
/* Very simple program dereferencing a NULL pointer. */
int
main() {
int * a_pointer ;
int a_value ;
void exit(int); /* To avoid any #includes */
a_pointer = ((void*)0);
a_value = *a_pointer ;
exit(0);
}
```
Here's the results:
```
$ gcc -o null null.c
$ ./null
Segmentation fault: 11
$
```
Upvotes: -1 <issue_comment>username_2: Generally it's a placeholder. If you just declare a pointer, `int *a;`, there's no guarantee what is in the pointer when you want to access it. So if your code may or may not set the pointer later, there's no way to tell if the pointer is valid or just pointing to garbage memory. But if you declare it as `NULL`, such as `int *a = NULL;` you can then check later to see if the pointer was set, like `if(a == NULL)`.
Upvotes: 1 <issue_comment>username_3: The null value in C and C++ is equal to 0, but `nullptr` in C++ is different: it always has a pointer type. We assign a null value to a pointer variable for various reasons.
>
> 1. To check whether a memory has been allocated to the pointer or not
> 2. To neutralize a dangling pointer so that it should not create any side effect
> 3. To check whether a return address is a valid address or not etc.
>
>
>
Most of the time during initialization we assign a null value to a pointer so that we can later check whether it is still null or an address has been assigned to it.
Upvotes: -1 <issue_comment>username_4: Well, the null pointer value has the remarkable property that, despite it being a well-defined and unique constant value, the exact value depending on machine-architecture and ABI (on most modern ones all-bits-zero, not that it matters), it *never* points to (or just behind) an object.
This allows it to be used as a reliable error-indicator when a valid pointer is expected (functions might throw an exception or terminate execution instead), as well as a sentinel value, or to mark the absence of something optional.
On many implementations accessing memory through a nullpointer will reliably cause a hardware exception (some even trap on arithmetic), though on many others, especially those without paging and / or segmentation it will not.
Upvotes: 2 <issue_comment>username_5: >
> It seems like it would be pointless to add a pointer to point to
> nothing.
>
>
>
No, it is not. Suppose you have a function returning an optional dynamically allocated value. When you want to return "nothing" you return null. The caller can check for null and distinguish between 2 different cases: when the return value is "nothing" and when the return value is some valid, usable object.
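A tiny sketch of that pattern (the names here are hypothetical):
```
struct Widget { /* ... */ };

Widget* find_widget(int id);  // hypothetical lookup; returns nullptr when absent

void example() {
    if (Widget* w = find_widget(42)) {
        // valid, usable object: safe to dereference w
    } else {
        // "nothing" case: no widget with that id
    }
}
```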
Upvotes: 1
|
2018/03/14
| 448 | 1,759 |
<issue_start>username_0: I have an integration test setup that uses RSpec + Capybara. We recently switched from PhantomJS to headless Chromedriver and I miss that javascript errors are not displayed anymore.
Is there a convenient way of configuring Chromedriver so that javascript errors show up in the log output of capybara tests?
The ways of accessing the javascript errors I found (see below) all are a bit cumbersome or not really what I was looking for.
* setting an explicit log output path as described in [this answer](https://stackoverflow.com/a/37925681/1870317)
* accessing the (kind of cut back) error log through the driver interface as described in this [gist](https://gist.github.com/gkop/1371962)<issue_comment>username_1: Assuming your second method is referring to the comments in that gist rather than the method that requires installing an extension, then no there isn't. Selenium is basing its functionality/feature support off the WebDriver spec - <https://w3c.github.io/webdriver/webdriver-spec.html> - which doesn't specify anything about reporting JS errors back to the automation client.
Note - there are configuration options for the Chrome logs which may change the verbosity and get rid of the "kind of cut back" issue - see [Capture browser console logs with capybara](https://stackoverflow.com/questions/46278514/capture-browser-console-logs-with-capybara)
Upvotes: 2 <issue_comment>username_2: With
```
RSpec.configure do |config|
config.after(type: :feature) do
STDERR.puts page.driver.browser.manage.logs.get(:browser)
end
end
```
one can print out the logs after each example (that is, each feature spec) with RSpec. It should be similar in other test frameworks. Note that this does not fail the example, however.
Upvotes: 3
|
2018/03/14
| 309 | 1,374 |
<issue_start>username_0: I have a Sony A7S2 - is there a way to access the Remote Camera API endpoint while the camera is connected to an access point - and, while we're at it: is it possible to keep the camera connected to an AP and the endpoint running all the time, even during "normal" operation? As soon as I exit the WiFi menu, the camera disconnects from the AP.
Reason: I'd like to control my camera from way more far away than the integrated AP allows, and I'd like to be able to control the camera while also connected to the Internet (so I can upload transferred photos without having to switch my laptop's WiFi connection).<issue_comment>username_1: Unfortunately with the camera Remote API you will only be able to connect directly to the camera to control it. This issue has been raised with the team though and we will for sure post an update if anything changes.
As far as keeping the camera connected, this does vary from camera to camera, but generally there is no way to extend the wifi connection period beyond the default time.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Sony's A7-series supports the PTP/IP protocol when connected to an AP, e.g., for downloading files through the wireless importer. You cannot use the remote-API through that but it might or might not support PTP's build-in capture commands (never tried...).
Upvotes: 0
|
2018/03/14
| 853 | 2,021 |
<issue_start>username_0: I've a simple question: suppose that I would like to make an operation on one column (in this case multiplying by 1000 column Measure) according to the value of another column (in this case when Unit is equal to L).
Is there any efficient way to do?
I'll report a simple example but you don't have to focus on changing the name of the value of the second column.
Input:
```
a <- data.frame(Measure = c(10, 2000, 10000, 15, 40),
Unit = c("L","mL","mL","L","L"),
Price = c(100, 200, 500, 700, 900 ))
```
Expected Output:
```
b <- data.frame(Measure = c(10000, 2000, 10000, 15000, 40000),
Unit = c("mL","mL","mL","mL","mL"),
Price = c(100, 200, 500, 700, 900 ))
```<issue_comment>username_1: With the code below you only multiply Measure by 1000 when Unit is "L"; in all other cases the Measure value is returned unchanged
```
a <- data.frame(Measure = c(10, 2000, 10000, 15, 40),
Unit = c("L","mL","mL","L","L"),
Price = c(100, 200, 500, 700, 900 ))
a$Measure <- ifelse(a$Unit == "L", a$Measure * 1000, a$Measure)
a
Measure Unit Price
1 10000 L 100
2 2000 mL 200
3 10000 mL 500
4 15000 L 700
5 40000 L 900
```
Upvotes: 1 <issue_comment>username_2: Same as above, only the tidyverse/dplyr way:
```
> library(dplyr)
> a %>% mutate(Measure=ifelse(Unit=="mL",Measure,(Measure*1000)))
Measure Unit Price
1 10000 L 100
2 2000 mL 200
3 10000 mL 500
4 15000 L 700
5 40000 L 900
```
Upvotes: 1 <issue_comment>username_3: ```
library(data.table)
DT = data.table(a)
DT[Unit == "L", `:=`(Measure = Measure * 1000, Unit = "mL")]
Measure Unit Price
1: 10000 mL 100
2: 2000 mL 200
3: 10000 mL 500
4: 15000 mL 700
5: 40000 mL 900
```
The syntax is `DT[i, j]`:
1. Filter to rows using `i`
2. Do `j`
The `:=` function is used to edit columns.
Upvotes: 3 [selected_answer]
|
2018/03/14
| 934 | 2,029 |
<issue_start>username_0: I need to duplicate some values contained in a given colum by the value contained in another column of a dataframe.
For example, this is my dataframe:
```
Year <- rep(c('2011', '2012', '2013', '2014', '2015', '2016'), 3)
LT <- rep(c(8, 9, 10), each=6)
Nind <- rep(c(10, 7, 5), each=6)
df <- data.frame(Year, LT, Nind)
```
what I would like to do is to replicate each LT by the respective value in Nind for each year, like this:
```
lt <- c(rep(8, 60), rep(9, 42), rep(10, 30))
yr <- c(rep(c('2011', '2012', '2013', '2014', '2015', '2016'), each=10),
rep(c('2011', '2012', '2013', '2014', '2015', '2016'), each=7),rep(c('2011',
'2012', '2013', '2014', '2015', '2016'), each=5))
df2 <- data.frame(yr, lt)
```
Thank you so much for your help!
Silvia
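(Not from the original thread - a minimal base-R sketch of what's being asked, repeating each Year/LT row by its Nind count:)
```
# repeat each row's Year and LT value Nind times
df2 <- data.frame(yr = rep(df$Year, df$Nind),
                  lt = rep(df$LT, df$Nind))
```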
|
2018/03/14
| 1,005 | 4,252 |
<issue_start>username_0: I'm using Xamarin.Forms but I'm having a few problems handling push notifications. **They only work fine when app is in foreground.**
When the app is in background, all notifications are received and displayed correctly but I can't manage to get them to vibrate or even turn led on. I have found a lot of documentation but it's pretty old and most of the time, methods referred are deprecated.
Anyway, I have a *FirebaseMessagingService* to handle FCM events and this is the action that generates the push notification:
```
void SendNotification(RemoteMessage message)
{
var intent = new Intent(this, typeof(MainActivity));
intent.AddFlags(ActivityFlags.ClearTop);
intent.PutExtra("type", message.Data["type"]);
var pendingIntent = PendingIntent.GetActivity(this, 0, intent, PendingIntentFlags.OneShot);
NotificationManager notificationManager = (NotificationManager)ApplicationContext.GetSystemService(NotificationService);
NotificationCompat.Builder builder = null;
if (Android.OS.Build.VERSION.SdkInt >= Android.OS.BuildVersionCodes.O)
{
var importance = NotificationImportance.Default;
NotificationChannel notificationChannel = new NotificationChannel("ID", "Name", importance);
notificationManager.CreateNotificationChannel(notificationChannel);
builder = new NotificationCompat.Builder(ApplicationContext, notificationChannel.Id);
}
else
builder = new NotificationCompat.Builder(ApplicationContext);
builder = builder
.SetSmallIcon(Resource.Drawable.icon)
.SetDefaults((int)((int)NotificationDefaults.Sound | (int)NotificationDefaults.Vibrate))
.SetAutoCancel(true)
.SetContentTitle(message.GetNotification().Title)
.SetContentText(message.GetNotification().Body)
.SetContentIntent(pendingIntent)
.SetLights(65536, 1000, 500)
.SetVibrate(new long[] { 1000, 1000, 1000 })
.SetPriority((int)NotificationPriority.High);
PowerManager pm = (PowerManager)ApplicationContext.GetSystemService(PowerService);
WakeLock wl = pm.NewWakeLock(WakeLockFlags.Partial, "TAG");
wl.Acquire(15000);
notificationManager.Notify(0, builder.Build());
wl.Release();
}
```
I also added these lines in the Android.Manifest file:
```
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.WAKE_LOCK" />
```
**It was much simpler before** but I've been adding new things trying to make it work, like the WAKE_LOCK section or the distinction of Android version when creating the Builder object. **It vibrates only when the app is in foreground.**
I even tried using the James Montemagno's Vibrate Plugin or using Android.Vibrator class and they didn't work either when app is in background.
I'm open to suggestions here. Thanks.<issue_comment>username_1: On Oreo(+), you have to enable the vibration on the `NotificationChannel` via `EnableVibration(true)`.
### Oreo Example:
```
NotificationChannel channel;
channel = notificationManager.GetNotificationChannel(myUrgentChannel);
if (channel == null)
{
channel = new NotificationChannel(myUrgentChannel, channelName, NotificationImportance.High);
channel.EnableVibration(true);
channel.EnableLights(true);
channel.SetSound(
RingtoneManager.GetDefaultUri(RingtoneType.Notification),
new AudioAttributes.Builder().SetUsage(AudioUsageKind.Notification).Build()
);
channel.LockscreenVisibility = NotificationVisibility.Public;
notificationManager.CreateNotificationChannel(channel);
}
channel.Dispose();
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: *After reviewing everything over and over again all I needed was adding this to the payload. I didn't know this must be included in order for it to vibrate. I wasn't able to modify this behavior programmatically but I'm sure it's possible.*
```
"notification":{
"sound":"default"
}
```
**FOUND IT HERE**
[Vibration on push-notification not working when phone locked/app not open](https://stackoverflow.com/questions/45588629/vibration-on-push-notification-not-working-when-phone-locked-app-not-open)
Upvotes: 1
|
2018/03/14
| 655 | 2,056 |
<issue_start>username_0: I want to create a table and put on it all the param which are not empty in param like this
```
http://localhost:5002/5u3k0HRYqYEyToBB53?m1=<EMAIL>&m2=<EMAIL>&m3=&m4=&m5=&m6=&m7=&m8=&m9=<EMAIL>&m10=&m11=<EMAIL>&m12=&m13=&m14=&m15=&m16=&m17=&m18=&m19=&v=4.1.2&hw=6
```
And after this i want to store all the email ( m1,m2,m9,m11 ) in a table.
```
console.log(b) // <EMAIL> , <EMAIL> , <EMAIL> , <EMAIL>
```
So I did it like this
```
let emailTable = [req.query.m0, req.query.m1, req.query.m2, ......]
Box.push({Email: emailTable})
console.log(Box) // it shows me even the empty params, i want only the full ones
```
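(Not from the original thread - a minimal sketch that keeps only the non-empty `m…` parameters; the `req.query` and `Box` names are taken from the question:)
```
const emails = Object.keys(req.query)
    .filter(key => /^m\d+$/.test(key) && req.query[key] !== '')
    .map(key => req.query[key]);

Box.push({ Email: emails });
console.log(emails); // only the filled-in addresses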
|
2018/03/14
| 525 | 1,984 |
<issue_start>username_0: I am looking for a way to iterate over an array and only output the first item, and when iterating again, output the second item.
```
var array = [1,2,3]

iterate : function () {
    $.each(array, function(a,b) {
        console.log(a); // This will log 1 and 2 and 3
    });
}
```
This will log 1,2 and 3 but I just want to log 1 first and when the `iterate` function is called again, it will log 2 and then 3
I am assigning the `iterate` function to a button (an increment button). So the increment button will simply output 1 when clicked first, then 2 when clicked again, and then 3 when clicked again.
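(Not from the original thread - one common sketch is to keep the index outside the function and advance it on each click:)
```
var array = [1, 2, 3];
var index = 0;

function iterate() {
    if (index < array.length) {
        console.log(array[index]); // 1 on the first click, then 2, then 3
        index++;
    }
}
```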
|
2018/03/14
| 716 | 2,641 |
<issue_start>username_0: I am using springboot and springdata with Mysql.
I have 2 entities, Customer & Order:
```
@Entity
@Table(name = "customers")
public class Customer {
@Id
@GeneratedValue(strategy= GenerationType.IDENTITY)
@Column(name="id", nullable = false)
protected long id;
@Column(name = "name")
private String name;
}
@Entity
@Table(name = "orders")
public class Order {
@Id
@GeneratedValue(strategy= GenerationType.IDENTITY)
@Column(name="id", nullable = false)
protected long id;
@Column(name="customer_id")
private long customerId;
}
```
I also have a repository:
```
@Repository
public interface OrdersRepository extends JpaRepository<Order, Long> {
    @Query("select o from Order o, Customer c where o.customerId = c.id")
    Page<Order> searchOrders(final Pageable pageable);
}
```
The method has some more arguments for searching, but the problem is when I send a PageRequest object with sort that is a property of Customer.
e.g.
```
Sort sort = new Sort(Sort.Direction.ASC, "c.name");
ordersRepository.search(new PageRequest(x, y, sort));
```
However, sorting by a field of Order works well:
```
Sort sort = new Sort(Sort.Direction.ASC, "id");
ordersRepository.search(new PageRequest(x, y, sort));
```
The error I get is that c is not a property of Order (but since the query is a join of the entities I would expect it to work).
```
Caused by: org.hibernate.QueryException: could not resolve property c of Order
```
Do you have any idea how I can sort by a field of the joined entity?
Thank you<issue_comment>username_1: In JPA , the thing that you sort with must be something that is returned in the select statement, you can't sort with a property that is not returned
Upvotes: 2 <issue_comment>username_2: You got the error because the relationship is not modeled properly. In your case it is a `ManyToOne` relation. I can recomend the [wikibooks](https://en.wikibooks.org/wiki/Java_Persistence/Relationships) to read further.
```
@Entity
@Table(name = "orders")
public class Order {
@Id
@GeneratedValue(strategy= GenerationType.IDENTITY)
@Column(name="id", nullable = false)
protected long id;
@ManyToOne
@JoinColumn(name="customer_id", referencedColumnName = "id")
private Customer customer;
}
```
The query is not needed anymore because the customer will be fetched.
```
@Repository
public interface OrdersRepository extends PagingAndSortingRepository<Order, Long> {
}
```
Now you can use nested properties.
```
Sort sort = new Sort(Sort.Direction.ASC, "customer.name");
ordersRepository.findAll(new PageRequest(x, y, sort));
```
Upvotes: 0
|
2018/03/14
| 2,444 | 7,586 |
<issue_start>username_0: I'm trying to prove that a number is prime using the Znumtheory library.
In Znumtheory primes are defined in terms of relative primes:
```
Inductive prime (p:Z) : Prop :=
prime_intro :
1 < p -> (forall n:Z, 1 <= n < p -> rel_prime n p) -> prime p.
```
So to prove that 3 is prime I should apply prime\_intro to the goal. Here is my try:
```
Theorem prime3 : prime 3.
Proof.
apply prime_intro.
- omega.
- intros.
unfold rel_prime. apply Zis_gcd_intro.
+ apply Z.divide_1_l.
+ apply Z.divide_1_l.
+ intros. Abort.
```
I don't know how to use the hypothesis `H : 1 <= n < 3` which says that `n` is `1` or `2`. I could destruct it, apply `lt_eq_cases` and destruct it again, but I would be stuck with a useless `1 < n` in the first case.
I wasn't expecting to have a hard time with something that looks so simple.<issue_comment>username_1: The lemma you mentioned is actually proved in that library, under the name `prime_3`. You can look up its proof on [GitHub](https://github.com/coq/coq/blob/master/theories/ZArith/Znumtheory.v).
You mentioned how strange it is to have such a hard time to prove something so simple. Indeed, the proof in the standard library is quite complicated. Luckily, there are much better ways to work out this result. The [Mathematical Components](https://github.com/math-comp/math-comp) library advocates for a different style of development based on boolean properties. There, `prime` is not an inductively defined predicate, but a function `nat -> bool` that checks whether its argument is prime. Because of this, we can prove such simple facts by computation:
```
From mathcomp Require Import ssreflect ssrbool ssrnat prime.
Lemma prime_3 : prime 3. Proof. reflexivity. Qed.
```
There is a bit of magic going on here: the library declares a coercion `is_true : bool -> Prop` that is automatically inserted whenever a boolean is used in a place where a proposition is expected. It is defined as follows:
```
Definition is_true (b : bool) : Prop := b = true.
```
Thus, what `prime_3` really is proving above is `prime 3 = true`, which is what makes that simple proof possible.
The library allows you to connect this boolean notion of what a prime number is to a more conventional one via a *reflection lemma*:
```
Lemma primeP p :
reflect (p > 1 /\ forall d, d %| p -> xpred2 1 p d) (prime p).
```
Unpacking notations and definitions, what this statement says is that `prime p` equals `true` if and only if `p > 1` and every `d` that divides `p` is equal to `1` or `p`. I am afraid it would be a lengthy detour to explain how this reflection lemma works exactly, but if you find this interesting I strongly encourage you to look up more about Mathematical Components.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is a proof that I think is quite understandable as one steps through it.
It stays at the level of number theory and doesn't unfold definitions that much. I put in some comments, don't know if it makes it more or less readable. But try to step through it in the IDE, if you care for it...
```
Require Import ZArith.
Require Import Znumtheory.
Inductive prime (p:Z) : Prop :=
prime_intro :
1 < p -> (forall n:Z, 1 <= n < p -> rel_prime n p) -> prime p.
Require Import Omega.
Theorem prime3 : prime 3.
Proof.
constructor.
omega. (* prove 1 < 3 *)
intros; constructor. (* prove rel_prime n 3 *)
exists n. omega. (* prove (1 | n) *)
exists 3. omega. (* prove (1 | 3) *)
(* our goal is now (x | 1), and we know (x | n) and (x | 3) *)
assert (Hn: n=1 \/ n=2) by omega; clear H. (* because 1 <= n < 3 *)
case Hn. (* consider cases n=1 and n=2 *)
- intros; subst; trivial. (* case n = 1: proves (x | 1) because we know (x | n) *)
- intros; subst. (* case n = 2: we know (x | n) and (x | 3) *)
assert (Hgcd: (x | Z.gcd 2 3)) by (apply Z.gcd_greatest; trivial).
(* Z.gcd_greatest: (x | 2) -> x | 3) -> (x | Z.gcd 2 3) *)
apply Hgcd. (* prove (x | 1), because Z.gcd 2 3 = 1 *)
Qed.
```
Upvotes: 1 <issue_comment>username_3: I have a variant of @username_2's proof.
```
Require Import ZArith.
Require Import Znumtheory.
Require Import Omega.
Theorem prime3 : prime 3.
Proof.
constructor.
- omega.
- intros.
assert (n = 1 \/ n = 2) as Ha by omega.
destruct Ha; subst n; apply Zgcd_is_gcd.
Qed.
```
Like @username_2's proof, we prove that `1 < 3` using `omega` and then prove that either `n=1` or `n=2` using `omega` again.
To prove `rel_prime 1 3` and `rel_prime 2 3` which are defined in terms of `Zis_gcd`, we apply `Zgcd_is_gcd`. This lemma states that computing the `gcd` is enough. This is trivial on concrete inputs like `(1,3)` and `(2,3)`.
**EDIT**: We can generalize this result, using only Gallina. We define a boolean function `is_prime` that we prove correct w.r.t. the inductive specification `prime`. I guess this can be done in a more elegant way, but I am confused with all the lemmas related to `Z`. Moreover, some of the definitions are opaque and cannot be used (at least directly) to define a computable function.
```
Require Import ZArith.
Require Import Znumtheory.
Require Import Omega.
Require Import Bool.
Require Import Recdef.
(** [for_all] checks that [f] is true for any integer between 1 and [n] *)
Function for_all (f:Z->bool) n {measure Z.to_nat n}:=
if n <=? 1 then true
else f (n-1) && for_all f (n-1).
Proof.
intros.
apply Z.leb_nle in teq.
apply Z2Nat.inj_lt. omega. omega. omega.
Defined.
Lemma for_all_spec : forall f n,
for_all f n = true -> forall k, 1 <= k < n -> f k = true.
Proof.
intros.
assert (0 <= n) by omega.
revert n H1 k H0 H.
apply (natlike_ind (fun n => forall k : Z, 1 <= k < n ->
for_all f n = true -> f k = true)); intros.
- omega.
- rewrite for_all_equation in H2.
destruct (Z.leb_spec0 (Z.succ x) 1).
+ omega.
+ replace (Z.succ x - 1) with x in H2 by omega. apply andb_true_iff in H2.
assert (k < x \/ k = x) by omega.
destruct H3.
* apply H0. omega. apply H2.
* subst k. apply H2.
Qed.
Definition is_prime (p:Z) :=
  (1 <? p) && for_all (fun k => Z.gcd k p =? 1) p.
Theorem is_prime_correct : forall z, is_prime z = true -> prime z.
Proof.
intros. unfold is_prime in H.
apply andb_true_iff in H. destruct H as (H & H0).
constructor.
- apply Z.ltb_lt. assumption.
- intros.
apply for_all_spec with (k:=n) in H0; try assumption.
unfold rel_prime. apply Z.eqb_eq in H0. rewrite <- H0.
apply Zgcd_is_gcd.
Qed.
```
The proof becomes nearly as simple as @Arthur's one.
```
Theorem prime113 : prime 113.
Proof.
apply is_prime_correct; reflexivity.
Qed.
```
Upvotes: 2 <issue_comment>username_2: Fun fact: @epoiner's answer can be used together with Ltac in a proof script for any prime number.
```
Theorem prime113 : prime 113.
Proof.
constructor.
- omega.
- intros n H;
repeat match goal with | H : 1 <= ?n < ?a |- _ =>
assert (Hn: n = a -1 \/ 1 <= n < a - 1) by omega;
clear H; destruct Hn as [Hn | H];
[subst n; apply Zgcd_is_gcd | simpl in H; try omega ]
end.
Qed.
```
However, the proof term gets unwieldy, and checking becomes slower and slower. This is why small-scale reflection (ssreflect), where computation is moved into the type checking, is probably preferable in the long run. It's hard to beat @username_1's proof:
`Proof. reflexivity. Qed.` :-) Both in terms of computation time, and memory-wise.
Upvotes: 1
|
2018/03/14
| 979 | 2,913 |
<issue_start>username_0: In one of my assignments I came across a weird implementation, and I was curious if it's a bug or the designed behavior.
In Python 3, division by `/` returns a floating point number, and `//` means integer division and should return an integer. I've discovered though that if either of the values is a float when doing integer division, it will return a float.
Example:
```
# These all work as expected
10 / 2
-> 5.0
11 / 2
-> 5.5
10 // 2
-> 5
11 // 2
-> 5
# Here things start to get weird
10.0 // 2
-> 5.0
10 // 2.0
-> 5.0
11.0 // 2
-> 5.0
```
Is this *supposed* to behave this way? If so, *why* does it behave this way?<issue_comment>username_1: From [PEP-238](https://www.python.org/dev/peps/pep-0238/), which introduced the new division (emphasis mine):
>
> Semantics of Floor Division
> ---------------------------
>
>
> Floor division will be implemented in all the Python numeric types,
> and will have the semantics of:
>
>
>
> ```py
> a // b == floor(a/b)
>
> ```
>
> except that the result type will be the common type into which *a* and
> *b* are coerced before the operation.
>
>
> **Specifically, if *a* and *b* are of the same type, `a//b` will be of that type too.** If the inputs are of different types, they are first
> coerced to a common type using the same rules used for all other
> arithmetic operators.
>
>
> In particular, if *a* and *b* are both ints or longs, the result has the
> same type and value as for classic division on these types (including
> the case of mixed input types; `int//long` and `long//int` will both
> return a long).
>
>
> **For floating point inputs, the result is a float.** For example:
>
>
>
> ```py
> 3.5//2.0 == 1.0
>
> ```
>
> For complex numbers, `//` raises an exception, since `floor()` of a
> complex number is not allowed.
>
>
> For user-defined classes and extension types, all semantics are up to
> the implementation of the class or type.
>
>
>
So yes, it is supposed to behave that way. *"`//` means integer division and should return an integer"* - not quite, it means **floor division** and should return something *equal to* an integer (you'd always expect `(a // b).is_integer()` to be true where either operand is a float).
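A quick demonstration of that invariant:
```
>>> 11.0 // 2
5.0
>>> (11.0 // 2).is_integer()
True
```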
Upvotes: 5 [selected_answer]<issue_comment>username_2: I hope this clarifies the situation:
>
> **/ Division**
>
>
> Divides left hand operand by right hand operand
>
>
>
> ```
> b / a = 2
>
> ```
>
> **// Floor Division**
>
>
> The division of operands where the result is the quotient in which the
> digits after the decimal point are removed. But if one of the operands
> is negative, the result is floored, i.e., rounded away from zero
> (towards negative infinity)
>
>
>
> ```
> 9//2 = 4 and 9.0//2.0 = 4.0, -11//3 = -4, -11.0//3 = -4.0
>
> ```
>
>
<https://www.tutorialspoint.com/python/python_basic_operators.htm>
Upvotes: -1
|
2018/03/14
| 862 | 2,630 |
<issue_start>username_0: I've been struggling to find how I could display an icon in a Cell in a ReactTable (from the library react-table). All I could find is that it accepts HTML symbols. There's a lot of symbols, but what I'm looking for is to show flags...
```
Cell: () => (
♥
)
```
I already tried to use for a test but I couldn't make it work.
Any ideas ?
|
2018/03/14
| 361 | 1,150 |
<issue_start>username_0: I am trying to read .dta files with pandas:
```
import pandas as pd
my_data = pd.read_stata('filename', encoding='utf-8')
```
the error message is:
```
ValueError: Unknown encoding. Only latin-1 and ascii supported.
```
Other encodings also didn't work, such as **gb18030** or **gb2312** for dealing with Chinese characters. If I remove the encoding parameter, the DataFrame will be all garbage values.<issue_comment>username_1: Looking at the source code of pandas (version 0.22.0), the supported encodings for read\_stata are ('ascii', 'us-ascii', 'latin-1', 'latin\_1', 'iso-8859-1', 'iso8859-1', '8859', 'cp819', 'latin', 'latin1', 'L1'). So you can only choose from this list.
Upvotes: 0 <issue_comment>username_2: Simply read the original data with the default encoding, then convert it to the expected encoding! Suppose the column having garbled text is `column1`
```
import pandas as pd
dta = pd.read_stata('filename.dta')
print(dta['column1'][0].encode('latin-1').decode('gb18030'))
```
The `print` result will show normal Chinese characters; `gb2312` can also work.
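If the garbled text affects the whole column rather than a single cell, the same round trip can be vectorized; a sketch (assuming every value in the column is a non-null string):

```
import pandas as pd

dta = pd.read_stata('filename.dta')
# re-interpret the mis-decoded bytes for the entire column at once
dta['column1'] = dta['column1'].str.encode('latin-1').str.decode('gb18030')
```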
Upvotes: 3 [selected_answer]
|
2018/03/14
| 597 | 1,661 |
<issue_start>username_0: ```
CREATE TABLE `Students` (
`id` int(11) NOT NULL,
  `name` varchar(500) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `Students` (`id`, `name`) VALUES
(1, 'Student 1'),
(2, 'Student 2'),
(3, 'Student 3');
CREATE TABLE `lessons` (
`id` int(11) NOT NULL,
  `name` varchar(500) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `lessons` (`id`, `name`) VALUES
(1, 'Lesson 1'),
(2, 'Lesson 2'),
(3, 'Lesson 3');
CREATE TABLE `completed` (
`student` int(11) NOT NULL,
  `lesson` int(11) NOT NULL,
  `completed` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
INSERT INTO `completed` (`student`, `lesson`, `completed`) VALUES
(1, 2, 1),
(3, 3, 1),
(2, 1, 1);
```
We are adding students who completed lessons to the "completed" table. We need to get, for each student, the lessons whose IDs do not exist in the "completed" table.
Example output is:
```
1, 1
1, 3
2, 2
2, 3
3, 1
3, 2
```
thank you
|
2018/03/14
| 331 | 1,099 |
<issue_start>username_0: How do I check for an empty JSON object in Javascript? I thought the key was
```
var isEmpty = (response || []).length === 0;
```
but this isn't it. In my JS console, I have tried
```
jsonresp = {}
{}
jsonresp.length
undefined
```
What's driving this is I have a Ruby on Rails controller that is returning the following
```
render json: job_id.eql?(cur_job_id) ? {} : json_obj
```
The case where the controller returns the "{}" is where I'm having trouble: on the JS side I can't recognize whether it is empty or not.<issue_comment>username_1: In short:
`const isEmpty = !Object.keys(jsonresp).length`
Upvotes: 0 <issue_comment>username_2: You can use `Object.keys()` function to get keys of the object as an array and then check the array to see whether it is empty.
```js
var obj = {};
var arr = Object.keys(obj);
console.log(arr);
console.log("Is Empty : " + (arr.length == 0));
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: You can also try:
```
if (typeof jsonresp === 'undefined' || jsonresp === null) {
// some logic here
}
```
Upvotes: 1
|
2018/03/14
| 327 | 1,059 |
<issue_start>username_0: I have the following dataframe using pandas.
```
Names
Jason
<NAME>
John
<NAME>
Nate
Dave
```
I want to get the unique names. In this case, the output I am looking for is
```
Nate
Dave
```
I have the following code but it does not print what I am looking for.
```
df = pd.DataFrame(df.Names.unique())
print(df)
```
What should I fix? Thanks.<issue_comment>username_1: You can get the first names by splitting on whitespace, then dropping the duplicates.
```
df['FirstNames'] = df['Names'].str.split().str[0]
unique = df['FirstNames'].drop_duplicates(keep=False).values.tolist()
# ['Nate', 'Dave']
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If what you're actually trying to do is remove duplicate first names, then you first need to isolate the first names, then use [`drop_duplicates()`](http://pandas.pydata.org/pandas-docs/version/0.17/generated/pandas.DataFrame.drop_duplicates.html) to remove the duplicated rows:
```
df.Names.apply(lambda n: n.split(' ', 1)[0]).drop_duplicates(keep=False)
```
Upvotes: 1
|
2018/03/14
| 804 | 2,618 |
<issue_start>username_0: We have a dedicated server with Proxmox on OVH. The idea is to connect the containers locally but also through the internet. So far I have 2 containers.
I added a bridge network for local IP and that is working since I am able to ping the containers from each other.
Also added `bind-address=192.168.1.3` to my.cnf.
1 container is running apache + php 7.2 (192.168.1.3)
The other container is running MySQL. (192.168.1.2)
---
Problem
-------
My MySQL keeps saying `SQLSTATE[HY000] [2002] Connection timed out`
Here is my php code:
```
<?php
/**
* Configuration for database connection
*
*/
$host = "192.168.1.2";
$username = "root";
$password = "<PASSWORD>";
$dbname = "test";
$dsn = "mysql:host=$host;dbname=$dbname";
$options = array(
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION
);
try
{
$connection = new PDO("mysql:host=$host", $username, $password, $options);
$sql = file_get_contents("data/init.sql");
$connection->exec($sql);
echo "Database and table users created successfully.";
}
catch(PDOException $error)
{
echo $sql . "
" . $error->getMessage();
}
```
From my understanding the code is correct so it must be something with my mysql configuration.
I'm sure it is something really simple but I'm losing too much time with this.<issue_comment>username_1: Try to `telnet 192.168.1.2 3306` from the Apache + PHP container. Can you connect?
Ensure the listening port for MySQL is 3306; if it is different, adjust the PHP code accordingly. Also ensure that `iptables` is not blocking any incoming connections, and that the `root` user (and any other users you need) has permission to connect from other hosts.
Also, please check when making any config changes to MySQL, that you restart the service.
Upvotes: 1 <issue_comment>username_2: After searching around for a while, I found out that I had to create a new user and grant permissions to it:
```
mysql> CREATE USER 'monty'@'localhost' IDENTIFIED BY '<PASSWORD>';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'localhost'
-> WITH GRANT OPTION;
mysql> CREATE USER 'monty'@'%' IDENTIFIED BY '<PASSWORD>';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'monty'@'%'
-> WITH GRANT OPTION;
```
Also on my server config `/etc/mysql/mariadb.conf.d/50-server.conf` I had to **comment** the
```
bind-address = 192.168.1.3
```
into
```
#bind-address = 192.168.1.3
```
Then restart the MySQL server and change the MySQL details in my PHP code to username 'monty' and password '<PASSWORD>'.
Upvotes: 1 [selected_answer]
|
2018/03/14
| 675 | 1,686 |
<issue_start>username_0: I have Office 365 IDs on message divs, such as
```
Message Subject
```
But when I call it in functions, it gives `jquery-2.1.1.js:2 Uncaught Error: Syntax error, unrecognized expression:`
This **works** on devtools console.
```
$('#AQMkADFGFGDFGDFG<KEY>ADI4LTAwAi0wMAoALgAAA1IXzflHRQlLlY3LIdjzH3MBADg_s4AQY3NEqDFmBjvfdZIAAAIBDwAAAA')
```
This **does not work** on devtools console. I want to fix this because office 365 messages ids usually have double equal in the end `==`.
```
$('#AQMkADFGFGDFGDFG<KEY>AAAIBDwAAAA==')
```<issue_comment>username_1: 1. If the structure is not known to change, save yourself a headache and select by structure instead of ID.
2. If you must select by ID, either use an attribute selector:
```
$('[id="<KEY>"]')
```
or [escape the equals signs](https://stackoverflow.com/questions/5825948/jquery-selectors-and-backslashes/5826146#5826146):
```
$('#AQMkADFGFGDFGDFGwYWIxLTU1ADI4LTAwAi0wMAoALgAAA1<KEY>_s4AQY3NEqDFmBjvfdZIAAAIBDwAAAA\\=\\=')
```
Upvotes: 3 <issue_comment>username_2: The root cause is **jQuery uses CSS syntax for selecting elements**.
You can use one regex expression to escape with double backslashes for an ID that has characters used in CSS notation.
```js
console.log($("#"+"AQMkADAwATM0MDAAMS0wYWIxLTU1ADI4LTAwAi0wMAoALgAAA1IXzflHRQlLlY3LIdjzH3MBADg_s4AQY3NEqDFmBjvfdZIAAAIBDwAAAA==".replace( /(:|\.|\[|\]|,|=|@)/g, "\\$1" )).text())
```
```html
Message Subject
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 769 | 2,499 |
<issue_start>username_0: so I have a string:
"I am a dentist in a health organization."
I would like to extract the first word that contains more than three letters and put it into a new column in a datatable, i.e. in this case "dentist". I suppose I have to use strsplit, but I have no idea on how to proceed.
Any help is appreciated
Regards,
Antje<issue_comment>username_1: Try this easy solution:
```
string<-"I am a dentist in a health organization."
words<-unlist(strsplit(string," "))
words[which.max(nchar(words)>3)]
[1] "dentist"
```
To apply this solution to a dataframe adding a column:
```
df<-data.frame(string=c("I am a dentist in a health organization.","a dentist in a health organization.","health organization."))
f<-function(string,n)
{
words<-unlist(strsplit(string," "))
return(words[which.max(nchar(words)>n)])
}
df$word<-unlist(lapply(as.character(df$string),f,n=3))
df
string word
1 I am a dentist in a health organization. dentist
2 a dentist in a health organization. dentist
3 health organization. health
```
Upvotes: 2 <issue_comment>username_2: Match, from the beginning, `"^"`, the shortest sequence of characters, `".*?"`, followed by a word of 4 or more characters, `"\\b(\\w{4,}\\b"` followed by the remaining text, `".*"`, and replace all that with the first (and only) capture group (i.e. the match to the part of the regular expression with parentheses).
```
# input data frame
DF <- data.frame(x = c("I am a dentist in a health organization.",
"I am a dentist in a health organization."))
pat <- "^.*?\\b(\\w{4,})\\b.*"
transform(DF, longword = sub(pat, "\\1", x))
```
giving:
```
x longword
1 I am a dentist in a health organization. dentist
2 I am a dentist in a health organization. dentist
```
Upvotes: 1 <issue_comment>username_3: 1. `\w{4,}` matches any word character (equal to [a-zA-Z0-9\_])
2. `{4,}` Quantifier — Matches between 4 and unlimited times, as many times as possible, giving back as needed (greedy)
3. Non-capturing group `(?:\\w+)?`
4. `?` Quantifier — Matches between zero and one times, as many times as possible, giving back as needed (greedy)
5. `+` Quantifier — Matches between one and unlimited times, as many times as possible, giving back as needed (greedy)
`strng <- "I am a dentist in a health organization."`
`stringr::str_extract(strng,"\\w{4,}(?:\\w+)?")`
Result
`[1] "dentist"`
Upvotes: 1
|
2018/03/14
| 986 | 3,524 |
<issue_start>username_0: I have a numpy array X that has 3 columns and looks like the following:
```
array([[ 3791, 2629, 0],
[ 1198760, 113989, 0],
[ 4120665, 0, 1],
...
```
The first 2 columns are continuous values and the last column is binary (0,1). I would like to apply the StandardScaler class only to the first 2 columns. I am currently doing this the following way:
```
scaler = StandardScaler()
X_subset = scaler.fit_transform(X[:,[0,1]])
X_last_column = X[:, 2]
X_std = np.concatenate((X_subset, X_last_column[:, np.newaxis]), axis=1)
```
The output of X\_std is then:
```
array([[-0.34141308, -0.18316715, 0. ],
[-0.22171671, -0.17606473, 0. ],
[ 0.07096154, -0.18333483, 1. ],
...,
```
Is there a way to perform this all in one step? I would like to include this as part of a pipeline where it will scale the first 2 columns and leave the last binary column as is.<issue_comment>username_1: I can't think of another way to compact your code more, but you can definitely use your transformation in a Pipeline. You have to define a class extending StandardScaler that only performs the transformations on the columns passed as arguments, keeping the others intact. See the code in this [example](http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html); you would have to program something similar to ItemSelector.
Upvotes: 1 <issue_comment>username_2: Inspired by username_1's recommendation to extend StandardScaler, I came up with the below. It's not super efficient or robust (e.g., you'd need to update the inverse\_transform function), but hopefully it's a helpful starting point:
```
class StandardScalerSelect(StandardScaler):
def __init__(self, copy=True, with_mean=True, with_std=True, cols=None):
self.cols = cols
super().__init__(copy, with_mean, with_std)
def transform(self, X):
not_transformed_ix = np.isin(np.array(X.columns), np.array(self.cols), invert=True)
# Still transforms all, just for convenience. For larger datasets
# will want to modify self.mean_ and self.scale_ so the dimensions match,
# and then just transform the subset
trans = super().transform(X)
trans[:,not_transformed_ix] = np.array(X.iloc[:,not_transformed_ix])
return trans
```
Upvotes: 1 <issue_comment>username_3: I ended up using a class to select columns like this:
```
class ItemSelector(BaseEstimator, TransformerMixin):
def __init__(self, columns):
self.columns = columns
def fit(self, x, y=None):
return self
def transform(self, data_array):
return data_array[:, self.columns]
```
I then used FeatureUnion in my pipeline as follows to fit StandardScaler only to continuous variables:
```
FeatureUnion(
transformer_list=[
('continous', Pipeline([ # Scale the first 2 numeric columns
('selector', ItemSelector(columns=[0, 1])),
('scaler', StandardScaler())
])),
('categorical', Pipeline([ # Leave the last binary column as is
('selector', ItemSelector(columns=[2]))
]))
]
)
```
This worked well for me.
Upvotes: 3 <issue_comment>username_4: Since scikit-learn version 0.20 you can use the class [sklearn.compose.ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) exactly for this purpose.
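Applied to this question, that would look roughly like the sketch below (columns 0 and 1 are standardized, the binary column is passed through untouched; the name `'scale'` is arbitrary):

```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

ct = ColumnTransformer(
    transformers=[('scale', StandardScaler(), [0, 1])],
    remainder='passthrough'   # leave column 2 (the binary flag) as is
)
X_std = ct.fit_transform(X)
```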
Upvotes: 4 [selected_answer]
|
2018/03/14
| 1,858 | 7,877 |
<issue_start>username_0: I have flicked through a few popular Event Sourcing frameworks written in a variety of common languages. I have got the impression that all of them affect the domain models to a really high degree. As far as I understand, ES is just an infrastructure concern - a way of persisting aggregate state. Of course, it facilitates message-driven inter-context integration, but from the core domain's point of view it is negligible. I consider commands and events to be part of the domain itself, so it looks perfectly fine that an aggregate creates events (but does not publish them) or handles commands.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework. Events must inherit from some base class. Aggregates, at least, are supposed to implement foreign interfaces. I wonder if domain models should even be aware of using the ES approach within an application. In my opinion, even the necessity of providing `apply()` methods indicates that another layer shapes our domain.
How do you approach this issue in your projects?<issue_comment>username_1: The real answer to this is that DDD is overrated. It is not true that you have to have one model to rule them all. You may have different views on the state of your world, depending on your current needs. One part of the application has one view, another part a completely different view.
To put it another way, your model is not "what is", but "what happened so far". The *actual* data model of your application is the event stream itself. Everything else you derive from there.
Upvotes: 0 <issue_comment>username_2: >
> As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state.
>
>
>
No, events are really core to the domain model.
Technically, you could store *diffs* in a domain agnostic way. For example, you could look at an aggregate and say "here is the representation before the change, here is the representation after, we'll compute the difference and store that.
The difference between patches and events is the fact that you switch from a domain agnostic spelling to a domain specific spelling. Doing that is normally going to require being intimate with the domain model itself.
>
> The problem is that all of DDD building blocks tend to be polluted by ES framework.
>
>
>
Yup, there's a lot of crap framework in the examples you find in the wild. [Sturgeon's Law at work](https://en.wikipedia.org/wiki/Sturgeon%27s_law).
Thinking about the domain model from a functional perspective can help a lot. At its core, the most general form of the model is a function that accepts current state as an input, and returns a list of events as the output.
```
List<Event> change(State current)
```
From there, if you want to save current state, you just wrap this function in something that knows how to do the fold
```
State current = ...
List<Event> events = change(current)
State updated = State.fold(current, events)
```
Similarly, you can get current state by folding over the previous history
```
List<Event> savedHistory = ...
State current = State.reduce(savedHistory)
List<Event> events = change(current)
State updated = State.fold(current, events)
```
Another way of saying the same thing; the "events" are already there in your (not event sourced) domain model -- they are just *implicit*. *If* there is business value in tracking those events, then you should replace the implementation of your domain model with one that makes those events explicit. Then you can decide which persisted representation to use *independent of the domain model*.
>
> Core of my problem is that domain Event inherits from framework Event and aggregate implements some foreign interface (from framework). How to avoid this?
>
>
>
There are a couple of possibilities.
1) Roll your own: take a close look at the framework -- what is it *really* buying you? If your answer is "not much", then maybe you can do without it.
From what I've seen, the "win" of these frameworks tends to be in taking a heterogeneous collection of events and managing the routing for you. That's not nothing -- but it's a bit magic, and you might be happier having that code explicit, rather than relying on implicit framework magic
2) Suck it up: if the framework is unobtrusive, then it may be more practical to accept the tradeoffs that it imposes and live with them. To some degree, event frameworks are like object relational mappers or databases; sure, in *theory* you should be able to change them out freely. In practice? how often do you derive benefit from the investment in that flexibility
3) Interfaces: if you squint a little bit, you can see that your domain behaviors don't usually depend on in memory representations, but instead on the algebra of the domain itself.
For example, in the domain model, we deposit `Money` into an `Account` updating its `Balance`. We don't typically care whether those are integers, or longs, or floats, or JSON documents. We can satisfy the model with any implementation that satisfies the constraints of the algebra.
So you can use the framework to provide the implementation (which also happens to have all the hooks the framework needs); the behavior just interacts with the interface it defined itself.
In a strongly typed implementation, this can get *really twisty*. In Java, for instance, if you want the strong type checks you need to be comfortable with the magic of generics and type erasure.
Upvotes: 2 <issue_comment>username_3: *My answer applies only when CQRS is involved (write and read models are split and they communicate using domain events).*
>
> As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state
>
>
>
Event sourcing is indeed an infrastructure concern, a kind of repository but event-based Aggregates are not. I consider them to be an architectural style, different from the classical style.
So, the fact that an Aggregate, in reaction to a command, generates zero or more domain events that are applied onto itself in order to build its internal (private) state used to decide what events to generate in the future is just a different mode of thinking about and designing an Aggregate. This is a perfectly valid style, along with the classical style (the one not using events but only objects) or the functional programming style.
Event sourcing just means that every time a command reaches an Aggregate, its entire internal state is rebuilt instead of being loaded from flat persistence. Of course there are other huge advantages (!) but they do not affect the design of an Aggregate.
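To make the rebuilding idea concrete, here is a tiny framework-free sketch (Python used only for brevity; all names are illustrative):

```
class Account:
    def __init__(self, history=()):
        self.balance = 0
        for event in history:          # rebuild internal state by replaying
            self.apply(event)

    def deposit(self, amount):         # command: validates, emits an event
        if amount <= 0:
            raise ValueError("amount must be positive")
        event = {"type": "Deposited", "amount": amount}
        self.apply(event)              # applied onto itself to update state
        return [event]                 # returned to the caller, not published

    def apply(self, event):            # the apply() method discussed below
        if event["type"] == "Deposited":
            self.balance += event["amount"]
```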
>
> ... but not publishes them ...
>
>
>
I like the frameworks that permit us to just `return` (or better `yield` - Aggregate's command methods are just `generators`!) the events.
>
> Events must inherit from some base class
>
>
>
It's sad that some frameworks require that, but it is not necessary. In general, a framework needs one means of detecting an event class. However, it can be implemented to detect an event by other means instead of using marker interfaces. For example, the client (as in YOU) could provide a filter method that rejects non-event classes.
However, there is one thing that I couldn't avoid in my framework (yes, I know, I'm guilty, I have one): the `Command` interface with only one method: `getAggregateId`.
>
> Aggregates at least are supposed to implement foreign interfaces.
>
>
>
Again, like with events, this is not a necessity. A framework could be given a custom client `event-applier-on-aggregates` function, or a convention can be used (i.e. all event-applier methods have the form apply*EventClassNameOrType*).
>
> I wonder if domain models should be even aware of using ES approach within an application
>
>
>
Of ES, no; but of being event-based, yes, so the `apply` method must still exist.
Upvotes: 3 [selected_answer]
|
2018/03/14
| 419 | 1,134 |
<issue_start>username_0: I am trying to add the values in my tuple to get the total figure with the below code
```
Black = (("Hans","100"),("Frank","20.5"))
for v in Black:
print(v[1])
print (v[1]+v[1])
```
But when I do it this way it concatenates the values like 100100 instead of giving the total of the values<issue_comment>username_1: You have a few options:
1) Take the quotes off the values in the tuple:
```
Black = (("Hans",100),("Frank",20.5))
for v in Black:
print(v[1])
print (v[1]+v[1])
```
2) Convert the strings to float during the loop:
```
Black = (("Hans","100"),("Frank","20.5"))
for v in Black:
print(v[1])
print (float(v[1])+float(v[1]))
```
3) If your goal is to sum all the [1] index values in your list of lists:
```
Black = (("Hans",100),("Frank",20.5))
sum([x[1] for x in Black])
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is one way:
```
Black = (("Hans","100"),("Frank","20.5"))
res = sum(float(num) for name, num in Black)
# 120.5
```
Alternative method using `zip`:
```
res = sum(map(float, list(zip(*Black))[1]))
# 120.5
```
Upvotes: 0
|
2018/03/14
| 412 | 1,239 |
<issue_start>username_0: I want to call the isAuthenticated() method within the logout function.
(I'm not sure what you call this layout; it's like a variable object containing functions)
```
var svc = {
logout: function () {
isAuthenticated() // Call isAuthenticated function here
},
login: function () {},
isAuthenticated: function () {}
}
```
Simply calling `isAuthenticated()` does not work.
|
2018/03/14
| 492 | 1,309 |
<issue_start>username_0: I have the following data set:
```
data =
{
a:[1,2,3]
b:[3,4,5]
c:[5,6,7]
...
}
```
And I am trying to find the smallest number at index 1 across all the lists.
The only way I could imagine doing it would be this:
```
num = []
index_number = 1
for var in data:
num.append(data.get(var)[index_number])
return min(num)
```
But I think this is not a very good solution.
Also I will have to find the key that corresponds to the value that I just found.
Is there a good solution that I am not aware of?<issue_comment>username_1: ```
data = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': [5, 6, 7]}
index_number = 1
min_key = min(data, key=lambda key: data[key][index_number])
```
this tells Python you want the key of the `data` dict whose value at index `index_number` is smallest; that element of each key's list is used as the comparison key when finding the min
Upvotes: 2 <issue_comment>username_2: A simple comprehension may do the trick:
```
>>> data = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': [5, 6, 7]}
>>> index_number = 1
>>> min(((k, v[1]) for k, v in data.items()), key=lambda x: x[1])
('a', 2)
```
If you need only the minimum value, you may use a simpler approach
```
>>> data = {'a': [1, 2, 3], 'b': [3, 4, 5], 'c': [5, 6, 7]}
>>> index = 1
>>> min(v[index] for v in data.values())
2
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 928 | 2,040 |
<issue_start>username_0: I have a multi-hierarchical pandas dataframe shown below. How, for a given attribute, attr ('rh', 'T', 'V'), can I set certain values (say values > 0.5) to NaN over the ***entire set*** of pLevs? I have seen answers on how to set a specific column (e.g., df['rh', 50]) but have not seen how to select the entire set.
>
>
> ```
> attr rh T V
> pLev 50 75 100 50 75 100 50 75 100
> refIdx
> 0 0.225026 0.013868 0.306472 0.144581 0.379578 0.760685 0.686463 0.476179 0.185635
> 1 0.496020 0.956295 0.471268 0.492284 0.836456 0.852873 0.088977 0.090494 0.604290
> 2 0.898723 0.733030 0.175646 0.841776 0.517127 0.685937 0.094648 0.857104 0.135651
> 3 0.136525 0.443102 0.759630 0.148536 0.426558 0.731955 0.523390 0.965385 0.094153
>
> ```
>
>
To facilitate assistance, I am including code to create the dataframe here:
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.random((4,9)))
df.columns = pd.MultiIndex.from_product([['rh','T','V'],[50,75,100]])
df.columns.names = ['attr', 'pLev']
df.index.name = 'refIdx'
```
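For what it's worth, a minimal sketch of one way to do this (using `DataFrame.mask`; selecting the top column level with `df[attr]` grabs every `pLev` under it):

```
attr = 'rh'                      # or 'T' / 'V'
sub = df[attr]                   # all pLev columns under the chosen attr
df[attr] = sub.mask(sub > 0.5)   # cells > 0.5 become NaN, shape unchanged
```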
|
2018/03/14
| 931 | 3,140 |
<issue_start>username_0: I am going crazy trying to achieve this very simple task.
I need to set an Output Variable in an Azure CLI task on Visual Studio Team Services, because next task in the Release definition will be executed according to the value of this variable.
I wrote this simple code
```
call az extension add --name azure-cli-iot-ext
call echo ##vso[task.setvariable variable=iotEdgeExists;]$(az iot hub query -n $(iotHub) -q "select * from devices.modules where devices.deviceId ='$(iotEdge)'")
```
which works, but not as expected; in fact, when I read the Output Variable in the next Azure CLI task and I try to print it on the screen I get the command string instead of the output...
```
call echo"$(az iot hub query -n -q "select \* from devices.modules where devices.deviceId =''")"
```
What am I doing wrong?<issue_comment>username_1: Refer to this code below:
```
call {your command}>tmpFile1
set /p myvar= < tmpFile1
echo "##vso[task.setvariable variable=testvar;]%myvar%"
```
or
```
FOR /F "tokens=* USEBACKQ" %%F IN (`{your command}`) DO (
SET var=%%F
)
echo "##vso[task.setvariable variable=testvar;]%var%"
```
Mechaflash's answer in [How to set commands output as a variable in a batch file](https://stackoverflow.com/questions/6359820/how-to-set-commands-output-as-a-variable-in-a-batch-file)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Inspired by the answer above, but with some variation.
Works in an **Azure CLI** task on a Hosted Ubuntu agent in Microsoft DevOps as of July 2019.
This example runs an az command to fetch the full resource name of a storage account and sets it in the variable *\_StorageAccountNameVar* for use in another pipeline task.
```
myvar=`az storage account list -g MyResourceGroup --query "[?contains(name, 'config')].name" -o tsv`
echo $myvar
echo "##vso[task.setvariable variable=_StorageAccountNameVar;]$myvar"
```
Upvotes: 1 <issue_comment>username_3: If using **Azure CLI version 2.**\* , you can use a powershell command rather than batch script wizardry. Microsoft's documentation found [here](https://github.com/microsoft/azure-pipelines-tasks/blob/master/docs/authoring/commands.md)
For example, if you needed an access token to update an Azure database, it would look like so:
```
$token= & az account get-access-token --resource=https://database.windows.net --query accessToken
Write-Output("##vso[task.setvariable variable=sqlToken;]$token")
```
Don't forget to login first if testing locally:
```
az login
```
[Install Azure cli here](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli-windows)
Upvotes: 4 <issue_comment>username_4: I've got some trouble (error code 255) with
```
FOR /F “tokens=* USEBACKQ” %%F IN (`az account get-access-token --resource=https://database.windows.net/ -query accessToken`) DO (
SET var=%%F
)
echo ##vso[task.setvariable variable=sqlToken;]%var%
```
So I used this (and that worked for me):
```
for /f "tokens=1 USEBACKQ" %%F in (`az account get-access-token --resource=https://database.windows.net/ --query accessToken`) do echo ##vso[task.setvariable variable=sqltoken;]%%F
```
Upvotes: 0
|
2018/03/14
| 565 | 1,892 |
<issue_start>username_0: I have 3 tables, user, artist and a join table.
I'd like to find, for a particular user, the ordering of the rest of the user table by the number of artists they have in common in the join table, or potentially just the `n` other users who have the most in common with them.
For example in the table:
```
userID | artistID
-------|----------
1 | 1
1 | 2
2 | 1
2 | 2
3 | 1
```
I'd want to get that the ordering for user1 would be (2,3) because user2 shares both artist1 and artist2 with user1, whereas user3 only shares artist1.
Is there a way to get this from SQL?
Thanks<issue_comment>username_1: You can do this with a self-join and aggregation:
```
select ua.userID, count(ua1.artistID) as numInCommonWithUser1
from userartists ua left join
userartists ua1
on ua.artistID = ua1.artistID and ua1.userID = 1
group by ua.userID
order by numInCommonWithUser1 desc;
```
Upvotes: 1 <issue_comment>username_2: Assuming that you always know the user ID you want to check agaist, you can also do the following:
```
SELECT user, count(*) as in_common
FROM user_artist
WHERE
user<>1 AND
artist IN (SELECT artist FROM user_artist WHERE user=1)
GROUP BY user
ORDER BY in_common DESC;
```
This avoids joining which ***might*** have better performance on a large table. Your example is sqlfiddle [here](http://sqlfiddle.com/#!9/b46cff4/5/0)
Upvotes: 2 [selected_answer]<issue_comment>username_3: If you know the user ID you are going to check against, then this query will complete your requirement and also perform well.
```
SELECT ua1.user, count(*) as all_Common
FROM user_artist ua1
WHERE
(
Select count(*)
From user_artist ua2
Where ua2.user=1
AND ua2.artist=ua1.artist
)>0
AND ua1.user <> 1
GROUP BY ua1.user
ORDER BY all_Common DESC;
```
Let me know if you have any questions!
Upvotes: 0
|
2018/03/14
| 1,860 | 6,527 |
<issue_start>username_0: In a project with [Spring Boot 2.0.0.RELEASE](https://docs.spring.io/spring-boot/docs/2.0.0.RELEASE/reference/htmlsingle/), when I start the application the first time, when the database tables are being created, I'm getting the following warning message:
```
Hibernate: alter table if exists bpermission drop constraint if exists UK_qhp5om4s0bcb6j0j8pgcwitke
2018-03-14 11:32:03.833 WARN 15999 --- [ restartedMain] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Warning Code: 0, SQLState: 00000
2018-03-14 11:32:03.833 WARN 15999 --- [ restartedMain] o.h.engine.jdbc.spi.SqlExceptionHelper : constraint "uk_qhp5om4s0bcb6j0j8pgcwitke" of relation "bpermission" does not exist, skipping
```
Notice the constraint name -> `uk_qhp5om4s0bcb6j0j8pgcwitke`
...and below I'm seeing this being logged:
```
Hibernate: alter table if exists bpermission add constraint UK_qhp5om4s0bcb6j0j8pgcwitke unique (label)
Hibernate: alter table if exists bpermission drop constraint if exists UK_ow4uw3orjjykeq869spvqtv6u
```
From the previous message we can see Hibernate is adding the constraint `UK_qhp5om4s0bcb6j0j8pgcwitke`, the same as the one shown in the warning but the first letter is uppercase. This is related to the `unique` constraint in the `label` property (see class below).
The entities (possibly) involved in this warning are:
**BPermission**
```
@Data
@NoArgsConstructor(force = true)
@RequiredArgsConstructor
@EqualsAndHashCode(callSuper = true, exclude = "roles")
@ToString(callSuper = true, exclude = "roles")
@Entity
public class BPermission extends GmsEntity {
@NotNull()
@NotBlank()
@Size(max = 255)
@Pattern(regexp = "someDefinedRegexp")
@Column(unique = true, nullable = false, length = 255)
private final String name;
@NotNull()
@NotBlank()
@Size(max = 255)
@Column(unique = true, nullable = false, length = 255)
private final String label;
@ManyToMany(mappedBy = "permissions")
private Set roles;
}
```
`BPermission` is related (in case this info helps in any way) to
**BRole**
```
@Data
@NoArgsConstructor(force = true)
@RequiredArgsConstructor
@EqualsAndHashCode(callSuper = true, exclude = "permissions")
@ToString(callSuper = true, exclude = {"description", "permissions"})
@Entity
public class BRole extends GmsEntity{
@NotNull()
@NotBlank()
@Size(max = 255)
@Pattern(regexp = "someDefinedRegexp"))
@Column(unique = true, nullable = false, length = 255)
private final String label;
@Size(max = 10485760)
@Column(length = 10485760)
private String description;
private Boolean enabled = false;
@ManyToMany
@JoinTable(
name = "brole_bpermission",
joinColumns = @JoinColumn(name = "brole_id"),
inverseJoinColumns = @JoinColumn(name = "bpermission_id")
)
private Set permissions;
public void addPermission(BPermission... p) {
// code for adding permissions
}
public void removePermission(BPermission... p) {
// code for removing permissions
}
public void removeAllPermissions() {
// code for removing permissions
    }
}
```
They are mapped to a PostgreSQL 9.5.11 database as follows:
[](https://i.stack.imgur.com/DjpK6.png)
The related spring configurations are:
```
spring.datasource.url = jdbc:postgresql://127.0.0.1/mydbname
spring.datasource.username = postgres
spring.datasource.password = <PASSWORD>
spring.datasource.driver-class-name = org.postgresql.Driver
spring.jpa.hibernate.ddl-auto = update
spring.jpa.properties.hibernate.jdbc.lob.non_contextual_creation = true
spring.jpa.open-in-view = false
```
I'm getting the mentioned warning message for all entities with properties which are annotated with [`@Column(unique = true)`](https://docs.oracle.com/javaee/5/api/javax/persistence/Column.html)
**Question**
* Why is this warning being thrown? Maybe... a bug?
* How can I get rid of it?
---
Of course, a warning sometimes is not bad at all, but I feel here this is either unnecessary or it is indicating that "something should be done differently", despite [SQLCode 0000](https://www.postgresql.org/docs/9.5/static/errcodes-appendix.html) meaning "successful\_completion".
PS: I'm using [Lombok](https://projectlombok.org).<issue_comment>username_1: This looks like a bug.
I'd recommend you create the schema via higher-level migration tools like `flywaydb` and let Hibernate only validate the generated schema. It is integrated with Spring Boot and is very easy to set up; see [the documentation](https://docs.spring.io/spring-boot/docs/current/reference/html/howto-database-initialization.html#howto-execute-flyway-database-migrations-on-startup) and [examples](https://github.com/spring-projects/spring-boot/tree/v2.0.0.RELEASE/spring-boot-samples/spring-boot-sample-flyway).
The benefit is that you have full control on the schema, you don't have unexpected schema changes while upgrading hibernate.
Usually automatic schema generation is used only during development, not in production. You can find more details on why this matters [here](https://flywaydb.org/getstarted/why).
With such a setup you might let Hibernate generate the schema only in development mode, and Flyway will take responsibility for the rest of the cases.
Upvotes: 2 <issue_comment>username_2: Spring Boot 2.0 (Hibernate 5?) apparently uses `DROP_RECREATE_QUIETLY` as [unique constraint update strategy](https://github.com/hibernate/hibernate-orm/blob/5.3.0.Final/hibernate-core/src/main/java/org/hibernate/tool/hbm2ddl/UniqueConstraintSchemaUpdateStrategy.java) which is really wrong as a default option, because simply what it does each time you start the app is removing the unique index and creating it again. If you work on databases with some (a lot of?) data I can imagine how slow will be the start of everything with this option.
In such a scenario, when you start on an empty database, the operation of removing the index generates the warning you can see in the logs. When you start again, the warning disappears, but it silently performs the expensive operation of recreating the index.
To disable this you need to switch the strategy back to `RECREATE_QUIETLY` with the following params:
```
# for plain hibernate
hibernate.schema_update.unique_constraint_strategy=RECREATE_QUIETLY
# for spring data
spring.jpa.properties.hibernate.schema_update.unique_constraint_strategy=RECREATE_QUIETLY
```
Upvotes: 4 [selected_answer]
|
2018/03/14
| 1,217 | 4,497 |
<issue_start>username_0: I am having issues uploading new code to an Arduino Leonardo board. This happens after uploading previous buggy code (e.g., array out of range indexing error). Once I fixed the buggy code, I can upload it to another hardware unit, but not the same unit that previously had the buggy code.
I am using [Microduino](http://wiki.microduinoinc.com/Microduino-Module_CoreUSB) and I have tried resetting the hardware (by connecting GND to RST pin), yet I still cannot upload the new code on it. How can I fix this?
See the verbose output log below:
```
avrdude: Version 6.3, compiled on Jan 17 2017 at 12:00:53
Copyright (c) 2000-2005 <NAME>, http://www.bdmicro.com/
Copyright (c) 2007-2014 <NAME>
System wide configuration file is "C:\Program Files (x86)\Arduino IDE for Microduino\hardware\tools\avr/etc/avrdude.conf"
Using Port : COM5
Using Programmer : avr109
Overriding Baud Rate : 57600
AVR Part : ATmega32U4
Chip Erase delay : 9000 us
PAGEL : PD7
BS2 : PA0
RESET disposition : dedicated
RETRY pulse : SCK
serial program mode : yes
parallel program mode : yes
Timeout : 200
StabDelay : 100
CmdexeDelay : 25
SyncLoops : 32
ByteDelay : 0
PollIndex : 3
PollValue : 0x53
Memory Detail :
Block Poll Page Polled
Memory Type Mode Delay Size Indx Paged Size Size #Pages MinW MaxW ReadBack
----------- ---- ----- ----- ---- ------ ------ ---- ------ ----- ----- ---------
eeprom 65 20 4 0 no 1024 4 0 9000 9000 0x00 0x00
flash 65 6 128 0 yes 32768 128 256 4500 4500 0x00 0x00
lfuse 0 0 0 0 no 1 0 0 9000 9000 0x00 0x00
hfuse 0 0 0 0 no 1 0 0 9000 9000 0x00 0x00
efuse 0 0 0 0 no 1 0 0 9000 9000 0x00 0x00
lock 0 0 0 0 no 1 0 0 9000 9000 0x00 0x00
calibration 0 0 0 0 no 1 0 0 0 0 0x00 0x00
signature 0 0 0 0 no 3 0 0 0 0 0x00 0x00
Programmer Type : butterfly
Description : Atmel AppNote AVR109 Boot Loader
Connecting to programmer: .
avrdude: butterfly_recv(): programmer is not responding
avrdude: ser_send(): write error: sorry no info avail
avrdude: butterfly_recv(): programmer is not responding
```<issue_comment>username_1: This was fixed by pushing the reset button quickly after I clicked on the upload button!
Upvotes: 6 [selected_answer]<issue_comment>username_2: This could be due to the way the Leonardo/Micro bootloader works (the uploader waits for a new serial port to appear, presuming that to be the bootloader).
The official advice is to press and hold reset on the board, click upload, wait for the code to compile and for the status bar to say 'Uploading...' and then release the reset button on the board.
See here: <https://www.arduino.cc/en/Guide/ArduinoLeonardoMicro#toc6>
Upvotes: 3 <issue_comment>username_3: I was having this problem with my Pro Micro (using Arduino board type Leonardo); I finally fixed it by NOT plugging into my USB hub and plugging directly into my PC.
Upvotes: 1 <issue_comment>username_4: I use an Arduino Leonardo from an Amperka kit and I stumbled upon the same issue. I finally beat this by following the steps below (they are similar to some of the solutions present here, but differ in the details):
1. Press and hold the reset button on the Arduino
2. Click “Upload” in the IDE
3. Wait until the compiling step is done
4. Wait for the “Waiting for upload port…” message
5. Release the Arduino reset button and everything works!
Upvotes: 0 <issue_comment>username_5: For me it only works connected to the computer's rear USB port, the one that is directly on the motherboard
Upvotes: 0
|
2018/03/14
| 370 | 1,269 |
<issue_start>username_0: `Path::exists` is not suitable, since its documentation states:
>
> This function will traverse symbolic links to query information about the destination file. In case of broken symbolic links this will return false.
>
>
><issue_comment>username_1: [`std::fs::read_link`](https://doc.rust-lang.org/std/fs/fn.read_link.html) seems what you want.
>
> This function will return an error in the following situations, but is not limited to just these cases:
>
>
> * path is not a symbolic link.
> * path does not exist.
>
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Another option is to use `std::fs::symlink_metadata` ([docs](https://doc.rust-lang.org/1.21.0/std/fs/fn.symlink_metadata.html)).
>
> Query the metadata about a file without following symlinks.
> This function will return an error ... [when] `path` does not exist.
>
>
>
The upside of this is that the returned `Metadata` struct contains a `FileType` struct ([docs](https://doc.rust-lang.org/1.21.0/std/fs/struct.Metadata.html#method.file_type)), which you can query about whether the path is a plain file, a directory or a symbolic link. This could be useful if the possible outcomes were more than "link exists" and "link does not exist".
Upvotes: 3
|
2018/03/14
| 406 | 1,223 |
<issue_start>username_0: I want to join on Column1 and (Column2 if there is a match, null if there isn't)
```
Table1
a,x
a,null
a,y
Table2
a,x,1
a,null,2
```
Join Result
```
a,x,1
a,null,2
a,y,2
```
|
2018/03/14
| 1,604 | 2,743 |
<issue_start>username_0: I'm trying to create a new series within a dataframe that maps a dictionary along two dimensions, first matching the key, then matching the value within an array. The existing series is a datetime: the key match is against the date and the value match is the hour (thus the new series `'dh'`).
There is a similar question for mapping a 1-dimensional array here: [Adding a new pandas column with mapped value from a dictionary](https://stackoverflow.com/questions/24216425/adding-a-new-pandas-column-with-mapped-value-from-a-dictionary), but that maps the entire array to each day.
Current code:
```
import pandas as pd
df = pd.DataFrame({'datetime':pd.date_range('1/1/2018','1/4/2018', freq = '1H', closed = 'left')})
day_hour = {1:range(48,0,-2),
2:range(96,0,-4),
3:range(120,0,-5) }
df['dh'] = df['datetime'].dt.day.map(day_hour)
```
Output snippet:
```
datetime dh
0 2018-01-01 00:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
1 2018-01-01 01:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
2 2018-01-01 02:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
3 2018-01-01 03:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
4 2018-01-01 04:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
5 2018-01-01 05:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
6 2018-01-01 06:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
7 2018-01-01 07:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
8 2018-01-01 08:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
9 2018-01-01 09:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
10 2018-01-01 10:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
11 2018-01-01 11:00:00 [48, 46, 44, 42, 40, 38, 36, 34, 32, 30, 28, 2...
```
Desired Output:
```
datetime dh
0 2018-01-01 00:00:00 48
1 2018-01-01 01:00:00 46
2 2018-01-01 02:00:00 44
3 2018-01-01 03:00:00 42
4 2018-01-01 04:00:00 40
5 2018-01-01 05:00:00 38
...
```<issue_comment>username_1: If your dict is well constructed, you do not need `map`:
```
df['dh']=sum(map(list,day_hour.values()),[])
```
Update
```
df['dh'] = df['datetime'].dt.day.map(day_hour)
df['new']=df.groupby(df['datetime'].dt.date).cumcount()
df['dh']=df.apply(lambda x : x['dh'][x['new']],axis=1)
```
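A more direct variant of the same idea, skipping the helper column (this leans on the fact that `range` objects support indexing, so `day_hour[day][hour]` resolves in one step):

```
df['dh'] = [day_hour[ts.day][ts.hour] for ts in df['datetime']]
```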
Upvotes: 3 [selected_answer]<issue_comment>username_2: I was playing golf with your problem. Note this assumes `df` still carries the mapped `dh` array column from the `map` step, so each row unpacks as `(t, h)`:
```
df.assign(dh=[h[t.hour] for t, h in df.values])
datetime dh
0 2018-01-01 00:00:00 48
1 2018-01-01 01:00:00 46
2 2018-01-01 02:00:00 44
3 2018-01-01 03:00:00 42
4 2018-01-01 04:00:00 40
...
```
Upvotes: 0
|
2018/03/14
| 1,578 | 6,975 |
<issue_start>username_0: I have the following code to configure Jetty server:
```
@Configuration
public class RedirectHttpToHttpsOnJetty2Config {
@Bean
public ConfigurableServletWebServerFactory webServerFactory() {
JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
factory.addServerCustomizers(new JettyServerCustomizer() {
@Override
public void customize(Server server) {
ServerConnector connector = new ServerConnector(server);
connector.setPort(80);
server.addConnector(connector);
}
});
return factory;
}
}
```
and
application.properties as
```
server.port=8443
server.ssl.key-store=classpath:keystore
server.ssl.key-store-password=<PASSWORD>
server.ssl.key-password=<PASSWORD>
```
My application works as expected when I access localhost:8443, but localhost:80 is not reachable. `gradlew bootRun` mentions
...
*Jetty started on port(s) 8443 (ssl, http/1.1), 80 (http/1.1) with context path '/'*
...
but upon visiting <http://localhost:80> I get the message
*This site can’t be reached... localhost refused to connect.*
I am looking for <http://localhost:80> to get redirected to <https://localhost:8443>.
I had it working in Tomcat:
```
@Bean
public ServletWebServerFactory servletContainer(){
TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory(){
@Override
protected void postProcessContext(Context context) {
SecurityConstraint securityConstraint = new SecurityConstraint();
securityConstraint.setUserConstraint("CONFIDENTIAL");
SecurityCollection collection = new SecurityCollection();
collection.addPattern("/*");
securityConstraint.addCollection(collection);
context.addConstraint(securityConstraint);
}
};
tomcat.addAdditionalTomcatConnectors(redirectConnector());
return tomcat;
}
private Connector redirectConnector(){
Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
connector.setScheme("http");
connector.setPort(80);
connector.setSecure(false);
connector.setRedirectPort(8443);
return connector;
}
```
But I am unable to find an equivalent for Jetty. Any pointers much appreciated.<issue_comment>username_1: You are missing the required `HttpConfiguration` on your port 80 `ServerConnector` to tell Jetty what your secure vs unsecured ports are.
The Jetty side `SecuredRedirectHandler` is how the redirect actually functions.
See: <https://github.com/jetty-project/embedded-jetty-cookbook/blob/master/src/main/java/org/eclipse/jetty/cookbook/SecuredRedirectHandlerExample.java>
SecuredRedirectHandlerExample.java
----------------------------------
```java
package org.eclipse.jetty.cookbook;
import java.net.URL;
import org.eclipse.jetty.cookbook.handlers.HelloHandler;
import org.eclipse.jetty.server.HttpConfiguration;
import org.eclipse.jetty.server.HttpConnectionFactory;
import org.eclipse.jetty.server.SecureRequestCustomizer;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.SslConnectionFactory;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.server.handler.SecuredRedirectHandler;
import org.eclipse.jetty.util.ssl.SslContextFactory;
public class SecuredRedirectHandlerExample
{
public static void main(String[] args) throws Exception
{
Server server = new Server();
int httpPort = 8080;
int httpsPort = 8443;
// Setup HTTP Connector
HttpConfiguration httpConf = new HttpConfiguration();
httpConf.setSecurePort(httpsPort);
httpConf.setSecureScheme("https");
// Establish the HTTP ServerConnector
ServerConnector httpConnector = new ServerConnector(server,
new HttpConnectionFactory(httpConf));
httpConnector.setPort(httpPort);
server.addConnector(httpConnector);
// Find Keystore for SSL
ClassLoader cl = SecuredRedirectHandlerExample.class.getClassLoader();
String keystoreResource = "ssl/keystore";
URL f = cl.getResource(keystoreResource);
if (f == null)
{
throw new RuntimeException("Unable to find " + keystoreResource);
}
// Setup SSL
SslContextFactory sslContextFactory = new SslContextFactory();
sslContextFactory.setKeyStorePath(f.toExternalForm());
sslContextFactory.setKeyStorePassword("<PASSWORD>");
sslContextFactory.setKeyManagerPassword("<PASSWORD>");
// Setup HTTPS Configuration
HttpConfiguration httpsConf = new HttpConfiguration(httpConf);
httpsConf.addCustomizer(new SecureRequestCustomizer()); // adds ssl info to request object
// Establish the HTTPS ServerConnector
ServerConnector httpsConnector = new ServerConnector(server,
new SslConnectionFactory(sslContextFactory,"http/1.1"),
new HttpConnectionFactory(httpsConf));
httpsConnector.setPort(httpsPort);
server.addConnector(httpsConnector);
// Add a Handlers for requests
HandlerList handlers = new HandlerList();
handlers.addHandler(new SecuredRedirectHandler()); // always first
handlers.addHandler(new HelloHandler("Hello Secure World"));
handlers.addHandler(new DefaultHandler()); // always last
server.setHandler(handlers);
server.start();
server.join();
}
}
```
Upvotes: 2 <issue_comment>username_2: The following configuration will set up a redirect from HTTP to HTTPS. It assumes that you already configure Spring Boot to listen on port 443 and SSL is configured properly.
```
@Bean
public ConfigurableServletWebServerFactory webServerFactory() {
JettyServletWebServerFactory factory = new JettyServletWebServerFactory();
factory.addServerCustomizers(new JettyServerCustomizer() {
@Override
public void customize(Server server) {
final HttpConnectionFactory httpConnectionFactory = server.getConnectors()[0].getConnectionFactory(HttpConnectionFactory.class);
final ServerConnector httpConnector = new ServerConnector(server, httpConnectionFactory);
httpConnector.setPort(80 /* HTTP */);
server.addConnector(httpConnector);
final HandlerList handlerList = new HandlerList();
handlerList.addHandler(new SecuredRedirectHandler());
for(Handler handler : server.getHandlers())
handlerList.addHandler(handler);
server.setHandler(handlerList);
}
});
return factory;
}
```
Upvotes: 2
|
2018/03/14
| 1,265 | 5,300 |
<issue_start>username_0: I have a dilemma. I am trying to find the best way to compare a string variable (`code`) to an array of strings; if it equals one of them, I want to break out of the for loop. Which one should I select? I think the 2nd one will work, but the 1st one seems like it would too, and it's simpler. Any advice would be appreciated.
```
String[] badcodes = {"8QQ", "8BQ", "8JQ"};
if (code.equals(badcodes)) {
break;
}
String[] badcodess = {"8QQ", "8BQ", "8JQ"};
for (String s : badcodess) {
if (s.equals(code)) {
break; // break out of for loop
}
}
```
--------------based on answer ----------------------
```
String[] badcodes = {"8QQ", "8BQ", "8JQ"};
boolean check = Arrays.asList(badcodes).contains(code);
if (check) {
// do something
} else {
// do something
}
```
|
2018/03/14
| 521 | 2,113 |
<issue_start>username_0: I am trying to reinstall Anaconda 64-bit on Windows 7 to use Python 3.6. The installation works right up until the end, but then it fails while executing the post-install script with the following error.
[](https://i.stack.imgur.com/gk82Zh.jpg)
However, if I check the `Lib` folder in the `Anaconda3` folder, I see the files `_nsis.py` and `subprocess.py` which the installer says are missing.
Ignoring the error results in no Anaconda Navigator in the Start menu, but the Anaconda Prompt is available for some reason.
I tried installing Miniconda too but am facing the same issue. Below is the error snapshot<issue_comment>username_1: I was able to resolve this issue by installing an archived version of Anaconda rather than the latest one available. Still not sure why the latest version won't install successfully.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can install directly from the command prompt or a terminal.
I was facing the same problem too, but it got resolved by using pip. Make sure that you upgrade your pip version before installing Jupyter Notebook / Anaconda packages.
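A minimal sketch of that route (note these commands are only examples — pip installs individual packages such as Jupyter, not the full Anaconda distribution):

```
python -m pip install --upgrade pip
python -m pip install jupyter
```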
Upvotes: 1 <issue_comment>username_3: I identified the reason behind the problem from this [github issue](https://github.com/ContinuumIO/anaconda-issues/issues/8856), which is summarized as follows:
>
> subprocess.py is trying to create the process to run post_install.bat from the command line interpreter (cmd.exe). In order to execute the command, the program has to find the full path of cmd.exe from the system variable called ComSpec. Anaconda3 will fail if the value of ComSpec is wrong.
>
>
>
In my case, I did the following to solve the problem:
1. I erased the existing value of ComSpec in the environment variables of the advanced system settings, set it to C:\Windows\System32\cmd.exe and restarted my PC.
2. I deleted the Anaconda3 folder from the ProgramData folder in C:\ (ProgramData folder is generally hidden, so change the folder options to show the hidden folders) and then started the installation again.
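If you prefer the command line for step 1, a quick way to inspect and reset the variable is sketched below (run the prompt as administrator; the `/M` flag writes the machine-level variable, and the change only takes effect in newly opened consoles):

```
echo %ComSpec%
setx ComSpec "C:\Windows\System32\cmd.exe" /M
```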
Upvotes: 0
|
2018/03/14
| 711 | 2,840 |
<issue_start>username_0: I have a method to calculate installed applications' storage size, data size, and cache size using IPackageStatsObserver, but on an actual device callback.onSuccess() is not triggered. On the emulator it works fine, but on the board success is never called. I also have the GET_PACKAGE_SIZE permission in the manifest.
Here is the code
```
public void requestAppInfo(String packageName, final AsyncDataReceiver<List<InstalledAppItem>> callback) {
PackageManager packageManager = getPackageManager();
try {
Method getPackageSizeInfo = packageManager.getClass().getMethod(GET_PACKAGE_SIZE_INFO, String.class, IPackageStatsObserver.class);
getPackageSizeInfo.invoke(packageManager, packageName, new IPackageStatsObserver.Stub() {
@Override
public void onGetStatsCompleted(PackageStats pStats, boolean succeeded) {
int appSize = (int) pStats.codeSize / 1048576;
int dataSize = (int) pStats.dataSize / 1024;
int cacheSize = (int) pStats.cacheSize / 1024;
systemData.clear();
systemData.add(new InstalledAppItem(0, "Internal Storage Used", appSize + " MB"));
systemData.add(new InstalledAppItem(1, "Data Used", dataSize + " KB"));
systemData.add(new InstalledAppItem(2, "Cache Used", cacheSize + "KB"));
//TODO Check Why callback not triggered on board.
callback.onSuccess(systemData);
}
});
} catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
e.printStackTrace();
}
}
```
|
2018/03/14
| 1,003 | 3,655 |
<issue_start>username_0: I have a static site (`mysite.com`) hosted via Netlify. Netlify currently manages my DNS, so that I have nameservers like so:
* `dns1.p07.nsone.net`
* `dns2.p07.nsone.net`
* `dns3.p07.nsone.net`
* `dns4.p07.nsone.net`
I have a bucket on my S3 named `dl.mysite.com`. I want to have it so that when somebody clicks a link such as `http://dl.mysite.com/file.pdf`, it grabs it from the S3 bucket.
Within my management dashboard for Netlify, I'm able to create a custom subdomain (`dl.mysite.com`), and it directs me to do the following:
>
> Point `dl` CNAME record to `obfuscated-url-d6f26e.netlify.com` Log in
> to the account you have with your DNS provider, and add a CNAME record
> for `dl` pointing to `obfuscated-url-d6f26e.netlify.com`.
>
>
>
In the past, when I used AWS exclusively to host the app and manage DNS, this was easily accomplished by just creating an Alias record in Route53 for the subdomain and pointing it to my bucket.
How can I accomplish this now that Route53 doesn't handle my DNS? Is it still possible to point that subdomain at a particular S3 bucket?<issue_comment>username_1: In order to use Static Hosting on S3 (allowing for your bucket to be a domain name), you can only use an alias on Route 53.
You can however use CloudFront to serve files using a domain name of your choice.
I would say your options are:
1. Switch back to using Route 53
2. Use CloudFront in-front of your S3 bucket
3. Just use the S3 bucket link
(Considering how many browsers obscure the address and that people don’t really care, I would go with option 3 *personally*.)
Upvotes: 3 <issue_comment>username_2: disclaimer: I work for Netlify.
Netlify is intended to host web content, not proxy entirely to S3 buckets. Using it as you describe may work but is against our [terms of service](https://www.netlify.com/tos) which specifically state that we intend to host websites with html content for people to browse, not computers. If that is literally all you want to do - serve content out of that S3 bucket, then I wouldn't bother with the intermediate step of using Netlify at all here. It's another point of failure that doesn't buy you much to put us between your visitors and S3. One more reasonable way to do it is the way @thomas above suggested.
However, if you want to use Netlify's DNS hosting for your domain since we host other site(s) for you, you can absolutely still use our DNS to set up a hostname that points directly to S3 - no terms of service violation there as we don't limit your use of our DNS service. Many customers have us host DNS for domains where there is one website on netlify and a dozen elsewhere. To configure this, I'd instead set that record as a CNAME to the S3 hostname, and handle any SSL certificate on AWS' side. This will work well and if it doesn't our tech support team will be happy to help you out.
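For illustration, such a record might look like the sketch below, written in generic zone-file notation (the region — and therefore the exact website endpoint — is an assumption here; use the endpoint S3 shows for your bucket, and note that for S3 static website hosting the bucket must be named exactly `dl.mysite.com`):

```
dl.mysite.com.  CNAME  dl.mysite.com.s3-website-us-east-1.amazonaws.com.
```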
In a third situation, if you want to serve **some** content at that hostname from S3 (e.g. PDFs) and some other content (e.g. your website's HTML files that link to the PDFs), that is legit! We'll be happy to proxy to your S3 content. For instance, you can set up a (reverse) proxy to `/files/*` on S3 like this, in a `/_redirects` file:
`/files/* http://aws-bucket-hostname/:splat 200!`
This says "load all files from Netlify EXCEPT /files/\* which should instead come from S3 with a path matching the part after /files/ in the URL". In this case we WILL handle the SSL certificate for you since we terminate the browser connection instead of S3.
More docs on that functionality here:
<https://www.netlify.com/docs/redirects/>
Upvotes: 4 [selected_answer]
|
2018/03/14
| 538 | 2,013 |
<issue_start>username_0: How can I tell `MockRestServiceServer` to expect specific JSON content during a JUnit `@Test`?
I thought I could expect just a json string as follows:
`.andExpect(content().json(...))`, but that method does not exist. So how could I rewrite the following?
```
private MockRestServiceServer mockServer;
private ObjectMapper objectMapper; //jackson
Person p = new Person();
p.setFirstname("first");
p.setLastname("last");
//mocking a RestTemplate webservice response
mockServer.expect(once(), requestTo("/servlet"))
.andExpect(content().json(objectMapper.writeValueAsString(p))) //.json() does not exist! why?
.andRespond(withSuccess(...));
```<issue_comment>username_1: How about using jsonPath?
```
andExpect(jsonPath("$.firstname", is("first")))
```
Upvotes: 0 <issue_comment>username_2: It seems your spring-test version is lower than 4.2.
Have a look at <https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/test/web/servlet/result/ContentResultMatchers.html#json-java.lang.String-boolean->
You can update your spring test dependency to be able to use the code that you have provided.
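If you manage the dependency with Maven, the bump might look like the sketch below (the version shown is the one the later answers point at for the request-side `json` matcher; with Spring Boot you would normally control this via the parent/BOM instead of pinning it directly):

```
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>5.0.5.RELEASE</version>
    <scope>test</scope>
</dependency>
```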
Upvotes: 2 <issue_comment>username_3: You can use for example:
```
mockServer.expect(ExpectedCount.once(), requestTo(path))
.andExpect(method(HttpMethod.POST))
.andExpect(jsonPath("$", hasSize(1)))
.andExpect(jsonPath("$[0].someField").value("some value"))
```
Upvotes: 3 <issue_comment>username_4: It seems the ContentRequestMatcher object supports JSON strings starting with [spring-test version 5.0.5.RELEASE](https://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/test/web/client/match/ContentRequestMatchers.html#json-java.lang.String-).
Upvotes: 2 <issue_comment>username_5: In *spring-test* version 5.0.5 the requested `json` method was added.
Now, I am able to write
```
.andExpect(content().json(objectMapper.writeValueAsString(someDTO)))
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 1,166 | 4,214 |
<issue_start>username_0: I have an async function and in the middle of the function body I have two awaits before finally some logic and a return. How does this get executed? Does AsyncCall1 finish before AsyncCall2 starts or are they started in parallel? What happens in the interim before they both have results?
Pseudo-JavaScript code resembling what I am doing:
```
const myFunction = async () => {
// some variable declarations
await AsyncCall1
await AsyncCall2
// some logic
return someData;
}
```<issue_comment>username_1: >
> What happens in the interim before they both have results?
>
>
>
The logic next to those functions will be blocked until they finish their job.
>
> How does this get executed? Does AsyncCall1 finish before AsyncCall2 starts or are they started in parallel?
>
>
>
`AsyncCall1` finishes first, and they won't be executed in parallel.
Regardless of the time that the function `AsyncCall1` takes, the function `AsyncCall2` won't be executed until `AsyncCall1` ends its job.
```js
function AsyncCall1() {
return new Promise(function(resolve) {
setTimeout(function() {
resolve("AsyncCall1");
}, 2000);
});
}
function AsyncCall2() {
return new Promise(function(resolve) {
setTimeout(function() {
resolve("AsyncCall2");
}, 5);
});
}
const myFunction = async () => {
let result = await AsyncCall1();
console.log(result);
let result2 = await AsyncCall2();
console.log(result2);
return someData;
}
console.log("Waiting 2 secs...")
myFunction();
```
The `await` blocks the execution, even if function `AsyncCall1` doesn't return a Promise:
>
> The [`await`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/await) expression causes async function execution to pause until a Promise is fulfilled, that is resolved or rejected, and to resume execution of the async function after fulfillment. When resumed, the value of the await expression is that of the fulfilled Promise.
>
>
> If the Promise is rejected, the await expression throws the rejected value.
>
>
> **If the value of the expression following the await operator is not a Promise, it's converted to a resolved Promise**.
>
>
>
```js
function AsyncCall1() {
return 5;
}
function AsyncCall2() {
return new Promise(function(resolve) {
setTimeout(function() {
resolve("AsyncCall2");
}, 5);
});
}
const myFunction = async () => {
let result = await AsyncCall1();
console.log(result);
let result2 = await AsyncCall2();
console.log(result2);
return someData;
}
myFunction();
```
Upvotes: 0 <issue_comment>username_2: ```
await AsyncCall1;
console.log("one");
await AsyncCall2;
console.log("two");
```
Is equal to:
```
AsyncCall1.then(function(){
console.log("one");
AsyncCall2.then(function(){
console.log("two");
});
});
```
>
> How does this get executed?
>
>
>
One actually can't tell. That depends on when the promises are resolved. However, *one* will always be logged before *two*.
>
> Does AsyncCall1 finish before AsyncCall2 starts or are they started in parallel?
>
>
>
It's impossible to say, because you don't start the promise-returning function here — you just await the results. If you would do:
```
await asyncFunc1();
await asyncFunc2();
```
Then *asyncFunc2* will only be called (/started) after the first function has resolved. However, if you do:
```
const promise1 = asyncFunc(), promise2 = asyncFunc();
await promise1;
await promise2;
```
Then both promises will be started at nearly the same time, then the code will continue the execution after *awaiting* both. Note that you should add proper error handling here, e.g. `catch` errors before *awaiting* (read on [here](https://stackoverflow.com/questions/46889290/waiting-for-more-than-one-concurrent-await-operation)):
```
const promise1 = asyncFunc().catch(e => null);
const promise2 = asyncFunc().catch(e => null);
```
>
> What happens in the interim before they both have results?
>
>
>
The JS engine executes something else. When both are finished, it jumps back to the position where the result was awaited.
Upvotes: 3 [selected_answer]
|
2018/03/14
| 2,189 | 6,613 |
<issue_start>username_0: Here is my query:
```
INSERT INTO aa_Tanks_Ponds ([User_Name],
[Type],
Site_Name,
Species,
Year_Class,
Tank_Pond,
Letter,
[Name],
Tank_Pond_uID,
Start_Inventory_#,
Start_Inventory_lbs,
Start_Inventory_avg,
Stocking_#,
Stocking_lbs,
Stocking_from,
Stocking_Total_#,
Stocking_Total_lbs,
Sales_#,
Sales_lbs,
Sales_Total_#,
Sales_Total_lbs,
Mortality_#,
Mortality_lbs,
Transfers_#,
Transfers_lbs,
Transfers_To,
Transfer_Total_#,
Transfer_Total_lbs,
Plus_Min_#,
Plus_Min_lbs,
Plus_Min_Total_#,
Plus_Min_Total_lbs,
Feed_lbs_Tanks_Ponds,
Feed_growth_feed,
Feed_growth_FCR,
Feed_growth_gain,
Final_Inventory_#,
Final_Inventory_lbs,
Final_Inventory_avg,
bg_color,
tx_color,
sort_order,
FeedValue)
VALUES ('Chase Ayers','Chase Ayers','Chase Ayers','Chase Ayers','Chase Ayers','Chase Ayers'),
(NULL,NULL,NULL,NULL,NULL,NULL),
('Dry Creek','Dry Creek','Dry Creek','Dry Creek','Dry Creek','Dry Creek'),
('Sturgeon','Sturgeon','Sturgeon','Sturgeon','Sturgeon','Sturgeon'),
('14C','14C','14C','14C','14C','14C'),
('Tank','Tank','Tank','Tank','Tank','Tank'),
('H','H','H','H','H','H'),
('H1','H2','H3','H4','H5','H6'),
('DCH1','DCH2','DCH3','DCH4','DCH5','DCH6'),
(2985,2995,2678,2947,3175,3040),
(30144,27560,27161,27956,32600,33221),
(10.1,9.2,10.14,9.49,10.27,10.93),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(' ',' ',' ',' ',' ',' '),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(' ',' ',' ',' ',' ',' '),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(0,0,0,0,0,0),
(2,2,2,2,2,2),
(0,0,0,0,0,0),
(2985,2995,2678,2947,3175,3040),
(30144,27560,27161,27956,32600,33221),
(10.1,9.2,10.14,9.49,10.27,10.93),
('FFFF00','99CC33','3399CC','FF66CC','FFFF00','FFFF00'),
('996600','CCFF66','660099','CCCCCC','900000','FFCC99'),
(0,0,0,0,0,0),
(NULL,NULL,NULL,NULL,NULL,NULL);
```
I have checked through it multiple times to try and find out why it gives the error:
>
> There are more columns in the INSERT statement than values specified
> in the VALUES clause. The number of values in the VALUES clause must
> match the number of columns specified in the INSERT statement.
>
>
>
To me this error is saying that I have 43 columns in the `INSERT` clause but not 43 matching values in the `VALUES` clause. Yet every time I check through, there is a match.
Is there an easier way to troubleshoot data like this other than just walking through it manually?<issue_comment>username_1: I think you're expecting that each group of `()` is a column of values. Those are rows. Each item in that group is a column.
You need to insert one row at a time.
```
VALUES(User_Name
,Type
,Site_Name
,Species
,Year_Class
,Tank_Pond
,Letter
,NAME
,Tank_Pond_uID
,...etc)
,
(User_Name
,Type
,Site_Name
,Species
,Year_Class
,Tank_Pond
,Letter
,NAME
,Tank_Pond_uID
,...etc)
```
This makes sense because data are stored as rows in the database. They are not stored as columns; thus (basically) every operation is a row-based operation.
Upvotes: 2 <issue_comment>username_2: >
> In the SQL Server Management Studio, errors can be tracked down easily, using the built in Error List pane. This pane can be activated in the View menu, or by using shortcuts **Ctrl+\ and Ctrl+E**
>
>
>
Read more [here](https://www.sqlshack.com/sql-syntax-errors/).
Upvotes: 1 <issue_comment>username_3: Because it's much more legible, you might consider putting the to-be-imported data into a CSV and then importing *that* to SQL, rather than having to write this out in the current format. It's easier to be sure that everything is arranged as you want it to be. Here's the how-to on CSV import, assuming you're using the SQL Server Management Studio: <https://support.discountasp.net/kb/a1179/how-to-import-a-csv-file-into-a-database-using-sql-server-management-studio.aspx>
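As a script-based alternative to the import wizard described in that link, a rough sketch (the file path is a placeholder, and the options assume a comma-separated file whose first row is a header):

```
BULK INSERT aa_Tanks_Ponds
FROM 'C:\data\tanks_ponds.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```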
Upvotes: 1 <issue_comment>username_4: What you have in each set of parentheses after "VALUES" is one row to be inserted. So currently your first row is trying to put just 6 values (six copies of "Chase Ayers") into 43 columns. Your first line of the insert would be this (it then just needs adjusting for the remaining 5 rows to be inserted) - notice that all 43 values for the row are within 1 set of brackets:
```
INSERT INTO aa_Tanks_Ponds ([User_Name],[Type],Site_Name,Species,Year_Class,Tank_Pond,Letter,[Name],Tank_Pond_uID,Start_Inventory_#,Start_Inventory_lbs,Start_Inventory_avg,Stocking_#,Stocking_lbs,Stocking_from,Stocking_Total_#,Stocking_Total_lbs,Sales_#,Sales_lbs,Sales_Total_#,Sales_Total_lbs,Mortality_#,Mortality_lbs,Transfers_#,Transfers_lbs,Transfers_To,Transfer_Total_#,Transfer_Total_lbs,Plus_Min_#,Plus_Min_lbs,Plus_Min_Total_#,Plus_Min_Total_lbs,Feed_lbs_Tanks_Ponds,Feed_growth_feed,Feed_growth_FCR,Feed_growth_gain,Final_Inventory_#,Final_Inventory_lbs,Final_Inventory_avg,bg_color,tx_color,sort_order,FeedValue)
VALUES ('Chase Ayers',NULL,'Dry Creek','Sturgeon','14C','Tank','H','H1','DCH1',2985,30144,10.1,0,0,' ',0,0,0,0,0,0,0,0,0,0,' ',0,0,0,0,0,0,0,0,2,0,2985,30144,10.1,'FFFF00','996600',0,NULL);
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 623 | 2,000 |
<issue_start>username_0: I am fairly new to SQL and I'm stumped on a query I'm trying to make. I have a dataset similar to this
Player
```
masterID firstName lastName
11111 Rob Caspian
22222 Bob Harper
33333 Jon Willow
44444 Ron Anderson
55555 Doug Blake
```
Appearances
```
masterID gameName
11111 chess
11111 checkers
11111 chess
22222 chess
22222 chess
33333 monopoly
33333 monopoly
33333 monopoly
44444 chess
55555 risk
55555 chess
```
I want to return the first and last names of all players who only played chess and no other game. So it should return Bob Harper and Ron Anderson.
```
SELECT player.firstName, player.lastName from player, appearances where player.masterID = appearances.masterID and appearances.gameName = 'chess';
```
I have a query that returns everyone who ever played chess, no matter what other games they played, but I can't figure out how to limit it to the players who only played chess. It should also return each player's name only once. Thanks for the help!<issue_comment>username_1: You can just add another filter towards the end of the query to filter out masterIDs that appear in other games:
```
SELECT player.firstName, player.lastName
from player
join appearances on player.masterID = appearances.masterID
where appearances.gameName = 'chess'
and player.masterID not in (select masterID from Appearances where gameName != 'chess')
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can obtain the result by just using a [join](http://www.mysqltutorial.org/mysql-join/) and [aggregation](http://www.mysqltutorial.org/mysql-group-by.aspx) as illustrated below.
```
SELECT A.firstName, A.lastName
FROM Player A LEFT JOIN Appearances B
ON A.masterID=B.masterID
GROUP BY A.firstName, A.lastName
HAVING SUM(B.gameName = 'chess')=COUNT(*);
```
See it [run on SQL Fiddle](http://sqlfiddle.com/#!9/d356cf/1).
Upvotes: 1
|
2018/03/14
| 1,023 | 3,760 |
<issue_start>username_0: I'm currently looking for the simplest possible JSON log messages that would simply write a severity and a message to Stackdriver Logging from a container that runs in Kubernetes Engine and uses the managed Fluentd daemon.
Basically I'm writing single line JSON entries as follows.
```js
{"severity": "DEBUG", "message": "I'm a debug entry"}
{"severity": "ERROR", "message": "I'm an error entry"}
```
These end up in Stackdriver logging with the following results.
* Severity is always INFO
* There's JSON payload in the log entry, and the only content is the message, i.e. severity does not go there.
My conclusion is that Fluentd recognizes the log row as JSON, but what I don't understand is why the severity is not set on the log entries correctly. Am I missing some mandatory fields that need to be in place?<issue_comment>username_1: From the information you provided, I guess fluentd is passing your whole JSON as a jsonPayload in a [logEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) and providing the logName, resource type and the rest of the required information from the environment variables.
In the end what Stackdriver is receiving must look something like this:
```
{
"logName": "projects/[YOUR PROJECT ID]/logs/[KUBERNETES LOG]",
"entries": [
{
"jsonPayload": {
"message": "I'm an ERROR entry",
"severity": "ERROR"
},
"resource": {
"labels": {
"project_id": "[YOUR PROJECT ID]",
"instance_id": "[WHATEVER]",
"zone": "[YOUR ZONE]"
},
"type": "gce_instance"
}
}
]
}
```
So you are actually getting the content of the JSON payload on Stackdriver. But the severity has to be defined either **outside** the JSON payload or, if you want to do it inside, you'll have to use `"severity": enum([NUMERICAL VALUE])`.
[The numerical values of each log level are:](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logseverity)
>
> Enums
>
> DEFAULT (0) The log entry has no assigned severity level.
>
> DEBUG (100) Debug or trace information.
>
> INFO (200) Routine information, such as ongoing status or performance.
>
> NOTICE (300) Normal but significant events, such as start up, shut down, or a configuration change.
>
> WARNING (400) Warning events might cause problems.
>
> ERROR (500) Error events are likely to cause problems.
>
> CRITICAL (600) Critical events cause more severe problems or outages.
>
> ALERT (700) A person must take an action immediately.
>
> EMERGENCY (800) One or more systems are unusable.
>
>
>
So, including the field `"severity": enum(500)` should log the entry as an ERROR instead of falling back to the default INFO.
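Applied to the question's log lines, that pseudo-notation would presumably become the plain numeric value, roughly like this (note: the last answer below reports that in this GKE/fluentd setup the *string* form of severity works while the numeric enum form does not, so treat this as the API-level representation rather than a guaranteed fix):

```
{"severity": 500, "message": "I'm an error entry"}
```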
Upvotes: 4 [selected_answer]<issue_comment>username_2: My conclusions about the logging behavior are the following, as I've now got things working.
* GKE handles the resource/logName part; no need to worry about this one.
* You can log any JSON structure; it will go into jsonPayload.
* Plain-text log rows go into textPayload, and the same applies e.g. to broken JSON. On a few occasions I've seen two JSON log rows in a single textPayload, i.e. broken JSON. Whether it's my applications or something else, I don't know yet.
* There are special JSON fields, such as severity, that will not go into jsonPayload (see the sketch below). This works as documented in the [GCP logging entry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) reference.
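A sketch of that last point, using the question's own payload (the resulting entry is simplified — resource, timestamps etc. omitted):

```
written by the app (one line):
  {"severity": "ERROR", "message": "I'm an error entry"}

resulting Stackdriver entry (simplified):
  {"severity": "ERROR", "jsonPayload": {"message": "I'm an error entry"}}
```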
Upvotes: 2 <issue_comment>username_3: An update for logging severity: severity can be logged within the `jsonPayload`, and the fluentd daemon moves it to the outside of jsonPayload. It works when using the string values of severity, not the enums; e.g. `"severity": "ERROR"` inside `jsonPayload` works fine.
Upvotes: 2
|
2018/03/14
| 1,640 | 4,405 |
<issue_start>username_0: How do I do 2-tier sorting on an array using the criteria below:
`edd` value groups:
1. 1-10
2. 11-20
3. 21-30
The array is
```
$var_array = [
['name' => 'product1', 'edd'=>16, 'price' => 89],
['name' => 'product2', 'edd'=>21, 'price' => 99],
['name' => 'product3', 'edd'=>2, 'price' => 110],
['name' => 'product4', 'edd'=>14, 'price' => 102],
['name' => 'product5', 'edd'=>8, 'price' => 119],
['name' => 'product6', 'edd'=>6, 'price' => 123],
['name' => 'product7', 'edd'=>26, 'price' => 93],
['name' => 'product8', 'edd'=>27, 'price' => 105],
['name' => 'product9', 'edd'=>18, 'price' => 133],
];
```
First sort the `edd`, and then sort the `price` within each `edd` group level.
Expected result
```
$var_array = [
['name' => 'product3', 'edd' => 2, 'price' => 110],
['name' => 'product5', 'edd' => 8, 'price' => 119],
['name' => 'product6', 'edd' => 6, 'price' => 123],
['name' => 'product1', 'edd' => 16, 'price' => 89],
['name' => 'product4', 'edd' => 14, 'price' => 102],
['name' => 'product9', 'edd' => 18, 'price' => 133],
['name' => 'product7', 'edd' => 26, 'price' => 93],
['name' => 'product2', 'edd' => 21, 'price' => 99],
['name' => 'product8', 'edd' => 27, 'price' => 105],
];
```<issue_comment>username_1: You can use `array_reduce` and `array_map`
```
$var_array = array(
array('name' => 'product1', 'edd'=>16, 'price' => 89),
array('name' => 'product2', 'edd'=>21, 'price' => 99),
array('name' => 'product3', 'edd'=>2, 'price' => 110),
array('name' => 'product4', 'edd'=>14, 'price' => 102),
array('name' => 'product5', 'edd'=>8, 'price' => 119),
array('name' => 'product6', 'edd'=>6, 'price' => 123),
array('name' => 'product7', 'edd'=>26, 'price' => 93),
array('name' => 'product8', 'edd'=>27, 'price' => 105),
array('name' => 'product9', 'edd'=>18, 'price' => 133),
);
//Group array and sort key
$temp = array_reduce($var_array, function($c, $v){
$c[ ceil($v["edd"] / 10) * 10 ][] = $v;
return $c;
}, array());
ksort($temp);
//Sort array
$temp = array_map(function ($n) {
usort($n, function($a, $b){
return $a["price"] - $b["price"];
});
return $n;
}, $temp );
//Make the 2-dimensional array into 1
$result = array_reduce($temp, 'array_merge', array());
echo "
```
";
print_r( $result );
echo "
```
";
```
This will result in:
```
Array
(
[0] => Array
(
[name] => product3
[edd] => 2
[price] => 110
)
[1] => Array
(
[name] => product5
[edd] => 8
[price] => 119
)
[2] => Array
(
[name] => product6
[edd] => 6
[price] => 123
)
[3] => Array
(
[name] => product1
[edd] => 16
[price] => 89
)
[4] => Array
(
[name] => product4
[edd] => 14
[price] => 102
)
[5] => Array
(
[name] => product9
[edd] => 18
[price] => 133
)
[6] => Array
(
[name] => product7
[edd] => 26
[price] => 93
)
[7] => Array
(
[name] => product2
[edd] => 21
[price] => 99
)
[8] => Array
(
[name] => product8
[edd] => 27
[price] => 105
)
)
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This 2-rule sort can be done simply with `usort()`. Use `ceil($number / 10)` to make groups of 1 - 10, 11 - 20, etc. If you instead want groups of 0 - 9, 10 - 19, 20 - 29, use `intdiv($number, 10)`.
1. Sort by the `edd` values' tens group (`ceil($edd / 10)`) ascending, then
2. Sort by `price` value ascending.
Code: ([Demo](https://3v4l.org/DQDK0))
```
usort(
$var_array,
fn($a, $b) => [ceil($a['edd'] / 10), $a['price']]
<=>
[ceil($b['edd'] / 10), $b['price']]
);
```
However, it will be more efficient to use `array_multisort()` because fewer function calls will be needed.
Code: ([Demo](https://3v4l.org/hu8vc))
```
$groups = [];
$prices = [];
foreach ($var_array as $row) {
$groups[] = ceil($row['edd'] / 10);
$prices[] = $row['price'];
}
array_multisort($groups, $prices, $var_array);
```
Upvotes: 0
|
2018/03/14
| 364 | 1,380 |
<issue_start>username_0: I have a new project for a new job. The project will not build on my computer. It has yellow triangles under every reference that isn't a project in the solution. It even has a yellow triangle for Microsoft.CSharp. I have been digging and have tried many things from online and Stack Overflow. Anyone have any ideas? I would love to keep this job. Thanks.<issue_comment>username_1: You need to restore the NuGet packages.
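For example, from a command prompt in the solution directory — the solution name below is a placeholder, and `nuget.exe` needs to be on your PATH (inside Visual Studio, right-clicking the solution and choosing "Restore NuGet Packages" does the same thing):

```
nuget restore MySolution.sln
```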
Upvotes: 1 <issue_comment>username_2: Like @Dai and @Michael suggest, you need to re-install the references. I'll just provide the details.
There are many ways to do it. This is what I usually do.
Step 1: [Enable NuGet package restore](https://learn.microsoft.com/en-us/nuget/consume-packages/package-restore#migrating-to-automatic-restore).
[](https://i.stack.imgur.com/Ao7jV.png)
Step 2: go to the `packages` folder of your project: C:\path\your project\packages\
Step 3: delete everything in that folder
Step 4: open the project again in VS and **Build** the project (**Build > Build Solution**). NuGet will download and re-install all required packages
Upvotes: 1 <issue_comment>username_3: 1. try to do a build first
2. if that doesn't fix it, then restore the NuGet packages
3. if that doesn't work, then clear your local NuGet cache and rebuild the project (see the sketch below)
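A sketch of step 3 from the command line (the first line uses the classic NuGet CLI; the second is the dotnet CLI equivalent — use whichever your setup has):

```
nuget locals all -clear
dotnet nuget locals all --clear
```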
Upvotes: 0
|
2018/03/14
| 1,059 | 2,672 |
<issue_start>username_0: I have a dataframe with columns "pid", "code", "type" and "note". Each pid (patient id) is coupled to a diagnosis and a medicine. These both have a code and a note (see example). Each code can have several notes. How do I replace all codes with the corresponding first-seen note and type? Doing a for-loop is not an option.
Note that code A, for example, has both alvedon and ipren in the IN, but only alvedon in the OUT. This way, I can identify them by name instead of by code.
IN:
```
pid code type note
1 A M alvedon
1 B D pain
1 C M ulcer
2 A M ipren
2 B D hurt
3 A M alvedon
3 B D hurt
3 C M stomach
4 A M ipren
4 B D pain
5 A M ipren
5 B D pain
```
OUT:
```
pid code
1 A_M_alvedon
1 B_D_pain
1 C_M_ulcer
2 A_M_alvedon
2 B_D_pain
3 A_M_alvedon
3 B_D_pain
3 C_M_ulcer
4 A_M_alvedon
4 B_D_pain
5 A_M_alvedon
5 B_D_pain
```
I have come this far:
```
df.groupby('code').note.agg(['first'])
```
But this only gives me the grouping:
OUT:
```
code first
A alvedon
B pain
C ulcer
```<issue_comment>username_1: You could concatenate, then groupby on the code like you're doing, and then transform:
```
In [9]: df["out"] = (
(df["code"] + "_" + df["type"] + "_" + df["note"]).groupby(df["code"]).transform("first"))
In [10]: df
Out[10]:
pid code type note out
0 1 A M alvedon A_M_alvedon
1 1 B D pain B_D_pain
2 1 C M ulcer C_M_ulcer
3 2 A M ipren A_M_alvedon
4 2 B D hurt B_D_pain
5 3 A M alvedon A_M_alvedon
6 3 B D hurt B_D_pain
7 3 C M stomach C_M_ulcer
8 4 A M ipren A_M_alvedon
9 4 B D pain B_D_pain
10 5 A M ipren A_M_alvedon
11 5 B D pain B_D_pain
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is one way. You can drop duplicates and construct your combined format. Then map to your original dataframe by `code`.
```
df2 = df.drop_duplicates('code')
df2['comb'] = df2['code'] + '_' + df2['type'] + '_' + df2['note']
df['code'] = df['code'].map(df2.set_index('code')['comb'])
df = df[['pid', 'code']]
# pid code
# 0 1 A_M_alvedon
# 1 1 B_D_pain
# 2 1 C_M_ulcer
# 3 2 A_M_alvedon
# 4 2 B_D_pain
# 5 3 A_M_alvedon
# 6 3 B_D_pain
# 7 3 C_M_ulcer
# 8 4 A_M_alvedon
# 9 4 B_D_pain
# 10 5 A_M_alvedon
# 11 5 B_D_pain
```
Upvotes: 1
|
2018/03/14
| 733 | 1,974 |
<issue_start>username_0: I created a plain vanilla app using ember-cli v3.0.0 and added a route to it. When I build the app and serve it using `ember serve`, all routes load successfully. However, when I change the working directory to dist and serve it using http-serve, none of the routes load.
Is there any specific configuration required for serving the app through http-serve (or any similar web server)?
The app is in a public repo on GitHub at <https://github.com/shubmittal/testemberapp.git>
|
2018/03/14
| 1,448 | 5,857 |
<issue_start>username_0: I have a pretty standard setup related to `JMS` - `Spring Boot` and `ActiveMQ`. It works fine, until I try to do a simple integration test. After some investigation I found that both the Spring context and the embedded broker are closed after the first JMS message has been consumed, even though another event is fired during that consumption. The broker issue I was able to solve by adding the `useShutdownHook=false` connection option in the test setup, i.e.
```
spring.activemq.broker-url = vm://broker?async=false&broker.persistent=false&broker.useShutdownHook=false
```
What I'm looking for is basically a way to force the test to "stay alive" until all JMS messages are consumed (in this case there are just two). I understand the async nature of the whole setup, but during tests it would still be helpful to get all the results of these messages being produced and consumed.
Below is my setup, although it's fairly simple.
```
@EnableJms
public class ActiveMqConfig {
@Bean
public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory, MessageConverter messageConverter) {
JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
jmsTemplate.setMessageConverter(messageConverter);
return jmsTemplate;
}
@Bean
public MessageConverter messageConverter() {
MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
messageConverter.setTargetType(MessageType.TEXT);
messageConverter.setTypeIdPropertyName("_type");
return messageConverter;
}
}
```
I then have a message-driven POJO that listens for a given event:
```
@JmsListener(destination = "events")
public void applicationSubmitted(MyType event) {
// do some work with the event here
jmsTemplate.convertAndSend("commands", mymessage);
}
```
And another one:
```
@JmsListener(destination = "commands")
public void onCommand(TextMessage textMessage) {
}
```
One thing that I tried out, and it worked, is to add a delay, i.e. `sleep(200)`, after the message is sent. However, that's very unreliable and also slows down tests, as the execution perhaps takes less than 50ms. Below is the test itself. Unless the sleep is uncommented, I never get to the second event listener, as the application context closes, the test ends and the message is "forgotten".
```
@SpringBootTest
class MyEventIntegrationTest extends Specification {
@Autowired
JmsTemplate jmsTemplate
def "My event is successfully handled"() {
given:
def event = new MyEvent()
when:
jmsTemplate.convertAndSend("events", event)
// sleep(200)
then:
1 == 1
}
}
```<issue_comment>username_1: I think the root of your problem is the asynchronous event handling. After you've sent the event, your test is just over. This will - of course - cause the Spring context and the broker to shut down. The JMS listeners are running in another thread. You must find a way to wait for them. Otherwise, your thread (which is your test case) is just done.
We faced a similar problem in our last project and wrote a small utility which helped us a lot. JMS offers the ability to "browse" a queue and check whether it's empty:
```
public final class JmsUtil {
private static final int MAX_TRIES = 5000;
private final JmsTemplate jmsTemplate;
public JmsUtil(JmsTemplate jmsTemplate) {
this.jmsTemplate = jmsTemplate;
}
private int getMessageCount(String queueName) {
return jmsTemplate.browseSelected(queueName, "true = true", (s, qb) -> Collections.list(qb.getEnumeration()).size());
}
public void waitForAll(String queueName) {
int i = 0;
while (i <= MAX_TRIES) {
if (getMessageCount(queueName) == 0) {
return;
}
i++;
}
}
}
```
With this utility you could do something like this:
```
def "My event is successfully handled"() {
given:
def event = new MyEvent()
when:
jmsTemplate.convertAndSend("events", event)
jmsUtility.waitForAll("events"); // wait until the event has been consumed
jmsUtility.waitForAll("commands"); // wait until the command has been consumed
then:
1 == 1
}
```
Note: This utility assumes that you send JMS messages to a queue. By browsing the queue we can check if it's empty. In case of a topic you might need to do another check. So be aware of that!
Upvotes: 3 [selected_answer]<issue_comment>username_2: Well, this is a standard problem when testing systems based on async message exchange. Usually, it's solved in the part of the test that you skipped - the `then` part.
The thing is that in your tests you usually expect the system to do something useful, e.g. make changes in the DB, send a REST call to another system, send a message to another queue, etc. **We can wait for a while, constantly checking for the result - if the result is achieved within the time window that we have set, then we can assume the test has passed.**
The pseudo code of this approach is the following:
```
for (i to MAX_RETRIES; i++) {
checkThatTheChangesInDBHasBeenMade();
checkThatTheRestCallHasBeenMade();
checkThatTheMessageIsPostedInAnotherQueue();
Thread.sleep(50ms);
}
```
This way, in the best scenario your test will pass in 50ms. In the worst case, it will fail, and the test will take MAX_RETRIES * 50ms to execute.
Also, I should mention that there is a nice tool called [awaitility](https://github.com/awaitility/awaitility/wiki/Getting_started) that provides a nice API (btw, it has Groovy DSL support) to handle this kind of problem in the async world:
```
await().atMost(5, SECONDS).until(customerStatusIsUpdated());
```
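In this Spock/JMS test that could look roughly like the sketch below (`commandWasProcessed()` is a hypothetical helper — you would assert on whatever observable side effect the `commands` listener produces, e.g. a DB row or a captured mock interaction; `await` and `SECONDS` come from Awaitility's static imports):

```
await().atMost(5, SECONDS).untilAsserted {
    // placeholder assertion on the side effect of the "commands" listener
    assert commandWasProcessed()
}
```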
Upvotes: 2
|