date | nb_tokens | text_size | content |
---|---|---|---|
2018/03/16 | 417 | 1,703 |
<issue_start>username_0: I have implemented Google Maps and displayed my current location on the map.
Now I have implemented Place Autocomplete, and I am getting the result successfully:
```
override fun onPlaceSelected(place: Place) {
Log.i("TAG", "Place Selected: " + place.name)
if (!TextUtils.isEmpty(place.attributions)) {
Log.d("TAG", place.attributions.toString())
}
}
```
Now I want to display this place name on my Google Map. How can we achieve this?<issue_comment>username_1: You can use the code below to implement autocomplete
----------------------------------------------------
```
var placeSearch, autocomplete;
var componentForm = {
street_number: 'short_name',
route: 'long_name',
locality: 'long_name',
administrative_area_level_1: 'short_name',
country: 'long_name',
postal_code: 'short_name'
};
function initAutocomplete() {
// Create the autocomplete object, restricting the search to geographical
// location types.
autocomplete = new google.maps.places.Autocomplete(
/** @type {!HTMLInputElement} */(document.getElementById('autocomplete')),
{types: ['geocode']});
// When the user selects an address from the dropdown, populate the address
// fields in the form.
autocomplete.addListener('place_changed', fillInAddress);
}
```
Upvotes: 0 <issue_comment>username_2: I think you want to add a marker at the point of that place and show the name of that place in the title of the marker.
Something like:
```
googleMap.addMarker(new MarkerOptions()
.position(new LatLng(latitude, longitude))
.title("Your position")).showInfoWindow();
```
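On the Kotlin side of the question, the marker can be added directly in `onPlaceSelected` (a sketch assuming a `googleMap` reference obtained in `onMapReady`; the zoom level is arbitrary):
```
override fun onPlaceSelected(place: Place) {
    place.latLng?.let { latLng ->
        // Drop a marker at the selected place, titled with the place name
        googleMap.addMarker(MarkerOptions()
                .position(latLng)
                .title(place.name.toString()))
                ?.showInfoWindow()
        // Move the camera so the new marker is in view
        googleMap.moveCamera(CameraUpdateFactory.newLatLngZoom(latLng, 15f))
    }
}
```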
Hope this helps.
Upvotes: 3 [selected_answer]
|
2018/03/16 | 549 | 2,252 |
<issue_start>username_0: I am developing a react-native quiz application, where I will fetch 5 questions from the backend and display them one by one to the user. The user can skip the current question or answer it to go to the next question.
There is a timer on the top that counts down from 300 seconds.
Now the problem I am facing is that whenever the user skips/attempts a question (visits the next question), the timer resets itself and starts counting down again from 300 seconds.
I want to keep the timer running even after the next question is visited.
Can anyone please help me resolve this?
If anyone needs it, I can give example code of what/how I am implementing this.
Below is example code:
```
export default class QuestionLayout extends Component {
constructor(props) {
super(props);
this.state = {
currentQuestion: 0,
questionsData: {}
};
}
handleSkipQuestion() {
this.props.next_question();
}
render() {
const data = this.props.questions;
return (
);
}
}
```
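The usual cause here is keeping the countdown state inside the component that gets re-mounted for every question, so each mount restarts it. A minimal sketch of lifting the timer into a parent component (names besides `QuestionLayout` and `next_question` are illustrative, not the asker's actual code):
```
import React, { Component } from 'react';
import QuestionLayout from './QuestionLayout'; // hypothetical path to the asker's component

export default class Quiz extends Component {
  constructor(props) {
    super(props);
    this.state = { secondsLeft: 300, currentQuestion: 0 };
  }

  componentDidMount() {
    // One interval owned by the parent; navigating questions never resets it
    this.timer = setInterval(
      () => this.setState(s => ({ secondsLeft: s.secondsLeft - 1 })),
      1000
    );
  }

  componentWillUnmount() {
    clearInterval(this.timer);
  }

  render() {
    return (
      <QuestionLayout
        questions={this.props.questions}
        secondsLeft={this.state.secondsLeft}
        next_question={() =>
          this.setState(s => ({ currentQuestion: s.currentQuestion + 1 }))
        }
      />
    );
  }
}
```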
|
2018/03/16 | 450 | 1,303 |
<issue_start>username_0: I am working with a dataframe which has a column 'Col' of type Float. The values of the column have too many decimal places (example: 1.00000000000111). How can I limit the column to save values with only 1 decimal place (example: 1.0)?<issue_comment>username_1: Check this out:
```
import pandas as pd
df = pd.DataFrame([4.5678,5,1.00000000000111], columns=['Col'])
s = df['Col'].round(1)
print(s)
0 4.6
1 5.0
2 1.0
```
Upvotes: -1 <issue_comment>username_2: You can use `round` from `pyspark.sql.functions`:
```
+----------------+
| Col|
+----------------+
|1.00000000000111|
| 1.000000011|
+----------------+
>>> from pyspark.sql import functions as F
>>> df = df.withColumn('Col',F.round('Col',1))
>>> df.show()
+---+
|Col|
+---+
|1.0|
|1.0|
+---+
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can use the `round`, `ceil` or `floor` functions in `pyspark.sql.functions` ( depending on how you want to limit the digits)
For example:
```
import pyspark.sql.functions as F
# assuming df is your dataframe and float_column_name is the name of the
# column with type FloatType, replace the column that has floats with
# the column that has rounded floats:
df = df.withColumn('float_column_name', F.round('float_column_name', 2))
```
Upvotes: 0
|
2018/03/16 | 901 | 2,091 |
<issue_start>username_0: I have the following DataFrame:
```
+----------+-------------------+
| timestamp| created|
+----------+-------------------+
|1519858893|2018-03-01 00:01:33|
|1519858950|2018-03-01 00:02:30|
|1519859900|2018-03-01 00:18:20|
|1519859900|2018-03-01 00:18:20|
```
How do I create a timestamp correctly?
I was able to create a `timestamp` column which is an epoch timestamp, but the dates do not coincide:
```
df.withColumn("timestamp",unix_timestamp($"created"))
```
For example, `1519858893` points to `2018-02-28`.<issue_comment>username_1: Try the below code:
```
df.withColumn("dateColumn", df("timestamp").cast(DateType))
```
Upvotes: 0 <issue_comment>username_2: Just use `date_format` and `to_utc_timestamp` *inbuilt functions*
```
import org.apache.spark.sql.functions._
df.withColumn("timestamp", to_utc_timestamp(date_format(col("created"), "yyy-MM-dd"), "Asia/Kathmandu"))
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can check one solution here <https://stackoverflow.com/a/46595413>
To elaborate more on that with the dataframe having different formats of timestamp/date in string, you can do this -
```
val df = spark.sparkContext.parallelize(Seq("2020-04-21 10:43:12.000Z", "20-04-2019 10:34:12", "11-30-2019 10:34:12", "2020-05-21 21:32:43", "20-04-2019", "2020-04-21")).toDF("ts")
def strToDate(col: Column): Column = {
val formats: Seq[String] = Seq("dd-MM-yyyy HH:mm:SS", "yyyy-MM-dd HH:mm:SS", "dd-MM-yyyy", "yyyy-MM-dd")
coalesce(formats.map(f => to_timestamp(col, f).cast(DateType)): _*)
}
val formattedDF = df.withColumn("dt", strToDate(df.col("ts")))
formattedDF.show()
+--------------------+----------+
| ts| dt|
+--------------------+----------+
|2020-04-21 10:43:...|2020-04-21|
| 20-04-2019 10:34:12|2019-04-20|
| 2020-05-21 21:32:43|2020-05-21|
| 20-04-2019|2019-04-20|
| 2020-04-21|2020-04-21|
+--------------------+----------+
```
**Note**: - This code assumes that data does not contain any column of format -> MM-dd-yyyy, MM-dd-yyyy HH:mm:SS
Upvotes: 0
|
2018/03/16 | 1,004 | 3,636 |
<issue_start>username_0: I am trying to set a default value on a select option with Angular 5.
I have read about `[compareWith]` but it does not seem to help.
Here is the code:
**My enumeration:**
```
export enum EncryptionProtocol {
NONE,
SSL,
TLS }
```
**My Config Model**
```
export class Config implements BaseEntity {
constructor(
public id?: number,
...
public encryptionProtocol?: EncryptionProtocol,
...
) {
}
}
```
**My HTML:**
```
Encryption Protocol
{{encryptionProtocol}}
```
**EDIT: My TS:**
```
...
encryptionProtocols = EncryptionProtocol;
encryptionProtocolKeys() : Array<string> {
var keys = Object.keys(this.encryptionProtocols);
return keys.slice(keys.length / 2);
}
...
compareFn(c1: EncryptionProtocol, c2: EncryptionProtocol): boolean {
console.log('compare function');
console.log(c1);
console.log(c2);
console.log(c1 && c2 ? c1.valueOf() === c2.valueOf() : c1 === c2);
return c1 && c2 ? c1.valueOf() === c2.valueOf() : c1 === c2;
}
...
ngOnInit() {
this.config = {encryptionProtocol: EncryptionProtocol.NONE, headers: []};
}
```
**and the output at the console is:**
```
compare function
NONE
undefined
false
compare function
NONE
undefined
false
compare function
SSL
undefined
false
compare function
NONE
undefined
false
compare function
SSL
undefined
false
compare function
TLS
undefined
false
{encryptionProtocol: 0, headers: Array(0)}
0
compare function
NONE
null
false
compare function
SSL
null
false
compare function
TLS
null
false
compare function
NONE
0
false
compare function
SSL
0
false
compare function
TLS
0
false
compare function
NONE
SSL
false
compare function
SSL
SSL
true
```
You will notice that the last **'compare function'** block at the end is **true**, and the pre-selected value in the drop-down is ***SSL***, although I have set ***NONE*** in the onInit method.
Do you have any idea why this is happening and what I should do in order to select the correct value?
**UPDATE:**
Here is the response from the server:
```
{
"id" : 50000,
...
"encryptionProtocol" : "SSL",
"keystoreType" : "PKCS12",
...
}
```
and here is the code that assigns this object to the model:
```
private subscribeToSaveResponse(result: Observable>) {
result.subscribe((res: HttpResponse) =>
this.onSaveSuccess(res.body), (res: HttpErrorResponse) => this.onSaveError());
}
private onSaveSuccess(config: Config) {
this.isSaving = false;
this.config = config;
}
```
However, although the default selection works OK when we set it from the onInit method, it does not when the value comes back from the server.
I suspect the reason is that it comes as a string. Should I convert it to an enumeration value? What is the best way to do this?
Thank you in advance,<issue_comment>username_1: Add the following property to your component's script:
```
encryptionProtocolValues = EncryptionProtocol;
```
Then on HTML side, do this:
```
{{encryptionProtocol}}
```
Explanation: when you define an enum with only keys, the names you define are that, keys. But behind the scenes their actual implicit values are 0, 1, 2, etc, that you can access via `EnumTypeName[EnumKey]`.
Note that you can use `[value]` instead of `[ngValue]`. I believe it's the proper thing to do with recent Angular versions at least.
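To make the implicit values concrete, here is a small standalone sketch (not tied to the question's component) showing the numeric values, the reverse mapping, and converting a server string back to the enum:
```
enum EncryptionProtocol { NONE, SSL, TLS }

// Keys get implicit numeric values 0, 1, 2
console.log(EncryptionProtocol.SSL);    // 1
// The reverse mapping resolves a key name from its value
console.log(EncryptionProtocol[1]);     // "SSL"

// A string coming back from the server can be mapped to the numeric value
const fromServer = 'SSL';
const asEnum: EncryptionProtocol =
    EncryptionProtocol[fromServer as keyof typeof EncryptionProtocol];
console.log(asEnum === EncryptionProtocol.SSL); // true
```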
Upvotes: 4 [selected_answer]<issue_comment>username_2: ```
{{encryptionProtocol}}
```
Thank you for your post... it helped me out a lot. The only thing I would add to the answer is the binding to the value, [value]="encryptionProtocol"; this will retain the choice the user selected.
Upvotes: 1
|
2018/03/16 | 438 | 1,315 |
<issue_start>username_0: I am working on the case statement below and keep getting a missing parenthesis error. Any suggestions?
```
( CASE
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 2
THEN 'A'
END
ELSE
CASE
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 3
THEN 'B'
END
END ) XYT_BAND
```<issue_comment>username_1: There should only be one `END` per `CASE` expression. This should work:
```
CASE
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 2
THEN 'A'
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 3
THEN 'B'
END XYT_BAND
```
If you need to nest `CASE` expressions then:
```
CASE
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 2
THEN 'A'
ELSE
CASE
WHEN XBAND = 4 AND TBAND = 0 AND YBAND >= 3
THEN 'B'
END
END XYT_BAND
```
Upvotes: 2 <issue_comment>username_2: In order to generate multiple rows for each entry you could use `union` or the more flexible `unpivot`, transposing columns to rows:
**[demo](http://rextester.com/SUOI51451)**
```
select id, xyt_band
from (select id,
case when xband = 4 and yband = 0 and tband >= 2 then 'A' end c1,
case when xband = 4 and yband = 0 and tband >= 3 then 'B' end c2
from t)
unpivot (xyt_band for cx in (C1, C2))
```
Upvotes: 2 [selected_answer]
|
2018/03/16 | 1,799 | 5,907 |
<issue_start>username_0: I am learning this ES6 feature (function generators) and I am having difficulty understanding the following code:
```
function* HelloGen2() {
var a = yield 100;
var b = yield a + 100;
console.log(b);
}
var gen2 = HelloGen2();
console.log(gen2.next()); // {value: 100, done: false}
console.log(gen2.next(500)); // {value: 600, done: false}
console.log(gen2.next(1000)); // {value: undefined, done: true}
```
Questions:
1. In the first `gen2.next()`, the line of code `var a = yield 100;` is executed; does it set some value on the `a` variable?
2. In each `gen2.next()`, is only the code up to the next semicolon executed?
So, for instance, in the second call `gen2.next(500)`, the line `console.log(b);` is not executed.
3. I do not understand the last line, `gen2.next(1000)`; how does the `b` variable get the value 1000?<issue_comment>username_1: The relationship between `yield x` and `generator.next(y)` is as follows:
* `generator.next(y)` **returns** `{value: x, done: false}`
* `yield x` **evaluates to** `y`
Additionally, a generator function doesn't begin execution when initially invoked. You can think of it as a constructor. Only after `generator.next()` is called does the actual function code execute until the first `yield` statement (or the function terminates).
In your example:
* `var gen2 = HelloGen2()` creates the generator, but nothing happens.
* `gen2.next()` executes the function up to `yield 100` and then returns `{value: 100...}`.
* `gen2.next(500)` resumes function execution, firstly evaluating the `yield` statement to 500 (and immediately setting this value into `a`), and continue execution up to `yield a+100` and then returns `{value: 600...}`.
* `gen2.next(1000)` resumes function execution, firstly evaluating the `yield` statement to 1000 (and immediately setting this value into `b`), and continue execution up to termination and then returns `{value: undefined, done: true}`.
That last `undefined` is the result of the function *not* returning any value explicitly.
Upvotes: 2 <issue_comment>username_2: >
> Questions :
>
>
> 1. In the first `gen2.next()` the line of code `var a = yield 100;` is called, is it set some value on the `var a` variable?
>
>
>
Not at this time, with the next call of `gen2.next(500)` it gets the value of `500`. At the first call the value is still `undefined`.
>
> 2. In each gen2.next() only the line until the semicolon is executed?
>
> So for instance, in the second call `gen2.next(500)` the line `console.log(b);` is not executed.
>
>
>
Right. The function stops directly after the last `yield` statement.
>
> 3. I do not understand the last line `gen2.next(1000)`, how the `b` variable get the 1000 value?
>
>
>
With the call of `next()`, the function continues at the last stop with the `yield` statement, the value is handed over and assigned to `b`.
The `console.log(b)`is performed and the function exits at the end and the result of the generator is set to `{ value: undefined, done: true }`.
```js
function* HelloGen2() {
var a, b;
console.log(a, b); // undefined undefined
a = yield 100;
console.log(a, b); // 500 undefined
b = yield a + 100;
console.log(b); // 1000
}
var gen2 = HelloGen2();
console.log('#1');
console.log(gen2.next()); // { value: 100, done: false }
console.log('#2');
console.log(gen2.next(500)); // { value: 600, done: false }
console.log('#3');
console.log(gen2.next(1000)); // { value: undefined, done: true }
console.log('#4');
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 2 <issue_comment>username_3: When working with coroutines, it's important to understand, that although `yield` and `next` are different keywords, what they do is essentially the same thing. Imagine two coroutines as being connected by a two-way communication pipe. Both
```
Y = yield X
```
and
```
Y = next(X)
```
perform the same set of operations:
* write X to the pipe
* wait for the answer
* once the answer is there, read it from the pipe and assign it to Y
* proceed with the execution
Initially, the main program is in the active state, and the generator (`gen2` in your example) is waiting listening to the pipe. Now, when you call `next`, the main program writes a dummy value (`null`) to the pipe, thus waking up the generator. The generator executes `yield 100`, writes `100` to the pipe and waits. The main wakes up, reads `100` from the pipe, logs it and writes `500`. The generator wakes up, reads `500` into `a`, etc. Here's the complete flow:
```
gen wait
main next() null -> pipe
main wait
gen pipe -> null
gen yield 100 100 -> pipe
gen wait
main pipe -> arg (100)
console.log(arg) 100
main next(500) 500 -> pipe
main wait
gen a= pipe -> a (500)
gen yield a + 100 600 -> pipe
gen wait
main pipe -> arg (600)
console.log(arg) 600
main next(1000) 1000 -> pipe
main wait
gen b= pipe -> b
console.log(b) 1000
gen (ended) done -> pipe
main pipe -> arg (done)
console.log(arg)
main (ended)
```
Essentially, to understand generators, you have to remember that when you assign or somehow else use the result of `yield/next`, there's a pause between the right and the left (or "generate" and "use") parts. When you do
```
var a = 5
```
this is executed immediately, while
```
var a = yield 5
```
involves a pause between `=` and `yield`. This requires some mental gymnastics, very similar to `async` workflows, but with time you'll get used to that.
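A tiny standalone example of that two-way exchange (independent of the question's code):
```
function* echo() {
  while (true) {
    // Pauses here; resumes with whatever the next next() call passes in
    const received = yield 'ready';
    console.log('got:', received);
  }
}

const g = echo();
console.log(g.next().value); // "ready" (the first next() only starts the body)
g.next(42);                  // logs "got: 42", then pauses at yield again
```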
Upvotes: 3 [selected_answer]
|
2018/03/16 | 2,766 | 9,128 |
<issue_start>username_0: I am new to Angular and working with key-value pairs for the first time. I am trying to get the key-value pairs based on a particular value inside a nested key-value map. I have nested JSON data in the following format:
```
trips = {
"20180201": [{
"journeyId": 1001,
"Number": "001",
"DriverName": "Alex",
"Transporter": {
"id": "T1",
"number": "AN01001",
"Company": "Tranzient"
},
"place": [{
"id": 001,
"value": "Washington DC"
}]
[{
"id": 002,
"value": "Canberra"
}]
}]
[{
"journeyId": 1002,
"Number": "001",
"DriverName": "Tom",
"Transporter": {
"id": "T2",
"number": "AN01002",
"Company": "Trax"
},
"place": [{
"id": 002,
"value": "Canberra"
}]
[{
"id": 004,
"value": "Vienna"
}]
}]
[{
"journeyId": 1003,
"Number": "004",
"DriverName": "Jack",
"Transporter": {
"id": "T3",
"number": "AN01003",
"Company": "Trax"
},
"place": [{
"id": 001,
"value": "Washington DC"
}]
[{
"id": 004,
"value": "Vienna"
}]
}],
"20180211": [{
"journeyId": 1004,
"Number": "005",
"DriverName": "Jack",
"Transporter": {
"id": "T3",
"number": "AN01013",
"Company": "Trax"
},
"place": [{
"id": 005,
"value": "Bridgetown"
}]
[{
"id": 006,
"value": "Ottawa"
}]
[{
"id": 004,
"value": "Vienna"
}]
}]
[{
"journeyId": 1005,
"Number": "005",
"DriverName": "Jerry",
"Transporter": {
"id": "T3",
"number": "AN01020",
"Company": "Trax"
},
"place": [{
"id": 005,
"value": "Bridgetown"
}]
[{
"id": 006,
"value": "Ottawa"
}]
}],
"20180301": [{
"journeyId": 1006,
"Number": "005",
"DriverName": "demy",
"Transporter": {
"id": "T3",
"number": "AN01003",
"Company": "Trax"
},
"place": [{
"id": 005,
"value": "Bridgetown"
}]
[{
"id": 006,
"value": "Ottawa"
}]
}]};
```
I am trying to filter out all the trips key-value pairs which have place[value]=Vienna.
My expected output should be:
```
trips = {
"20180201":
[{
"journeyId": 1002,
"Number": "001",
"DriverName":"Tom",
"Transporter": {
"id": "T2",
"number": "AN01002",
"Company": "Trax"
}
"place": [{"id":002,"value":"Canberra" }]
[{"id":004,"value":"Vienna"}]
}]
[{
"journeyId": 1003,
"Number": "004",
"DriverName":"Jack",
"Transporter": {
"id": "T3",
"number": "AN01003",
"Company": "Trax"
}
"place": [{"id":001,"value":"Washington DC" }]
[{"id":004,"value":"Vienna"}]
}],
"20180211": [{
"journeyId": 1004,
"Number": "005",
"DriverName":"Jack",
"Transporter": {
"id": "T3",
"number": "AN01013",
"Company": "Trax"
}
"place": [{"id":005,"value":"Bridgetown" }]
[{"id":006,"value":"Ottawa"}]
[{"id":004,"value":"Vienna"}]
}]
};
```
Please help me find the right approach. I am trying the following function but got stuck in the middle:
```
for (var date in trips) {
var res={}
for (var index = 0; index < trips[date].length; index++) {
var data = trips[date][index];
//rest of the logic here
}
}
```<issue_comment>username_1: You could try this
```
const expected = {};
for (let date in trips) {
for (let trip of trips[date]) {
if (trip.place.map(place => place.value).includes('Vienna')) {
expected[date] = trips[date];
}
}
}
```
Once your object is valid, this should work.
Upvotes: 0 <issue_comment>username_2: Here you go using [Array.reduce](https://www.w3schools.com/jsref/jsref_reduce.asp), [Array.filter](https://www.w3schools.com/Jsref/jsref_filter.asp), [Array.some](https://www.w3schools.com/Jsref/jsref_some.asp) and [Object.keys](https://developer.mozilla.org/fr/docs/Web/JavaScript/Reference/Objets_globaux/Object/keys)
```
const filteredTrips = Object.keys(trips).reduce((tmp, x) => {
const filtered = trips[x].filter(y => y.place.some(z => z.value === 'Vienna'));
if (filtered.length) {
tmp[x] = filtered;
}
return tmp;
}, {});
```
```js
const trips = {
"20180201": [{
"journeyId": 1001,
"Number": "001",
"DriverName": "Alex",
"Transporter": {
"id": "T1",
"number": "AN01001",
"Company": "Tranzient"
},
"place": [{
"id": 001,
"value": "Washington DC"
},
{
"id": 002,
"value": "Canberra"
}
],
},
{
"journeyId": 1002,
"Number": "001",
"DriverName": "Tom",
"Transporter": {
"id": "T2",
"number": "AN01002",
"Company": "Trax"
},
"place": [{
"id": 2,
"value": "Canberra"
},
{
"id": 4,
"value": "Vienna"
}
],
},
{
"journeyId": 1003,
"Number": "004",
"DriverName": "Jack",
"Transporter": {
"id": "T3",
"number": "AN01003",
"Company": "Trax"
},
"place": [{
"id": 1,
"value": "Washington DC",
}, {
"id": 4,
"value": "Vienna",
}],
}
],
"20180211": [{
"journeyId": 1004,
"Number": "005",
"DriverName": "Jack",
"Transporter": {
"id": "T3",
"number": "AN01013",
"Company": "Trax"
},
"place": [{
"id": 5,
"value": "Bridgetown"
},
{
"id": 6,
"value": "Ottawa"
},
{
"id": 4,
"value": "Vienna"
}
],
},
{
"journeyId": 1005,
"Number": "005",
"DriverName": "Jerry",
"Transporter": {
"id": "T3",
"number": "AN01020",
"Company": "Trax"
},
"place": [{
"id": 5,
"value": "Bridgetown"
},
{
"id": 6,
"value": "Ottawa"
}
],
}
],
"20180301": [{
"journeyId": 1006,
"Number": "005",
"DriverName": "demy",
"Transporter": {
"id": "T3",
"number": "AN01003",
"Company": "Trax"
},
"place": [{
"id": 5,
"value": "Bridgetown"
},
{
"id": 6,
"value": "Ottawa"
}
],
}],
};
const filteredTrips = Object.keys(trips).reduce((tmp, x) => {
const filtered = trips[x].filter(y => y.place.some(z => z.value === 'Vienna'));
if (filtered.length) {
tmp[x] = filtered;
}
return tmp;
}, {});
console.log(filteredTrips);
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Let's assume the author just mistyped the input data.
Here is the code:
```
function getTripsWithPlaceVanillaJS(trips, place) {
var result = {};
for (var key in trips) {
if (trips.hasOwnProperty(key)) {
var journeys = trips[key];
var filteredJourneys = [];
for (var i = 0; i < journeys.length; i++) {
var journey = journeys[i];
for (var j = 0; j < journey.place.length; j++) {
if (journey.place[j].value === place) {
filteredJourneys.push(journey);
break;
}
}
}
if (filteredJourneys.length) {
result[key] = filteredJourneys;
}
}
}
return result;
}
function getTripsWithPlaceSugaredES6(trips, place) {
return Object.keys(trips).reduce((result, key) => {
const filteredJourneys = trips[key].filter(journey => journey.place.some(item => item.value === place));
if (filteredJourneys.length) {
result[key] = filteredJourneys;
}
return result;
}, {});
}
```
Upvotes: 0
|
2018/03/16 | 249 | 804 |
<issue_start>username_0: I have a requirement where I need to check whether all records of a column `cost_code` contain a string composed of the corresponding `invoices` record
and one numeric character.
How can I achieve this task?
I tried it with this query:
```
SELECT * FROM Deal WHERE cost_code LIKE ('invoice%');
```<issue_comment>username_1: `CONCAT()` can be used with `LIKE`
```
SELECT * FROM Deal WHERE cost_code LIKE CONCAT(invoices, '%');
```
If `cost_code` is `ABCD1234` and `invoices` is `1234`, then you can use the query as:
```
SELECT * FROM Deal WHERE cost_code LIKE
CONCAT( SUBSTRING( invoices, 5, length(invoices) -4), '%');
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can use regular expressions:
```
where not regexp_like(cost_code, '^invoice[0-9]$')
```
Upvotes: 1
|
2018/03/16 | 701 | 2,256 |
<issue_start>username_0: I am trying to add an SSL certificate to a WordPress container, but the default compose configuration only publishes port 80.
How can I add a new port to the running container? I tried modifying the docker-compose.yml file and restarting the container, but this doesn't solve the problem.
Thank you.<issue_comment>username_1: Have you tried like in this example:
<https://docs.docker.com/compose/compose-file/#ports>
Should work like this:
```
my-services:
ports:
- "80:80"
- "443:443"
```
Upvotes: 0 <issue_comment>username_2: Expose ports.
Either specify both ports (HOST:CONTAINER), or just the container port (an ephemeral host port is chosen).
Note: When mapping ports in the HOST:CONTAINER format, you may experience erroneous results when using a container port lower than 60, because YAML parses numbers in the format xx:yy as a base-60 value. For this reason, we recommend **always explicitly specifying your port mappings as strings**.
ports:
- "3000"
- "3000-3005"
- "8000:8000"
- "9090-9091:8080-8081"
- "49100:22"
- "127.0.0.1:8001:8001"
- "127.0.0.1:5000-5010:5000-5010"
- "6060:6060/udp"
<https://docs.docker.com/compose/compose-file/#pid>
Upvotes: 1 <issue_comment>username_3: you just add the new port in the port section of the `docker-compose.yml` and then you must do
```
docker-compose up -d
```
because it will read the .yml file again and recreate the container. If you just restart, it will not read the new config from the .yml and will just restart the same container.
Upvotes: -1 <issue_comment>username_4: You should re-create the container when it needs to listen on a new port, like this:
```
docker-compose up -d --force-recreate {CONTAINER}
```
Upvotes: 3 <issue_comment>username_5: After you add the new port to the docker-compose file, what I did that works is:
1. Stop the container
docker-compose stop
2. Run the docker-compose up command (NOTE: docker-compose start did not work)
docker-compose up -d
According to the documentation the 'docker-compose' command:
>
> Builds, (re)creates, starts, and attaches to containers for a service
> ... Unless they are already running
>
>
>
That started up the stopped service, WITH the exposed ports I had configured.
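Putting this together for the WordPress case in the question, the compose file would publish the HTTPS port before the container is re-created (a sketch; the service name and image are illustrative):
```
services:
  wordpress:
    image: wordpress
    ports:
      - "80:80"
      - "443:443"   # newly published HTTPS port
```
followed by `docker-compose up -d` so the container is re-created with the new mapping.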
Upvotes: 1
|
2018/03/16 | 915 | 3,060 |
<issue_start>username_0: I'm currently planning a new application. The values used in this application can either be entered by the user or calculated by the application itself.
What I need to do is mark the values in a way that lets me clearly identify the source (user or app).
EDIT (additional information): The idea is not to identify whether a user can input a value or not. The idea is to have the possibility to clearly flag values as being calculated or entered by the user. The value will not only be used in the view but also in calculations.
A class A would have a double v. This double can either be calculated or have been entered by a user.
```
class A
{
public double v; // <-- this would be the value I'd like to mark as program-defined or user defined
}
```
So, when I do this:
```
v = 1.0;
```
it would be marked as program-defined.
Has anyone a hint on how to achieve this in C#?
In C++ I would create a base class and derive from it. But in C# I would like to take a more general approach which doesn't force me to create a class per input type.
Any ideas on how to achieve this?
Thanks.
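One general approach in the spirit of the question, avoiding a class per input type by wrapping values in a single generic type, might be sketched like this (all names are illustrative, not a definitive design):
```
public enum ValueSource { Program, User }

// One generic wrapper covers double, string, and any other value type
public readonly struct Tracked<T>
{
    public T Value { get; }
    public ValueSource Source { get; }

    public Tracked(T value, ValueSource source)
    {
        Value = value;
        Source = source;
    }
}

class A
{
    // The program-defined assignment v = 1.0 would become:
    public Tracked<double> v = new Tracked<double>(1.0, ValueSource.Program);
}
```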
|
2018/03/16 | 508 | 1,576 |
<issue_start>username_0: Hi, I am trying to add Bootstrap to an Angular project, and the styling is not reflected in the browser.
```
The details are as below.
Versions:
Angular CLI: 1.7.3
Node: 9.7.1
OS: win32 x64
Angular: 5.2.8
```
installed bootstrap and jquery using angular cli
```
"bootstrap": "^4.0.0"
"jquery": "^3.3.1"
```
in .angular-cli.json added as below
```
"styles": [
"styles.css",
"../node_modules/bootstrap/dist/css/bootstrap.min.css"
],
"scripts": [
"../node_modules/jquery/dist/jquery.min.js",
"../node_modules/bootstrap/dist/js/bootstrap.min.js"
],
```
Below is the code in my html.
```
Angular + Bootstrap
* [Link (current)](#)
* [Link](#)
* Dropdown
+ [Action](#)
+ [Another action](#)
+ [Something else here](#)
+
+ [Separated link](#)
+
+ [One more separated link](#)
```
I am getting the view in browser as below and not getting any errors in the browser console.
[output in browser](https://i.stack.imgur.com/E1sz6.jpg)
Thanks for your help in advance.<issue_comment>username_1: Try this:
Go to the app folder --> src.
Open style.css and paste the code below:
@import "~bootstrap/dist/css/bootstrap.css";
Upvotes: 1 <issue_comment>username_2: You have installed Bootstrap 4 but are using Bootstrap 3 navbar code.
Here is the sample code for a Bootstrap 4 navbar. Some classes have changed in the latest Bootstrap 4 version.
```
* [Link 1](#)
* [Link 2](#)
* [Link 3](#)
```
for more info refer this [link](https://www.w3schools.com/bootstrap4/bootstrap_navbar.asp)
Upvotes: 3 [selected_answer]
|
2018/03/16 | 560 | 1,944 |
<issue_start>username_0: I know TypeScript is strongly typed, but why does the following code print `12` instead of `3`?
```
function add_numbers(a: number, b: number){
return a + b;
}
var a = '1';
var b = 2;
var result = add_numbers(a, b);
console.log(result);
```<issue_comment>username_1: The ***any*** type holds a special place in the TypeScript type system. It gives you an escape hatch from the type system to tell the compiler to bugger off. any is compatible with any and all types in the type system. This means that anything can be assigned to it and it can be assigned to anything. This is demonstrated in the example below:
```
var power: any;
// Takes any and all types
power = '123';
power = 123;
// Is compatible with all types
var num: number;
power = num;
num = power;
```
For Ref : <https://basarat.gitbooks.io/typescript/content/docs/types/type-system.html>
Upvotes: 1 <issue_comment>username_2: TypeScript is strongly typed at compile time; it can't prevent runtime errors if you write poor code. You've overridden the types (using `any`) to trick the compiler, so the normal JavaScript behavior of adding a string to a number is executed when your function runs.
If you remove the casting you're doing with `any`, you'll see that TypeScript catches and flags your error.
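A sketch of both behaviours, assuming the question's cast used `any` (which is what the surrounding discussion suggests):
```
function add_numbers(a: number, b: number) {
    return a + b;
}

const a = '1';
// add_numbers(a, 2);       // compile-time error: string is not assignable to number
add_numbers(a as any, 2);   // compiles, but at runtime evaluates to '12'
```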
Upvotes: 3 [selected_answer]<issue_comment>username_3: In the end, TypeScript is always converted to JavaScript, so what you are doing is adding the string '1' and the number 2. JavaScript converts the number to a string and concatenates, because `+` with a string operand performs string concatenation rather than numeric addition.
If you declare a type for the variables passed to the function, the TypeScript transpiler will output an error:
```
var a: number = '1';
var b: number = 1;
add_numbers(a, b);
```
error:
>
> type mismatch in first line
>
>
>
```
var a: string = '1';
var b: number = 1;
add_numbers(a, b);
```
error:
>
> wrong parameter type in function call
>
>
>
Upvotes: 1
|
2018/03/16 | 910 | 3,440 |
<issue_start>username_0: I am doing this challenge of reversing a given sentence at <https://www.codewars.com/kata/reversed-words/train/java>. I have already managed to reverse the sentence as expected, but I am getting a slight error in their JUnit testing.
This is my code to reverse any sentence to the expected result, e.g.
>
> "The greatest victory is that which requires no battle"
>
> // should return "battle no requires which that is victory greatest The"
>
>
>
**My code**
```
public class ReverseWords{
public static String reverseWords(String sentence){
String reversedsentence ="";
for(int x=sentence.length()-1;x>=0;--x){ //Reversing the whole sentence
reversedsentence += sentence.charAt(x);
} //now you are assured the whole sentence is reversed
String[]words = reversedsentence.split(" "); //getting each word in the reversed sentence and storing it in a string array
String ExpectedSentence= "";
for(int y=0;y<words.length;y++){
String word = words[y];
String reverseWord = "";
for(int j=word.length()-1;j>=0;j--){ /*Reversing each word */
reverseWord += word.charAt(j);
}
ExpectedSentence +=reverseWord + " "; //adding up the words to get the expected sentence
}
return ExpectedSentence;
}
}
```
and there JUnit testing code
```
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import org.junit.runners.JUnit4;
// TODO: Replace examples and use TDD development by writing your own tests
public class SolutionTest {
@Test
public void testSomething() {
assertEquals(ReverseWords.reverseWords("I like eating"), "eating like I");
assertEquals(ReverseWords.reverseWords("I like flying"), "flying like I");
assertEquals(ReverseWords.reverseWords("The world is nice"), "nice is world The");
}
}
```
The error I am getting:
```
> expected: <eating like I > but was: <eating like I>
```
**More details about the error:**
```
> org.junit.ComparisonFailure: expected: <eating like I > but was: <eating like I> at org.junit.Assert.assertEquals(Assert.java:115) at org.junit.Assert.assertEquals(Assert.java:144) at SolutionTest.testSomething(SolutionTest.java:10)
```
You can simply follow this link and paste **my code**; you will see a code playground: [Train: Reversed Words |CodeWars](https://www.codewars.com/kata/reversed-words/train/java)<issue_comment>username_1: First, you are using `Assert.assertEquals()` incorrectly because the `expected` parameter should be provided first. Change it to:
```
assertEquals("eating like I", ReverseWords.reverseWords("I like eating"));
```
which makes the error clear:
```
> expected: <eating like I> but was: <eating like I >
```
This is caused by the following line, which blindly adds a space after each `reverseWord` processed:
```
ExpectedSentence +=reverseWord + " "; //adding up the words to get the expected sentence
```
Upvotes: 2 <issue_comment>username_2: Try it this way
```
public static void main(String[] args) {
Assert.assertEquals("sentence a is This", reverseSentence("This is a sentence"));
}
public static String reverseSentence(String sentence) {
String[] words = sentence.split(" ");
for (int i = 0; i < words.length / 2; i++) {
String temp = words[i];
words[i] = words[words.length - i - 1];
words[words.length - i - 1] = temp;
}
return String.join(" ", words);
}
```
With `String.join()` you skip the hassle of avoiding a trailing delimiter after the last element.
Upvotes: 2
|
2018/03/16 | 771 | 2,907 |
<issue_start>username_0: I'm new to using Heroku, so I'm a bit confused: I initially had my website hosted on GoDaddy, but when I added Node to my web app I switched to Heroku.
What is happening is that when I visit my website using www. it goes to the correct, updated site; however, when I try the naked route it takes me to the older version of the website. I figured that since it's still showing the older site on the naked route there was a problem on GoDaddy, so I updated one page on GoDaddy, and sure enough, the page is now updated via the naked route.
That leads to my first question: if my website is now on Heroku, does it still need to be hosted on GoDaddy as well? Additionally, if the answer to the first question is yes, then how do I set up the naked route to point to the same site as the www. route?
I already have the host set as www with 'points to' pointing to the Heroku domain name, but I am still getting the problem above. This is why I believe the problem lies with the hosting on GoDaddy.<issue_comment>username_1: **You don't need to host a copy of your site at GoDaddy.**
In Godaddy, you must have a **CNAME** **www** pointing to heroku:
* your\_application\_name.herokuapp.com
or
* www.your\_application\_name.com.herokudns.com.
To use heroku for your naked domain, you can define a redirection for the naked domain to your www:
1. Go to My domains (<https://dcc.godaddy.com/manage/>)
2. Click on Manage connection
3. Set forwarding option (choose www.your\_application\_name.com, forward type permanent)
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is what helped me as of 2019. First, get to the list of your domain(s) on GoDaddy's interface. You'll see something like this:
[GoDaddy's all domains page](https://i.stack.imgur.com/HOsOp.png)
Choose your domain. Once you're on your domain's settings, scroll to the bottom and click on 'Manage DNS'. You should see some records created, if there are any.
ACTUAL CONFIGURATION
FIRST STEP:
Create a CNAME record (there should be an 'add' button somewhere). The record should have the following parameters - Type - CNAME, Host - www, Points to - enter the link Heroku created for your app.
[creating a CNAME record on GoDaddy's](https://i.stack.imgur.com/OjjZD.png)
Sometimes when there are other old CNAME records, it'll throw an error. Erase old CNAME records and try again.
SECOND STEP:
Go to your terminal and enter the command 'host www.yourdomain.com'. If the CNAME record was successful, you should see the Heroku domain you entered earlier. IMPORTANT - You should also see a bunch of IP addresses, which we will need.
THIRD STEP:
Create 'A' records for all the IP addresses provided by the 'host www.yourdomain.com' terminal command. They should have the following parameters: Type - A, Host - @, Points to - IP address; choose a custom TTL and enter 600 seconds.
This should do it
Upvotes: 2
|
2018/03/16 | 963 | 3,360 |
<issue_start>username_0: I'm trying to display average ratings for jobs on my index page. It works perfectly fine on my [show page](https://i.stack.imgur.com/vQuM9.png),
but on my index page the stars are there but are [blank](https://i.stack.imgur.com/JWTiJ.png). How do I get them to display on my index page?
My Show Page:
```
#### Average Rating
>
Based on <%= @job.reviews.count %> reviews
$('.review-rating').raty({
readOnly: true,
score: function() {
return $(this).attr('data-score');
},
path: '/assets/'
});
$('.average-review-rating').raty({
readOnly: true,
path: '/assets/',
score: function() {
return $(this).attr('data-score')
}
});
```
Jobs Show controller
```
def show
if @job.reviews.blank?
@average_review = 0
else
@average_review = @job.reviews.average(:rating).round(2)
end
end
```
My Index Page:
```
<% @jobs.each do |job| %>
<%= link_to job.title, job_path(job) %>
<%= job.category %>
<%= job.city %>
>
Based on <%= job.reviews.count %> reviews
<% end %>
$('.average-review-rating').raty({
readOnly: true,
path: '/assets/',
score: function() {
return $(this).attr('data-score')
}
});
```<issue_comment>username_1: In your show page, you have @average\_review defined. I'm guessing this was done in your jobs controller in the show action.
In your index page, you will need to calculate the average rating for each job as you are iterating through them. You can do this the same way you defined @average\_rating. If you are defining your @average\_rating in the show action as:
```
@average_rating = job.reviews.sum('score') / job.reviews.count
```
You will need to either define this method in the model (the better option), so something like:
app/models/job.rb
```
def average_review
reviews.sum('score') / reviews.count
end
```
Then in your index page:
```
>
```
The other option is to set a variable for each object on the index page itself, less work but probably not as neat:
```
<% @jobs.each do |job| %>
<%= link_to job.title, job_path(job) %>
<%= job.category %>
<%= job.city %>
<% average_review = job.reviews.sum('score') / job.reviews.count %>
>
Based on <%= job.reviews.count %> reviews
<% end %>
```
EDIT:
In your app/models/job.rb
```
def average_review
reviews.blank? ? 0 : reviews.average(:rating).round(2)
end
```
And index.html.erb
```
>
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You can fetch these results like so:
```
def index
@jobs = Job.all
@average_reviews = Review.where(job_id: @jobs)
.group(:job_id)
.average(:rating)
.transform_values { |rating| rating.round(2) }
end
```
This will return a hash with job ids as keys and the average rating as value (rounded to 2 decimals). To access these average ratings you can do the following in your index view:
```
@jobs.each do |job|
average_review = @average_reviews[job.id]
end
```
The above code only makes 2 SQL calls: 1 for fetching your jobs and 1 for fetching all average reviews for the fetched jobs. **Keep in mind that if a job has no reviews, the value `nil` is returned when fetching the average rating from the hash.**
Upvotes: 0
|
2018/03/16 | 926 | 2,656 |
<issue_start>username_0: I'm attempting to create a RESTful API using PHP, but I'm unable to format the output the way I'm used to seeing it. I would appreciate some guidance. Thanks.
This is the current JSON output:
```
[
{
"boardID": "12345",
"MQ9": "673627",
"MQ131": "87565",
"MQ135": "67887",
"longitude": "51.504425",
"latitude": "-0.1291608",
"time": "13:32",
"date": "2018-03-14"
},
```
This is what i'm trying to achieve:
```
{
"data": [
{
"boardID": "12345",
"MQ9": "673627",
"MQ131": "87565",
"MQ135": "67887",
"longitude": "51.504425",
"latitude": "-0.1291608",
"time": "13:32",
"date": "2018-03-14"
},
```
This is my PHP:
```
<?php
require 'connect.php';
if(!$con){
die('Could not connect: '.mysqli_error());
}
$result = mysqli_query($con, "SELECT * FROM airQual");
while($row = mysqli_fetch_assoc($result)
{
$output[]=$row;
}
echo(json_encode($output, JSON_PRETTY_PRINT));
mysqli_close($con);
?>
```<issue_comment>username_1: The output is correct, as you're creating an array. If you want an object, create an object!
Do something like;
```
$myOuput = new stdClass;
$myOuput->data = $output;
json_encode($myOutput);
```
Upvotes: -1 <issue_comment>username_2: You can do this one of a couple of ways, but the easy way would be to do this, adding your output to another array:
```
$data = array('data' => $output);
echo(json_encode($data, JSON_PRETTY_PRINT));
```
For example:
```
$output = array('foo'=>1,'bar'=>2,'glorp'=>3);
$data = array("data" => $output);
echo(json_encode($data, JSON_PRETTY_PRINT));
```
returns
```
{
"data": {
"foo": 1,
"bar": 2,
"glorp": 3
}
}
```
You can also add your output to another array, as others have suggested, this way:
```
$output['data'] = array('foo'=>1,'bar'=>2,'glorp'=>3);
echo(json_encode($output, JSON_PRETTY_PRINT));
```
Which will give you the same return as above.
NOTE
----
You have a typo in your code:
```
while($row = mysqli_fetch_assoc($result) // missing closing )
```
`mysqli_error()` requires the connection:
```
die('Could not connect: '.mysqli_error($con));
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: Try this:
```
<?php
require 'connect.php';
if(!$con){
die('Could not connect');
}
$result = mysqli_query($con, "SELECT * FROM airQual");
while($row = mysqli_fetch_assoc($result))
{
$output['data'][]=$row;
}
echo(json_encode($output, JSON_PRETTY_PRINT));
mysqli_close($con);
?>
```
Upvotes: -1
|
2018/03/16 | 639 | 2,419 |
<issue_start>username_0: ```
```
I want to reduce the space between the TabLayout text and its indicator.
I am using a TabLayout with a collapsing toolbar so that when it slides up, the TabLayout pins to the top.
I have 4 tabs with their corresponding indicators.
I am sharing my XML and have attached an image as well.
[](https://i.stack.imgur.com/5P2dk.png)<issue_comment>username_1: Try this custom TabLayout and design your TextView as you need. The default TextView may have a margin or padding. Let me know if it works.
```
public class CustomTabLayout extends TabLayout{
private boolean useCustomTab;
public CustomTabLayout(Context context) {
super(context);
}
public CustomTabLayout(Context context, AttributeSet attrs) {
super(context, attrs);
}
public CustomTabLayout(Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
}
@Override
public void addTab(@NonNull Tab tab, boolean setSelected) {
if (useCustomTab){
TextView tv = (TextView) View.inflate(getContext(), R.layout.tab_text, null);
tv.setText(tab.getText());
tab.setCustomView(tv);
}
super.addTab(tab, setSelected);
}
public void setUseCustomTab(boolean useCustomTab){
this.useCustomTab = useCustomTab;
}
}
```
Upvotes: 0 <issue_comment>username_2: Try this
```
```
**`drawable/tab_selector`**
```
xml version="1.0" encoding="utf-8"?
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You should set the following property
```
app:tabPaddingBottom="-10dp"
```
Upvotes: 5 <issue_comment>username_4: I know this is coming late but for anyone that may later come across the same issue, below worked for me.
In your CardView, add:
```
app:cardElevation="0dp"
app:cardUseCompatPadding="false"
```
Upvotes: -1 <issue_comment>username_5: Simply we can set the `tabPaddingBottom` to any negative value, say `-15dp`, as per our requirement.
`tabPaddingBottom = "-15dp"` worked for me.
[](https://i.stack.imgur.com/1Ulzt.png)
Refer here for other paraments: <https://material.io/components/tabs/android#scrollable-tabs>
Upvotes: 0
|
2018/03/16 | 267 | 873 |
<issue_start>username_0: I can't set the Kendo dropdownlist selected item the way I want.
So far this is what I have tried, but it does not seem to work:
```
```
I did try to set it like this, but I am getting an error:
```
$("#YolTipleri").data("kendoDropDownList").value(2);
```
and it gives me the error:
```
Uncaught TypeError: Cannot read property 'value' of undefined
```
How can I set this value?<issue_comment>username_1: It might be just a typo, but I have a [kendo dojo](https://dojo.telerik.com/OsaKoVOT) working.
In the first line you don't have the `id` attribute that you are using in `jQuery`.
For `MVVM` binding and dataSource you can use the `data-bind` attribute.
```
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you want a solution without writing the id, you can use this:
```
$('[name="YolTipleri"]').data("kendoDropDownList").value(2);
```
Upvotes: 2
|
2018/03/16 | 352 | 1,060 |
<issue_start>username_0: I have been trying to separate each attribute (mmddhhmmyyyy.ss).
I was able to separate the seconds, as they are preceded by a ".":
```
set type [ split $timestamp "."]
lassign $type time seconds
```
I still can't figure out a way to separate each value of month, day, minute, etc.<issue_comment>username_1: To just separate the parts, you can use scan:
```
scan $timestamp %02d%02d%02d%02d%04d.%02d
```
If you want to keep leading zeroes, use 's' instead of 'd':
```
scan $timestamp %02s%02s%02s%02s%04s.%02s
```
To use the timestamp in subsequent calculations or formatting, it's easiest to use clock scan:
```
set sec [clock scan $timestamp -format %m%d%H%M%Y.%S]
clock format $sec
```
Upvotes: 2 <issue_comment>username_2: This is the best way:
The code is as follows:
```
set milliseconds [clock clicks]
variable time [format "%s_%03d" [clock format [clock seconds] -format %Y%m%d_%H%M%S] [expr {$milliseconds % 1000}] ]
puts $time
```
this will print the current time along with milliseconds.
Upvotes: 0
|
2018/03/16 | 643 | 2,311 |
<issue_start>username_0: I'm struggling to get bootstrap tag inputs working, but I can't see what I could be doing wrong. As far as I can tell I've even followed the steps in this post [How to use Bootstrap Tags Input plugin](https://stackoverflow.com/questions/38488382/how-to-use-bootstrap-tags-input-plugin)
```html
Name
```
*I'm seeing 2 things that are strange:*
**1)** The tags appear in the DOM but don't seem to have any sort of background applied so they appear invisible. But I can't see anywhere any mention of needing to specify a default color in the docs.
**2)** The control seems to resize itself as you're typing and putting in tags, which I don't understand, as it's supposed to have a fixed size.
Does anyone know what I'm doing wrong here? Obviously I could add some CSS hacks to get the tags appearing, but my understanding was that it should just work out of the box?<issue_comment>username_1: This is actually like proving Bootstrap wrong:
>
> because it looks like they have removed support for the *Bootstrap Tags Input plugin* after 3.3.7
>
>
>
The below snippet will work just fine:
```css
.bootstrap-tagsinput {
width: 100%;
}
```
```html
Name
```
when I removed :
```
```
and added :
```
```
*The background style started to appear solving your **first** problem.*
The second is actually the `Bootstrap Tags Input` plugin's default behaviour. You can refer to this [link](https://bootstrap-tagsinput.github.io/bootstrap-tagsinput/examples/), which is their documentation page, where you can see:
```
.bootstrap-tagsinput {
width: 100%;
}
```
The above style comes from `app.css`, which is linked from their example pages. This style is what keeps the input at `100%` of its parent's width. So if you want `100%` width, you have to use a custom style.
Hope this was helpful.
Upvotes: 4 [selected_answer]<issue_comment>username_2: **Here are my observations:**
The Bootstrap CSS is not loaded because:
1) Bootstrap 4.X is in beta.
2) The integrity attribute is preventing the Bootstrap CSS from loading.
I have updated the code with a stable version of Bootstrap.
```html
Name
```
Upvotes: 2 <issue_comment>username_3: You can set data in tags input like this:
```
$('#tag-input').tagsinput('add', 'tags data');
```
and then:
```
$('#tag-input').tagsinput('refresh');
```
Upvotes: 1
|
2018/03/16 | 879 | 3,421 |
<issue_start>username_0: I have an array of objects with a method process() that I want to run in parallel, and I wanted to try lambdas to achieve the parallelization. So I tried this:
```
Arrays.asList(myArrayOfItems).forEach(item->{
System.out.println("processing " + item.getId());
item.process();
});
```
Each process() call takes about 2 seconds, and I have noticed that there is still no speedup with the "parallelization" approach. It seems that everything is still running serially: the ids are printed in order, and between every print there is a pause of 2 seconds.
I have probably misunderstood something. What is needed to execute this in parallel using lambdas (hopefully in a very condensed way)?<issue_comment>username_1: The method Collection.forEach() is just iteration through all the elements. It is called **internal iteration** as it leaves it up to the collection *how* it will iterate, but it is still an iteration over all the elements.
If you want parallel processing, you have to:
1. Get a parallel stream from the collection.
2. Specify the operation(s) which will be done on the stream.
3. Do something with the result if you need to.
You may read the first part of my explanation here: <https://stackoverflow.com/a/22942829/2886891>
Upvotes: 1 <issue_comment>username_2: To create a parallel stream, invoke the operation `.parallelStream` on a Collection
See <https://docs.oracle.com/javase/tutorial/collections/streams/parallelism.html>
```
Arrays.asList(myArrayOfItems).parallelStream().forEach(item->{
System.out.println("processing " + item.getId());
item.process();
});
```
Upvotes: 1 <issue_comment>username_3: Lambdas themselves aren't executing anything in parallel. `Stream`s are capable of doing this, though.
Take a look at the method `Collection#parallelStream` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/Collection.html#parallelStream--)):
```
Arrays.asList(myArrayOfItems).parallelStream().forEach(...);
```
However, note that there is no guarantee or control when it will actually go parallel. From its documentation:
>
> Returns a **possibly parallel** Stream with this collection as its source. It is **allowable** for this method to **return a sequential stream**.
>
>
>
The reason is simple. You really need **a lot** of elements in your collection (like millions) for parallelization to actually pay off (or doing other heavy things). The overhead introduced with parallelization is **huge**. Because of that, the method might choose to use sequential stream instead, if it thinks that it will be faster.
Before you think about using parallelism, you should actually set up some benchmarks to test whether it improves anything. There are many examples where people just blindly used it without noticing that they actually decreased the performance. Also see [Should I always use a parallel stream when possible?](https://stackoverflow.com/questions/20375176/should-i-always-use-a-parallel-stream-when-possible).
---
You can check if a `Stream` is parallel by using `Stream#isParallel` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/stream/BaseStream.html#isParallel--)).
If you use `Stream#parallel` ([documentation](https://docs.oracle.com/javase/9/docs/api/java/util/stream/BaseStream.html#isParallel--)) directly on a stream, you get a parallel version.
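A quick way to see the difference (a standalone sketch, not the question's objects):
```
import java.util.Arrays;
import java.util.List;

public class ParallelCheck {
    public static void main(String[] args) {
        List<String> items = Arrays.asList("a", "b", "c");
        // Sequential by default
        System.out.println(items.stream().isParallel());            // false
        // Explicitly requested parallel
        System.out.println(items.stream().parallel().isParallel()); // true
        // The Collection shortcut
        System.out.println(items.parallelStream().isParallel());    // true
    }
}
```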
Upvotes: 3 [selected_answer]
|
2018/03/16 | 935 | 3,659 |
<issue_start>username_0: Something's probably wrong with the implementation of the `update()` method, but I'm not sure what it is:
```
@Override
public int update(@NonNull Uri uri, @Nullable ContentValues contentValues, @Nullable String s, @Nullable String[] strings) {
int count = 0;
switch (uriMatcher.match(uri)) {
case uriCode:
break;
default:
throw new IllegalArgumentException("Unknown URI " + uri);
}
getContext().getContentResolver().notifyChange(uri, null);
return count;
}
```
Here's how I call the method:
```
Random r1 = new Random();
long insertValue1 = r1.nextInt(5);
updatedValues.put(Provider.adPoints, insertValue1);
int value = getContentResolver().update(Provider.CONTENT_URI, updatedValues, null, null); //insert(Provider.CONTENT_URI, values);
```
I also want to make use of `count` and return the number of rows updated in `update()`. What seems to be wrong here?
|
2018/03/16
| 4,700 | 18,292 |
<issue_start>username_0: If I have a repository that contains submodules, and I want to make a bundle for sneakernet, how do I make the bundle include the objects needed to update the submodules?
For example, let's say I have a repository called `parent` and it contains a submodule called `submod`. I want to create a bundle that contains all the recent work since commit `basecommit`, so naturally, from inside the `parent` root directory I would do:
```
git bundle create mybundlefile.bundle basecommit..myworkingbranch
```
This creates a file called `mybundlefile.bundle` that contains all the commit objects from the `parent` repo on the range `basecommit..myworkingbranch` as well as the ref `myworkingbranch`. The problem is that if any of those commits changed the submodule, the resultant bundle will not be very useful, because such commits are only stored in the bundle file as changing the submodule hash. So the object stored in the bundle file just says "I'm commit `3ba024b` and I change the hash of submodule `submod` from `2b941cf` to `1bb84ec`." but the bundle doesn't actually include the objects `2b941cf..1bb84ec` necessary to update the submodule from the bundle and make a clean working tree for `myworkingbranch`.
How do I create the bundle file such that all those objects from the submodule repos are also included. That is, if the parents repo's base commit `basecommit` points submodule `submod` at hash `A` and the parent repo's working branch `myworkingbranch` points submodule `submod` at hash `B`, then my bundle needs to contain not only `basecommit..myworkingbranch`, but also `A..B`.<issue_comment>username_1: >
> How do I create the bundle file such that all those objects from the submodule repos are also included.
>
>
>
You can't. A bundle file is specific to a Git repository. A submodule is simply a link to *another* Git repository, so you must create a separate bundle for the separate Git repository. (You can then make an archive out of the various bundles.)
It's pretty clear that Git could descend into each submodule and run `git bundle` in each such one, making these various files for you. The arguments to pass to the sub-`git bundle` commands are tricky, though. You will have to write your own script and use `git submodule foreach` to get it to run in each submodule, and have your script figure out the parameters to use.
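For example, a rough and untested sketch that sidesteps the hard part (computing a per-submodule baseline) by bundling each submodule's complete history could look like:

```
# from the superproject root
git bundle create parent.bundle basecommit..myworkingbranch

# $toplevel and $name are set by `git submodule foreach`
git submodule foreach 'git bundle create "$toplevel/$name.bundle" --all'
```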
You will then probably want to package together each bundle (presumably into a tar or rar or zip or whatever archive) for transport and unbundling / fetching. You'll want another `git submodule foreach` during the unbundling, which will probably be even more annoying since ideally this should use the new set of submodules (after unbundling the top level one and selecting an appropriate commit).
It's possible that someone has written scripts to do this, but if so, I don't know of it. It's not included in Git itself—bundles themselves are kind of klunky and non-mainstream-y.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I wrote [github.com/username_2/submodule\_bundler](https://github.com/username_2/submodule_bundler) to do this. There are a lot of corner cases, and I doubt that I have gotten them all. Please try it out and if any fixes are necessary for your use case, open an issue on the project.
For posterity, I will list all of the code in the project above; but you should just clone directly from github.
bundle.py
=========
```py
#!/usr/bin/env python3
""" Create bundles for submodules """
import os
import argparse
import subprocess
import tarfile
import submodule_commits
import string
import random
import shutil
parser = argparse.ArgumentParser(description='Create bundles for submodules (recursively), \
to facilitate sneakernet connections. On the online computer, \
a bundle is made for each repository, and then packed into a .tar file. \
On the offline computer, use unbundle.py on the tarfile to unzip and \
pull from the corresponding bundle for each repository.')
parser.add_argument('filename', metavar='filename', type=str, help='file to create e.g. ../my_bundles.tar')
parser.add_argument('commit_range', metavar='[baseline]..[target]', type=str, default='..HEAD', nargs='?',
help='commit range of top-level repository to bundle; defaults to everything')
args = parser.parse_args()
class IllegalArgumentError(ValueError):
pass
try:
[baseline, target] = args.commit_range.split('..')
except ValueError:
raise IllegalArgumentError(f"Invalid commit range: '{args.commit_range}': "
+ "Expected [baseline]..[target]. Baseline and target are optional "
+ "but the dots are necessary to distinguish between the two.") from None
full_histories = False
from_str = f'from {baseline} '
if baseline == '':
print("No baseline (all bundles will be complete history bundles)")
full_histories = True
from_str = "from scratch "
if target == '':
target = 'HEAD'
print('Making bundles to update ' + from_str + f'to {target}')
updates_required = {}
new_submodules = {}
bundles = []
for submodule in submodule_commits.submodule_commits('.', target):
new_submodules[submodule['subdir']] = submodule['commit']
root_dir = os.getcwd()
tar_file_name = os.path.basename(args.filename).split('.')[0]
temp_dir = f'temp_dir_for_{tar_file_name}_bundles' # note this won't work if that dir already has contents
def create_bundle(submodule_dir, new_commit_sha, baseline_descriptor=''):
bundle_path_in_temp = f'{submodule_dir}.bundle'
bundle_path = f'{temp_dir}/{bundle_path_in_temp}'
if submodule_dir == '.':
route_to_root = './'
else:
route_to_root = (submodule_dir.count('/') + 1) * '../'
os.makedirs(os.path.dirname(bundle_path), exist_ok=True)
os.chdir(submodule_dir)
rev_parse_output = subprocess.check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])
current_branch = rev_parse_output.decode("utf-8").strip('\n')
subprocess.run(['git', 'bundle', 'create', route_to_root + bundle_path,
f'{baseline_descriptor}{current_branch}', '--tags'])
bundles.append(bundle_path_in_temp)
os.chdir(root_dir)
if not full_histories:
for existing_commit in submodule_commits.submodule_commits('.', baseline):
baseline_commit = existing_commit['commit']
submodule_dir = existing_commit['subdir']
new_commit_sha = new_submodules.pop(submodule_dir, None)
if new_commit_sha is None:
# the submodule was removed, don't need to make any bundle
continue
if new_commit_sha == baseline_commit:
# no change, no bundle
continue
print(f"Need to update {submodule_dir} from {baseline_commit} to {new_commit_sha}")
create_bundle(submodule_dir, new_commit_sha, f'{baseline_commit}..')
for submodule_dir, commit_sha in new_submodules.items():
print(f"New submodule {submodule_dir}")
bundle_name = f'{submodule_dir}.bundle'
create_bundle(submodule_dir, commit_sha)
# the bundle of the top-level repository itself is oddly called '..bundle'
# it is impossible to have a submodule that clashes with this
# because you cannot name a directory '.'
baseline_descriptor = ''
if not full_histories:
baseline_descriptor = f'{baseline}..'
create_bundle('.', target, baseline_descriptor)
print("Packing bundles into tarfile:")
with tarfile.open(args.filename, mode="w:") as tar: # no compression; git already does that
os.chdir(temp_dir)
for bundle in bundles:
print(bundle)
tar.add(bundle)
os.chdir(root_dir)
print("Removing temp directory")
shutil.rmtree(temp_dir)
```
unbundle.py
===========
```py
#!/usr/bin/env python3
""" Extract bundles for submodules """
import os
import argparse
import shutil
import tarfile
import pullbundle
import submodule_commits
import subprocess
parser = argparse.ArgumentParser(description='Extract bundles for submodules (recursively), \
to facilitate sneakernet connections. On the online computer, \
a bundle is made for each repository, and then packed into a .tar file. \
On the offline computer, use unbundle.py on the tarfile to unzip and \
pull from the corresponding bundle for each repository.')
parser.add_argument('filename', metavar='filename', type=str, help='tar file to extract e.g. ../my_bundles.tar')
args = parser.parse_args()
tar_file_name = os.path.basename(args.filename).split('.')[0]
temp_dir = f'temp_dir_for_{tar_file_name}_extraction'
with tarfile.open(args.filename, 'r:') as tar:
tar.extractall(temp_dir)
root_dir = os.getcwd()
def is_git_repository(dir):
""" Return true iff dir exists and is a git repository (by checking git rev-parse --show-toplevel) """
if not os.path.exists(dir):
return False
previous_dir = os.getcwd()
os.chdir(dir)
rev_parse_toplevel = subprocess.check_output(['git', 'rev-parse', '--show-toplevel'])
git_dir = rev_parse_toplevel.decode("utf-8").strip('\n')
current_dir = os.getcwd().replace('\\', '/')
os.chdir(previous_dir)
return current_dir == git_dir
pullbundle.pullbundle(f'{temp_dir}/..bundle', True)
for submodule in submodule_commits.submodule_commits():
subdir = submodule["subdir"]
commit = submodule["commit"]
print(f'{subdir} -> {commit}')
bundle_file_from_root = f'{temp_dir}/{subdir}.bundle'
if not os.path.isfile(bundle_file_from_root):
print(f'Skipping submodule {subdir} because there is no bundle')
else:
if not is_git_repository(subdir):
# clone first if the subdir doesn't exist or isn't a git repository yet
subprocess.run(['git', 'clone', bundle_file_from_root, subdir])
route_to_root = (subdir.count('/') + 1) * '../'
bundle_file = f'{route_to_root}{bundle_file_from_root}'
os.chdir(subdir)
pullbundle.pullbundle(bundle_file)
os.chdir(root_dir)
print("Removing temp directory")
shutil.rmtree(temp_dir)
subprocess.run(['git', 'submodule', 'update', '--recursive'])
```
pullbundle.py
=============
```py
#!/usr/bin/env python3
""" Pull from bundles """
import argparse
import subprocess
import re
import os
ref_head_regex = 'refs/heads/(.*)'
head_commit = None
class UnableToFastForwardError(RuntimeError):
pass
def iterate_branches(bundle_refs):
""" Given lines of output from 'git bundle unbundle' this writes the HEAD commit to the head_commit global
and yields each branch, commit pair """
global head_commit
for bundle_ref in bundle_refs:
ref_split = bundle_ref.split()
commit = ref_split[0]
ref_name = ref_split[1]
if ref_name == 'HEAD':
head_commit = commit
else:
match = re.search(ref_head_regex, ref_name)
if match:
branch_name = match.group(1)
yield (branch_name, commit)
def update_branch(branch, commit, check_divergence=False):
""" Update branch to commit if possible by fast-forward """
rev_parse_branch_output = subprocess.check_output(['git', 'rev-parse', branch])
old_commit = rev_parse_branch_output.decode("utf-8").strip('\n')
if old_commit == commit:
print(f'Skipping {branch} which is up-to-date at {commit}')
else:
rev_parse_current_output = subprocess.check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD'])
current_branch = rev_parse_current_output.decode("utf-8").strip('\n')
returncode = subprocess.call(['git', 'merge-base', '--is-ancestor', branch, commit])
branch_is_behind_commit = returncode == 0
if branch_is_behind_commit:
print(f'Fast-forwarding {branch} from {old_commit} to {commit}')
if current_branch == branch:
subprocess.call(['git', 'reset', '--hard', '-q', commit])
else:
subprocess.call(['git', 'branch', '-Dq', branch])
subprocess.run(['git', 'branch', '-q', branch, commit])
else:
returncode = subprocess.call(['git', 'merge-base', '--is-ancestor', commit, branch])
branch_is_ahead_of_commit = returncode == 0
if branch_is_ahead_of_commit:
print(f'Skipping {branch} which is at {old_commit}, ahead of bundle version {commit}')
if current_branch == branch and check_divergence:
raise UnableToFastForwardError("Unable to update branch: already ahead of bundle") from None
else:
print(f'Error: {branch} already exists, at {old_commit} which diverges from '
+ f'bundle version at {commit}')
print('You could switch to the bundle version as follows, but you might lose work.')
print(f'git checkout -B {branch} {commit}')
if current_branch == branch and check_divergence:
raise UnableToFastForwardError("Unable to update branch: diverged from bundle") from None
def checkout(commit):
subprocess.run(['git', 'checkout', '-q', '-f', commit])
def pullbundle(bundle_file, check_divergence=False):
""" Main function; update all branches from given bundle file """
global head_commit
head_commit = None
subprocess.run(['git', 'fetch', bundle_file, '+refs/tags/*:refs/tags/*'], stderr=subprocess.DEVNULL)
unbundle_output = subprocess.check_output(['git', 'bundle', 'unbundle', bundle_file])
bundle_refs = filter(None, unbundle_output.decode("utf-8").split('\n'))
for branch, commit in iterate_branches(bundle_refs):
returncode = subprocess.call(['git', 'show-ref', '-q', '--heads', branch])
branch_exists = returncode == 0
if branch_exists:
update_branch(branch, commit, check_divergence)
else:
print(f'Created {branch} pointing at {commit}')
subprocess.run(['git', 'branch', branch, commit])
checkout(commit)
if head_commit is not None:
# checkout as detached head without branch
# note this might not happen; if the bundle updates a bunch of branches
# then whichever one we were already on is updated already and we don't need to do anything here
checkout(head_commit)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Update all branches and tags contained in a bundle file')
parser.add_argument('filename', metavar='filename', help='git bundle file to pull e.g. ../foo.bundle')
parser.add_argument('-c', '--check_divergence', help="return an errorcode if the current branch was not updated "
+ "because of already being ahead or having diverged from the bundle version of that branch",
action='store_true')
args = parser.parse_args()
pullbundle(args.filename, args.check_divergence)
```
submodule\_commits.py
=====================
```py
#!/usr/bin/env python3
""" Print the commit of each submodule (recursively) at some commit"""
import os
import argparse
import subprocess
import re
def print_submodule_commits(root_subdir, root_commit):
for result in submodule_commits(root_subdir, root_commit):
print(f'{result["subdir"]} {result["commit"]}')
def submodule_commits(subdir='.', commit='HEAD', prefix=''):
is_subdir = subdir != '.'
if is_subdir:
previous_dir = os.getcwd()
os.chdir(subdir)
git_ls_tree = subprocess.check_output(['git', 'ls-tree', '-r', commit])
ls_tree_lines = filter(None, git_ls_tree.decode("utf-8").split("\n"))
submodule_regex = re.compile(r'^[0-9]+\s+commit')
for line in ls_tree_lines:
if submodule_regex.match(line):
line_split = line.split()
commit_hash = line_split[2]
subdirectory = line_split[3]
submodule_prefix = subdirectory
if prefix != '':
submodule_prefix = f'{prefix}/{subdirectory}'
yield {'subdir': submodule_prefix, 'commit': commit_hash}
yield from submodule_commits(subdirectory, commit_hash, submodule_prefix)
if is_subdir:
os.chdir(previous_dir)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Print the commit of each submodule (recursively) at some commit')
parser.add_argument('commit', metavar='commit_hash', type=str, default='HEAD', nargs='?',
help='commit to examine; defaults to HEAD')
args = parser.parse_args()
print_submodule_commits('.', args.commit)
```
Upvotes: 2 <issue_comment>username_3: I'm using Ansible to create a `git bundle` and push it to the server the remotely running `git clone repo.bundle`.
What I ended up doing, was just reviewing with an Ansible task the following files:
```
.git/config
.gitmodules
```
And adjusted the links accordingly with:
```
- name: "Adjust URLs for local git bundles"
replace:
path : "/path/to/repo/{{ item }}"
regexp : 'url = ssh://git@bitbucket.domain.com:7999/myproject/(\w+).git'
replace: 'url = /home/myuser/bundles/\1.bundle'
loop:
- ".git/config"
- ".gitmodules"
```
So my playbook looks like this:
```
# Task: Create a local bundle
loop:
- main_repo.bundle
- submodule.bundle
- submodule2.bundle
# Task: Push the bundles to the remote server
loop:
- main_repo.bundle
- submodule.bundle
- submodule2.bundle
# Task: Git clone only main repo
shell: "git clone {{item}} {{dest_dir}}"
loop:
- main_repo.bundle
# Task: Adjust the URLS ( details listed above )
# Task: Git init & update submodules
shell: "{{ item }}"
args:
chdir: {{ dest_dir }}
loop:
- "git submodule init"
- "git submodule update"
```
This works just fine. The only thing I do afterwards is update those bundles, then inside the directory keep running `git fetch` and `git submodule update`.
Upvotes: 0
|
2018/03/16
| 4,654 | 17,784 |
<issue_start>username_0: I did some coding to get the nginx config file working.
My objective is to allow all `.well-known` folder and subfolders leaving the rest with basic auth, limit\_req and laravel compatible.
The problem now with Let's Encrypt is that it is not renewing the cert, because the route `.well-known/acme-challenge/wPCZZWAN8mlHLSQWr7ASZrJ_Tbk71g2Cd_1tPAv2JXM` is asking for authentication, probably affected by `location ~ \.php$`.
So the question is: can I combine these into a single location block, like `~ / and \.php$ \.(?!well-known).*`? And if so, can I merge the code of both together?
```
location ~ /\.(?!well-known).* {
limit_req zone=admin burst=5 nodelay;
limit_req_status 503;
try_files $uri $uri/ /index.php?$query_string;
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
auth_basic "Restricted Content";
auth_basic_user_file /etc/nginx/.htpasswd;
}
```
|
2018/03/16
| 1,820 | 7,024 |
<issue_start>username_0: I am trying to bootstrap my angular 5 app based on conditions. Basically I have two modules `MobileModule` and `WebModule` which contains UI components of web and mobile separately.
I am trying to bootstrap `MobileModule` if the user has opened the app in a mobile browser, and `WebModule` otherwise.
Here is my `main.ts` source.
```
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { environment } from './environments/environment';
import { AppSettings } from './app/core/app-settings';
import { MobileModule } from './app/mobile/mobile.module';
import { WebModule } from './app/web/web.module';
if (environment.production) {
enableProdMode();
}
/*
* Bootstrap mobile or web module based on mobile
* or desktop client.
*/
if (AppSettings.isMobileDevice) {
platformBrowserDynamic().bootstrapModule(MobileModule)
.catch(err => console.log(err));
} else {
platformBrowserDynamic().bootstrapModule(WebModule)
.catch(err => console.log(err));
}
```
`web.module.ts`
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing/app-routing.module';
import { CoreModule } from '../core/core.module';
import { SharedModule } from '../shared/shared.module';
import { AuthModule } from './auth/auth.module';
import { SignupComponent } from './auth/signup/signup.component';
@NgModule({
imports: [
BrowserModule, // Support for common angular directives and services
AppRoutingModule, // Web routing controller module.
CoreModule, // Core modules
SharedModule, // Shared modules
AuthModule, // App authentication module (signup and login)
],
declarations: [],
providers: [],
bootstrap: [SignupComponent] // Bootstrap Signup component
})
export class WebModule { }
```
`mobile.module.ts`
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing/app-routing.module';
import { CoreModule } from '../core/core.module';
import { SharedModule } from '../shared/shared.module';
import { AuthModule } from './auth/auth.module';
import { SignupComponent } from './auth/signup/signup.component';
@NgModule({
imports: [
BrowserModule, // Support for common angular directives and services
AppRoutingModule, // Mobile app routing controller module
CoreModule, // Core modules
SharedModule, // Shared modules
AuthModule, // App authentication module (signup and login)
],
declarations: [],
providers: [],
bootstrap: [SignupComponent] // Bootstrap Signup component
})
export class MobileModule { }
```
The above approach works fine with `ng serve`, but when I try to run it using the `ng serve --aot` option it throws an error in the Chrome console.
```
Uncaught Error: No NgModule metadata found for 'WebModule'.
at NgModuleResolver.resolve (compiler.js:20242)
at CompileMetadataResolver.getNgModuleMetadata (compiler.js:15195)
at JitCompiler._loadModules (compiler.js:34405)
at JitCompiler._compileModuleAndComponents (compiler.js:34366)
at JitCompiler.compileModuleAsync (compiler.js:34260)
at CompilerImpl.compileModuleAsync (platform-browser-dynamic.js:239)
at PlatformRef.bootstrapModule (core.js:5567)
at eval (main.ts:26)
at Object../src/main.ts (main.bundle.js:53)
at __webpack_require__ (inline.bundle.js:55)
```
My project structure is
```
|-node_modules
|-src
|-app
|-core
|-mobile
|-mobile.module.ts
|-shared
|-web
|-web.module.ts
```
I tried several things to conditionally bootstrap the app, but with no success. Any suggestion will be appreciated.<issue_comment>username_1: I think Angular 5 does not support multiple root modules because of the way AOT (Ahead of Time compilation) currently works. It seems to only compile the first module and its dependencies.
A workaround is to have a root module used only for bootstrapping, use the Router with Guards, and lazy-load only the desired module inside the root module (see the sketch after the references below).
References:
* Related Issue: <https://github.com/angular/angular-cli/issues/4624>
* Lazy-Loading Modules: <https://angular.io/guide/lazy-loading-ngmodules>
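A minimal, hypothetical sketch of such a guard (the class name, routes and device check are assumptions, not part of any Angular API beyond `CanActivate` and `Router`):

```
import { Injectable } from '@angular/core';
import { CanActivate, Router } from '@angular/router';

@Injectable()
export class DeviceGuard implements CanActivate {
  constructor(private router: Router) {}

  canActivate(): boolean {
    // naive user-agent heuristic; replace with your own detection
    const isMobile = /Mobi/.test(navigator.userAgent);
    this.router.navigate([isMobile ? '/mobile' : '/web']);
    return false; // cancel the original navigation and redirect instead
  }
}
```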
Upvotes: 2 <issue_comment>username_2: After a lot of research I found that we can only bootstrap a single module at a time, which means a single root module per Angular app.
However, with [Angular lazy loading of modules](https://angular.io/guide/lazy-loading-ngmodules) we can load feature modules dynamically. Hence, the idea is to load one root module and, based on conditions, lazy-load any sub-module dynamically.
Let's consider above example.
```
import { enableProdMode } from '@angular/core';
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { environment } from './environments/environment';
import { AppModule } from './app/app.module';
if (environment.production) {
enableProdMode();
}
/*
* Bootstrap single root module
*/
platformBrowserDynamic().bootstrapModule(AppModule)
.catch(err => console.log(err));
```
`app.module.ts`
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';
import { CoreModule } from '../core/core.module';
import { SharedModule } from '../shared/shared.module';
import { AppComponent } from './app.component';
const isMobileDevice = true; // Mobile device condition
const routes: Routes = [
{
path : 'app',
loadChildren: isMobileDevice ? './mobile/mobile.module#MobileModule' : './web/web.module#WebModule' // Conditionally load mobile or web module
}
];
@NgModule({
imports: [
BrowserModule, // Support for common angular directives and services
RouterModule.forRoot(routes),
CoreModule, // Core modules
],
declarations: [],
providers: [],
bootstrap: [AppComponent] // Bootstrap Signup component
})
export class AppModule { }
```
Hope this helps to some other folks who wanted such behaviour in their angular app.
Upvotes: 3 [selected_answer]<issue_comment>username_3: I achieved the conditional loading of a module in `main.ts` by using `fileReplacements` in the angular.json file: I created an object under `configurations` and added the replacements below:
```
"fileReplacements": [
{
"replace": "src/environments/environment.ts",
"with": "src/environments/environment.lib.ts"
},
{
"replace": "src/main.ts",
"with": "src/main.lib.ts"
}
]
```
The above solved my problem.
Hope this helps someone.
Thanks...
Upvotes: 1
|
2018/03/16
| 1,015 | 3,167 |
<issue_start>username_0: I'm am trying to learn Ruby on rails and I keep getting this error.
My controller is
```
class Clasa9Controller < ApplicationController
def multimi
end
def progresii
end
def functii
end
def vectori
end
def trigonometrie
end
def geometrie
end
end
```
clasa9.html.erb
```
<%= link\_to "", multimi\_path %>
```
rails routes:
```
multimi GET /clasa_9/multimi(.:format) clasa_9#multimi
progresii GET /clasa_9/progresii(.:format) clasa_9#progresii
functii GET /clasa_9/functii(.:format) clasa_9#functii
vectori GET /clasa_9/vectori(.:format) clasa_9#vectori
trigonometrie GET /clasa_9/trigonometrie(.:format) clasa_9#trigonometrie
geometrie GET /clasa_9/geometrie(.:format) clasa_9#geometrie
```
and routes.rb
```
get 'clasa_9/multimi', to:"clasa_9#multimi", as:"multimi"
get 'clasa_9/progresii', to:"clasa_9#progresii", as:"progresii"
get 'clasa_9/functii', to:"clasa_9#functii", as:"functii"
get 'clasa_9/vectori', to:"clasa_9#vectori", as:"vectori"
get 'clasa_9/trigonometrie', to:"clasa_9#trigonometrie", as:"trigonometrie"
get 'clasa_9/geometrie', to:"clasa_9#geometrie", as:"geometrie"
devise_for :users
get 'pages/home'
get 'pages/clasa9'
get 'pages/clasa10'
get 'pages/clasa11'
get 'pages/clasa12'
get 'pages/about'
root 'pages#home'
```
and im am getting
>
> Routing Error
> uninitialized constant Clasa9Controller
>
>
>
I tried to solve this by looking up what is already posted here but I just can't solve it... I don't understand what I should change.<issue_comment>username_1: If your file is located inside the app/controllers folder, then it is probably a file name issue. Your file should have the name clasa9\_controller.rb.
If not, then you should load the file by creating an initializer or by adding an autoload\_path inside config/development.rb (see the sketch after the list below).
Rails loads by default:
1. All subdirectories of app in the application and engines present at boot time. For example, app/controllers. They do not need to be the default ones, any custom directories like app/workers belong automatically to autoload\_paths.
2. Any existing second level directories called app/\*/concerns in the application and engines.
3. The directory test/mailers/previews.
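A minimal sketch of adding a custom autoload path (the path itself is only an example):
```
# config/application.rb
config.autoload_paths << Rails.root.join('lib')
```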
Upvotes: 3 [selected_answer]<issue_comment>username_2: Look, it would be `clasa9`. To see why, run the controller class name through the [`underscore`](https://apidock.com/rails/ActiveSupport/Inflector/underscore) method like this:
```
Loading development environment (Rails 5.1.4)
2.3.4 :001 > "Clasa9Controller".underscore
=> "clasa9_controller"
```
It returns `clasa9_controller`, which means your controller is `clasa9`, not `clasa_9`, the file name will be `clasa9_controller.rb`, and your `routes` would use `to: "clasa9#multimi"` like this:
```
get 'clasa_9/multimi', to: "clasa9#multimi", as: "multimi"
#or
#get 'clasa_9/multimi', to: "clasa9#multimi", as: :multimi # removed doublw quotes from multimi
...
```
Follow this and it should work.
Upvotes: 0
|
2018/03/16
| 436 | 1,595 |
<issue_start>username_0: I have a c++ code that at one part it stored some values of a measurement in a vector and this vector is a part of set of data schema which is serialized and then sent to a streamer.
There is a new requirement that for a specific case I need just one value of the measurement, always rewritten with the latest one, but I don't want to change the vector variable, in order to keep the same schema. So I thought that for that case I would rewrite the first element of the vector each time, something like this:
```
vector<int> store_measurements;
int measurement = 10;
if (condition == "several_values")
{
    store_measurements.push_back(measurement);
}
else
{
    store_measurements.at(0) = measurement;
}
```
It seems to work fine when the vector is not cleared, but I'd like to ask if this is the correct way to do that or there is a more preferable way to do it?<issue_comment>username_1: You can use the `front()` function.
```
vector<int> store_measurements;
int measurement = 10;
if (condition == "several_values")
{
    store_measurements.push_back(measurement);
}
else
{
    store_measurements.resize(1);
    store_measurements.front() = measurement;
}
```
**Edit:**
Based on the comments I added `store_measurements.resize(1);` before the assignment
Upvotes: 3 [selected_answer]<issue_comment>username_2: I would probably use `assign()` which *replaces* all the values in the vector like this:
```
if (condition == "several_values")
{
store_measurements.push_back(measurement);
}
else
{
store_measurements.assign(1, measurement);
}
```
Upvotes: 1
|
2018/03/16
| 1,826 | 7,014 |
<issue_start>username_0: I have a button on my new and edit views that sends a POST request to my Letter controller through an Ajax call. While the Ajax call works perfectly in the new view, it throws a 404 error in my edit view.
Route:
```
post 'letters/ajax_send_test_botletter', to: 'letters#send_test_botletter', as: 'send_test_botletter'
```
The form is defined like this:
```
<%= form_for(letter, :html => {class: "directUpload", remote: true}) do |f| %>
```
The button triggering the Ajax call in the form:
```
Send a test campaign to yourself
```
Ajax call:
```
$('#send_test_letter').on('click', function(){
$('form').submit(function() {
var valuesToSubmit = $(this).serialize();
$.ajax({
type: "POST",
url: "/letters/ajax_send_test_botletter",
data: valuesToSubmit,
dataType: "JSON" // you want a difference between normal and ajax-calls, and json is standard
}).success(function(json){
if(json['value'] == "No Recipient") {
$('#send_test_letter').css('display', 'none');
$('#save_test_user').css('display', 'block');
} else {
console.log("Success")
$('#confirmation_test_sent').html('Test successfully sent. Check your Messenger.')
}
$('form').unbind('submit');
});
return false; // prevents normal behaviour
});
});
```
My send\_test\_botletter method
```
def send_test_botletter
@message_content = params[:letter]['messages_attributes']['0']['content']
@button_message = params[:letter]['messages_attributes']['0']['buttons_attributes']['0']['button_text'] if params[:letter]['messages_attributes']['0']['buttons_attributes']['0']['button_text'] != ''
@button_url = params[:letter]['messages_attributes']['0']['buttons_attributes']['0']['button_url'] if params[:letter]['messages_attributes']['0']['buttons_attributes']['0']['button_url'] != ''
@cards = params[:letter]['cards_attributes'] if params[:letter]['cards_attributes'].present? == true
@test_segment = Segment.where(core_bot_id: @core_bot_active.id, name: "test").first
@recipients = BotUser.where(core_bot_id: @core_bot_active.id, source: @test_segment.token)
if @recipients.exists?
send_message_onboarding if @message_content != '' and @button_message.present? == false
send_message_button_onboarding if @message_content != '' and @button_message.present? == true and @button_url.present? == true
send_card_onboarding if @cards
respond_to do |format|
format.json { render json: {"value" => "Success"}}
end
else
respond_to do |format|
format.json { render json: {"value" => "No Recipient"}}
end
end
end
```
I get the following error in the Chrome console for the edit view:
>
> POST <http://localhost:3000/letters/ajax_send_test_botletter> 404 (Not
> Found)
>
>
>
And in my Rails logs:
>
> ActiveRecord::RecordNotFound (Couldn't find Letter with
> 'id'=ajax\_send\_test\_botletter):
>
>
>
It seems it calls the Update method instead of the send\_test\_botletter method...
Any idea what's wrong here?<issue_comment>username_1: `form_for(letter...` generates a different URL and method depending on whether or not the instance is persisted, defaulting to `create` and `post` or `update` and `patch` as appropriate.
When you hit submit, it tries to hit this endpoint *before* your listener kicks in, and in doing so breaks the remaining JS.
However, you can also provide `url` and `method` options to `form_for`. Try providing a blank `url` option and the correct method (`form_for letter, ..., url: '', method: :post`).
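For instance, using the route helper generated by the question's `as: "send_test_botletter"` (untested sketch):

```
<%= form_for(letter, url: send_test_botletter_path, method: :post,
             html: { class: "directUpload" }, remote: true) do |f| %>
```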
Alternatively, you could stop the default behaviour / propagation on form submission:
```
$('form').submit(function(e) {
e.stopPropagation() // Or could simply be `preventDefault()`, depending on your use case
...
// your AJAX
}
```
Able to test out these approaches?
---
**Update**
Your method is actually nesting a `submit` listener within the `click` one. Try the following:
```
$('#send_test_letter').on('click', function(e){
e.stopPropagation()
var $form = $(this).closest('form')
var valuesToSubmit = $form.serialize();
$.ajax({
type: "POST",
url: "/letters/ajax_send_test_botletter",
data: valuesToSubmit,
dataType: "JSON" // you want a difference between normal and ajax-calls, and json is standard
}).success(function(json){
if(json['value'] == "No Recipient") {
$('#send_test_letter').css('display', 'none');
$('#save_test_user').css('display', 'block');
} else {
console.log("Success")
$('#confirmation_test_sent').html('Test successfully sent. Check your Messenger.')
}
return false; // prevents normal behaviour
});
});
```
Upvotes: 0 <issue_comment>username_2: I found the trick. The problem was the PATCH method in the edit form.
I found a plugin in [this discussion](https://stackoverflow.com/questions/5075778/how-do-i-modify-serialized-form-data-in-jquery/49322459#49322459) in order to modify the serialized data and change the method to "post":
```
$('#send_test_letter').on('click', function(){
$('form').submit(function() {
var valuesToSubmit = $(this).awesomeFormSerializer({
_method: 'post',
});
$.ajax({
type: "POST",
url: "/letters/ajax_send_test_botletter",
data: valuesToSubmit,
dataType: "JSON" // you want a difference between normal and ajax-calls, and json is standard
}).success(function(json){
if(json['value'] == "No Recipient") {
$('#send_test_letter').css('display', 'none');
$('#save_test_user').css('display', 'block');
} else {
console.log("Success")
$('#confirmation_test_sent').html('Test successfully sent. Check your Messenger.')
}
$('form').unbind('submit');
});
return false; // prevents normal behaviour
});
});
(function ( $ ) {
// Pass an object of key/vals to override
$.fn.awesomeFormSerializer = function(overrides) {
// Get the parameters as an array
var newParams = this.serializeArray();
for(var key in overrides) {
var newVal = overrides[key]
// Find and replace `content` if there
for (var index = 0; index < newParams.length; ++index) {
if (newParams[index].name == key) {
newParams[index].value = newVal;
break;
}
}
// Add it if it wasn't there
if (index >= newParams.length) {
newParams.push({
name: key,
value: newVal
});
}
}
// Convert to URL-encoded string
return $.param(newParams);
}
}( jQuery ));
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 347 | 1,360 |
<issue_start>username_0: I have an Android app which uses the immersive sticky mode and it works well on previous versions of Android as well as on Android P, but when turning on the option of simulating a display cutout (notch), the content starts right below the status bar, leaving a blank space.
My code is standard immersive code:
```
getWindow().getDecorView().setSystemUiVisibility(
View.SYSTEM_UI_FLAG_LAYOUT_STABLE
| View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION
| View.SYSTEM_UI_FLAG_LAYOUT_FULLSCREEN
| View.SYSTEM_UI_FLAG_HIDE_NAVIGATION
| View.SYSTEM_UI_FLAG_FULLSCREEN
| View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY);
```
Any ideas?
Thank you.<issue_comment>username_1: Nevermind, on a real device it's a totally normal behavior, because you don't want fullscreen content to be obscured by the notch (although the iPhone X does it).
Leaving this question for future reference.
Upvotes: 3 [selected_answer]<issue_comment>username_2: We can have a full immersive experience, if required, on devices with a display cutout by setting the window layout attribute to `shortEdges`. This is documented [here](https://developer.android.com/guide/topics/display-cutout#choose_how_your_app_handles_cutout_areas). In the activity's theme that would presumably be:
```
<item name="android:windowLayoutInDisplayCutoutMode">shortEdges</item>
```
This solved my problem.
Upvotes: 0
|
2018/03/16
| 937 | 2,914 |
<issue_start>username_0: I keep getting the
>
> "Subscript Out of Range"
>
>
>
error on the .find line.
The idea is the user selects a cell based on the input box. The value of Cell(D) in the row of the selected cell would then be copied and used to find the value in Sheet2 in a given range.
```
Sub AddNewLine()
Dim Hull As Variant
Dim SST As Variant
Dim NRE_RE As Variant
Dim DropDownN As Range
Dim DropDownR As Range
Dim foundcell As Variant
Dim myCell As Range
Dim Task As Range
Dim WBS As Range
Set DropDownN = Sheets("Sheet2").Range("C3:C6")
Set DropDownR = Sheets("Sheet2").Range("C10:C64")
Sheets("Sheet1").Activate
Set myCell = Application.InputBox(prompt:="Select a Hull to add the task to the 7300", Type:=8)
Set Task = myCell.EntireRow
Task.Select
NRE_RE = Task.Cells(4).Value
Hull = Task.Cells(3).Value
SST = Task.Cells(6).Value
If NRE_RE = "NRE" Then
Sheets("Sheet2").Activate
With DropDownN
.Find(What:=SST, LookIn:=xlValue, LookAt:=xlWhole, MatchCase:=True)
End With
ElseIf NRE_RE = "RE" Then
Sheets("Sheet2").Activate
With DropDownR
.Find(What:=SST, LookIn:=xlValue, LookAt:=xlWhole, MatchCase:=True)
End With
End If
Sheets("Sheet2").Activate
WBS.Interior.ColorIndex = 3
End Sub
```
I'm new to coding and have searched for a way to make it apply to my situation with no luck. I am also trying to name the result of the search as "WBS" which is what the "WBS.Interior.ColorIndex = 3" is referencing.
I'm also aware that my code might not be the most concise, but as I gain more understanding that will change. It's just the process that is easiest in my mind.
Thank you for your help!<issue_comment>username_1: You need to replace your line:
```
.Find(What:=SST, LookIn:=xlValue, LookAt:=xlWhole, MatchCase:=True)
```
With the proper way of using the `Find` function, which involves the following:
1. Set the Result of `Find` to a `Range` object.
`Dim FindRng As Range
Set FindRng = .Find(What:=SST, LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True)`
2. Also handle a scenario where `Find` wasn't able to find what you are looking for, in the first case it's `SST`.
`If FindRng Is Nothing Then`
***Modified Code*** for the first `Find` :
```
Dim FindRng As Range
Set FindRng = .Find(What:=SST, LookIn:=xlValues, LookAt:=xlWhole, MatchCase:=True)
If Not FindRng Is Nothing Then
' when Find is successfull finding SST
Else
' if Find faild finding SST
End If
```
**Note**: you should avoid using `Sheets("Sheet1").Activate` and `Task.Select`; the only thing they do is slow down your code's run-time.
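A minimal sketch of working with ranges directly (sheet and range taken from the question):
```
Dim ws As Worksheet
Set ws = ThisWorkbook.Worksheets("Sheet2")

' no Activate or Select needed to read or format cells
ws.Range("C3:C6").Interior.ColorIndex = 3
```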
Upvotes: 2 <issue_comment>username_2: It is "**xlValues**" ending with "**s**", not "**Xlvalue**":
* [XlFindLookIn enumeration (Excel)](https://learn.microsoft.com/en-us/office/vba/api/excel.xlfindlookin)
* [Range.Find method (Excel)](https://learn.microsoft.com/en-us/office/vba/api/excel.range.find)
Upvotes: 0
|
2018/03/16
| 958 | 3,160 |
<issue_start>username_0: ```
[root@localhost etc]# systemctl status blu_av
● blu_av.service - avscan
Loaded: loaded (/etc/systemd/system/blu_av.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2018-03-16 16:31:14 IST; 3s ago
Main PID: 31934 (av)
CGroup: /system.slice/blu_av.service
├─31934 /opt/services/av
└─31956 /opt/services/av
Mar 16 16:31:14 localhost.localdomain systemd[1]: Started avscan.
Mar 16 16:31:14 localhost.localdomain systemd[1]: Starting avscan...
```
If the above is the output of my status, I want to retrieve the service name, uptime and status using a Python script.<issue_comment>username_1: `systemctl status` has an option [`--output`](https://www.freedesktop.org/software/systemd/man/systemctl.html#-o) that allows you to use the output options of [`journalctl`](https://www.freedesktop.org/software/systemd/man/journalctl#-o).
Try JSON which could easily be parsed by Python:
```
$ sudo systemctl status --output=json-pretty nginx
```
Upvotes: 3 <issue_comment>username_2: You could get the output from systemctl by using `subprocess.check_output()` and parse it afterwards.
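A minimal sketch (the service name is taken from the question):
```
import subprocess

# `systemctl status` exits non-zero for inactive/failed units,
# in which case check_output raises CalledProcessError
output = subprocess.check_output(
    ["systemctl", "status", "blu_av"], universal_newlines=True
)
print(output.splitlines()[0])  # e.g. "● blu_av.service - avscan"
```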
Upvotes: 0 <issue_comment>username_3: Maybe you can try [regular expression](https://docs.python.org/3/howto/regex.html) to parse the output. Here is what I used. Please have a look and comment.
```
import subprocess, re
def read_status(service):
p = subprocess.Popen(["systemctl", "status", service], stdout=subprocess.PIPE)
(output, err) = p.communicate()
output = output.decode('utf-8')
service_regx = r"Loaded:.*\/(.*service);"
status_regx= r"Active:(.*) since (.*);(.*)"
service_status = {}
for line in output.splitlines():
service_search = re.search(service_regx, line)
status_search = re.search(status_regx, line)
if service_search:
service_status['service'] = service_search.group(1)
#print("service:", service)
elif status_search:
service_status['status'] = status_search.group(1).strip()
#print("status:", status.strip())
service_status['since'] = status_search.group(2).strip()
#print("since:", since.strip())
service_status['uptime'] = status_search.group(3).strip()
#print("uptime:", uptime.strip())
return service_status
def main():
service = 'mysql'
reponse = read_status(service)
for key in reponse:
print('{}:{}'.format(key, reponse[key]))
if __name__ == '__main__':
main()
```
Output:
```
service:mysql.service
status:active (running)
since:Fri 2018-03-16 09:17:57 CET
uptime:6h ago
```
I simply use [this](https://regex101.com/) to check my regular expressions.
Upvotes: 3 [selected_answer]<issue_comment>username_4: `systemd` services have a whole list of properties. To get the start time, for example, you could run:
```
systemctl show your.service --property=ActiveEnterTimestamp
```
which would give something like:
```
ActiveEnterTimestamp=Fri 2019-05-03 09:35:02 CEST
```
To see the list of all properties, just run
```
systemctl show your.service
```
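The `Property=Value` lines are straightforward to consume from Python; a minimal sketch (the service name from the question is used as an example):
```
import subprocess

def show_properties(service):
    # Return all systemd properties of a unit as a dict.
    out = subprocess.check_output(["systemctl", "show", service])
    props = {}
    for line in out.decode("utf-8").splitlines():
        key, _, value = line.partition("=")
        props[key] = value
    return props

props = show_properties("blu_av")
print(props.get("ActiveState"), props.get("ActiveEnterTimestamp"))
```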
Upvotes: 4
|
2018/03/16
| 761 | 2,204 |
<issue_start>username_0: I come across this in Rails source code:
```
class Object
def duplicable?
true
end
end
class NilClass
begin
nil.dup
rescue TypeError
def duplicable?
false
end
end
end
```
With this code, even after `dup` is removed from an object, that object responds to `duplicable?` with `true`.
I think it can be rewritten as simpler code like:
```
class Object
def duplicable?
respond_to?(:dup)
end
end
```
What is the merit of defining `duplicable?` using `begin`...`rescue`?<issue_comment>username_1: `nil` **responds** to `dup` explicitly throwing the `TypeError` (which has, in turn, nothing to do with `NoMethodError`.) [Correction: had responded to `dup` before 2.4, credits go to @username_2.]
```
NilClass.instance_method(:dup)
#⇒ #
```
The goal is to respond to `duplicable?` with `false` *unless* `NilClass#dup` is overwritten by another monkey patcher in the city. [Correction: read “another monkey patcher” as “Matz” :)]
Upvotes: 2 <issue_comment>username_2: >
> What is the merit of defining `duplicable?` using `begin`...`rescue`?
>
>
>
Ruby before 2.4 raised a TypeError when attempting to `nil.dup`:
```none
$ rbenv local 2.3.0
$ ruby --version
ruby 2.3.0p0 (2015-12-25 revision 53290) [x86_64-darwin15]
$ ruby -e 'p nil.dup'
-e:1:in `dup': can't dup NilClass (TypeError)
from -e:1:in `'
```
Starting with Ruby 2.4, `nil.dup` just returns itself:
```none
$ rbenv local 2.4.0
$ ruby --version
ruby 2.4.0p0 (2016-12-24 revision 57164) [x86_64-darwin15]
$ ruby -e 'p nil.dup'
nil
```
Putting the method definition inside `rescue` ensures that the method is only defined for Ruby versions which raise the `TypeError`.
>
> I think it can be rewritten to a simpler code like: [...]
>
>
>
Simply checking whether the receiver responds to `dup` doesn't work, because `nil` – being an `Object` – *does* respond to `dup`, even in 2.3. The `TypeError` is (was) raised from within `Object#dup`:
```c
VALUE rb_obj_dup(VALUE obj)
{
VALUE dup;
if (rb_special_const_p(obj)) {
rb_raise(rb_eTypeError, "can't dup %s", rb_obj_classname(obj));
}
// ...
}
```
Upvotes: 4 [selected_answer]
|
2018/03/16
| 927 | 2,016 |
<issue_start>username_0: I have a string like this:
```
constant = 0.015
history = 90
[thresholds]
up = 100
down = -100
persistence = 0
[thresholds]
up = 100
down = -100
persistence = 0
```
I must convert it to format like this:
```
config.n8_v2_BB_RSI_SL = {
constant: 0.015,
history: 90,
thresholds: {
up: 100,
down: -100,
persistence: 0 },
thresholds: {
up: 100,
down: -100,
persistence: 0 }
};
```
I have done everything except the closing bracket for the first [thresholds] section. I have no idea how to do this.
My work:
```
$a =~ s/(?
```
After `persistence: 0` I must add `},`. How do I do that? [thresholds] can appear more than once, or not at all.<issue_comment>username_1: ```
#!/usr/bin/perl
use warnings;
use strict;
my $input = '...';
my $expected = '...';
$input =~ s/ =/:/g;
$input =~ s/\[(.*)\]/}\n$1: {/g;
$input =~ s/^/config.n8_v2_BB_RSI_SL = {\n/;
$input =~ s/(?
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could perhaps `use Config::IniFiles` to read input and `use JSON` to generate the output. (Is it JSON?)
Or...
Using a bag of dirty tricks, this:
```
my $s=join"",;
my $n='00000001';
$s=~s/^(.\*?)(\w+)/$1\_@{[$n++]}\_$2/gm;
$s=sz({conf($s)});
$s=~s,\b\_(\d{8})\_(\w+),$2,g;
print "config.n8\_v2\_BB\_RSI\_SL = $s;\n";
sub sz {my$c=shift;ref$c?"{\n".(join",\n",map{"$\_: ".sz($$c{$\_})}sort keys%$c)."}":$c}
sub conf{
my($hr,$section)=({});
my@r=(qr/^\s\*\[\s\*(.\*?)\s\*\]/,
qr/^\s\*([^\:\=]+?)\s\*[:=]\s\*(.\*?)\s\*$/);
/$r[0]/ and $$hr{$section=$1}||={} or
/$r[1]/ and defined$section?$$hr{$section}{$1}:$$hr{$1}=$2
for split"\n",shift;
%$hr;
}
\_\_DATA\_\_
constant = 0.015
history = 90
[thresholds]
up = 100
down = -100
persistence = 0
[thresholds]
up = 20
down = -100
persistence = 0
[peaks]
up = 100
down = -100
persistence = 0
```
Produces this:
```
config.n8_v2_BB_RSI_SL = {
constant: 0.015,
history: 90,
thresholds: {
up: 100,
down: -100,
persistence: 0},
thresholds: {
up: 20,
down: -100,
persistence: 0},
peaks: {
up: 100,
down: -100,
persistence: 0}};
```
Upvotes: 0
|
2018/03/16
| 811 | 2,814 |
<issue_start>username_0: I've seen two ways of people executing queries using Golang builtin `database/sql` queries. One of them is using `fmt.Sprintf`:
```
func (db *DB) CreateUserTable() (sql.Result, error) {
statement := "CREATE TABLE %s (%s, %s, %s, %s, %s)"
v := []interface{}{"User", "ID int PRIMARY KEY NOT NULL", "Name varchar(100) UNIQUE", "Email varchar(100) UNIQUE", "Address varchar(100) ", "Username varchar(100) UNIQUE"}
return db.Exec(fmt.Sprintf(statement, v...))
}
```
and the other one is using prepared statement:
```
func (db *DB) CreateUserTable() (sql.Result, error) {
statement, err := db.Prepare("INSERT INTO User(tbl1,tbl2,tbl3) VALUES(?,?,?)")
if err != nil {
log.Fatal(err)
}
return statement.Exec("value1", "value2", "value3")
}
```
The first is beneficial because it lets you dynamically set the table name, the column names, and the values; the second one only handles values. What's the difference? Which one should I use?<issue_comment>username_1: **Never** build SQL from strings that come from outside your system.
[](https://i.stack.imgur.com/lMuyI.png)
*Always* use the `?` syntax.
If you must set SQL parts like table names, prepare multiple, complete SQL statements that contain `?` for the values. Select the SQL to execute, maybe based on user input, but **never** build SQL *from* user input.
Upvotes: 5 [selected_answer]<issue_comment>username_2: It is cleaner to use prepared statements so that whenever a requirement changes you can easily modify the statements. They also prevent SQL injection.
>
> Prepared statements is much better than concatenating strings, for all
> the usual reasons (avoiding SQL injection attacks, for example).
>
>
> In MySQL, the parameter placeholder is ?, and in PostgreSQL it is $N,
> where N is a number. SQLite accepts either of these.
>
>
>
One more thing: a prepared statement can be reused; it can be executed multiple times and then destroyed.
```
stmt, err := db.Prepare("select id, name from users where id = ?")
if err != nil {
log.Fatal(err)
}
defer stmt.Close() // closing the statement
rows, err := stmt.Query(1)
```
Also, you are using empty interfaces here:
```
func (db *DB) CreateUserTable() (sql.Result, error) {
statement := "CREATE TABLE %s (%s, %s, %s, %s, %s)"
v := []interface{}{"User", "ID int PRIMARY KEY NOT NULL", "Name varchar(100) UNIQUE", "Email varchar(100) UNIQUE", "Address varchar(100) ", "Username varchar(100) UNIQUE"}
return db.Exec(fmt.Sprintf(statement, v...))
}
```
which can take any type of parameter under the hood, and that can be vulnerable to SQL injection.
For more detailed information, see this [link](http://go-database-sql.org/retrieving.html).
Upvotes: 2
|
2018/03/16
| 1,205 | 4,944 |
<issue_start>username_0: **Spring boot 1.5.3 project with test user-registry on H2 in memory DB**
This is the **Error Stacktrace**
```
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'securityConfiguration': Unsatisfied dependency expressed through field 'myAppUserDetailsService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'SMRTUserService': Unsatisfied dependency expressed through field 'userInfoDAO'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'SMRTUserDAO': Injection of persistence dependencies failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory': Post-processing of FactoryBean's singleton object failed; nested exception is org.springframework.jdbc.datasource.init.ScriptStatementFailedException: Failed to execute SQL script statement #2 of URL [file:/C:/temp/SMRT/target/test-classes/data.sql]: ....
```
Can someone help me understand the problem? I can't solve these errors.
---
**Test Controller**
```
public class CustomerControllerTest extends AbstractControllerTest {
@Test
@WithMockUser(roles = "ADMIN")
public void testShow() throws Exception {
mockMvc.perform(get("/customer/list")
.contentType(APPLICATION_JSON_UTF8))
.andExpect(status().isOk());
}
}
```
---
**AbstractControllerTest**
```
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = WebEnvironment.MOCK)
@AutoConfigureMockMvc
public abstract class AbstractControllerTest extends AbstractTest {
@Autowired protected MockMvc mockMvc;
@Autowired private FilterChainProxy filterChainProxy;
@Autowired private WebApplicationContext webApplicationContext;
@Before
public void setup() throws Exception {
MockitoAnnotations.initMocks(this);
this.mockMvc = webAppContextSetup(webApplicationContext)
.dispatchOptions(true)
.addFilters(filterChainProxy).build();
}
}
```
---
**SecurityConfiguration**
```
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfiguration extends WebSecurityConfigurerAdapter {
@Autowired private SMRTUserService myAppUserDetailsService;
@Autowired private BCryptPasswordEncoder bCryptPasswordEncoder;
@Bean
public BCryptPasswordEncoder passwordEncoder() {
BCryptPasswordEncoder bCryptPasswordEncoder = new BCryptPasswordEncoder();
return bCryptPasswordEncoder;
}
@Autowired
public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
auth.userDetailsService(myAppUserDetailsService)
.passwordEncoder(bCryptPasswordEncoder);
}
}
```
---
**SMRTUSerService**
```
@Service
@Slf4j
public class SMRTUserService implements UserDetailsService {
@Autowired private ISMRTUserDAO userInfoDAO;
@Autowired private SMRTUserRepository smrtuserRepository;
...
}
```
Thanks
|
2018/03/16
| 1,002 | 3,495 |
<issue_start>username_0: I am processing a text file with an irregular structure that consists of a header and of data in different sections. What I aim to do is walk through a list and jump to the next section once a certain character is encountered. I made a simple example below. What is the elegant way of dealing with this problem?
```
lines = ['a','b','c','$', 1, 2, 3]
for line in lines:
if line == '$':
print("FOUND END OF HEADER")
break
else:
print("Reading letters")
# Here, I start again, but I would like to continue with the actual
# state of the iterator, in order to only read the remaining elements.
for line in lines:
print("Reading numbers")
```<issue_comment>username_1: You could use `enumerate` to keep track of where you are in the iteration:
```
lines = ['a','b','c','$', 1, 2, 3]
for i, line in enumerate(lines):
if line == '$':
print("FOUND END OF HEADER")
break
else:
print("Reading letters")
print(lines[i+1:]) #prints [1,2,3]
```
But, unless you actually need to process the header portion, the idea of @EdChum to simply use `index` is probably better.
Upvotes: 1 <issue_comment>username_2: You actually can have one iterator for both loops by creating your line iterator outside the for loop with the builtin function `iter`. This way it will be partially exhausted in the first loop and reusable in the next loop.
```
lines = ['a','b','c','$', 1, 2, 3]
iter_lines = iter(lines) # This creates and iterator on lines
for line in iter_lines :
if line == '$':
print("FOUND END OF HEADER")
break
else:
print("Reading letters")
for line in iter_lines:
print("Reading numbers")
```
The above prints this result.
```
Reading letters
Reading letters
Reading letters
FOUND END OF HEADER
Reading numbers
Reading numbers
Reading numbers
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: A simpler way and maybe more pythonic:
```
lines = ['a','b','c','$', 1, 2, 3]
print([i for i in lines[lines.index('$')+1:]])
# [1, 2, 3]
```
If you want to read each element after `$` to different variables, try this:
```
lines = ['a','b','c','$', 1, 2, 3]
a, b, c = [i for i in lines[lines.index('$')+1:]]
print(a, b, c)
# 1 2 3
```
Or if you are unaware of how many elements follow `$`, you could do something like this:
```
lines = ['a','b','c','$', 1, 2, 3, 4, 5, 6]
a, *b = [i for i in lines[lines.index('$')+1:]]
print(a, *b)
# 1 2 3 4 5 6
```
Upvotes: 0 <issue_comment>username_4: If you have more than one kind of separator, the most generic solution would be to build a mini-state machine to parse your data:
```
def state0(line):
pass # processing function for state0
def state1(line):
pass # processing function for state1
# and so on...
states = (state0, state1, ...) # tuple grouping all processing functions
separators = {'$':1, '#':2, ...} # linking separators and states
state = 0 # initial state
for line in text:
if line in separators:
print('Found separator', line)
state = separators[line] # change state
else:
states[state](line) # process line with associated function
```
This solution is able to correctly process an arbitrary number of separators, in arbitrary order, with an arbitrary number of repetitions. The only constraint is that a given separator is always followed by the same kind of data, which can be processed by its associated function.
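As a concrete illustration, here is a runnable version for the question's data, with one state for letters and one for numbers (the processing functions are illustrative):
```
def read_letter(item):
    print("Reading letters:", item)

def read_number(item):
    print("Reading numbers:", item)

states = (read_letter, read_number)
separators = {'$': 1}  # '$' switches processing to state 1

lines = ['a', 'b', 'c', '$', 1, 2, 3]
state = 0
for line in lines:
    if line in separators:
        print("FOUND END OF HEADER")
        state = separators[line]
    else:
        states[state](line)
```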
Upvotes: 0
|
2018/03/16
| 768 | 2,915 |
<issue_start>username_0: I have a route like this:
```
[Route("api/elasticsearch/resync/products")]
[HttpGet]
public async Task ResyncProducts()
{
}
```
How can I make it accessible only from the localhost?<issue_comment>username_1: Look into using CORS. Once installed correctly, you should be able to apply an attribute like so
`[EnableCors(origins: "http://localhost", headers: "*", methods: "*")]`
See here:
<https://tahirnaushad.com/2017/09/09/cors-in-asp-net-core-2-0/>
Upvotes: 1 <issue_comment>username_2: You can use an action filter and check whether the request comes from the loopback interface:
```
public class RestrictToLocalhostAttribute : ActionFilterAttribute
{
public override void OnActionExecuting(ActionExecutingContext context)
{
var remoteIp = context.HttpContext.Connection.RemoteIpAddress;
if (!IPAddress.IsLoopback(remoteIp)) {
context.Result = new UnauthorizedResult();
return;
}
base.OnActionExecuting(context);
}
}
```
Then just decorate action with this attribute:
```
[Route("api/elasticsearch/resync/products")]
[HttpGet]
[RestrictToLocalhost]
public async Task ResyncProducts()
{
}
```
Be careful with `context.HttpContext.Connection.RemoteIpAddress`. If you are in forward-proxy mode (some other web server like IIS or Nginx forwards requests to you), this IP might always be localhost (because it's actually Nginx/IIS that makes the request to you), or even null, even for remote requests, if you configure your application incorrectly. But if everything is configured correctly, that should be fine.
Don't use CORS as the other answer suggests. It will not prevent anyone from calling your API from any IP. CORS is a browser feature; outside of a browser (and a malicious user will of course not request your API via a browser page) it has exactly zero effect.
Upvotes: 4 <issue_comment>username_3: Found the answer with the help of [username_1's answer](https://stackoverflow.com/a/49321759/9359129) here:
<https://learn.microsoft.com/en-us/aspnet/core/security/cors#enabling-cors-in-mvc>
**Enabling CORS in MVC per action:**
```
[HttpGet]
[EnableCors("AllowSpecificOrigin")]
public IEnumerable Get()
{
return new string[] { "value1", "value2" };
}
```
and in Startup.cs:
```
public void ConfigureServices(IServiceCollection services)
{
services.AddCors(options =>
{
options.AddPolicy("AllowSpecificOrigin",
builder => builder.WithOrigins("http://example.com"));
});
}
public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
loggerFactory.AddConsole();
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
// Shows UseCors with named policy.
app.UseCors("AllowSpecificOrigin");
app.Run(async (context) =>
{
await context.Response.WriteAsync("Hello World!");
});
}
```
Upvotes: 0
|
2018/03/16
| 1,086 | 3,754 |
<issue_start>username_0: I have installed OpenMPI and it works with a simple parallelized hello world program but it doesn't work when `MPI_SEND()` or `MPI_RECV()` is called. I am using gfortran 5.1 and OpenMPI 3.0.1rc4.
The error is
>
> Error: There is no specific subroutine for the generic 'mpi_recv' at
> (1)
>
>
>
It seems the compiler does not recognise basic subroutines such as `MPI_RECV()`.
This is the test program that causes the error:
```
program main
use mpi
implicit none
integer :: ierr,np,myid,i,rbuf
integer, dimension(:,:), allocatable :: ista
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,np,ierr)
allocate(ista(MPI_STATUS_SIZE,np))
if (myid==0) then
do i = 1, np-1
CALL MPI_RECV(rbuf,1,MPI_INTEGER4,i,i,MPI_COMM_WORLD,ista,ierr)
write(*,"('process ',i2,' sent:',i2)") i,rbuf
end do
else
i=10*myid
CALL MPI_SEND(i,1,MPI_INTEGER4,0,myid,MPI_COMM_WORLD,ierr)
end if
CALL MPI_FINALIZE(ierr)
end program main
```
|
2018/03/16
| 1,041 | 4,554 |
<issue_start>username_0: I am getting data from a WebAPI call in my componentDidMount on my parent react component. I put the values into state.
When I render my form, I am just making custom labels with data (A component), and passing the label text, and the data to this component. (One for each field I am displaying). I pass the values from state, via props, to the child components.
But what I am finding is that my child components are rendering without data being populated... as this seems to happen before the api call happens. The api happens, the state gets set, but data never gets to the child components. I thought that the props would pass the updated state data to the components. I'm wrong. How should I achieve this? I want to load the data in my parent, and then render the child components, passing in the data.
```
componentDidMount() {
this.loadData();
}
loadData() {
var request = {
method: 'GET',
URL: "http://example.com/api/user/profile",
}
fetchData(request).then(response => {
if(response.errorcode != "OK")
{
console.log("Bad response from API. Need to redirect!")
}
else
{
this.setState(
{
firstname: response.payload.firstname,
surname: response.payload.surname,
email: response.payload.email,
countryId: response.payload.countryId,
countries: response.payload.countries
}
);
}
});
}
render() {
return (
Your Profile
------------
Edit
)
}
```
My display label component is simply this. I'm new to this, and just trying to make reusable components:
```
import React, {Component} from 'react';
export default class DisplayLabel extends Component {
constructor(props)
{
super(props);
this.state = {
labelText: this.props.labelText,
data: this.props.data
}
console.log(this.state);
}
componentDidMount() {
this.setState({
labelText: this.props.labelText,
data: this.props.data
});
console.log("ComponentDidMount", this.state);
}
render() {
return (
{this.state.labelText}
**{this.state.data}**
)
}
}
```
I need to wait for the API call to complete before I render the form?<issue_comment>username_1: This is a common problem in React. I usually try to resolve it with a pattern that shows some sort of loading indicator. So I would initialize your `state` like this in the constructor:
```
this.state = {
loading: true
}
```
And I would change your `render` to have a check for that `loading` bool:
```
render() {
if (this.state.loading) {
return (<h1>Loading, please wait...</h1>);
}
return (
Your Profile
------------
Edit
)
}
```
Then you can set `loading` to `false` following a successful data pull, and your form will display correctly without any errors.
(I edited this to use your parent component rather than `DisplayLabel`, as it makes more sense there. However, the component name is omitted from your question).
Upvotes: 2 <issue_comment>username_2: I'm not sure you have to wait until the request completes (though you most likely do for UI reasons), but I guess your problem is something else.
You took the props and included them inside your state (that's unnecessary), and then when the component finishes loading you update the state again, but the request probably completes after the component has finished loading.
If you must put the props inside your state, you should use the componentDidUpdate or componentWillReceiveProps lifecycle functions, not componentDidMount.
But if you don't have to (most likely), you might change your render function to:
```
render() {
return (
{this.props.labelText}
**{this.props.data}**
)
}
```
(just changed the state to props)
Try that, Good Luck!
Upvotes: 2 [selected_answer]<issue_comment>username_3: The constructor is called only once, when the component mounts. So you can do one thing: in your render, take two variables and assign the data to them, then use those variables to show the data. The child component will re-render every time the parent's state changes, but the constructor will not run again.
Upvotes: 1
|
2018/03/16
| 701 | 2,766 |
<issue_start>username_0: I have this method in TypeScript/Angular that generates my file:
```
imprimir() {
this.service.emitirAdvertencia({ parametros: [{ name: 'CODIGO', value: 880 }] })
.subscribe((response) => {
console.log(response);
var fileURL = window.URL.createObjectURL(response);
//this not display my pdf document in a new tab.
window.open(fileURL, '_blank');
//this display my document pdf, but in current tab
window.location.href = fileURL;
}, error => {
console.log(error);
});
}
```
This is my service
```
emitirAdvertencia(parametros: Object): any {
parametros['dbenv'] = ApplicationContext.getInstance().getDbenv();
parametros['usuario'] = ApplicationContext.getInstance().getUser().codigoUsuario;
parametros['nome_relatorio'] = 'RelAdvertenciaDB';
var httpOptions = {
headers: new HttpHeaders({
'Authorization': this.localStorage.get('token'),
}),
responseType: 'blob' as 'blob',
};
return this.http.get(ApplicationContext.URL + '/adiantamento/gerar-relatorio/', httpOptions)
.map((res) => {
var report = new Blob([res], { type: 'application/pdf' });
return report;
});
```
As commented in the code above, when I try to open it in a new tab it does not work; it only works if I open it in the current tab.
How can I open this blob PDF file in a new tab?<issue_comment>username_1: >
> To open file in new Tab, you need to create anchor Element in
> Typescript and add your file url in href attribute of this element.
>
>
>
In my example the service response is `data._body` for the file blob; you can adapt it to the output response from your service.
```
var newTab = true;
var inputData = { parametros: [{ name: 'CODIGO', value: 880 }] };
this.service.emitirAdvertencia(inputData).subscribe(
(data: any) => {
var contentType = data.headers._headers.get('content-type')[0];
var blob = new Blob([data._body], { type: contentType });
var url = window.URL.createObjectURL(blob, { oneTimeOnly: true });
//window.open(url, '_blank', '');
var anchor = document.createElement('a');
anchor.href = url;
if (newTab) {
anchor.target = '_blank';
}
anchor.click();
},
error => {
//TODO
},
() => {
//TODO
}
);
```
Upvotes: 3 <issue_comment>username_2: I could make it work like this:
```js
var fileURL = window.URL.createObjectURL(data);
let tab = window.open();
tab.location.href = fileURL;
```
Upvotes: 3
|
2018/03/16
| 935 | 3,489 |
<issue_start>username_0: In my Android application I just need to open an SMS intent with a pre-populated ***message_body*** and the ***PhoneNumber***.
Following is the code I am trying
```
Uri uri = Uri.parse(String.format("smsto:%s", strPhoneNumber));
Intent smsIntent = new Intent(Intent.ACTION_SENDTO, uri);
smsIntent.putExtra("sms_body", "Sample Body");
startActivityForResult(smsIntent, OPEN_SMS_APP);
```
All works great in the default scenario, but if ***Facebook Messenger*** is installed and set up as the default SMS application (Settings -> Apps & Notifications -> Default Apps -> SMS app) then the functionality breaks.
The problem is, it opens FB Messenger without the ***message_body*** (empty), even though it correctly picks the phone number (in the FB Messenger app).
Further, I tried the following tests, but they didn't pick up the SMS body or open the default Android app:
```
smsIntent.addCategory(Intent.CATEGORY_APP_MESSAGING); // STILL DIDN'T FIX
smsIntent.putExtra(Intent.EXTRA_TEXT, "Sample Body"); // STILL DIDN'T FIX
```
***Questions***
1. Is there a way that I can force to open default Android SMS
Application (Messages APP) even if someone have setup any other 3rd party SMS application as default App?
2. OR Any other way I can pass message\_body parameter to work in other 3rd party applications as well?<issue_comment>username_1: >
> it opens FB messenger without the message\_body (empty)
>
>
>
There is no requirement for any app to honor undocumented `Intent` extras.
>
> Further, I tried following tests but didn't pick SMS\_BODY or opened default Android APP
>
>
>
`EXTRA_TEXT` is not documented for use with `ACTION_SENDTO`, nor is `CATEGORY_APP_MESSAGING`.
>
> Is there a way that I can force to open default Android SMS Application (Messages APP)
>
>
>
There are ~10,000 Android device models. I would expect there to be dozens, if not hundreds, of pre-installed "Android SMS Application". None of them have to honor `Intent` extras or categories not specifically documented to be supported by `ACTION_SENDTO`.
>
> Any other way I can pass message\_body parameter to work in other 3rd party applications as well?
>
>
>
`ACTION_SEND` — not `ACTION_SENDTO` — offers `EXTRA_TEXT`. However, you cannot mandate where the content should be sent. That would be up to the user.
You can also use `SmsManager` to send the SMS directly, if you hold the `SEND_SMS` permission.
Upvotes: 0 <issue_comment>username_2: If you really want to restrict this to the Google Android SMS app, you can specify the package name of the Google app. But there are many other applications that will read the 'sms_body' Intent key.
```
Uri uri = Uri.parse(String.format("smsto:%s", strPhoneNumber));
Intent smsIntent = new Intent(Intent.ACTION_SENDTO, uri);
smsIntent.putExtra("sms_body", "Sample Body");
// To Force Google Android SMS APP To Open
smsIntent.setPackage("com.google.android.apps.messaging");
if (smsIntent.resolveActivity(getPackageManager()) != null)
{
startActivityForResult(smsIntent, OPEN_SMS_APP);
}
else
{
// Display error message to say Google Android SMS APP required.
}
```
But I would use the Application Chooser rather than restricting to the Google SMS app. Obviously the user has to go through an additional step (a click), but it lists all SMS apps (every time) and the user has the option to select a correctly working SMS application.
```
startActivityForResult(Intent.createChooser(smsIntent, "Sample Title"), OPEN_SMS_APP);
```
Upvotes: 2
|
2018/03/16
| 1,210 | 3,377 |
<issue_start>username_0: I found a weird thing when I used the operator e.g. `*=` or `+=`
The code:
```
aa = Variable(torch.FloatTensor([[1,2],[3,4]]))
bb = aa
bb = bb*2
print(bb)
print(aa)
cc = Variable(torch.FloatTensor([[1,2],[3,4]]))
dd = cc
dd *= 2
print(cc)
print(dd)
```
The results are shown below:
```
Variable containing:
2 4
6 8
[torch.FloatTensor of size 2x2]
Variable containing:
1 2
3 4
[torch.FloatTensor of size 2x2]
Variable containing:
2 4
6 8
[torch.FloatTensor of size 2x2]
Variable containing:
2 4
6 8
[torch.FloatTensor of size 2x2]
```
As you can see, when I used `bb = bb*2`, `aa` was not affected. However, when using `dd *= 2`, `dd` seems to point to (share) the same address as `cc`, and `cc` is changed.
Their respective previous line is the same, e.g. `bb = aa` and `dd = cc`. It seems that the `*=` operator changed the original deep copy to shallow copy and the change was made after the copy line itself.
I am wondering if this is a bug. If it is, it is important, since it affects basic mathematical operations. Generally, I thought that just using the built-in operation functions, e.g. `torch.add()`, is a good solution.
```
OS: Mac OS X
PyTorch version: 3.0
How you installed PyTorch (conda, pip, source): conda
Python version: 3.6
CUDA/cuDNN version: None
GPU models and configuration: None
```
---
I understand `dd *= 2` is in-place multiplication, but how does the value of `dd` transfer into `cc`? And why, when I used `dd = dd * 2`, did the new values not transfer to `cc`? There is no difference in the preceding lines: `dd = cc` and `bb = aa`.
BTW, in python (not pytorch Variable or Tensor), `dd *= 2` and `dd = dd * 2` both will not affect `cc` value.<issue_comment>username_1: When you do `dd = cc`, both `dd` and `cc` are now references to the same object (same for `bb = aa`). Nothing is being copied!
When you do `bb = bb * 2`, the `*` operator creates a new object and `bb` now refers to that object. No existing object is changed.
When you do `dd *= 2`, the object that `dd` refers to (and which `cc` also refers to) is changed.
So the difference is that `*` creates a new object and `=` makes a variable refer to a new object (rather than changing the object in any way), whereas `*=` changes the object.
It may be counter-intuitive that `x *= y` behaves differently than `x = x * y`, but those are the semantics of the language, not a bug.
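(If the goal was an independent copy rather than a second reference, note that PyTorch tensors and Variables offer `clone()`, which allocates new storage, so an in-place update to one no longer affects the other. A minimal sketch of that, added here for illustration:)
```
import torch
from torch.autograd import Variable

aa = Variable(torch.FloatTensor([[1, 2], [3, 4]]))
bb = aa.clone()  # new storage, independent of aa
bb *= 2          # in-place multiply changes only bb
print(aa)        # unchanged: [[1, 2], [3, 4]]
```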
Upvotes: 4 [selected_answer]<issue_comment>username_2: Repeat your tests, but additionally print the ID of your objects:
```
aa = Variable(torch.FloatTensor([[1,2],[3,4]]))
bb = aa
bb = bb*2
print(bb , id(bb))
print(aa , id(aa))
cc = Variable(torch.FloatTensor([[1,2],[3,4]]))
dd = cc
dd *= 2
print(cc, id(cc))
print(dd, id(dd))
```
This should give you an idea of what happens.
I haven't got torch, but normal lists behave similarly:
```
aa = [1,2,3,4]
bb = aa
bb = bb*2
print(bb, id(bb))
print(aa, id(aa))
cc =[1,2,3,4]
dd = cc
dd *= 2
print(cc, id(cc))
print(dd, id(dd))
```
Output for normal lists (again, without torch):
```
([1, 2, 3, 4, 1, 2, 3, 4], 140432043987888) # bb different ids
([1, 2, 3, 4], 140432043930400) # aa different ids
([1, 2, 3, 4, 1, 2, 3, 4], 140432043916032) # cc same id, same object
([1, 2, 3, 4, 1, 2, 3, 4], 140432043916032) # dd same id, same object
```
Upvotes: 0
|
2018/03/16
| 880 | 2,729 |
<issue_start>username_0: I have users and permissions tables. Relation between them is:
One user `HAS MANY` permissions.
I need to get all users who do not have the permission `hide`.
For example, users table has:
```
id ¦ name ¦ email ¦ password
---+ -------+-----------------+----------
1 ¦ Test 1 ¦ <EMAIL> ¦ 1234
2 ¦ Test 2 ¦ <EMAIL> ¦ 2345
3 ¦ Test 3 ¦ <EMAIL> ¦ 2345
4 ¦ Test 4 ¦ <EMAIL> ¦ 8888
5 ¦ Test 5 ¦ <EMAIL> ¦ 9876
```
---
and permissions table looks like:
```
user_id ¦ permission
---------+------------
1 ¦ read
2 ¦ edit
2 ¦ hide
4 ¦ edit
5 ¦ hide
```
This is what I tried so far:
```
SELECT *
FROM users
LEFT JOIN permissions
ON users.id = permissions.user_id
AND permissions.permission != 'hide'
```
but this still gets me the second user, because the second user also has the permission edit.
EXPECTED RESULT:
```
id ¦ name ¦ email ¦ password ¦ permission
---+ -------+-----------------+---------- +------------
1 ¦ Test 1 ¦ <EMAIL> ¦ 1234 ¦ read
3 ¦ Test 3 ¦ <EMAIL> ¦ 2345 ¦ null
4 ¦ Test 4 ¦ <EMAIL> ¦ 8888 ¦ edit
```
What is the best approach here?<issue_comment>username_1: ```
SELECT *
FROM users u
left JOIN permissions p
ON u.id = p.user_id
where u.id not in
(
select user_id
from permissions
where permission = 'hide'
group by user_id
)
```
output
```
id name email password user_id permission
1 Test 1 <EMAIL> 1234 1 read
3 Test 3 <EMAIL> 2345 (null) (null)
4 Test 4 <EMAIL> 8888 4 edit
```
Upvotes: 0 <issue_comment>username_2: Good case for `not exists`
```
SELECT *
FROM users u
left JOIN permissions p
ON u.id = p.user_id
where not exists ( select 1
from permissions p2
where p2.user_id = u.id
and p2.permission = 'hide'
)
```
Upvotes: 1 <issue_comment>username_3: One possible answer might be
```
select * from Users U
inner join Permissions P on P.user_id=U.id
WHere U.id NOT IN (SELECT U.ID FROM Users U
inner join Permissions P on P.user_id=U.id
WHere P.Permission ='HIDE')
```
1) This will return ids that have hide permissions
```
SELECT U.ID FROM Users U
inner join Permissions P on P.user_id=U.id
WHere P.Permission ='HIDE'
```
2) This will return all data except the above returned ids, which will be your expected result
```
select * from Users U
inner join Permissions P on P.user_id=U.id
WHere U.id NOT IN (SELECT U.ID FROM Users U
inner join Permissions P on P.user_id=U.id
WHere P.Permission ='HIDE')
```
Upvotes: 0
|
2018/03/16
| 920 | 3,864 |
<issue_start>username_0: Can I intercept notifications when my app is closed?
I need this to set the badge with the [ShortcutBadger](https://github.com/leolin310148/ShortcutBadger) library.
Thanks.<issue_comment>username_1: Do you mean something like this?
```
public class AppFcmMessagingsService extends FirebaseMessagingService {
private static final String TAG = "FirebaseMessageService";
@Override
public void onCreate() {
super.onCreate();
}
@Override
public void onMessageReceived(RemoteMessage remoteMessage) {
try {
if(remoteMessage.getData().size() > 0) {
final JSONObject jsonObject = new JSONObject(remoteMessage.getData().toString());
Log.d(TAG,"remoteMessage = " + jsonObject.toString());
int badgeCount = 1;
ShortcutBadger.applyCount(getApplicationContext(), badgeCount);
}
} catch (Exception e) {
Log.e(TAG, "onMessageReceived: ", e);
}
if(remoteMessage.getNotification() != null) {
int badgeCount = 1;
ShortcutBadger.applyCount(getApplicationContext(), badgeCount);
Log.d(TAG, "notification body : " + remoteMessage.getNotification().getBody());
}
}
}
```
Upvotes: -1 <issue_comment>username_2: There are 3 types of notifications:
* notification: Can be sent from the web console or any backend; it has predefined values. If the app is open, the behaviour is customizable in onMessageReceived; if the app is closed, it triggers a default notification.
* data: a key-value pair, Strings only. Can be sent from any backend. The behaviour is always defined in the onMessageReceived method.
* notification and data: a combination of the previous two. It will behave like a notification, and the data will be available as extras in the default launcher activity once the notification is clicked. Can be sent from the web console or any backend.

A push is a JSON document called the payload, which contains those objects:
payload: {
data: {...}
}
Yes, you can send yourself a data-type notification; it will always do what you write in the onMessageReceived method inside the MessagingService.
This doc should help you
<https://firebase.google.com/docs/cloud-messaging/concept-options?hl=es-419>
If you don't have a server, use Cloud Functions.
Since the default notification won't be shown, you will probably want to show your own.
If you also want to show a notification, then the NotificationCompat class must be called from inside onMessageReceived. The visual notification is not tied to the push message; in fact, a visual notification can be triggered by pressing a button.
To create a visual notification, the best approach is to let Android Studio do it for you. Right-click on the package where your activity .java files are, choose New, select UI Component, and there is the Notification option. It will create a basic notification template. Then call those methods inside onMessageReceived, passing the info that has to be shown to the user.
The docs about the class
<https://developer.android.com/reference/android/support/v4/app/NotificationCompat.Builder.html>
And you will probably find this error
[NotificationCompat.Builder deprecated in Android O](https://stackoverflow.com/questions/45462666/notificationcompat-builder-deprecated-in-android-o)
Upvotes: 0 <issue_comment>username_3: In case you never solved this, the problem is not how you are implementing it within your app, but how the JSON data payload is being sent. See [this question](https://stackoverflow.com/q/40311279/2480714) and the respective answers for why you are not receiving the messages while they are in the background.
The very short summary is: if you are receiving the `notification` payload, it will **never** trigger in the background. If you receive the `data` payload **without** `notification`, you can parse and perform actions while the app is in the background.
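For reference, the two payload shapes as sent to the legacy FCM HTTP endpoint look roughly like this (the token is a placeholder; the `data`-only message is the one handed to `onMessageReceived` even in the background):
```
// shows a tray notification itself when the app is in the background:
{ "to": "<device token>", "notification": { "title": "Hi", "body": "..." } }

// always delivered to onMessageReceived, foreground or background:
{ "to": "<device token>", "data": { "badge": "5" } }
```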
Upvotes: 0
|
2018/03/16
| 1,017 | 4,258 |
<issue_start>username_0: I know, I know, I should have typed
```
docker run -it ubuntu bash
```
But the fact remains, a container has been created, it is there, and it is stopped. It stops as soon as it is started, so there's no way to attach or exec in it.
Is it really the case that there is absolutely no way to change its state so that bash is started instead? This seems like kind of a showstopper to me. Or maybe there's something I didn't get about the marvelous possibilities of Docker that would make such a thing complicated to do? I doubt that.
Why is it that way?
|
2018/03/16
| 807 | 3,020 |
<issue_start>username_0: Assume we have got an array `arr` with all initial values `0`. Now we are given `n` operations - an operation consists of two numbers `a b`. It means that we are adding `+1` to the value of `arr[a]` and adding `-1` to the value of `arr[b]`.
Moreover, we can swap numbers in some operations, what means that we will add `-1` to `arr[a]` and `+1` to `arr[b]`.
We want to achieve a situation, in which all values of `arr` are equal to `0` even after all these operations. We are wondering if that is possible, and, if yes, what operations should we swap to achieve that.
Any thoughts on that?
Some example input:
```
3
1 2
3 2
3 1
```
should result in `YES R N R` (the answer on the first line, then one letter per operation), where `R` means to reverse that operation, and `N` means not to reverse it.
input:
```
3
1 2
2 3
3 2
```
results in answer `NO`.<issue_comment>username_1: Let each array element be a vertex in a graph and let the operation `(a,b)` be an edge from vertex `a` to `b` (there might be multiple edges between the same vertices). Traveling from vertex `a` means decreasing array element `a`, and traveling to vertex `a` means increasing array element `a`. If each vertex has an even number of edges and you find a cyclic path that visits each edge exactly once, you will have a zero total sum in the array.
Such a path is called an [Eulerian cycle](https://en.wikipedia.org/wiki/Eulerian_path) (Wikipedia). From Wikipedia: an undirected graph has an Eulerian cycle if and only if every vertex has even degree, and all of its vertices with nonzero degree belong to a single connected component. Since in your case each connected subgraph only needs its own Eulerian cycle, it is enough to count how many times each array index appears: if the count is even for each one of them, there is always a way to obtain a zero total in the array.
If you want to find out which operations to reverse, you need to find one of such paths and check which directions you travel the edges.
Upvotes: 1 <issue_comment>username_2: Just count the number of times the indices appear. If all indices appear in even numbers, then the answer is YES.
You can prove it by construction. You will need to build a pair list from the original pair list. The goal is to build the list such that you can match every index that appears on the left with an index that appears on the right.
Go from the first pair to the last. For each pair, try to match an index that appears an odd number of times.
For example, in your first example, each index appears twice, so the answer is YES. To build the list, you start with (1,2). Then you look at the pair (3,2) and you know that 2 appears once on the right, so you swap it to have 2 on the left: (2,3). For the last pair, you have (3,1) which matches 1 and 3 that appear only once so far.
Note that at the end, you can always find a matching pair, because each number appears in an even number. Each number should have a match.
In the second example, 2 appears three times. So the answer is NO.
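A minimal sketch of this parity check (added for illustration; it answers only YES/NO and does not construct the actual R/N assignment):
```
from collections import Counter

def feasible(ops):
    # Every index must occur an even number of times in total,
    # otherwise its +1/-1 contributions cannot all cancel.
    deg = Counter()
    for a, b in ops:
        deg[a] += 1
        deg[b] += 1
    return all(d % 2 == 0 for d in deg.values())

print(feasible([(1, 2), (3, 2), (3, 1)]))  # True  -> YES
print(feasible([(1, 2), (2, 3), (3, 2)]))  # False -> NO
```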
Upvotes: 0
|
2018/03/16
| 2,478 | 9,643 |
<issue_start>username_0: I have a problem with fragment and recyclerView and retrofit.
My first fragment, which shows a list of persons, does not show its content when I start the activity that hosts the fragments; when I move to the third fragment and come back to the first, the content appears... Also, when I use the search edit text and press search, the recycler view should refresh, but again the data only appears after I go to the third fragment and come back.
What is the problem? I'm using Retrofit 2 to get the data from the server.
my Fragment :
```
public class FragmentPerson extends Fragment{
private List<Person> persons = new ArrayList<>();
private RecyclerView recyclerView;
private Button Add;
//Datas
private PersonServise mTService;
public PersonAdapter adapter;
private Button search;
private EditText searchField;
@Override
public void onCreate(@Nullable Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setData();
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
View rootView = inflater.inflate(R.layout.fragment_person, container, false);
recyclerView = (RecyclerView)rootView.findViewById(R.id.ry_persons);
recyclerView.setLayoutManager(new LinearLayoutManager(getActivity()));
adapter = new PersonAdapter(persons);
recyclerView.setAdapter(adapter);
adapter.notifyDataSetChanged();
Add = (Button)rootView.findViewById(R.id.FP_BT_AddPerson);
Add.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent goToAdd = new Intent(getActivity(), AddPerson.class);
startActivity(goToAdd);
}
});
search = (Button)rootView.findViewById(R.id.FP_BT_Search);
searchField = (EditText)rootView.findViewById(R.id.FP_ET_SearchField);
search.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
try {
PersonProvider personProvider = new PersonProvider();
mTService = personProvider.getTService();
String s = searchField.getText().toString();
String[] words = s.split("\\s+");
for (int i = 0; i < words.length; i++) {
words[i] = words[i].replaceAll("[^\\w]", "");
}
if(words[1] == null){
words[1] = new String("a");
}
Toast.makeText(getActivity(), words[0], Toast.LENGTH_SHORT).show();
Toast.makeText(getActivity(), words[1], Toast.LENGTH_SHORT).show();
Call<List<Person>> call = mTService.getAllPersons(words[0],words[1]);
call.enqueue(new Callback<List<Person>>() {
@Override
public void onResponse(Call<List<Person>> call, Response<List<Person>> response) {
if(response.isSuccessful()){
if(response.body()!=null){
persons = response.body();
}
if(response.body()==null){
Toast.makeText(getActivity(), "No data", Toast.LENGTH_SHORT).show();
}
}
else {
Toast.makeText(getActivity(), "Response is NOT succesful", Toast.LENGTH_SHORT).show();
}
}
@Override
public void onFailure(Call<List<Person>> call, Throwable t) {
Toast.makeText(getActivity(), "Failure :" + t.getMessage(), Toast.LENGTH_SHORT).show();
}
});
}
catch (IndexOutOfBoundsException e){
Toast.makeText(getActivity(), e.getMessage(), Toast.LENGTH_SHORT).show();
}
}
});
return rootView;
}
private void setData(){
try {
PersonProvider personProvider = new PersonProvider();
mTService = personProvider.getTService();
Call<List<Person>> call = mTService.getAllPersons("a","a");
call.enqueue(new Callback<List<Person>>() {
@Override
public void onResponse(Call<List<Person>> call, Response<List<Person>> response) {
if(response.isSuccessful()){
if(response.body()!=null){
persons = response.body();
}
if(response.body()==null){
Toast.makeText(getActivity(), "No data", Toast.LENGTH_SHORT).show();
}
}
else {
Toast.makeText(getActivity(), "Response is NOT succesful", Toast.LENGTH_SHORT).show();
}
}
@Override
public void onFailure(Call<List<Person>> call, Throwable t) {
Toast.makeText(getActivity(), "Failure :" + t.getMessage(), Toast.LENGTH_SHORT).show();
}
});
}
catch (NullPointerException e){
Toast.makeText(getActivity(), "crash because: " + e, Toast.LENGTH_LONG).show();
}
}
}
```
It's my Activity with the TabLayout and fragments:
```
public class UserManageData extends AppCompatActivity {
//Declare of Tab
TabLayout tabLayout;
ViewPager viewPager;
protected void attachBaseContext(Context newBase) {
super.attachBaseContext(CalligraphyContextWrapper.wrap(newBase));
}
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.user_manage_data);
//Tabs Functions
viewPager = (ViewPager) findViewById(R.id.viewpager);
setupViewPager(viewPager);
tabLayout = (TabLayout) findViewById(R.id.UMD_TabLayout) ;
tabLayout.setupWithViewPager(viewPager);
}
void setupViewPager(ViewPager viewPager){
ViewPagerAdapter viewPagerAdapter = new ViewPagerAdapter(getSupportFragmentManager());
viewPagerAdapter.addFragment(new FragmentPerson(), "بارشمار"); // `new FragmentPerson()` should be inside `FragmentPagerAdapter.getItem()`
viewPagerAdapter.addFragment(new FragmentServise(), "سرویس"); // `new FragmentServise()` should be inside `FragmentPagerAdapter.getItem()`
viewPagerAdapter.addFragment(new FragmentShip(), "کشتی"); // `new FragmentShip()` should be inside `FragmentPagerAdapter.getItem()`
viewPagerAdapter.addFragment(new FragmentTransite(), "ترانزیت"); // `new FragmentTransite()` should be inside `FragmentPagerAdapter.getItem()`
viewPager.setAdapter(viewPagerAdapter);
// viewPager.setOffscreenPageLimit(4);
}
public class ViewPagerAdapter extends FragmentPagerAdapter {
private final List<Fragment> mFragmentList = new ArrayList<>(); // this line can cause crashes
private final List<String> mFragmentTitleList = new ArrayList<>();
public ViewPagerAdapter(FragmentManager manager) {
super(manager);
}
@Override
public Fragment getItem(int position) {
return mFragmentList.get(position);
}
@Override
public int getCount() {
return mFragmentList.size();
}
public void addFragment(Fragment fragment, String title) {
mFragmentList.add(fragment); // this line can cause crashes
mFragmentTitleList.add(title);
}
@Override
public CharSequence getPageTitle(int position) {
return mFragmentTitleList.get(position);
}
}
}
```
And the RecyclerView adapter:
```
public class PersonAdapter extends RecyclerView.Adapter<PersonAdapter.PersonViewHolder>
{
List<Person> persons;
public PersonAdapter(List<Person> persons) {
this.persons = persons;
}
@Override
public PersonViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.manage_data_item,parent,false);
return new PersonViewHolder(view);
}
@Override
public void onBindViewHolder(PersonViewHolder holder, final int position) {
Person person = persons.get(position);
holder.name.setText(person.getPersonFirstName());
holder.mobile.setText(person.getPersonMobile());
holder.delete.setOnClickListener(new View.OnClickListener()
{
@Override
public void onClick(View v)
{
}
});
}
@Override
public int getItemCount() {
return persons.size();
}
public class PersonViewHolder extends RecyclerView.ViewHolder {
private Button delete;
private Button edit;
private TextView name;
private TextView mobile;
public PersonViewHolder(View itemView) {
super(itemView);
name = (TextView) itemView.findViewById(R.id.MDI_TV_1);
mobile = (TextView) itemView.findViewById(R.id.MDI_TV_mobile);
delete = (Button) itemView.findViewById(R.id.MDI_BT_Delete);
edit = (Button) itemView.findViewById(R.id.MDI_BT_Edit);
}
}
}
```
|
2018/03/16
| 910 | 3,163 |
<issue_start>username_0: I am trying to automate my hybrid app, where I need to enter details in an input field but using `send_keys("Text value")` is not working in my case. I am getting the exception
`selenium.common.exceptions.WebDriverException: Message: unknown error: call function result missing 'value'`
```
def test_login(self):
self.driver.implicitly_wait(15)
loginemail = self.driver.find_element_by_id("userId")
loginpass = self.driver.find_element_by_id("userPassword")
email = loginemail.find_element_by_xpath("//*[@id='userId']/input")
email.click()
email.send_keys("<KEY>")
```
Here is the full error message:
```
File "/home/martial/PycharmProjects/pytestAndroid/test_login_android.py", line 45, in test_login
email.send_keys("<KEY>")
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/selenium/webdriver/remote/webelement.py", line 347, in send_keys
self._execute(Command.SEND_KEYS_TO_ELEMENT, {'value': keys_to_typing(value)})
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/selenium/webdriver/remote/webelement.py", line 491, in _execute
return self._parent.execute(command, params)
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/selenium/webdriver/remote/webdriver.py", line 238, in execute
self.error_handler.check_response(response)
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/appium/webdriver/errorhandler.py", line 29, in check_response
raise wde
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/appium/webdriver/errorhandler.py", line 24, in check_response
super(MobileErrorHandler, self).check_response(response)
File "/home/martial/PycharmProjects/pytestAndroid/venv/lib/python3.6/site-packages/selenium/webdriver/remote/errorhandler.py", line 193, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: call function result missing 'value'
```<issue_comment>username_1: I was getting this error when using chromedriver **v2.33** - when I updated to **v2.36** the error went away (this was on the mac using the chromedriver\_mac64.zip file downloaded from: <https://sites.google.com/a/chromium.org/chromedriver/downloads>)
Upvotes: 2 <issue_comment>username_2: **Question**: `org.openqa.selenium.WebDriverException: unknown error: call function result missing 'value'`. Please follow the steps below to resolve this error.
**Answer**: **Update the browser driver version**. Follow these steps to get the latest driver for your browser.
1. Go to: <https://www.seleniumhq.org/download/>
2. CTRL+F and search for "Third Party Browser Drivers". All browsers are listed there.
3. Click on the browser name/version; it redirects to another page (e.g. <http://chromedriver.storage.googleapis.com/index.html?path=2.36/>).
4. Download the appropriate driver for your operating system.
5. Place the driver in a folder from which your Selenium WebDriver scripts can load it, e.g. as sketched below. It then works without any error.
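A minimal sketch of step 5 in Python; the driver path is a placeholder, not a real location:

```python
from selenium import webdriver

# Point Selenium explicitly at the freshly downloaded driver
# (hypothetical path -- use wherever you placed chromedriver).
driver = webdriver.Chrome(executable_path="/path/to/chromedriver")
driver.get("https://example.com")
driver.quit()
```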
Thanks :) !!
Upvotes: 0
|
2018/03/16
| 541 | 2,082 |
<issue_start>username_0: I would like to know the relation between the size of TensorFlow's checkpoint file and the number of parameters of a model.
I have a model with `1.8` million parameters, which should require about `7` MB of memory (given that `(4 * 1.8 * 10^6) / 1024^2`). However, the saved checkpoint is apparently larger
```
model-5000.data-00000-of-00001 15MB
model-5000.index 15K
```
In Caffe, the fraction of 4 seems always right with `caffemodel`.<issue_comment>username_1: I would say the checkpoint file is not 100% comparable to a .caffemodel file. It will contain many more things besides the model params, e.g. data-loading nodes, weight initializers, saver nodes etc. Also, params will be saved as trainable variables, which takes up more space I would think.
To get something comparable to Caffe's .caffemodel you need to convert your variables to constants (called freezing the graph) and then save the graph as a binary proto file (.pb). Doing this will remove all unnecessary nodes in the graph.
To convert to .pb you can do something like the following:
```
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess, # The session is used to retrieve the weights
sess.graph_def, # The graph_def is used to retrieve the nodes
output_node_names # The output node name
)
# Finally we serialize and dump the output graph to the filesystem
with open('mode.pb', 'wb') as f:
f.write(output_graph_def.SerializeToString())
```
Upvotes: -1 <issue_comment>username_2: Did you use an optimizer with momentum? That will double the number of saved parameters (each parameter will have a corresponding momentum parameter). If you used Adam, the number of saved parameters will actually be tripled, because of the squared-gradient estimators. Other optimizers will vary in different ways.
You can [manually inspect the checkpoint](https://www.tensorflow.org/guide/checkpoint#manually_inspecting_checkpoints) to see what variables are being stored.
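For reference, a small sketch of such an inspection (my own, using the public `tf.train.list_variables` helper; the checkpoint prefix is a placeholder). It lists the model weights together with any optimizer slot variables, which makes the size overhead visible:

```python
import tensorflow as tf

ckpt_prefix = "model-5000"  # hypothetical checkpoint prefix
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)  # e.g. a weight plus Adam's m/v slot variables for it
```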
Upvotes: 1
|
2018/03/16
| 282 | 1,164 |
<issue_start>username_0: I am trying to call the REST API exposed by IBM Cognos TM1, using the HttpWebRequest object. I am getting a 401 when I try to pass the Authorization header with base64(user:password:namespaceId).
using (var client = new HttpClient())
{
var plainTextBytes = System.Text.Encoding.UTF8.GetBytes("username:password:camnamespace");
var encodeData= System.Convert.ToBase64String(plainTextBytes);
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
client.DefaultRequestHeaders.Add("Authorization", "CAMNamespace "+ encodeData);
//GET Method
HttpResponseMessage response = await client.GetAsync("http://serveraddress/api/v1/Cubes");
if (response.IsSuccessStatusCode)
{
var det = await response.Content.ReadAsStringAsync();
}
else
{
Console.WriteLine("Internal server Error");
}
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think you need something like Python's `verify=False` to trust the certificate of the response.
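A minimal sketch of that idea in Python with `requests`; the server URL and credentials are placeholders:

```python
import base64
import requests

token = base64.b64encode(b"username:password:camnamespace").decode()
resp = requests.get(
    "https://serveraddress/api/v1/Cubes",
    headers={"Authorization": "CAMNamespace " + token},
    verify=False,  # skips TLS certificate verification -- test environments only
)
print(resp.status_code)
```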
Upvotes: 0
|
2018/03/16
| 490 | 1,985 |
<issue_start>username_0: this question is identical to the following: [WKWebView catch HTTP error codes](https://stackoverflow.com/questions/32936168/wkwebview-catch-http-error-codes); unfortunately the methods in Obj-C are not applicable to Swift 4, thus the cited `WKNavigationResponse.response` is no longer of type `NSHTTPURLResponse` so it doesn't have the http status code.
But the issue is still the same: I need to get the http status code of the answer to detect if the expected page is loaded or not.
Please note that `webView(_ webView: WKWebView, didFailProvisionalNavigation navigation: WKNavigation!, withError error: Error)` delegate is not called in case of `404` but only in case of network issue (i.e. server offline); the `func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!)` is called instead.
Thanks a lot for your answers.<issue_comment>username_1: Using a `WKNavigationDelegate` on the `WKWebView` you can get the status code from the response each time one is received.
```
func webView(_ webView: WKWebView, decidePolicyFor navigationResponse: WKNavigationResponse,
decisionHandler: @escaping (WKNavigationResponsePolicy) -> Void) {
if let response = navigationResponse.response as? HTTPURLResponse {
if response.statusCode == 401 {
// ...
}
}
decisionHandler(.allow)
}
```
Upvotes: 7 [selected_answer]<issue_comment>username_2: `HTTPURLResponse` is a subclass of `URLResponse`. The Swift way of “conditional downcasting” is the conditional cast `as?`; this can be combined with the conditional binding `if let`:
```
func webView(_ webView: WKWebView, decidePolicyFor navigationResponse: WKNavigationResponse,
decisionHandler: @escaping (WKNavigationResponsePolicy) -> Void) {
if let response = navigationResponse.response as? HTTPURLResponse {
if response.statusCode == 401 {
// ...
}
}
decisionHandler(.allow)
}
```
Upvotes: 2
|
2018/03/16
| 310 | 1,207 |
<issue_start>username_0: I'm using Team Foundation Server 2017 Express edition. My question is related to dashboards. Is there any possibility to change the default dashboard or to copy a dashboard from another project? It's very time-consuming to configure the dashboard for every new project.
I'll be very grateful for any help.<issue_comment>username_1: Unfortunately, copying a dashboard is not supported for now.
There is already a **[user voice here](https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/15008376-ability-to-copy-tfs-dashboards)** suggesting the feature; you can vote it up to push the dev team to deliver it in a future release.
There is also an extension, [VSTS Copy Dashboard](https://marketplace.visualstudio.com/items?itemName=canarysautomationspvtltd.dashboardmigratortool), but unfortunately it's only available for VSTS.
To manage the dashboard you can refer to this link for details : [Add and manage dashboards](https://learn.microsoft.com/en-us/vsts/report/dashboards/dashboards).
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use the REST API: do a GET on an existing dashboard, edit the result, and then do a PUT/POST to create the new dashboard.
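A rough sketch of that GET/edit/create flow in Python; the dashboards endpoint, `api-version`, URLs and ids below are all assumptions to check against the REST docs for your TFS release:

```python
import requests

base = "http://tfsserver:8080/tfs/DefaultCollection"  # placeholder collection URL
auth = ("user", "password")  # or a personal access token

# GET an existing dashboard from the source project (id is a placeholder)
dashboard = requests.get(
    f"{base}/SourceProject/_apis/dashboard/dashboards/DASHBOARD_ID"
    "?api-version=4.1-preview",
    auth=auth,
).json()

# Drop the server-assigned id and create the copy in the target project
dashboard.pop("id", None)
requests.post(
    f"{base}/TargetProject/_apis/dashboard/dashboards?api-version=4.1-preview",
    json=dashboard,
    auth=auth,
)
```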
Upvotes: 1
|
2018/03/16
| 514 | 1,804 |
<issue_start>username_0: I wrote a method that calculates PI (π) by using infinite series:
```
public static decimal NilakanthaGetPI(ulong n)//Nilakantha Series
{
decimal sum = 0;
decimal temp = 0;
decimal a = 2, b = 3, c = 4;
for (ulong i = 0; i < n; i++)
{
temp = 4 / (a * b * c);
sum += i % 2 == 0 ? temp : -temp;
a += 2; b += 2; c += 2;
}
return 3 + sum;
}
```
The method works fine until the number of iterations reaches a few billion, which gives me an `OverflowException`. That is logical, because the value of `a * b * c` grows beyond what the `decimal` type can hold. It came to my mind to use `BigInteger`, but then I can't do the division `temp = 4 / (a * b * c)`. With this method I can calculate the first 25 decimal digits of PI (the `decimal` type can store 28 or 29 decimal digits). Is there a way to modify this method so that it can calculate more digits of PI?
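For illustration, here is the same series sketched in Python, whose `decimal` module lets you raise the working precision arbitrarily (unlike the fixed 28-29 significant digits of C#'s `System.Decimal`); this sketch only demonstrates the idea and is not a C# solution:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits

def nilakantha_pi(n):
    total = Decimal(3)
    a, b, c = Decimal(2), Decimal(3), Decimal(4)
    for i in range(n):
        term = Decimal(4) / (a * b * c)
        total += term if i % 2 == 0 else -term
        a += 2
        b += 2
        c += 2
    return total

print(nilakantha_pi(200000))
```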
|
2018/03/16
| 1,914 | 5,878 |
<issue_start>username_0: I am using the `webpack-dev-server` in development mode (watch). Some json and js files are crowding my build directory every time the server reloads like so :
`'hash'.hot-update.json`:
```json
{"h":"05e9f5d70580e65ef69b","c":{"main":true}}
```
`'hash'.hot-update.js`:
```js
webpackHotUpdate("main",{
/***/ "./client/components/forms/SignIn.js":
/*!*******************************************!*\
!*** ./client/components/forms/SignIn.js ***!
\*******************************************/
/*! no static exports found */
/***/ (function(module, exports, __webpack_require__) {
"use strict";
eval("\n\nObject.defineProperty(exports, \"__esModule\", {\n\tvalue: true\n});\n\nvar _createClass = function () { function defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } } return function (Constructor, protoProps, staticProps) { if (protoProps) defineProperties(Constructor.prototype, protoProps); if (staticProps) defineProperties(Constructor, staticProps); return Constructor; }; }();\n\nvar _react = __webpack_require__(/*! react */ \"./node_modules/react/index.js\");\n\nvar _react2 = _interopRequireDefault(_react);\n\nfunction _interopRequireDefault(obj) { return obj && obj.__esModule ? obj : { default: obj }; }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _possibleConstructorReturn(self, call) { if (!self) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return call && (typeof call === \"object\" || typeof call === \"function\") ? call : self; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function, not \" + typeof superClass); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, enumerable: false, writable: true, configurable: true } }); if (superClass) Object.setPrototypeOf ? Object.setPrototypeOf(subClass, superClass) : subClass.__proto__ = superClass; }\n\nvar SignIn = function (_Component) {\n\t_inherits(SignIn, _Component);\n\n\tfunction SignIn(props) {\n\t\t_classCallCheck(this, SignIn);\n\n\t\tvar _this = _possibleConstructorReturn(this, (SignIn.__proto__ || Object.getPrototypeOf(SignIn)).call(this, props));\n\n\t\t_this.state = { value: '' };\n\n\t\t_this.handleChange = _this.handleChange.bind(_this);\n\t\t_this.handleSubmit = _this.handleSubmit.bind(_this);\n\t\treturn _this;\n\t}\n\n\t_createClass(SignIn, [{\n\t\tkey: 'handleChange',\n\t\tvalue: function handleChange(event) {\n\t\t\tthis.setState({ value: event.target.value });\n\t\t}\n\t}, {\n\t\tkey: 'handleSubmit',\n\t\tvalue: function handleSubmit(event) {\n\t\t\tfetch('http://localhost:3000/', {\n\t\t\t\tmethod: 'POST',\n\t\t\t\theaders: {\n\t\t\t\t\tAccept: 'application/json',\n\t\t\t\t\t'Content-Type': 'application/json'\n\t\t\t\t},\n\t\t\t\tbody: JSON.stringify({\n\t\t\t\t\tname: this.state.value\n\t\t\t\t})\n\t\t\t}).then(function (res) {\n\t\t\t\tconsole.log(res);\n\t\t\t}).catch(function (err) {\n\t\t\t\treturn err;\n\t\t\t});\n\t\t\tevent.preventDefault();\n\t\t}\n\t}, {\n\t\tkey: 'render',\n\t\tvalue: function render() {\n\t\t\treturn _react2.default.createElement(\n\t\t\t\t'div',\n\t\t\t\tnull,\n\t\t\t\t_react2.default.createElement(\n\t\t\t\t\t'form',\n\t\t\t\t\t{ onSubmit: this.handleSubmit },\n\t\t\t\t\t_react2.default.createElement(\n\t\t\t\t\t\t'label',\n\t\t\t\t\t\tnull,\n\t\t\t\t\t\t'Name:',\n\t\t\t\t\t\t_react2.default.createElement('input', { type: 'text', value: this.state.value, onChange: this.handleChange })\n\t\t\t\t\t),\n\t\t\t\t\t_react2.default.createElement('input', { type: 'submit', value: 'Submit' 
})\n\t\t\t\t),\n\t\t\t\t_react2.default.createElement(\n\t\t\t\t\t'h1',\n\t\t\t\t\tnull,\n\t\t\t\t\t' '\n\t\t\t\t)\n\t\t\t);\n\t\t}\n\t}]);\n\n\treturn SignIn;\n}(_react.Component);\n\nexports.default = SignIn;\n\n//# sourceURL=webpack:///./client/components/forms/SignIn.js?");
/***/
})
})
```
I can't understand where does are coming from since i don't use hashed files.<issue_comment>username_1: What you are seeing here is webpack's Hot Module Replacement feature which generates diffs of JavaScript files that have changed, and then submits them as patches (along with metadata). You can learn more about this at webpack.js.org/concepts (under HMR)
Upvotes: 4 [selected_answer]<issue_comment>username_2: This can happen when you ask `webpack-dev-server` to write on disk while you are in development mode. If, for example, you have something like this:
```js
devServer: {
devMiddleware: {
writeToDisk: true,
},
},
```
If you don't want them, you could write a function with a `regExp`. The example below tells to write on disk only files that don't contain the string `hot-update`.
```js
devServer: {
devMiddleware: {
writeToDisk: (filePath) => {
return !/hot-update/i.test(filePath); // you can change it to whatever you need
},
},
},
```
Upvotes: 2 <issue_comment>username_3: You can use webpack-clean-plugin instead of using option clean:true in your webpack config.
This plugin helps us clean everything, including files generated by hot module replacement. Of course we will still have those two files, but they will get replaced each time we make a change to any file.
```
npm install --save-dev clean-webpack-plugin
const { CleanWebpackPlugin } = require("clean-webpack-plugin");
plugins: [
new CleanWebpackPlugin()
]
```
Upvotes: 0
|
2018/03/16
| 790 | 2,378 |
<issue_start>username_0: I have a schema with three tables:
>
> **Project** (*project\_id*,proj\_name,chief\_arch)
>
> **Employee** (*emp\_id*,emp\_name)
>
> **Assigned-to** (*project\_id,emp\_id*)
>
>
>
I have created all tables with data on <http://sqlfiddle.com/#!9/3f21e>
You can view the all data (select \* ...) on <http://sqlfiddle.com/#!9/3f21e/1>
*Please first view the tables and data on SQLFIDDLE.*
I have an existing query to get employee names who work on *at least one project* where employee 107 also worked:
```
select EMP_NAME from employee natural join `assigned-to`
WHERE EMP_ID<>'107' AND
PROJECT_ID IN(
SELECT PROJECT_ID FROM `assigned-to`
WHERE EMP_ID='107'
)
GROUP BY EMP_NAME;
```
**[SQLFiddle](http://sqlfiddle.com/#!9/3f21e/5)**
But now I need to solve a slightly different problem. I need the employee names who on work on **ALL** projects that employee 107 works on.
How can I write a query for this problem?<issue_comment>username_1: You can do this by counting the projects other employees in common with the employee and then selecting only those where the count exactly matches the original employees count.
```
SELECT EMP_ID FROM `ASSIGNED-TO` WHERE PROJECT_ID IN
(SELECT PROJECT_ID FROM `ASSIGNED-TO` WHERE EMP_ID = '107')
AND EMP_ID <> '107'
GROUP BY EMP_ID
HAVING COUNT(*) = (SELECT COUNT(*) FROM `ASSIGNED-TO` WHERE EMP_ID = '107')
```
Upvotes: 0 <issue_comment>username_2: Try this:
```
SELECT EMP_NAME
FROM EMPLOYEE NATURAL JOIN `ASSIGNED-TO`
WHERE EMP_ID<>'107' AND
PROJECT_ID IN (
SELECT PROJECT_ID FROM `ASSIGNED-TO`
WHERE EMP_ID='107'
)
GROUP BY EMP_NAME
HAVING COUNT(*)=(
SELECT COUNT(*)
FROM `ASSIGNED-TO`
WHERE EMP_ID='107'
);
```
See it run on [SQL Fiddle](http://sqlfiddle.com/#!9/3f21e/30).
Upvotes: 2 [selected_answer]<issue_comment>username_3: This will work too. I want to validate if the project id in assigned-to is found in project table.
```
select e.emp_name
from employee e
natural join `assigned-to` a
where emp_id <> 107
and a.project_id in (
select project_id
from (
select emp_id, project_id
from employee natural join `assigned-to` natural join project
where emp_id = 107 ) t
)
group by e.emp_id
having count(project_id) = (select count(project_id) from `assigned-to` where emp_id = 107)
```
Upvotes: 0
|
2018/03/16
| 744 | 2,370 |
<issue_start>username_0: Currently, I have some VBA code to auto fill the formulas in columns AE:AH whenever more data is posted into the sheet. I am attempting to future proof it and make the range more dynamic in case we were to add more formulas. Here is the current code.
```
Sheets(twtsumsheet).Select
usedRows1 = Worksheets(twtsumsheet).Cells(Worksheets(twtsumsheet).Rows.Count, "A").End(xlUp).Row
Range("AE2:AH2").Select
Selection.AutoFill Destination:=Worksheets(twtsumsheet).Range(Cells(2, 31), Cells(usedRows1, 34)), Type:=xlFillDefault
```
So for example, currently formulas are in columns AE:AH. Let's say we were to add one more to get another calculation off the data, so now we have formulas in columns AE:AI. My macro would continue to just autofill the formulas in AE:AH due to this "Range("AE2:AH2").Select" and this "Cells(2, 31), Cells(usedRows1, 34)"
I'm new to VBA coding so I'm not getting any good ideas for it. Any help is appreciated.<issue_comment>username_1: You can do this by counting the projects other employees in common with the employee and then selecting only those where the count exactly matches the original employees count.
```
SELECT EMP_ID FROM `ASSIGNED-TO` WHERE PROJECT_ID IN
(SELECT PROJECT_ID FROM `ASSIGNED-TO` WHERE EMP_ID = '107')
AND EMP_ID <> '107'
GROUP BY EMP_ID
HAVING COUNT(*) = (SELECT COUNT(*) FROM `ASSIGNED-TO` WHERE EMP_ID = '107')
```
Upvotes: 0 <issue_comment>username_2: Try this:
```
SELECT EMP_NAME
FROM EMPLOYEE NATURAL JOIN `ASSIGNED-TO`
WHERE EMP_ID<>'107' AND
PROJECT_ID IN (
SELECT PROJECT_ID FROM `ASSIGNED-TO`
WHERE EMP_ID='107'
)
GROUP BY EMP_NAME
HAVING COUNT(*)=(
SELECT COUNT(*)
FROM `ASSIGNED-TO`
WHERE EMP_ID='107'
);
```
See it run on [SQL Fiddle](http://sqlfiddle.com/#!9/3f21e/30).
Upvotes: 2 [selected_answer]<issue_comment>username_3: This will work too. I want to validate if the project id in assigned-to is found in project table.
```
select e.emp_name
from employee e
natural join `assigned-to` a
where emp_id <> 107
and a.project_id in (
select project_id
from (
select emp_id, project_id
from employee natural join `assigned-to` natural join project
where emp_id = 107 ) t
)
group by e.emp_id
having count(project_id) = (select count(project_id) from `assigned-to` where emp_id = 107)
```
Upvotes: 0
|
2018/03/16
| 647 | 2,014 |
<issue_start>username_0: I've just launched my website's mobile version for browsers, which looks as expected in all the devices I tried, including many different sizes through the Device Emulator of Chrome DevTools.
However, a user just reported that he is seeing both images and text far too big with his OnePlus 3T smartphone.
Expected smartphone layout:
[](https://i.stack.imgur.com/0DziY.png)
Layout with OnePlus 3T:
[](https://i.stack.imgur.com/b6UvN.jpg)
[](https://i.stack.imgur.com/02GPY.jpg)
I tried to reproduce it in Chrome's Device Emulator, but without success.
I am using px as units in the CSS for `font-size` and `height`.
OnePlus 3T's screen has the following specs, which I think are quite normal:
>
> Size 5.5 inches, 83.4 cm2 (~73.1% screen-to-body ratio)
>
>
> Resolution 1080 x 1920 pixels, 16:9 ratio (~401 ppi density)
>
>
>
Any idea of why everything is appearing so big?
[Here](http://www.travelinho.com/en/results/?Orig=Heidelberg&Dest=Mallorca&OutboundDate=25-05-2018) a link to one of such layouts, in case anybody wants to have a look at it.
Update
------
Apparently, the problem is related to the meta tag
```
```
Somehow that Smartphone is applying the meta tag, or something that creates the same effect...<issue_comment>username_1: Might sound obvious but has he changed the font size or Display size under Android Settings > Display?
Upvotes: 1 <issue_comment>username_2: I would use relative measurements like **em** or **rem** instead of **px**. You can scale down the page by reducing the fontsize on smaller devices.
```
@media only screen and (max-width: 600px) {
.icon-selector {
font-size: 0.8rem;
}}
```
Upvotes: 1 <issue_comment>username_3: By adding the following line in the header, the problem was solved:
```
<meta name="viewport" content="width=device-width, initial-scale=1">
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 314 | 1,112 |
<issue_start>username_0: Is it possible to make a SAS Stored Process available via a clean, nicer looking URL, but still be hosted on the server?
The native URL is something like <http://[yourMachineName]:8080/SASStoredProcess/do?_PROGRAM=/WebApps/MyWebApp/Foo>.
I'd prefer a nicer looking URL like <http://[yourMachineName]:8080/SASStoredProcess/WebApps/MyWebApp/Foo>
The documentation for the overall process at <http://documentation.sas.com/?docsetId=stpug&docsetTarget=dbgsrvlt.htm&docsetVersion=9.4&locale=en> doesn't seem to address the issue.
|
2018/03/16
| 446 | 1,308 |
<issue_start>username_0: I have allocated a 1D array of structs by 2 methods, which I believe should give me contiguous memory allocation. But when I output the memory locations, I see that Type 1 allocates with a difference of 4 bytes, while Type 2 takes 32 bytes. Could you please explain why this happens?
Thanks!!
```
#include <stdio.h>
#include <stdlib.h>

typedef struct{
    int a;
}str;

int main(){
    int i;
    str **str_ptr;
    str *str_ptr1;

    str_ptr = (str **)malloc(5*sizeof(str *));
    for(i=0;i<5;i++){str_ptr[i]=malloc(sizeof(str));}

    str_ptr1 = (str *)malloc(5*sizeof(str));

    printf("Type 1 : %p and %p \n",str_ptr1,str_ptr1+1);
    printf("Type 2 : %p and %p \n",str_ptr[0],str_ptr[1]);
}
```
Type 1 : 0xb830e0 and 0xb830e4
Type 2 : 0xb83040 and 0xb83060
|
2018/03/16
| 838 | 2,973 |
<issue_start>username_0: I'm trying to implement an autosave feature in CKEditor 5, where a save only occurs if changes were made and after the editor has lost focus.
How could I do this? The documentation has been very confusing to me.
This is the closest I've gotten:
```
function onChange(el,editor) {
editor.document.once('change',function(evt,data) {
$(el).one('blur',function() {
saveFunction();
onChange(el,editor);
});
});
}
onChange(el,editor);
```
The problem with my solution is the blur event fires whenever CKEditor brings up a modal.<issue_comment>username_1: To track the focused element of the [editor ui](https://docs.ckeditor.com/ckeditor5/latest/api/module_core_editor_editorui-EditorUI.html) you can use the [focusTracker](https://docs.ckeditor.com/ckeditor5/latest/api/module_core_editor_editorui-EditorUI.html#member-focusTracker) (available on `editor.ui.focusTracker`). It tracks various parts of the editor that are currently focused.
The [focusTracker.isFocused](https://docs.ckeditor.com/ckeditor5/latest/api/module_utils_focustracker-FocusTracker.html#member-isFocused) is `true` whenever any of the editor's registered components are focused. For the [classic editor build](https://docs.ckeditor.com/ckeditor5/latest/examples/builds/classic-editor.html) the focused elements might be one of:
* editing area,
* toolbar,
* contextual toolbar.
```js
editor.ui.focusTracker.on( 'change:isFocused', ( evt, name, isFocused ) => {
if ( !isFocused ) {
// Do whatever you want with current editor data:
console.log( editor.getData() );
}
} );
```
Upvotes: 4 <issue_comment>username_2: Based on username_1's answer, fulfilling Adam's 2nd criterion (data has changed):
```
var editor_changed = false;
editor.model.document.on('change:data', () => { editor_changed = true; });
editor.ui.focusTracker.on('change:isFocused', (evt, name, isFocused) => {
if(!isFocused && editor_changed) {
editor_changed = false;
console.log(editor.getData());
}
} );
```
Upvotes: 3 <issue_comment>username_3: Here is a complete example based on both the answers of @username_2 and @username_1. It uses the CKEditor 5 Inline Editor. As <NAME> mentioned in his question, the documentation is not very clear.
```
<div class="editor">
    <p>Lorem ipsum dolor sit amet.</p>
    <p>Sed ut perspiciatis unde omnis iste natus.</p>
</div>

<script>
var text_changed = false;

InlineEditor
    .create(document.querySelector('.editor'), {
    })
    .then(editor => {
        window.editor = editor;
        detectTextChanges(editor);
        detectFocusOut(editor);
    })
    .catch(error => {
        console.error(error);
    });

function detectFocusOut(editor) {
    editor.ui.focusTracker.on('change:isFocused', (evt, name, isFocused) => {
        if (!isFocused && text_changed) {
            text_changed = false;
            console.log(editor.getData());
        }
    });
}

function detectTextChanges(editor) {
    editor.model.document.on('change:data', () => {
        text_changed = true;
    });
}
</script>
```
Upvotes: 2
|
2018/03/16
| 1,536 | 4,975 |
<issue_start>username_0: I have a map, filled with values:
```
std::map<std::tuple<int, int, double>, double> Cache;
```
When the size of map is greater than 100, I want to delete/erase only the map elements whose values in the key tuple are all more than 10. Here is how it can be done with `while` and `if`:
```
if (Cache.size() > 100) {
auto it = Cache.begin();
while (it != Cache.end()) {
auto myCache = it->first;
auto actualX = std::get<0>(myCache);
auto actualY = std::get<1>(myCache);
auto actualZ = std::get<2>(myCache);
        if (abs(actualX) > 10 || abs(actualY) > 10 || abs(actualZ) > 10) {
Cache.erase(it++);
} else {
it++;
}
}
}
```
How can one implement the same behavior using [`find_if`](http://en.cppreference.com/w/cpp/algorithm/find) (or another algorithm) and a lambda function?
### Update2
After going through the wikipedia link provided in the comment section i have implemented the following code, but its giving an compiler error :(
```
Cache.erase(std::remove_if(Cache.begin(),
Cache.end(),
[=](auto &T) -> bool {
auto myCache = T->first;
auto actualX = std::get<0>(myCache);
auto actualY = std::get<1>(myCache);
return std::abs(actualX) > 10|| std::abs(actualY) > 10 || std::abs(actualZ) > 10;
}),
Cache.end());
```
Update 3
--------
The above cannot be used with a map ;(<issue_comment>username_1: Something like the following should work
```
namespace myUtility {
template< typename C, typename P >
void custom_erase_if( C& items, const P& predicate ) {
for( auto it = items.begin(); it != items.end(); ) {
if( predicate(*it) ) it = items.erase(it);
else ++it;
}
}
}
```
Then,
```
myUtility::custom_erase_if( Cache, [](const auto& x) {
auto myCache = x.first;
auto actualX = std::get<0>(myCache);
auto actualY = std::get<1>(myCache);
auto actualZ = std::get<2>(myCache);
        return std::abs(actualX) > 10 ||
               std::abs(actualY) > 10 ||
               std::abs(actualZ) > 10;
} );
```
Upvotes: 0 <issue_comment>username_2: You can use `std::find_if` with predicate:
```
auto pred = []( const auto &t ) {
    return std::abs( std::get<0>( t.first ) ) > 10 ||
           std::abs( std::get<1>( t.first ) ) > 10 ||
           std::fabs( std::get<2>( t.first ) ) > 10.0; };
for( auto it = Cache.begin(); true; it = Cache.erase( it ) ) {
it = std::find_if( it, Cache.end(), pred );
if( it == Cache.end() ) break;
}
```
Upvotes: 1 <issue_comment>username_3: What you should do is arrange the map so that the elements you want are at the back of the map.
```
struct over_10 {
    bool operator()( std::tuple<int, int, double> const& t) const {
        return
            (std::get<0>(t)>10 || std::get<0>(t)<-10) // min int doesn't like abs
            && (std::get<1>(t)>10 || std::get<1>(t)<-10)
            && fabs(std::get<2>(t))>10.0;
    }
};
template<class Split>
struct split_order {
    template<class Lhs, class Rhs>
    bool operator()( Lhs const& lhs, Rhs const& rhs ) const {
        auto split = Split{};
        bool l = split(lhs), r = split(rhs);
        return std::tie( l, lhs ) < std::tie( r, rhs );
    }
};
using split_on_10_order = split_order<over_10>;
```
what this does is it splits the sorting so that things that pass the `over_10` test are all ordered last. Everything is ordered sensibly within each "split".
Now:
```
std::map, double, split\_on\_10\_order> Cache;
auto it = Cache.lower\_bound(
std::make\_tuple(
std::numeric\_limits::min(),
std::numeric\_limits::min(),
std::numeric\_limits::min()
)
);
Cache.erase( it, Cache.end() );
```
and done.
This takes about O(lg n) time to find the split, then O(m) to delete the m elements that should be deleted.
Upvotes: 0 <issue_comment>username_4: Unfortunately, as pointed out by username_2, `std::remove_if` can't be used with maps.
I think your solution (a loop to erase the unwanted elements) isn't bad, but if you want to use the algorithm library, the best that comes to my mind is to use `std::copy_if()` to **move** the elements you want to preserve into a temporary map and then swap the two maps.
I mean... something like
```
if ( Cache.size() > 100 )
{
    std::map<std::tuple<int, int, double>, double> CacheTmp;

    std::copy_if(std::make_move_iterator(Cache.begin()),
                 std::make_move_iterator(Cache.end()),
                 std::inserter(CacheTmp, CacheTmp.end()),
                 [] (auto const & p)
                 { return (std::abs(std::get<0>(p.first)) <= 10)
                       && (std::abs(std::get<1>(p.first)) <= 10)
                       && (std::abs(std::get<2>(p.first)) <= 10.0); });
Cache.swap(CacheTmp);
}
```
Unfortunately, `auto` parameters for lambda functions are only available starting from C++14, so if you need a C++11 solution, instead of `auto const & p` your lambda should receive a `std::pair<std::tuple<int, int, double>, double> const & p`
Upvotes: 0
|
2018/03/16
| 1,221 | 3,957 |
<issue_start>username_0: I have an Application Load balancer and 1 EC2 instance currently behind it. Before, I was using another CA for receiving SSL certificates for my domain which was running on the EC2 with nginx. Now, I use SSL from Amazon's Certificate Manager for the load balancer's listener.
Should the domain's certificate be purchased individually on each EC2 instance for https connection to my application after moving it behind the ELB?
Is there any other way to establish an HTTPS connection without using separate certificates on the EC2 instances, with just the ACM certificate on the load balancer?
|
2018/03/16
| 702 | 2,673 |
<issue_start>username_0: Is there way to call the onclick event from a raycast?
I have world scale canvas attached to an object that has images with buttons.
When I select the button with the mouse my event function is called.
But now I am trying to adjust the code so I can avoid using mouse all together.
```cs
[SerializeField]
private Image cursor;
[SerializeField]
private LayerMask uI;
if (Input.GetButtonDown("Fire1"))
{
Ray ray = Camera.main.ScreenPointToRay(cursor.transform.position);
RaycastHit hit;
if (Physics.Raycast(ray, out hit, 100,uI))
{
Debug.Log(hit.collider);
}
}
```
This is the code I use to raycast into the world space canvas which works and returns the images but I am not sure how I can now call the onclick event since I am not using the mouse but a image instead
Is there way to call the onclick event attached to image that I raycast to or do I have to redo my entire script and not use on click events?<issue_comment>username_1: I believe that the Button component does what you seek as well.
But to answer your question about raycasts.
>
> Is there way to call the onclick event from a raycast?
>
>
>
```cs
//Example raycast code
//Variables
public RayCastHit _Hit;
public LayerMask _RaycastCollidableLayers; //Set this in inspector, makes you able to say which layers should be collided with and which not.
public float _CheckDistance = 100f;
//Method
void PerformRaycast(){
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
Physics.Raycast(ray, out _Hit, _CheckDistance + 0.1f, _RaycastCollidableLayers);
if (_Hit.collider == null)
{
Debug.Log("Raycast hit nothing");
return null;
}
GameObject go = _Hit.collider.gameObject;
}
```
Your raycast hits an object and stores a reference to that object. As you can see in my example you can fetch the game object through the RaycastHit.
When you have the game object you can access any scripts on it through GetComponent<>(), this means you can say.
```cs
GameObject go = _Hit.collider.gameObject;
OnClickScript ocs = go.GetComponent();
ocs.OnClick();
```
If I misunderstood your query, please let me know.
Upvotes: 1 <issue_comment>username_2: Hen you hit a GameObject, get its list of components that implement OnPointerClick event
```
IPointerClickHandler clickHandler=collider.gameObject.GetComponent();
```
once you have that reference (its not null) you can call
```
PointerEventData pointerEventData=new PointerEventData(EventSystem.current);
clickHandler.OnPointerClick(pointerEventData)
```
That should do the job
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,235 | 5,454 |
<issue_start>username_0: I have a code for Thread Pool example as follows
```
public class RunThreads{
static final int MAX_TASK = 3;
public static void main(String[] args)
{
Runnable r1 = new Task("task 1");
Runnable r2 = new Task("task 2");
Runnable r3 = new Task("task 3");
Runnable r4 = new Task("task 4");
Runnable r5 = new Task("task 5");
ExecutorService pool = Executors.newFixedThreadPool(MAX_TASK);
pool.execute(r1);
pool.execute(r2);
pool.execute(r3);
pool.execute(r4);
pool.execute(r5);
pool.shutdown();
}}
```
and
```
class Task implements Runnable{
private String name;
public Task(String s)
{
name = s;
}
public void run()
{
try
{
for (int i = 0; i<=5; i++)
{
if (i==0)
{
Date d = new Date();
SimpleDateFormat ft = new SimpleDateFormat("hh:mm:ss");
System.out.println("Initialization Time for"
+ " task name - "+ name +" = " +ft.format(d));
//prints the initialization time for every task
}
else
{
Date d = new Date();
SimpleDateFormat ft = new SimpleDateFormat("hh:mm:ss");
System.out.println("Executing Time for task name - "+
name +" = " +ft.format(d));
// prints the execution time for every task
}
Thread.sleep(1000);
}
System.out.println(name+" complete");
}
catch(InterruptedException e)
{
e.printStackTrace();
}
}}
```
Then I create a small agent to instrument Java's `ThreadPoolExecutor`, as follows:
```
public class Agent {
public static void premain(String arguments, Instrumentation instrumentation) {
new AgentBuilder.Default()
.with(new AgentBuilder.InitializationStrategy.SelfInjection.Eager())
.type((ElementMatchers.nameContains("ThreadPoolExecutor")))
.transform(
new AgentBuilder.Transformer.ForAdvice()
.include(MonitorInterceptor.class.getClassLoader())
.advice(ElementMatchers.any(), MonitorInterceptor.class.getName())
).installOn(instrumentation);
}}
```
Can we instrument a core Java class like `ThreadPoolExecutor` using Byte Buddy? When I debug, the `ThreadPoolExecutor` class is working, but when I try this using the agent, the `ThreadPoolExecutor` class is never instrumented.
Edit
This is my MonitorInterceptor
```
public class MonitorInterceptor {
@Advice.OnMethodEnter
static void enter(@Advice.Origin String method) throws Exception {
System.out.println(method);
}
```
Edit
```
new AgentBuilder.Default()
.with(new AgentBuilder.InitializationStrategy.SelfInjection.Eager())
.with(AgentBuilder.Listener.StreamWriting.toSystemError())
.ignore(none())
.type((ElementMatchers.nameContains("ThreadPoolExecutor")))
.transform((builder, typeDescription, classLoader, module) -> builder
.constructor(ElementMatchers.any())
.intercept(Advice.to(MyAdvice.class))
.method(ElementMatchers.any())
.intercept(Advice.to(MonitorInterceptor.class))
).installOn(instrumentation);
```<issue_comment>username_1: Unless you configure it explicitly, Byte Buddy does not instrument core Java classes. You can change that by explicitly setting an ignore matcher that does not exclude such classes.
In this context, it should not be necessary to configure an initialization strategy when using Advice.
You might also want to limit the scope of your advice, right now you intercept any method or constructor.
For finding out what is wrong, you can also define an AgentBuilder.Listener to be notified of errors.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Using username_1's answer I solved this problem. I created an agent as follows:
```
new AgentBuilder.Default()
.ignore(ElementMatchers.none())
.type(ElementMatchers.nameContains("ThreadPoolExecutor"))
.transform((builder, type, classLoader, module) -> builder
.visit(Advice.to(ThreadPoolExecutorAdvice.class).on(ElementMatchers.any()))
).installOn(instrumentation);
```
Using this we can instrument a core Java class. From this, the constructor is reported like this:
```
java.util.concurrent.ThreadPoolExecutor$Worker(java.util.concurrent.ThreadPoolExecutor,java.lang.Runnable)
```
But in the code the constructor is not like that; it is
```
public ThreadPoolExecutor(int corePoolSize,
int maximumPoolSize,
long keepAliveTime,
TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
Executors.defaultThreadFactory(), defaultHandler);
}
```
So I expect something like
```
java.util.concurrent.ThreadPoolExecutor(int,int,long,java.util.concurrent.TimeUnit,java.util.concurrent.BlockingQueue)
```
I used Javassist to get the constructor signature in the same form as what Byte Buddy reports. So, using `.ignore(ElementMatchers.none())` and `.visit(Advice.to(ThreadPoolExecutorAdvice.class).on(ElementMatchers.any()))`, we can get all constructors and methods of a core Java class.
Upvotes: 1
|
2018/03/16
| 1,070 | 4,202 |
<issue_start>username_0: ```
[
{
"userId": 1,
"id": 1,
"title": "xxxxx",
"body": "yyyyy"
},
{
```
My JSON data is like that, and I'm using Alamofire for loading the data and ObjectMapper for the mapping.
I created a Swift file for the mapping like this:
```
import Foundation
import ObjectMapper
class TrainSchedules: Mappable {
var mySchedules: [Schedules]
required init?(map: Map) {
mySchedules = []
}
func mapping(map: Map) {
mySchedules <- map["schedule"]
}
}
class Schedules: Mappable {
var userId: String
var id: String
var title: String
var body: String
required init?(map: Map) {
userId = ""
id = ""
title = ""
body = ""
}
func mapping(map: Map) {
userId <- map["userId"]
id <- map["id"]
title <- map["title"]
body <- map["body"]
}
}
```
and my view controller like that:
```
import Alamofire
import ObjectMapper
class ViewController: UIViewController, UITableViewDelegate, UITableViewDataSource {
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = UITableViewCell()
cell.textLabel?.text = "Landmark"
return cell
}
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return 10
}
@IBOutlet weak var tableViewData: UITableView!
override func viewDidLoad() {
super.viewDidLoad()
tableViewData.dataSource = self
tableViewData.delegate = self
let jsonDataUrl = "https://jsonplaceholder.typicode.com/posts"
Alamofire.request(jsonDataUrl).responseJSON { response in
print("Request: \(String(describing: response.request))")
print("Response: \(String(describing: response.response))")
print("Result: \(response.result)")
if let json = response.result.value {
print("JSON: \(json)")
}
if let data = response.data, let utf8Text = String(data: data, encoding: .utf8) {
print("Data: \(utf8Text)")
}
}
}
}
```
I tried to display the JSON data in the table view. The data is coming in; however, I couldn't add it to the table view. What should I do to solve this problem?
You can get Alamofire response as ObjectMapper objects and then pass use this result for showing data in tableview.
Upvotes: 0 <issue_comment>username_2: You don't need TrainSchedules model.
Your model:
```
import Foundation
import ObjectMapper
class Schedule: Mappable {
var userId: String
var id: String
var title: String
var body: String
required init?(map: Map) {
userId = ""
id = ""
title = ""
body = ""
}
func mapping(map: Map) {
userId <- map["userId"]
id <- map["id"]
title <- map["title"]
body <- map["body"]
}
}
```
Your ViewController:
```
import Alamofire
import ObjectMapper
import UIKit
class ViewController: UIViewController, UITableViewDelegate, UITableViewDataSource {
@IBOutlet weak var tableViewData: UITableView!
var schedules: [Schedule]?
override func viewDidLoad() {
super.viewDidLoad()
tableViewData.dataSource = self
tableViewData.delegate = self
loadData()
}
func loadData() {
let jsonDataUrl = "https://jsonplaceholder.typicode.com/posts"
Alamofire.request(jsonDataUrl).responseJSON { response in
self.schedules = Mapper<Schedule>().mapArray(JSONObject: response.result.value)
self.tableViewData.reloadData()
}
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = UITableViewCell()
cell.textLabel?.text = schedules?[indexPath.row].title
return cell
}
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return schedules?.count ?? 0
}
}
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 350 | 1,192 |
<issue_start>username_0: How can I downsize a virtual machine after its provisioning, from `terraform` script? Is there a way to update a resource without modifying the initial `.tf` file?<issue_comment>username_1: I have a solution, maybe you could try.
1.Copy your tf file, for example `cp vm.tf vm_back.tf` and move `vm.tf` to another directory.
2.Modify `vm_size` in `vm_back.tf`. I use this [tf file](https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html), so I use the following command to change the value.
```
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
```
3.Update VM size by executing `terraform apply`.
4.Remove vm_back.tf and move vm.tf back to the original directory.
Upvotes: 2 [selected_answer]<issue_comment>username_2: How about passing in a command line argument that is used in a conditional variable?
For example, declare a conditional value in your .tf file:
`vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"`
And when you want to provision the small VM, you simply pass the vm_size variable in on the command line:
`$ terraform apply -var="vm_size=small"`
Upvotes: 0
|
2018/03/16
| 1,141 | 4,189 |
<issue_start>username_0: The process gets stuck if I am trying to run the app (even when I start from a fresh new project) both on an emulator (Android 5.0 or 6.0) or on my phone (Android 7.1). Following some results I found online, I tried to run gradle offline but it did not work. I did also try the solutions suggested on this [link](https://stackoverflow.com/questions/24345488/gradle-gets-stuck-at-either-build-or-assembledebug-when-using-the-64bit-or-3), with no success. On Ubuntu, this seems to be caused because of a library issue concerning 32-bit software running from a 64-bit system, which can be solved installing the appropriate library, but I could not find any guidance when running Android Studio on Windows.
The build.gradle file (Module: app) says:
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 26
defaultConfig {
applicationId "com.example.denny.zuzuplayer"
minSdkVersion 21
targetSdkVersion 26
versionCode 1
versionName "1.0"
testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner"
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
}
dependencies {
implementation fileTree(dir: 'libs', include: ['*.jar'])
implementation 'com.android.support:appcompat-v7:26.1.0'
implementation 'com.android.support.constraint:constraint-layout:1.0.2'
testImplementation 'junit:junit:4.12'
androidTestImplementation 'com.android.support.test:runner:1.0.1'
androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.0.1'
}
```
The build.gradle (Project) says:
```
buildscript {
repositories {
google()
jcenter()
}
dependencies {
classpath 'com.android.tools.build:gradle:3.0.1'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
jcenter()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
```
Thanks!<issue_comment>username_1: I noticed that Android Studio has a tab called Gradle Console, which lists every gradle process that is currently running, in detail. That is how I got the message that the "aapt" instance was not fine the old rascal, due probably to the antivirus software. This is what the console told me:
```
> Exception in thread "queued-resource-processor_16"
> java.lang.RuntimeException: Timed out while waiting for slave aapt
> process, make sure the aapt execute at
> C:\Users\Denny\AppData\Local\Android\Sdk\build-tools\26.0.2\aapt2.exe
> can run successfully (some anti-virus may block it) or try setting
> environment variable SLAVE_AAPT_TIMEOUT to a value bigger than 5
> seconds
```
I had no idea how to "set environment variable SLAVE_AAPT_TIMEOUT to a value bigger than 5 seconds". I Googled the issue for a few minutes, then wondered: why not try the other possible solution first?
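(For the record, on Windows an environment variable like that can be set with `setx SLAVE_AAPT_TIMEOUT 30` in a command prompt followed by an Android Studio restart, though I have not verified that this alone helps.)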
I added the SDK folder to my antivirus exceptions (aapt is an instance of SDK, for that matter), and voilà, the project is finally building. Hurray!
Upvotes: 1 <issue_comment>username_2: I just got this error and was able to resolve it by ensuring that the two Glide libraries I'm using refer to the same version.
This caused the error:
```
implementation 'com.github.bumptech.glide:glide:4.10.0'
annotationProcessor 'com.github.bumptech.glide:compiler:4.0.0'
```
And this fixed it:
```
implementation 'com.github.bumptech.glide:glide:4.10.0'
annotationProcessor 'com.github.bumptech.glide:compiler:4.10.0'
```
I was able to determine that the error was related to the Glide libraries, at least my configuration of them in the module's build.gradle file, by using Denny's suggestion to examine the Gradle tasks. Opening the Gradle Tool Window shows a list of tasks. The Event Log indicated that the error was related to the "assembleDebug" task. So, I double-clicked on that task in the Gradle Tool Window to execute it, and some useful information appeared in a tab on the Run console.
Upvotes: 0
|
2018/03/16
| 798 | 1,986 |
<issue_start>username_0: I have a list of data tables that are of unequal lengths. Some of the data tables have 35 columns and others have 36.
I have this line of code, but it generates an error
```
> lst <- unlist(full_data.lst, recursive = FALSE)
> model_dat <- do.call("rbind", lst)
Error in rbindlist(l, use.names, fill, idcol) :
Item 1362 has 35 columns, inconsistent with item 1 which has 36 columns. If instead you need to fill missing columns, use set argument 'fill' to TRUE.
```
Any suggestions on how I can modify that so that it works properly.<issue_comment>username_1: Try to use `rbind.fill` from package `plyr`:
Input data, 3 dataframes with different number of columns
```
df1<-data.frame(a=c(1,2,3,4,5),b=c(1,2,3,4,5))
df2<-data.frame(a=c(1,2,3,4,5,6),b=c(1,2,3,4,5,6),c=c(1,2,3,4,5,6))
df3<-data.frame(a=c(1,2,3),d=c(1,2,3))
full_data.lst<-list(df1,df2,df3)
```
The solution
```
library("plyr")
rbind.fill(full_data.lst)
a b c d
1 1 1 NA NA
2 2 2 NA NA
3 3 3 NA NA
4 4 4 NA NA
5 5 5 NA NA
6 1 1 1 NA
7 2 2 2 NA
8 3 3 3 NA
9 4 4 4 NA
10 5 5 5 NA
11 6 6 6 NA
12 1 NA NA 1
13 2 NA NA 2
14 3 NA NA 3
```
Upvotes: 2 <issue_comment>username_2: If I understood your question correctly, I could possibly see only two options for having your data tables appended.
Option A: Drop the extra variable from one of the datasets
```
table$column_Name <- NULL
```
Option B: Create the variable with missing values in the incomplete dataset.
```
full_data.lst$column_Name <- NA
```
And then do rbind function.
Upvotes: 2 <issue_comment>username_3: Here's a minimal example of what you are trying to do.
No need to use any other package to do this. Just set `fill=TRUE` in `rbindlist`.
You can do this:
```
df1 <- data.table(m1 = c(1,2,3))
df2 <- data.table(m1 = c(1,2,3), m2=c(3,4,5))
df3 <- rbindlist(list(df1, df2), fill=T)
print(df3)
m1 m2
1: 1 NA
2: 2 NA
3: 3 NA
4: 1 3
5: 2 4
6: 3 5
```
Upvotes: 3
|
2018/03/16
| 986 | 2,646 |
<issue_start>username_0: I'm working on a dataset with a with grouping-system with six digits. The first two digits denote grouping on the top-level, the next two denote different sub-groups, and the last two digits denote specific type within the sub-group. I want to group the data to the top level in the hierarchy (two first digits only), and count unique names in each group.
An example for the GroupID 010203:
* 01 denotes BMW
* 02 denotes 3-series
* 03 denotes 320i (the exact model)
All I care about in this example is how many of each brand there is.
Toy dataset and wanted output:
```
df <- data.table(Quarter = c('Q4', 'Q4', 'Q4', 'Q4', 'Q3'),
GroupID = c(010203, 150503, 010101, 150609, 010000),
Name = c('AAAA', 'AAAA', 'BBBB', 'BBBB', 'CCCC'))
```
Output:
```
Quarter Group Counts
Q3 01 1
Q4 01 2
Q4 15 2
```<issue_comment>username_1: Using `data.table` we could do:
```
library(data.table)
dt[, Group := substr(GroupID, 1, 2)][
, Counts := .N, by = list(Group, Quarter)][
, head(.SD, 1), by = .(Quarter, Group, Counts)][
, .(Quarter, Group, Counts)]
```
Returns:
>
>
> ```
> Quarter Group Counts
> 1: Q4 01 2
> 2: Q4 15 2
> 3: Q3 01 1
>
> ```
>
>
With `dplyr` and `stringr` we could do something like:
```
library(dplyr)
library(stringr)
df %>%
mutate(Group = str_sub(GroupID, 1, 2)) %>%
group_by(Group, Quarter) %>%
summarise(Counts = n()) %>%
ungroup()
```
Returns:
>
>
> ```
> # A tibble: 3 x 3
> Group Quarter Counts
>
> 1 01 Q3 1
> 2 01 Q4 2
> 3 15 Q4 2
>
> ```
>
>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Since you are already using `data.table`, you can do:
```
df[, Group := substr(GroupID,1,2)]
df <- df[,Counts := .N, .(Group,Quarter)][,.(Group, Quarter, Counts)]
df <- unique(df)
print(df)
Group Quarter Counts
1: 10 Q4 2
2: 15 Q4 2
3: 10 Q3 1
```
Upvotes: 1 <issue_comment>username_3: Here's my simple solution with `plyr` and `base R`; it is lightning fast.
```
library(plyr)
df$breakid <- as.character((substr(df$GroupID, start =0 , stop = 2)))
d <- plyr::count(df, c("Quarter", "breakid"))
```
**Result**
```
Quarter breakid freq
Q3 01 1
Q4 01 2
Q4 15 2
```
Upvotes: 0 <issue_comment>username_4: Alternatively, using `tapply` (and `data.table` indexing):
```
df$Brand <- substr(df$GroupID, 1, 2)
tapply(df$Brand, df[, .(Quarter, Brand)], length)
```
(If you don't care about the output being a matrix).
Upvotes: 0
|
2018/03/16
| 1,562 | 4,262 |
<issue_start>username_0: I'm using `bibtexparser` to parse a bibtex file.
```
import bibtexparser
with open('MetaGJK12842.bib','r') as bibfile:
bibdata = bibtexparser.load(bibfile)
```
While parsing I get the error message:
>
> Could not parse properly, starting at
>
>
>
> ```
> @article{Frenn:EvidenceBasedNursing:1999,
> Traceback (most recent call last):
> File "/usr/local/lib/python3.5/dist-packages/pyparsing.py", line 3183, in parseImpl
> raise ParseException(instring, loc, self.errmsg, self)
> pyparsing.ParseException: Expected end of text (at char 5773750),
> (line:47478, col:1)`
>
> ```
>
>
The line refers to the following bibtex entry:
```
@article{Frenn:EvidenceBasedNursing:1999,
author = {<NAME>.},
title = {A Mediterranean type diet reduced all cause and cardiac mortality after a first myocardial infarction [commentary on de Lorgeril M, Salen P, Martin JL, et al. Mediterranean dietary pattern in a randomized trial: prolonged survival and possible reduced cancer rate. ARCH INTERN MED 1998;158:1181-7]},
journal = {Evidence Based Nursing},
uuid = {15A66A61-0343-475A-8700-F311B08BB2BC},
volume = {2},
number = {2},
pages = {48-48},
address = {College of Nursing, Marquette University, Milwaukee, WI},
year = {1999},
ISSN = {1367-6539},
url = {},
keywords = {Treatment Outcomes;Mediterranean Diet;Mortality;France;Neoplasms -- Prevention and Control;Phase One Excluded - No Assessment of Vegetable as DV;Female;Phase One - Reviewed by Hao;Myocardial Infarction -- Diet Therapy;Diet, Fat-Restricted;Phase One Excluded - No Fruit or Vegetable Study;Phase One Excluded - No Assessment of Fruit as DV;Male;Clinical Trials},
tags = {Phase One Excluded - No Assessment of Vegetable as DV;Phase One Excluded - No Fruit or Vegetable Study;Phase One - Reviewed by Hao;Phase One Excluded - No Assessment of Fruit as DV},
accession_num = {2000008864. Language: English. Entry Date: 20000201. Revision Date: 20130524. Publication Type: journal article},
remote_database_name = {rzh},
source_app = {EndNote},
EndNote_reference_number = {4413},
Secondary_title = {Evidence Based Nursing},
Citation_identifier = {Frenn 1999a},
remote_database_provider = {EBSCOhost},
publicationStatus = {Unknown},
abstract = {Question: text.},
notes = {(0) abstract; commentary. Journal Subset: Core Nursing; Europe; Nursing; Peer Reviewed; UK \& Ireland. No. of Refs: 1 ref. NLM UID: 9815947.}
}
```
What is wrong with this entry?
|
2018/03/16
| 396 | 1,499 |
<issue_start>username_0: I need to get a result after an array loop is completed and do something with this result in a Node.js project. I have a `count` variable and I need to check it after the loop is completed:
```
var count = 0;
myArray.forEach(element => {
if(element == 'something'){
count++;
}
});
if(count == 2){
// do smth
}else if(count == 1){
// do something else
}else{
// or do this
}
```
Currently the `if` statement executes before I get the result from the loop.<issue_comment>username_1: Unless you are doing an *asynchronous operation* in each iteration of `forEach`, you can shorten your code as
```
var count = myArray.filter(s => s == "something").length
```
Or something equivalent to
```
var f = str => str == "something"
var count = myArray.filter(f).length
```
Upvotes: 2 <issue_comment>username_2: The filter solution already provided is probably the best approach here! That said, here's some more background info:
Note that `Array.prototype.forEach` itself is synchronous: the callbacks all run before the `if` block. The ordering problem you describe only appears if the work inside each callback is asynchronous (e.g. a database call), in which case the callbacks register work on Node's event loop that has not finished by the time the `if` block runs (the event loop is worth researching; very helpful to know).
The most similar approach to yours that runs synchronously would be a standard for loop:
```
var count = 0;
for (var i = 0; i < myArray.length; i++) {
    if (myArray[i] == 'something') {
        count++;
    }
}
if(count == 2){
// Things
}
```
Upvotes: 1 [selected_answer]
|
2018/03/16
| 996 | 3,161 |
<issue_start>username_0: I'm developing a WinForm desktop application for users to input employees retirement data, using SQL Server 2008 as DB.
One of the tables that gets part of the user data has a reference to another table whose records were defined at design time, adding a Foreign Key constraint for consistency.
```
CREATE TABLE tbCongedo (
ID int IDENTITY(0,1) PRIMARY KEY,
ID_ANAGRAFICA int NOT NULL,
ID_TIPOLOGIA int NOT NULL,
DECORRENZA datetime NOT NULL,
PROT_NUM nvarchar(7) NOT NULL,
PROT_DATA datetime NOT NULL
);
CREATE TABLE tbTipologia (
ID int IDENTITY(0,1) PRIMARY KEY,
TIPOLOGIA nvarchar(20)
);
INSERT INTO tbTipologia VALUES ('CONGEDO'), ('POSTICIPO'), ('ANTICIPO'), ('REVOCA'), ('DECESSO')
ALTER TABLE tbCongedo
ADD CONSTRAINT FK_tbCongedo_tbTipologia
FOREIGN KEY (ID_TIPOLOGIA) REFERENCES tbTipologia(ID)
```
Then, I have this code that should execute the `INSERT` statement
```
public int Insert(SqlConnection Connessione)
{
using (SqlCommand Comando = new SqlCommand("INSERT INTO tbCongedo VALUES (@ID_ANAGRAFICA, @PROT_NUM, @PROT_DATA, @DECORRENZA, @ID_TIPOLOGIA); SELECT SCOPE_IDENTITY()", Connessione))
{
Comando.Parameters.AddWithValue("@ID_ANAGRAFICA", ID_ANAGRAFICA);
Comando.Parameters.AddWithValue("@PROT_NUM", PROT_NUM);
Comando.Parameters.AddWithValue("@PROT_DATA", PROT_DATA);
Comando.Parameters.AddWithValue("@DECORRENZA", DECORRENZA);
Comando.Parameters.AddWithValue("@ID_TIPOLOGIA", ID_TIPOLOGIA);
ID = Int32.Parse(Comando.ExecuteScalar().ToString());
}
return ID;
}
```
but I'm given this SqlException:
>
> The INSERT statement conflicted with the FOREIGN KEY constraint "FK\_tbCongedo\_tbTipologia". The conflict occurred in database "Scadenziario\_ver4\_TEST", table "dbo.tbTipologia", column 'ID'
>
>
>
These are the data that I was trying to get inserted in the table:
```
ID_ANAGRAFICA = 2
ID_TIPOLOGIA = 0
PROT_DATA = {16/03/2018 00:00:00}
DECORRENZA = {16/03/2018 00:00:00}
PROT_NUM = 123456
```
Funny thing is that when I try to insert those same data manually through SQL Server Management Studio, I'm given no error at all.
Thanks.<issue_comment>username_1: Try specifying the fields: `(col_name1, col_name2, ...)`.
Without that, the VALUES may not be applied the way you might hope. Variable names are NOT automagically matched with similarly-named columns.
So like this:
```
... new SqlCommand
(
"INSERT INTO tbCongedo " +
" (ID_ANAGRAFICA, PROT_NUM, PROT_DATA, DECORRENZA, ID_TIPOLOGIA) "
"VALUES (@ID_ANAGRAFICA, @PROT_NUM, @PROT_DATA, @DECORRENZA, @ID_TIPOLOGIA); " +
"SELECT SCOPE_IDENTITY()", Connessione
)
...
```
Upvotes: 2 <issue_comment>username_2: I think the problem isn't in the data but in the `INSERT` statement itself. You are trying to insert the values to the wrong columns using the wrong order. To solve the issue you should either specify the columns in the `INSERT` statement or correct the order of the values. In your case the query will try to insert the value of `@PROT_NUM` in the `ID_TIPOLOGIA` instead.
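For example, keeping the original `VALUES` list but naming the columns so each parameter lands where intended (a sketch based on the table definition in the question):
```
INSERT INTO tbCongedo (ID_ANAGRAFICA, PROT_NUM, PROT_DATA, DECORRENZA, ID_TIPOLOGIA)
VALUES (@ID_ANAGRAFICA, @PROT_NUM, @PROT_DATA, @DECORRENZA, @ID_TIPOLOGIA);
SELECT SCOPE_IDENTITY();
```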
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,220 | 3,008 |
<issue_start>username_0: Similar questions to this have been asked, but I have not been able to apply the suggested solutions successfully.
I have created a plot like so:
```
> elective_ga <- c(68, 51, 29, 10, 5)
> elective_epidural <- c(29, 42, 19, 3, 1)
> elective_cse <- c(0, 0, 0, 20, 7)
> elective_spinal <- c(3, 7, 52, 67, 87)
> years <- c('1982', '1987', '1992', '1997', '2002')
> values <- c(elective_ga, elective_epidural, elective_cse, elective_spinal)
> elective_technique <- data.frame(years, values)
> p <- ggplot(elective_technique, aes(years, values))
> p +geom_bar(stat='identity', aes(fill=c(rep('GA', 5), rep('Epidural', 5), rep('CSE', 5), rep('Spinal', 5)))) +labs(x='Year', y='Percent', fill='Type')
```
which produces the following chart;
[](https://i.stack.imgur.com/hbdSG.png)
I was expecting the bars to be stacked in the order (from top to bottom) GA, Epidural, CSE, Spinal. I would have thought the way I constructed the data frame that they should be ordered in this way but obviously I have not. Can anyone explain why the bars are ordered the way they are, and how to get them the way I want?<issue_comment>username_1: How about this?
```
elective_ga <- c(68, 51, 29, 10, 5)
elective_epidural <- c(29, 42, 19, 3, 1)
elective_cse <- c(0, 0, 0, 20, 7)
elective_spinal <- c(3, 7, 52, 67, 87)
years <- c('1982', '1987', '1992', '1997', '2002')
values <- c(elective_ga, elective_epidural, elective_cse, elective_spinal)
Type=c(rep('GA', 5), rep('Epidural', 5), rep('CSE', 5), rep('Spinal', 5))
elective_technique <- data.frame(years, values,Type)
elective_technique$Type=factor(elective_technique$Type,levels=c("GA","Epidural","CSE","Spinal"))
p <- ggplot(elective_technique, aes(years, values,fill=Type))+geom_bar(stat='identity') +
labs(x='Year', y='Percent', fill='Type')
```
[](https://i.stack.imgur.com/IQKIG.png)
Upvotes: 2 <issue_comment>username_2: One way is to reorder the levels of the factor.
```
library(ggplot2)
elective_ga <- c(68, 51, 29, 10, 5)
elective_epidural <- c(29, 42, 19, 3, 1)
elective_cse <- c(0, 0, 0, 20, 7)
elective_spinal <- c(3, 7, 52, 67, 87)
years <- c('1982', '1987', '1992', '1997', '2002')
values <- c(elective_ga, elective_epidural, elective_cse, elective_spinal)
type = c(rep('GA', 5), rep('Epidural', 5), rep('CSE', 5), rep('Spinal', 5))
elective_technique <- data.frame(years, values, type)
# reorder levels in factor
elective_technique$type <- factor(elective_technique$type,
levels = c("GA", "Epidural", "CSE", "Spinal"))
p <- ggplot(elective_technique, aes(years, values))
p +
geom_bar(stat='identity', aes(fill = type)) +
labs(x = 'Year', y = 'Percent', fill = 'Type')
```
[](https://i.stack.imgur.com/sBtAV.png)
The `forcats` package may provide a cleaner solution.
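For example (a sketch using `forcats::fct_relevel` to move the levels into the desired order):
```
library(forcats)
elective_technique$type <- fct_relevel(elective_technique$type,
                                       "GA", "Epidural", "CSE", "Spinal")
```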
Upvotes: 1
|
2018/03/16
| 899 | 2,395 |
<issue_start>username_0: My data is a time series.
```
y <- ts(datafile[,"y"], start=1960, frequency=4, end=2010)
```
I want to include quarterly dummies in my forecasting ARIMA model. Is that possible? If so, what's the command for it? I can't seem to find one which allows me to merge the ARIMA model with the quarterly dummy variable.
So my ARIMA model is:
```
fit_y <- arima(y, order=c(2,1,2), method="ML")
```
I know how to fit seasonal ARs into the model:
```
fit_y <- arima(y, order=c(2,1,2), seasonal=list(order=c(0,1,1), period=4), method="ML")
```
Is there a way to include a quarterly dummy variable? I've created the dummy variables - *manually* - through excel and titled them Q1, Q2, Q3, Q4, with the following specification so that R reads them as a time series variable:
```
Q1 <- ts(datafile[,"Q1"], start=1960, frequency=4, end=2010)
Q2 <- ts(datafile[,"Q2"], start=1960, frequency=4, end=2010)
Q3 <- ts(datafile[,"Q3"], start=1960, frequency=4, end=2010)
Q4 <- ts(datafile[,"Q4"], start=1960, frequency=4, end=2010)
```<issue_comment>username_1: Have you tried using the xreg argument in arima? xreg, external regressors, will allow you to include dummy variables in your arima. I usually create a matrix for my xreg which will include all my dummy variables. Below is an example of how I would do it for daily dummy variables.
```
dfTs$wday <- wday(dfTs$date)  # wday() here is assumed to come from the lubridate package
xreg <- cbind(wday=model.matrix(~as.factor(dfTs$wday)))
xreg <- xreg[,-1] # drop intercept
colnames(xreg) <- c("Mon", "Tue", "Wed", "Thur", "Fri", "Sat")
fit <- arima(dfTs, order = c(2,1,2),
seasonal = list(order = c(0,1,1), period = 4),
method = "ML", xreg = xreg)
```
Upvotes: 0 <issue_comment>username_2: You can add the dummy variables to the arima model via the option xreg of `arima`.
```
y <- ts(datafile[,"y"], start=1960, frequency=4)
Q1 <- ts(rep(c(1,0,0,0),44), start=1960, frequency=4)
Q2 <- ts(rep(c(0,1,0,0),44), start=1960, frequency=4)
Q3 <- ts(rep(c(0,0,1,0),44), start=1960, frequency=4)
xreg <- cbind(Q1,Q2,Q3)
fit_y <- arima(y, order=c(2,1,2), method = "ML", xreg = xreg)
```
Note that (i) I didn't add the Q4, to avoid the dummy trap (see for example [question about dummy trap](https://stats.stackexchange.com/questions/144372/dummy-variable-trap) ), and (ii) that you can easily generate these Q1, Q2 and Q3 in R.
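When forecasting, future values of the dummies have to be supplied as well; a sketch for four quarters ahead (assuming the forecast horizon starts in Q1):
```
newxreg <- cbind(Q1 = c(1,0,0,0), Q2 = c(0,1,0,0), Q3 = c(0,0,1,0))
predict(fit_y, n.ahead = 4, newxreg = newxreg)
```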
Upvotes: 2 [selected_answer]
|
2018/03/16
| 750 | 2,073 |
<issue_start>username_0: Is there a built-in function to join two 1D arrays into a 2D array?
Consider an example:
```
X=np.array([1,2])
y=np.array([3,4])
result=np.array([[1,3],[2,4]])
```
I can think of 2 simple solutions.
The first one is pretty straightforward.
```
np.transpose([X,y])
```
The other one employs a lambda function.
```
np.array(list(map(lambda i: [X[i], y[i]], range(len(X)))))
```
While the second one looks more complex, it seems to be almost twice as fast as the first one.
**Edit**
A third solution involves the zip() function.
```
np.array(list(zip(X, y)))
```
It's faster than the lambda function but slower than the `column_stack` solution suggested by @Divakar.
```
np.column_stack((X,y))
```
|
2018/03/16
| 1,447 | 5,473 |
<issue_start>username_0: Good day everyone,
I'm creating a chatbot for my company and I started with the samples on github and the framework docs.
We decided to host it on Azure and added LUIS and Table Storage to it. The bot runs fine locally in the Bot Framework Emulator, but on Azure (WebChat, Telegram) it will only run for approximately an hour to an hour and fifteen minutes if no one tries to communicate with the bot. After this period of time, the bot will just run into an internal server error. When you ask the bot something, you can stretch this time window (for how long I don't know, and why I don't know either, sorry).
In Azure "**Always On**" is set to true.
I'm really frustrated at this point, because I cannot find the problem and I'm pretty sure there must be something wrong with my code, because I don't properly understand the framework. I'm still a beginner with Azure, C# and Bot Framework.
Also, I have already read everything on "internal server errors" on here and GitHub. I also tried debugging, even with extra debug options in VS. We have **not** tried Application Insights yet.
At the moment I'm doing everything with the LUIS Dialog which calls / Forwards to other IDialogs:
```
[LuisIntent(Intent_Existens)]
public async Task ExistensOf(IDialogContext context, IAwaitable<IMessageActivity> message, LuisResult result)
{
var existens = new ExistensDialog();
var messageToForward = await message;
if (result.Entities.Count == 1)
{
messageToForward.Value = result.Entities[0].Entity;
await context.Forward(existens, AfterDialog, messageToForward);
}
else
{
context.Wait(this.MessageReceived);
}
}
```
I know that "Value" is for CardActions, but I don't know how else I could pass Entities to the child dialog.
```
[Serializable]
public class ExistensDialog : IDialog<object>
{
public async Task StartAsync(IDialogContext context)
{
context.Wait(MessageReceivedAsync);
}
private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
{
var message = await result;
if (message.Text.Contains("specificWord"))
{
await context.Forward(new ExistensHU(), AfterDialog, message);
}
else
{
await context.Forward(new ExistensBin(), AfterDialog, message);
}
}
private async Task AfterDialog(IDialogContext context, IAwaitable<object> result)
{
context.Done(null);
}
}
```
then:
```
[Serializable]
internal class ExistensHU : IDialog<object>
{
private Renamer renamer = new Renamer(); // Just for renaming
private ExternalConnection ec = new ExternalConnection(); //Just a HTTP connection to a WebApp to get data from it
public async Task StartAsync(IDialogContext context)
{
context.Wait(MessageReceivedAsync);
}
private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
{
const string apiCallURL = "some/API/"; // ExternalConnection...
var message = await result;
string nameHU = renamer.RemoveBlanks(message.Value.ToString());
StringBuilder answerBuilder = new StringBuilder();
var name = ec.CreateSingleAPIParameter("name", nameHU);
Dictionary<string, string> wms = await ec.APIResultAsDictionary(apiCallURL, name);
foreach (var item in wms)
{
if (item.Key.Equals("none") && item.Value.Equals("none"))
{
answerBuilder.AppendLine($"Wrong Answer");
}
else
{
answerBuilder.AppendLine($"Correct Answer");
}
}
await context.PostAsync(answerBuilder.ToString());
context.Done(null);
}
}
```
That's basically every Dialog in my project.
Also I have an IDialog which looks like this:
```
[Serializable]
public class VerificationDialog : IDialog<object>
{
[NonSerializedAttribute]
private readonly LuisResult _luisResult;
public VerificationDialog(LuisResult luisResult)
{
_luisResult = luisResult;
}
public async Task StartAsync(IDialogContext context)
{
var message = _luisResult.Query;
if (!message.StartsWith("Wie viele"))
{
context.Call(new ByVerificationDialog(_luisResult), AfterDialog);
}
else
{
context.Call(new CountBinsByVerification(_luisResult), AfterDialog);
}
}
private async Task AfterDialog(IDialogContext context, IAwaitable<object> result)
{
context.Done(null);
}
}
```
I don't know if I'm allowed to pass the luisResult like this from BasicLuisDialog. This could be the issue or one of the issues.
Basically that's it and I'm still getting used to the framework. I'm not expecting an absolute answer. Just hints/tips and advice how to make everything better.
Thanks in advance!<issue_comment>username_1: An internal error usually means an unhandled exception in the .NET application.
Use `AppDomain.CurrentDomain.UnhandledException` to receive all unhandled exceptions and log them somewhere (consider using [Application Insights](https://azure.microsoft.com/en-us/services/application-insights/)).
After you investigate the logged information, you can fix the root cause.
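A minimal sketch of wiring that up in an ASP.NET bot project (the placement in `Global.asax.cs` and the `Trace` logging call are assumptions; adapt them to your own project and logger):
```
// Global.asax.cs (sketch)
protected void Application_Start()
{
    AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
    {
        var ex = e.ExceptionObject as Exception;
        // Log the full exception, including the stack trace
        System.Diagnostics.Trace.TraceError("Unhandled exception: {0}", ex);
    };

    GlobalConfiguration.Configure(WebApiConfig.Register); // existing bot setup
}
```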
Upvotes: 1 <issue_comment>username_2: If you are using the .NET SDK version 3.14.0.7: there is currently a bug we are tracking in this version. There have been a number of reports and we are actively investigating. Please try downgrading to 3.13.1. This should fix the issue for you until we can release a new version.
for reference we are tracking the issue on these GitHub issues:
<https://github.com/Microsoft/BotBuilder/issues/4322>
<https://github.com/Microsoft/BotBuilder/issues/4321>
Update 3/21/2018:
We have pushed a new version of the SDK which includes a fix for this issue <https://www.nuget.org/packages/Microsoft.Bot.Builder/3.14.1.1>
Upvotes: 3 [selected_answer]
|
2018/03/16
| 322 | 1,069 |
<issue_start>username_0: With the help of the answer to this [question](https://stackoverflow.com/questions/39436406/f-shortest-way-to-convert-a-string-option-to-a-string), I need help with the specific syntax for retrieving values from option types in the following case.
```
type Query = {
q : string
pageSize : int option
}
let search (query : Query) =
let url = sprintf "foo.com?q=%spageSize=%i" query.q (query.pageSize |> 10 |< query.pageSize) // ???
```
Syntax help for `(query.pageSize |> 10 |< query.pageSize)`<issue_comment>username_1: The answer you linked provides a pretty clear illustration of the syntax:
```
input |> defaultArg <| ""
```
In your case, the input is `query.pageSize` and the default value is `10` instead of empty string. So:
```
query.pageSize |> defaultArg <| 10
```
Upvotes: 2 <issue_comment>username_2: `Option.defaultValue` is your friend:
```
type Query = {
q : string
pageSize : int option
}
let q = {q = "foo"; pageSize = None}
let p = q.pageSize |> Option.defaultValue 10
```
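Plugged into the `search` function from the question, that looks like this (a sketch; note it also adds the `&` separator that the original format string was missing):
```
let search (query : Query) =
    let pageSize = query.pageSize |> Option.defaultValue 10
    sprintf "foo.com?q=%s&pageSize=%i" query.q pageSize
```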
Upvotes: 5 [selected_answer]
|
2018/03/16
| 500 | 1,494 |
<issue_start>username_0: I have a set
```
Set<Long> entityIds;
```
I want to divide it into batches of 10 elements each, ending up with a list of sets where each set holds 10 elements, like this:
```
List<Set<Long>> batches
```
How can I do it?
My Solution:
```
private List<Set<Long>> splitSet(Set<Long> input) {
    List<Set<Long>> batches = new ArrayList<>();
    for (int p = 0; p <= input.size(); p += 10) {
        Set<Long> partSet = input.stream().skip(p).limit(10).collect(Collectors.toSet());
batches.add(partSet);
}
return batches;
}
```<issue_comment>username_1: I'd probably do it something like this:
```
static <T> List<Set<T>> splitSet(Set<T> set, int n) {
    List<Set<T>> split = new ArrayList<>();
    int count = 0;
    Set<T> newSet = new HashSet<>();
    for (T id : set) {
        newSet.add(id);
        // close the current batch once it holds n elements
        if (++count % n == 0) {
            split.add(newSet);
            newSet = new HashSet<>();
        }
    }
    if (!newSet.isEmpty()) {
        split.add(newSet);
    }
    return split;
}

public static void main(String[] args) throws Exception {
    Set<Long> ids = new HashSet<>();
    for (long i = 0; i < 98; i++) {
        ids.add(i);
    }
    System.out.println(splitSet(ids, 10));
}
```
Upvotes: 1 <issue_comment>username_2: You can use the queue or stack and just pop/pull elements
```
private static <T> List<Set<T>> extract(Set<T> set, int size){
    List<Set<T>> result = new ArrayList<>();
    Queue<T> queue = new LinkedList<>(set);
    while (!queue.isEmpty() && size > 0){
        Set<T> subSet = new HashSet<>();
        while (subSet.size() < size && !queue.isEmpty()){
            subSet.add(queue.poll());
        }
        result.add(subSet);
    }
    return result;
}
```
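A quick usage sketch:
```
Set<Long> entityIds = new HashSet<>();
for (long i = 0; i < 25; i++) entityIds.add(i);

// three batches: two of size 10 and one of size 5
List<Set<Long>> batches = extract(entityIds, 10);
```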
Upvotes: 3 [selected_answer]
|
2018/03/16
| 615 | 1,952 |
<issue_start>username_0: From C# I am trying to check whether SQL is valid using `SET PARSEONLY ON`, as you can see in the following example:
```
BEGIN TRY
SET PARSEONLY ON UPDATE SchedulerAction S ET
SELECT 1
END TRY
BEGIN CATCH
SELECT -1
END CATCH
```
In SQL this gives the error
```
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near 'S'.
```
But my problem is that this error is not caught by the `try-catch`, and neither `SELECT` is run.
If I change the SQL to `SET PARSEONLY ON UPDATE SchedulerAction SET`.
It gives the error below
```
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'SELECT'.
```
But it did not return a result from the `SELECT` either.
If I changed it to valid SQL like `SET PARSEONLY ON UPDATE SchedulerAction SET Action = 1`, the `SELECT` in the `try` block is not run either.
Can I somehow get the result of whether the validation was correct or wrong, e.g. by returning a `SELECT` or something similar?
|
2018/03/16
| 473 | 1,753 |
<issue_start>username_0: How do I store my trained model on Google Colab and retrieve it later on my local disk?
Will checkpoints work? How do I store them and retrieve them after some time? Can you please mention code for that. It would be great.<issue_comment>username_1: Google Colab instances are created when you open the notebook and are deleted later on so you can't access data on different runs. If you want to download the trained model to your local machine you can use:
```
from google.colab import files
files.download('my_model.h5')  # pass the path of the file to download; 'my_model.h5' is a placeholder name
```
And similarly if you want to upload the model from your local machine you can do:
```
from google.colab import files
files.upload()
```
Another possible (and better in my opinion) solution is to use a github repo to store your models and simply commit and push your models to github and clone the repo later to get the models back.
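If you go the GitHub route from inside Colab itself, the flow is roughly as follows (a sketch; the repo URL and file name are placeholders, and pushing requires your own authentication/token setup):
```
!git clone https://github.com/<user>/<repo>.git
!cp my_model.h5 <repo>/
%cd <repo>
!git add my_model.h5
!git commit -m "Save trained model"
!git push   # needs credentials configured first
```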
Upvotes: 4 [selected_answer]<issue_comment>username_2: OK, this works for me:
```
import os
from tensorflow.keras.callbacks import ModelCheckpoint  # import assumed; adjust if you use standalone Keras

checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

# Create checkpoint callback: save (weights only) whenever val_acc improves
cp_callback = ModelCheckpoint(checkpoint_path, monitor='val_acc',
                              save_best_only=True, save_weights_only=True,
                              verbose=1)

network_fit = myModel.fit(x, y, batch_size=25, epochs=20,
                          callbacks=[cp_callback])
```
With this code you can monitor `val_acc` and save the weights for an epoch whenever it improves.
You can then load these weights back into the model with:
```
myModel.load_weights(checkpoint_path)
```
You can check how to use it here
<https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/tutorials/keras/save_and_restore_models.ipynb#scrollTo=gXG5FVKFOVQ3>
Upvotes: 1
|
2018/03/16
| 552 | 1,969 |
<issue_start>username_0: I am creating a React component with an HTML progress bar and I am trying to apply an inline style to it, but it's not applying the linear-gradient to the progress bar.
Here's a sample code:
```
const customColor = '#d3d4d5'
const element =
ReactDOM.render(element, document.getElementById('root'));
```
Link to codepen:
<https://codepen.io/anon/pen/RMGbGe?&editors=0010>
Any idea why it won't apply the linear-gradient to the progress bar?
Thanks
|
2018/03/16
| 582 | 1,780 |
<issue_start>username_0: I have a container with elements of some arbitrary structure/class.
Is there an elegant way to extract a single property (e.g. a member or a member function) into its own array (e.g. to pass it to another algorithm)?
```
#include <vector>
#include <iostream>
#include <algorithm>
int main()
{
struct A{
int n;
int B() const { return n; } // body added so the snippet compiles
};
std::vector<A> manyOfA{{1}, {2}, {3}};
std::vector<int> allNs;
// is there a standard way to do this kind of extraction?
for(const auto& a:manyOfA){
allNs.push_back(a.n);
// or maybe allNs.push_back(a.B());
};
std::cout << "result Vector: ";
for(const auto& n: allNs){
std::cout << n << ", ";
};
std::cout << std::endl;
}
```<issue_comment>username_1: You can use [std::transform](http://en.cppreference.com/w/cpp/algorithm/transform) to apply a function to every element of an iterator range and collect the results in a result range. In your case, the result range can be a `std::back_inserter` iterator. This is a special iterator that calls `v.push_back` on a given container when assigning a value.
```
std::vector<A> manyOfA{{1}, {2}, {3}};
std::vector<int> allNs;
std::transform(manyOfA.begin(), manyOfA.end(), std::back_inserter(allNs), [](A const& a) { return a.n; });
```
Or even better with a range library, e.g. boost.range. This will save the creation of a new vector just to hold the values of `a.n`:
```
template <typename RangeType>
void doSomethingWithRange(const RangeType& range) { /* ... */ }

using boost::adaptors::transformed; // from <boost/range/adaptor/transformed.hpp>
auto allNs = manyOfA | transformed([](A const& a) { return a.n; });
doSomethingWithRange(allNs);
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Additionally you could also use [`for_each`](http://en.cppreference.com/w/cpp/algorithm/for_each)
```
std::for_each(std::begin(manyOfA), std::end(manyOfA), [&](const A& a) { allNs.emplace_back(a.n); });
```
Upvotes: 1
|
2018/03/16
| 979 | 3,783 |
<issue_start>username_0: I created a project in MVC.
I added functionality for users to give reviews, but I am running into a problem.
So I need help.
[Before Adding review - we are showing multiple line(using enter) in text box](https://i.stack.imgur.com/ZLdL0.png)
[After Adding review - it'll display in single line](https://i.stack.imgur.com/JBWGQ.png)
How can I solve this problem?
JavaScript Code:
```
$("#saveReview").click(function () {
debugger
if($("#saveReviewText").val() == "")
{
$("#BookReviewError").text("Review cannot be Empty!");
}
else
{
var saveStarReview = $("#saveStarReview").val();
$.ajax({
type: "POST",
url: '@Url.Action("SaveReview", "Book")',
data: { "reviewText": $("#saveReviewText").val(), "Bookid" : @Model.Book_Id ,"Rateid" : $("#RateId").val() , "StarReview" : saveStarReview },
cache: false,
});
location.reload();
}
});
$("#addReview").click(function () {
debugger
if($("#addReviewText").val() == "")
{
$("#BookReviewError").text("Review cannot be Empty!");
}
else
{
$.ajax({
type: "POST",
url: '@Url.Action("AddReview", "Book")',
data: { "reviewText": $("#addReviewText").val(),"Bookid" : @Model.Book_Id , "StarReview" : $("#addStarReview").val() },
cache: false,
});
location.reload();
}
});
```
Added Review Code:
```
@if (!string.IsNullOrEmpty(ViewBag.BookError))
{ @ViewBag.BookError }
```
Model Code:
```
public ActionResult AddReview(string reviewText, int Bookid, byte StarReview)
{
if (!string.IsNullOrEmpty(reviewText))
{
_ratingServices.AddUserRating(reviewText, Bookid, StarReview);
}
else
{
TempData["BookError"] = Resource.Messages.EmptyReviewText;
}
return View();
}
```
Services Code:
```
public void AddUserRating(string addReviewText, int Bookid , byte StarReview)
{
Rating rating = new Rating();
rating.User_Id = ProjectSession.UserID;
rating.Book_Id = Bookid;
rating.Review = addReviewText;
rating.Rating1 = StarReview;
rating.Rating_Time = DateTime.Now;
rating.Is_Deleted = false;
_unitOfWork.RatingRepository.Insert(rating);
_unitOfWork.Save();
}
```
storing<issue_comment>username_1: When you pass the value to the controller, store it in the database like this:
```
public ActionResult AddReview(string reviewText, int Bookid, byte StarReview)
{
    reviewText = reviewText.Replace("\n", "<br />");
    // ... rest of the action as before
```
Then create a JavaScript function for when you display the review in the MVC view:
```
function multiliner(target, replacer) {
var text = $(target).html();
text = text.split("<br />");
text = text.join(replacer);
$(target).html(text);
}
$(".multiline.paragraph,.user-rating-list .paragraph").each(function () {
multiliner(this, "
");
});
```
Give the class to your paragraph tag.
Upvotes: 1 [selected_answer]<issue_comment>username_2: You can make use of CKEditor for getting or setting your review text.
CKEditor renders data as HTML and you can save it as it is.
Rendering the saved value later will display it in the same format.
Sample Code :
1. Set CKEditor :
```
CKEDITOR.replace('editor1');
```
2. While passing data to the controller, get the review text as: `var messageBody = CKEDITOR.instances.editor1.document.getBody().getHtml();`
The message body will be formatted in the same manner as the user entered the text.
Try and let me know if you face any issue.
Upvotes: 1
|
2018/03/16
| 335 | 1,283 |
<issue_start>username_0: ```
export class myclass implements OnInit {
private outagesList;
private outageArray;
constructor(private outages: OutagesService){}
ngOnInit(){
this.outages.getOutagesList().subscribe((data) => {
this.outagesList = data;
this.outagesList.forEach( function (arrayItem)
{
this.outageArray.push(arrayItem["title"]);
});
});
}
}
```
If I try to push the data into outageArray it is throwing `ERROR TypeError: Cannot read property 'outageArray' of undefined`.
How can I avoid this error? I need to access this outageArray value in the HTML.<issue_comment>username_1: Use an arrow function:
```
this.outagesList.forEach((arrayItem) => {
this.outageArray.push(arrayItem["title"]);
});
```
Also make sure `outageArray` is initialised (e.g. `private outageArray = [];`), otherwise `push` will still fail on `undefined` even with the arrow function.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Once you are using post-ES6 JavaScript, functions like `forEach` are not likely to be your best solution. The easiest way to do this is with a `for..of` loop:
```
this.outages.getOutagesList().subscribe((data) => {
this.outagesList = data;
for (let arrayItem of this.outagesList)
{
this.outageArray.push(arrayItem.title);
}
});
```
Upvotes: -1
|
2018/03/16
| 531 | 2,124 |
<issue_start>username_0: In JavaScript, I am doing something like this:
```
first_function: function() {
var timeout = setTimeout(function() {
// doing something
}, 300000);
},
```
In another function, after doing something important I have to access the `timeout` variable and clear timeout.
```
second_function : function () {
// after opening the hardware, have to cleartimeout from first_function
hardware.open().then(function() {
clearTimeout(timeout);
});
// calling first_function only after hardware open
this.first_function();
},
```
But I get an undefined variable `timeout`; how can I solve this problem?
I cannot call `this.first_function()` before resolving the promise from `then()`<issue_comment>username_1: You could just store `timeout` variable as another property, e.g. `this.timeout`:
```
first_function: function() {
this.timeout = setTimeout(function() {
// doing something
}, 300000);
},
second_function: function() {
// after opening the hardware, have to cleartimeout from first_function
hardware.open().then(() => {
clearTimeout(this.timeout);
// calling first_function only after hardware open
this.first_function();
})
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can put `timeout` in another property like this, or outside of your functions:
```
var timeout;
first_function: function() {
timeout = setTimeout(function() {
// doing something
}, 300000);
},
second_function: function() {
// after opening the hardware, have to cleartimeout from first_function
hardware.open().then(() => {
clearTimeout(timeout);
// calling first_function only after hardware open
this.first_function();
})
}
```
Upvotes: 0 <issue_comment>username_3: I would do it as follows:
```
var first_function = _ => setTimeout(_ => doSomething(), 300000),
second_function = cto => hardware.open()
.then(_ => clearTimeout(cto));
second_function(first_function());
```
Upvotes: 0
|
2018/03/16
| 425 | 1,502 |
<issue_start>username_0: I'm working on an Angular 5 project and noticed some CSS properties are not inherited correctly from custom elements. For example, consider a custom component `foo`:
```js
@Component({
selector: 'foo',
template: `
inside form
inside form
outside
`,
})
export class FooComponent { }
```
Now, I want to alter its `opacity` and `max-height`:
```
foo {
opacity: 0.5;
max-height: 0;
overflow: hidden;
}
```
However, browsers seem to not inherit those properties correctly down to the `form` and `div` elements.
* Firefox (59) properly inherits `opacity`, but seems to ignore `max-height`.
* Chrome (64) doesn't inherit `opacity`, and also ignores `max-height` altogether.
I made a **[plunk demonstrating the issue](https://plnkr.co/edit/SSulJJ1GdPgV0zdBztJu?p=preview)**.
Is there some twist about how custom elements inherit CSS properties, or are those just browser bugs?<issue_comment>username_1: Neither `opacity` nor `max-height` are inherited properties to begin with. I think this is simply due to the fact that your custom foo component is inline by default, so a max-height for example isn’t even allowed to apply.
Add
```
foo { display: block; }
```
or
```
foo { display: inline-block; }
```
and check what result you get ...
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try adding the respective selectors to apply the CSS to the necessary elements, e.g.:
```
foo.opacity-test form {
opacity: 0.5;
}
```
Upvotes: 0
|
2018/03/16
| 220 | 865 |
<issue_start>username_0: I don't use a contact permission in my manifest file, but I am getting the Contacts permission in the app info screen.
```
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
```<issue_comment>username_1: Android uses groups of permissions instead of a single permission when asking the user to grant them.
You use
```
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
```
which is in **Contacts** group.
This is the reason why you see the *Contacts* permission in the app info screen.
More info here: <https://developer.android.com/guide/topics/permissions/overview.html>
>
> The dialog does not describe the specific permission within that
> group. For example, if an app requests the READ\_CONTACTS permission,
> the system dialog just says the app needs access to the device's
> contacts
>
>
>
Upvotes: 3 <issue_comment>username_2: Remove the following permission from the Android manifest file:
```
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
```
it will solve your problem
Upvotes: 1
|
2018/03/16
| 551 | 1,972 |
<issue_start>username_0: I have a Cloud Service project that I use to deploy my WebRole through VSTS, on a Hosted Agent.
The build definition is created with the Azure Cloud Services template, and has, amongst others, these steps:
* Build solution \*\*\\*.sln (step #1)
* Build solution \*\*\\*.ccproj (step #2)
I've added
```
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
```
To the .csproj file of the class library that uses unsafe code (for the release and debug configurations). I am using the release configuration when deploying.
The **Build solution \*\*\\*.sln** step passes, but the **Build solution \*\*\\*.ccproj** step fails.
By inspecting the logs, I can see that the **Build solution \*\*\\*.sln** step is started with the /unsafe+ parameter, however the second build step is not.
Moreover, the MSBuild arguments for step #1 are empty, but for step #2, they are:
>
> /t:Publish /p:TargetProfile=$(targetProfile) /p:DebugType=None
> /p:SkipInvalidConfigurations=true /p:OutputPath=bin\
> /p:PublishDir="$(build.artifactstagingdirectory)\"
>
>
>
How can I add this parameter to the ccproj build?<issue_comment>username_1: It uses default platform of your project (Open project file through NotePad, and check it) if removing Platform argument in VS build task, so you can check the value of BuildPlatform build variable (Variables tab) in build definition.
Upvotes: 0 <issue_comment>username_2: The `true` was added to the Release|AnyCPU condition:
```
true
pdbonly
true
bin\
TRACE
prompt
4
true
```
However, when building the project with "any cpu" platform (which is default, with a space between 'any' and 'cpu') the condition does not get hit, contrary to when building a .sln project. I have to explicitly set the platform to "anycpu" without a space, and everything works. It wasnt opmitizing the code either, because of this.
[](https://i.stack.imgur.com/8HXC0.png)
Upvotes: 2 [selected_answer]
|
2018/03/16
| 489 | 1,646 |
<issue_start>username_0: I have a tuple as {'Europe-Fiat-Italy-Warehouse'}.
Car = {'Europe-Fiat-Italy-Warehouse'}.
I want to search the string "Fiat" in the above tuple without converting them to string tokens in a list.
i.e.,
```
(madmax@erlang)46>string:tokens(atom_to_list(element(1, Car)), "-").
["Europe","Fiat","Italy","Warehouse"]
(madmax@erlang)46> ["Europe", "Fiat" | Other] =
string:tokens(atom_to_list(element(1, Car)), "-").
["Europe","Fiat","Italy","Warehouse"]
(madmax@erlang)47>
(madmax@erlang)47> Other.
["Italy","Warehouse"]
(madmax@erlang)48>
```
As above, we extract the atom from the tuple, convert the atom to a list, and then split the list into string tokens. Is there a more optimized way, or any built-in function available in Erlang that makes this task easier?
|
2018/03/16
| 3,877 | 10,475 |
<issue_start>username_0: For a very simple classification problem where I have a target vector [0,0,0,....0] and a prediction vector [0,0.1,0.2,....1] would cross-entropy loss converge better/faster or would MSE loss?
When I plot them it seems to me that MSE loss has a lower error margin. Why would that be?
[](https://i.stack.imgur.com/tsPg2.png)
Or for example when I have the target as [1,1,1,1....1] I get the following:
[](https://i.stack.imgur.com/DCF8y.png)<issue_comment>username_1: You sound a little confused...
* Comparing the values of MSE & cross-entropy loss and saying that one is lower than the other is like comparing apples to oranges
* MSE is for regression problems, while cross-entropy loss is for classification ones; these contexts are mutually exclusive, hence comparing the numerical values of their corresponding loss measures makes no sense
* When your prediction vector is like `[0,0.1,0.2,....1]` (i.e. with non-integer components), as you say, the problem is a *regression* (and not a classification) one; in classification settings, we usually use one-hot encoded target vectors, where only one component is 1 and the rest are 0
* A target vector of `[1,1,1,1....1]` could be the case either in a regression setting, or in a *multi-label multi-class* classification, i.e. where the output may belong to more than one class simultaneously
On top of these, your plot choice, with the percentage (?) of predictions in the horizontal axis, is puzzling - I have never seen such plots in ML diagnostics, and I am not quite sure what exactly they represent or why they can be useful...
If you like a detailed discussion of the cross-entropy loss & accuracy in classification settings, you may have a look at [this answer](https://stackoverflow.com/questions/47817424/loss-accuracy-are-these-reasonable-learning-curves/47819022#47819022) of mine.
Upvotes: 4 [selected_answer]<issue_comment>username_2: As complement to the accepted answer, I will answer the following questions
1. What is the interpretation of MSE loss and cross entropy loss from probability perspective?
2. Why cross entropy is used for classification and MSE is used for linear regression?
**TL;DR** Use MSE loss if (random) target variable is from Gaussian distribution and categorical cross entropy loss if (random) target variable is from Multinomial distribution.
MSE (Mean squared error)
------------------------
One of the assumptions of linear regression is multivariate normality. From this it follows that the target variable is normally distributed (more on the assumptions of linear regression can be found [here](https://www.statisticssolutions.com/assumptions-of-linear-regression/) and [here](http://r-statistics.co/Assumptions-of-Linear-Regression.html)).
[Gaussian distribution (normal distribution)](https://en.wikipedia.org/wiki/Normal_distribution) with mean $\mu$ and variance $\sigma^2$ is given by

$$\mathcal{N}(x\mid\mu,\sigma^2)=\frac{1}{\sqrt{2\pi\sigma^{2}}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$

Often in machine learning we deal with a distribution with mean 0 and variance 1 (or we transform our data to have mean 0 and variance 1). In this case the normal distribution is

$$\mathcal{N}(x\mid 0,1)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$$

This is called the standard normal distribution.

For a normal-distribution model with weight parameter $\mathbf{w}$ and precision (inverse variance) parameter $\beta$, the probability of observing a single target $t$ given input $x$ is expressed by the following equation:

$$p(t\mid x,\mathbf{w},\beta)=\mathcal{N}(t\mid y(x,\mathbf{w}),\beta^{-1}),$$

where $y(x,\mathbf{w})$ is the mean of the distribution, calculated by the model as

$$y(x,\mathbf{w})=\sum_{i=1}^{m}w_i x^i$$

Now the probability of the target vector $\mathbf{t}$ given input $\mathbf{x}$ can be expressed by

$$p(\mathbf{t}\mid\mathbf{x},\mathbf{w},\beta)=\prod_{n=1}^{N}\mathcal{N}(t_n\mid y(x_n,\mathbf{w}),\beta^{-1})=\prod_{n=1}^{N}\sqrt{\frac{\beta}{2\pi}}\,e^{-\frac{\beta(t_n-y(x_n,\mathbf{w}))^2}{2}}$$

Taking the natural logarithm of both sides yields

$$\ln p(\mathbf{t}\mid\mathbf{x},\mathbf{w},\beta)=-\frac{\beta}{2}\sum_{n=1}^{N}\left\{y(x_n,\mathbf{w})-t_n\right\}^2+\frac{N}{2}\ln\beta-\frac{N}{2}\ln(2\pi),$$

where $\ln p(\mathbf{t}\mid\mathbf{x},\mathbf{w},\beta)$ is the log likelihood of the normal model. Training a model often means optimizing the likelihood with respect to $\mathbf{w}$. The maximum likelihood function for the parameter $\mathbf{w}$ is then (constant terms with respect to $\mathbf{w}$ can be omitted)

$$\mathcal{L}(\mathbf{w})=-\frac{\beta}{2}\sum_{n=1}^{N}\left\{y(x_n,\mathbf{w})-t_n\right\}^2$$

For training the model, omitting the constant $\frac{\beta}{2}$ doesn't affect the convergence:

$$E(\mathbf{w})=\sum_{n=1}^{N}\left\{y(x_n,\mathbf{w})-t_n\right\}^2$$

This is called the squared error, and taking the `mean` yields the mean squared error:

$$\mathrm{MSE}(\mathbf{w})=\frac{1}{N}\sum_{n=1}^{N}\left\{y(x_n,\mathbf{w})-t_n\right\}^2$$
Cross entropy
-------------
Before going into the more general cross-entropy function, I will explain a specific type of cross entropy: binary cross entropy.
### Binary Cross entropy
The assumption of binary cross entropy is that the probability distribution of the target variable is drawn from a Bernoulli distribution. According to Wikipedia:
>
> Bernoulli distribution is the discrete probability distribution of a random variable which
> takes the value 1 with probability p and the value 0
> with probability q=1-p
>
>
>
The probability of a Bernoulli-distributed random variable is given by

$$P(X=k)=p^k(1-p)^{1-k},$$

where $k\in\{0,1\}$ and $p$ is the probability of success. This can be simply written as

$$P(y)=p^y(1-p)^{1-y}$$

Taking the negative natural logarithm of both sides yields

$$-\ln P(y)=-y\ln(p)-(1-y)\ln(1-p),$$

which is called binary cross entropy.
Categorical cross entropy
=========================
The generalization of cross entropy follows the general case where the random variable is multivariate (drawn from a Multinomial distribution) with the following probability distribution:

$$P(\mathbf{y})=\prod_{n=1}^{N}p_n^{y_n}(1-p_n)^{1-y_n}$$

Taking the negative natural logarithm of both sides yields the categorical cross-entropy loss:

$$L=-\sum_{n=1}^{N}\left(y_n\ln(p_n)+(1-y_n)\ln(1-p_n)\right)$$
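As a quick numeric illustration of why the two losses live on different scales (a plain NumPy sketch; the values are made up):
```
import numpy as np

y_true = 1.0
p = 0.2   # predicted probability of the positive class

mse = (y_true - p) ** 2                                       # 0.64
bce = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))    # ~1.609
print(mse, bce)
```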
Upvotes: 3 <issue_comment>username_3: I tend to disagree with the previously given answers. The point is that the cross-entropy and MSE loss are the same.
Modern NNs learn their parameters using maximum likelihood estimation (MLE) over the parameter space. The maximum likelihood estimator is given by the argmax of the product of the probability distributions over the parameter space. If we apply a log transformation and scale the MLE by the number of free parameters, we get an expectation over the empirical distribution defined by the training data.
Furthermore, we can assume different priors, e.g. Gaussian or Bernoulli, which yield either the MSE loss or negative log-likelihood of the sigmoid function.
For further reading:
[Ian Goodfellow "Deep Learning"](https://www.deeplearningbook.org/contents/mlp.html)
Upvotes: 0 <issue_comment>username_4: A simple answer to your first question:
>
> For a very simple classification problem ... would cross-entropy loss converge better/faster or would MSE loss?
>
>
>
is that MSE loss, when combined with sigmoid activation, results in a non-convex cost function with multiple local minima. This is explained by Prof <NAME> in his lecture:
[Lecture 6.4 — Logistic Regression | Cost Function — [ Machine Learning | Andrew Ng]](https://youtu.be/HIQlmHxI6-0?t=160)
I imagine the same applies to multiclass classification with softmax activation.
Upvotes: 0
|
2018/03/16
| 1,057 | 3,708 |
<issue_start>username_0: I have a select statement and in that select statement I have a few columns on which I perform basic calculations (e.g. [Col1] \* 3.14). However, occasionally I run into non-numeric values and when that happens, the whole stored procedure fails because of one row.
I've thought about using a `WHERE ISNUMERIC(Col1) <> 0`, but then I would be excluding information in the other columns.
Is there a way in T-SQL to somehow replace all strings with NULL or 0?<issue_comment>username_1: Something like...
```
SELECT blah1, blah2, blah3,
CASE WHEN ISNUMERIC(Col1) = 1 THEN [Col1] * 3.14 ELSE NULL END as whatever
FROM your_table
```
A case can also be made that:
* The non-numeric values should be converted to numeric or NULL if that's what's expected in the column, and
* If numbers are expected then the column should be a numeric data type in the first place and not a character data type, which allows for these types of errors.
Upvotes: 2 <issue_comment>username_2: `ISNUMERIC` is a terrible way to do this, as there are far too many things that identify as `NUMERIC` which are not able to be multiplied by a non-`MONEY` data type.
<https://www.brentozar.com/archive/2018/02/fifteen-things-hate-isnumeric/>
This fails miserably, as '-' is a numeric...
```
DECLARE @example TABLE (numerics VARCHAR(10));
INSERT INTO @example VALUES ('-')
SELECT CASE WHEN ISNUMERIC(numerics) = 1 THEN numerics * 3.14 ELSE NULL END
FROM @example;
```
Try `TRY_CAST` instead (albeit amend your DECIMAL precision to suit your needs):
```
DECLARE @example TABLE (numerics VARCHAR(10));
INSERT INTO @example VALUES ('-')
SELECT TRY_CAST(numerics AS decimal(10,2)) * 3.14 FROM @example;
```
Upvotes: 0 <issue_comment>username_3: I prefer `TRY_CAST`:
```
SELECT
someValue
,TRY_CAST(someValue as int) * 3.14 AS TRY_CAST_to_int
,TRY_CAST(someValue as decimal) * 3.14 AS TRY_CAST_to_decimal
,IIF(ISNUMERIC(someValue) = 1, someValue, null) * 3.14 as IIF_IS_NUMERIC
FROM (values
( 'asdf'),
( '2' ),
( '1.55')
) s(someValue)
```
Upvotes: 1 [selected_answer]<issue_comment>username_4: `TRY_CONVERT` will test for a specific type:
```
declare @T table (num varchar(20));
insert into @T values ('12'), ('3.14'), ('5.6E12'), ('$120'), ('-'), (''), ('cc'), ('aa'), ('bb'), ('1/5');
select t.num, ISNUMERIC(t.num) as isnumeric
, isnull(TRY_CONVERT(smallmoney, t.num), 0) as smallmoney
, TRY_CONVERT(float, t.num) as float
, TRY_CONVERT(decimal(18,4), t.num) as decimal
, isnull(TRY_CONVERT(smallmoney, t.num), TRY_CONVERT(float, t.num)) as mix
from @T t
num isnumeric smallmoney float decimal
-------------------- ----------- --------------------- ---------------------- ---------------------------------------
12 1 12.00 12 12.0000
3.14 1 3.14 3.14 3.1400
5.6E12 1 0.00 5600000000000 NULL
$120 1 120.00 NULL NULL
- 1 0.00 NULL NULL
0 0.00 0 NULL
cc 0 0.00 NULL NULL
aa 0 0.00 NULL NULL
bb 0 0.00 NULL NULL
1/5 0 0.00 NULL NULL
```
Interestingly, the last one ('1/5') still fails even though it can look numeric.
Upvotes: 0
|
2018/03/16
| 621 | 2,410 |
<issue_start>username_0: I'm trying to write an SQL script that returns an item from a list if that item can be found in the list; if not, it returns the most recent item added to the list. I came up with a solution using a count and an if-else statement. However, my table has very frequent I/O operations and I think this solution is inefficient. Does anyone have a way to optimize this solution, or a better approach?
Here is my solution:
```
DECLARE @result_set INT
SET @result_set = (
SELECT COUNT(*) FROM
( SELECT *
FROM notification p
WHERE p.code = @code
AND p.reference = @reference
AND p.response ='00'
) x
)
IF(@result_set > 0)
BEGIN
SELECT *
FROM notification p
WHERE p.code = @code
AND p.reference = @reference
AND p.response ='00'
END
ELSE
BEGIN
SELECT
TOP 1 p.*
FROM notification p (nolock)
WHERE p.code = @code
AND p.reference = @reference
ORDER BY p.id DESC
END
```
I also think there should be a way around repeating this select statement:
```
SELECT *
FROM notification p
WHERE p.code = @code
AND p.reference = @reference
AND p.response ='00'
```
I'm just not proficient enough in SQL to figure it out.<issue_comment>username_1: You can do something like this:
```
SELECT TOP (1) n.*
FROM notification n
WHERE n.code = @code AND n.reference = @reference
ORDER BY (CASE WHEN n.response = '00' THEN 1 ELSE 2 END), id DESC;
```
This will return the row with a response of `'00'` first and then any other row. I would expect another column in the `ORDER BY` to handle recency, but your sample code doesn't provide any clue on what this might be.
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
WITH ItemIWant AS (
    SELECT *
    FROM notification p
    WHERE p.code = @code
      AND p.reference = @reference
      AND p.response = '00'
)
SELECT *
FROM ItemIWant

UNION ALL

SELECT *
FROM (
    SELECT TOP 1 *
    FROM notification p
    WHERE p.code = @code
      AND p.reference = @reference
    ORDER BY p.id DESC
) AS MostRecent
WHERE NOT EXISTS (SELECT * FROM ItemIWant)
```
This will do that with minimal passes on the table. It will only return the top row if there are no rows returned by ItemIWant. There is no conditional logic so it can be compiled and indexed effectively.
Upvotes: 0
|
2018/03/16
| 1,008 | 4,216 |
<issue_start>username_0: I have a form in a view that is marked with @html.beginForm. This form consists of dropdowns, text boxes and a button. The drop downs are populated dynamically through ajax call. ie. selection of one value from the drop down triggers an ajax call and dynamically populates all the drop down boxes.
This form has a button that posts and brings up another form. The problem I am having: when an exception happens in the controller, how do I show the error message to the user and preserve the form values filled out by the user?
Here is my view:
```
@using (Html.BeginForm("Create", "MyEntities", FormMethod.Post, new { id = "createEntityForm" }))
{
/* Form consists of dropdowns and text boxes */
}
```
Here is my controller:
```
[HttpPost]
public ActionResult Create([Bind(Include = Entities)] EntityModel model)
{
try
{
//If everything goes well, redirect to another form
return RedirectToAction("AnotherForm", "Event", new { id = eventid });
}
catch (Exception e)
{
//Catch exception and show it to the user?
log.Error(e);
model.Error = e.Message;
}
return View(model);
}
```
Here is my AJAX call that shows the error message to the user:
```
$("#createEntityForm").on("submit", function (e) {
$("#dashSpinner").show();
debugger;
$.ajax({
url: this.action,
type: this.method,
data: $(this).serialize(),
//success: function (response) {
// debugger;
//},
error: function (response) {
debugger;
$("#dashSpinner").hide();
swal({
title: "Error",
text: "You cannot take this type of action on this event.",
type: "error",
showConfirmButton: true
});
}
});
$("#dashSpinner").hide();
});
```<issue_comment>username_1: The jQuery.ajax error callback is executed when the server response has a status code other than 2xx.
You should return the response using `HttpStatusCodeResult` ([msdn](https://msdn.microsoft.com/en-us/library/system.web.mvc.httpstatuscoderesult)) or set `Response.StatusCode` ([msdn](https://msdn.microsoft.com/en-us/library/system.net.httpstatuscode(v=vs.110).aspx)) in the catch block.
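For example, the question's catch block could become (a sketch; `HttpStatusCode` lives in `System.Net`):
```
catch (Exception e)
{
    log.Error(e);
    // Any non-2xx status code routes the jQuery call into its error handler.
    return new HttpStatusCodeResult(HttpStatusCode.InternalServerError, e.Message);
}
```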
Upvotes: 1 <issue_comment>username_2: At the controller:
```
[HttpPost]
public ActionResult Create([Bind(Include = Entities)] EntityModel model)
{
    // If something goes wrong, return to the form with the errors.
    if (!ModelState.IsValid) return View(model);

    // If everything goes well, redirect to another form.
    return RedirectToAction("AnotherForm", "Event", new { id = eventid });
}
```
You can also check out unobtrusive validation for jQuery on the client side: <https://exceptionnotfound.net/asp-net-mvc-demystified-unobtrusive-validation/>
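As a sketch, unobtrusive validation relies on these Web.config switches (already present in the default MVC templates) plus the jquery.validate and jquery.validate.unobtrusive scripts:
```
<appSettings>
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
</appSettings>
```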
Upvotes: 0 <issue_comment>username_3: The thing that triggers the Error event is getting a non 2xx status code.
Alter your ActionMethod to use a non-2xx status code. You can do this by using `Response.StatusCode = 500;`.
You are also always returning a view - if you want to show just an error message it may be easier to return a `JsonResult` and then update your error handling to just show this error. In that case your ActionMethod catch statement could become:
```
catch (Exception e)
{
log.Error(e);
Response.StatusCode = 500;
Response.TrySkipIisCustomErrors = true;
return Json(new { message = e.Message } );
}
```
You then need to update your JQuery error handler to show the message as currently it will always show the same error ("You cannot take this type of action on this event."). To do this you need to output the message sent as part of the JSON payload.
```
error: function (response) {
$("#dashSpinner").hide();
swal({
title: "Error",
text: response.responseJSON.message,
type: "error",
showConfirmButton: true
});
}
```
Upvotes: 1
|
2018/03/16
| 3,301 | 10,696 |
<issue_start>username_0: I'm intrigued by the construction described [here](https://stackoverflow.com/a/24844388/2684007) for determining a monad transformer from adjoint functors. Here's some code that summarizes the basic idea:
```
{-# LANGUAGE MultiParamTypeClasses #-}
import Control.Monad
newtype Three g f m a = Three { getThree :: g (m (f a)) }
class (Functor f, Functor g) => Adjoint f g where
counit :: f (g a) -> a
unit :: a -> g (f a)
instance (Adjoint f g, Monad m) => Monad (Three g f m) where
return = Three . fmap return . unit
m >>= f = Three $ fmap (>>= counit . fmap (getThree . f)) (getThree m)
instance (Adjoint f g, Monad m) => Applicative (Three g f m) where
pure = return
(<*>) = ap
instance (Adjoint f g, Monad m) => Functor (Three g f m) where
fmap = (<*>) . pure
```
Given that `Adjoint ((,) s) ((->) s)`, `Three ((->) s) ((,) s)` appears equivalent to `StateT s`.
Very cool, but I am puzzled by a couple things:
* How can we upgrade a monadic `m a` into a monadic `Three g f m a`? For the specific case of `Three ((->) s) ((,) s)`, it's of course obvious how to do this, but it seems desirable to have a recipe that works for any `Three g f` provided that `Adjoint f g`. In other words, it seems like there should be an analog of `lift` whose definition only requires `unit`, `counit`, and the `return` and `>>=` of the input monad. But I cannot seem to find one (I have seen [a definition using `sequence`](https://github.com/ekmett/adjunctions/blob/master/src/Control/Monad/Trans/Adjoint.hs), but this seems a bit like cheating since it requires `f` to be `Traversable`).
* For that matter, how can we upgrade `g a` into a `Three g f m a` (provided `Adjoint f g`)? Again, for the specific case of `Three ((->) s) ((,) s)` it's obvious how to do this, but I'm wondering if there's an analog of `gets` that only requires `unit`, `counit`, and the `return` and `>>=` of the input monad.<issue_comment>username_1: >
> How can we upgrade a monadic `m a` into a monadic `Three g f m a`?
>
>
>
Good question. Time for a game of type tennis!
```
-- i'm using Adjuction from the adjunctions package because I'll need the fundeps soon
lift :: Adjunction f g => m a -> Three g f m a
lift mx = Three _
```
The hole is typed `g (m (f a))`. We have `mx :: m a` in scope, and of course `unit :: a -> g (f a)` and `fmap :: (a -> b) -> m a -> m b`.
```
lift mx = let mgfx = fmap unit mx
in Three $ _ mgfx
```
Now it's `_ :: m (g (f a)) -> g (m (f a))`. This is [`distribute`](https://hackage.haskell.org/package/distributive-0.5.3/docs/Data-Distributive.html#v:distribute) if `g` is [`Distributive`](https://hackage.haskell.org/package/distributive-0.5.3/docs/Data-Distributive.html#t:Distributive).
```
lift mx = let mgfx = fmap unit mx
gmfx = distributeR mgfx
in Three gmfx
-- or
lift = Three . distributeR . fmap unit
```
So now we just need to prove that the right hand side of an adjunction is always `Distributive`:
```
distributeR :: (Functor m, Adjunction f g) => m (g x) -> g (m x)
distributeR mgx = _
```
Since we need to return a `g`, the clear choice of methods from `Adjunction` is [`leftAdjunct :: Adjunction f g => (f a -> b) -> a -> g b`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Adjunction.html#v:leftAdjunct), which uses `unit` to create a `g (f a)` and then tears down the inner `f a` by `fmap`ping a function.
```
distributeR mgx = leftAdjunct (\fa -> _) _
```
I'm going to attack the first hole first, with the expectation that filling it in might tell me something about the second one. The first hole has a type of `m a`. The only way we can get hold of an `m` of any type is by `fmap`ping something over `mgx`.
```
distributeR mgx = leftAdjunct (\fa -> fmap (\gx -> _) mgx) _
```
Now the first hole has a type of `a`, and we have `gx :: g a` in scope. If we had an `f (g a)` we could use `counit`. But we do have an `f x` (where `x` is currently an ambiguous type variable) and a `g a` in scope.
```
distributeR mgx = leftAdjunct (\fa -> fmap (\gx -> counit (fa $> gx)) mgx) _
```
It turns out that the remaining hole has an ambiguous type, so we can use anything we want. (It'll be ignored by `$>`.)
```
distributeR mgx = leftAdjunct (\fa -> fmap (\gx -> counit (fa $> gx)) mgx) ()
```
---
That derivation may have looked like a magic trick but really you just get better at type tennis with practice. The skill of the game is being able to look at the types and apply intuitions and facts about the objects you're working with. From looking at the types I could tell that I was going to need to exchange `m` and `g`, and traversing `m` was not an option (because `m` is not necessarily `Traversable`), so something like `distribute` was going to be necessary.
Besides guessing I was going to need to implement `distribute`, I was guided by some general knowledge about how adjunctions work.
Specifically, when you're talking about `* -> *`, the only interesting adjunctions are (uniquely isomorphic to) the `Reader`/`Writer` adjunction. In particular, that means any right adjoint on `Hask` is always [`Representable`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Rep.html#t:Representable), as witnessed by [`tabulateAdjunction`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Adjunction.html#v:tabulateAdjunction) and [`indexAdjunction`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Adjunction.html#v:indexAdjunction). I also know that all `Representable` functors are `Distributive` (in fact logically the converse is also true, as described in [`Distributive`'s docs](https://hackage.haskell.org/package/distributive-0.5.3/docs/Data-Distributive.html#t:Distributive), even though the classes aren't equivalent in power), per [`distributeRep`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Rep.html#v:distributeRep).
---
>
> For that matter, how can we upgrade `g a` into a `Three g f m a` (provided `Adjoint f g`)?
>
>
>
I'll leave that as an exercise. I suspect you'll need to lean on the `g ~ ((->) s)` isomorphism again. I actually don't expect this one to be true of all adjunctions, just the ones on `Hask`, of which there is only one.
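For what it's worth, here is a sketch that does work for adjunctions on `Hask`, using `Data.Functor.Adjunction` from the adjunctions package (`liftG` is a hypothetical name, not a library function). The idea is to manufacture the `f`-shape with `leftAdjunct` and fill its hole from the given `g a` via `rightAdjunct`:
```
import Data.Functor (($>))
import Data.Functor.Adjunction (Adjunction, leftAdjunct, rightAdjunct)

-- Embed a right-adjoint value, the generic analogue of `gets`.
-- For g ~ ((->) s), f ~ ((,) s) this reduces to \s -> return (s, ga s).
liftG :: (Adjunction f g, Monad m) => g a -> Three g f m a
liftG ga = Three $ leftAdjunct (\fu -> return (fu $> rightAdjunct (const ga) fu)) ()
```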
Upvotes: 2 <issue_comment>username_2: `lift`, in [username_1's answer](https://stackoverflow.com/a/49325487/2751851), is set up as:
>
>
> ```
> lift mx = let mgfx = fmap unit mx
> gmfx = distributeR mgfx
> in Three gmfx
> -- or
> lift = Three . distributeR . fmap unit
>
> ```
>
>
As you know, that is not the only plausible strategy we might use there:
```
lift mx = let gfmx = unit mx
gmfx = fmap sequenceL gfmx
in Three gmfx
-- or
lift = Three . fmap sequenceL . unit
```
Whence the `Traversable` requirement for [Edward Kmett's corresponding `MonadTrans` instance](https://hackage.haskell.org/package/adjunctions-4.4/docs/Control-Monad-Trans-Adjoint.html#t:AdjointT) originates. The question, then, becomes whether relying on that is, as you put it, "cheating". I am going to argue it is not.
We can adapt Benjamin's game plan concerning `Distributive` and right adjoints and try to find whether left adjoints are `Traversable`. A look at [`Data.Functor.Adjunction`](https://hackage.haskell.org/package/adjunctions-4.4/docs/Data-Functor-Adjunction.html) shows we have a quite good toolbox to work with:
```
unabsurdL :: Adjunction f u => f Void -> Void
cozipL :: Adjunction f u => f (Either a b) -> Either (f a) (f b)
splitL :: Adjunction f u => f a -> (a, f ())
unsplitL :: Functor f => a -> f () -> f a
```
Edward helpfully tells us that `unabsurdL` and `cozipL` witness that "[a] left adjoint must be inhabited, [and that] a left adjoint must be inhabited by exactly one element", respectively. That, however, means `splitL` corresponds precisely to the shape-and-contents decomposition that characterises `Traversable` functors. If we add to that the fact that `splitL` and `unsplitL` are inverses, an implementation of `sequence` follows immediately:
```
sequenceL :: (Adjunction f u, Functor m) => f (m a) -> m (f a)
sequenceL = (\(mx, fu) -> fmap (\x -> unsplitL x fu) mx) . splitL
```
(Note that no more than `Functor` is demanded of `m`, as expected for traversable containers that hold exactly one value.)
All that is missing at this point is verifying that both implementations of `lift` are equivalent. That is not difficult, only a bit laborious. In a nutshell, the `distributeR` and `sequenceR` definitions here can be simplified to:
```
distributeR = \mgx ->
leftAdjunct (\fa -> fmap (\gx -> rightAdjunct (const gx) fa) mgx) ()
sequenceL =
rightAdjunct (\mx -> leftAdjunct (\fu -> fmap (\x -> fmap (const x) fu) mx) ())
```
We want to show that `distributeR . fmap unit = fmap sequenceL . unit`. After a few more rounds of simplification, we get:
```
distributeR . fmap unit = \mgx ->
leftAdjunct (\fa -> fmap (\gx -> rightAdjunct (const (unit gx)) fa) mgx) ()
fmap sequenceL . unit = \mx ->
leftAdjunct (\fu -> fmap (\x -> fmap (const x) fu) mx) ()
```
We can show those are really the same thing by picking `\fu -> fmap (\x -> fmap (const x) fu) mx` -- the argument to `leftAdjunct` in the second right-hand side -- and slipping `rightAdjunct unit = counit . fmap unit = id` into it:
```
\fu -> fmap (\x -> fmap (const x) fu) mx
\fu -> fmap (\x -> fmap (const x) fu) mx
\fu -> fmap (\x -> (counit . fmap unit . fmap (const x)) fu) mx
\fu -> fmap (\x -> rightAdjunct (unit . const x) fu) mx
\fu -> fmap (\x -> rightAdjunct (const (unit x)) fu) mx
-- Sans variable renaming, the same as
-- \fa -> fmap (\gx -> rightAdjunct (const (unit gx)) fa) mgx
```
The takeaway is that the `Traversable` route towards your `MonadTrans` is just as secure as the `Distributive` one, and concerns about it -- including the ones mentioned by the `Control.Monad.Trans.Adjoint` documentation -- should no longer trouble anyone.
P.S.: It is worth noting that the definition of `lift` put forward here can be spelled as:
```
lift = Three . leftAdjunct sequenceL
```
That is, `lift` is `sequenceL` sent through the adjunction isomorphism. Additionally, from...
```
leftAdjunct sequenceL = distributeR . fmap unit
```
... if we apply `rightAdjunct` on both sides, we get...
```
sequenceL = rightAdjunct (distributeR . fmap unit)
```
... and if we compose `fmap (fmap counit)` on the left of both sides, we eventually end up with:
```
distributeR = leftAdjunct (fmap counit . sequenceL)
```
So `distributeR` and `sequenceL` are interdefinable.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 875 | 3,157 |
<issue_start>username_0: Very basic: I need to pass information from one view to another. They are in the same `ViewController`. What is the most elegant way to approach this?
`NotificationCenter` is not an option
**It would be ideal, if I could access certain properties of a view from another view directly**<issue_comment>username_1: Let's say you have two views `view1` and `view2`.
For example, some data changed in `view2` and you need to pass this data to `view1`. I'd do this with the delegation pattern. So here's the setup:
```
protocol View2Delegate: AnyObject {
func didChangeSomeData(data: String)
}
```
Now in `view2`
```
class View2: UIView {
    weak var delegate: View2Delegate?
    var text: String = String() {
        didSet {
            self.delegate?.didChangeSomeData(data: text)
        }
}
}
}
```
and in `view1`
```
class View1: UIView, View2Delegate {
var textToChange: String = String()
func didChangeSomeData(data: String) {
// Do whatever you want with that changed data from view2
textToChange = data
}
}
```
and in your `viewController`
```
class MyViewController: UIViewController {
    var view1 = View1()
    var view2 = View2()

    override func viewDidLoad() {
        super.viewDidLoad()
        view2.delegate = view1
    }
}
```
Now, if you don't really want to couple `view1` and `view2`, you could listen for `view2`'s changes via the same pattern in the `ViewController` and then pass them directly to `view1` via a property accessor.
```
class MyViewController: UIViewController, View2Delegate {
    var view1 = View1()
    var view2 = View2()

    override func viewDidLoad() {
        super.viewDidLoad()
        view2.delegate = self
    }

    func didChangeSomeData(data: String) {
        view1.textToChange = data
    }
}
```
More on protocols in [Apple's documentation](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Protocols.html)
*Update*
You could also go with the [Key-Value Observing](http://skyefreeman.io/programming/2017/06/28/kvo-in-ios11.html) pattern. You could set it up like this:
```
class MyViewController: UIViewController {
    var view1 = View1()
    var view2 = View2()
    // Keep a strong reference, otherwise the observation is deallocated immediately.
    var observation: NSKeyValueObservation?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Note: View2.text must be declared `@objc dynamic` for KVO to fire.
        observation = view2.observe(\.text, options: [.new]) { [weak self] _, change in
            if let newValue = change.newValue {
                self?.view1.textToChange = newValue
            }
        }
    }
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: A way to communicate between two objects (whether they are UIViews or not) is, as suggested in a previous answer, to use delegates or KVO.
You can also use a completion handler, which is quite simple using closures.
Below is an example
```
class Object1 {
func foo(completed: () -> Void) {
print("do some work")
completed()
}
}
class Object2 {
func callFoo() {
let obj1 = Object1()
obj1.foo {
print("foo() from object 1 completed")
}
}
}
let obj2 = Object2()
obj2.callFoo()
// should print:
// do some work
// foo() from object 1 completed
```
Upvotes: 0
|
2018/03/16
| 1,183 | 3,982 |
<issue_start>username_0: Good day Stack, I'm working on an Android project that uses Android's Room 1.0.0 Alpha 5. The main issue I'm facing is that every time I need to call one of the DAOs from Room I need to do something like this:
Activity.java:
```
...
AppDatabase db = Room.databaseBuilder(context, AppDatabase.class, "Storage").build();
Table1 table = new Table1();
table.setId(1);
table.setName("Hello");
new AccessDB().execute(1);
/* Generic AccessDB needed */
private class AccessDB extends AsyncTask> {
@Override
protected List doInBackground(Integer... param) {
switch(param[0]) {
case 1:
return db.Table1DAO().create();
case 2:
return db.Table1DAO().read();
}
return new ArrayList<>();
}
@Override
protected void onPostExecute(List list) {
processData(list);
}
}
...
```
I know that I can access the Room DB from the main thread, and that would shrink the code, but I think that's not good practice since it would lock the activity every time it has to handle data.
So if I need to insert or read data from "Table2" I would have to do the same all over again. It would be great if I could turn the entity types into generics like "T" or something like that and then make a generic "AccessDB".
But since I'm not too familiar with Java... I'm currently struggling with this.
Here is the rest of the relevant code.
AppDatabase.java:
```
@Database(entities = {Table1.class, Table2.class, Table3.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {
public abstract Table1DAO Table1DAO();
public abstract Table2DAO Table2DAO();
public abstract Table3DAO Table3DAO();
}
```
Table1.java:
```
@Entity
public class Table1 {
/* setters & getters */
@PrimaryKey(autoGenerate = true)
private int id;
private String name;
}
```
Table1DAO.java:
```
@Dao public interface Table1DAO {
    @Query("SELECT * FROM Table1")
    List<Table1> read();

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    List<Long> create(Table1... table);
}
```
Thank you all for your help.<issue_comment>username_1: You can use inheritance and create a `BaseDao` which will be implemented by all your child `Dao`. This way you won't need to write the common methods again and again.
```
interface BaseDao<T> {
    /**
     * Insert an object in the database.
     *
     * @param obj the object to be inserted.
     */
    @Insert
    fun insert(obj: T)

    /**
     * Insert an array of objects in the database.
     *
     * @param obj the objects to be inserted.
     */
    @Insert
    fun insert(vararg obj: T)

    /**
     * Update an object from the database.
     *
     * @param obj the object to be updated
     */
    @Update
    fun update(obj: T)

    /**
     * Delete an object from the database
     *
     * @param obj the object to be deleted
     */
    @Delete
    fun delete(obj: T)
}
```
Read more about it: <https://gist.github.com/florina-muntenescu/1c78858f286d196d545c038a71a3e864#file-basedao-kt>
Original credits to Florina.
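A concrete DAO then stays tiny. For instance (a sketch reusing the question's `Table1` entity):
```
@Dao
interface Table1Dao : BaseDao<Table1> {
    @Query("SELECT * FROM Table1")
    fun read(): List<Table1>
}
```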
Upvotes: 4 [selected_answer]<issue_comment>username_2: I played around a bit with the answer of username_1, but needed two additions:
* ability to insert/update a `List` of entities
* return values to monitor insert/update success
Here is what I came up with:
```
import androidx.room.Dao
import androidx.room.Insert
import androidx.room.OnConflictStrategy
import androidx.room.Update
/**
* List of all generic DB actions
* All use suspend to force kotlin coroutine usage, remove if not required
*/
@Dao
interface BaseDao<T> {

    // insert single
    @Insert(onConflict = OnConflictStrategy.REPLACE)
    suspend fun insert(obj: T?): Long

    // insert List
    @Insert(onConflict = OnConflictStrategy.REPLACE)
    suspend fun insert(obj: List<T>?): List<Long>

    // update List
    @Update
    suspend fun update(obj: List<T>?): Int
}
```
---
```
@Dao
interface MyObjectDao : BaseDao {
@Query("SELECT \* from $TABLE\_NAME WHERE $COL\_ID = :id")
suspend fun getById(id: Long): MyObject
}
```
---
Can then be called like:
```
val ids = MyObjectDao.insert(objectList)
```
Upvotes: 2
|
2018/03/16
| 706 | 2,152 |
<issue_start>username_0: I don't know how to continue with this recurrence because I don't see any pattern. Any help?
```
T(n) = 2n + T(n/2)
= 3n + T(n/4)
= 7n/2 + T(n/8)
= 15n/4 + T(n/16)
and so on...
```
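For the record, the expansions above do settle into a pattern. A sketch of the closed form, assuming n is a power of two and T(1) is constant:
```
After k expansions:   T(n) = (4 - 2^(2-k)) * n + T(n/2^k)
                      k=1: 2n   k=2: 3n   k=3: 7n/2   k=4: 15n/4

With k = log2(n):     T(n) = 4n - 4 + T(1),  i.e.  T(n) = Θ(n)
```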
|
2018/03/16
| 1,294 | 4,438 |
<issue_start>username_0: ```
"styles": [
"../node_modules/bootstrap/dist/css/bootstrap.min.css",
"../node_modules/datatables.net-dt/css/jquery.dataTables.css",
"../node_modules/datatables.net-buttons-dt/css/buttons.dataTables.css",
"styles.css"
],
"scripts": [
"../node_modules/jquery/dist/jquery.min.js",
"../node_modules/popper.js/dist/umd/popper.min.js",
"../node_modules/bootstrap/dist/js/bootstrap.min.js",
"../node_modules/jquery/dist/jquery.js",
"../node_modules/datatables.net/js/jquery.dataTables.js",
"../node_modules/datatables.net-buttons/js/dataTables.buttons.js",
"../node_modules/datatables.net-buttons/js/buttons.colVis.js",
"../node_modules/datatables.net-buttons/js/buttons.flash.js",
"../node_modules/datatables.net-buttons/js/buttons.html5.js",
"../node_modules/datatables.net-buttons/js/buttons.print.js",
],
```
This is the angular-cli.json file I have created for my project.
The component for the project looks like this:
```
ngOnInit() {
this.dtOptions = {
pagingType: 'full_numbers',
pageLength: 10,
dom: 'Bfrtip',
buttons: [
'copy', 'print', 'csv','columnsToggle','colvis','pdf','excel']
};
this.loadapi();
}
```
Here I have created a datatable from a sample JSON. Everything was fine except that the Excel and PDF export buttons didn't show; the other buttons are showing in the UI. What may be the issue?<issue_comment>username_1: Please follow these steps:
1: Install its dependencies:
```
npm install jszip --save
npm install datatables.net-buttons --save
npm install datatables.net-buttons-dt --save
```
2: Add the dependencies in the scripts and styles attributes:
```
{
"projects": {
"your-app-name": {
"architect": {
"build": {
"options": {
"styles": [
...
"node_modules/datatables.net-buttons-dt/css/buttons.dataTables.css"
],
"scripts": [
...
"node_modules/jszip/dist/jszip.js",
"node_modules/datatables.net-buttons/js/dataTables.buttons.js",
"node_modules/datatables.net-buttons/js/buttons.colVis.js",
"node_modules/datatables.net-buttons/js/buttons.flash.js",
"node_modules/datatables.net-buttons/js/buttons.html5.js",
"node_modules/datatables.net-buttons/js/buttons.print.js"
],
...
}
```
If you want to have the Excel export functionality, then you must import jszip.js before the buttons.html5.js file.
3: HTML:
```
<!-- minimal sketch: a datatable element bound to the options from step 4 -->
<table datatable [dtOptions]="dtOptions" class="row-border hover"></table>
```
4: DtOptions in TypeScript:
```
this.dtOptions = {
dom: 'Bfrtip',
buttons: [
'copy',
'print',
'csv',
'excel',
'pdf'
]
};
```
For more help, refer to this [link](https://l-lin.github.io/angular-datatables/#/extensions/buttons).
Upvotes: 3 <issue_comment>username_2: I had a similar issue, but in my case none of the buttons were coming up. So, in addition to the other steps in <https://l-lin.github.io/angular-datatables/#/extensions/buttons>,
I had to import them in my specific file (I needed only the CSV button):
```
import 'datatables.net-buttons/js/buttons.colVis.min';
import 'datatables.net-buttons/js/dataTables.buttons.min';
import 'datatables.net-buttons/js/buttons.flash.min';
import 'datatables.net-buttons/js/buttons.html5.min';
```
Hope that helps!
Upvotes: 1 <issue_comment>username_3: I had the same problem. It's probably because you're also using page length.
Try this:
```
ngOnInit() {
this.dtOptions = {
pagingType: 'full_numbers',
pageLength: 10,
dom: 'Blfrtip', // Use 'Blfrtip' instead of 'Bfrtip'
buttons: [
'copy', 'print', 'csv','columnsToggle','colvis','pdf','excel']
};
this.loadapi();
}
```
This also works, but it doesn't look like you're really doing much with the page length (in terms of allowing one to change the length), so it might not be ideal for you. However, it might be useful to someone else:
```
ngOnInit() {
this.dtOptions = {
pagingType: 'full_numbers',
pageLength: 10,
dom: 'Bfrtip',
// Use the pageLength button
buttons: [
'pageLength', 'copy', 'print', 'csv','columnsToggle','colvis','pdf','excel']
};
this.loadapi();
}
```
You have more options, but these two are the simplest that I can think of.
Source: <https://datatables.net/reference/button/pageLength>
Upvotes: 0
|
2018/03/16
| 1,164 | 4,155 |
<issue_start>username_0: I have this code where, on clicking `.comment`, depending on whether there are prior comments, it loads the comments plus a comment form, or just a form (no comments), into `.LastComments`.
**HTML**
```
<a href="#" class="comment" data-user="John" data-number_comments="2">Click to comment</a> <!-- sample data values -->
```
**JQUERY**
```
$('.comment').on('click', function() {
var user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if (number_comments) {
$(".LastComments").load(url, {
vars
}, /*Load_newform here*/ )
} else {
/*Load_newform here*/
}
});
```
**FUNCTION**
```
function Load_newform() {
form = "Hi " + user + " post a comment ";
$(".LastComments").append(form);
}
```
***PROBLEM***
The function can't get the values from the element's `.data`, so it doesn't show the `user` value and the others I'm working with. How do I retrieve the values to make it work correctly?<issue_comment>username_1: Try the code below:
1) Get the user variable's value by adding one line, i.e. `var user = $(this).data("user");`:
```
function Load_newform() {
var user = $(this).data("user"); //Initialize user variable here.
var form = "Hi " + user + " post a comment ";
$(".LastComments").append(form);
}
```
Thanks!
Upvotes: 0 <issue_comment>username_2: If a variable is required in two different scopes, you can't retrieve it just by calling the variable.
In your case, since `Load_newform` is another function (another scope), `user` isn't accessible from it.
With that in mind, you have some ways to make it possible:
---
### Pass the variable as a parameter to the second method
```js
$('.comment').on('click', function(){
var user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if(number_comments){
$(".LastComments").load(Load_newform(user));
}else{
Load_newform(user);
}
});
function Load_newform(user) {
form = "Hi "+user+" post a comment";
$(".LastComments").append(form);
}
```
```html
<a href="#" class="comment" data-user="John" data-number_comments="2">Click to comment</a> <!-- sample data values -->
```
### Create the variables in global scope
Please, remember that global variables can overwrite window variables!
```js
var user;
$('.comment').on('click', function(){
user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if(number_comments){
$(".LastComments").load(Load_newform());
}else{
Load_newform();
}
});
function Load_newform() {
form = "Hi "+user+" post a comment";
$(".LastComments").append(form);
}
```
```html
<a href="#" class="comment" data-user="John" data-number_comments="2">Click to comment</a> <!-- sample data values -->
```
*(You can automatically create a global variable by omitting `var` before the variable name)*
```js
$('.comment').on('click', function(){
user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if(number_comments){
$(".LastComments").load(Load_newform());
}else{
Load_newform();
}
});
function Load_newform() {
form = "Hi "+user+" post a comment";
$(".LastComments").append(form);
}
```
```html
<a href="#" class="comment" data-user="John" data-number_comments="2">Click to comment</a> <!-- sample data values -->
```
*(Or even using `window.variableName`)*
```js
$('.comment').on('click', function(){
window.user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if(number_comments){
$(".LastComments").load(Load_newform());
}else{
Load_newform();
}
});
function Load_newform() {
form = "Hi "+user+" post a comment";
$(".LastComments").append(form);
}
```
```html
<a href="#" class="comment" data-user="John" data-number_comments="2">Click to comment</a> <!-- sample data values -->
```
### Create the function inside the event
It's not the best approach, because the function is recreated on every click.
```js
$('.comment').on('click', function(){
function Load_newform() {
console.log(user);
form = "Hi "+user+" post a comment";
$(".LastComments").append(form);
}
var user = $(this).data("user");
var number_comments = $(this).data("number_comments");
if(number_comments){
$(".LastComments").load(Load_newform());
}else{
Load_newform();
}
});
```
```html
[Click to comment]
```
Upvotes: 3 [selected_answer]
|